\section{Introduction} Observation of the correlated electron phenomena~\cite{Pablo1}, including superconductivity~\cite{Pablo2}, in the vicinity of the first magic angle in twisted bilayer graphene~\cite{NetoPRL07,MagaudNL10,Barticevic,BMModel} led to a large number of experimental~\cite{Cory1,David,Young,Dmitry1,Yazdani,Ashoori,Eva,Stevan,Young2,Dmitry2,Yazdani2,Shahal,Abhay19,Stevan19,Young3,YuanCao2021,Shahal2,Yacoby,Yacoby2,PabloNature2021,Zeldov,Young4,YoungCDW,JiaSC,JiaSOC,YazdaniSC} and theoretical studies~\cite{Senthil1,LiangPRX1,KangVafekPRX,FengchengSC,GuineaPNAS,Balents19,BJYangPRX,Bernevig1,Leon2,Dai1,Grisha,KangVafekPRL,KangVafekPRB,Senthil2,Dai2,MacDonald,Zaletel1,Zaletel2,ZaletelDMRG,NickKekule,BernevigTBG} of this remarkable physical system. Although the main experimental findings~\cite{Pablo1,Pablo2,Cory1,David,Young} were reproduced by a number of experimental groups, there is a nagging lack of reproducibility in the finer details of the physical characteristics of devices, even when manufactured within the same lab, and even within the same device. This is likely due to spatial inhomogeneity in the twist angle~\cite{Zeldov,Shahal,Young2,Yazdani2} and unintentional strain~\cite{Yazdani} produced during the device fabrication, or more generally, due to lattice deformations which vary over distances long compared to the microscopic spacing between neighboring carbon atoms. It is thus being recognized that the twist angle is not the only parameter controlling the physics of a specific device~\cite{Yacoby}. This fact motivates the development of a theory whose input would be more than just the twist angle $\theta$, the Fermi velocity $v_F$ and the two inter-layer tunneling constants through the AA ($w_0$) and AB ($w_1$) regions, as is the case for (the slight generalization of) the original Bistritzer-MacDonald (BM) model; instead, the input would be a smooth and possibly inhomogeneous configuration of the atomic displacement field. This configuration could in principle be extracted from topography measured using a scanning tunneling microscope~\cite{Eva,Yazdani,Abhay19,Stevan19,AllanPRR21} or from Bragg interferometry~\cite{BediakoNatMat2021}. The goal of this paper is to provide a systematic derivation of such a continuum Hamiltonian for an arbitrary smooth atomic displacement $\fvec u_j(\fvec r)$, starting from a microscopic {\it ab initio} calibrated tight-binding model on the carbon lattice. Expanding in gradients of $\fvec u_j(\fvec r)$ and of the slowly varying envelopes of the graphene's $\mathbf{K}$ and $\mathbf{K}'$ Bloch functions, one can achieve any desired accuracy when comparing with the microscopic model, as we demonstrate in the companion paper for the relaxed atomic configurations of the magic angle twisted bilayer graphene. Here, we provide the general formulas for two different microscopic models. For the first, we consider a microscopic hopping function which depends only on the separation between two carbon atoms, as is the case in the Slater-Koster type models~\cite{Ando2001,Uryu2004,SlaterKoster1954,MagaudNL10,KoshinoPRB12,KangVafekPRX}.
For the second, we allow for dependence of the inter-layer tunneling terms on the relative orientation of the interatomic separation vector and the nearest neighbor bonds, as is the case in the microscopic model derived from density functional theory (DFT) determined Wannier states of the monolayer (and untwisted bilayer) graphene's conduction and valence bands in Ref.~\cite{KaxirasPRB16}, as well as for the configuration dependence of the on-site term. The method which we develop here is inspired by the approach advanced by Balents~\cite{Balents19}, but strives to go beyond it in several ways. First, the continuum Hamiltonian is derived entirely from the microscopic tight binding models. As a consequence, all the parameters in the continuum Hamiltonian can be expressed in terms of suitable moments of the hopping functions and the lattice distortions, yielding realistic values of the electron-phonon couplings as a byproduct. This allows for a direct comparison of the theory with the experiments if the deformed positions of atoms are measured by local probes. Second, when applied to the twisted bilayer graphene, the continuum model derived here goes beyond the BM model~\cite{BMModel,Balents19} by systematically including higher order gradient terms, i.e.~gradients of both the slowly varying envelope of the fermion fields and of the atomic displacement fields. Because they are derived directly in real space via a gradient expansion, each term in our continuum theory is local~\cite{MacDonalNonlocal} (although, of course, not necessarily just `contact'). The motivation for including higher order gradient terms is directly related to the physics of the magic angle. For a twist angle $\theta$ near the magic value $1.1{\degree}$, the estimate of the energy scale of the leading order terms constituting the BM model~\cite{BMModel,Balents19} can be obtained by multiplying the Fermi velocity $v_F$ and the typical momentum deviation from the Dirac point: $\hbar v_F|\mathbf{K}|\theta\sim 200$meV for the intra-layer term, and $\sim 100$meV for the contact inter-layer term. The second order intra-layer derivative terms, as well as the inter-layer terms that are first order in the derivatives of the fermion and atomic displacement fields, are smaller by a factor of $\sim|\fvec K| a \theta = 4\pi\theta/3\sim 0.08$, seemingly justifying their omission (for the definitions of the various parameters mentioned, see the next section). As is well known, however, at the magic angle the non-interacting bandwidth is anomalously smaller than the scale of the leading order terms by at least an order of magnitude, making the higher order terms comparable to the non-interacting narrow bandwidth~\cite{GuineaModel}. Moreover, even if smaller, they can be of similar order to the scale of the Coulomb interaction, and they break particle-hole symmetry~\cite{Bernevig1,Leon2,Dai1,KaxirasGradient}, thus lifting the degeneracy of the ground state manifold in strong coupling~\cite{KangVafekPRL,Zaletel2,BernevigTBG,NickKekule,VafekKangPRL20}. Therefore, it is desirable to study their effects systematically, as we do here and in the companion paper. Finally, it was recognized~\cite{LiangPRX1} that atomic relaxation of twisted bilayer graphene near the first magic angle leads to an increase in the size of the AB stacked regions of the moire pattern at the expense of the AA stacked regions, as compared to the structure resulting from a simple rigid twist.
With few exceptions~\cite{KoshinoPRB20}, such relaxation has been modeled as a simple change of the AA and AB tunneling parameters $w_0$ and $w_1$, respectively. However, because the difference between $w_0$ and $w_1$ arises from lattice distortions, such relaxation must also include pseudo-magnetic vector potential terms --given by combinations of {\em first order} spatial derivatives of the atomic displacement fields-- in the intra-layer Hamiltonian. Within the gradient expansion, such terms appear at the same order as the intra-layer first order gradient of the slow Fermi fields, i.e.~at the same order as the massless Dirac terms. Indeed, we find that such terms are comparable to the inter-layer tunneling terms $w_{0,1}$ included in the BM model, and therefore there is no a priori justification for neglecting them. \begin{widetext} \begin{figure*}[htb] \centering \includegraphics[width=1.9\columnwidth]{NewSchem.pdf} \caption{Schematic illustration of the one-to-one mapping between (left) the undistorted atomic position $\fvec r_S$ and (right, top view) the distorted atomic position $\fvec X_{j, S}$, where red is for the top layer and blue for the bottom layer. The separation between two carbon atoms $\fvec X_{j,S} - \fvec X'_{j',S'}$ and the corresponding nearest neighbor vectors $\fvec n_{j,S}$ and $\fvec n_{j',S'}$ are labeled by black arrows. The distortion described by $\fvec X_{j, S}$ also includes the lattice corrugation, as shown in the side view.} \label{Fig:Schematic} \end{figure*} \end{widetext} \section{Microscopic derivation of the continuum low energy model for an arbitrary smooth lattice deformation} \label{Sec:DeriveEffModel} In order to derive the effective continuum Hamiltonian from the microscopic tight binding model, we start by noting that, generally, the distorted position $\fvec X_{j,S}$ of a carbon atom in the layer $j$ and sublattice $S$ can be expressed as \begin{align} \fvec X_{j,S} & = \fvec r_S + \fvec u_{j, S}(\fvec r_S)\equiv \fvec X_{j,S}(\fvec r_S), \label{Eqn:XPos} \\ \fvec u_{j, S}(\fvec r_S) & = \fvec u^{\parallel}_{j,S}\left(\fvec r_{S}\right) + \fvec u^{\perp}_{j,S}(\fvec r_S),\label{Eqn:udecomp} \end{align} where $\fvec r_S = n_1 \fvec a_1 + n_2 \fvec a_2 + \fvec \tau_{S}$ is the reference undistorted position of the carbon atom within a honeycomb lattice (see Fig.~\ref{Fig:Schematic}), with $n_{1,2}$ being integers. The basis vectors of the undistorted lattice are $\fvec \tau_A = 0$ for the sublattice A and $\fvec \tau_B = (\fvec a_1 + \fvec a_2)/3$ for the sublattice B, where $\fvec a_1 = a(1, 0)$ and $\fvec a_2 = a(\frac12,\frac{\sqrt{3}}2)$ are the two primitive lattice vectors and $a=0.246$nm is the lattice constant. The displacement $\fvec u_{j, S}(\fvec r_S)$ describes the deviation from the undistorted position of the carbon atoms. It is general enough to account for twist, in-plane relaxation, out-of-plane corrugation and strain, as well as any possible difference between the atomic displacements of the two sublattices. The vector $\fvec u_{j, S}(\fvec r_S)$ in Eq.(\ref{Eqn:udecomp}) is decomposed into an in-plane component $\fvec u^{\parallel}_{j,S}\left(\fvec r_{S}\right)$ and an out-of-plane component $\fvec u^{\perp}_{j,S}(\fvec r_S)$. Its explicit dependence on the undistorted lattice point $\fvec r_S$ is what is referred to as the Lagrangian coordinates~\cite{Balents19,ChaikinLubensky}.
Although we start with the Lagrangian formulation, we will reach a point in our derivation where we switch to the more convenient Eulerian coordinates~\cite{Balents19,ChaikinLubensky}, where the displacements are expressed in terms of the actual in-plane position of the atoms $\fvec X^\parallel_{j,S}$ as opposed to the undistorted positions $\fvec r_S$. Because each monolayer graphene sheet is assumed not to fold, there is a one-to-one mapping between $\fvec r_S$ and $\fvec X^\parallel_{j,S}$. If a sheet folded, it would have overhangs, and two different positions $\fvec r_S$ would map onto the same $\fvec X^\parallel_{j,S}$. Without overhangs, we can therefore adopt the Monge ``gauge''~\cite{ChaikinLubensky}, use the Eulerian coordinates, and write \begin{align}\label{Eqn:EulerCoord} \fvec X_{j,S} & = \fvec r_S + \fvec U^{\parallel}_{j,S}(\fvec X^{\parallel}_{j,S}) + \fvec U^{\perp}_{j,S}(\fvec X^\parallel_{j,S}). \end{align} The displacement functions $\fvec U^{\parallel,\perp}_{j,S}$ now depend on the actual in-plane location of the distorted atoms, which can be determined by solving Eq.~(\ref{Eqn:XPos}) for $\fvec{r}_S$ in terms of $\fvec X^\parallel_{j,S}$ and then expressing the displacement fields in terms of $\fvec X^\parallel_{j,S}$. To illustrate the main idea, in this section we allow the hopping amplitude $t$ to depend only on the separation of the two carbon atoms $\fvec X_{j,S} - \fvec X'_{j', S'}$. In general, $t$ depends also on the orientation~\cite{KaxirasPRB16} of this vector relative to the nearest neighbor sites of the atom at $\fvec X_{j,S}$ and at $\fvec X'_{j',S'}$ (Fig.~\ref{Fig:Schematic}). Moreover, the general on-site term acquires a configuration dependence. We treat this more intricate case in Sec.~\ref{Sec:Extension}. Thus, we start with a microscopic tight binding model \begin{align} H^{SK}_{tb} = & \sum_{S S'} \sum_{j j'} \sum_{\fvec r_S, \fvec r_{S'}} t(\fvec X_{j,S} - \fvec X_{j', S'}') c^{\dagger}_{j, S, \fvec r_S} c_{j', S', \fvec r_{S'}} , \label{Eqn:MicroH} \end{align} where the fermion creation and annihilation operators satisfy the anti-commutation relation $\{ c^{\dagger}_{j, S, \fvec r_S}, c_{j', S', \fvec r_{S'}} \} = \delta_{j j'} \delta_{S S'} \delta_{\fvec r_S \fvec r_{S'}}$. Because $H^{SK}_{tb}$ is Hermitian, \begin{equation} \label{Eqn:tHermitian} t(\mathbf{X}) = t^*(-\mathbf{X}), \end{equation} and because (spinless) time reversal symmetry is preserved, \begin{equation} \label{Eqn:tTRS} t(\mathbf{X}) = t^*(\mathbf{X}). \end{equation} One example of a model with $t$ depending only on the separation of the two carbon atoms is the often used Slater-Koster (SK) type model~\cite{Ando2001,Uryu2004,SlaterKoster1954,MagaudNL10,KoshinoPRB12,KangVafekPRX} for the carbon $p_z$ orbitals, \begin{align} t(\fvec d) & = V_{pp\pi}^0 e^{- \frac{|\fvec d| - a_0}{\Delta}} \left[ 1 - \left( \frac{\fvec d \cdot \hat z}{|\fvec d|} \right)^2 \right] + \nonumber \\ & V_{pp\sigma}^0 e^{- \frac{|\fvec d| - d_0}{\Delta}} \left( \frac{\fvec d \cdot \hat z}{|\fvec d|} \right)^2, \label{Eqn:SlaterKoster} \end{align} where for concreteness $V^0_{pp\pi} = -2.7$eV, $V^0_{pp\sigma} = 0.48$eV, and $a_0 =|\fvec \tau_B| = 0.142$nm is the distance between two nearest-neighbor carbon atoms in the same layer; $d_0 =0.335$nm is the inter-layer distance and the decay length for the hopping is $\Delta = 0.319a_0$. In this paper, we will not need to use the specific form of $t$ in (\ref{Eqn:SlaterKoster}). As explained later, we only rely on its fast decay.
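As a quick numerical illustration of these statements (a minimal sketch in Python; the function name \texttt{t} and the chosen separations are ours and serve illustration only), Eq.~(\ref{Eqn:SlaterKoster}) with the above parameters reproduces the intra-layer nearest neighbor hopping of $-2.7$eV and the vertical (AA-like) inter-layer hopping of $0.48$eV, and it decays by a factor of $\sim e^{-a_0/\Delta} \approx 0.04$ per additional bond length:
\begin{verbatim}
import numpy as np

# Slater-Koster parameters quoted above (energies in eV, lengths in nm)
V_pi, V_sigma = -2.7, 0.48
a0, d0 = 0.142, 0.335
Delta = 0.319 * a0

def t(d):
    """Hopping of Eq. (SlaterKoster) for a separation vector d."""
    d = np.asarray(d, float)
    r = np.linalg.norm(d)
    cz2 = (d[2] / r) ** 2                 # (d . z / |d|)^2
    return (V_pi * np.exp(-(r - a0) / Delta) * (1 - cz2)
            + V_sigma * np.exp(-(r - d0) / Delta) * cz2)

print(t([a0, 0.0, 0.0]))      # -2.7  eV: intra-layer nearest neighbor
print(t([0.0, 0.0, d0]))      #  0.48 eV: vertical inter-layer (AA) pair
print(t([2 * a0, 0.0, 0.0]))  # ~ -0.12 eV: fast decay with in-plane distance
\end{verbatim}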
To obtain the continuum effective Hamiltonian from the microscopic tight binding model (\ref{Eqn:MicroH}), we next write \begin{align} & H^{SK}_{tb} = \ \sum_{S S'} \sum_{j j'} \sum_{\fvec r_S, \fvec r_{S'}} \int {\rm d}^2 \fvec r\ {\rm d}^2 \fvec r'\ \delta(\fvec r - \fvec r_S) \delta(\fvec r' - \fvec r_{S'}') \nonumber \\ & \times t(\fvec r + \fvec u_{j,S}(\fvec r) - \fvec r' - \fvec u_{j',S'}(\fvec r')) c^{\dagger}_{j, S, \fvec r} c_{j', S', \fvec r'}, \end{align} interchange the order of summation and integration and apply the ``Dirac comb'' formula \[ \sum_{\fvec r_S} \delta(\fvec r - \fvec r_S)= \frac1{A_{mlg}} \sum_{\fvec G} e^{i \fvec G \cdot (\fvec r - \fvec \tau_S)} . \] Here $\fvec G=2\pi\left(m_1\fvec a_2-m_2\fvec a_1\right)\times \hat{z}/A_{mlg}$ is the reciprocal lattice vector of the undistorted monolayer graphene, $m_{1,2}$ are integers, and $A_{mlg} = |\fvec a_1 \times \fvec a_2| = \frac{\sqrt{3}}2 a^2$ is the area of the undistorted monolayer graphene unit cell. Since the physically important states come from the vicinity of the Dirac points, we can decompose the fermion fields into two slowly spatially varying fields $\psi$ and $\phi$ multiplied by the fast spatially varying functions from the valley $\fvec K=4\pi\fvec a_1/(3a^2)$ and $\fvec K' = - \fvec K$ as \begin{align} A_{mlg}^{-1/2} c_{j, S, \fvec r} \simeq e^{i \fvec K \cdot \fvec r} \psi_{j, S}(\fvec r) + e^{- i \fvec K \cdot \fvec r} \phi_{j, S}(\fvec r) . \label{Eqn:fastslow} \end{align} The factor of $A_{mlg}^{-1/2}$ is included to satisfy the anti-commutation relation \begin{align} & \{\psi_{j,S}(\fvec r), \psi^{\dagger}_{j', S'}(\fvec r') \} = \{\phi_{j,S}(\fvec r), \phi^{\dagger}_{j', S'}(\fvec r') \} \nonumber \\ = & \delta_{jj'} \delta_{S S'} \delta(\fvec r - \fvec r'). \end{align} The effective Hamiltonian at the valley $\fvec K$ can now be written as \begin{align} & H_{SK,eff}^\mathbf{K} = \frac1{A_{mlg}} \sum_{j j'}\sum_{ S S'}\sum_{\fvec G, \fvec G'} \int {\rm d}^2 \fvec r\ {\rm d}^2 \fvec r'\ e^{i \fvec G \cdot (\fvec r - \fvec \tau_S)} e^{-i \fvec G' \cdot (\fvec r' - \fvec \tau_{S'})} \nonumber \\ & t(\fvec r + \fvec u_{j,S}(\fvec r) - \fvec r' - \fvec u_{j',S'}(\fvec r'))e^{-i\fvec K\cdot(\fvec r-\fvec r')} \psi^{\dagger}_{j,S}(\fvec r) \psi_{j',S'}(\fvec r'). \label{Eqn:EffH1} \end{align} The effective Hamiltonian at the valley $\fvec K'$ is related to $H_{SK,eff}^\mathbf{K}$ by spinless time reversal symmetry. The hopping amplitude $t$ is a short ranged function that decays exponentially fast as its argument increases beyond a few carbon lattice spacings, while $\psi$ varies slowly over such length scales. In order to take advantage of this fact it is convenient to switch to Eulerian coordinates\cite{Balents19}. By doing so, the locality of the effective theory will become manifest\cite{Balents19}. Thus, we now perform a coordinate transformation from $\fvec r$ to $\fvec X^\parallel$, where for each $j$ and $S$ we let the integration variable be $\fvec X^{\parallel}=\fvec r+ \fvec u^\parallel_{j,S}(\fvec r)$, and similarly for the primed variables. 
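As a concrete example of this change of variables (an illustrative sketch; the rigid-twist displacement field and the numerical values below are ours and are not part of the derivation), consider a rigid twist of a single layer by an angle $\theta$, so that $\fvec u^\parallel(\fvec r) = R(\theta)\fvec r - \fvec r$, with $R(\theta)$ the rotation by $\theta$. Then $\fvec X^\parallel = R(\theta)\fvec r$ inverts to $\fvec r = R(-\theta)\fvec X^\parallel$, so that $\fvec U^\parallel(\fvec X^\parallel) = \fvec X^\parallel - R(-\theta)\fvec X^\parallel$: the displacement grows with $|\fvec X^\parallel|$, but its gradient remains of order $\theta$.
\begin{verbatim}
import numpy as np

theta = np.deg2rad(1.1)                   # illustrative twist angle
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])           # rotation by theta; R(-theta) = R.T

r = np.array([50.0, -20.0])               # an undistorted position (nm)
X = R @ r                                 # distorted position X = r + u(r)
U = X - R.T @ X                           # Eulerian displacement U(X)

print(np.allclose(X - U, r))              # True: X = r + U(X), cf. Eq. (EulerCoord)
print(np.linalg.norm(np.eye(2) - R.T, 2)) # 2 sin(theta/2) ~ 0.019: small gradient
\end{verbatim}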
We also introduce new $\fvec X^\parallel$-dependent fermion fields as \begin{align} & \Psi_{j,S}(\fvec X^{\parallel}) = \left| J\left( \frac{\partial \fvec r}{\partial \fvec X^{\parallel}} \right) \right|^{\frac{1}{2}} \psi_{j, S}(\fvec r) \nonumber \\ = & \left| J\left( \frac{\partial (\fvec X^{\parallel} - \fvec U^{\parallel}_{j, S}(\fvec X^{\parallel}))}{\partial \fvec X^{\parallel}} \right) \right|^{\frac{1}{2}} \psi_{j, S}(\fvec X^{\parallel} - \fvec U^{\parallel}_{j,S}(\fvec X^{\parallel})), \label{Eqn:FieldX} \end{align} where $J$ is the Jacobian determinant\footnote{Note that this differs from the choice made in Ref.~\cite{Balents19} by a phase factor.}. As emphasized in Ref.~\cite{Balents19}, $\fvec {U}$ is not small for twisted structures, and the formalism developed here does not make this assumption. However, we do assume that $\fvec {U}$ is smooth, i.e., its gradients are small. Therefore, the $\Psi$ fields are also slow. By the property of the Dirac delta functions under a change of variables (and because the transformation between $\fvec X^\parallel$ and $\fvec r$ is one-to-one), these fields satisfy the canonical fermion anti-commutation relations \begin{equation}\{ \Psi_{j,S}(\fvec X^{\parallel}) , \Psi^{\dagger}_{j',S'}(\fvec X'^{\parallel}) \} = \delta_{jj'} \delta_{S S'} \delta(\fvec X^{\parallel} - \fvec X'^{\parallel}).\end{equation} For notational simplicity, we also introduce the symbol $\mathcal{J}_{j,S}(\fvec X^{\parallel}) \equiv \left| J\left( \frac{\partial \fvec r}{\partial \fvec X^{\parallel}} \right) \right|^{\frac{1}{2}}$. The effective continuum Hamiltonian now becomes \begin{widetext} \begin{align} & H_{SK,eff}^\mathbf{K} = \frac1{A_{mlg}} \sum_{j j'} \sum_{S S'} \sum_{\fvec G, \fvec G'} e^{-i (\fvec G \cdot \fvec \tau_S - \fvec G' \cdot \fvec \tau_{S'})} \int {\rm d}^2 \fvec X^{\parallel}\ {\rm d}^2 \fvec X'^{\parallel}\ \mathcal{J}_{j,S}(\fvec X^{\parallel}) \mathcal{J}_{j',S'}(\fvec X'^{\parallel}) e^{-i (\fvec G - \fvec K)\cdot \fvec U^{\parallel}_{j,S}(\fvec X^{\parallel})} e^{i (\fvec G' - \fvec K) \cdot \fvec U^{\parallel}_{j', S'}(\fvec X'^{\parallel})} \nonumber \\ & t(\fvec X^{\parallel} + \fvec U^{\perp}_{j,S}(\fvec X^{\parallel}) - \fvec X'^{\parallel} - \fvec U^{\perp}_{j',S'}(\fvec X'^{\parallel})) e^{i (\fvec G \cdot \fvec X^{\parallel} - \fvec G' \cdot \fvec X'^{\parallel})} e^{-i\fvec K\cdot(\fvec X^\parallel-\fvec X'^\parallel)} \Psi^{\dagger}_{j,S}(\fvec X^{\parallel}) \Psi_{j',S'}(\fvec X'^{\parallel}). \label{Eqn:EffH2} \end{align} \end{widetext} In order to exploit the short range nature of $t$, we switch to the center-of-mass coordinate $\fvec x = \frac{1}{2}(\fvec X^{\parallel} + \fvec X'^{\parallel})$ and the relative coordinate $\fvec y = \fvec X^{\parallel} - \fvec X'^{\parallel}$. Thus $\int {\rm d}^2\fvec X^{\parallel}\ {\rm d}^2 \fvec X'^{\parallel} \ldots= \int {\rm d}^2\fvec x\ {\rm d}^2 \fvec y\ldots$, and $e^{i (\fvec G \cdot \fvec X^{\parallel} - \fvec G' \cdot \fvec X'^{\parallel})} = e^{i (\fvec G - \fvec G')\cdot \fvec x} e^{i \frac{1}{2}(\fvec G + \fvec G')\cdot \fvec y}$. The integral over $\fvec x$ contains the phase factor $e^{i (\fvec G - \fvec G')\cdot \fvec x}$ that oscillates strongly over the scale of the monolayer graphene lattice constant $a$ when $\fvec G \neq \fvec G'$, whereas all other factors are smooth functions of $\fvec x$. As a consequence, the integral over $\fvec x$ is negligible as long as $\fvec G \neq \fvec G'$; this collapses the double sum over $\fvec G,\fvec G'$ to a single sum.
Moreover, the remaining fields, whether $\Psi$ or $\fvec U$, are smooth and can now be expanded in powers of gradients, e.g.~$\Psi(\fvec x-\frac{1}{2}\fvec y)\simeq \Psi(\fvec x)-\frac{1}{2}\fvec y\cdot\nabla_{\fvec x} \Psi(\fvec x)+\frac{1}{8}\left(\fvec y\cdot\nabla_{\fvec x}\right)^2\Psi(\fvec x)\ldots$, because the powers of $\fvec y$ are compensated by the exponential decay of $t$ at large $\fvec y$, which effectively confines $\fvec y$ to small values. Changing $\fvec G$ to $-\fvec G$ and working to first order in gradients, we obtain the main result of this section \begin{widetext} \begin{eqnarray} H^{\mathbf{K}}_{SK,eff} &&\simeq\frac{1}{A_{mlg}} \sum_{S,S'} \sum_{jj'} \sum_{\fvec G} e^{i\fvec G \cdot(\fvec{\tau}_S-\fvec{\tau}_{S'})} \int {\rm d}^2 \fvec x \mathcal{J}_{j,S}(\fvec x) \mathcal{J}_{j',S'}(\fvec x) e^{i(\fvec G + \fvec K)\cdot \left(\fvec U^\parallel_{j,S}(\fvec x) - \fvec U^\parallel_{j',S'}(\fvec x) \right)} \nonumber\\ & &\int {\rm d}^2\fvec y e^{-i(\fvec G + \fvec K) \cdot \fvec y} e^{i\frac{\fvec y}{2} \cdot \nabla_{\fvec x} \left( \fvec U^\parallel_{j,S}(\fvec x) + \fvec U^\parallel_{j',S'}(\fvec x) \right) \cdot (\fvec G + \fvec K)} t\left[ \fvec y + \fvec U^\perp_{j,S}\left(\fvec x\right) - \fvec U^\perp_{j',S'}(\fvec x)\right] \nonumber\\ & & \times \left[ \Psi^\dagger_{j,S}(\fvec x) \Psi_{j',S'}(\fvec x) + \frac{\fvec y}2 \cdot\left( \left(\nabla_{\fvec x} \Psi^\dagger_{j,S}(\fvec x) \right) \Psi_{j',S'}(\fvec x) - \Psi^\dagger_{j,S}(\fvec x) \nabla_{\fvec x} \Psi_{j',S'}(\fvec x)\right) \right]. \label{Eqn:EffContH} \end{eqnarray} \end{widetext} Extension to higher order gradients is straightforward (see Appendix \ref{App:quadratic}). We analyze the accuracy of this formula for the Slater-Koster like models of twisted bilayer graphene~\cite{MagaudNL10, KoshinoPRB12}, including in-plane lattice relaxation, by comparing the low energy continuum and tight-binding spectra in the companion paper, where we also include second order gradient terms in the intra-layer part of the effective Hamiltonian. An analogous formula is derived in the next two sections for the model of Ref.~\cite{KaxirasPRB16}, which includes the dependence of $t$ on the relative orientation of the intra-layer nearest neighbor sites and the vector connecting two inter-layer sites $\fvec X_{j,S} - \fvec X'_{j', S'}$. The inter-valley scattering terms are negligibly small. This can be seen by a direct substitution of (\ref{Eqn:fastslow}), following the analysis above, and noticing that the vector $2\fvec K$ is {\it not} a reciprocal lattice vector $\mathbf{G}$, while $3\fvec K$ is. For the intervalley scattering we therefore need to compensate for the missing $\mathbf{K}$ using terms of order $\sim\mathbf{G} \cdot \partial_\mu \fvec{U}$. Because for a rigid twist by angle $\theta$, $\partial_\mu \fvec{U}\sim \theta$, and because the relaxed atomic configuration is smooth on the moire scale $L_m$, more generally $\partial_\mu \fvec{U}\sim a/L_m\ll 1$. This forces us to either go to very high $\mathbf{G}\sim \fvec K L_m/a$, for which the Fourier transform of $t$ is exponentially small, or, for smaller $\mathbf{G}$, to extract the Fourier component of terms of the form $e^{i(\mathbf{G}+\mathbf{K})\cdot \fvec{U}(\fvec x)}$ at $\fvec K$. Upon Fourier expanding $\fvec{U}(\fvec x)$, the function $e^{i(\mathbf{G}+\mathbf{K})\cdot \fvec{U}(\fvec x)}$ can be thought of as a product of generating functions for the Bessel functions.
While non-zero, the Fourier component of $e^{i(\mathbf{G}+\mathbf{K})\cdot \fvec{U}(\fvec x)}$ at $\fvec K$ then corresponds to Bessel functions of high index with arguments set by $\partial_\mu\fvec{U}(\fvec x)$, and such Bessel functions are exponentially small. Inspecting the tight binding spectra analyzed in the companion paper, which contain the intervalley scattering terms, and comparing them with the spectra obtained from the continuum models which neglect them, indeed justifies neglecting the intervalley scattering terms over the experimentally relevant energy scale. \subsection{Bond Orientation Dependent Hopping}\label{Sec:Extension} In the previous section we derived the effective continuum Hamiltonian when the hopping depends only on the separation of two carbon atoms $\fvec X_{j, S} - \fvec X'_{j', S'}$, as is the case for Slater-Koster type models~\cite{Ando2001,Uryu2004,SlaterKoster1954,MagaudNL10,KoshinoPRB12,KangVafekPRX}. In such models, the Wannier states are essentially the atomic $p_z$ orbitals on each carbon atom, and therefore the full azimuthal symmetry is retained, making the inter-layer hoppings in-plane isotropic (see Eq.~(\ref{Eqn:SlaterKoster})), with no dependence on the three nearest neighbor bond vectors $\fvec n^{(\alpha)}_{j,S}(\fvec X)$ at the position $\fvec X_{j,S}$, where $\alpha = 1,2$, or $3$ (see Fig.~\ref{Fig:Schematic}); and similarly no dependence on $\fvec n^{(\alpha')}_{j',S'}(\fvec X')$. In the more detailed microscopic model derived from the DFT determined Wannier states of the monolayer (and untwisted bilayer) graphene's conduction and valence bands~\cite{KaxirasPRB16}, the localized state indeed has a dominant $p_z$ character, but the azimuthal symmetry is lost due to the trigonal crystal field of the neighboring atoms. The localized state is therefore a superposition of several lattice harmonics with angular momenta $L_z = 0$, $3$, $6$, etc. As a consequence, the inter-layer hopping part of $t$ acquires a dependence on the relative orientation of the atomic separation vector $\fvec X_{j, S} - \fvec X'_{j', S'}$ and $\fvec n^{(\alpha)}_{j,S}(\fvec X)$, $\fvec n^{(\alpha')}_{j',S'}(\fvec X')$. Here we generalize Eq.(\ref{Eqn:EffContH}) to include such effects in the effective continuum Hamiltonian. In Eq.(\ref{Eqn:MicroH}) we therefore replace \begin{eqnarray} && t(\fvec X_{j,S} - \fvec X_{j', S'}')\rightarrow\nonumber\\ && t(\fvec X_{j,S} - \fvec X_{j', S'}', \{\fvec n^{(\alpha)}_{j, S}(\fvec X) \}, \{\fvec n_{j', S'}^{(\alpha)}( \fvec X')\} ), \label{Eqn:tBondOrient}\end{eqnarray} where the $\{\}$ denotes the dependence on each term in the set, i.e.~$\alpha=1,2,3$. Next, from the definition of the nearest neighbor vectors we can write \begin{eqnarray}\label{Eqn:nVec} \fvec n^{(\alpha)}_{j, S}(\fvec X) = \fvec X^{\parallel}_{j, \bar{S}}(\fvec r_S + \fvec \delta_{S}^{(\alpha)}) - \fvec X^{\parallel}_{j,S}(\fvec r_S), \end{eqnarray} where $\fvec \delta_S^{(\alpha)}$ are the three nearest neighbor bond vectors of the undistorted lattice, which can be expressed as \begin{eqnarray} \fvec \delta_S^{(\alpha)} &=& R(2\pi(\alpha-1)/3) \fvec \delta_S^{(1)},\\ \fvec \delta_S^{(1)} &=& \fvec \tau_{\bar{S}} - \fvec \tau_S\equiv \fvec\delta_S, \end{eqnarray} where $R(\omega)$ is the two-dimensional rotation matrix with the angle $\omega$, \begin{eqnarray} R(\omega)=\left(\begin{array}{cc} \cos\omega & -\sin\omega \\ \sin\omega & \cos\omega\end{array}\right).
\end{eqnarray} With our choice of the coordinate system, $\fvec \delta_A^{(1)}=\fvec\tau_B=\frac{1}{3}(\fvec a_1+\fvec a_2)$, $\fvec \delta_A^{(2)}=\frac{1}{3}\left(\fvec a_2 -2\fvec a_1\right)$, $\fvec \delta_A^{(3)}=\frac{1}{3}\left(\fvec a_1 -2\fvec a_2\right)$, and $\fvec\delta_B^{(\alpha)}=-\fvec\delta_A^{(\alpha)}$. We will consider only atomic configurations which vary smoothly not only within a sublattice but also between the two sublattices. All configurations examined in the companion paper are of this type. Therefore, we can drop the $S$ subscript in Eq.(\ref{Eqn:EulerCoord}), \begin{equation} \fvec X_{j,S}(\fvec r) \equiv \fvec X_j(\fvec r) = \fvec r + \fvec U^{\parallel}_j(\fvec X_j) + \fvec U^{\perp}_j(\fvec X_j).\end{equation} Correspondingly, the bond vectors in Eq.(\ref{Eqn:nVec}) become $\fvec n^{(\alpha)}_{j,S}(\fvec X) = \fvec X_j^{\parallel}(\fvec r_S + \fvec \delta_S^{\alpha}) - \fvec X_j^{\parallel}(\fvec r_S)$. Introducing a continuum variable $\fvec r$ and changing the integration variable to $\fvec X^\parallel=\fvec r+\fvec u^\parallel_j(\fvec r)$ for each $j$ and $S$, as in the previous section, makes the bond vectors $\fvec n^{(\alpha)}_{j,S}$ a function of $\fvec{X}^\parallel$. Therefore, $t$ in Eq.(\ref{Eqn:EffH2}) gains an additional dependence on $\fvec n_{j,S}^{(\alpha)}(\fvec X^{\parallel})$ and $\fvec n_{j',S'}^{(\alpha)}(\fvec X'^{\parallel})$. For smooth atomic displacement fields we can then write \begin{eqnarray} \fvec n_{j,S}^{(\alpha)}(\fvec X^{\parallel}) &=& \fvec \delta_S^{\alpha} + \fvec u_j^\parallel\left(\fvec X^\parallel-\fvec U_j^\parallel\left(\fvec X^\parallel\right)+\fvec \delta_S^{\alpha} \right)-\fvec U^\parallel_j\left(\fvec{X}^\parallel\right)\nonumber\\ &&\simeq \fvec \delta_S^{\alpha}+ \delta_{S,\mu}^{\alpha}\frac{\partial \fvec u_j^\parallel}{\partial r_\mu}\simeq \fvec \delta_S^{\alpha}+ \delta_{S,\mu}^{\alpha}\frac{\partial \fvec U^\parallel_{j}\left(\fvec X^\parallel\right)}{\partial X^\parallel_\mu}, \label{Eqn:NNBond} \end{eqnarray} because \begin{eqnarray} && \frac{\partial u_{j,\nu}^\parallel}{\partial r_\mu}= \frac{\partial \left(X_{\nu}^\parallel-r_\nu\right)}{\partial r_\mu}=\frac{\partial X_{\nu}^\parallel}{\partial r_\mu}-\delta_{\mu\nu}=\left(\frac{\partial \fvec r}{\partial \fvec X}\right)_{\nu\mu}^{-1}-\delta_{\mu\nu}\nonumber\\ &&=\left(\frac{\partial \left(\fvec X^\parallel-\fvec U^\parallel_j\right)}{\partial \fvec X^\parallel}\right)_{\nu\mu}^{-1}-\delta_{\mu\nu}\simeq \frac{\partial U^\parallel_{j,\nu}}{\partial X^\parallel_\mu}. \end{eqnarray} By going to the center-of-mass and relative coordinates, and keeping only terms up to first order in derivatives of $\fvec U^{\parallel}$, we find \begin{align} & \fvec n^{(\alpha)}_{j, S}\left(\fvec x \pm \frac{1}{2} \fvec y\right) \simeq \fvec \delta_S^{(\alpha)}+ \delta_{S,\mu}^{(\alpha)}\frac{\partial \fvec U^\parallel_{j}\left(\fvec x\right)}{\partial x_\mu}. \end{align} Following the arguments that led to Eq.(\ref{Eqn:EffH2}), we find that $H^\mathbf{K}_{eff}$ can be obtained from Eq.(\ref{Eqn:EffContH}) if for each layer index $j$, $j'$, we drop the sublattice index on $\fvec U$, i.e.
we replace $\fvec U^{\parallel,\perp}_{j,S}(\fvec x)\rightarrow \fvec U^{\parallel,\perp}_{j}(\fvec x)$, and similarly for $j'$, $S'$, and we replace \begin{eqnarray}\label{Eqn:tReplacement} && t\left[\fvec d\right]\rightarrow \\ && t\left[\fvec d, \left\{\fvec \delta_S^{(\alpha)}+ \delta_{S,\mu}^{(\alpha)}\frac{\partial \fvec U^\parallel_{j}\left(\fvec x\right)}{\partial x_\mu} \right\}, \left\{\fvec \delta_{S'}^{(\alpha)}+ \delta_{S',\mu}^{(\alpha)}\frac{\partial \fvec U^\parallel_{j'}\left(\fvec x\right)}{\partial x_\mu}\right\} \right].\nonumber \end{eqnarray} With these replacements, Eq.(\ref{Eqn:EffContH}) gives the effective continuum Hamiltonian for the bond orientation dependent inter-layer hopping for an arbitrary, sublattice independent, smooth atomic deformation. The additional configuration dependent on-site term is discussed in the next subsection. \subsection{Bond Dependent On-Site Energy} The onsite terms in the tight binding model need to be considered separately because in practice they may not be accounted for accurately by the continuous interpolation function $t$ in the expression (\ref{Eqn:tReplacement}). We assume that the difference between the full configuration dependence of the on-site term and the contribution from $t$ at $\fvec d=0$ can be approximated by the form \begin{align} H_{onsite} = & \sum_{j, S} \sum_{\fvec r_S} \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec X_{j, S})| \right\} \right) c^{\dagger}_{j, S, \fvec r_S} c_{j, S, \fvec r_S} , \label{Eqn:HOnSite} \end{align} where the onsite energy $\epsilon$ is assumed to depend on the lengths of the three nearest neighbor bonds $\fvec n_{j, S}^{(\alpha)}(\fvec X)$, defined in Eq.~(\ref{Eqn:NNBond}). Applying the same methods, we can write \begin{align} & H_{onsite}= \nonumber \\ & \frac1{A_{mlg}} \sum_{j,S, \fvec G} \int {\rm d}^2 \fvec r\ e^{i \fvec G \cdot (\fvec r - \fvec \tau_S)} \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec X_{j, S})| \right\} \right) c^{\dagger}_{j, S, \fvec r} c_{j, S, \fvec r}. \end{align} Next, we introduce the field operator $\psi_{j, S}(\fvec r)$ via Eq.~(\ref{Eqn:fastslow}) in order to obtain the correction to the effective Hamiltonian at the valley $\fvec K$ from the on-site term. Changing the integration variable $\fvec r$ to $\fvec X^{\parallel}$ introduces the Jacobian determinant $|J(\partial \fvec r/ \partial \fvec X^{\parallel})|$. As shown in Eq.~(\ref{Eqn:FieldX}), this factor is absorbed by the redefinition of the field operator $\Psi_{j, S}(\fvec X^{\parallel})$. In addition, $e^{i \fvec G \cdot \fvec r} = e^{i \fvec G \cdot (\fvec X^{\parallel} - \fvec U^{\parallel}_{j,S}(\fvec X^{\parallel}))}$. Thus, the onsite term at the valley $\fvec K$ can be written as \begin{align} H^{\mathbf{K}}_{onsite} = & \sum_{j, S, \fvec G} e^{-i \fvec G \cdot \fvec \tau_S} \int {\rm d}^2 \fvec X^{\parallel}\ e^{i \fvec G \cdot (\fvec X^{\parallel} - \fvec U^{\parallel}_{j,S}(\fvec X^{\parallel}))} \nonumber \\ & \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec X^{\parallel}) |\right\} \right) \Psi^{\dagger}_{j,S}(\fvec X^{\parallel}) \Psi_{j,S}(\fvec X^{\parallel}). \label{Eqn:OnSiteGSum} \end{align} If $\fvec G \neq 0$, the factor $e^{i \fvec G \cdot \fvec X^{\parallel}}$ oscillates around zero on the scale of the carbon-carbon distance, and because it multiplies much more slowly varying functions of $\fvec X^{\parallel}$, the integral vanishes.
Therefore, we can keep only the term with $\fvec G = 0$ in the above sum and obtain \begin{align} H^{\mathbf{K}}_{eff,onsite} = & \sum_{j, S} \int {\rm d}^2 \fvec x\ \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec x) |\right\} \right) \Psi^{\dagger}_{j,S}(\fvec x) \Psi_{j,S}(\fvec x). \end{align} To linear order in gradients of $\fvec U^{\parallel}$, the length of the distorted nearest neighbor bond is \begin{align} |\fvec n_{j, S}^{(\alpha)}(\fvec x) | \approx |\fvec \delta_S^{\alpha}| + \delta_{S, \mu}^{\alpha} \frac{\partial U^{\parallel}_{j,\mu}}{\partial x_{\nu}} \delta_{S, \nu}^{\alpha}/|\fvec \delta_S^{\alpha}|, \end{align} where all of the undistorted nearest neighbor bond vectors have the same length, $|\fvec \delta_S^{\alpha}| = a/\sqrt{3}$. Thus, to leading order in the gradient expansion, the onsite energy $\epsilon$ is \begin{equation} \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec x) |\right\} \right) \approx \epsilon\left( \frac{a}{\sqrt{3}} \right) + \frac{\sqrt{3}}{a}\sum_{\alpha = 1}^3 \frac{\partial \epsilon}{\partial |\delta_S^{\alpha}|} \delta_{S, \mu}^{\alpha} \frac{\partial U^{\parallel}_{j,\mu}}{\partial x_{\nu}} \delta_{S, \nu}^{\alpha}. \end{equation} Due to the $C_3$ and space inversion symmetries, $\partial \epsilon/\partial |\delta_S^{\alpha}|$ is independent of $\alpha$ and $S$. In addition, $\sum_{\alpha} \delta_{S, \mu}^{\alpha} \delta_{S, \nu}^{\alpha} = \delta_{\mu\nu}a^2/2$. Introducing $\epsilon_0 = \epsilon(a/\sqrt{3})$ and $\kappa = \frac{\sqrt{3}}2 a \left( \partial \epsilon/\partial |\delta_S^{\alpha}| \right)$, we obtain \begin{align} \epsilon\left( \left\{ |\fvec n_{j, S}^{(\alpha)}(\fvec x) |\right\} \right) \approx \epsilon_0 + \kappa \fvec \nabla \cdot \fvec U_j^{\parallel} \ . \end{align} Therefore, the contribution of the on-site term to the effective continuum Hamiltonian at $\fvec K$ is \begin{align} H_{eff,onsite}^{\mathbf{K}} = \sum_{j, S} \int {\rm d}^2\fvec x\ \left( \epsilon_0 + \kappa \fvec \nabla \cdot \fvec U^{\parallel}_j \right) \Psi_{j, S}^{\dagger}(\fvec x) \Psi_{j, S}(\fvec x), \label{Eqn:HEffOnSite} \end{align} thus correcting the value of the deformation potential obtained from $t$ alone. \subsection{Bond Orientation Dependent Microscopic Model of Ref.~\cite{KaxirasPRB16}} In the derivation above, we allowed for a general form of the hopping, depending on all the nearest neighbor bond vectors $\fvec n^{(\alpha)}_{j,S}$ and $\fvec n^{(\alpha)}_{j',S'}$. The model of Ref.~\cite{KaxirasPRB16} was derived for configurations which are locally $C_3$ symmetric, i.e.~all three bond vectors $\fvec n^{(\alpha)}_{j,S}$ are equivalent to each other, as are the three bond vectors $\fvec n^{(\alpha)}_{j', S'}$. In this case, the bond dependence can be simplified because the hopping is the same for each one of the three bond vectors. With an eye towards generalizing to smooth lattice distortions which lead to a (small) violation of the local $C_3$ symmetry, we write the formula for the hoppings in Eq.(\ref{Eqn:tBondOrient}) as \begin{align} & t(\fvec X_{j,S} - \fvec X_{j', S'}', \{\fvec n^{(\alpha)}_{j, S}(\fvec X) \}, \{\fvec n_{j', S'}^{(\alpha)}( \fvec X')\} )= \nonumber \\ & \frac19 \sum_{\alpha = 1}^3 \sum_{\alpha' = 1}^3 t^{jj'}_{sym}(\fvec X_{j,S} - \fvec X_{j', S'}', \fvec n^{(\alpha)}_{j, S}(\fvec X), \fvec n_{j', S'}^{(\alpha')}( \fvec X') ), \label{EqnS:InterHopping} \end{align} where $t^{jj'}_{sym}$ is the hopping function of Ref.~\cite{KaxirasPRB16} when the configuration is locally $C_3$ symmetric.
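As an aside, the two geometric identities used in the on-site expansion above, $|\fvec \delta_S^{(\alpha)}| = a/\sqrt{3}$ and $\sum_{\alpha} \delta_{S, \mu}^{(\alpha)} \delta_{S, \nu}^{(\alpha)} = \delta_{\mu\nu}a^2/2$, are easily verified by computer algebra (a minimal check in Python/sympy, not part of the derivation itself):
\begin{verbatim}
import sympy as sp

a = sp.symbols('a', positive=True)
a1 = sp.Matrix([a, 0])
a2 = sp.Matrix([a/2, sp.sqrt(3)*a/2])

# nearest neighbor bond vectors of sublattice A (undistorted lattice)
deltas = [(a1 + a2)/3, (a2 - 2*a1)/3, (a1 - 2*a2)/3]

print([sp.simplify(d.norm()) for d in deltas])
# [sqrt(3)*a/3]*3, i.e. every bond has length a/sqrt(3)

S = sum((d * d.T for d in deltas), sp.zeros(2, 2))
print(sp.simplify(S))
# Matrix([[a**2/2, 0], [0, a**2/2]]), i.e. sum_alpha delta_mu delta_nu
# equals (a**2/2) delta_{mu,nu}
\end{verbatim}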
For the intra-layer hopping, \begin{equation} t_{sym}^{j = j'}(\fvec X_{j,S} - \fvec X_{j', S'}', \fvec n^{(\alpha)}_{j, S}(\fvec X), \fvec n_{j', S'}^{(\alpha')}( \fvec X')) =\tilde{V}_0(y), \label{Eqn:KaxirasIntra} \end{equation} where \begin{equation} \tilde{V}_0(y) = \tilde{\lambda}_0 e^{-\tilde{\xi}_0 \left(y/a\right)^2} \cos\left(\tilde{\kappa}_0 \frac{y}{a}\right) + \tilde{\lambda}_1 \frac{y^2}{a^2} e^{- \tilde{\xi}_1 (y/a - \tilde{x}_1)^2}, \end{equation} and where $\fvec y=\fvec X^\parallel_{j,S} - \fvec X'^{\parallel}_{j',S'}$ is the in-plane projected separation vector and $y=|\fvec y|$ is its magnitude. The intra-layer hopping with $j = j'$ is thus rotationally isotropic, depending only on $y$. Note that its explicit formula is not provided in Ref.~\cite{KaxirasPRB16}, in which the hopping constants are listed only for discrete values of $y$, i.e.~for the distances of several pairs of carbon atoms on the undistorted monolayer graphene lattice. To obtain the hopping at arbitrary $y$, we fit these hopping constants with the formula in Eq.~(\ref{Eqn:KaxirasIntra}); the extracted parameters are listed in the left table of Table~\ref{Tab:KaxirasHopping}. \begin{table}[t] \centering \begin{tabular}{|c|c|c|} \hline $i$ & $0$ & $1$ \\ \hline $\tilde{\lambda}_i/\mathrm{eV}$& $-18.4295$ & $-3.7183$ \\ \hline $\tilde{\xi}_i$ & $1.2771$ & $6.2194$ \\ \hline $\tilde{x}_i$ & & $0.9071$ \\ \hline $\tilde{\kappa}_i$ & $2.3934$ & \\ \hline \end{tabular} \ \begin{tabular}{|c|c|c|c|} \hline $i$ & $0$ & $3$ & $6$ \\ \hline $\lambda_i/\mathrm{eV}$& $0.3155$ & $-0.0688$ & $-0.0083$ \\ \hline $\xi_i$ & $1.7543$ & $3.4692$ & $2.8764$ \\ \hline $x_i$ & & $0.5212$ & $1.5206$ \\ \hline $\kappa_i$ & $2.0010$ & & $1.5731$ \\ \hline \end{tabular} \caption{Parameters of the formula for the intra-layer hopping ($j = j'$, left table, Eq.~(\ref{Eqn:KaxirasIntra})) and for the inter-layer hopping ($j \neq j'$, right table, from Ref.~\cite{KaxirasPRB16}, Eq.~(\ref{Eqn:KaxirasInter})). } \label{Tab:KaxirasHopping} \end{table} In the locally $C_3$ symmetric case, the inter-layer part of $t^{jj'}_{sym}$ depends only on two bond vectors, one at $\fvec X_{j,S}$ and another at $\fvec X'_{j',S'}$, as \begin{eqnarray} t^{j\neq j'}_{sym}(&&\fvec X_{j,S} - \fvec X'_{j',S'}, \fvec n_{j,S}, \fvec n_{j',S'}) = V_0(y) + \nonumber \\ &&V_3(y) \left( \cos(3\theta_{12}) + \cos(3\theta_{21}) \right) + \nonumber\\ &&V_6(y) \left( \cos(6\theta_{12}) + \cos(6\theta_{21}) \right). \label{Eqn:KaxirasInter} \end{eqnarray} The explicit formulas for $V_i(y)$ are presented in Ref.~\cite{KaxirasPRB16} and we include them here for completeness: \begin{eqnarray} V_0(y) &=& \lambda_0 e^{-\xi_0 (y/a)^2} \cos\left(\kappa_0 \frac{y}{a}\right), \\ V_3(y) &=& \lambda_3 \frac{y^2}{a^2} e^{-\xi_3 (y/a - x_3)^2}, \\ V_6(y) &=& \lambda_6 e^{-\xi_6 (y/a - x_6)^2} \sin\left(\kappa_6 \frac{y}{a}\right). \end{eqnarray} The parameters are specified in Table \ref{Tab:KaxirasHopping}. The variables $\theta_{12}$ and $\theta_{21}$ in Eq.(\ref{Eqn:KaxirasInter}) are the angles between $\fvec y$ and the nearest neighbor bond vectors on the two layers, i.e. \begin{align} &\theta_{12} = \cos^{-1}\left( -\frac{\fvec y \cdot \fvec n_{j,S}}{y |\fvec n_{j,S}|} \right) = \theta_{\fvec y} - \theta_{j,S} + \pi, \\ &\theta_{21} = \cos^{-1} \left( \frac{\fvec y \cdot \fvec n_{j',S'}}{y |\fvec n_{j',S'}|} \right) = \theta_{\fvec y} - \theta_{j',S'} \ .
\end{align} In the above we defined $\theta_{\fvec y}$ to be the angle between the separation vector $\fvec y$ and the $x$ axis, and $\theta_{j,S}$ ($\theta_{j',S'}$) to be the angle between the bond vector $\fvec n_{j, S}$ ($\fvec n_{j', S'}$) and the $x$ axis. The angle $\theta_{j,S}^{(\alpha)}$ ($\theta_{j',S'}^{(\alpha)}$) is introduced similarly, with the superscript $\alpha$ distinguishing the angles of the different bond vectors. In the absence of the lattice distortion (e.g.~for a rigid twist), the three in-plane nearest neighbors of a carbon atom are $C_3$ symmetric about the carbon atom, and $\theta^{(\alpha)}_{j, S} = \theta_{j, S}^{(1)} + 2\pi(\alpha - 1)/3$. Therefore, the angles $\theta_{12}$ and $\theta_{21}$ change by a multiple of $2\pi/3$ if a different nearest neighbor bond is chosen, leaving $\cos(3 m \theta_{12})$ and $\cos(3 m \theta_{21})$, with $m$ an integer, unchanged. Therefore, without distortions, each term in the sum on the right hand side of (\ref{EqnS:InterHopping}) is the same and the sum is redundant. In the presence of the lattice relaxation, however, the local $C_3$ symmetry is in general broken and the bond vectors become inequivalent. In order to generalize $t$ to include such slowly varying atomic displacements, we use the formula (\ref{EqnS:InterHopping}). With the local $C_3$ symmetry broken, the differences between the angles $\theta^{(\alpha)}_{j, S}$ deviate from $\pm 2\pi/3$. For smooth lattice deformations the deviation is small. To obtain this deviation, we write $\fvec n_{j, S}^{(\alpha)} = \fvec \delta_S^{(\alpha)} + \delta \fvec n_{j, S}^{(\alpha)}$ with $\delta \fvec n_{j, S}^{(\alpha)} = \delta^{(\alpha)}_{S,\mu}\frac{\partial \fvec U_j^{\parallel}}{\partial x_{\mu}} $ and expand the angle $\theta_{j,S}^{(\alpha)}$ to linear order in the derivatives of $\fvec U^{\parallel}_j$ as \begin{align} & \theta_{j,S}^{(\alpha)} = \theta_{\fvec \delta_S^{(\alpha)}} + \delta \theta^{(\alpha)}_{j,S}, \\ & \delta \theta_{j,S}^{(\alpha)} = \frac{(\hat z \times \fvec \delta_S^{(\alpha)})\cdot \delta \fvec n_{j,S}^{(\alpha)}}{|\fvec \delta_S^{(\alpha)}|^2} = \frac{\epsilon_{\mu\nu}}{|\fvec \delta_S^{(\alpha)}|^2} \delta_{S,\mu}^{(\alpha)} \frac{\partial U_{j,\nu}^{\parallel}}{\partial x_{\rho}} \delta_{S,\rho}^{(\alpha)}. \end{align} Note that $\theta_{\fvec \delta_B^{(\alpha)}}=\theta_{\fvec \delta_A^{(\alpha)}}+\pi$, and for our choice of the coordinate system, $\theta_{\fvec \delta_A^{(1)}}=\pi/6$, $\theta_{\fvec {\delta}_A^{(2)}}=\pi/6+2\pi/3$, $\theta_{\fvec \delta_A^{(3)}}=\pi/6-2\pi/3$. For the inter-layer part we therefore introduce the derivatives of $t^{j j'}_{sym}$ with respect to the angles as \begin{align} & t^{(1)}_{j\neq j',S}(\fvec y) = \left. \frac{\partial t^{j\neq j'}_{sym}}{\partial \theta_{j, S}} \right|_{\theta_{j,S} = \theta_{\fvec \delta_S}} = \\ & -3 V_3(y) \sin(3(\theta_{\fvec y} - \theta_{\fvec \delta_S})) + 6 V_6(y) \sin(6(\theta_{\fvec y} - \theta_{\fvec \delta_S})) , \nonumber \\ & t^{(2)}_{j\neq j',S'}(\fvec y) = \left. \frac{\partial t^{j\neq j'}_{sym}}{\partial \theta_{j',S'}} \right|_{\theta_{j',S'} = \theta_{\fvec \delta_{S'}}} = \nonumber \\ & 3 V_3(y) \sin(3(\theta_{\fvec y} - \theta_{\fvec \delta_{S'}})) + 6 V_6(y) \sin(6(\theta_{\fvec y} - \theta_{\fvec \delta_{S'}}))\end{align} while the corresponding derivatives vanish for the intra-layer part, \begin{equation} t^{(1)}_{j=j',S}(\fvec y)=t^{(2)}_{j=j',S'}(\fvec y)=0.
\end{equation} The above expressions are clearly invariant under $\theta_{\fvec \delta_S}\rightarrow \theta_{\fvec \delta_S}\pm2\pi/3$, and therefore it does not matter which $\theta_{\fvec \delta^{(\alpha)}_S}$ is substituted for $\theta_{\fvec \delta_S}$. Thus, combining with Eq.~(\ref{Eqn:HEffOnSite}), for an arbitrary smooth lattice deformation, the effective continuum Hamiltonian for the lattice model of Ref.~\cite{KaxirasPRB16} is \begin{widetext} \begin{eqnarray} && H^{\mathbf{K}}_{eff} \simeq \frac{1}{A_{mlg}} \sum_{S,S'} \sum_{jj'} \sum_{\fvec G} e^{i\fvec G \cdot(\fvec{\tau}_S-\fvec{\tau}_{S'})} \int {\rm d}^2 \fvec x\ \mathcal{J}_{j}(\fvec x) \mathcal{J}_{j'}(\fvec x) e^{i(\fvec G + \fvec K)\cdot \left(\fvec U^\parallel_{j}(\fvec x) - \fvec U^\parallel_{j'}(\fvec x) \right)} \int {\rm d}^2\fvec y e^{-i(\fvec G + \fvec K) \cdot \fvec y} \nonumber\\ && \times e^{i\frac{\fvec y}{2} \cdot \nabla_{\fvec x} \left( \fvec U^\parallel_{j}(\fvec x) + \fvec U^\parallel_{j'}(\fvec x) \right) \cdot (\fvec G + \fvec K)} \left(t^{j j'}_{sym}\left[ \fvec y + \fvec U^\perp_{j}\left(\fvec x\right) - \fvec U^\perp_{j'}(\fvec x), \fvec \delta_S, \fvec \delta_{S'} \right] + t^{(1)}_{j j', S}(\fvec y) \frac13 \sum_{\alpha = 1}^3 \delta \theta_{j,S}^{(\alpha)} + t^{(2)}_{j j', S'}(\fvec y) \frac13\sum_{\alpha' = 1}^3 \delta \theta_{j', S'}^{(\alpha')} \right)\nonumber\\ && \times \left[ \Psi^\dagger_{j,S}(\fvec x) \Psi_{j',S'}(\fvec x) + \frac{\fvec y}2 \cdot\left( \left(\nabla_{\fvec x} \Psi^\dagger_{j,S}(\fvec x) \right) \Psi_{j',S'}(\fvec x) - \Psi^\dagger_{j,S}(\fvec x) \nabla_{\fvec x} \Psi_{j',S'}(\fvec x)\right) \right] \nonumber \\ && + \sum_{j, S} \int {\rm d}^2\fvec x\ \left( \epsilon_0 + \kappa \fvec \nabla \cdot \fvec U^{\parallel}_j(\fvec x) \right) \Psi_{j, S}^{\dagger}(\fvec x) \Psi_{j, S}(\fvec x) \ . \label{Eqn:EffContHCorrection} \end{eqnarray} \end{widetext} The comparison between the continuum and tight-binding spectra for the model of Ref.~\cite{KaxirasPRB16}, for a rigid twist as well as for the (relaxed) atomic configurations obtained from solving the continuum elastic theory for the twisted bilayer, is shown in the companion paper. \section{Discussion} \label{sec:discussion} In our derivation of the continuum effective Hamiltonians $H_{eff}^\mathbf{K}$ for graphene bilayers, we have not made use of symmetries. Although this might seem reasonable given that we are considering arbitrary smooth inhomogeneous atomic configurations which would remove any remaining symmetries, as pointed out by Balents~\cite{Balents19}, the form of the leading order terms in the effective Hamiltonian can nevertheless be further constrained. That is because $H_{eff}^\mathbf{K}$ must be invariant under symmetry operations of the undistorted lattice (i.e.~the AA-stacked bilayer) that leave a valley invariant, if we {\it simultaneously} transform the fermion operators {\it and} the atomic displacement fields~\cite{Balents19}. Although we postpone the detailed analysis of the symmetries, here and in Appendix \ref{App:symmetry} we would like to highlight some of their consequences. The symmetries of interest to us will be $C_3$, $C_2\mathcal{T}$, $C_{2x}$, and $\mathcal{R}_y$. Here $C_3$ is the three-fold rotation about the $z$ axis. $C_2\mathcal{T}$ is the time reversal followed by the two-fold rotation about the $z$ axis with the two sublattices interchanged.
$\mathcal{R}_y$ is the mirror reflection through the $xz$ plane bisecting the nearest neighbor carbon bond, so that $(x, y, z) \rightarrow (x, - y, z)$ and the sublattice index $A \leftrightarrow B$. $C_{2x}$ is $\mathcal{R}_y$ followed by the interchange of the two layers, i.e.~followed by the mirror reflection $\mathcal{R}_z$ through the $xy$ plane half-way between the layers. The consequences of $C_2\mathcal T$ and $\mathcal{R}_y$ at $\mathbf{G}=0$ for the contact interlayer tunneling term --independent of the spatial gradients of the atomic displacement, i.e.~to zeroth order in $\nabla_{\fvec x}\fvec U$-- were worked out in Ref.~\cite{Balents19}. There it was shown that, when combined with $C_3$, only two independent real parameters are allowed for the first shell of wavevectors $\mathbf{G}=0,-4\pi \fvec a_2\times \hat{z}/\sqrt{3}a^2, 4\pi(\fvec a_1-\fvec a_2)\times \hat{z}/\sqrt{3}a^2$. Physically, these correspond to the interlayer tunneling through the AA region and the AB region, and are the only interlayer tunneling terms kept in the Bistritzer-MacDonald model~\cite{BMModel,Balents19}. As mentioned in the introduction, the anomalous decrease of the bandwidth near the magic twist angle promotes the importance of the next-to-leading order terms in setting the anisotropies, thus selecting from the nearly degenerate manifold of correlated states that are obtained if only the leading order terms are kept. Instead of listing all of the consequences of the above symmetries on such higher order terms, here we only mention in passing that $C_2\mathcal{T}$ and the combined operation $C_{2x}\mathcal{R}_y$ will be seen to allow for a particularly interesting inter-layer tunneling contact term which, as shown in the companion paper, is the main source of the particle-hole symmetry breaking in the model of Ref.~\cite{KaxirasPRB16}, but which is altogether absent in the Slater-Koster type models~\cite{MagaudNL10,KoshinoPRB12,KangVafekPRX}. \acknowledgments O.~V.~is supported by NSF DMR-1916958 and is partially funded by the Gordon and Betty Moore Foundation's EPiQS Initiative Grant GBMF11070, National High Magnetic Field Laboratory through NSF Grant No.~DMR-1157490 and the State of Florida. J.~K.~acknowledges the support from the NSFC Grant No.~12074276 and the start-up grant of ShanghaiTech University.
\section{Introduction} \label{sec:introduction} Tangle was created by the IOTA Foundation to enable scalable distributed ledger technology for IoT applications with no transaction fees~\cite{2018:SP}. Tangle is an example of a directed acyclic graph (DAG) based distributed ledger~\cite{2018-FB-et-al,2020-YL-et-al,2020:SP-et-al}. Tangle is a generalization of linear blockchain technology: it stores data as a DAG, in contrast to the linear data structure of blockchain. The DAG structure of Tangle allows multiple transactions to be added into the ledger simultaneously. Hence, the throughput of Tangle is significantly higher than that of blockchain. The modified data structure supports higher scalability and decreased mining cost~\cite{2018:SP}, which are essential for Internet of Things (IoT)~\cite{2014:LX-et-al,2019-SK-et-al} applications. This is because IoT devices have limited computational power and cannot afford the high transaction fees associated with mining in blockchain. Therefore, Tangle is used in IoT applications such as automating real-time trade and exchange of renewable energy amongst neighbourhoods~\cite{2020-MZ-et-al}. Two important elements of Tangle are proof of work (PoW) and weight of transactions (WoT). PoW requires users to solve a hash puzzle to add a transaction in the distributed ledger. WoT is the reputation assigned to the transaction. In the current implementation of Tangle, WoT is a fixed function of PoW. This makes Tangle susceptible to being dominated by users with powerful computational resources: such users can add a very high number of transactions with a high WoT. A natural question is: \textit{how to make Tangle fair\footnote{By fair, we mean that every agent, irrespective of its computational power, can add transactions into Tangle at a low cost. Also, the mechanism should compensate agents for completing a difficult PoW. In our transaction rate control mechanism, agents' PoW is compensated with WoT. It is of interest in future work to incorporate schemes such as proportional, max-min and social welfare fairness.} to all users?} In this paper, we utilize the PoW difficulty level in conjunction with the WoT to control the rate of new transactions in Tangle. Specifically, we formulate the transaction rate control problem for Tangle as a \textit{principal-agent problem (PAP) with adverse selection}~\cite{1995:AM}. The PAP has been studied widely in microeconomics to design a contract between two strategic players with misaligned utilities. Examples include labor contracts~\cite{2005:GM}, the insurance market~\cite{2003:MH}, and differential privacy~\cite{2014:CD-AR}. There are two types of PAP~\cite{1995:AM}: moral hazard and adverse selection. We restrict our attention to the PAP with adverse selection, as the underlying information asymmetry\footnote{In the PAP with adverse selection, the principal cannot observe the state of the agents, and hence, it has to incentivize the agents to reveal their state truthfully. The principal then assigns the effort level and compensation based on the revealed information to maximize its own utility.} is similar to that of the transaction rate control problem in Tangle. In IoT applications, heterogeneous IoT devices with different computational power are the agents, and the transaction rate controller is the principal. The agents want to add new transactions in Tangle at the maximum possible rate.
On the other hand, the principal wants to control the rate of new transactions to reduce network congestion and spamming. As the principal cannot observe the computational power of the agents, it also has to incentivize the agents to reveal their computational power truthfully. To control the transaction rate, the principal assigns the PoW difficulty level to each agent based on their revealed computational power. To ensure that the mechanism is truth-telling, the principal compensates the agents' PoW using an appropriate WoT. To summarize, the information asymmetry between the transaction rate controller and the IoT devices motivates the PAP with adverse selection as a suitable mechanism for controlling the transaction rate in Tangle. It yields a tractable linear program~\cite{2015:DB} with useful underlying structures that can be exploited to speed up the computation. \subsection*{Related Work} Several works study the transaction rate control problem for distributed ledgers.~\cite{2016-DK} formulates a mechanism to control the PoW difficulty level for blockchain that ensures stable average block times. \cite{2017-DM-et-al} proposes a difficulty adjustment algorithm for blockchain to disincentivize miners from coin-hopping attacks, in which a malicious miner increases his mining profits while at the same time increasing the average delay between blocks. The transaction rate problem has also been studied from the agent's perspective, e.g., agents optimizing their contribution of computational power to the mining process~\cite{2020:EB-et-al}. For DAG-based distributed ledgers,~\cite{2019-LV-et-al} proposed an adaptive rate control scheme for Tangle. Their scheme increases or decreases the difficulty level of the PoW depending on the historical transaction rate of an agent. \cite{2021:MJ-et-al} uses a utility maximization approach to transaction rate control in Tangle using a suitable choice of network performance metric. Their model assumes that the computational power of the agent is known to the transaction rate regulator. \cite{2021:AC-et-al} borrows an idea from wireless networks to control the transaction rate using an access control scheme. Congestion control has also been studied in the context of wireless communication. For example, \cite{2001:XL-et-al}, \cite{2006:AF-et-al} discuss optimal resource sharing among multiple users for a high quality of service. To the best of our knowledge, a PAP based approach to studying the transaction rate control problem in Tangle has not been explored in the literature. The PAP framework allows us to model the information asymmetry between the distributed ledger's users and the transaction rate controller. It yields a tractable mixed-integer optimization problem that can be decomposed into multiple linear programs. The PAP also allows us to analyze the structure of the decision variables. This is beneficial in reducing the dimension of the search space and decreasing the computation cost. \subsection*{Organization and Main Results} Sec.~\ref{sec:problem-statement} describes the Tangle protocol and the PAP approach to solving the transaction rate control problem in Tangle. The PAP is a mixed-integer program: the PoW difficulty level takes values in a finite set, whereas the WoT takes values in a subset of the real numbers. We show that for a fixed choice of the PoW difficulty level, the PAP is a linear program. This facilitates efficient computation of the optimal solution using standard linear program solvers.
Sec.~\ref{sec:structural-results} exploits the structure of the PAP to characterize the decision variables. Our first result shows that the optimal PoW difficulty level increases with an increase in computational power. The second result shows that the optimal WoT assigned to the agent increases with computational power. The third result shows that the optimal PoW difficulty level increases with the number of agents. These results can be exploited to reduce the search space for the transaction rate control problem and decrease the computation cost. Sec.~\ref{sec:simulations} illustrates the application of the PAP to the transaction rate control problem using numerical examples. We also apply the structural results from Sec.~\ref{sec:structural-results} to reduce the dimension of the search space of the decision variables. Finally, we incorporate the transaction rate control mechanism in Tangle and simulate its dynamics. This yields insight into the impact of the transaction rate controller on the average approval time of a tip transaction. \section{Transaction Rate Control Problem in Tangle} \label{sec:problem-statement} \begin{figure} \begin{center} \begin{tikzpicture} \tikzstyle{arrow} = [draw, -latex'] \tikzstyle{user} = [draw, rectangle, text width=1.5cm, align=center]; \tikzstyle{pow} = [draw, rectangle, text width=2cm, align=center] \node (ledger) at (-1.2,0) [draw, circle, text width=1.5cm, align=center] {Tangle}; \node (controller) at (2.5,0) [draw, rectangle, text width = 2cm, align=center] {Transaction Rate Controller}; \draw[arrow] (ledger) -- node[above] {$(p(x))_{x\in X}$} node[below]{$x,N$} (controller); \foreach \x in {1,2} { \node (pow\x) at (2.5,6-4*\x) [pow] {$w(x_\x),d(x_\x)$}; \node (user\x) at (5,6-4*\x) [user] {Type $x_\x$ Agents}; \draw [arrow] (user\x) -- (pow\x); \draw [arrow] (pow\x) -|(ledger); \draw [arrow] (controller) -- (pow\x); } \end{tikzpicture} \end{center} \caption{Block diagram for the control of the transaction rate in Tangle. For simplicity, we consider the case when the number of different agent types $|X|$ is equal to $2$. The transaction rate controller block receives the fractions of the different types of agents $(p(x))_{x\in X}$, the types $x$, and the total number of agents $N$ from Tangle as input. It then assigns the PoW difficulty level $d(x)$ to each agent depending on its type $x\in X$. To compensate for the PoW, new transactions created by the agent are given a weight $w(x)$. New transactions are part of the set of tip transactions. Having a higher weight increases the chance of a tip transaction being selected for approval by other transactions.} \label{fig:block_diagram} \end{figure} \input{tangle_example_figure} This section describes the PAP formulation for controlling the rate of new transactions in Tangle. Sec.~\ref{sec:tangle-protocol} describes the Tangle protocol, and Sec.~\ref{sec:pap} describes the PAP formulation for the transaction rate control problem in Tangle. \subsection{Tangle protocol} \label{sec:tangle-protocol} Tangle can be abstracted as a time evolving DAG. Fig.~\ref{fig:tangle-evolution} shows an example of the time evolution of Tangle. Consider a time evolving directed graph~$G_t(V_t,E_t)$ representing Tangle at time~$t$, where $t\in\mathbb{Z}^+$ denotes discrete time. Each node in the graph corresponds to a transaction/record stored on Tangle. At $t=0$, $G_0(V_0,E_0)$ denotes the \textit{genesis} graph. We assume $|V_0|=1, |E_0|=0$. At each discrete time instant $t$, new transactions are added in Tangle by agents (IoT devices).
In order to join Tangle, each new transaction at time $t$ chooses two \textit{tip transactions} from $G_{t-1}(V_{t-1},E_{t-1})$ randomly, and \textit{approves} them. The two tip nodes are selected independently with repetition. Here, a tip transaction means a transaction with no incoming edges. Approving a tip transaction in Tangle means verifying that the transaction is valid, i.e., there is no double-spending. If the two randomly chosen transactions are valid, the new transaction forms a directed edge to each of the randomly chosen tip transactions. We will assume that chosen tip transactions are always valid since handling double-spending attacks is beyond the scope of this study. To summarize, every new transaction forms outgoing edges from itself to two randomly chosen tip transactions to join Tangle. At the time of adding a new transaction, the agents are also required to show PoW by solving a hash puzzle. This is to prevent spamming\footnote{Solving the hash puzzle for PoW takes finite time. Hence, an agent's rate of transaction is restricted through PoW.} and also to ensure that tampering\footnote{To tamper with a transaction, a malicious entity would need to re-solve the hash puzzle for that transaction and also for all future transactions that approve it. This is because the solution of the hash puzzle of a transaction depends on the hashes of the previous transactions.} with a transaction in Tangle is difficult~\cite{2017:FH-et-al}. Based on the PoW difficulty level, the transaction is assigned a weight. During the random selection of the two tip transactions, the probability of choosing a particular tip transaction is directly proportional to its weight. Hence, agents optimally choose the PoW difficulty level based on their preference for weight. In the current implementation of Tangle, the relation between the WoT and its PoW difficulty level is fixed. We propose a modified mechanism to assign the PoW difficulty level and the corresponding WoT based on the computational power of an agent. The mechanism is truth-telling, i.e., the agents truthfully reveal their computational power to the principal. \textit{Remarks: }Approving tip transactions takes finite time and leads to delay, i.e., a transaction entering Tangle at time $t$ is available as a tip transaction at time $t+h,\;h\geq 1$. In this paper, we assume a constant delay $h=1$ for simplicity. \subsection{PAP approach to transaction rate control} \label{sec:pap} Our PAP formulation of transaction rate control in Tangle is constructed as follows: \begin{align} \text{PAP}&=\begin{cases}\text{Principal} &=\text{Transaction rate controller}\\ \text{Agents}&=\text{Users (devices) adding}\\ &\text{\hspace{0.25cm} transactions in Tangle} \end{cases} \end{align} Consider Tangle where multiple agents (IoT devices) add new transactions into the ledger as shown in Fig.~\ref{fig:block_diagram}. The agents are heterogeneous, i.e., agents have different computational power. Let $x\in X:=\{x_1,x_2,\ldots,x_n\}$ denote the computational power of an agent. Let $N$ denote the number of agents, and let $p(x)$ denote the fraction of agents having computational power $x$. In order to add a new transaction in Tangle, agents have to satisfy the PoW requirement by solving a hash puzzle: search for a nonce~\cite{2008:SN} that results in a hash code starting with a desired number of zeros. The more difficult the puzzle, the longer it takes for an agent to solve it. 
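For concreteness, the nonce search described above can be sketched in a few lines of code. The following is our own illustration, not part of the Tangle specification: the use of SHA-256 and the encoding of the difficulty level as the number of leading zero hex digits are assumptions made only for this example.

\begin{verbatim}
import hashlib
import itertools

def solve_pow(payload: bytes, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(payload + nonce)
    starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce

# Each additional zero multiplies the expected number of trials by 16,
# so the time to solve the puzzle (and hence the transaction rate)
# depends exponentially on the difficulty level, as in the model below.
for difficulty in range(1, 5):
    print(difficulty, solve_pow(b"transaction-data", difficulty))
\end{verbatim}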
Hence, to control the rate of newly added transactions, the principal (transaction rate controller) adjusts the PoW difficulty level for each agent. Let $d(x)\in D:=\{1,2,\ldots,m\}$ denote the PoW difficulty level assigned to an agent with computational power $x$. We define the PoW difficulty vector $d\in D^{n}$ as: \begin{align} \label{eq:effort-vector} d = [d(x_1), d(x_2), \ldots, d(x_n)] \end{align} The PoW difficulty level $d(x)$ corresponds to the number of zeros required at the beginning of the hash code. A higher difficulty decreases the rate at which an agent can add new transactions into the ledger. To compensate for the agent's PoW, the principal assigns a weight $w(x)\in W:=[1,\infty)$ to the transactions added by an agent with computational power~$x$. Every new transaction has to approve two existing tip transactions (transactions with no incoming edges) so as to join Tangle. In the random tip selection strategy~\cite{2018:SP}, the two tip transactions for approval are chosen randomly with a probability proportional to the weight of the tip transaction. Hence, the higher the weight of a tip transaction, the faster it gets selected for approval by others on average. \subsection*{Utility function for IoT devices (agents)} We consider the following general utility function for an agent with computational power $x$: \begin{align} \label{eq:agent-utility-general} u(w(x),d(x),x) := h\bracketRound{w(x)-g\bracketRound{d(x),x}} \end{align} Here, $h(\cdot)$ is an increasing, concave function. The concavity assumption on $h(\cdot)$ models diminishing returns in utility. If the WoT $w(x)$ is large, the tip transaction gets quickly approved by new transactions. Hence, agents prefer a higher WoT $w(x)$. Here, the increasing function $g(\cdot)$ models the associated cost for an agent to satisfy the PoW requirement. If the PoW difficulty level is high, the agent has to spend large computational power to satisfy the PoW requirement. Hence, agents prefer a low effort $d(x)$. Moreover, we assume that the marginal cost $g_e(\cdot)$ (the partial derivative of $g$ with respect to the effort) is decreasing in $x$. This models the fact that the marginal cost due to an increase in the PoW difficulty level decreases with the computational power $x$. To demonstrate our model, we work with the specific choice of utility function~\eqref{eq:agent-utility-specific} for the agents: \begin{align} \label{eq:agent-utility-specific} u(w(x),d(x),x) := \log\bracketRound{\beta w({x})-\frac{\exp(d({x}))}{x}} \end{align} Here, $\beta w(x)$ models the agents' preference for a higher weight and $\frac{\exp(d({x}))}{x}$ models their preference for a low effort $d(x)$. The rate of new transactions $\frac{x}{\exp(d(x))}$ is an increasing function of the computational power $x$; it decreases exponentially with the PoW difficulty level $d(x)$. This is because the PoW difficulty $d(x)$ corresponds to a search for a hash code starting with $d(x)$ zeros. Assuming each hash code is equally probable, the probability of finding a hash code starting with $d(x)$ zeros decreases exponentially with $d(x)$. Hence, the cost is an exponential function of $d(x)$. \subsection*{Formulation of a PAP for transaction rate control} In our formulation, we assume that the distribution of the computational power of the agents is known to the principal, but the computational power of the agents is unobserved\footnote{The computational power of an agent is typically estimated by the number of transactions added by an agent in the recent history~\cite{2020:SP-et-al}. For IoT applications, it may not be possible to estimate the computational power. 
This is because IoT devices are not dedicated miners. Hence, they need not add new transactions at the maximum possible rate; IoT devices could be switching between multiple tasks or could be in a standby mode. Our approach is for the transaction rate controller (principal) to incentivize agents to truthfully report their computational power. Truth-telling is ensured through the incentive constraints~\eqref{eq:pap-specific-incentive-constraint} of the PAP.} by the principal (transaction rate controller). Therefore, the principal has to design a truth-telling mechanism that maximizes its own utility. To control the rate of new transactions, the principal solves the PAP~\eqref{eq:pap-specific} to assign the PoW difficulty level~$d(x)$ to different agents, and the WoT~$w(x)$ of the transactions they add into Tangle. The incentive constraints ensure that agents truthfully choose the PoW assigned for their computational power~$x$. The PAP for transaction rate control is the following constrained optimization problem: \begin{subequations} \label{eq:pap-specific} \begin{align} \label{eq:pap-specific-objective} &\min_{\substack{d(x)\in D,\\w(x)\in W,\\\forall x\in X,\\ w(x_1)=1}} \sum_{x\in X}p(x)\bracketSquare{N\,\rate{x}{x} +\alpha w(x)}\\ \label{eq:pap-specific-incentive-constraint} \text{s.t.}\quad &x=\arg\max_{{\bar{x}\in X}} \log\bracketRound{\beta w(\bar{x})-\frac{1}{\rate{x}{\bar{x}}}},\,\forall x\in X\\ \label{eq:pap-specific-participation-constraint} &\log\bracketRound{\beta w(x)-\frac{1}{\rate{x}{x}}}\geq u_0,\forall x\in X\\ \intertext{where,} &\rate{x}{\bar{x}}:=\frac{x}{\exp(d(\bar{x}))} \end{align} \end{subequations} Here, the non-negative parameters $\alpha,\beta\in\mathbb{R}^+$ trade off PoW with WoT for the principal (rate controller) and the agents (users adding transactions), respectively. The objective function~\eqref{eq:pap-specific-objective} ensures a trade-off between the rate of new transactions and the WoT assigned to different agents. $\rate{x}{\bar{x}}$ denotes the rate at which an agent with computational power $x$ can add new transactions if it falsely claims its computational power to be $\bar{x}$. The first term $N\rate{x}{x}$ controls the rate of new transactions by adjusting the PoW difficulty level for each agent. This assumes that the agents are truthful; truth-telling is ensured through the incentive constraint~\eqref{eq:pap-specific-incentive-constraint} (discussed later). The rate of new transactions is directly proportional to the number of agents $N$ and the computational power $x$ available to them. It is inversely proportional to the exponential of the PoW difficulty level $d(x)$. This is because the PoW difficulty $d(x)$ corresponds to a search for a hash code starting with $d(x)$ zeros. Assuming each hash code is equally probable, the probability of finding such a hash code decreases exponentially with $d(x)$, so the rate of new transactions of an agent with computational power $x$ is proportional to $\frac{x}{\exp(d(x))}$. The second term $\alpha w(x)$ ensures that agents are assigned similar WoT. Similar weights ensure that each tip transaction has a similar chance of being selected for approval by a new transaction. Hence, $w(x)$ affects the average time before a tip transaction gets approved. As the importance of WoT is only relative, we normalize the WoT with respect to the WoT of the agent with the smallest computational power. This is achieved by adding the constraint $w(x_1)=1$. 
The participation constraint~\eqref{eq:pap-specific-participation-constraint} guarantees a base utility level for all the agents; otherwise, agents would opt out of the distributed ledger, i.e., they would not use Tangle to store their transactions. \subsection*{Structure of Transaction Rate Control Problem} The PAP~\eqref{eq:pap-specific} is a mixed-integer optimization problem. The principal first solves the PAP~\eqref{eq:pap-specific} to obtain the optimal WoT $w^*(x)$ for each possible choice of effort $d\in D^{n}$. For a fixed effort $d$, the PAP~\eqref{eq:pap-specific} is a linear program and can be solved efficiently using standard solvers. \begin{theorem} \label{thm:lin-opt} For a fixed $d\in D^{n}$, the PAP~\eqref{eq:pap-specific} for the transaction rate control is equivalent to a linear programming problem. \end{theorem} \begin{proof} Proof is in Appendix~\ref{proof:lin-opt}. \end{proof} Theorem~\ref{thm:lin-opt} allows the principal to convert the transaction rate control problem into multiple linear programs. The principal then chooses the optimal PoW difficulty level $d^*\in D^{n}$ that maximizes its utility. \subsection*{Truth-telling and implications for Tangle} The incentive constraint~\eqref{eq:pap-specific-incentive-constraint} ensures that the mechanism is truth-telling\footnote{Our proposed transaction rate control mechanism does not prevent agents from colluding, i.e., it may be advantageous for two agents to combine their computational power and act as a single agent. Preventing malicious collusion of agents is a subject of future work.}. This is achieved by maximizing the utility of each agent when it performs the PoW assigned for its computational power. If~\eqref{eq:pap-specific-incentive-constraint} is omitted, agents can increase their utility by choosing a PoW difficulty level assigned for a different computational power. In such a case, the actual transaction rate can exceed the optimal transaction rate computed by the principal. This can lead to network congestion and delay in broadcasting new transactions among all agents. Therefore, \eqref{eq:pap-specific-incentive-constraint} ensures that an agent with computational power $x$ will be worse off in terms of its preference for PoW and WoT if it does not tell the truth, i.e., $(d(x),w(x))$ yields better utility than $(d(\bar{x}),w(\bar{x}))$. \textit{Summary}. We have formulated the transaction rate control problem in Tangle as the PAP~\eqref{eq:pap-specific}; it is a mixed-integer optimization problem. In Sec.\ref{sec:structural-results}, we will exploit the structure of the PAP to formulate a mixed-integer optimization problem of smaller dimension. \section{Structural Analysis of Transaction Rate Control Problem} \label{sec:structural-results} In the previous section, we formulated the PAP~\eqref{eq:pap-specific} for the transaction rate control problem in Tangle. We now present three structural results on the optimal solution of the PAP~\eqref{eq:pap-specific}. These structural results can be used to decrease the computation cost of solving the PAP~\eqref{eq:pap-specific}. Our first result deals with the structure of the optimal PoW difficulty level $d^*(x)$ assigned to an agent with computational power $x$. We show that the optimal PoW difficulty level $d^*(x)$ is non-decreasing in $x$. Moreover, our result holds independently of the choice of the objective function~\eqref{eq:pap-specific-objective}. 
\begin{theorem} \label{thm:effort-monotone} The optimal PoW difficulty level $d^*(x)$ assigned to an agent with computational power $x$ by the PAP~\eqref{eq:pap-specific} to control the transaction rate is non-decreasing in $x$. \end{theorem} \begin{proof} Proof is in Appendix~\ref{proof:effort-monotone}. \end{proof} The formulation in Sec.\ref{sec:problem-statement} required the principal (transaction rate controller) to solve $m^{n}$ linear programs, one for each possible value of the PoW difficulty vector $d\in D^{n}$ (defined in \eqref{eq:effort-vector}). By using Theorem~\ref{thm:effort-monotone}, we can significantly reduce the number of linear programs to be solved by the principal. Specifically, Theorem~\ref{thm:effort-monotone} ensures that the principal only has to solve the linear programs obtained by fixing, in the PAP~\eqref{eq:pap-specific}, PoW difficulty vectors~\eqref{eq:effort-vector} that satisfy $d(x_i)\leq d(x_j),\forall i<j$. Our second result deals with the structure of the WoT $w^*(x)$ assigned to an agent with computational power $x$. We show that $w^*(x)$ is non-decreasing in $x$. Moreover, our result holds independently of the choice of the objective function~\eqref{eq:pap-specific-objective}. \begin{theorem} \label{thm:weight-monotone} The optimal WoT $w^*(x)$ assigned to an agent with computational power $x$ by the PAP~\eqref{eq:pap-specific} to control the transaction rate is non-decreasing in $x$. \end{theorem} \begin{proof} Proof is in Appendix~\ref{proof:weight-monotone}. \end{proof} Theorem~\ref{thm:weight-monotone} can be used to parametrize the WoT $w(x)$ in the PAP~\eqref{eq:pap-specific}. For example, we can solve for an optimal $w^*(x)$ within the class of increasing affine functions of $x$. This decreases the dimension of the search space from $\mathbb{R}^{n}$ to $\mathbb{R}^2$ and provides a constrained optimal solution (constrained to affine increasing functions) at a reduced computation cost. Our third structural result uses Topkis' monotonicity theorem~\cite{1998:DT} to show that the optimal PoW difficulty level increases with the number of agents. \begin{theorem} \label{thm:pow-mon-stat} The optimal PoW difficulty level $d^*(x;N)$ assigned to an agent with computational power $x$ by the PAP~\eqref{eq:pap-specific} to control the transaction rate is non-decreasing in the number of agents $N$. \end{theorem} \begin{proof} Proof is in Appendix~\ref{proof:pow-mon-stat} \end{proof} Here, $d(x;N)$ denotes the PoW difficulty level when the number of agents is $N$. Theorem~\ref{thm:pow-mon-stat} can be used to parametrize the optimal $d(x;N)$ within a class of increasing functions of $N$; this yields a constrained optimal $d(x;N)$. To summarize, we presented three structural results on the solution of the transaction rate control problem~\eqref{eq:pap-specific}. The first result guarantees the monotonicity of the optimal PoW difficulty level $d^*(x)$ with respect to computational power. It helps in reducing the number of linear programs that the principal has to solve for the rate control problem~\eqref{eq:pap-specific}. The second result guarantees the monotonicity of the optimal WoT $w^*(x)$ with respect to computational power. This allows the principal to parametrize $w(x)$ within a class of increasing functions and obtain a constrained optimal solution at a low computation cost. The third result guarantees the monotonicity of the optimal PoW with respect to the number of agents. This allows parametrization of the optimal PoW difficulty level within a class of increasing functions of the number of agents. A sketch of how these results are used computationally is given below. 
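The sketch below is our own illustrative implementation of this procedure (the parameter values are toy values chosen only so that the instance is feasible; they are not the values of Table~\ref{tab:model-parameters}). Following Theorem~\ref{thm:lin-opt}, each fixed difficulty vector gives a linear program in the weights, solved here with \texttt{scipy.optimize.linprog}; following Theorem~\ref{thm:effort-monotone}, only non-decreasing difficulty vectors are enumerated.

\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

X = np.array([1.0, 3.0, 10.0])   # computational powers x
D = range(1, 13)                 # admissible PoW difficulty levels
p = np.array([1/3, 1/3, 1/3])    # fraction of agents of each type
N, alpha, beta, u0 = 100.0, 0.1, 80.0, 1.0
n = len(X)

def solve_lp(d):
    """Theorem 1: for a fixed difficulty vector d, the PAP in the
    weights w is a linear program (the rate term is constant in w)."""
    d = np.asarray(d, dtype=float)
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(n):       # incentive constraints (linearized)
            if i != j:
                row = np.zeros(n)
                row[i], row[j] = -beta, beta
                A_ub.append(row)
                b_ub.append((np.exp(d[j]) - np.exp(d[i])) / X[i])
        row = np.zeros(n)        # participation constraint (linearized)
        row[i] = -beta
        A_ub.append(row)
        b_ub.append(-np.exp(u0) - np.exp(d[i]) / X[i])
    A_eq = np.zeros((1, n)); A_eq[0, 0] = 1.0   # normalization w(x_1)=1
    res = linprog(alpha * p, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(1.0, None)] * n)
    if not res.success:
        return None, np.inf
    return res.x, np.sum(p * (N * X / np.exp(d) + alpha * res.x))

# Theorem 2: it suffices to enumerate non-decreasing vectors d.
best = min((solve_lp(d) + (d,)
            for d in itertools.combinations_with_replacement(D, n)),
           key=lambda t: t[1])
print("optimal w:", best[0], "objective:", best[1], "d:", best[2])
\end{verbatim}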
\section{Numerical Results. Transaction Rate Control in Tangle} \label{sec:simulations} This section illustrates our proposed transaction rate control mechanism in Tangle via numerical examples. We first specify the model parameters and then solve the transaction rate control problem~\eqref{eq:pap-specific}. Finally, we utilize the solution of the PAP~\eqref{eq:pap-specific} to simulate the dynamics of an actual Tangle and study the impact of transaction rate control on the average approval time of a tip transaction. The main takeaway from the simulations below is that agents with lower computational power are assigned lower PoW difficulty levels at the cost of a larger average approval time of their transactions. We begin with a simulation of the PAP~\eqref{eq:pap-specific} to compute the optimal PoW difficulty level and WoT for the agents. The model parameters are listed in Table~\ref{tab:model-parameters}. We solve the PAP~\eqref{eq:pap-specific} for four different values of the number of agents $N$. As the PoW difficulty level affects the objective function~\eqref{eq:pap-specific-objective} exponentially, we simulate for $N=\{1,10,100,1000\}$ to observe a noticeable change in the PoW difficulty level. The optimal PoW difficulty level $d(x)$ for each agent vs.\ the number of agents $N$ is plotted in Fig.~\ref{fig:pow}. For a fixed value of $N$, $d(x)$ increases with $x$ (Theorem~\ref{thm:effort-monotone}). As $N$ increases, $d(x)$ increases for all the agents since the transaction rate is directly proportional to $N$. Also, $d(x)$ is a concave function of $N$, as the difficulty of the hash puzzle increases exponentially with $d(x)$. Therefore, the marginal increase in the PoW difficulty level decreases with $N$. This implies that the transaction rate control does not impose a substantial increase in computation cost on the agents as the application size (number of agents) grows. Therefore, the transaction rate control mechanism~\eqref{eq:pap-specific} is suitable for IoT devices. \renewcommand{\arraystretch}{1.5} \begin{table} \centering \caption{Simulation Parameters for the Transaction Rate Control Problem~\eqref{eq:pap-specific}} \begin{tabular}{r|r|l} \hline \textbf{Parameters} & \textbf{Eq.} & \textbf{Value}\\ \hline $X$ & \eqref{eq:pap-specific-objective} & $\{1, 3, 10\}$\\ $D$ & \eqref{eq:pap-specific-objective} & $\{1,2,\ldots,12\}$\\ $p=(p(x))_{x\in X}$ & \eqref{eq:pap-specific-objective} & $[1/3\,\, 1/3\,\, 1/3]$\\ $\alpha$ & \eqref{eq:pap-specific-objective} & 0.1\\ $\beta$ & \eqref{eq:pap-specific-incentive-constraint} & 80\\ $u_0$ & \eqref{eq:pap-specific-participation-constraint} & 10\\ \hline \end{tabular} \label{tab:model-parameters} \end{table} \begin{figure} \centering \includegraphics[scale=0.4]{pow.eps} \caption{PoW difficulty level $d(x)$ vs.\ number of agents $N$. As $N$ increases, $d(x)$ increases for all the agents, but the marginal increase in the PoW difficulty level decreases with $N$. So, the mechanism does not impose a substantial increase in computation cost on the agents as the number of agents grows.} \label{fig:pow} \end{figure} Our next simulation evaluates the WoT $w(x)$ assigned to an agent. Fig.~\ref{fig:weight} displays the variation of the WoT $w(x)$ for each agent vs.\ the number of agents $N$. We simulate the PAP~\eqref{eq:pap-specific} for $N=\{1,10,100,1000\}$. The simulation shows that $w(x)$ increases with $x$ (Theorem~\ref{thm:weight-monotone}). Moreover, the WoT $w(x)$ increases with $N$ to compensate for the increase in the PoW difficulty level. The marginal increase in WoT decreases with $N$. 
Hence, the transaction control mechanism~\eqref{eq:pap-specific} ensures that the average approval time for agents with small computational power does not degrade rapidly with the number of agents. \begin{figure} \centering \includegraphics[scale=0.4]{weight.eps} \caption{WoT $w(x)$ vs.\ number of agents $N$. $w(x)$ increases with $N$ to compensate for the increasing PoW difficulty level. The marginal increase in WoT decreases with $N$. Hence, the mechanism ensures that the relative WoT of agents with small computational power (with respect to the WoT of agents with high computational power) does not degrade rapidly with the number of agents.} \label{fig:weight} \end{figure} We now use the optimal PoW difficulty level $d^*(x)$ and the optimal WoT $w^*(x)$, obtained from~\eqref{eq:pap-specific}, to simulate the dynamics of Tangle. The Tangle protocol and its evolution have been described in Sec.~\ref{sec:tangle-protocol}. At each time $t$, new transactions are added by each agent at a rate that depends on their computational power and PoW difficulty level. Each new transaction chooses two tip transactions randomly for approval. As different transactions have different weights, we use the accept-reject method~\cite{2016:VK} to select tip transactions non-uniformly during simulations. An important parameter associated with the Tangle dynamics is the average approval time of transactions. The average approval time of a transaction is defined as the time difference between the time when a transaction is added to the ledger and the time when it gets approved by a new transaction. The average approval time of a transaction increases with the WoT and with the transaction rate per agent\footnote{As the number of agents increases, so does the number of tip transactions; therefore, the approval time is an increasing function of the transaction rate divided by the number of agents.}. The probability that a tip transaction is selected for approval at time $t$ is proportional to the WoT. Therefore, the higher the weight of a transaction, the higher its chance of being selected for approval. If the PoW difficulty level increases, then the transaction rate decreases. Consequently, the number of tip transactions selected for approval also decreases. This leads to an increase in the average approval time. \begin{figure} \centering \includegraphics[scale=0.4]{wait_time.eps} \caption{Average approval time of transactions vs.\ number of agents $N$. The agents that are assigned a more difficult PoW wait less time on average for approval of their tip transactions. Therefore, the transaction rate control mechanism rewards agents for completing difficult PoW with a smaller average approval time of transactions.} \label{fig:wait_time} \end{figure} Fig.~\ref{fig:wait_time} plots the average approval time of transactions for each agent vs.\ the number of agents $N$. As $w(x_1)$ and $d(x_1)$ remain unchanged with $N$ (see Fig.~\ref{fig:pow} and Fig.~\ref{fig:weight}), the average approval time for the agent with the lowest computational power increases with $N$. This is because both its relative WoT (with respect to other agents) and its transaction rate per agent decrease. The agents that are assigned a more difficult PoW wait less time on average for approval of their tip transactions than the agents that are assigned an easier PoW. 
As observed in Fig.~\ref{fig:wait_time}, the average approval time for agents with higher computational power is almost constant because an increase in their relative WoT is offset by a decrease in their transaction rate per agent. To summarize, we simulated the PAP~\eqref{eq:pap-specific} for controlling the transaction rate in Tangle. As the PAP~\eqref{eq:pap-specific} is a mixed-integer program, we obtained the optimal solution by solving a set of linear programs. Theorem~\ref{thm:effort-monotone} was exploited to reduce the number of linear programs to be solved. We also simulated the dynamics of Tangle after incorporating the transaction rate control mechanism~\eqref{eq:pap-specific}. \section{Conclusion and Future Work} \label{sec:conclusion} Tangle is a distributed ledger technology suitable for IoT applications. Motivated by the design of strategic contracts under partial information in microeconomics, this paper has proposed a principal-agent problem (PAP) approach to transaction rate control in Tangle. The principal (transaction rate controller) designs a mechanism to assign the proof of work (PoW) difficulty level and the weight of transaction (WoT) to the agents (IoT devices). As the principal cannot observe the state (computational power) of the agents, the principal also has to incentivize the agents to be truthful. Our main results regarding the proposed transaction rate controller were the following: 1) the optimal PoW difficulty level increases with the computational power of the agent; 2) the optimal WoT increases with the computational power of the agent; 3) the optimal PoW difficulty level increases with the number of agents. We also simulated the dynamics of Tangle using the solution obtained from the transaction control mechanism. We observed that agents with higher computational power are assigned higher PoW difficulty levels but have a smaller average transaction approval time than the agents with lower computational power. Our transaction control mechanism ensures that the agents are truth-telling, but it does not prevent agents from colluding, i.e., it may be advantageous for multiple agents to combine their computational power and act as a single agent. It would, therefore, be interesting to study the rate control problem that disincentivizes agents from forming coalitions. This is essential from the security viewpoint as collusion of agents can lead to a majority attack on distributed ledgers. {\bf Acknowledgment}. This research was supported in part by the U.S.\ Army Research Office grant W911NF-21-1-0093, National Science Foundation grant CCF-2112457, and Air Force Office of Scientific Research grant FA9550-22-1-0016. \appendices \section{Proof of Theorem~\ref{thm:lin-opt} in Sec.~\ref{sec:pap}} \label{proof:lin-opt} \begin{proof} For a fixed $d\in D^{n}$, the objective~\eqref{eq:pap-specific-objective} is linear in $w(x)$. 
\begin{subequations} \begin{align*} \intertext{As $\log$ is an increasing function, the incentive constraints~\eqref{eq:pap-specific-incentive-constraint} can be reformulated as a set of linear inequalities in $w(x)$:} \beta w({x})-\frac{\exp(d({x}))}{x} &\geq \beta w(\bar{x})-\frac{\exp(d(\bar{x}))}{x},\;\forall x,\bar{x}\in X \intertext{Also, the participation constraint~\eqref{eq:pap-specific-participation-constraint} can be reformulated as a set of linear inequalities:} \beta w(x)-\frac{\exp(d(x))}{x}&\geq \exp(u_0),\forall x\in X \end{align*} \end{subequations} \end{proof} \section{Proof of Theorem~\ref{thm:effort-monotone} in Sec.\ref{sec:structural-results}} \label{proof:effort-monotone} \begin{proof} \begin{subequations} \begin{align*} \intertext{Consider computational powers $x_i, x_j\in X$ s.t. $x_i<x_j$. Constraints~\eqref{eq:pap-specific-incentive-constraint} imply} \beta w({x}_i)-\frac{\exp(d({x}_i))}{x_i}&\geq \beta w({x}_j)-\frac{\exp(d({x}_j))}{x_i}\\ \intertext{and} -\bracketRound{\beta w({x}_i)-\frac{\exp(d({x}_i))}{x_j}}&\geq -\bracketRound{\beta w({x}_j)-\frac{\exp(d({x}_j))}{x_j}}\\ \intertext{Adding the above two inequalities yields} \exp(d({x}_i))\bracketRound{\frac{1}{x_i}-\frac{1}{x_j}} &\leq \exp(d({x}_j))\bracketRound{\frac{1}{x_i}-\frac{1}{x_j}}\\ \intertext{Since $x_i<x_j$, the factor $\frac{1}{x_i}-\frac{1}{x_j}$ is positive; dividing by it gives $\exp(d(x_i))\leq\exp(d(x_j))$, and therefore} d(x_i) &\leq d(x_j) \end{align*} \end{subequations} \end{proof} \section{Proof of Theorem~\ref{thm:weight-monotone} in Sec.\ref{sec:structural-results}} \label{proof:weight-monotone} \begin{proof} \begin{subequations} \begin{align*} \intertext{Consider computational powers $x_i, x_j\in X$ s.t. $x_i<x_j$. Multiplying the corresponding incentive constraints~\eqref{eq:pap-specific-incentive-constraint} by $x_i$ and $x_j$, respectively, yields} \beta w({x}_i)x_i-\exp(d({x}_i))&\geq \beta w({x}_j)x_i-\exp(d({x}_j))\\ \intertext{and} -\bracketRound{\beta w({x}_i)x_j-\exp(d({x}_i))}&\geq -\bracketRound{\beta w({x}_j)x_j-\exp(d({x}_j))}\\ \intertext{Adding the above two inequalities yields} \beta w(x_i)(x_i-x_j) &\geq \beta w(x_j)(x_i-x_j)\\ \intertext{Since $x_i-x_j<0$, dividing by $\beta(x_i-x_j)$ reverses the inequality, so} w(x_i) &\leq w(x_j) \end{align*} \end{subequations} \end{proof} \section{Proof of Theorem~\ref{thm:pow-mon-stat} in Sec.\ref{sec:structural-results}} \label{proof:pow-mon-stat} \begin{proof} \begin{align*} &\frac{\partial^2}{\partial N\,\partial(d(x))}\bracketRound{-N\rate{x}{x}}=\frac{x}{\exp(d(x))}\geq 0,\,\forall x \end{align*} This implies that $-N\rate{x}{x}$ is a supermodular function of $(d(x),N)$~\cite{2005:RA}. Since minimizing the objective~\eqref{eq:pap-specific-objective} is equivalent to maximizing its negative, applying Topkis' monotonicity theorem~\cite{1998:DT} shows that the optimal PoW $d^*(x;N)$ for the transaction control problem~\eqref{eq:pap-specific} increases with~$N$. \end{proof} \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction. Main definitions. Statement of results.} The theory of shadowing studies the problem of closeness of approximate and exact trajectories of dynamical systems on unbounded time intervals. Since the notions of an approximate trajectory and closeness can be formalized in several ways, various shadowing properties are considered (cf. \cite{1,2}). A problem about classical shadowing properties can be informally formulated in the following way: is it true that any sufficiently precise approximate trajectory of a dynamical system is close to some exact trajectory? In the paper \cite{4} an inverse problem was considered for the first time. This problem can be informally formulated as follows: suppose we have a method (pseudomethod) that generates approximate trajectories (pseudotrajectories) of a dynamical system; is it true that any exact trajectory is close to some pseudotrajectory generated by the pseudomethod? In the present paper we consider discrete dynamical systems generated by $C^1$-diffeomorphisms of closed smooth manifolds. One of the main tasks of the theory of shadowing is to characterise the sets of all diffeomorphisms having some shadowing property, or the $C^1$-interiors of such sets. One has to consider the $C^1$-interiors of these sets because in most cases it is difficult or perhaps impossible to characterise the sets themselves in terms of the hyperbolic theory of dynamical systems. For many shadowing properties their $C^1$-interiors coincide with the set of structurally stable or $\Omega$-stable diffeomorphisms. K. Sakai (cf. \cite{5}) proved that the $C^1$-interior of the set of all diffeomorphisms having the standard shadowing property (also known as the pseudo-orbit tracing property) coincides with the set of structurally stable diffeomorphisms. However, recently S.Yu. Pilyugin and S.B. Tikhomirov showed \cite{3} that the Lipschitz shadowing property is equivalent to structural stability; and S.Yu. Pilyugin, S.B. Tikhomirov, and the author proved \cite{6} that the so-called Lipschitz periodic shadowing property is equivalent to $\Omega$-stability, and the $C^1$-interior of the set of all diffeomorphisms having the periodic shadowing property coincides with the set of $\Omega$-stable diffeomorphisms. Besides, recently S.Yu. Pilyugin, D.I. Todorov, and G.I. Wolfson (cf. \cite{7}) developed and improved the technique from the paper \cite{3} to prove the equivalence of the Lipschitz inverse shadowing property (for certain classes of pseudomethods) and structural stability. While studying how well exact trajectories are approximated by pseudotrajectories generated by pseudomethods, it is not necessary to consider all points of the phase space. It is possible to restrict the study to some important invariant subset of the phase space, e.g., to the set of all periodic points. The corresponding shadowing properties can be called inverse periodic shadowing properties (rigorous definitions are given below). The present paper is devoted to the study of such properties. First we give the main definitions, and then formulate our results. Let $M$ be a closed smooth manifold with Riemannian metric dist, and let $f$ be a diffeomorphism of the manifold $M$. We say that a sequence of continuous mappings $\{\Psi_k\}_{k\in\mathbb{Z}}$ is a $d$-pseudomethod if \begin{equation} \label{pseudo} \mbox{dist}(\Psi_k(x),f(x))\leq d\quad\textrm{for all } x\in M. 
\end{equation} We say that a sequence $\theta=\{x_k\}\subset M$ is a pseudotrajectory generated by a $d$-pseudomethod $\Psi=\{\Psi_k\}$ if \begin{equation} \label{pseudo2} x_{k+1}=\Psi_k(x_k)\quad\forall k\in\mathbb{Z}. \end{equation} We say that a diffeomorphism $f$ has the inverse periodic shadowing property InvPerSh if for any positive number $\epsilon$ there exists a positive number $d$ such that for any periodic point $p$ and for any $d$-pseudomethod $\Psi=\{\Psi_k\}$ there exists a pseudotrajectory $\theta=\{x_k\}$ generated by the $d$-pseudomethod $\Psi$ such that \begin{equation} \label{basis} \mbox{dist}(x_k,f^k(p))<\epsilon\quad\forall k\in\mathbb{Z}. \end{equation} If inequalities $(\ref{basis})$ hold, we say that the pseudomethod $\{\Psi_k\}$ $\epsilon$-shadows the point $p$, or that the point $p$ is $\epsilon$-shadowed by the pseudomethod $\{\Psi_k\}$. Let us define a Lipschitz version of InvPerSh. We say that a diffeomorphism $f$ has the Lipschitz inverse periodic shadowing property LipInvPerSh if there exist positive numbers $L$ and $d_0$ such that for any periodic point $p$ and for any $d$-pseudomethod $\Psi=\{\Psi_k\}$ with $d\leq d_0$ there exists a pseudotrajectory $\theta=\{x_k\}$ generated by the $d$-pseudomethod $\Psi$ such that $$\mbox{dist}(x_k,f^k(p))\leq Ld\quad\forall k\in\mathbb{Z}.$$ As usual, we denote by $\Omega S$ the set of all $\Omega$-stable diffeomorphisms. We denote by $\mbox{Int}^1(A)$ the $C^1$-interior of a set $A$ of diffeomorphisms. Finally, we denote by $\mbox{Per}(f)$ the set of all periodic points of a diffeomorphism $f$. Our main result is the following theorem: \textbf{Theorem.} \begin{enumerate} \item[1)] $\mbox{Int}^1(\mbox{InvPerSh})=\Omega S$; \item[2)] LipInvPerSh is equivalent to hyperbolicity of the set $\mbox{Cl}(\mbox{Per}(f))$; \item[3)] if we denote by LIPS the set of all diffeomorphisms that have LipInvPerSh and whose periodic points are dense in the nonwandering set, then LIPS coincides with the set of Axiom A diffeomorphisms. \end{enumerate} \textbf{Remark 1.} There are several ways of introducing pseudomethods. One of them was described above. Such pseudomethods are called pseudomethods of class $\Theta_s$. The mappings $\Psi_k$ from the definition of pseudomethods of class $\Theta_s$ are close to the diffeomorphism $f$. It is possible to consider pseudomethods defined in the following way. We say that a sequence $\{\Psi_k\}$ of continuous mappings is a $d$-pseudomethod (of class $\Theta_t$) if (instead of inequalities $(\ref{pseudo})$) the following inequalities hold: $$\mbox{dist}(\Psi_{k+1}(x),f(\Psi_k(x)))\leq d\quad\textrm{for all }x\in M,\ k\in\mathbb{Z},$$ and we say that a sequence $\{x_k\}$ is a pseudotrajectory generated by the pseudomethod $\{\Psi_k\}$ if $$x_k=\Psi_k(x_0)\quad\textrm{for all }k\in\mathbb{Z}.$$ Such pseudomethods are called pseudomethods of class $\Theta_t$. There are examples of pseudomethods of class $\Theta_s$ that do not belong to class $\Theta_t$ and vice versa. It is possible to introduce inverse periodic shadowing properties for pseudomethods of class $\Theta_t$ and to prove for them an analog of the Theorem. The proof will be similar; however, the pseudomethods should be constructed in a different way. In particular, for any fixed $k$ the mappings $\Psi_k$ should be constant. We do not describe here the process of construction of such constant mappings, as they can be easily constructed based on the pseudomethods of class $\Theta_s$, whose construction will be described in detail. 
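Before proceeding to the proof, let us illustrate the meaning of LipInvPerSh in the simplest hyperbolic situation. The following numerical sketch is our own toy example and is not a construction used in the proofs below: for the linear map $f(v)=Av$ on $\mathbb{R}^2$ with $A=\mbox{diag}(1/2,2)$ and the fixed point $p=0$, for any $d$-pseudomethod of the form $\Psi_k(v)=Av+c_k$ with $|c_k|\leq d$ one can write down explicitly a pseudotrajectory staying within $2d$ of the fixed point: the stable component is accumulated from the past, the unstable one from the future.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, K = 1e-3, 60                 # pseudomethod accuracy, time window
lam_s, lam_u = 0.5, 2.0         # stable and unstable multipliers
A = np.diag([lam_s, lam_u])

# A d-pseudomethod of class Theta_s: Psi_k(v) = A v + c_k with
# |c_k| <= d, so dist(Psi_k(v), f(v)) = |c_k| <= d for every v.
c = rng.uniform(-d, d, size=(2 * K, 2))   # c[i] is c_k for k = i - K

def x(k):
    """A pseudotrajectory staying near the fixed point 0: the stable
    component is summed over the past, the unstable over the future."""
    i = k + K
    xs = sum(lam_s ** j * c[i - 1 - j, 0] for j in range(i))
    xu = -sum(lam_u ** (-j - 1) * c[i + j, 1] for j in range(2 * K - i))
    return np.array([xs, xu])

# Check x_{k+1} = Psi_k(x_k) and the bound |x_k| <= 2d (so L = 2 here).
for k in range(-K, K - 1):
    assert np.allclose(x(k + 1), A @ x(k) + c[k + K])
    assert np.max(np.abs(x(k))) <= 2 * d
print("the pseudotrajectory stays within 2d =", 2 * d, "of 0")
\end{verbatim}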
\textbf{Scheme of the proof of the Theorem.} In essence, we use the strategy from the paper \cite{6}. 1) Denote by HP the set of all diffeomorphisms that do not have nonhyperbolic periodic points. It is proved in the paper \cite{9} that if the set $\mbox{Cl}(\mbox{Per}(f))$ is hyperbolic, then $f$ has LipInvPerSh (cf. Lemma 1 below). We use the result of Aoki and Hayashi (cf. \cite{10,11}) that states the equality $\mbox{Int}^1(\mbox{HP})=\Omega S$. Thus, in order to prove item 1) of the Theorem, it is enough to establish the inclusion $\mbox{Int}^1(\mbox{InvPerSh})\subset\mbox{HP}$. In order to get this inclusion, we $C^1$-slightly perturb a diffeomorphism with a nonhyperbolic periodic point so that it does not have InvPerSh. 2) Let us describe the scheme of the proof of item 2). First we prove that LipInvPerSh implies hyperbolicity of any periodic point, then we prove uniform hyperbolicity of the set of all periodic points, and finally we establish the fact that the closure of the set of all periodic points is a hyperbolic set. 3) In essence, item 2) implies item 3) because, by definition, Axiom A is equivalent to hyperbolicity of the nonwandering set and density of periodic points in the nonwandering set. \section{Technical remarks and the exponential map} The proof of the Theorem consists of several lemmas. The proofs of most of them use a known technique based on the exponential map, which allows one to transfer results from points in the manifold to tangent vectors and vice versa. We shall describe this technique in this section. Besides, it is convenient to introduce in this section the main notation, which will be used in the sequel. Let $M$ be a closed smooth manifold with Riemannian metric dist. Denote by $\exp:TM\mapsto M$ the standard exponential map and by $\exp_x$ its restriction to $T_xM$, the tangent space at the point $x$. Denote by $N(r,x)$ the $r$-neighborhood of the point $x$ in the manifold $M$ and by $B_T(r,y)$ the ball of radius $r$ centered at the point $y$ in the space $T_xM$. Besides, we denote by $B(r,A)$ the $r$-neighborhood of a set $A$ that is a subset of a Euclidean space. There exists a positive number $r<1$ such that for any point $x\in M$ the mapping $\exp_x$ is a diffeomorphism of the set $B_T(r,0)$ onto its image, and the mapping $\exp^{-1}_x$ is a diffeomorphism of the set $N(r,x)$ onto its image. Besides, we assume that the number $r$ is chosen so small that the following holds: \begin{equation} \label{prop1} \frac{\mbox{dist}(\exp_x(v),\exp_x(w))}{|v-w|}\leq 2\quad\mbox{for }v,w\in B_T(r,0)\subset T_xM,\ v\neq w; \end{equation} \begin{equation} \label{prop2} \frac{|\exp^{-1}_x(y)-\exp^{-1}_x(z)|}{\mbox{dist}(y,z)}\leq 2\quad\mbox{for }y,z\in N(r,x),\ y\neq z. \end{equation} We can always achieve these conditions because \begin{equation} \label{prop} D\exp_x(0) = \mbox{id}. \end{equation} Let $f$ be a diffeomorphism of the manifold $M$, let $p$ be a point of $M$, and let $p_k=f^k(p)$ and $A_k=Df(p_k)$ for all $k\in\mathbb{Z}$ (these notations will be used in the sequel). Consider the mappings \begin{equation} \label{3.2.1} F_k=\exp^{-1}_{p_{k+1}}\circ f\circ\exp_{p_k}:T_{p_k}M\mapsto T_{p_{k+1}}M. \end{equation} By the standard property $(\ref{prop})$ of the exponential map, $DF_k(0)=A_k$. We can always represent $F_k(v)$ in the following form: $$F_k(v)=A_kv+\phi_k(v),\quad\mbox{where }\frac{|\phi_k(v)|}{|v|}\longrightarrow0\ \mbox{as }|v|\rightarrow0.$$ We denote by $O(p,f)$ the trajectory of the point $p$ under the diffeomorphism~$f$, i.e., $O(p,f)=\{p_k=f^k(p)\mid k\in\mathbb{Z}\}$. 
Besides, we need the following auxiliary statement, which will be used for the construction of pseudomethods: \textbf{Proposition.} Let $f:B(b,O(p,f))\mapsto\mathbb{R}^n$ be a $C^1$-smooth map, let $p$ be a periodic point of the map $f$ of fundamental period $m$, and let the sets $B(b,p_1),\ldots, B(b,p_m)$ be disjoint. 1) Let $\epsilon<b/2$ be a small number. Assume that $f(x) = p_{k+1} + A_k(x-p_k)$ for $x\in B(b,p_k)$ ($1\leq k\leq m$), i.e., $f$ is linear. Let $d<\epsilon/2$ be an arbitrary sufficiently small number. Assume that we have constructed a continuous map $\psi: B(b,O(p,f))\mapsto\mathbb{R}^n$ such that \begin{equation} \label{cond1} |\psi(x) - A_k(x-p_k) - p_{k+1}|\leq d\quad \textrm{for all }x\in B(b,p_k),\ 1\leq k\leq m. \end{equation} Then there exists a map $\Psi:B(b,O(p,f))\mapsto\mathbb{R}^n$ such that $|\Psi(x) - f(x)|\leq d$ for all $x\in B(b,p_k)$, $1\leq k\leq m$, the mappings $\psi$ and $\Psi$ coincide on the set $B(\epsilon/2,O(p,f))$, and the maps $\Psi$ and $f$ coincide on the set $B(b,O(p,f))\backslash B(\epsilon,O(p,f))$. 2) Let $C$ be an arbitrarily large number, and let $d$ be an arbitrary sufficiently small number such that $Cd < b/2$. Let $$f(x) = p_{k+1} + A_k(x-p_k) + \phi_k(x)\quad\textrm{for }x\in B(b,p_k), 1\leq k\leq m,$$ where $|\phi_k(x)|\longrightarrow 0$ as $|x-p_k|\rightarrow0$. Assume that $d$ is so small that $$|\phi_k(x)|\leq d/2\quad\textrm{for }|x-p_k|\leq Cd.$$ Finally, assume that we have constructed a continuous map $\psi: B(b,O(p,f))\mapsto\mathbb{R}^n$ such that condition $(\ref{cond1})$ holds with $d/2$ instead of $d$. Then there exists a map $\Psi:B(b,O(p,f))\mapsto\mathbb{R}^n$ such that $|\Psi(x) - f(x)|\leq d$ for all $x\in B(b,O(p,f))$, the maps $\psi$ and $\Psi$ coincide on the set $B(Cd/2,O(p,f))$, and the maps $\Psi$ and $f$ coincide on the set $B(b,O(p,f))\backslash B(Cd,O(p,f))$. \textbf{Proof.} Let us start with item 1). Choose a smooth monotone function $\beta:[0,+\infty)\mapsto[0,1]$ such that $\beta(x) = 0$ for $x\leq\epsilon/2$ and $\beta(x) = 1$ for $x\geq\epsilon$. Define the map $\Psi$ by the following formula: $$\Psi(x) = (1-\beta(|x-p_k|))\psi(x) + \beta(|x-p_k|)(p_{k+1} + A_k(x - p_k))$$ for $x\in B(b,p_k)$, $1\leq k\leq m$. For $x\in B(b,p_k)$ we have $$|\Psi(x) - f(x)| = (1-\beta(|x-p_k|))\,|\psi(x) - f(x)|\leq (1-\beta(|x-p_k|))\,d \leq d.$$ Clearly, the map $\Psi$ is the desired one. Item 2) can be proved in a similar way. \textbf{Remark 2.} The restrictions on the number $d$ do not depend on the map~$\psi$ as long as condition $(\ref{cond1})$ holds. \section{Proof of the main result.} In the paper \cite{9} it is proved that if $A$ is a hyperbolic set, then there exist positive numbers $L$ and $d_0$ such that for any point $p\in A$, any number $d\leq d_0$, and any $d$-pseudomethod $\Psi=\{\Psi_k\}$ there exists a pseudotrajectory $\{x_k\}$ generated by the pseudomethod $\Psi$ such that the analog of relation $(\ref{basis})$ with $\epsilon=Ld$ holds. Thus, the following lemma holds: \textbf{Lemma 1.} If the set $\mbox{Cl}(\mbox{Per}(f))$ is hyperbolic, then $f$ has LipInvPerSh. \textbf{Corollary.} $\Omega S\subset\mbox{Int}^1(\mbox{InvPerSh})$. \textbf{Lemma 2.} $\mbox{Int}^1(\mbox{InvPerSh})\subset\Omega S$. \textbf{Proof.} By the result of Aoki and Hayashi (\cite{10,11}), $\mbox{Int}^1(\mbox{HP})=\Omega S$. That is why it is enough to prove that $\mbox{Int}^1(\mbox{InvPerSh})\subset\mbox{HP}$. Assume that a diffeomorphism $f\in \mbox{Int}^1(\mbox{InvPerSh})$ does not belong to HP. 
Thus, there exists a neighborhood $W$ of the diffeomorphism $f$ in the $C^1$-topology such that $W\subset\mbox{InvPerSh}$, and the diffeomorphism $f$ has a nonhyperbolic periodic point $p$ of fundamental period $m$, i.e., the operator $Df^m(p)$ has an eigenvalue $\lambda$ with $|\lambda|=1$. Without loss of generality, we assume that $\lambda$ is a purely complex number. The case of a real $\lambda$ can be treated in a similar way. First we $C^1$-slightly perturb the diffeomorphism $f$ to get a diffeomorphism $h$ with certain properties that is linear in a neighborhood of its periodic trajectory $p_1,\ldots,p_m$. There exists a number $a\in (0,r)$ (recall that the number $r$ was defined in section 2) and a diffeomorphism $h\in W$ such that $h(p_j)=p_{j+1}$; the point $p_j$ corresponds to 0 in the coordinates $v_j=(\rho_j\cos\theta_j,\rho_j\sin\theta_j,w_j)_j$ in the space $T_{p_j}M$; and if $$H_j=\exp^{-1}_{p_{j+1}}\circ h\circ\exp_{p_j}$$ and $|v_j|\leq a$, then for some real number $\chi$ and natural number $\nu$ such that $\cos\nu\chi=1$ the following holds: $$H_j(v_j)=A_jv_j=(r_j\rho_j\cos(\theta_j+\chi),r_j\rho_j\sin(\theta_j+\chi),B_jw_j)_{j+1}$$ (where $B_j$ is a matrix of size $(n-2)\times(n-2)$, $n=\dim M$, $B_{j+m}=B_j$) and $$r_0r_1\cdots r_{m-1}=1,\quad r_{j+m}=r_j.$$ Hereinafter we use the index $j$ after the brackets to emphasize that the vector is represented in the coordinates in the tangent space at the point $p_j$. Thus, the diffeomorphism $h$ is $C^1$-close to the diffeomorphism $f$, and the operator $Dh^m(p_0)$ has an eigenvalue $\lambda$ that is a root of unity of degree $\nu$ and corresponds to a Jordan block of dimension one. Choose a number $\bar{a}<a$ such that $B_T(\bar{a},0)_j\subset H_j^{-1}(B_T(a,0)_{j+1})$ for all $j$. We use the index $j$ after the brackets to emphasize that we work with a ball in the space $T_{p_j}M$. Put $$R=2\max(r_0,\ldots,r_{m-1}).$$ We assume that the numbers $a$ and $\bar{a}$ were chosen so small that the neighborhoods $\exp_{p_k}(B_T(\bar{a},0)_k)$ are disjoint for all $1\leq k\leq m$. Let $\epsilon_0 = \bar{a}/3$ and $\epsilon=\epsilon_0/10$, and let $d$ be an arbitrary sufficiently small number such that $m\nu d<\epsilon/3$. Define maps $\psi_k: \bigcup_{1\leq l\leq m}B_T(\bar{a},0)_l\mapsto \bigcup_{1\leq l\leq m}B_T(a,0)_l$ in the following way: $$\psi_{k+m\nu} = \psi_k\quad\textrm{for all }k\in\mathbb{Z};$$ $$\psi_k(y)=A_ky+(dr_0\cdots r_k(\cos (k+1)\chi)/(2R^m),dr_0\cdots r_k(\sin (k+1)\chi)/(2R^m),0)_{k+1}$$ for $0\leq k\leq m\nu - 1$, $y\in B_T(\bar{a},0)_k$; and $\psi_k(y) = H_l(y)$ for $y\in B_T(\bar{a},0)_l$, $l\neq k$. The maps $\psi_k$ can be considered as mappings from the disjoint union of $n$-dimensional balls in $\mathbb{R}^n$ to the disjoint union of larger $n$-dimensional balls in $\mathbb{R}^n$. Choose an arbitrary number $k\in\mathbb{Z}$. We observe that the maps $\psi_k$ satisfy condition $(\ref{cond1})$. Thus, in essence, all conditions of the Proposition from section 2 are satisfied. Since we can decrease $d$, we can assume that $d$ is the number from item 1) of the Proposition applied with $\epsilon_0$ (rather than $\epsilon$) in place of $\epsilon$, with the map $\psi_k$, and with $b=\bar{a}$. By Remark 2, the choice of $d$ does not depend on the map $\psi_k$ as long as condition $(\ref{cond1})$ holds. Consequently, the map $\psi_k$ can be extended to a map $\Phi_k$ that coincides with $\psi_k$ on the set $\bigcup_{1\leq l\leq m} B_T(\epsilon_0/2,0)_l$ and with the maps $H_l$ on the set $\bigcup_{1\leq l\leq m} B_T(\bar{a},0)_l \backslash\bigcup_{1\leq l\leq m} B_T(\epsilon_0,0)_l$. 
Put $\Psi_k(x) = \exp_{p_{l+1}}\circ \Phi_k\circ\exp^{-1}_{p_l}(x)$ for $x\in \exp_{p_l}(B_T(\bar{a},0)_l)$. If we set $\Psi_k = h$ on the complement of $\bigcup_{1\leq l\leq m}\exp_{p_l}B_T(\bar{a},0)_{l}$ in the manifold $M$, then $\Psi_k$ will be a continuous map defined on $M$. By property $(\ref{prop1})$, the maps $\{\Psi_k\}$ form a $2d$-pseudomethod. However, no pseudotrajectory generated by this $2d$-pseudomethod can $\epsilon$-shadow the point $p$. Indeed, suppose the contrary: consider a pseudotrajectory $\{y_k\}$ of a point $y$ defined by the analog of equalities $(\ref{pseudo2})$, and assume that this pseudotrajectory $\epsilon$-shadows the point $p$. Put $q=\exp_p^{-1}(y)=(q_1,q_2,q_3)$ (in the proof of Lemma 2 the first two components in such a representation have dimension 1). Put $q^k = \exp^{-1}_{p_{k}}y_k$ for all $k\in\mathbb{Z}$; then $q^{k+1} = \Phi_kq^k$. Note that, by property $(\ref{prop2})$, the points $q^k$ are $2\epsilon$-close to 0 in the coordinates in $T_{p_k}M$. Let us emphasize that $2\epsilon<\epsilon_0/2$, and the maps $\Phi_k$ coincide with the maps $\psi_k$ in the $\epsilon_0/2$-neighborhood of zero (in the coordinates in $T_{p_k}M$). Hence, $$q^{m\nu k}=A^{m\nu k}q+(km\nu d/(2R^m),0,0).$$ We denote by $pr_{1,2}$ the projection of a vector onto its first and second components. Since the first two components of $A^{m\nu k}q$ are obtained from those of $q$ by a rotation (recall that $r_0\cdots r_{m-1}=1$ and $\cos\nu\chi=1$), we have $|pr_{1,2}(A^{m\nu k}q)|=|pr_{1,2}q|$, and therefore \begin{equation} \label{mage} |pr_{1,2}q^{m\nu k}|\geq km\nu d/(2R^m)-|pr_{1,2}(A^{m\nu k}q)|=km\nu d/(2R^m) - |pr_{1,2}q|. \end{equation} By our assumptions, $|q^k|\leq2\epsilon$ for all $k\in\mathbb{Z}$. However, by estimates $(\ref{mage})$, the numbers $|pr_{1,2}q^{m\nu k}|$ will be larger than $3\epsilon$ for sufficiently large $k$. Hence, our assumption was wrong: no diffeomorphism from $\mbox{Int}^1(\mbox{InvPerSh})$ has a nonhyperbolic periodic point, i.e., $\mbox{Int}^1(\mbox{InvPerSh})\subset\mbox{HP}$, and the $C^1$-interior of InvPerSh is contained in $\Omega S$. \textbf{Lemma 3.} If a diffeomorphism $f$ has LipInvPerSh, then any periodic point is hyperbolic. \textbf{Proof.} Without loss of generality, we assume that the number $L$ from the definition of LipInvPerSh is natural. Let $p$ be a nonhyperbolic periodic point of the diffeomorphism $f$. We assume that $p$ is a fixed point in order to simplify the notation. The general case can be treated in a similar way. In the case of a fixed point, the map $(\ref{3.2.1})$ is represented in the following way: $$F(v)=\exp_p^{-1}\circ f\circ\exp_p(v)=Av+\phi(v),$$ and the matrix $A$ has an eigenvalue $\lambda$ with $|\lambda|=1$. We assume that $\lambda$ is a purely complex number. The case of a real $\lambda$ can be treated in a similar way. By a choice of coordinates, we may assume that the matrix $A$ is represented in the form $\mbox{diag}(H_1,H_2)$, where $$H_1=\left(\begin{matrix}Q&I&\cdots&\cdots& 0\\ 0&Q&I&\cdots& 0\\ \cdots&\cdots&\cdots&\cdots&\cdots\\ 0&0& \cdots&\cdots&Q \end{matrix}\right)$$ and $$Q=\left(\begin{matrix}\cos\theta&\sin\theta\\-\sin\theta&\cos\theta\end{matrix}\right).$$ When the new coordinates are chosen, the numbers $L,d_0,r$ may change (recall that the number $r$ was defined in section 2). For convenience, we denote the new constants by the same symbols. If $v$ is a two-dimensional vector, then $$|Qv|=|v|.$$ Let $2l$ be the dimension of the Jordan block $H_1$, and put $n=\dim M$. Let us introduce some notation. Let $v$ be an $n$-dimensional vector. We denote by $pr_{i,j}v$ the two-dimensional vector that consists of the $i$th and $j$th components of the vector $v$. Let $V$ be a matrix of size $2\times 2$, and let $W$ be a matrix of size $n\times n$. 
When we write $W = (0,V,0)^{(k-1,k)}$, we assume that the matrix $V$ occupies the positions $(k-1,k)\times (k-1,k)$ in the matrix $W$, and all other elements of the matrix $W$ are equal to zero. Choose a number $\bar{r}<r/2$ such that $B_T(\bar{r},0)\subset F^{-1}(B_T(r,0))$. Let $d$ be an arbitrary sufficiently small number such that $20Ld<\bar{r}/10$. Define the maps $\psi_k:B_T(\bar{r},0)\mapsto B_T(r,0)$ in the following way: $$\psi_k(y) = Ay + (d/2)(0,Q^kw,0)^{(2l-1,2l)},$$ where $w$ is a two-dimensional unit vector. Fix an arbitrary number $k$. The maps $\psi_k$ satisfy condition $(\ref{cond1})$. We assume that the number $d$ is less than the corresponding number from item 2) of the Proposition applied with $C=20L$, $b=\bar{r}$. Let us emphasize that, by Remark 2, the choice of $d$ does not depend on the index $k$, since the maps $\psi_k$ satisfy the analog of condition $(\ref{cond1})$. Let $\Phi_k$ be the analog of the map $\Psi_k$ constructed in item 2) of the Proposition. The map $\Phi_k$ coincides with $F$ on the set $B_T(\bar{r},0)\backslash B_T(10Ld,0)$. Hence, the map $\Psi_k = \exp_p\circ\Phi_k\circ\exp^{-1}_p$ can be extended to a continuous map of the manifold $M$. By property $(\ref{prop1})$, the maps $\{\Psi_k\}$ form a $2d$-pseudomethod. By construction, for $k>l$ $$ |\Phi_{k-1}\circ\ldots\circ\Phi_0(q)| = |A^kq + c_k^1d(0,Q^{k-l+1}w,0)^{(1,2)} + \ldots + $$ $$ + c_k^{l-1}d(0,Q^{k-1}w,0)^{(2l-3,2l-2)} + d(0,kQ^kw,0)^{(2l-1,2l)}|, $$ where $c_k^m$ are some positive numbers. Since $Q$ is an isometry, $|pr_{(2l-1,2l)}(A^kq)|=|pr_{(2l-1,2l)}q|$, and the following inequality holds: \begin{equation} \label{form} |pr_{(2l-1,2l)}(\Phi_{k-1}\circ\ldots\circ\Phi_0(q))|\geq kd - |pr_{(2l-1,2l)}(A^kq)| = kd - |pr_{(2l-1,2l)}q|. \end{equation} Assume that the point $p$ is $2Ld$-shadowed by a pseudotrajectory $\{y_k\}$ of some point $y$ generated by the pseudomethod $\{\Psi_k\}$. Put $q_k = \exp^{-1}_{p}y_k$ for all $k\in\mathbb{Z}$; then $q_{k+1} = \Phi_kq_k$. We observe that, by inequality~$(\ref{prop2})$, \begin{equation} \label{contr1} |pr_{(2l-1,2l)}q_k|\leq|q_k|\leq 4Ld\quad\textrm{for all }k\in\mathbb{Z} \end{equation} (in the coordinates in $T_{p}M$). Note that the maps $\Phi_k$ and $\psi_k$ coincide on the $10Ld$-neighborhood of zero. By relations $(\ref{form})$ and $(\ref{contr1})$, $$|pr_{(2l-1,2l)}q_{10L}|\geq 10Ld - |pr_{(2l-1,2l)}q_0|\geq 6Ld.$$ The last inequality contradicts inequality $(\ref{contr1})$. Hence, the diffeomorphism $f$ does not have any nonhyperbolic periodic points. \textbf{Lemma 4.} If a diffeomorphism $f$ has LipInvPerSh, then all periodic points of the diffeomorphism $f$ are uniformly hyperbolic (i.e. they are hyperbolic with the same constants $C$ and $\lambda$). In other words, there exist constants $C>0$ and $0<\lambda<1$ depending only on $L$ such that for any periodic point $p$ of the diffeomorphism $f$ there exist $Df$-invariant complementary subspaces $S(p)$ and $U(p)$ of the tangent space $T_{p}M$ such that $$|Df^j(p)v|\leq C\lambda^j|v|\quad\textrm{for }v\in S(p),\ j\geq0;$$ $$|Df^{-j}(p)v|\leq C\lambda^j|v|\quad\textrm{for }v\in U(p),\ j\geq0.$$ \textbf{Proof.} Without loss of generality, we assume that the number $L$ from the property LipInvPerSh is natural. Let $p$ be a periodic point of period $m$. Denote by $m_0$ the fundamental period of the point $p$. Put $p_i=f^i(p)$, $A_i=Df(p_i)$, and $B=Df^m(p)$. By Lemma~3, the point $p$ is a hyperbolic periodic point. 
Hence, there exist complementary $Df$-invariant linear spaces $S(p)$ and $U(p)$ at the point $p$, and these spaces satisfy the conditions \begin{equation} \label{3.4.13} \lim_{n\rightarrow+\infty}B^nv_s=\lim_{n\rightarrow+\infty}B^{-n}v_u=0\quad\textrm{for } v_s\in S(p), v_u\in U(p). \end{equation} Consider an arbitrary nonzero vector $v_u\in U(p)$. Put $e_0=v_u/|v_u|$. Consider the sequence $$a_0=\tau,\quad a_{i+1} = a_i|A_ie_i| - 1,$$ where $e_{i+1} = A_ie_i/|A_ie_i|$, and the number $\tau$ is chosen such that $a_m=0$. An explicit formula for the number $\tau$ that satisfies all the required conditions is given in the paper \cite{6} (unrolling the recursion shows that $\tau=\bigl(1+\sum_{k=1}^{m-1}\prod_{i=k}^{m-1}|A_ie_i|\bigr)\bigl(\prod_{i=0}^{m-1}|A_ie_i|\bigr)^{-1}$). Note that $a_j>0$ for all $0\leq j\leq m-1$ (since the inequality $a_j\leq 0$ implies the inequality $a_{j+1}<0$ and then, inductively, $a_m<0$, whereas $a_m=0$). It follows from relations $(\ref{3.4.13})$ that there exists a number $n>0$ such that \begin{equation} \label{3.4.16} |B^{-n}\tau e_0|<1. \end{equation} Consider a finite sequence $w_i\in T_{p_i}M$ for $0\leq i\leq m(n+1)$ given by the equalities $$w_i=a_ie_i\quad\textrm{for } i\in\{0,\ldots,m-1\},$$ $$w_m=B^{-n}\tau e_0,$$ $$w_{m+1+i}=A_iw_{m+i}\quad\textrm{for } i\in\{0,\ldots,mn-1\}.$$ Note that $$w_{km}=B^{k-1-n}\tau e_0\quad\textrm{for } k\in\{1,\ldots,n+1\}.$$ Thus, we can consider the sequence $\{w_i\}$ as an $m(n+1)$-periodic sequence that is well-defined for all $i\in\mathbb{Z}$. Put $N>\max_{k\in\mathbb{Z}}|w_k|$. Since we can increase the number $N$ if necessary, we assume that $N>20L$. It is clear that if all vectors from the sequence $\{w_k\}$ are multiplied by $d$, then the maximum is multiplied by $d$ as well. Choose a number $\epsilon_1<r$ such that the neighborhoods $\exp_{p_j}B_T(\epsilon_1,0)_j$ are disjoint for all $1\leq j\leq m_0$. Let us emphasize that the index $j$ after the brackets means that we work with a ball in the space $T_{p_j}M$. Choose a number $\epsilon_2\leq\epsilon_1$ such that $B_T(\epsilon_2,0)_j\subset F_j^{-1}(B_T(\epsilon_1,0)_{j+1})$ for all $1\leq j\leq m_0$. Let $d$ be an arbitrary sufficiently small number such that $100Nd<\epsilon_2$. Define the maps $\psi_k:\bigcup_{1\leq i\leq m_0} B_T(\epsilon_2,0)_i\mapsto\bigcup_{1\leq i\leq m_0} B_T(\epsilon_1,0)_i$ in the following way: \begin{enumerate} \item[1)] $\psi_{k + m(n+1)} = \psi_k$ for all $k\in\mathbb{Z}$; \item[2)] if $y\in B_T(\epsilon_2,0)_k$, then \begin{enumerate} \item[2.1)] $\psi_k(y)=A_ky - de_{k+1}$ for $0\leq k \leq m-2$, \item[2.2)] $\psi_{m-1}(y)=A_{m-1}y - de_{m} + B^{-n}\tau de_{0}$ for $k=m-1$, \item[2.3)] $\psi_k(y)=A_ky$ for $m\leq k\leq mn+m-1$; \end{enumerate} \item[3)] for other $y$ ($y\in B_T(\epsilon_2,0)_l$, where $l-k$ is not a multiple of $m_0$, the fundamental period of the point $p$) $\psi_k(y)=F_l(y)$. \end{enumerate} Let us show that the following equalities hold: \begin{equation} \label{defW1} \psi_{k-1}\circ\ldots\circ\psi_0(w_0d)=w_kd\quad\textrm{for all }k\geq 1, \end{equation} \begin{equation} \label{defW2} \psi^{-1}_k\circ\ldots\circ\psi^{-1}_{-1}(w_0d)=w_kd\quad\textrm{for all }k\leq 0. \end{equation} Indeed, for any $0\leq k\leq m-2$ $$dw_{k+1} = da_{k+1}e_{k+1} = d((a_k|A_ke_k| - 1)/|A_ke_k|)A_ke_k = A_kdw_k - de_{k+1} = \psi_k(dw_k).$$ Since, by the choice of $\tau$, $1=a_{m-1}|A_{m-1}e_{m-1}| = |w_{m-1}||A_{m-1}e_{m-1}|=|A_{m-1}w_{m-1}|$, we have $A_{m-1}w_{m-1} = e_{m}$; hence, $$\psi_{m-1}(dw_{m-1}) = A_{m-1}dw_{m-1} - de_{m} + B^{-n}\tau de_{0} = B^{-n}\tau de_{0} = dw_m.$$ Thus, we proved equalities $(\ref{defW1})$ and $(\ref{defW2})$. 
Note that the maps $\psi_k$ were constructed precisely so that these equalities hold. Observe that, by inequality $(\ref{3.4.16})$, the maps $\psi_k$ satisfy the analog of condition $(\ref{cond1})$ with $2d$ instead of $d/2$. Fix an arbitrary $k\in\mathbb{Z}$. We see that all conditions of item 2) of Proposition are satisfied. We apply Proposition to $C = 100N$, $b=\epsilon_2$. Since we can decrease the number $d$, we can assume that it is less than one quarter of the corresponding number from item 2) of Proposition. Let us emphasize that, by Remark 2, the number $d$ does not depend on the index~$k$. Denote by $\Phi_k$ the analog of the map $\Psi_k$ constructed in Proposition. By the statement of Proposition, $|\Phi_k(x) - F_j(x)|\leq 4d$ for $x\in\allowbreak B_T(\epsilon_2,0)_j$ and $1\leq j\leq m_0$. Let us emphasize that the map $\Phi_k$ coincides with the map $F_k$ on the set $\bigcup_{0\leq j\leq m_0} B_T(\epsilon_2,0)_j\backslash \bigcup_{0\leq j\leq m_0} B_T(50Nd,0)_j$. Consider the maps $\Psi_k$ given by the formula $$\Psi_k(y) = \exp_{p_{l+1}}\circ \Phi_k\circ\exp^{-1}_{p_l}(y)\quad\textrm{for }y\in \exp_{p_l}(B_T(\epsilon_2,0)_l),\ 1\leq l\leq m_0.$$ Clearly, if we define the maps $\Psi_k$ to be equal to the diffeomorphism $f$ at all other points of the manifold $M$, then the maps $\Psi_k$ remain continuous. Note that, by inequalities $(\ref{prop1})$, the maps $\Psi_k$ generate an $8d$-pseudomethod. By our assumptions, the point $p$ is $8Ld$-shadowed by one of the pseudotrajectories generated by the pseudomethod $\Psi =\{\Psi_k\}$. It follows that there exists a pseudotrajectory $\{y_k\}$ of the point $y$ defined by the analog of equalities $(\ref{pseudo2})$ such that the point $p$ is $8Ld$-shadowed by this pseudotrajectory. Put $q_k = \exp^{-1}_{p_k}y_k$; it then follows from our assumptions that \begin{equation} \label{why} |q_k|\leq16Ld. \end{equation} Note that $16Ld<50Ld$, and the maps $\Phi_k$ and $\psi_k$ coincide on the $50Ld$-neighborhood of zero (in the space $T_{p_k}M$). It is easily seen that $$\Phi_0(q-w_0d + w_0d)=A_0(q-w_0d) + \Phi_0(w_0d)=A_0(q-w_0d)+w_1d,$$ $$\Phi_{k-1}\circ\cdots\circ\Phi_0(q-w_0d + w_0d)=A_{k-1}\cdots A_0(q-w_0d) + \Phi_{k-1}\circ\cdots\circ\Phi_0(w_0d)=$$ \begin{equation} \label{formula1} =A_{k-1}\cdots A_0(q-w_0d)+w_kd,\quad k\geq1, \end{equation} \begin{equation} \label{formula2} \Phi^{-1}_{-k}\circ\cdots\circ\Phi^{-1}_{-1}(q-w_0d + w_0d)=A^{-1}_{-k}\cdots A^{-1}_{-1}(q-w_0d)+w_{-k}d,\quad k\geq1. \end{equation} In $(\ref{formula1})$ and $(\ref{formula2})$ the norm of the second term is estimated from above by $Nd>20Ld$, whereas, by hyperbolicity of the point $p$, the first term in one of these formulae has a large norm (larger than $2Nd$) for $k$ with large absolute value (i.e. the norm of the first term is much larger than the norm of the second term) if $q\neq w_0d$. Thus, the point $p$ can be shadowed only by the pseudotrajectory corresponding to the vector $w_0d$, i.e. $q=w_0d$. But then inequalities $(\ref{why})$ imply the estimates $$|a_k|=|w_k|\leq 16L\quad\mbox{for }0\leq k\leq m-1.$$ These estimates imply the desired hyperbolicity estimates for any vector $v$ fixed in advance (cf.~\cite{6} for a detailed explanation). \\ Note that, in fact, Lemma 4 implies item 2) of Theorem (and, consequently, item 3) as well), since it is proved in \cite{6} that if the statement of Lemma 4 holds, i.e., the set $\mbox{Per}(f)$ has a hyperbolic structure, then the set $\mbox{Cl}(\mbox{Per}(f))$ has a hyperbolic structure too, i.e., the set $\mbox{Cl}(\mbox{Per}(f))$ is hyperbolic.
\section{\label{SEC:INTRO}Introduction} The mass of the SM-like Higgs boson, discovered by ATLAS and CMS~\cite{Aad:2012tfa,Chatrchyan:2012ufa,Aad:2015zhl}, is now an electroweak precision observable, thanks to its outstandingly accurate determination at the LHC~\cite{Khachatryan:2016vau,Sirunyan:2018koj,Aad:2019mbh}, and it plays an important role in constraining the allowed parameter space of Beyond-the-Standard-Model (BSM) theories. On the one hand, the Higgs mass is a prediction in supersymmetric theories (see \citere{Slavich:2020zjv} and references therein for a recent review) and interestingly it depends most heavily on the electroweak couplings and scale -- quantities that are already known from other observations -- while it is only at loop level that a dependence on the scale of supersymmetric particles appears. This property has spurred significant developments in precision scalar-mass calculations, advanced in recent years by the KUTS initiative~\cite{Hollik:2014wea,Borowka:2014wla,Bagnaschi:2014rsa,Hollik:2014bua,Degrassi:2014pfa,Goodsell:2014bna,Goodsell:2014pla,Muhlleitner:2014vsa,Goodsell:2015ira,Borowka:2015ura,Staub:2015aea,Hahn:2015gaa,Lee:2015uza,Goodsell:2015yca,Drechsel:2016jdg,Goodsell:2016udb,Braathen:2016mmb,Bahl:2016brp,Athron:2016fuq,Braathen:2016cqe,Drechsel:2016htw,Staub:2017jnp,Bagnaschi:2017xid,Passehr:2017ufr,Bahl:2017aev,Braathen:2017izn,Harlander:2017kuc,Athron:2017fvs,Biekotter:2017xmf,Borowka:2018anu,Stockinger:2018oxe,Bahl:2018jom,Harlander:2018yhj,Braathen:2018htl,Gabelmann:2018axh,Bahl:2018qog,Bahl:2018ykj,Dao:2019qaz,Bagnaschi:2019esc,Goodsell:2019zfs,Harlander:2019,Bahl:2019hmm,Bahl:2019wzx,Kwasnitza:2020wli,Bahl:2020tuq,Bahl:2020jaq,Bahl:2020mjy} as described in the report~\cite{Slavich:2020zjv}. On the other hand, in non-supersymmetric theories, the Higgs mass is not a prediction by itself, but it can be used to extract the Higgs quartic coupling and, in turn, investigate the stability of the electroweak vacuum. In this context, a precise calculation is essential to produce reliable results on vacuum stability (see Refs.~\cite{Degrassi:2012ry,Buttazzo:2013uya,Kniehl:2015nwa,Kniehl:2016enc,Martin:2019lqd} for works in the SM) and to correctly appreciate the potential impact of new particles~\cite{Coriano:2015sea,Braathen:2017jvs,Krauss:2018thf,Hollik:2018yek,Wang:2018lhk,Hollik:2018wrr}. We refer the interested reader to Ref.~\cite{Slavich:2020zjv} and references therein for an in-depth review of Higgs-mass computations, and we only recall here the main steps involved (applicable for any BSM theory). The standard calculational technique begins with the extraction of SM-like parameters -- namely the electroweak and strong gauge couplings, the quark and lepton Yukawa couplings, and the Higgs vacuum expectation value (vev) -- from observables. Adding then the BSM parameters to these, the Higgs (and other particle) masses can be calculated, along with any other desired predictions. The relevant observables for the electroweak sector are typically, as in calculations in the SM, either $M_Z, M_W, \alpha(0)$ or $M_Z, G_F, \alpha(0)$ where $M_{Z,W}$ are the $Z$ and $W$ boson masses, $\alpha(0)$ is the fine-structure constant extracted in the Thomson limit, and $G_F$ is the Fermi constant. This latter quantity is extracted from muon three-body decays, whereas the others are related essentially to self-energies. 
In general, this extraction of the SM-like couplings and the Higgs vev can be performed at one-loop for any theory, but the two-loop relationships are only known for the SM and a small subset of other models in certain limits. At the tree level, the expectation value $v$ of the Higgs boson is related to the other parameters in the theory by the requirement that the theory be at the minimum of the potential. To be concrete, consider the Higgs potential of the SM, $V = \mu^2\,\lvert H\rvert^2 + \lambda\,\lvert H\rvert^4$; then the minimisation condition gives \begin{align} 0 &= \mu^2 + \lambda\, v^2\, . \end{align} Since we do not have an observable for $\mu^2$ we typically use this equation to eliminate it, giving the Higgs mass to be \begin{align} m_h^2 &= \mu^2 + 3\, \lambda\, v^2 = 2\, \lambda\, v^2\,. \end{align} However, once we go beyond tree level, there are several possible choices. The approach typically taken in BSM theories, and in the SM in Ref.~\cite{Martin:2014cxa}, is to insist that the expectation value $v$ is a fixed ``observable'', and instead keep solving for $\mu^2$ order-by-order in perturbation theory. \mbox{In this way,} \begin{align} \mu^2 &= - \lambda\, v^2 - \frac{1}{v}\, \frac{\partial \Delta V}{\partial h}\bigg|_{h=0} \equiv - \lambda\, v^2 - \frac{1}{v}\, t_h\,, \label{EQ:SM_solveformu2} \end{align} where $\Delta V$ are the loop corrections to the effective potential, and then the Higgs pole mass \mbox{$M_h$ reads} \begin{align} M_h^2 &= 2\,\lambda\, v^2 - \frac{1}{v}\, t_h + \Pi_{hh}{\left(M_h^2\right)} \equiv 2\, \lambda\, v^2 + \Delta M_h^2\,, \label{EQ:StandardSM} \end{align} where $\Pi_{hh}{\left(M_h^2\right)}$ is the Higgs self-energy evaluated on-shell. One of the chief advantages of this approach is that tadpole diagrams do not appear in any processes, since they vanish by construction. On the other hand, while this is in principle a straightforward procedure to follow, it is complicated by the fact that the self-energies and effective potential implicitly depend on $\mu^2$. In Landau gauge, or the gaugeless limit, this leads to the ``Goldstone Boson Catastrophe'' at two loops \cite{Martin:2002wn,Martin:2014bca,Elias-Miro:2014pca,Kumar:2016ltb} -- its solution appears by consistently solving the above equation order by order \cite{Braathen:2016cqe,Braathen:2017izn}. Indeed, one way to formalise this is as a finite (or possibly IR-divergent) counterterm for $\mu^2$: \begin{align} \mathcal{L} &\supset - \left(\mu^2 + \delta \mu^2 + \lambda\, v^2\right) v\, h - \frac{1}{2} \left(\mu^2 + \delta \mu^2 + 3\,\lambda\, v^2\right) h^2 + \ldots\,, \label{EQ:mucountertermSM} \end{align} where $\delta \mu^2 = - \frac{1}{v}\,t_h. $ Another drawback is that it manifestly breaks gauge invariance, since the loop corrections above depend on the gauge; and it also means that the expectation value $v$ is not an \ensuremath{\overline{\text{MS}}}\xspace parameter, so the renormalisation-group equations for the expectation value are no longer just given by those of $\mu^2$ and $\lambda$, but have extra contributions \cite{Sperling:2013eva,Sperling:2013xqa}. However, there is a further drawback to the above procedure which we wish to highlight in this paper. 
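To make the bookkeeping of eqs.~\eqref{EQ:SM_solveformu2} and~\eqref{EQ:mucountertermSM} concrete, the following minimal Python sketch evaluates the tadpole and the resulting counterterm for $\mu^2$ numerically. It keeps only the top-quark contribution to $\Delta V$ (the standard one-loop Coleman--Weinberg term), and the inputs for $y_t$, $\lambda$, $v$ and the scale $Q$ are purely illustrative choices of ours; it is an orientation aid, not a reproduction of any calculation in this paper.
\begin{verbatim}
import numpy as np

kappa = 1.0 / (16 * np.pi**2)   # loop factor
v, Q  = 246.0, 173.0            # illustrative vev and MSbar scale (GeV)
lam   = 0.129                   # quartic chosen so that m_h ~ 125 GeV
yt    = 0.94                    # illustrative top Yukawa

def deltaV(h):
    # one-loop top-quark contribution to the effective potential (GeV^4)
    mt2 = 0.5 * yt**2 * (v + h)**2
    return -12.0 * kappa / 4.0 * mt2**2 * (np.log(mt2 / Q**2) - 1.5)

# tadpole t_h = d(DeltaV)/dh at h = 0, here by a central finite difference
eps = 1e-3
t_h = (deltaV(eps) - deltaV(-eps)) / (2 * eps)

# eq. (EQ:SM_solveformu2): solve for mu^2 order by order,
# i.e. the finite counterterm is delta mu^2 = -t_h / v
mu2 = -lam * v**2 - t_h / v

print(f"t_h  = {t_h:.3e} GeV^3")
print(f"mu^2 = {mu2:.3e} GeV^2 (tree part: {-lam * v**2:.3e})")
\end{verbatim}
In a realistic code the derivative would of course be taken analytically; the finite difference above is only a stand-in.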
When considering a BSM theory with additional scalars that may have an expectation value, it is typical to take the same approach as for the scalar field in the SM and fix their expectation values, solving the additional tadpole equations for other dimensionful parameters -- for example, their mass-squared parameters, or sometimes a cubic scalar coupling. To take the example of a real singlet $S$ with mass-squared \emph{Lagrangian parameter} $m_S^2$ -- not to be confused with the pole mass, which we denote $M_S$ -- and expectation value $v_S$, this means that, analogously to eq.~\eqref{EQ:SM_solveformu2}, \begin{align} m_S^2 &= \left(m_S^2\right)^{\rm tree} - \frac{1}{v_S}\, \frac{\partial \Delta V}{\partial S} . \end{align} If the loop corrections are not large, and $v_S$ is not small, this is completely acceptable -- so for models such as the NMSSM there is generally no problem. However, if we consider a different theory or regions of the parameter space where $v_S$ is small, for example if $m_S \gg v$ and $v_S \propto v^2$ (as may be found in examples of EFT matching \cite{Braathen:2018htl}), then we can easily find the case that $\delta m_S^2 > \left(m_S^2\right)^{\rm tree}$. This makes the calculation unreliable. The archetypal example of this problem is the case where the neutral scalar obtaining an expectation value actually comes from an $SU(2)$ triplet $\mathbf{T}$ with expectation value $v_T$ and mass-squared $m_T^2$ -- for example in Dirac-gaugino models \cite{Belanger:2009wf,Benakli:2011kz,Benakli:2012cy,Goodsell:2020lpx}. In that case, $v_T \propto v^2/m_T^2$ multiplied by other dimensionful parameters of the theory. Moreover, we require that $v_T \lesssim 4$ \ensuremath{\mathrm{GeV}}\xspace from electroweak-precision constraints, generally requiring $m_T \gtrsim 1$ TeV. So then \begin{align} \delta m_T^2 &\sim \frac{1}{4\ \ensuremath{\mathrm{GeV}}\xspace} \times \frac{1}{16\pi^2} \times \mathcal{O}{\left( \ensuremath{\mathrm{TeV}}\xspace^3 \right)} \sim 2.5 \times \mathcal{O}{\left( \ensuremath{\mathrm{TeV}}\xspace^2 \right)}\,, \end{align} \textit{i.\,e.}\xspace~we see that there is a severe problem whenever $v_T/m_T$ is of the order of a loop factor. Moreover, for such cases where $v_S$ is small, this procedure works in the opposite way to that which we would desire. In BSM theories the scalar expectation values beyond $v$ are not top-down inputs or tied closely to some observables, whereas we may typically want to define the masses and couplings as fixed by some high-energy boundary conditions (for example constrained or minimal SUGRA conditions where soft masses have a common origin). In this case we would like to solve the tadpole equations for $v_S$; even if this would typically lead to coupled cubic equations, nowadays it is almost trivial to solve them numerically, or start from an approximation. In this paper we will instead examine an alternative procedure, proposed by Fleischer and Jegerlehner in examining Higgs decays in the SM \cite{Fleischer:1980ub}, which has the potential to solve both of these issues. Instead of taking the expectation values as fixed, we take them to be the tree-level solutions of the tadpole equations. This means that we do not work at the ``true'' minimum of the potential and must include tadpole diagrams in all processes. While this implies the addition of some new Feynman diagrams in the Higgs mass calculation, it is not technically more complicated than including finite counterterm insertions for $\mu^2$. 
This approach has the additional advantages that, since the Lagrangian is specified in terms of \ensuremath{\overline{\text{MS}}}\xspace parameters only, the result is manifestly gauge independent, and the expectation values are just the solutions to the tree-level tadpole equations. For these reasons, it has been used and advocated in the SM, in particular at two loops in Refs.~\cite{Jegerlehner:2001fb, Jegerlehner:2002er,Jegerlehner:2002em,Jegerlehner:2003py,Bezrukov:2012sa,Kniehl:2015nwa}; and applied to certain extensions of the Two Higgs Doublet Model (THDM) when considering decays \cite{Krause:2016oke,Denner:2016etu,Altenkamp:2017ldc,Krause:2019qwe}. We also note that this approach is closely related to the various on-shell renormalisations used in \textit{e.\,g.}\xspace~\citeres{Chankowski:1992er,Dabelstein:1994hb,Freitas:2002um,Kanemura:2004mg} in the THDM and the Minimal Supersymmetric Standard Model (MSSM). In the example of the SM at the one-loop order, this would mean \begin{align} M_h^2 &= 2\, \lambda\, v^2 - \frac{6\,\lambda\, v}{m_h^2}\, t_h^{(1)} + \Pi_{hh}^{(1)}{\left(m_h^2\right)}\,, \end{align} where the superscripts in brackets indicate the loop order, and we put the momentum in the self-energy at the tree-level Higgs mass in order to respect the order of perturbation theory. In other words, the tadpole contribution is suppressed by the mass-squared of the Higgs, although -- since $m_h^2 = 2\, \lambda\, v^2$ -- here we find that they have a very similar form to the previous approach. On the other hand, in the case of a heavy singlet or triplet the contributions to the singlet self-energy would be similarly suppressed by $m_S^2$, and we can have $m_S$ much greater than the triplet coupling -- so the corrections to the singlet mass would be well under control. Meanwhile, in the BSM context this approach was proposed by \citere{Farina:2013mla} for the following very different reason: by no longer forcing the electroweak expectation value to have its observed value, we allow new physics to disturb the electroweak hierarchy. In the above approach, the contribution $- \frac{6\,\lambda\,v}{m_h^2}\, t_h^{(1)} = - \frac{3}{v}\,t_h^{(1)} $ is effectively the contribution \emph{from a shift in $v$}. We can view the calculation as equivalent to counterterms for the expectation value $\delta^{(1)} v$, where \begin{align} \mathcal{L} &\supset - \left(\mu^2 + \lambda\, v^2\right) v\, h - \left(\mu^2 + 3\, \lambda\, v^2\right) \delta^{(1)} v\, h - \ldots \end{align} so that now \begin{align} \delta^{(1)} v &= - \frac{1}{m_h^2}\, t_h^{(1)}\,. \end{align} In this case, if there is heavy new physics at a scale $\Lambda \gg m_h$, then we shift the Higgs expectation value up to that new scale suppressed only by a loop factor. Indeed in Ref.~\cite{Farina:2013mla} the proposal was to use \begin{align} \frac{\delta m_h^2}{m_h^2} &\equiv \frac{1}{m_h^2} \left[ -\frac{3}{v}\, t_h^{(1)} + \Pi_{hh}^{(1)}{\left(m_h^2\right)}\right] \end{align} as \emph{a measure of fine-tuning of the theory}. Another perspective on the difference between the two approaches is given by viewing the SM as an EFT. In this case, in the EFT the SM receives corrections to both $\mu^2$ and $\lambda$ at the matching scale from integrating out heavy states which can be done with $v=0$. 
As discussed in Ref.~\cite{Braathen:2016cqe}, when expanding in $v$, in order to respect gauge invariance we must have: \begin{align} \Delta V &= \Delta V_0 + \frac{1}{2} \left.\Delta V_{hh}\right\rvert_{v=0}\,v^2 + \mathcal{O}{\left(v^4\right)} + \ldots\,, \nonumber\\ \Pi_{hh}{\left(m_h^2\right)} &= \left.\Delta V_{hh}\right\rvert_{v=0} + \mathcal{O}{\left(v^2\right)} \end{align} and therefore $t_h = v\left.\Delta V_{hh}\right\rvert_{v=0} + \ldots$. This shows that the EFT-matching correction to $\mu^2$ (namely $\left.\Delta V_{hh}\right\rvert_{v=0}$, which is also the origin of the hierarchy problem) corresponds to $t_h/v$ to lowest order in $v$. Hence in the ``standard'' approach of eq.~\eqref{EQ:StandardSM} this cancels out and leaves only corrections proportional to~$v^2$ --~whereas in the modified approach it remains and gives a large shift to the Higgs mass. \needspace{5ex} However, the reappearance of the hierarchy is a problem for the \emph{light} Higgs mass, whereas the problem we wished to solve actually appeared in new, \emph{heavy} states! If we wish to explore theories which may remain natural while having heavy states, such as those in Ref.~\cite{Farina:2013mla}, then the modified tadpole approach should work best. There must consequently be some trade-off between losing control of the light Higgs and losing control of the heavier states (and losing gauge invariance too). In section~\ref{SEC:TOYMODEL} we will set up the necessary general formalism and explore this in detail for a toy model. However, there are \emph{two} potential solutions to allow us to have the best of both worlds: \begin{enumerate} \item Retain counterterms for $\mu^2$ as in eq.~\eqref{EQ:mucountertermSM} for the SM Higgs, but \emph{only} for them. This is somewhat tricky to automate, since we must make a special case of the electroweak sector, and we also lose gauge invariance. \item For cases where the tuning of the hierarchy becomes large, use EFT pole matching \cite{Athron:2016fuq} with the modified treatment of tadpoles. This way, the heavy states remain entirely under control, we keep the heavy masses and couplings as top-down inputs (that remain genuinely \ensuremath{\overline{\text{MS}}}\xspace or \ensuremath{\overline{\text{DR}}'}\xspace), and we have gauge invariance built-in. \end{enumerate} In section \ref{SEC:GNMSSM} we will adopt the second approach for the example of the general NMSSM (and apply it specifically to the variant known as the $\mu$NMSSM \cite{Hollik:2018yek}). We establish the necessary formalism for the matching and give a detailed examination, via implementing the computation in a modified \texttt{SPheno}\xspace~\cite{Porod:2003um,Porod:2011nf} code generated from \texttt{SARAH}\xspace~\cite{Staub:2008uz,Staub:2009bi,Staub:2010jh,Staub:2012pb,Staub:2013tta,Goodsell:2014bna,Goodsell:2015ira, Braathen:2017izn}. \section{\label{SEC:TOYMODEL}Treatment of tadpoles for theories with heavy scalars} For a general renormalisable field theory, once we have solved the vacuum minimisation conditions and diagonalised the mass matrices, we can write the potential in terms of real scalar fields $\{ \phi_i \}$ as \begin{align} V &= \text{const} + \frac{1}{2}\,m_i^2\,\phi_i^2 + \frac{1}{6}\,a_{ijk}\,\phi_i\,\phi_j\,\phi_k + \frac{1}{24}\,\lambda_{ijkl}\,\phi_i\,\phi_j\,\phi_k\,\phi_l\, . 
\end{align} If we take the standard approach and fix the expectation values, adjusting the mass parameters order by order in perturbation theory, then as described in \citere{Braathen:2016cqe} we can write the pole masses as \begin{align} \left(M_i^2\right)^{(1)} &= m_i^2 + \Delta_{ii} + \Pi_{ii}^{(1)}{\left(m_i^2\right)} \equiv m_i^2 + \Delta M_i^2\, . \label{EQ:generic_stdtad}\end{align} To define the shifts $\Delta_{ii}$ in a general way, we must start from some basis of fields $\big\{\phi_i^0\big\}$ split into expectation values and fluctuations so that $\phi_i^0 \equiv v_i + \hat{\phi}_i^0$ and then diagonalise the fields via $\hat{\phi}_i^0 = R_{ij}\,\phi_j$. In the simplest case where we solve the tadpole equations for some mass-squared parameters in the original basis and where we ignore pseudoscalars, we can then write \begin{align} \Delta_{ii} = - \sum_k R_{ki}^2\, \frac{1}{v_k} \left.\frac{\partial \Delta V}{\partial \hat{\phi}_k^0}\right|_{\hat{\phi}_k^0=0} = - \sum_{k,l} R_{ki}^2\, R_{lk}\, \frac{1}{v_k}\, t_l^{(1)}\,. \end{align} The generalisation to solving for other variables (such as cubic scalar couplings) and to include pseudoscalar mass shifts is given in \citere{Braathen:2017izn}. On the other hand, taking the modified approach and including the tadpole diagrams, the pole masses up to one loop are simply \begin{align} \label{EQ:generic_modtad} \left(M_i^2\right)^{(1)} &= \hat{m}_i^2 - \frac{1}{\hat{m}_j^2}\, a_{iij}\,t_j^{(1)} + \Pi_{ii}^{(1)}{\left(\hat{m}_i^2\right)} \equiv \hat{m}_i^2 + \hat{\Pi}_{ii}^{(1)}{\left(\hat{m}_i^2\right)}\,, \end{align} where we have defined $\hat{m}_i^2$ to be the tree-level mass when we are using the modified scheme (we will later drop the distinction between $m_i$ and $\hat{m}_i$, see below) and $\hat{\Pi}_{ij}\big(p^2\big)$ for later use to be the self-energies including the tadpoles. The expressions for the tadpoles and self-energies at one loop can be found \textit{e.\,g.}\xspace~in \citeres{Martin:2003it,Braathen:2016cqe}; this calculation is therefore more straightforward to automate, being purely diagrammatic in nature. An \emph{explicitly} gauge-invariant expression for this (\textit{i.\,e.}\xspace~one where there are no gauge-fixing parameters present) will be given in future work. At this point the reader may object that, no matter what technique we use to calculate masses, the result for a given theory should be the same up to higher-loop corrections. Unfortunately this is made obscure by the difficulties, in general, of defining the parameters of our theory. To compare the two calculations \emph{for the same parameter point}, in the standard approach we are invited to treat the expectation values as fundamental, so if we start from a theory defined in this way, we must: \begin{enumerate} \item Calculate loop-level masses in the standard approach for a given choice of expectation values (with the associated problems when those expectation values are small). \item Extract the Lagrangian parameters from the loop-corrected tadpole equations. \item Solve the tree-level vacuum stability equations with these new parameters, obtaining the expectation values for use in the alternative approach. \item Compute the new tree-level spectrum using these expectation values. \item Compute the loop-corrected masses in the alternative approach. 
\end{enumerate} Let us denote the tree-level masses and expectation values in the alternative approach as~$\hat{m}_i$ and~$\hat{v}_i$, and for simplicity assume that we solve the tadpole equations for some mass-squared parameters (rather than cubic couplings, say). Then, by passing back to the basis in which the fields are not diagonalised, where the Lagrangian mass parameters are $m_{0,ij}^2 = \hat{m}_{0,ij}^2 + \delta m_{0,ij}^2$ and the Lagrangian couplings are~$a_{0,ijk},$ $\lambda_{0,ijkl}$, we can carry out the above steps and solve perturbatively for the expectation values~$\hat{v}_i$ in the modified scheme: \begin{align} 0 &= \big(m_{0,ij}^2 + \delta m_{0,ij}^2\big)\, v_j + \frac{1}{2}\, a_{0,ijk}\, v_j\, v_k + \frac{1}{6}\, \lambda_{0,ijkl}\, v_j\, v_k\, v_l + t_{0,i}\nonumber\\ &= \big(m_{0,ij}^2 + \delta m_{0,ij}^2\big)\, \hat{v}_j + \frac{1}{2}\, a_{0,ijk}\, \hat{v}_j\, \hat{v}_k + \frac{1}{6}\, \lambda_{0,ijkl}\, \hat{v}_j\, \hat{v}_k\, \hat{v}_l\,. \end{align} We have written $t_{0,i}$ for the one-loop tadpole to emphasise that it is in the undiagonalised basis; to go to the mass-diagonal basis we need to rotate by the matrix $R_{ij}$ as above. Writing $\hat{v}_i =v_i + \delta v_i$ we obtain \begin{align} 0 &= - t_{0,i} + \mathcal{M}_{0, ij}^2\, \delta v_j, \end{align} where $\mathcal{M}_{0,ij}^2$ is the tree-level mass matrix of scalars in the standard scheme. This can be trivially solved by rotating to the mass-diagonal basis. We then write the tree-level mass matrix in the alternative scheme as \begin{align} \hat{\mathcal{M}}_{0, ij}^2 &= \mathcal{M}_{0,ij}^2 + \delta m_{0,ij}^2 + a_{0,ijk}\, \delta v_k + \lambda_{0,ijkl}\, v_k\, \delta v_l\,. \end{align} Using the \emph{same} matrix $R_{ij}$ we can rotate this to obtain\footnote{Recall that $a_{ijk} = (a_{0,i'j'k'} + \lambda_{0,i'j'k'l'}\, v_{l'})\,R_{i'i}\, R_{j'j}\, R_{k'k}$} \begin{align} \hat{m}_{i}^2 = \big(R^T\, \hat{\mathcal{M}}_{0}^2\, R\big)_{ii} &= m_i^2 + \Delta_{ii} + a_{iik}\, \frac{t_k^{(1)}}{m_k^2} + \mathcal{O}(\text{2-loop}). \end{align} Inserting this into (\ref{EQ:generic_modtad}) gives (\ref{EQ:generic_stdtad}). Of course, this comes with the associated problems of defining the theory in the standard approach: if we have a small expectation value, then (as we shall illustrate below) the loop corrections in $\Delta_{ii} $ can be very large, so the mass of the heavy scalar may differ greatly from the tree-level one. Making a conversion in this way just ensures that we see the same problem in the alternative treatment. Instead, for such points we should start with a theory defined in the \emph{alternative} manner. \needspace{5ex} \noindent Then, to compare the same point for the standard calculation one should: \begin{enumerate} \item Calculate loop-level masses in the alternative approach for a given choice of masses and couplings. \item Iteratively solve the loop-level vacuum stability equations to obtain the loop-corrected expectation values~$v_i$ for use in the standard scheme. \item Use these expectation values to compute the tree-level spectrum for use in the standard scheme (if we are using the approach with ``consistent tadpoles'')\footnote{In principle it is possible, and simpler, to just use the ``true'' input masses in the standard approach. This would alleviate the problem to a large extent, but would then lead to the well-known infra-red issues at two loops, or uncancelled logarithms in EFT matching, etc.} \item Compute the loop-corrected masses in the standard approach. 
\end{enumerate} In this way, we should obtain the same result (up to higher-order differences) for our desired point as in the alternative scheme. However, the key complicating factor is step 2: it assumes that we can efficiently and accurately find the true minimum of the potential. This can only be done by iteration of the tadpole equations; this involves \emph{recomputing the masses and couplings of the theory at each step} and is therefore often numerically expensive (especially at higher loop orders). On the other hand, if we do this perturbatively, then we are effectively using the alternative scheme! \subsection*{Disclaimer} While the above discussion is reassuring for the consistency of our calculations, in the following we will \emph{not} (for the most part) compare masses at the same parameter point, for the obvious reason that the results would be almost the same. Instead, what we want to illustrate is the difficulty in even defining our theory: in the standard approach, since we are required to choose a vacuum-expectation value for the heavy singlet fields (which are not physical parameters), the phenomenologist will often use a guess or a tree-level-approximate solution for this, rather than iteratively solve the tadpole equations (which, in any case, would lead to a different input value depending on the chosen loop order). We shall take this naive approach below, and compare (in most cases) theories \emph{with the same tree-level spectrum} by taking the expectation values to be the same in both the standard and modified schemes. Of course, according to the discussion above, these are not the same parameter points: we are instead illustrating the differences in methods of defining the theory, and will show how the alternative scheme gives a much more stable and efficient definition (at least in cases where the hierarchy problem for the light Higgs does not become severe). \subsection{A toy model} Let us now apply the above general expressions to the simplest toy model that can illustrate the differences of prescriptions for dealing with radiative corrections to tadpoles. This consists of the abelian Goldstone model coupled to a real singlet $S$, and has scalar potential \begin{align} \label{EQ:toymodeldef} V &= \mu^2\,\lvert H\rvert^2 + \frac{1}{4}\,\lambda\,\lvert H\rvert^4 + \frac{1}{2}\,m_S^2\,S^2 + a_{SH}\,S\,\lvert H\rvert^2 + \lambda_{SH}\,S^2\,\lvert H \rvert^2 + a_S\,S^3 + \lambda_S\,S^4 \end{align} with the fields \begin{align} H &\equiv \frac{1}{\sqrt{2}}\left(v + h + i\,G\right), \quad S \equiv v_S + \hat{S}\,, \end{align} $v$ and $v_S$ denoting the Higgs and singlet vacuum expectation values (vevs), respectively. The minimisation conditions at the tree level yield the equations \begin{subequations} \begin{align} \label{EQ:toytadpoleh} -\mu^2 &= \frac{1}{4}\,\lambda\,v^2 + a_{SH}\,v_S + \lambda_{SH}\,v_S^2\,,\\ \label{EQ:toytadpoleS} \left(m_S^2 + \lambda_{SH}\, v^2\right) v_S &= -\frac{1}{2}\,a_{SH}\,v^2 - 3\,a_S\,v_S^2 - 4\,\lambda_S\,v_S^3 \end{align} \end{subequations} that lead to the tree-level (squared) mass matrix for the scalars (which do not mix with the massless pseudoscalar): \begin{align} \mathcal{M}^2_\text{tree} &= \begin{pmatrix} \frac{1}{2}\,\lambda\,v^2 & a_{SH}\,v + 2\,\lambda_{SH}\,v\,v_S \\ a_{SH}\,v + 2\,\lambda_{SH}\,v\,v_S & m_S^2 + \lambda_{SH}\,v^2 + 6\, a_S\,v_S + 12\,\lambda_S\,v_S^2 \end{pmatrix}. \end{align} \begin{figure}[t!] 
\centering \begin{tabular}{c|c|c} \hline \includegraphics[width=.24\textwidth]{plots/tad.pdf}& \includegraphics[width=.36\textwidth]{plots/se.pdf}& \includegraphics[width=.24\textwidth]{plots/se_tad.pdf}\\ tadpole topologies & self-energy topologies & connected tadpole topologies\\ \hline \end{tabular} \caption{\label{fig:diagrams}{\em left:} one-loop tadpole diagrams; {\em middle:} one-loop self-energy diagrams appearing in standard and modified calculation; {\em right:} additional self-energy diagrams in the modified approach.} \end{figure} The one-particle irreducible one-loop contributions to the one- and two-point functions (see figure~\ref{fig:diagrams}) of this toy model are given by \begin{subequations} \begin{align} t_i^{(1)} &= -\,\frac{\kappa}{2}\,a_{ijj}\,A{\left(m_j^2\right)}\,,\\ \Pi_{ij}^{(1)}{\left(p^2\right)} &= \kappa\left[\frac{1}{2}\,\lambda_{ijkk}\,A{\left(m_k^2\right)} - \frac{1}{2}\,a_{ikl}\,a_{jkl}\,B{\left(p^2,m_k^2,m_l^2\right)} \right] \end{align} \end{subequations} with $A$ and $B$ denoting the scalar one-point and two-point one-loop integrals in the conventions of \textit{e.\,g.}\xspace~\citeres{Martin:2003it,Braathen:2016cqe}, $\kappa \equiv (16\pi^2)^{-1}$ and $p^2$ denoting the external momentum. In the approach of keeping the vevs fixed, we find for the one-loop pole masses: \begin{align} \left(M_i^2\right)^{(1)} &= m_i^2 - R_{i1}^2\,\frac{1}{v}\,t_h^{(1)} - R_{i2}^2\,\frac{1}{v_S}\,t_S^{(1)} + \Pi_{ii}{\left(m_i^2\right)}\,, \end{align} where $t_h^{(1)} = \partial \Delta V\big/\partial h\big|_{h,\hat{S}=0}$\,, $t_S^{(1)}= \partial \Delta V\big/\partial S\big|_{h,\hat{S}=0}$\,. Thus the tadpole corrections suffer from the division by the vev; in particular, the mass predictions can become numerically unstable in scenarios with a small singlet vev. Let us see this in practice for our example when $m_S^2$ is large; in this case \begin{align} v_S \sim - \frac{a_{SH}\,v^2}{2\,m_S^2}, \qquad R \sim \begin{pmatrix} 1 & - \frac{a_{SH}\,v}{m_S^2} \\ \frac{a_{SH}\,v}{m_S^2} & 1 \end{pmatrix}. \end{align} If we take $v$ small and just look at the singlet mass in the limit $p^2 \rightarrow 0$ for simplicity,\footnote{This limit is not implemented in our code and serves only to make the presentation more transparent. In fact, an off-shell evaluation of the self-energies implies unphysical behaviour of Higgs-mass predictions~\cite{Domingo:2020wiy}.} we have \begin{align} \Delta M_S^2 \approx \Pi_{SS}(0) - \frac{1}{v_S}\, t_S \supset - \frac{3\,a_S\, m_S^2\, \kappa}{v_S} \left(\,\overline{\log}\, m_S^2 - 1\right) + \ldots \end{align} where $\overline{\log}\, m_S^2 \equiv \log\!\left(m_S^2/Q^2\right)$ for renormalisation scale $Q$. When the system is really decoupled and $v=0$, then $v_S \sim m_S^2\big/(6a_S)$ and this expression remains well-controlled, but when $0 < v \ll m_S$ -- which is the case we are interested in -- we instead have \begin{align} \label{EQ:toymodel_stdbreakdown} \Delta M_S^2 &\propto \frac{6\, a_S\, m_S^4 }{16\,\pi^2\, a_{SH}\,v^2}\ \overline{\log}\, m_S^2 \end{align} which can be very large compared to $m_S^2$. 
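As a concrete companion to eqs.~\eqref{EQ:toytadpoleh} and~\eqref{EQ:toytadpoleS}, the short Python sketch below solves the singlet tadpole equation for $v_S$ and diagonalises $\mathcal{M}^2_\text{tree}$; the inputs mirror one point of the scan shown later in figure~\ref{FIG:TM_stdbreakdown}, but the code is our own minimal reimplementation for orientation, not the one behind the plots.
\begin{verbatim}
import numpy as np

# toy-model inputs, mirroring one point of the scans in the figures
v, lam      = 246.0, 0.52
mS2         = 2000.0**2           # (m_S^2)^tree in GeV^2
aS, aSH     = 100.0, 300.0        # trilinear couplings in GeV
lamSH, lamS = 0.0, 1.0 / 24.0

# tree-level singlet tadpole, eq. (EQ:toytadpoleS), as a cubic in v_S
coeffs = [4*lamS, 3*aS, mS2 + lamSH*v**2, 0.5*aSH*v**2]
roots  = np.roots(coeffs)
vS = min(roots[abs(roots.imag) < 1e-9].real, key=abs)  # EW-like vacuum

# tree-level scalar mass matrix and its mass eigenvalues
M2 = np.array([[0.5*lam*v**2,         aSH*v + 2*lamSH*v*vS],
               [aSH*v + 2*lamSH*v*vS, mS2 + lamSH*v**2 + 6*aS*vS + 12*lamS*vS**2]])
mh, mS = np.sqrt(np.linalg.eigvalsh(M2))   # ascending order

print(f"v_S = {vS:.3f} GeV, m_h = {mh:.1f} GeV, m_S = {mS:.1f} GeV")
print(f"large-m_S estimate: v_S ~ {-aSH*v**2 / (2*mS2):.3f} GeV")
\end{verbatim}
Picking the real root of smallest magnitude selects the electroweak-like vacuum for these inputs; a more careful treatment would compare the potential values at all extrema.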
If we take the modified approach to tadpoles, then the relevant generic expression for the self-energy is \begin{align} \hat{\Pi}_{ij}^{(1)}{\left(p^2\right)} &= \frac{1}{16\,\pi^2}\left[\frac{1}{2}\,\lambda_{ijkk}\,A{\left(m_k^2\right)} - \frac{1}{2}\,a_{ikl}\,a_{jkl}\,B{\left(p^2,m_k^2,m_l^2\right)} - \frac{1}{2\,m_k^2}\,a_{ijk}\,a_{kll}\,A{\left(m_l^2\right)}\right]; \end{align} and for our example \begin{align} \label{EQ:DMS_mod} \hat{\Pi}_{SS}^{(1)}{\left(m_S^2\right)} &\approx \Pi_{SS}(0) - \frac{a_{SH}^2\, \kappa}{2\,m_h^2}\,A{\left(m_S^2\right)} - \frac{3\,a_S^2\, \kappa}{m_S^2}\, A{\left(m_S^2\right)} + \ldots \sim - \frac{\kappa}{2} \left(\frac{a_{SH}^2 }{m_h^2} - 24\,\lambda_S\right) m_S^2\ \overline{\log}\, m_S^2\,.\hspace{3em}\taghere \end{align} Provided that $a_{SH} \lesssim m_h$ this is well under control, in contrast to the previous ``standard'' approach. \subsection{Numerical examples} In this section we shall illustrate the different behaviours of the two approaches to tadpoles in the toy model defined in eq.~(\ref{EQ:toymodeldef}) through numerical examples. For this purpose, we present results for the one-loop pole masses $M_h$ and $M_S$ computed diagrammatically both in the standard approach \mbox{-- following} eq.~(\ref{EQ:generic_stdtad}) -- and in the modified approach of equation~(\ref{EQ:generic_modtad}). We shall consider points defined to have \emph{the same tree-level spectrum}, but whose loop-corrected masses differ according to the scheme used. As described in the disclaimer above, these are not therefore the same points in parameter space: this illustrates \emph{the difficulty in defining the model}. For all the following figures, we set $\lambda=0.52$, to reproduce a light ``Higgs'' (noting that there are no gauge fields) near $125$ GeV, and we also fix $\lambda_{SH}=0$ and $\lambda_S=1/24$. In each case, we shall fix the \ensuremath{\ov{\mathrm{MS}}}\xspace parameter $m_S$ together with $v = 246~\ensuremath{\mathrm{GeV}}\xspace$, and solve the tree-level tadpole equations numerically to obtain $v_S$. Then the calculation in the modified scheme gives the correct value for the scalar masses. For comparison, in each of the figures~\ref{FIG:TM_stdbreakdown}, \ref{FIG:TM_stdbreakdown2}, \ref{FIG:TM_modworse} and \ref{FIG:TM_bothgood} we use these same values as inputs for the conventional scheme, where we treat the derived value for $v_S$ as the ``all orders'' expectation value; this means that, in the standard scheme, $(m_S^2)^{\rm mod.} = (m_S^2)^{\rm tree}$, the tree-level value, and is not actually the \ensuremath{\ov{\mathrm{MS}}}\xspace mass-squared parameter any more. Hence, as mentioned above, these represent different parameter points now; only the tree-level spectra are the same. To avoid ambiguity, we shall therefore use $(m_S^2)^{\rm tree}$ since it is the input value for both schemes. In this way we see that two ways of defining the theory at tree-level can give, at times, drastically different results. In section \ref{sec:faircompare} we provide, as a consistency check, a comparison of the approaches with a conversion of the parameters. \begin{figure} \vspace{-7ex} \centering \includegraphics[width=\textwidth]{plots/MhMS_vs_aSH_mS=2000_aS=100.pdf}\\[-3ex] \caption{\label{FIG:TM_stdbreakdown}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of $a_{SH}$. $m_S^{\rm tree}=Q=2000\,\ensuremath{\mathrm{GeV}}\xspace$, $a_S=100~\ensuremath{\mathrm{GeV}}\xspace$, $\lambda=0.52$, $\lambda_{SH}=0$, $\lambda_S=1/24$. 
The tree-level values are shown with the green curves, while the red and blue curves correspond to the one-loop results using respectively the standard (eq.~(\ref{EQ:generic_stdtad})) and modified (eq.~(\ref{EQ:generic_modtad})) treatments of tadpoles.} \vspace{2ex} \capstart \includegraphics[width=\textwidth]{plots/Figure3.pdf}\\[-3ex] \caption{\label{FIG:TM_stdbreakdown2}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of $m_S^{\rm tree}$. $Q=m_S^{\rm tree}$, $a_{SH}=150~\ensuremath{\mathrm{GeV}}\xspace$, $a_S=100~\ensuremath{\mathrm{GeV}}\xspace$, $\lambda=0.52$, $\lambda_{SH}=0$, $\lambda_S=1/24$. The colours for the different curves are the same as in figure~\ref{FIG:TM_stdbreakdown}. } \vspace{2ex} \capstart \includegraphics[width=\textwidth]{plots/MhMS_vs_aSH_mS=1000_aS=0_Q=5000.pdf}\\[-3ex] \caption{\label{FIG:TM_modworse}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of $a_{SH}$. $m_S^{\rm tree}=1000~\ensuremath{\mathrm{GeV}}\xspace$, $Q=5000~\ensuremath{\mathrm{GeV}}\xspace$, $a_S=0~\ensuremath{\mathrm{GeV}}\xspace$, $\lambda=0.52$, $\lambda_{SH}=0$, $\lambda_S=1/24$. The colours for the different curves are the same as in figure~\ref{FIG:TM_stdbreakdown}. } \end{figure} In figure~\ref{FIG:TM_stdbreakdown}, we show first $M_h$ (left side) and $M_S$ (right side) as a function of the trilinear coupling~$a_{SH}$, at tree level (green curves) and at one loop in the standard (red curves) and modified (blue curves) schemes for the tadpoles. We choose here a scenario with a large Lagrangian mass term \mbox{$m_S^{\rm tree} =2000~\ensuremath{\mathrm{GeV}}\xspace$} and a non-zero trilinear self-coupling $a_S=100~\ensuremath{\mathrm{GeV}}\xspace$ for the singlet (and we also fix the renormalisation scale to be $Q=2000~\ensuremath{\mathrm{GeV}}\xspace$). Consequently, we find ourselves exactly in the dangerous region \mbox{$0<v\ll m_S$}, \textit{c.\,f.}\xspace~eq.~(\ref{EQ:toymodel_stdbreakdown}), and as expected from our theoretical discussion, we find that the standard treatment of the tadpoles breaks down. On the one hand, for $M_h$ one can observe that the radiative corrections are larger in the standard approach and lead to larger variations of the loop-corrected mass than in the modified tadpole scheme. On the other hand, more strikingly, the results for $M_S$ in the standard approach are manifestly spurious. Indeed, while the loop corrections in the modified scheme remain very small (the green tree-level and blue one-loop curves are almost superimposed), in the standard scheme the corrections are huge: for large $a_{SH}\gtrsim v$ -- meaning not too small values of the singlet vev~$v_S$~-- they already amount to several hundred GeV, and if one decreases~$a_{SH}$ (thereby increasing $\Delta M_S^2$, \textit{c.\,f.}\xspace~eq.~(\ref{EQ:toymodel_stdbreakdown})) the singlet pole mass becomes tachyonic below $a_{SH}=v$. Next, in figure~\ref{FIG:TM_stdbreakdown2}, we fix the trilinear coupling $a_{SH}=150~\ensuremath{\mathrm{GeV}}\xspace$ and now consider $M_h$ (left) and $M_S$ (right) as a function of the Lagrangian mass term $m_S^{\rm tree}$. We also set $Q=m_S^\text{tree}$ and $a_S=100~\ensuremath{\mathrm{GeV}}\xspace$. Once again, with our choice of a non-zero singlet trilinear self-coupling $a_S$ and relatively small $a_{SH}$ --~hence also a small singlet vev -- we expect the standard approach to exhibit instabilities. 
For~$M_h$ (left side of figure~\ref{FIG:TM_stdbreakdown2}) both approaches behave relatively well and no instability seems to occur, although the radiative corrections are significantly larger in the standard scheme. However, for~$M_S$ the calculation in the standard approach (red curve) once again breaks down when $m_S^{\rm tree}$ is increased -- equivalently for small $v_S$ -- while the loop corrections to $M_S$ in the modified approach (blue curve) remain minute. In figure~\ref{FIG:TM_modworse}, we illustrate the behaviour of eq.~\eqref{EQ:DMS_mod}. We plot once more $M_h$ (left) and $M_S$ (right) as a function of the trilinear coupling $a_{SH}$, but now for a scenario where $a_S=0$ (in order to avoid large corrections $\Delta M_S^2$ in the standard scheme), and with $m_S^{\rm tree} =1000~\ensuremath{\mathrm{GeV}}\xspace$ and $Q=5000~\ensuremath{\mathrm{GeV}}\xspace$ so as to increase the size of the logarithms $\overline{\log} \, (m_S^2)^{\rm tree}$\,. For small values of $a_{SH}$, both schemes (red and blue curves) produce very similar results; however, as $a_{SH}$ becomes larger, the radiative corrections to $M_h$ as well as $M_S$ increase significantly in the modified tadpole scheme, leading to less reliable predictions (especially for $a_{SH}\gtrsim 300$--$400~\ensuremath{\mathrm{GeV}}\xspace$). Finally, we present in figure~\ref{FIG:TM_bothgood} an example of a scenario in which both ways to treat the tadpole contributions give reliable results. We take a small singlet mass parameter $m_S=500~\ensuremath{\mathrm{GeV}}\xspace$, set $a_S=0$ and maintain $a_{SH}<200~\ensuremath{\mathrm{GeV}}\xspace$. We observe here that the radiative corrections to $M_h$ and~$M_S$ remain well behaved in both approaches. \begin{figure} \centering \includegraphics[width=\textwidth]{plots/MhMS_vs_aSH_mS=500_aS=0_Q=500.pdf}\\[-3ex] \caption{\label{FIG:TM_bothgood}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of $a_{SH}$. $m_S^{\rm tree}=Q=500\,\ensuremath{\mathrm{GeV}}\xspace$, $a_S=0~\ensuremath{\mathrm{GeV}}\xspace$, $\lambda_{SH}=0$, $\lambda=0.52$, $\lambda_S=1/24$. The colours for the different curves are the same as in figure~\ref{FIG:TM_stdbreakdown}.} \end{figure} \subsection{\label{sec:faircompare}Comparisons at the same point} Here, for clarity (and as a consistency check) we shall follow the (first) prescription in section \ref{SEC:TOYMODEL} and compare the two schemes for computing the one-loop masses in our toy model at the same parameter point. We consider the same input parameters as in figure \ref{FIG:TM_stdbreakdown2}, except that now we scan over the true \ensuremath{\ov{\mathrm{MS}}}\xspace mass $m_S$ in both schemes. The calculation in the modified scheme is therefore identical to that of figure \ref{FIG:TM_stdbreakdown2}, but we then solve the tadpole equations for $\mu^2$ and $m_S^2$ at the one-loop order to find the values of $v, v_S$; while the value for $v$ changes little, the equation for $v_S$ becomes \begin{align} 0 &= \left(m_S^2 + \lambda_{SH}\, v^2\right) v_S + \frac{1}{2}\,a_{SH}\,v^2 + 3\,a_S\,v_S^2 + 4\,\lambda_S\,v_S^3 + t_S \big(m_S^2\big)\,. \label{EQ:FindTrueMs}\end{align} We then use this new value for $v_S$ to compute the tree and loop-level spectra in the standard scheme. 
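The iteration around eq.~\eqref{EQ:FindTrueMs} can be sketched in a few lines of Python. For brevity we keep only the pure-singlet piece of the one-loop tadpole, $t_S \approx -\tfrac{\kappa}{2}\,(6\,a_S + 24\,\lambda_S\,v_S)\,A\big(\mathcal{M}^2_{\text{tree},22}\big)$, which is the term driving the breakdown discussed above; this truncation and the inputs (in the vicinity of those of figure~\ref{FIG:TM_stdbreakdown2}) are purely illustrative and not what is used in our plots.
\begin{verbatim}
import numpy as np

kappa       = 1.0 / (16 * np.pi**2)
v, Q        = 246.0, 1000.0
mS2         = 1000.0**2           # true MSbar parameter m_S^2
aS, aSH     = 100.0, 150.0
lamSH, lamS = 0.0, 1.0 / 24.0

def A(m2):
    # scalar one-point loop integral, conventions of Martin:2003it
    return m2 * (np.log(m2 / Q**2) - 1.0)

def tS(vS):
    # pure-singlet part of the one-loop tadpole (rough truncation)
    m2SS = mS2 + lamSH*v**2 + 6*aS*vS + 12*lamS*vS**2
    return -0.5 * kappa * (6*aS + 24*lamS*vS) * A(m2SS)

vS = -0.5 * aSH * v**2 / mS2      # tree-level seed
for _ in range(100):
    # eq. (EQ:FindTrueMs): the loop tadpole enters as a constant shift
    coeffs = [4*lamS, 3*aS, mS2 + lamSH*v**2, 0.5*aSH*v**2 + tS(vS)]
    roots  = np.roots(coeffs)
    vS_new = min(roots[abs(roots.imag) < 1e-9].real, key=abs)
    if abs(vS_new - vS) < 1e-10:
        break
    vS = vS_new

print(f"loop-corrected v_S = {vS:.4f} GeV")
\end{verbatim}
Each update re-evaluates the loop function with the shifted spectrum, which is precisely why the full iteration, with all masses and couplings recomputed at every step, becomes numerically expensive in a realistic model.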
In figure \ref{FIG:TM_faircompare} we employ consistent tadpoles, so that we obtain a value for $(m_S^2)^{\rm tree}$ which satisfies eq.~\eqref{EQ:toytadpoleS} and use this to compute the tree-level spectrum, and as input for the loop computation with the appropriate perturbative shifts to the loop mass; neglecting mixing between the light and heavy scalars we have \enlargethispage{1.3ex} \begin{subequations} \begin{align} (M_S^2)^{\rm tree} &\simeq -\frac{a_{SH}\, v^2}{2\, v_S} + v_S \left(3\, a_S + 8\, v_S\, \lambda_S\right),\\ M_S^2 &\simeq (M_S^2)^{\rm tree} - \frac{1}{v_S}\, t_S\big((m_S^2)^{\rm tree}\big) + \Pi_{SS}\big((M_S^2)^{\rm tree};(m_S^2)^{\rm tree}\big)\, . \end{align} \end{subequations} We have written $(m_S^2)^{\rm tree}$ in the arguments of the tadpoles and self-energies to show the explicit dependence in the loop functions. In the left and right-hand plots of figure \ref{FIG:TM_faircompare} we therefore see that the shift between $m_S^2$ and $(m_S^2)^{\rm tree}$ becomes very large, and this leads to a breakdown of the (primitive) iterative algorithm that we use to solve for $v_S$, hence the standard scheme curves end near $m_S = 1250$~GeV, while the modified scheme has no such issue and the difference between loop-corrected and tree-level masses is negligible. This gives a different perspective on the general problem of calculating masses in such models. On the other hand, we see that, while the tree-level masses can differ significantly (even for the light ``Higgs'') the loop masses agree to a high precision, as they should. \begin{figure}[t!] \vspace{-7ex} \centering \includegraphics[width=\textwidth]{plots/FairCompare.pdf}\\[-10ex] \caption{\label{FIG:TM_faircompare}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of the true \ensuremath{\ov{\mathrm{MS}}}\xspace parameter $m_S$ in both the standard and modified schemes, where the standard scheme is performed according to the ``consistent tadpole'' prescription. Other parameters as in figure \ref{FIG:TM_stdbreakdown2}.} \capstart \vspace{-5ex} \includegraphics[width=\textwidth]{plots/NoConsistentTadpoles.pdf}\\[-10ex] \caption{\label{FIG:TM_faircompare2}$M_h$ (\textit{left}) and $M_S$ (\textit{right}) as a function of the true \ensuremath{\ov{\mathrm{MS}}}\xspace parameter $m_S$ in both the standard and modified schemes, where the standard scheme does not involve ``consistent tadpoles'' but the true \ensuremath{\ov{\mathrm{MS}}}\xspace mass $m_S$ is used everywhere. Other parameters as in figure \ref{FIG:TM_stdbreakdown2}.} \end{figure} For a final comparison, we give in figure \ref{FIG:TM_faircompare2} the same computation but where, instead of ``consistent tadpoles'' we use the true \ensuremath{\ov{\mathrm{MS}}}\xspace mass $m_S^2$ obtained from eq.~\eqref{EQ:FindTrueMs} in all of the loop functions so that \begin{subequations} \begin{align} (M_S^2)^{\rm tree} &\simeq -\frac{a_{SH} v^2}{2\, v_S} + v_S \left(3\, a_S + 8\, v_S\, \lambda_S\right) - \frac{1}{v_S}\, t_S \big( m_S^2\big)\,, \\ M_S^2 &\simeq (M_S^2)^{\rm tree} + \Pi_{SS} \big( (M_S^2)^{\rm tree};m_S^2\big)\, . 
\end{align} \end{subequations} \needspace{5ex} \noindent Aside from a shuffling of the tadpole term into the ``tree-level'' mass in the standard scheme, which now ensures that all of the curves on the right-hand side of figure \ref{FIG:TM_faircompare2} lie on top of each other (modulo the same proviso that the algorithm for finding $v_S$ breaks down), the differences between these two versions of the standard scheme then only exist at two loops. From figure \ref{FIG:TM_faircompare2} it would seem that avoiding the consistent tadpoles would be preferable in these cases, but of course then the above equations mix tree-level and loop-level quantities, so we have problems with EFT matching at one loop (because subleading logarithms do not cancel) and infra-red issues at two loops. \section{\label{SEC:POLEMATCHING}Pole mass matching with tadpole insertions} When matching two theories via pole masses, care must be taken that subleading logarithms are correctly subtracted. The best way to do this is to expand the expressions on both sides of the matching relation in terms of the same parameters; the most efficient choice is to use those of the high-energy theory (HET), even though this adds a layer of complication, because it is the SM parameters that we know from the bottom-up observations. To this end we require the shifts in the vacuum expectation value as well as in the gauge, Yukawa and, of course, quartic couplings. The most straightforward way to match the vacuum-expectation value of the Higgs is via matching the $Z$ mass, which gives (see \textit{e.\,g.}\xspace~\citeres{Athron:2016fuq,Staub:2017jnp,Braathen:2018htl}): \begin{align} v_{\mathrm{SM}}^2 &= v_{\mathrm{HET}}^2 + \frac{4}{g_Y^2 + g_2^2} \left[ \hat{\Pi}_{ZZ}^{\mathrm{HET}}(0) - \hat{\Pi}_{ZZ}^{\mathrm{SM}}(0) \right] + \mathcal{O}{\left(v^4\right)} \label{vexp}\,. \end{align} If we match the one-loop Higgs mass in the SM to the HET, where the light Higgs mass at tree level is $m_0$, then we have \begin{subequations} \begin{align} & 2\,\lambda_{\mathrm{SM}}\, v^2_{\mathrm{SM}} + \hat{\Pi}_{hh}^{\mathrm{SM}}{\left(2\,\lambda_{\mathrm{SM}}\, v^2_{\mathrm{SM}}\right)} = m_0^2 + \hat{\Pi}_{hh}^{\mathrm{HET}}{\left(m_0^2\right)} \\ & \lambda_{\mathrm{SM}} = \frac{1}{2\, v_{\mathrm{HET}}^2} \bigg\{ m_0^2 + \hat{\Pi}_{hh}^{\mathrm{HET}}{\left(m_0^2\right)} - \hat{\Pi}_{hh}^{\mathrm{SM}}{\left(m_0^2\right)} - \frac{4\,m_0^2 }{v_{\mathrm{HET}}^2\left( g_Y^2 + g_2^2\right)} \Big[\hat{\Pi}_{ZZ}^{\mathrm{HET}}(0) - \hat{\Pi}_{ZZ}^{\mathrm{SM}}(0)\Big]\bigg\}. \end{align} \end{subequations} It should be noted that -- in order to preserve gauge invariance, and cancel large logarithms exactly without introducing spurious subleading ones -- the matching of the quartic coupling should be performed according to this equation, as opposed to performing some iteration, matching eigenvalues of the mass matrices, or separately matching the expectation values and Higgs mass (as performed in some codes) \cite{Bahl:2017aev,Kwasnitza:2020wli}. With the prescription of including tadpole diagrams, this leads to \begin{align} \hat{\Pi}_{hh} &\equiv \Pi_{hh} - a^{hhk}\, \frac{1}{m_k^2}\, t_k\,, \qquad \hat{\Pi}_{ZZ} \equiv \Pi_{ZZ} - g^{ZZk}\, \frac{1}{m_k^2}\, t_k\, . 
\end{align} In the SM with $ \mathcal{L} \supset - \lambda_{\mathrm{SM}}\,\lvert H\rvert^4$ we have \begin{align} \hat{\Pi}_{hh}^{\mathrm{SM}} &\equiv \Pi_{hh}^{\mathrm{SM}} - \frac{6\, \lambda\, v}{m_h^2}\, t_h^{\mathrm{SM}} = \Pi_{hh}^{\mathrm{SM}} - \frac{3}{v}\, t_h^{\mathrm{SM}}\,, \qquad \hat{\Pi}_{ZZ}^{\mathrm{SM}} \equiv \Pi_{ZZ}^{\mathrm{SM}} - \frac{2\, M_Z^2}{v\, m_h^2}\, t_h^{\mathrm{SM}}\,,\\[-.6ex] \intertext{and so} \hat{\Pi}_{hh}^{\mathrm{SM}} - \frac{m_h^2}{M_Z^2}\, \hat{\Pi}_{ZZ}^{\mathrm{SM}} &= \Pi_{hh}^{\mathrm{SM}} - \frac{m_h^2}{M_Z^2}\, \Pi_{ZZ}^{\mathrm{SM}} - \frac{3}{v}\, t_h^{\mathrm{SM}} + \frac{2}{v}\, t_h^{\mathrm{SM}} = \Delta M^2_{\mathrm{SM}} - \frac{m_h^2}{M_Z^2}\, \Pi_{ZZ}^{\mathrm{SM}}\,, \end{align} where $\Delta M^2_{\mathrm{SM}}$ is now just the standard set of vacuum conditions as in eqs.~\eqref{EQ:StandardSM} or \eqref{EQ:generic_stdtad}. So what we have shown is that the modified treatment of tadpoles cancels out exactly in the matching of the light Higgs, \emph{for the SM part}. Of course, the shift in the matching condition should only depend on the Lagrangian parameters, which are not affected by the treatment of tadpoles, so the same is true for the matching in the HET part \emph{up to terms of higher order in $v$}. \needspace{3ex} We have already implicitly shown how the change in scheme affects the matching of the gauge bosons; now for fermions we have \begin{align} \Gamma_{F_i F_j}(p) &= i \left(\slashed{p} - m_{F}\right) \delta_{ij} + i \left[\slashed{p} \left(P_L\,\hat\Sigma_{ij}^{L}{\left(p^2\right)} + P_R\,\hat\Sigma_{ij}^{R}{\left(p^2\right)}\right) + P_L\,\hat\Sigma_{ij}^{SL}{\left(p^2\right)} + P_R\,\hat\Sigma_{ij}^{SR}{\left(p^2\right)}\right]. \end{align} For fermions at one loop we can write the mass-matrix corrections as \begin{align} \delta m_F &= - \Sigma^{SL} - \frac{1}{2} \left(\Sigma^R\, m + m\, \Sigma^L\right). \end{align} This means that our tadpole shift just affects \begin{align} \delta \Sigma^{SL} &= \delta \Sigma^{SR} = \frac{1}{m_k^2}\, y^{ijk}\, \frac{\partial V}{\partial \phi_k}\,, \end{align} where $y^{ijk}$ are the Yukawa couplings, which can be written in terms of Weyl spinors $\{\psi_i\}$ as \begin{align} \mathcal{L} \supset - \frac{1}{2}\, y^{ijk}\, \psi_i\, \psi_j\, \phi_k\,. 
\end{align} To match the Yukawa couplings via the pole masses of the quarks, the matching of the electroweak expectation value must also be included; working in the basis with diagonalised Yukawa couplings, we can match the diagonal elements as (using $Y^F \equiv y^{FFh}$ for $h$ the SM Higgs and a general fermion $F$) \begin{subequations} \begin{align} M_F &= v\, Y^F - \Sigma^{SL} - \frac{1}{2} \left(\Sigma^R\, m + m\, \Sigma^L\right),\\ Y^F_{\mathrm{SM}} &= Y^{F}_{\mathrm{HET}} + \frac{1}{v_{\mathrm{HET}}} \left[ (\delta m_F)^{\mathrm{HET}} - (\delta m_F)^{\mathrm{SM}} - \frac{1}{m_k^2}\, y^{FFk}_{\mathrm{HET}}\, t_k + \frac{1}{m_h^2}\, Y^{F}_{\mathrm{SM}}\, t_h^{\mathrm{SM}} \right]\nonumber\\[-1ex] &\quad - \frac{Y^F_{\mathrm{HET}}}{2\, M_Z^2} \left[ \hat{\Pi}_{ZZ}^{\mathrm{HET}}(0) - \hat{\Pi}_{ZZ}^{\mathrm{SM}}(0)\right]\nonumber\\[-1ex] &= Y^{F}_{\mathrm{HET}} + \frac{1}{v_{\mathrm{HET}}} \left[ (\delta m_F)^{\mathrm{HET}} - (\delta m_F)^{\mathrm{SM}} - \frac{1}{m_k^2}\, y^{FFk}_{\mathrm{HET}}\, t_k \right] - \frac{Y^F_{\mathrm{HET}}}{2\, M_Z^2} \left[ \hat{\Pi}_{ZZ}^{\mathrm{HET}} (0) - \Pi_{ZZ}^{\mathrm{SM}} (0) \right] , \end{align} \end{subequations} where we once again see that the shift in the tadpole scheme cancels out exactly in the SM part. This procedure is particularly important since the shift to the expectation value arising in eq.~\eqref{vexp} is very large, as discussed in the introduction. In this case, since the corrections to $\mu^2$ -- and therefore also to $v^2$ -- are very large, it becomes impractical in an implementation to actually use the ``correct'' value of $v^2$ in the high-energy theory. Indeed, this can even become impossible, if $\delta \mu^2$ is such that $\mu^2$ would become positive in the SM! Instead, provided we take $v$ much less than the matching scale, we can just treat it as a perturbation parameter to extract the SM values. In our numerical calculation in the next section we do exactly this: we just use the SM value of $v$ in both high- and low-energy theories, but use the correct shifts of the expectation values in the matching of the parameters. This is very similar to a standard EFT calculation, which assumes \textit{e.\,g.}\xspace~in split supersymmetry that the heavy Higgs masses are tuned according to the mixing angle given as an input, and takes $v=0$ explicitly, since we are not interested in corrections to Lagrangian parameters of order $v^2\big/M^2$ where $M$ is the matching scale. \section{\label{SEC:GNMSSM}Application in the $\mu$NMSSM} \enlargethispage{1.4ex} In the introduction, we explained that the modified treatment of tadpoles can be useful for stability under perturbation theory of heavy scalar masses when they are associated with a small expectation value. In section \ref{SEC:TOYMODEL} we showed how it worked in practice in a toy model. In section \ref{SEC:POLEMATCHING} we described how, for theories where the new scalars are substantially above the electroweak scale, it can be practically applied via EFT matching of the pole masses. Here, we shall apply this technique to a real test case, the $\mu$NMSSM. 
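Schematically, the matching chain of section~\ref{SEC:POLEMATCHING} can be wrapped as below; the hatted self-energies are assumed to be supplied externally (\textit{e.\,g.}\xspace~by a spectrum generator), so this is a sketch of the order of operations rather than of our actual implementation.
\begin{verbatim}
def match_lambda_SM(m0_sq, v_HET, gY, g2,
                    PiZZ_HET_0, PiZZ_SM_0, Pihh_HET_m0, Pihh_SM_m0):
    # One-loop matching of v and lambda at the HET scale. All Pi's are
    # the tadpole-improved (hatted) self-energies defined above, to be
    # provided externally; m0_sq is the tree-level light Higgs mass
    # squared in the HET.
    gsq   = gY**2 + g2**2
    dPiZZ = PiZZ_HET_0 - PiZZ_SM_0

    # eq. (vexp): match v via the Z self-energies at p^2 = 0
    v_SM_sq = v_HET**2 + 4.0 / gsq * dPiZZ

    # quartic matching via pole-mass equality, expanded in HET parameters
    lam_SM = (m0_sq + Pihh_HET_m0 - Pihh_SM_m0
              - 4.0 * m0_sq / (v_HET**2 * gsq) * dPiZZ) / (2.0 * v_HET**2)
    return v_SM_sq, lam_SM

# purely illustrative numbers, only to show the call signature:
v2, lam = match_lambda_SM(125.0**2, 246.0, 0.36, 0.65,
                          120.0, 100.0, 900.0, 700.0)
\end{verbatim}
The design point stressed above is visible here: $v$ and $\lambda$ are matched in one step through the pole-mass equality, with no iteration and no separate matching of mass-matrix eigenvalues.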
\subsection{NMSSM, $\mu$NMSSM and GNMSSM} The superpotential of the most general form of the NMSSM -- the GNMSSM -- is \cite{Ellwanger:2009dp,Ross:2012nr} \begin{align} W_{\mathrm{GNMSSM}} &= Y_u\, Q \cdot H_u\, U - Y_d\, Q \cdot H_d\, D - Y_e\, L \cdot H_d\, E + \frac{1}{3}\, \kappa\, S^3 + ( \mu + \lambda\, S) H_u\cdot H_d + \xi\, S + \frac{1}{2}\, \mu_S\, S^2 \nonumber \end{align} and the supersymmetry-breaking terms in the Higgs sector are \begin{align} V_{\rm soft} &\supset m_S^2\, \lvert S\rvert^2 + m_{H_u}^2\, \lvert H_u\rvert^2 + m_{H_d}^2 \lvert H_d\rvert^2 \nonumber\\ &\quad + \bigg( B_\mu\, H_u \cdot H_d + T_\lambda\, S\, H_u \cdot H_d + \frac{1}{3}\, T_\kappa\, S^3 + \frac{1}{2}\, B_S\, S^2 + \xi_S\, S + \text{h.\,c.} \bigg)\,. \end{align} Once the singlet develops an expectation value, we can write effective terms \begin{align} \mu_{\rm eff} &\equiv \mu + \frac{1}{\sqrt{2}}\, \lambda\, v_S\,, & B_{\rm eff} &\equiv B_\mu + \frac{1}{\sqrt{2}}\, T_\lambda\, v_S + \lambda\left(\xi + \frac{1}{\sqrt{2}}\, \mu_S\, v_S + \frac{1}{2}\, \kappa\, v_S^2\right) \end{align} and the tadpole equations become \begin{subequations} \begin{align} 0 &= - B_{\rm eff} \cot \beta + m_{H_u}^2 + \mu_{\rm eff}^2 - \frac{M_Z^2}{2}\, c_{2\beta} + \frac{1}{2}\, \lambda^2\, v^2\, c_\beta^2\,,\\ 0 &= - B_{\rm eff} \tan \beta + m_{H_d}^2 + \mu_{\rm eff}^2 + \frac{M_Z^2}{2}\, c_{2\beta} + \frac{1}{2}\, \lambda^2\, v^2\, s_\beta^2\,,\\ 0 &= v_S \left( B_S + m_{S}^2 + \mu_S^2 + 2\, \kappa\, \xi \right) + \frac{1}{\sqrt{2}}\, v_S^2 \left( T_\kappa + 3\,\kappa\, \mu_S\right) + \kappa^2\, v_S^3 \nonumber\\ &\quad + \sqrt{2}\,\mu_S\,\xi + \sqrt{2}\,\xi_S + \frac{1}{2\,\sqrt{2}}\,v^2 \Big( 2\,\lambda\, \mu_{\rm eff} - \left(T_\lambda + 2\, \kappa\, \lambda\, v_S + \mu_S\, \lambda\right) s_{2\beta} \Big)\,. \end{align} \end{subequations} The first two lines are essentially modified versions of the MSSM tadpole equations with an extra term from the $\lambda$ coupling. The third line, however, is the crucial one for our discussion. In a general non-supersymmetric theory, we can redefine singlet fields to remove their tadpole terms. However, in the GNMSSM, which has tadpole parameters $\xi$ in the superpotential and $\xi_S$ in the soft-breaking terms, we can only remove one of these, or the combination $\sqrt{2}\, \mu_S\, \xi + \sqrt{2}\, \xi_S$. Clearly in the GNMSSM, it is most logical to choose a linear combination of the singlet tadpole terms $\xi$ and $\xi_S$ (or just one) as the variable to be eliminated by the tadpole equations. However, this is not possible in the NMSSM or $\mu$NMSSM, since these terms vanish by the assumption of (at least partial) $\mathbb{Z}_3$ symmetry. Then aside from $(m_{H_u}^2,m_{H_d}^2)$ or $(\mu,B_\mu)$, the dimensionful parameter that we can now choose for elimination via the singlet tadpole equation is one of $\big\{m_S^2,\mu_{\rm eff},T_\lambda,T_\kappa\big\}$. We are interested in the case that the singlet is rather heavier than the SM-like Higgs, so that \mbox{$v^2\big/m_S^2 \ll 1$}. This is clearly at best problematic in the NMSSM, since $\mu_{\rm eff}, B_{\rm eff} \propto v_S$, so if we imagine $v_S \sim $ GeV we will have very light higgsinos, pseudoscalar and charged Higgs bosons, and difficulties solving the tadpole equations. 
Hence we turn to the $\mu$NMSSM, where we neglect all terms that break the $\mathbb{Z}_3$~symmetry except for $\mu$ and~$B_\mu$, and find \begin{align} v_S &\simeq - v^2 \left(\frac{ 2\,\lambda\, \mu - T_\lambda\, s_{2\beta}}{2\,\sqrt{2}\, m_S^2 }\right), \end{align} where the true value can be found numerically. The logical choice for this case is to solve for $T_\lambda$. In this case we have \begin{align} \Delta T_\lambda &= - \frac{2\,\sqrt{2}}{v^2\,s_{2\beta}}\, \frac{\partial \Delta V}{\partial v_S}\,, \end{align} and the terms in the mass matrix become \begin{align} \mathcal{M}^2_{h_u^0 h_u^0} &\supset -\frac{v_S\, t_S^{(1)}}{v^2\, s_\beta\, c_\beta} + \ldots \propto \frac{t_S^{(1)}}{m_S^2}\,, & \mathcal{M}^2_{h_u^0 s_R} &= - \frac{m_S^2\, v_S + t_S^{(1)}}{v\, s_\beta} + \ldots\,. \end{align} Note that this is in the ``flavour basis'' before we diagonalise the fields at tree level, so the contributions to the light Higgs and heavy singlet masses are $\propto t_S^{(1)}\big/m_S^2$\,. On the other hand, this choice leads to a (potentially very) large quantum correction to $T_\lambda$. Suppose we want to investigate gauge-mediation scenarios where trilinears are small (nearly vanishing), or are otherwise specified by the top-down inputs -- for such cases this choice would be completely inappropriate. Furthermore, we would have to take into account not only shifts in the masses but also in the \emph{couplings} -- this is moderately cumbersome to implement at one loop, but much more so if we want to compute the two-loop corrections. Indeed, it is not included in the algorithm to generate ``consistent vacuum equations'' of \citeres{Kumar:2016ltb,Braathen:2016cqe}, which assumes that the parameters for which we solve the tadpole equations only affect scalar masses. To solve both of these issues, the simplest choice is to solve for $m_S^2$; this leads to exactly the same problem as in the toy model, namely that the corrections to the singlet mass scale as $t_S^{(1)}\big/v_S$, leading to numerical instabilities for tiny $v_S$. Hence this model is an excellent prototype for comparing the different approaches to solving the tadpole equations. \subsection{Numerical comparison of tadpole schemes} In the $\mu$NMSSM and GNMSSM, we not only have a Higgs sector, but also squarks, sleptons, a gluino and electroweakinos. In particular the colourful states have a large impact on the mass of the light Higgs, and, when they are heavy enough to be safe from current collider searches, they cause the ``little hierarchy problem'' to manifest itself. If we try to apply our modified tadpole scheme directly to these models, then we find all of the problems associated with this little hierarchy in our Higgs-mass calculation. Therefore it is only sensible to use EFT matching for the light Higgs mass. In this section we shall endeavour to show that with such an approach we can solve the technical difficulties with computing the masses of both light and heavy Higgs bosons. We shall present here numerical investigations of several scenarios of the $\mu$NMSSM and GNMSSM illustrating the differences between the two approaches to the treatment of tadpoles, both using EFT matching.
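As a rough orientation for the scenarios that follow, the suppression $v_S \propto v^2\big/m_S^2$ can be checked with a short stand-alone script, independent of the \texttt{SARAH}\xspace/\texttt{SPheno}\xspace toolchain. The sketch below is our own illustration: the couplings echo the $\mu$NMSSM scenario of section~\ref{sec:numeric_munmssm}, while $m_S=2\text{ TeV}$ is an assumed value. It solves the tree-level singlet tadpole equation of the $\mu$NMSSM numerically and compares the root with the leading approximation quoted above: \begin{verbatim}
import numpy as np

# Illustrative inputs; m_S is an assumed value, the couplings echo the
# mu-NMSSM scenario of the text (lambda = kappa = 0.1, T_lambda = 200 GeV, ...)
v, tanb = 246.0, 10.0              # EW vev [GeV], tan(beta)
lam, kap = 0.1, 0.1                # superpotential couplings
Tlam, Tkap = 200.0, -10.0          # soft trilinears [GeV]
mu, mS2 = 100.0, 2000.0**2         # mu-term [GeV], soft singlet mass^2 [GeV^2]
s2b = 2.0*tanb/(1.0 + tanb**2)     # sin(2 beta)

def tadpole_S(vS):
    """Tree-level singlet tadpole of the mu-NMSSM (Z3 broken only by mu, Bmu)."""
    mueff = mu + lam*vS/np.sqrt(2.0)
    return (mS2*vS + Tkap*vS**2/np.sqrt(2.0) + kap**2*vS**3
            + v**2/(2.0*np.sqrt(2.0))*(2.0*lam*mueff
                                       - (Tlam + 2.0*kap*lam*vS)*s2b))

# Leading approximation from the text, then a Newton refinement of the root
vS = vS_approx = -v**2*(2.0*lam*mu - Tlam*s2b)/(2.0*np.sqrt(2.0)*mS2)
for _ in range(20):
    eps = 1.0e-6
    slope = (tadpole_S(vS + eps) - tadpole_S(vS - eps))/(2.0*eps)
    vS -= tadpole_S(vS)/slope

print(f"vS (approx) = {vS_approx:.8f} GeV,  vS (numerical) = {vS:.8f} GeV")
\end{verbatim} For such inputs the Newton iteration, seeded by the approximate expression, converges after a few steps, illustrating how strongly $v_S$ is suppressed for a multi-TeV singlet.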
For this, we compare results obtained using the original version of the \texttt{SPheno}\xspace code obtained directly from \texttt{SARAH}\xspace (for the model \texttt{SMSSM}), as well as with a version of the {\tt Fortran} output extensively modified according to the prescriptions described in section \ref{SEC:POLEMATCHING}.\footnote{This private code is not intended for public release, although it is available on request from the authors. The new functionality should eventually be made available in a future release of \texttt{SARAH}\xspace.} In these calculations we must refer the reader again to our disclaimer: we shall compare parameter points that generate the same \emph{tree-level spectrum} in the two schemes, but that differ from each other at higher order, because this provides the clearest illustration of the problems faced (namely how to even define the parameter point). In contrast to the toy model, we will give no examples with a complete conversion of parameters, \textit{i.\,e.}\xspace~a comparison of both calculations at the same point, since the actual procedure of converting between the schemes is too onerous for technical reasons. In the \texttt{SARAH}\xspace/\texttt{SPheno}\xspace code, while a numerical solution of the tadpole equations (required for providing \ensuremath{\ov{\mathrm{MS}}}\xspace input to the standard scheme) is in principle possible, it is laborious and not implemented for loop computations where the variable to solve for is the vacuum expectation value.\footnote{This development in \texttt{SARAH}\xspace is envisioned in the future.}\enlargethispage{1.6ex} Therefore, again we take the tree-level value of $v_S$ as input for the modified scheme, and treat it as the ``all-orders'' expectation value in the standard scheme (with consistent tadpoles), thus ensuring the same tree-level spectrum, but potentially vastly different results at one loop due to the large corrections to $m_S^2$ in the standard scheme. Again we stress that this is typical of the ambiguity in defining a parameter point that the phenomenologist is invited to suffer, thanks to the expedient in the standard scheme of hiding loop corrections in the definition of the expectation values. In section~\ref{sec:numeric_munmssm} we give an example of the above reasoning in the $\mu$NMSSM. For illustration, in section~\ref{sec:numeric_gnmssm} we also give examples in the GNMSSM where we solve the tadpoles for the same variables ($m_{H_u}^2$, $m_{H_d}^2$, $m_S^2$), which allows us to compare several different scenarios. \subsubsection{$\mu$NMSSM} \label{sec:numeric_munmssm} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{plots/scmuNMSSM_Mh123_vs_vS_logscale.pdf}\\[-2ex] \caption{$M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of $v_S$ in a scenario of the $\mu$NMSSM. The other inputs are taken as follows: $\lambda=\kappa=0.1$, $T_\lambda=200\text{ GeV}$, $T_\kappa=-10\text{ GeV}$, $\mu=100\text{ GeV}$, $B_\mu=6\cdot 10^5\text{ GeV}^2$. Tree-level values are shown with green curves, while the red and blue curves correspond to the pole masses computed at one loop, respectively with the standard and modified approaches to the tadpoles. The colour coding of the lines remains the same for all figures in this section.
} \label{FIG:scmunmssm_vs} \end{figure} In figure~\ref{FIG:scmunmssm_vs}, we present the behaviour of the three CP-even mass eigenvalues -- \textit{i.\,e.}\xspace~the lightest Higgs mass $M_{h_1}$ (left side) and the masses of the additional CP-even states $M_{h_2}$ and $M_{h_3}$ (right side) -- as a function of the singlet vev $v_S$ in a $\mu$NMSSM scenario, where the underlying parameters are given in the caption. The tree-level values are shown in green, while the one-loop results using the standard and the modified treatments of tadpoles are in red and blue respectively. We consider here a low range of values for $v_S$, so that, following our discussion in the previous section, we expect the standard approach to perform poorly for the singlet-like mass eigenstate. This is indeed what we observe if we turn to the right-side plot: for lower $v_S$ ($\lesssim 0.1$--$1\text{ GeV}$) the singlet-like scalar is the heaviest eigenstate~$h_3$, while after level crossing it is $h_2$ for larger $v_S$. For the entire range of $v_S$ the mass corrections in the standard approach are huge, and they grow as large as~50~TeV for $v_S=0.001\text{ GeV}$ -- \textit{i.\,e.}\xspace~250\% of the tree-level result! On the other hand, if we look instead at the lightest Higgs boson $h_1$, we find that the radiative corrections are somewhat larger with the modified treatment of the tadpole diagrams, and increase significantly with $v_S$ in this scenario -- due to the contributions from the tadpole diagram with a relatively large value of~$T_\lambda=200\text{ GeV}$ and a relatively small tree-level mass of the singlet-like state. \subsubsection{GNMSSM} \label{sec:numeric_gnmssm} While the $\mu$NMSSM provides an excellent prototype for the case of a heavy singlet with a small expectation value, where we cannot hide the loop corrections in a tadpole term, it is a subset of the GNMSSM, in which we can find more varied scenarios exhibiting the same behaviour. Of course, this is with the proviso that (with less justification in general) we restrict ourselves to solving the tadpole equations for $m_S^2$. We have devised three types of scenarios: \begin{itemize} \item Scenario 1: large singlet vev and intermediate $\lambda$; \item Scenario 2: small singlet vev and small $\lambda$; \item Scenario 3: small singlet vev but large $\lambda$. \end{itemize} Table~\ref{TAB:GNMSSMscenarios} summarises the values taken for the BSM input parameters relevant for \texttt{SPheno}\xspace\ -- note that we have adjusted the soft terms $m_0$ (scalar mass) and $A_0$ (scalar trilinear coupling) in order to obtain a mass for the lightest Higgs boson within the interval $[123~\ensuremath{\mathrm{GeV}}\xspace,127~\ensuremath{\mathrm{GeV}}\xspace]$. We should also emphasise that the numbers in table~\ref{TAB:GNMSSMscenarios} are given to \texttt{SPheno}\xspace as high-scale inputs (as this only requires a limited set of values). We then convert these into low-scale input parameters using the standard version of the $\mu$NMSSM \texttt{SPheno}\xspace code, and the plots presented in the following are obtained by varying one of the low-scale inputs. In light of the analytic expressions in the previous section, we can expect the two approaches to the tadpoles to give relatively similar results in scenario~\hyperref[TAB:GNMSSMscenarios]{1}, where the singlet vev is large.
However, in scenarios~\hyperref[TAB:GNMSSMscenarios]{2} and~\hyperref[TAB:GNMSSMscenarios]{3}, the singlet vev is taken to be small, so that the differences between the two schemes should be more pronounced. Scenario~\hyperref[TAB:GNMSSMscenarios]{3} furthermore allows us to investigate the effect of increasing the coupling~$\lambda$. \begin{table}[t!] \begin{center} \begin{tabular}{ |r@{\,}l|c|c|c| } \hline \multicolumn{2}{|c|}{Scenario} & 1 & 2 & 3 \\ \hline $m_0$ & [GeV] & 2000 & 1500 & 1500 \\ $\lambda$ & & $0.1^\dagger$ & 0.01 & 0.15 \\ $\kappa$ & & 0.005 & 0.05 & 0.05 \\ $T_\lambda$ & [GeV] & 1000 & $1000^\dagger$ & $7500^\dagger$ \\ $v_S$ & [GeV] & 3000 & $1.0^\dagger$ & $1.0^\dagger$ \\ $\mu$ & [GeV] & 500 & 200 & 200 \\ $\mu_S$ & [GeV] & 0 & --200 & --200 \\ $\xi$ & $\big[\ensuremath{\mathrm{GeV}}\xspace^2\big]$ & $1.0\cdot 10^8$ & $1.7\cdot 10^6$ & $5.0\cdot 10^4$ \\ $B_\mu$ & $\big[\ensuremath{\mathrm{GeV}}\xspace^2\big]$ & $2.0\cdot 10^5$ & $1.0\cdot 10^6$ & $4.0\cdot 10^5$ \\ \hline \end{tabular} \end{center} \caption{\label{TAB:GNMSSMscenarios} Definitions of the input parameters in the considered GNMSSM scenarios. Some of the BSM parameters are not modified, and remain the same for the three scenarios. Namely, we take: $\tan\beta=10$, $m_{12}=2~\ensuremath{\mathrm{TeV}}\xspace$, $A_0=3~\ensuremath{\mathrm{TeV}}\xspace$, $B_0=0$, $m_A=500~\ensuremath{\mathrm{GeV}}\xspace$, $T_\kappa=-0.5~\ensuremath{\mathrm{GeV}}\xspace$. The renormalisation scale is kept at $Q=3~\text{TeV}$ for all computations. Finally, the numbers marked with a ``$\dagger$'' are varied for some of the parameter scans. } \end{table} \begin{figure} \centering \vspace{-7ex} \includegraphics[width=\textwidth]{plots/sc1_Mh123_vs_lambda3.pdf}\\[-3ex] \caption{\label{FIG:GNMSSM_sc1_lambda}$M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of $\lambda$, in scenario 1. The other inputs are taken as in table~\ref{TAB:GNMSSMscenarios}. Tree-level values are shown with green curves, while the red and blue curves correspond to the pole masses computed at one loop, respectively with the standard and modified approaches to the tadpoles. } \vspace{2ex} \capstart \includegraphics[width=\textwidth]{plots/sc2_Mh123_vs_Alamb3.pdf}\\[-3ex] \caption{\label{FIG:GNMSSM_sc2_Tlam}$M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of the soft trilinear coupling $T_\lambda$, in scenario 2. The values of the other BSM parameters are taken as in table~\ref{TAB:GNMSSMscenarios}. } \vspace{2ex} \capstart \includegraphics[width=\textwidth]{plots/sc2_Mh123_vs_vS3.pdf}\\[-3ex] \caption{\label{FIG:GNMSSM_sc2_vS}$M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of $v_S$, in scenario 2. Input values for the other BSM parameters are given in table~\ref{TAB:GNMSSMscenarios}. } \end{figure} We show first in figure~\ref{FIG:GNMSSM_sc1_lambda} the behaviour of the lightest Higgs mass $M_{h_1}$ (left side) and of the additional CP-even Higgs-boson masses $M_{h_2}$ and $M_{h_3}$ (right side) as a function of the superpotential coupling $\lambda$. In this figure, among the two BSM states $h_2$ and $h_3$, the former is singlet-like while the latter is doublet-like. As can be seen in the right-hand side plot of figure~\ref{FIG:GNMSSM_sc1_lambda}, the heavy Higgs bosons receive only minute mass corrections in either of the approaches to the tadpoles. For the lightest scalar mass $M_{h_1}$, the results in the two schemes are also in excellent agreement.
However, we have cut off the plot before $\lambda =0.19$ because beyond this value perturbativity is lost: in the \emph{standard approach} the singlet-like pseudoscalar Higgs becomes tachyonic at one loop (from a tree-level mass of $750$ GeV!). If we continued the plot into this regime we would see the predictions diverging, with the standard approach predicting ever decreasing masses and the modified approach increasing ones for larger $\lambda$ (compare $104$ GeV and $138$ GeV respectively for $\lambda =0.3$). Next, we turn to scenario~\hyperref[TAB:GNMSSMscenarios]{2}, \textit{i.\,e.}\xspace~we consider a small $\lambda=0.01$ and a small singlet vev $v_S=1~\ensuremath{\mathrm{GeV}}\xspace$. Figure~\ref{FIG:GNMSSM_sc2_Tlam} shows the behaviour of the CP-even masses as a function of the soft trilinear coupling~$T_\lambda$, at tree level and one loop (the colouring of the curves is the same as previously explained). We should emphasise that we have made sure to fulfil the constraints from vacuum stability (and the absence of a charge-breaking minimum) on $T_\lambda$ -- see Ref.~\cite{Hollik:2018yek} -- and the tree-level mass of the charged Higgs boson remains positive for the entire range of $T_\lambda$ investigated here. While for $M_{h_1}$ (left side) and $M_{h_2}$ (lower curves of the right-side plot) it seems essentially impossible to distinguish the two approaches to the tadpole treatment, the radiative corrections to $M_{h_3}$ -- the mass of the singlet-like scalar -- are clearly much larger with the standard method, and the result of the modified scheme is certainly more reliable. As a concrete comparison, for the intermediate value $T_\lambda=2~\text{TeV}$ we find a one-loop correction to~$M_{h_3}$ of 2752~GeV (\textit{i.\,e.}\xspace~24\% of the tree-level result) in the standard approach, but only of --4.5 GeV in the modified scheme. We can confirm that the large difference between the two treatments of the tadpoles arises from the small value of the singlet vev $v_S$. Indeed, in figure~\ref{FIG:GNMSSM_sc2_vS}, we present the same three CP-even scalar masses for $v_S$ varying between 0.5 and 100 GeV. One can observe that the results using both approaches are in good agreement for all three masses for large values of the singlet vev. A short comment should be made for $M_{h_1}$: indeed, as $v_S$ increases the results from the two schemes seem to grow apart, and it is somewhat difficult to determine which one should be trusted more in this case. We note that the radiative corrections to $M_{h_1}$ keep increasing with $v_S$ in the standard approach, while their size remains relatively stable in the modified scheme. On the other hand, towards small singlet vevs, $v_S\sim 0.5~\ensuremath{\mathrm{GeV}}\xspace$, the breakdown of the standard approach becomes obvious. Indeed, considering the different results for the mass $M_{h_3}$ of the CP-even singlet-like scalar at $v_S=0.5~\ensuremath{\mathrm{GeV}}\xspace$, the one-loop corrections in the standard scheme amount to 6.5 TeV -- in other words, 40\% of the tree-level result -- compared to only --3.3 GeV (--0.02\% of the tree-level mass) in the modified scheme. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{plots/sc3_Mh123_vs_Tlamb.pdf}\\[-3ex] \caption{\label{FIG:GNMSSM_sc3_Tlam} $M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of $T_\lambda$, in scenario 3. The other BSM inputs are taken as in table~\ref{TAB:GNMSSMscenarios}.
} \vspace{2ex} \capstart \includegraphics[width=\textwidth]{plots/sc3_Mh123_vs_vS.pdf}\\[-3ex] \caption{\label{FIG:GNMSSM_sc3_vS} $M_{h_1}$ (\textit{left}) and $M_{h_2}$ and $M_{h_3}$ (\textit{right}) as a function of $v_S$, in scenario 3. The values of the other relevant inputs are given in table~\ref{TAB:GNMSSMscenarios}. } \end{figure} Lastly, we consider scenario~\hyperref[TAB:GNMSSMscenarios]{3}, \textit{i.\,e.}\xspace~what happens if we keep a small singlet vev $v_S=1~\ensuremath{\mathrm{GeV}}\xspace$ but increase the coupling $\lambda$ to 0.15. In figure~\ref{FIG:GNMSSM_sc3_Tlam}, we present the CP-even scalar masses as a function of $T_\lambda$ -- having once again made sure to maintain vacuum stability~\cite{Hollik:2018yek}. Considering first the masses of the two doublet-like scalars $h_1$ and $h_2$, we observe an excellent agreement of the results from the two tadpole schemes for low to intermediate values of $T_\lambda$ -- for $0\leq T_\lambda \lesssim 4~\text{TeV}$. However, as $T_\lambda$ becomes larger, the corrections to $M_{h_1}$ and $M_{h_2}$ in the modified approach start growing out of control. This appears similar to the loss of accuracy of the modified scheme that we encountered in the toy model of section~\ref{SEC:TOYMODEL} when increasing the trilinear coupling $a_{SH}$, which plays the same role as $T_\lambda$ -- see eq.~\eqref{EQ:DMS_mod} and figure~\ref{FIG:TM_modworse}. Turning however to the singlet-like mass $M_{h_3}$, we find (as in figure~\ref{FIG:GNMSSM_sc2_Tlam} for scenario~\hyperref[TAB:GNMSSMscenarios]{2}) that the radiative corrections are huge with the standard treatment of tadpoles, but remain well-behaved with the modified one. Interestingly, the increase in the value of $\lambda$ has not made the breakdown of the standard calculation for the singlet-like mass more severe than in scenario~\hyperref[TAB:GNMSSMscenarios]{2}. Nevertheless, the one-loop result for $M_{h_3}$ using the modified tadpole scheme is undoubtedly \mbox{more reliable here.} Finally, we present in figure~\ref{FIG:GNMSSM_sc3_vS} the behaviour of the CP-even scalar masses as a function of the singlet vev $v_S$ -- restricting our attention to the low range $0.5~\ensuremath{\mathrm{GeV}}\xspace\leq v_S\leq 5~\ensuremath{\mathrm{GeV}}\xspace$. As can be read from table~\ref{TAB:GNMSSMscenarios}, we have chosen for this figure a large value of the soft trilinear coupling $T_\lambda=7.5~\text{TeV}$, which corresponds to the right parts of the plots in figure~\ref{FIG:GNMSSM_sc3_Tlam}. Therefore, it is not surprising that we observe some discrepancy between the results of the two tadpole schemes for all three masses, as discussed above. More interestingly, we can compare the size of the loop corrections to $M_{h_3}$ in the two approaches, as we vary $v_S$. On the one hand, in the standard approach, the one-loop corrections increase from 2.3 TeV (19\% of the tree-level result) for $v_S=2.5~\ensuremath{\mathrm{GeV}}\xspace$ to as much as 9 TeV (40\% of the tree-level mass) for $v_S=0.75~\ensuremath{\mathrm{GeV}}\xspace$, for instance. On the other hand, in the modified scheme, the effects remain minute and vary from --46 GeV for $v_S=2.5~\ensuremath{\mathrm{GeV}}\xspace$ to --3.6 GeV for $v_S=0.75~\ensuremath{\mathrm{GeV}}\xspace$ (this amounts to --0.38\% and --0.02\% of the results at tree level, respectively). \section{\label{SEC:CONCLUSIONS}Conclusions} We have shown the advantages and limitations of taking a different prescription for the solution of the tadpole equations.
In contrast to previous applications of this technique, in the SM or as a measure of fine-tuning, we have shown that it can be very useful when new scalars having a small expectation value are present in the theory; in the case that they are much heavier than the electroweak scale, it is best employed via the matching of pole masses in an EFT approach. While this technique offers the advantages of perturbative stability for the heavy scalar masses, easy generalisability (the corrections are simply computed diagrammatically rather than via taking derivatives of the tadpole equations) and gauge invariance, it can also lead to numerical instabilities in extracting the \emph{light} Higgs mass, and to the loss of the ability to match the electroweak expectation value. In future work, other than a general numerical implementation in \texttt{SARAH}\xspace, it would be interesting to explore a hybrid approach (along the lines of option 1 described at the end of the introduction), where only the electroweak expectation value is fixed by appropriate counterterms. On the other hand, we intend to consider the corrections at two loops in this approach, and we shall also provide general expressions for the one-loop self-energies which are explicitly gauge independent. \section*{Acknowledgements} We would like to thank Sven Heinemeyer and Pietro Slavich for interesting discussions. MDG acknowledges support from the grant \mbox{``HiggsAutomator''} of the Agence Nationale de la Recherche (ANR) (ANR-15-CE31-0002). JB is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306. The work of SP is supported by the BMBF Grant No. 05H18PACC2. This project received support from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN. \newpage \bibliographystyle{h-physrev}
\section{Introduction} Modified and extended theories of gravity \cite{clifton} play an important role in the description of the cosmological observations \cite{Teg,Kowal,Komatsu}. In particular, the gravitational Action Integral is modified by introducing geometric invariants. The result of this modification is that new degrees of freedom enter the gravitational field equations, driving the dynamics away from that of General Relativity (GR) and thus providing a better theoretical prediction for the observations \cite{cc1}. The Lagrangian density of Einstein's General Relativity is based on the Ricci scalar, $R$, which is defined by the symmetric Levi-Civita connection. The simplest modification of the Einstein-Hilbert Action is the introduction of a function $f$ of this scalar. This leads to the so-called $f(R)$-theory of gravity \cite{Buda}. The Levi-Civita connection is not the unique choice which can be applied in a gravitational theory. From a more general connection one can define the fundamental scalar invariants of the curvature, $R$, the torsion, $T$, and the non-metricity, $Q$. The nature of the physical space depends on the invariant which is used to define the gravitational Action Integral \cite{be1}. Indeed, for the Levi-Civita connection, where only the curvature invariant survives, the resulting theory is General Relativity. On the other hand, for the curvature-less Weitzenb{\"{o}}ck connection \cite{Weitzenb23}, we end up with the Teleparallel Equivalent of General Relativity \cite{Hayashi79,Maluf:1994ji}. In addition, for a gravitational theory defined by a torsion-free, flat connection we obtain the Symmetric Teleparallel Equivalent of General Relativity \cite{Nester:1998mp}. While these three theories admit the same field equations, this is not true for their modifications. Indeed, the modified $f$-theories, for instance the $f\left( T\right)$-teleparallel theory \cite{f7} and the $f\left( Q\right)$-symmetric teleparallel theory \cite{f6}, are quite distinct from the $f\left( R\right)$-theory. In this work we are interested in the existence of cosmological solutions for the $f\left( Q\right)$-symmetric teleparallel theory. There is a plethora of studies in the literature on $f\left( Q\right)$-theory. Some exact analytic solutions are presented in \cite{f15}. Power-law functions of $f(Q)$ and the values of the parameters that they admit have been studied in \cite{Capo}. The $f\left( Q\right)$-theory as a dark energy model is investigated in \cite{ff1,ff2,Chee}. A recent work on the properties of the effective fluid due to the non-metricity can be found in \cite{Tsagas}. Some anisotropic spacetimes were studied in \cite{ww1,ww2,Esposito,Heis1,ww4,ww5}. The Hamiltonian analysis for the theory was performed in \cite{qq1}, while the quantization process in the case of cosmology was presented in \cite{qq2}. Wormhole solutions in the context of $f(Q)$ gravity are given in \cite{Banerjee}. Non-minimal couplings to matter have been considered in \cite{Harko2}, generalizations including the trace of the energy-momentum tensor are found in \cite{Harko,Shiravand}, and studies on observational constraints are given in \cite{Ferreira,Ayuso}. An important characteristic of $f\left( Q\right)$-theory is the use of a flat connection, which implies the existence of affine coordinates in which all its components vanish, turning covariant derivatives into partial derivatives (the coincident gauge).
Thus, in $f(Q)$-theory it is possible to separate gravity from the inertial effects. Hence, the construction of the $f\left( Q\right)$-theory forms a novel starting point for various modified gravity theories. It also presents a simple formulation in which self-accelerating solutions arise naturally in both the early and late Universe. Although the coincident gauge is always achievable through an appropriate coordinate transformation, extra care is needed when we a priori adopt a specific coordinate system through an ansatz for the space-time metric \cite{Zhao}. This is because, when starting from a particular line element, e.g. a Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) metric expressed in spherical coordinates, we have already partially fixed the gauge. Thus, the connection may not become zero in the coordinate system which we have already assumed for the metric. In particular, for the FLRW case it has been shown \cite{Heis1,Hohmann} that there exist four distinct possible connections that are compatible with its isometries; three for the spatially flat case and one when the spatial curvature is not zero. One of the three connections of the spatially flat case becomes zero when we transform the metric to Cartesian coordinates; the rest assume their coincident gauge form in totally different coordinate systems, which lead to metrics having non-diagonal terms. In this work we derive the field equations in FLRW geometry, with or without spatial curvature, for all four existing forms of the symmetric, flat connection. We prove the existence of interesting analytic solutions in all of the cases involving the different connections and study specific examples. The plan of the paper is as follows: In Section \ref{sec2} we present the basic properties and definitions of $f\left( Q\right)$-theory. In Section \ref{sec3}, we present the symmetric connections which are compatible with the FLRW geometry in spherical coordinates. For the spatially flat FLRW geometry, the field equations are derived in Section \ref{sec4}. Exact solutions are determined for the latter and we show that inflationary solutions exist. The case of a non-zero spatial curvature is considered in Section \ref{sec5}. Finally, in Section \ref{sec6} we draw our conclusions. \section{Preliminaries} \label{sec2} In metric-affine gravitational theories, the basic dynamical objects are the metric $g_{\mu\nu}$ and the connection $\Gamma^\kappa_{\; \mu\nu}$. The fundamental tensors that can be constructed with the help of these objects are the curvature $R^\kappa_{\;\lambda\mu\nu}$, the torsion $\mathrm{T}^\lambda_{\mu\nu}$ and the non-metricity $Q_{\lambda\mu\nu}$, whose components are given respectively by \begin{align} R^\kappa_{\;\lambda\mu\nu} & = \frac{\partial \Gamma^\kappa_{\;\lambda \nu}}{\partial x^\mu} - \frac{\partial \Gamma^\kappa_{\;\lambda \mu}}{\partial x^\nu} + \Gamma^\sigma_{\; \lambda \nu} \Gamma^\kappa_{\; \mu\sigma} - \Gamma^\sigma_{\; \lambda \mu} \Gamma^\kappa_{\; \nu \sigma} \\ \mathrm{T}^\lambda_{\mu\nu} & = \Gamma^{\lambda}_{\; \mu\nu} - \Gamma^{\lambda}_{\; \nu\mu} \\ Q_{\lambda\mu\nu} & = \nabla_{\lambda} g_{\mu\nu} = \frac{\partial g_{\mu\nu}}{\partial x^\lambda} - \Gamma^\sigma_{\; \lambda\mu} g_{\sigma\nu} - \Gamma^\sigma_{\; \lambda\nu} g_{\mu\sigma}. \end{align} In the above relations $\nabla_\mu$ is used to denote the covariant derivative with respect to the affine connection $\Gamma^\kappa_{\; \mu\nu}$ and $x^\mu$ are the coordinates on the manifold.
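Since the conventions for these tensors vary in the literature, it can be useful to have the above definitions in directly executable form. The following minimal \texttt{sympy} sketch (our own cross-check, not taken from the references) implements them for a nontrivial symmetric connection, namely the flat connection written in spherical coordinates which will reappear in Section \ref{sec3} (eqs. \eqref{common}--\eqref{con1} with $\gamma=0$), paired with a spatially flat FLRW metric; it confirms that the curvature vanishes identically while the non-metricity does not: \begin{verbatim}
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
X = [t, r, th, ph]
a = sp.Function('a')(t)
# spatially flat FLRW metric in spherical coordinates, N = 1
g = sp.diag(-1, a**2, a**2*r**2, a**2*r**2*sp.sin(th)**2)

# G[k][m][n] = Gamma^k_{mn}; symmetric, hence torsion-free by construction
G = [[[sp.S(0)]*4 for _ in range(4)] for _ in range(4)]
G[1][2][2] = -r; G[1][3][3] = -r*sp.sin(th)**2
G[2][1][2] = G[2][2][1] = G[3][1][3] = G[3][3][1] = 1/r
G[2][3][3] = -sp.sin(th)*sp.cos(th)
G[3][2][3] = G[3][3][2] = sp.cos(th)/sp.sin(th)

def Riem(k, l, m, n):   # R^k_{lmn} as defined above
    expr = sp.diff(G[k][l][n], X[m]) - sp.diff(G[k][l][m], X[n])
    for s in range(4):
        expr += G[s][l][n]*G[k][m][s] - G[s][l][m]*G[k][n][s]
    return sp.simplify(expr)

def Qnm(l, m, n):       # Q_{lmn} = nabla_l g_{mn}
    expr = sp.diff(g[m, n], X[l])
    for s in range(4):
        expr -= G[s][l][m]*g[s, n] + G[s][l][n]*g[m, s]
    return sp.simplify(expr)

print(all(Riem(k, l, m, n) == 0 for k in range(4) for l in range(4)
          for m in range(4) for n in range(4)))  # True: flat connection
print(Qnm(0, 1, 1))     # 2*a*Derivative(a, t): non-metricity survives
\end{verbatim} Being symmetric in its lower indices, this connection is automatically torsion-free, so only the flatness needs to be verified explicitly.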
In the case of a symmetric connection the torsion is zero, $\mathrm{T}^\lambda_{\mu\nu}=0$. This, together with the condition $Q_{\lambda\mu\nu}=0$, results in the well-known metric theories of gravity. In an attempt to consider theories outside the scope of (pseudo-)Riemannian geometry, more general connections are taken into account, which lead to the torsion and/or the non-metricity being non-zero. In the theory of symmetric teleparallelism and its modifications, in which we are interested in this work, the flatness, $R^\kappa_{\;\lambda\mu\nu}=0$, and the torsionless, $\mathrm{T}^\lambda_{\mu\nu}=0$, conditions are imposed, leaving only $Q_{\lambda\mu\nu}\neq 0$. The basic geometric scalar of the theory is defined as \begin{equation} \label{defQ} Q= Q_{\lambda\mu\nu} P^{\lambda\mu\nu}, \end{equation} where $P^\lambda_{\;\mu\nu}$ is used for the components of the non-metricity conjugate tensor \begin{equation} \label{defP} P^\lambda_{\;\mu\nu}= - \frac{1}{4} Q^{\lambda}_{\; \mu\nu} + \frac{1}{2} Q^{\phantom{(\mu} \lambda \phantom{\nu)}}_{(\mu \phantom{\lambda} \nu)} + \frac{1}{4} \left(Q^\lambda- \bar{Q}^\lambda\right) g_{\mu\nu} -\frac{1}{4} \delta^{\lambda}_{\; (\mu}Q_{\nu)}, \end{equation} which is written with the help of the traces $Q_\mu= Q_{\mu \nu}^{\phantom{\mu\nu} \nu}$ and $\bar{Q}_\mu= Q^{\nu}_{\phantom{\nu} \mu \nu}$. The $\delta^{\mu}_{\;\nu}$ in \eqref{defP} is the Kronecker delta and the parentheses in the indices denote the usual symmetrization, i.e. $A_{(\mu\nu)}=\frac{1}{2} \left(A_{\mu\nu}+A_{\nu\mu}\right)$. The non-metricity scalar $Q$ of \eqref{defQ} is defined in such a way that, when taken as a Lagrangian density, the theory which it produces is dynamically equivalent to Einstein's general relativity. Non-linear generalizations of symmetric teleparallelism involve a gravitational Lagrangian density which is characterized by a generally non-linear function $f(Q)$ \begin{equation}\label{action} S = \frac{1}{2} \int d^4x \sqrt{-g} f(Q) + \int d^4x \sqrt{-g} \mathcal{L}_M + \int d^4x \sqrt{-g} \left( \lambda_{\kappa}^{\; \lambda\mu\nu} R^{\kappa}_{\; \lambda\mu\nu} + \tau_{\lambda}^{\; \mu\nu} \mathrm{T}^\lambda_{\;\mu\nu} \right). \end{equation} In the above action, $g=\mathrm{det}(g_{\mu\nu})$ is the determinant of the space-time metric, $\mathcal{L}_M$ is the matter fields' Lagrangian density, while $\lambda_{\kappa}^{\; \lambda\mu\nu}$ and $\tau_{\lambda}^{\; \mu\nu}$ are Lagrange multipliers, whose variation enforces the flatness and torsionless conditions $R^{\kappa}_{\; \lambda\mu\nu}=0=\mathrm{T}^\lambda_{\;\mu\nu}$. The above action, and the subsequent results in this work, are expressed in units $8\pi G=c=1$. Variation of \eqref{action} with respect to the metric results in \cite{Harko} \begin{equation}\label{feq1a} \frac{2}{\sqrt{-g}} \nabla_{\lambda}\left(\sqrt{-g} f'(Q) P^\lambda_{\; \mu\nu} \right) - \frac{1}{2}f(Q) g_{\mu\nu} + f'(Q) \left(P_{\mu\rho\sigma}Q_{\nu}^{\;\rho\sigma}- 2 Q_{\rho\sigma\mu}P^{\rho\sigma}_{\phantom{\rho\sigma}\nu}\right) = T_{\mu\nu}, \end{equation} where the primes are used to express derivation with respect to the argument, e.g. $f'(Q)=\frac{df}{dQ}$. The $T_{\mu\nu}=-\frac{2}{\sqrt{-g}} \frac{\partial \left(\sqrt{-g}\mathcal{L}_M\right)}{\partial g^{\mu\nu}}$ is the energy-momentum tensor emerging from the matter contribution in the action \eqref{action}.
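As a consistency check of the definitions \eqref{defQ} and \eqref{defP}, one can evaluate the non-metricity scalar directly for the spatially flat FLRW metric in Cartesian coordinates in the coincident gauge $\Gamma^\lambda_{\;\mu\nu}=0$. The short \texttt{sympy} sketch below (again a verification script of ours, with the index placement as in the text) reproduces $Q=-6\dot{a}^2/a^2$, in agreement with eq. \eqref{Qcon1} of Section \ref{sec4} for $N=1$: \begin{verbatim}
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
a = sp.Function('a')(t)
g = sp.diag(-1, a**2, a**2, a**2)   # flat FLRW, N = 1, Cartesian coordinates
gi = g.inv()

# coincident gauge: Q_{lmn} = d_l g_{mn}
Qt = [[[sp.diff(g[m, n], X[l]) for n in range(4)]
       for m in range(4)] for l in range(4)]

# traces Q_m = Q_{mn}^{ n} and Qbar_m = Q^n_{ mn}
Qtr  = [sum(gi[n, s]*Qt[m][n][s] for n in range(4) for s in range(4))
        for m in range(4)]
Qbar = [sum(gi[n, s]*Qt[s][m][n] for n in range(4) for s in range(4))
        for m in range(4)]

def P(l, m, n):                     # P^l_{ mn} of eq. (defP)
    t1 = -sum(gi[l, s]*Qt[s][m][n] for s in range(4))/4
    t2 = sum(gi[l, s]*(Qt[m][s][n] + Qt[n][s][m]) for s in range(4))/4
    t3 = sum(gi[l, s]*(Qtr[s] - Qbar[s]) for s in range(4))*g[m, n]/4
    t4 = -(sp.KroneckerDelta(l, m)*Qtr[n]
           + sp.KroneckerDelta(l, n)*Qtr[m])/8
    return t1 + t2 + t3 + t4

Q = sum(Qt[l][m][n]*gi[m, mm]*gi[n, nn]*P(l, mm, nn)
        for l in range(4) for m in range(4) for n in range(4)
        for mm in range(4) for nn in range(4))
print(sp.simplify(Q))               # -6*Derivative(a, t)**2/a**2
\end{verbatim}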
Variation with respect to the connection leads to the additional equations of motion \begin{equation}\label{feq2} \nabla_{\mu}\nabla_{\nu} \left( \sqrt{-g} f'(Q) P^{\mu\nu}_{\phantom{\mu\nu}\sigma} \right) =0 . \end{equation} The set of equations \eqref{feq1a} can assume the more convenient expression \cite{Zhao} \begin{equation} \label{feq1} f'(Q) G_{\mu\nu} + \frac{1}{2} g_{\mu\nu} \left( f'(Q) Q- f(Q) \right) + 2 f''(Q) \left(\nabla_{\lambda}Q\right) P^\lambda_{\; \mu\nu} = T_{\mu\nu}, \end{equation} where $G_{\mu\nu}= \tilde{R}_{\mu\nu}-\frac{1}{2}g_{\mu\nu} \tilde{R}$ is the usual Einstein tensor, with $\tilde{R}_{\mu\nu}$ and $\tilde{R}$ being the Riemannian Ricci tensor and scalar respectively (constructed with the Levi-Civita connection). In this form the equations for the metric allow a direct comparison with General Relativity (GR), since the (dynamical) deviation from the latter can be perceived as the effect of an effective energy-momentum tensor \begin{equation}\label{Teff} \mathcal{T}_{\mu\nu} = -\frac{1}{f'(Q)} \left[ \frac{1}{2} g_{\mu\nu} \left( f'(Q) Q- f(Q) \right) + 2 f''(Q) \left(\nabla_{\lambda}Q \right) P^\lambda_{\; \mu\nu} \right] . \end{equation} With the help of \eqref{Teff}, equation \eqref{feq1} becomes $G_{\mu\nu} = \mathcal{T}_{\mu\nu} + \frac{1}{f'(Q)}T_{\mu\nu}$, which reveals the role of $\mathcal{T}_{\mu\nu}$ as that of an energy-momentum tensor of geometric origin. From \eqref{Teff} it can be directly seen that $f(Q)\propto Q$ results in the same equations as GR, since this assumption leads to $\mathcal{T}_{\mu\nu}=0$. It is also obvious that the case $Q=$const. leads to solutions of GR plus a cosmological constant $\Lambda$, whose value is $\Lambda = \frac{1}{2}\left(Q- \frac{f(Q)}{f'(Q)}\right)$. As is well-known \cite{Eisenhart}, the flatness condition, $R^\kappa_{\;\lambda\mu\nu}=0$, implies that there exists a coordinate system in which the connection becomes zero, $\Gamma^\lambda_{\;\mu\nu}=0$. This is usually referred to as the coincident gauge. However, special care is needed when the equations of motion are considered after a partial gauge fixing at the level of the metric. For example, when we take a FLRW space-time or a static and spherically symmetric manifold, there is the possibility that the gauge in which $\Gamma^\lambda_{\;\mu\nu}=0$ is realised is incompatible with the coordinate system in which the metric is expressed, and this may lead to unnecessary restrictions in the equations of motion \cite{Zhao,Heis1}. Another interesting point, mentioned in \cite{Eisenhart}, is that all flat spaces are necessarily Riemannian; this, however, in the sense that, if a connection $\Gamma^\lambda_{\;\mu\nu}$ is flat, leading to $R^\kappa_{\;\lambda\mu\nu}=0$, then there must exist some metric $\bar{g}_{\mu\nu}$ for which the $\Gamma^\lambda_{\;\mu\nu}$ are its Christoffel symbols. Of course, in our case, the $g_{\mu\nu}$ we consider is completely dissociated from this $\bar{g}_{\mu\nu}$ and independent of the connection, thus allowing us to have $Q_{\lambda\mu\nu}\neq 0$. In symmetric teleparallel gravity and its modifications, for a matter content which is minimally coupled to the metric, the conservation law $T^{\mu}_{\phantom{\mu}\nu;\mu}=0$ holds for the matter energy-momentum tensor. The semicolon ``$;$'' here is used to denote the covariant derivative with respect to the Christoffel symbols.
The $T^{\mu}_{\phantom{\mu}\nu;\mu}=0$ relation holds by virtue of equation \eqref{feq2} for the connection, which in itself can also be perceived as a conservation law for the theory \cite{Koiv}. \section{FLRW space-time} \label{sec3} We start by writing the FLRW line element, which in spherical coordinates reads \begin{equation} \label{genlineel} ds^2 = - N(t)^2 dt^2 + a(t)^2 \left[ \frac{dr^2}{1-k r^2} + r^2 \left( d\theta^2 + \sin^2\theta d\phi^2 \right) \right] . \end{equation} In \cite{Hohmann,Heis2} the general form of all connections compatible with \eqref{genlineel} that lead to a vanishing curvature tensor has been derived. This has been done by enforcing on a generic connection the six Killing symmetries of \eqref{genlineel}, plus the demand that it satisfies $R^\kappa_{\;\lambda\mu\nu}=0$. The six isometries of \eqref{genlineel}, associated with the isotropy and the homogeneity of space, are given in the above coordinates by \begin{equation} \label{Kil1} \zeta_1 = \sin\phi \partial_\theta + \frac{\cos\phi}{\tan\theta} \partial_\phi, \quad \zeta_2 = -\cos\phi \partial_\theta + \frac{\sin\phi}{\tan\theta} \partial_\phi, \quad \zeta_3 = - \partial_\phi \end{equation} and \begin{equation} \label{Kil2} \begin{split} \xi_1 & = \sqrt{1-k r^2}\sin\theta \cos\phi \partial_r + \frac{\sqrt{1-k r^2}}{r} \cos\theta \cos\phi \partial_\theta - \frac{\sqrt{1-k r^2}}{r} \frac{\sin\phi}{\sin\theta} \partial_\phi \\ \xi_2 & = \sqrt{1-k r^2}\sin\theta \sin\phi \partial_r + \frac{\sqrt{1-k r^2}}{r} \cos\theta \sin\phi \partial_\theta + \frac{\sqrt{1-k r^2}}{r} \frac{\cos\phi}{\sin\theta} \partial_\phi \\ \xi_3 & = \sqrt{1-k r^2} \cos\theta \partial_r - \frac{\sqrt{1-k r^2}}{r} \sin\theta \partial_\theta . \end{split} \end{equation} The Lie derivative of an affine connection with respect to a vector $X$ is calculated to be \cite{Bardeen} \begin{equation} \mathcal{L}_X \Gamma^\mu_{\;\kappa\lambda} = X^\sigma \frac{\partial \Gamma^\mu_{\;\kappa\lambda}}{\partial x^\sigma} + \Gamma^\mu_{\;\sigma\lambda} \frac{\partial X^\sigma}{\partial x^\kappa} + \Gamma^\mu_{\;\kappa\sigma} \frac{\partial X^\sigma}{\partial x^\lambda} - \Gamma^\sigma_{\;\kappa\lambda} \frac{\partial X^\mu}{\partial x^\sigma} + \frac{\partial^2 X^\mu}{\partial x^\kappa \partial x^\lambda} . \end{equation} The requirement $R^\kappa_{\;\lambda\mu\nu}=0$, in conjunction with $\mathcal{L}_X \Gamma^\mu_{\;\kappa\lambda}=0$, where $X$ is any of the $\zeta_i$ or $\xi_i$ ($i=1,2,3$), leads to the following possibilities: \begin{itemize} \item \textbf{Spatially flat case $k=0$}. There are three admissible connections. The common non-zero components that all three have are the following: \begin{equation} \label{common} \begin{split} & \Gamma^r_{\;\theta\theta}=-r, \quad \Gamma^r_{\;\phi\phi}=-r \sin^2\theta, \\ & \Gamma^\theta_{\; r\theta}= \Gamma^\theta_{\;\theta r}=\Gamma^\phi_{\; r\phi} =\Gamma^\phi_{\;\phi r}= \frac{1}{r}, \quad \Gamma^\theta_{\;\phi\phi}=-\sin\theta \cos\theta, \quad \Gamma^\phi_{\; \theta\phi}=\Gamma^\phi_{\; \phi\theta} = \cot\theta. \end{split} \end{equation} However, they do differ in the way a free function of time enters in some of their other components. The first connection has only one additional non-zero component \begin{equation} \label{con1} \Gamma^t_{\;tt} = \gamma(t), \end{equation} where $\gamma(t)$ is a function of the time variable $t$.
The second connection has the following extra non-zero components \begin{equation} \label{con2} \Gamma^t_{\;tt} = \frac{\dot{\gamma}(t)}{\gamma(t)} + \gamma(t), \quad \Gamma^r_{\;tr}= \Gamma^r_{\;rt} =\Gamma^\theta_{\;t\theta}= \Gamma^\theta_{\;\theta t}= \Gamma^\phi_{\;t\phi}= \Gamma^\phi_{\;\phi t} =\gamma(t), \end{equation} where the dot denotes differentiation with respect to $t$. Finally, the third connection has the additional components \begin{equation} \label{con3} \Gamma^t_{\;tt} = -\frac{\dot{\gamma}(t)}{\gamma(t)}, \quad \Gamma^t_{\;rr} = \gamma(t), \quad \Gamma^t_{\;\theta\theta} = \gamma(t) r^2, \quad \Gamma^t_{\;\phi\phi} = \gamma(t) r^2 \sin^2\theta . \end{equation} The first connection, consisting of \eqref{common} and \eqref{con1}, when $\gamma(t)=0$, becomes itself zero when transformed from spherical to Cartesian coordinates. In other words, it corresponds to the coincident gauge in the latter coordinate system \cite{Zhao}. \item \textbf{Case of non-zero spatial curvature $k\neq0$}. Here, the following connection is obtained (listing again only the non-zero components): \begin{equation}\label{conk1} \begin{split} & \Gamma^t_{\;tt} = - \frac{k+\dot{\gamma}(t)}{\gamma (t)}, \quad \Gamma^t_{\;rr} = \frac{\gamma (t)}{1-k r^2}, \quad \Gamma^t_{\;\theta\theta} = \gamma(t) r^2 , \quad \Gamma^t_{\;\phi\phi} = \gamma (t) r^2 \sin ^2(\theta ), \\ & \Gamma^r_{\;tr}= \Gamma^r_{\;rt}= \Gamma^\theta_{\;t\theta}=\Gamma^\theta_{\;\theta t} =\Gamma^\phi_{\;t\phi}= \Gamma^\phi_{\;\phi t} = -\frac{k}{\gamma(t)}, \quad \Gamma^r_{\;rr} = \frac{k r}{1-k r^2}, \quad \Gamma^r_{\;\theta\theta} = -r \left(1- k r^2\right), \\ & \Gamma^r_{\;\phi\phi} = - r \sin ^2(\theta ) \left(1-k r^2\right), \quad \Gamma^\theta_{\;r\theta}=\Gamma^\theta_{\;\theta r} = \Gamma^\phi_{\;r\phi} = \Gamma^\phi_{\;\phi r} = \frac{1}{r}, \quad \Gamma^\theta_{\;\phi\phi} = -\sin\theta\cos\theta, \quad \Gamma^\phi_{\;\theta\phi} =\Gamma^\phi_{\;\phi\theta} = \cot\theta . \end{split} \end{equation} As is obvious, when $k=0$, the above connection yields the third one from the previous set, consisting of \eqref{common} and \eqref{con3}. This connection has also been presented previously in various works \cite{Zhao,Hohmann,Heis2}.\footnote{For the convenience of the reader, and to avoid possible confusion, we just mention that there is a minor typo in the expression of the connection \eqref{conk1} as given in \cite{Heis2}. The authors there use $\chi=\sqrt{1-k r^2}$ inside the connection, when actually, as we see from \eqref{conk1}, it should be $\chi^2=1-k r^2$.} \end{itemize} \section{Spatially flat case} \label{sec4} It is usual in the literature of $f(Q)$ theory to study the cosmological aspects of a spatially flat FLRW line element in Cartesian coordinates, $ds^2= -N^2 dt^2 +a(t)^2 (dx^2+dy^2+dz^2)$, and in the coincident gauge $\Gamma^{\mu}_{\;\kappa\lambda}=0$. In the spherical coordinates, where the line element is given by \eqref{genlineel}, this corresponds to taking the first connection with $\gamma=0$. Here, we are interested in seeing how all possible connections may affect the dynamics and in comparing the obtained solutions to what happens in General Relativity. \subsection{First connection} We start by considering the connection $\Gamma^{\mu}_{\;\kappa\lambda}$ whose non-zero components are given by \eqref{common} and \eqref{con1}. The emerging dynamics is equivalent to that of the coincident gauge, since the function $\gamma$ does not appear in the resulting expression for $Q$.
The latter is obtained, from the definition \eqref{defQ}, to be \begin{equation}\label{Qcon1} Q = - \frac{6\dot{a}^2}{N^2 a^2} = - 6 H^2. \end{equation} In the above relation we have used $H = \frac{\dot{a}}{N a}$ for the Hubble function as expressed in the time gauge where the lapse is $N(t)$. When we are in the cosmic time gauge, $N=1$, we of course obtain the well-known $H=\frac{\dot{a}}{a}$. At this point however, we shall avoid fixing the gauge, since we are going to utilize this freedom later on, in order to simplify the process of obtaining solutions from the field equations. Another point that needs to be made is that, in our conventions, the non-metricity scalar $Q$, as can be seen from \eqref{Qcon1}, is negative (or possibly zero). This is because we defined it as $Q=Q_{\lambda\mu\nu} P^{\lambda\mu\nu}$. There are works in the literature where $Q$ is taken as $Q=-Q_{\lambda\mu\nu} P^{\lambda\mu\nu}$, which yields a positive $Q$. This is similar to what happens in General Relativity, where there exist two equivalent definitions for the Riemann curvature differing only in an overall sign. We mention this point so that it can be taken into account when comparing with other works in the literature using a different convention, and thus avoid any confusion. The equations of motion for the connection, \eqref{feq2}, are identically satisfied, while those for the metric, Eqs. \eqref{feq1}, are independent of $\gamma(t)$ and are equivalent to \begin{subequations} \label{feq11} \begin{align} \label{feq11a} & \frac{3 \dot{a}^2}{N^2 a^2} f'(Q) + \frac{1}{2} \left( f(Q) - Q f'(Q) \right) = \rho \\ \label{feq11b} & -\frac{2}{N} \frac{d}{dt} \left( \frac{f'(Q) \dot{a}}{N a} \right) - \frac{3 \dot{a}^2}{N^2 a^2} f'(Q) - \frac{1}{2} \left(f(Q)- Q f'(Q)\right) = p, \end{align} \end{subequations} where we have used $T^{\mu}_{\phantom{\mu}\nu}= \mathrm{diag}(-\rho,p,p,p)$ for the energy-momentum tensor. Note that in the above equations we have not substituted $Q$ from its expression given in \eqref{Qcon1}. For consistency, it is easy to see that upon setting $f(Q)=Q-2\Lambda$, and considering the cosmic time gauge $N=1$, the above equations reduce to the well-known Friedmann equations (remember we work in units $8\pi G=c=1$) \begin{align} & \frac{\dot{a}^2}{a^2} = \frac{1}{3} \left( \rho + \Lambda \right) \\ & \frac{\ddot{a}}{a} = \frac{\Lambda}{3} - \frac{1}{2} \left(p+\frac{\rho}{3} \right) . \end{align} Thus, General Relativity is recovered for a linear $f(Q)$ function, as expected. But let us return to the generic problem described by \eqref{feq11}. As a first observation, it is easy to see that in the case of vacuum, $p=\rho=0$, the two equations combined result in the constraint \begin{equation} \left(f(Q)-2 Q f'(Q)\right) \dot{Q} f''(Q) =0 . \end{equation} This is obtained if one solves \eqref{feq11a} algebraically with respect to the lapse and substitutes the latter in \eqref{feq11b}. From the above relation we distinguish three possibilities: i) First, we get a theory with $f(Q)\propto \sqrt{-Q}$, for which all equations are identically satisfied. If we were to write the minisuperspace Lagrangian which produces \eqref{feq11} as its Euler--Lagrange equations, we would see that $f(Q)\propto \sqrt{-Q}$ turns the Lagrangian into a total derivative, i.e. the action is a pure surface term and as a result the equations are trivially satisfied.
ii) The non-metricity scalar is constant, $Q=$const., which, as previously mentioned, is equivalent to having General Relativity with a cosmological constant. This leads to the known de-Sitter solution, $N=1$, $a=e^{\pm \sqrt{\frac{\Lambda}{3}}t}$, with $\Lambda$ acquiring the value $\Lambda = \frac{1}{2}\left(Q- \frac{f(Q)}{f'(Q)}\right)$. iii) Finally, there is the possibility that $f(Q)$ is linear in $Q$, which again yields the relativistic solutions (either de-Sitter or the flat space). So, we see that in the context of the first connection there are no vacuum solutions outside General Relativity, not unless we consider $f(Q)\propto \sqrt{-Q}$, which however gives rise to infinitely many solutions, thus stripping the theory of any predictability. The situation changes with the consideration of matter. Let us consider a perfect fluid with the typical linear barotropic equation of state $p=w \rho$. It has been shown that the equations give rise to the same continuity equation as the one emerging in metric theories of gravity \cite{Harko} \begin{equation} \dot{\rho} +\frac{3\dot{a}}{a} \left( \rho+p \right) =0. \end{equation} The above leads to the well-known solution $\rho=\rho_0 a^{-3(1+w)}$, with $\rho_0$ being the integration constant. By solving equation \eqref{feq11a} with respect to the lapse, we obtain \begin{equation} \label{sol1N} N(t) = \pm \frac{\dot{a}}{a} \left( \frac{6 f'(Q)}{2\rho+ Q f'(Q)-f(Q)} \right)^{\frac{1}{2}}. \end{equation} With its substitution in the second equation, \eqref{feq11b}, the latter becomes \begin{equation} \left(2 \rho_0 - a^{3(1+ w)} \left(f(Q)-2 Q f'(Q)\right)\right) \dot{Q} f''(Q)=0 . \end{equation} Assuming that we want to encounter solutions that are distinguishable from General Relativity, we need to consider $\dot{Q}\neq0 $ and $f''(Q)\neq 0$ (later we are going to see what happens if a posteriori we set $f(Q)=Q$ in our result). The above relation can be simply solved algebraically with respect to the scale factor (as long as $w\neq-1$), leading to \begin{equation} \label{sol1a} a(t) = \left( \frac{2 \rho_0}{f(Q)-2 Q f'(Q)}\right)^{\frac{1}{3(1+ w)}}. \end{equation} Of course we consider that $f(Q)$ is not proportional to $\sqrt{-Q}$, so the denominator cannot be zero. Up to now we have not made use of the freedom of fixing the time gauge. We may choose to set $Q=-t$, which will give us straightforwardly $N(t)$ and $a(t)$ from \eqref{sol1N} and \eqref{sol1a} for any given $f(Q)$ theory. We insert the minus sign for simplicity because, as we already mentioned, in our definition $Q$ is negative; thus, by setting $Q=-t$ the solution will be valid on the positive half line $t\in\mathbb{R}_+$ (if we had set $Q=t$ we would need to consider $t\in\mathbb{R}_{-}$). We thus see that, in the chosen time gauge, we are able to express the solution in terms of elementary functions, assuming of course that $f(Q)$ is also such a function. In order to check the validity of the result, by testing if it can be connected to a relativistic solution, let us substitute $f(Q)=Q$ into \eqref{sol1N} and \eqref{sol1a}. At the same time let us make the gauge fixing choice $Q=-t$. With these substitutions, equations \eqref{sol1N} and \eqref{sol1a} result in \begin{align} \label{ex1N} N(t) &= \pm \sqrt{\frac{2}{3}} \frac{1}{(1+w)t^{\frac{3}{2}}} \,, \\ a(t) &= \left(\frac{2 \rho_0}{t}\right)^{\frac{1}{3(1+w)}}. \end{align} We may recognize this solution if we transform it into the cosmic time gauge, where $N(\tau)=1$.
From now on we shall use $\tau$ to denote the time in that gauge. We thus want to make a mapping $t\rightarrow \tau$ that yields $N(\tau)=1$. From the transformation law of the lapse function, we have \begin{equation} \label{gentimetr} N(t)dt = N(\tau) d\tau \Rightarrow \int\!\! N(t)dt = \tau + C . \end{equation} For the $N(t)$ given by \eqref{ex1N}, we get \begin{equation} \sqrt{\frac{2}{3}} \frac{1}{(1+w)t^{\frac{3}{2}}} dt = d\tau \Rightarrow \sqrt{\frac{2}{3}} \frac{1}{(1+w)}\int\!\! \frac{1}{t^{\frac{3}{2}}} dt = \tau , \end{equation} where, in order to simplify our considerations, we just chose to use the positive branch of \eqref{ex1N} and set the integration constant, $C$, on the right-hand side of the expression \eqref{gentimetr} equal to zero. Solving for $t$ as a function of $\tau$ we obtain \begin{equation} t= \frac{8}{3 (1+w)^2 \tau^2}. \end{equation} With the use of this mapping from $t$ to $\tau$ we are led to $N(\tau)=1$, while the scale factor becomes \begin{equation} a(\tau) = \left(\frac{3 \rho_0(1+w)^2}{4}\right)^{\frac{1}{3(1+w)}} \tau^{\frac{2}{3(1+w)}}. \end{equation} This is none other than the well-known perfect fluid solution of Einstein's equations $G_{\mu\nu}=T_{\mu\nu}$. Hence, we see that \eqref{sol1N} and \eqref{sol1a} truly retrieve the General Relativistic solution when $f(Q)=Q$. We now gather the pair that forms the solution \begin{subequations} \label{fsol1} \begin{align} N(t) & = \pm \left(-\frac{2}{3 Q}\right)^{\frac{1}{2}} \frac{\left(2 Q f''(Q)+f'(Q)\right) \dot{Q}}{(w+1)\left(f(Q)-2 Q f'(Q)\right)} \\ a(t) & = \left( \frac{2 \rho_0}{f(Q)-2 Q f'(Q)}\right)^{\frac{1}{3(1+ w)}} , \end{align} \end{subequations} which we can use to write the general line element that solves the equations, with a perfect fluid satisfying a linear barotropic equation of state, for any $f(Q)$ theory with a non-constant $Q$, \begin{equation} \label{lineel1} ds^2 = \frac{2 \left(2 Q f''(Q)+f'(Q)\right)^2}{3 (w+1)^2 Q \left(f(Q)-2 Q f'(Q)\right)^2} dQ^2 + \left( \frac{2 \rho_0}{f(Q)-2 Q f'(Q)}\right)^{\frac{2}{3(1+ w)}} \left(dr^2 + r^2 \left(d\theta^2 + \sin^2\theta d\phi^2\right)\right) . \end{equation} The $-Q$ assumes effectively the role of the time variable. Note that the solution is of Lorentzian signature for $Q<0$, which is consistent with the convention we use. Of course we need to note that, although we arrive at a specific metric that forms a solution, there exists a degeneracy due to the fact that the function entering the connection remains arbitrary. Thus, we are dealing in reality with infinitely many solutions, one for each distinct $\gamma(t)$ function. We also need to mention that the above solution can also be obtained through studying the inverse problem, i.e. deriving the matter content from the gravitational functions. Equations \eqref{feq11} can obviously be perceived as definitions for $\rho$ and $p$. Then, we need only observe that we can integrate \eqref{Qcon1} with respect to $a(t)$. This supplies us with $a$ as an integral of $N$ and $Q$: \begin{equation} \label{tchrisgen} a(t) = a_0 \exp\left(\int\!\! N\sqrt{-Q} dt \right). \end{equation} At this point, equations \eqref{feq11} can be considered solved (giving the necessary $\rho$ and $p$) and the corresponding scale factor is given just by \eqref{tchrisgen}. The $-Q$ plays again the role of the time variable, while $N$ is arbitrary. The latter will obtain a particular dependence if we decide to adopt some specific equation of state.
For example, if we set $p=w\rho$, with $\rho$ and $p$ given by \eqref{feq11} with the substitution of \eqref{tchrisgen}, then the relation $p=w\rho$ is algebraically solvable with respect to $N$, yielding again solution \eqref{fsol1}. One need not be restricted to setting $p=w\rho$; different equations of state can also be assumed, leading to distinct results. However, in this work, we just restrict our attention to the linear equation of state. At this point it is useful to consider some examples and see how \eqref{lineel1} can be used to derive conclusions about the evolution implied by some specific $f(Q)$ functions. \subsubsection{The \texorpdfstring{$f(Q)=Q + \alpha Q^\mu$}{f(Q)=Q+a Q\^{}m} example} From relations \eqref{fsol1}, or equivalently from \eqref{lineel1}, we may write the lapse and the scale factor, for a function $f(Q)=Q + \alpha Q^\mu$, in the time gauge $Q=-t$, as \begin{subequations} \label{solex1} \begin{align} \label{solex1N} N = & \pm \sqrt{\frac{2}{3}} \frac{t-\alpha \mu (2 \mu -1) (-t)^{\mu }}{t^{3/2} (w+1) \left(t-\alpha (2 \mu -1) (-t)^{\mu }\right)} \\ a = & \left[\frac{2 \rho_0}{t-\alpha (2 \mu -1) (-t)^{\mu }}\right]^{\frac{1}{3( w+1)}}. \end{align} \end{subequations} The corresponding energy density for the perfect fluid is given by \begin{equation} \label{solex1rho} \rho=\rho_0 a^{-3(1+w)}= \frac{1}{2} \left(t-\alpha (2 \mu -1) (-t)^{\mu }\right). \end{equation} For simplicity, we shall consider integer values for $\mu$, so that $\alpha$ is restricted to be real in order to have a real-valued $f(Q)=Q + \alpha Q^\mu$ function when $Q<0$. With this consideration, we may observe from \eqref{solex1} that, if we want a solution of Lorentzian signature, we need to set the restriction $t>0$. The expression $t-\alpha (2 \mu -1) (-t)^{\mu }$ that we see in the scale factor can be either positive or negative, with the additional condition that the constant $\rho_0$ must be of the same sign. By \eqref{solex1rho} however, we see that considering $t-\alpha (2 \mu -1) (-t)^{\mu } <0$ leads to a negative energy density $\rho$. Although there exist matter contents that give rise to such a negative energy density \cite{Nemiroff}, for our example let us consider the more usual case where $\rho>0$. Thus, in the end, we require \begin{equation} \label{exLor} t>0 \quad \text{and} \quad t-\alpha (2 \mu -1) (-t)^{\mu } >0. \end{equation} For some particular values of $\mu$, the behaviour of the Hubble function $H$ with respect to the scale factor can be extracted directly by algebraically solving the temporal field equation with respect to $H$. For example, for $\mu=2$ equation \eqref{feq11a} leads to \begin{equation} 54 \alpha H^4 - 3 H^2 + \rho_0 a^{-3 (w+1)} =0 . \end{equation} The latter is of course algebraically solvable with respect to $H$. In order to make use of \eqref{solex1}, we shall consider cases where such an easy algebraic derivation is not possible. To this end, let us first consider the $\mu=5$ case. For this choice, conditions \eqref{exLor} become $t>0$ and $9 \alpha t^5+t>0$ respectively. As a first step, let us take $\alpha>0$ to trivialize the second inequality and have $t$ run over the infinite half-line. In the time gauge we have chosen, where $Q=-t$, and since \eqref{Qcon1} holds, the Hubble function is simply $H = \sqrt{\frac{t}{6}}$. In order to obtain its functional behaviour in the cosmic time gauge, where $N=1$, we need first to calculate $\tau$ as a function of $t$ from \eqref{gentimetr}.
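The map $t\mapsto\tau$ can also be evaluated by direct numerical quadrature, which is how parametric curves such as those of Figure \ref{fig1} below can be reproduced for any admissible $f(Q)$. A minimal sketch for the $\mu=5$ case at hand (with the illustrative values $\alpha=1$, $w=1/3$, and with $\tau$ normalised so that $\tau\rightarrow 0$ as $t\rightarrow+\infty$, a choice we also adopt in the closed-form expression below) reads: \begin{verbatim}
import numpy as np
from scipy.integrate import quad

alpha, w = 1.0, 1.0/3.0    # illustrative coupling and equation-of-state value

def lapse(s):              # |N(t)| from eq. (solex1N) for mu = 5, alpha > 0
    return (np.sqrt(2.0/3.0)*(s + 45.0*alpha*s**5)
            /(s**1.5*(1.0 + w)*(s + 9.0*alpha*s**5)))

def tau(t):                # cosmic time, normalised so tau -> 0 as t -> oo
    val, _ = quad(lapse, t, np.inf)
    return val

for t in (0.01, 0.1, 1.0, 10.0):
    print(f"t = {t:6.2f}   tau = {tau(t):10.5f}   H = {np.sqrt(t/6.0):8.5f}")
\end{verbatim} Pairs $(\tau,H)$ generated in this way trace the same curves as the closed-form map derived next.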
If we choose the minus sign expression from \eqref{solex1N} (this is done, as we are going to see immediately afterwards, so as to map the function $\tau$ to the positive half-line) and consider that $\alpha$ is positive, then we obtain \begin{equation} \begin{split} \label{ex1time} \tau(t) = & \frac{2}{w+1}\sqrt{\frac{2}{3 t}}+ \frac{2^{\frac{3}{4}} \alpha^{\frac{1}{8}}}{3^{\frac{1}{4}} (w+1)} \Bigg\{ \sqrt{\sqrt{2}+1} \Bigg[ \arctan\left(\frac{\left(3-2 \sqrt{2}\right)^{\frac{1}{4}} \left(1-\sqrt{3} \alpha^{\frac{1}{4}} t\right)}{ 6^{\frac{1}{4}} \alpha^{\frac{1}{8}} \sqrt{t}}\right) \\ & + \mathrm{arctanh} \left(\frac{6^{\frac{1}{4}} \left(3+2 \sqrt{2}\right)^{\frac{1}{4}} \alpha^{\frac{1}{8}} \sqrt{t}}{\sqrt{3} \alpha^{\frac{1}{4}} t+1}\right) \Bigg] + \sqrt{\sqrt{2}-1} \Bigg[ \arctan\left(\frac{\left(3+2 \sqrt{2}\right)^{\frac{1}{4}} \left(1-\sqrt{3} \alpha^{\frac{1}{4}} t\right)}{ 6^{\frac{1}{4}} \alpha^{\frac{1}{8}} \sqrt{t}}\right) \\ &+ \mathrm{arctanh} \left(\frac{6^{\frac{1}{4}} \left(3-2 \sqrt{2}\right)^{\frac{1}{4}} \alpha^{\frac{1}{8}} \sqrt{t}}{\sqrt{3} \alpha^{\frac{1}{4}} t+1}\right)\Bigg] \Bigg\} - C . \end{split} \end{equation} We already discussed that $t\in (0,+\infty)$; at the one boundary value we have $\Lim{t\rightarrow 0^+} \tau(t) = +\infty$, while at the other, $t\rightarrow +\infty$, the limit of $\tau$ equals some finite value. The latter can be set to zero by choosing appropriately the integration constant $C$. In this case, this particular choice is $C=-\left(\frac{4 \sqrt{2}}{3}+2\right)^{1/4} \pi \alpha^{1/8} \left(1+w\right)^{-1}$. So now the above function maps the $t\in (0,+\infty)$ range of the solution \eqref{solex1} to $\tau \in (0,+\infty)$ in the cosmic time gauge. We can now use the function \eqref{ex1time} of the cosmic time to make parametric plots of $H(t)=\sqrt{\frac{t}{6}}$ as $H(\tau)$ for various values of the parameters. The process can be repeated for other values of $\mu$ as well. In Figure \ref{fig1} we show the parametric plot of $H(t)$ with respect to $\tau(t)$ for two different $f(Q)$ theories with $\mu=5$ and $\mu=6$, for a radiation matter content, $w=\frac{1}{3}$, and in each graph we also provide the corresponding Hubble function of General Relativity ($\alpha=0$) for comparison. Due to the complexity of the expression, we refrain from giving the $\tau(t)$ for $\mu=6$ here and just restrict ourselves to presenting the resulting parametric plot. We prefer to present the parametric plots $H(\tau)$ here instead of $H(a)$ since $\tau$ can serve as an ``absolute'' variable for comparison, in contrast to $a(\tau)$ whose evolution with respect to the time $\tau$ changes for different values of the parameters. Note also that in the graph for $\mu=6$ we have used a negative $\alpha$ parameter; this is so that the corresponding inequalities \eqref{exLor} are satisfied for $t\in (0,+\infty)$, as in our example of the $\mu=5$ case. We will return to this point a little later in our analysis. \begin{figure}[ptb] \includegraphics[width=1\textwidth]{Example1H.eps}\caption{Plots of the Hubble function in $f(Q)=Q + \alpha Q^\mu$ theory for $\mu=5$ and $\mu=6$ and for different orders of magnitude of the coupling constant $\alpha$ in the cosmic time, $\tau$, gauge. In both graphs the corresponding GR solution is displayed with the dotted $\alpha=0$ line. For the equation of state parameter we have considered $w=\frac{1}{3}$.
\label{fig1}} \end{figure} A first observation with respect to Figure \ref{fig1} is that at early times the evolution for different values of $\alpha$ is hardly distinguishable, but as the universe expands, values of $\alpha$ closer to zero tend to the General Relativity solution faster. As far as the $\mu$ value is concerned, we may conclude that, as it assumes higher values, it makes the departure from GR more prominent, since, for the same time values, it tends to give higher expansion rates. In Figure \ref{fig2}, we explore further the $\mu=5$ case for different values of $w$ (radiation, dust and stiff matter), always comparing with the relevant results from GR. It seems that the general functional behaviour of the Hubble function is similar in the two theories. However, the $\mu=5$ case gives higher expansion rates for the same values of the cosmic time, $\tau$, variable. \begin{figure}[ptb] \includegraphics[width=1\textwidth]{Example2H.eps}\caption{The behaviour of the Hubble function in $f(Q)=Q+\alpha Q^5$ theory and in General Relativity for various values of the perfect fluid equation of state constant $w$, with respect to the cosmic time $\tau$. The plots are given for $\alpha=1$. \label{fig2}} \end{figure} Another interesting observation that we can make is the possibility of obtaining bouncing solutions for appropriate values of the involved parameters. As we previously mentioned, in Fig. \ref{fig1}, we used positive values of $\alpha$ for $\mu=5$, while negative for $\mu=6$. The reason behind this choice is the following: Let us turn to inequalities \eqref{exLor} and consider that $2\mu-1>0$. Then for all odd integers $\mu$ the second inequality is satisfied for $t\in(0,+\infty)$ if $\alpha>0$ and for all even integers $\mu$ if $\alpha<0$. What happens however if we choose $\alpha$ in the opposite manner? The answer is that we get a bounded $t$ in a region $0<t< \frac{1}{(2\mu-1)\alpha}$ for $\mu$ even and $\alpha>0$ and in $0<t< \frac{-1}{(2\mu-1)\alpha}$ for $\mu$ odd and $\alpha<0$. It is for these values of the parameters that a bounce takes place, albeit a singular one. To illustrate how this happens, let us consider a simpler model, like $f(Q)=Q + \alpha Q^2$. The exponent $\mu=2$ is even, so we consider $\alpha>0$. The solution \eqref{solex1} becomes \begin{subequations} \label{solexbounce} \begin{align} \label{solexbounceN} N_{\pm} = & \pm \sqrt{\frac{2}{3}} \frac{1-6 \alpha t}{(w+1)\, t^{3/2} \left(1-3 \alpha t\right)} \\ \label{solexbouncea} a = & \left(\frac{2\rho_0}{t-3 \alpha t^2}\right)^{\frac{1}{3( w+1)}} . \end{align} \end{subequations} As explained before, due to $\alpha$ being positive, and in order for the solution to be Lorentzian, the time variable in this gauge is bounded in the region $0<t<\frac{1}{3\alpha}$. Notice however that there is a value in this region for which the lapse function, denoted here by $N_{\pm}$, becomes zero, i.e. $t=\frac{1}{6\alpha}$. This is a possibly problematic point of which we need to take care in the construction of $\tau(t)$. What we will do is split the range of the time variable in two parts, one with $t<\frac{1}{6\alpha}$ and another with $t\geq\frac{1}{6\alpha}$; the point $t=\frac{1}{6\alpha}$, as we are going to see, is a point of a (Riemannian) curvature singularity. According to the previous consideration we define the cosmic time as the following function \begin{equation} \label{proptbounce} \tau(t) = \int\!\!
N_{\pm} dt = \begin{cases} \frac{1}{3(w+1)} \left(\frac{2 \sqrt{6}}{\sqrt{t}} + 6 \sqrt{2} \sqrt{\alpha }\; \mathrm{arctanh}\left(\sqrt{3\alpha t}\right) \right) + C_+, & \mbox{if } 0<t<\frac{1}{6\alpha} \\ \frac{-1}{3(w+1)} \left(\frac{2 \sqrt{6}}{\sqrt{t}} + 6 \sqrt{2} \sqrt{\alpha } \; \mathrm{arctanh}\left(\sqrt{3\alpha t} \right) \right) + C_- , & \mbox{if } \frac{1}{6\alpha} \leq t<\frac{1}{3\alpha} , \end{cases} \end{equation} where for the constants of integration we have $C_\pm = \mp \frac{2\sqrt{\alpha}}{w+1} \left(2+\sqrt{2} \mathrm{arctanh}\left(\frac{1}{\sqrt{2}}\right)\right)$. This last choice has been made so that $\Lim{t\rightarrow\frac{1}{6\alpha}^-}\tau(t)= \Lim{t\rightarrow\frac{1}{6\alpha}^+}\tau(t)=0$. So, we see that the point $t=\frac{1}{6\alpha}$ corresponds to the origin of the cosmic time $\tau=0$. The limits at $t\rightarrow 0$ and $t\rightarrow \frac{1}{3\alpha}$ respectively yield $\tau \rightarrow +\infty$ and $\tau \rightarrow -\infty$. Hence, relation \eqref{proptbounce} defines a continuous one-to-one function that takes values in the whole real line. We notice that at the origin of the cosmic time $\tau=0$ (or equivalently at $t=\frac{1}{6\alpha}$) the scale factor given by \eqref{solexbouncea} assumes a non-zero minimum value. However, this is a (Riemannian) curvature singularity point, as can be easily seen from the Riemannian Ricci scalar, $\tilde{R} = \frac{t (3 \alpha t (3 w-5)-3 w+1)}{2(1-6 \alpha t)}$, corresponding to this solution. We need to also mention that the scale factor is a continuous function, but its first derivative, i.e. $\frac{da}{d\tau}=\frac{1}{N_{\pm}}\frac{da}{dt}$, has a discontinuity at $t=\frac{1}{6\alpha}$. This can also be seen in the graph that we present in Figure \ref{fig3}. The same can also be checked to be true for the second derivative as well. \begin{figure}[ptb] \includegraphics{Example3a.eps}\caption{Parametric plot that depicts the bouncing solution of the scale factor as a function of the cosmic time $\tau$, as defined by \eqref{proptbounce}. The graph shows the existing discontinuity in the first derivative. The values that have been used for the involved parameters are $\rho_0=\alpha=1$, $w=\frac{1}{3}$. \label{fig3}} \end{figure} Thus, for the range of parameters where $t$ is bounded, we obtain bouncing but singular solutions; singular, at least, from the perspective of the Riemannian scalars, since the non-metricity scalar $Q=-t$ remains finite as $t$ is bounded. The scale factor is continuous with a minimum non-zero value, but a discontinuity in its derivatives takes place at the origin. This same behaviour can also be derived for the cases $\mu=5$ and $\mu=6$ for the appropriate choices for the range of the parameter $\alpha$ (negative and positive respectively). Thus, we see a pattern forming in $f(Q)= Q + \alpha Q^\mu$ theory with a perfect fluid, where for each value of $\mu$ there exist two types of solutions depending on the sign of $\alpha$: one with a scale factor starting from zero and another where a non-zero minimum value and a bounce can be obtained, which however hides a discontinuity in the derivatives of the metric. We need to mention here that the possibility of bouncing solutions in $f(Q)$ theory has also been investigated in \cite{Myrzakulov}.
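The kink of Figure \ref{fig3} is easy to reproduce numerically from \eqref{proptbounce} and \eqref{solexbouncea}. A minimal numpy/matplotlib sketch follows, with the parameter values of the figure; the variable names are ours and purely illustrative.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

alpha, w, rho0 = 1.0, 1.0/3.0, 1.0        # parameter values of Fig. 3

def tau(t):
    # piecewise cosmic time of eq. (proptbounce); continuous at t = 1/(6 alpha)
    core = (2.0*np.sqrt(6.0)/np.sqrt(t)
            + 6.0*np.sqrt(2.0*alpha)*np.arctanh(np.sqrt(3.0*alpha*t)))/(3.0*(w + 1.0))
    C = 2.0*np.sqrt(alpha)/(w + 1.0)*(2.0 + np.sqrt(2.0)*np.arctanh(1.0/np.sqrt(2.0)))
    return np.where(t < 1.0/(6.0*alpha), core - C, -core + C)

def a(t):
    # scale factor of eq. (solexbouncea)
    return (2.0*rho0/(t - 3.0*alpha*t**2))**(1.0/(3.0*(1.0 + w)))

t = np.linspace(1e-4, 1.0/(3.0*alpha) - 1e-4, 4000)
plt.plot(tau(t), a(t))                     # bounce with a kink at tau = 0
plt.xlabel(r'$\tau$'); plt.ylabel(r'$a(\tau)$')
plt.show()
\end{verbatim}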
\subsubsection{The \texorpdfstring{$f(Q)=Q e^{\frac{q}{Q}}$}{f(Q)=Q exp(q/Q)} example} Another interesting choice of $f(Q)$ function has been proposed in \cite{Saridakis} and is of the form $f(Q)=Q e^{\frac{q}{Q}}$, where of course, for $q=0$, General Relativity is recovered. We work in a similar fashion to the previous example. The substitution of the $f(Q)$ function under investigation in \eqref{fsol1}, together with the adoption of the time gauge $Q=-t$, leads to the expressions \begin{subequations} \label{ex2sol} \begin{align} \label{ex2solN} N= & \pm \sqrt{\frac{2}{3}} \frac{t^2 + q t + 2 q^2}{t^{\frac{5}{2}}\left(w+1\right)\left(t+2 q\right)} \\ a= & \left(\frac{2 \rho_0 e^{\frac{q}{t}}}{t+2 q}\right)^{\frac{1}{3(1+w)}} . \end{align} \end{subequations} The corresponding energy density of the fluid in this case is $\rho = \frac{1}{2} e^{-\frac{q}{t}} (2 q+t)$. Following the same reasoning as before, by requiring a positive energy density and a Lorentzian solution, we are led to the restrictions $\rho_0>0$, \begin{equation} \label{ineqex2} t>0 \quad \text{and} \quad t + 2 q >0. \end{equation} The latter lead to two possibilities: one requires just $q>0$ and $t>0$, while the other is $q<0$ and $t>-2q$. The lapse does not become zero at any point in these ranges of values and the procedure followed for both cases is the same. For the cosmic time we use again definition \eqref{gentimetr}, which gives \begin{equation}\label{tauex2} \tau(t) = \frac{2}{3 (w+1)}\sqrt{\frac{2}{3}} \left( \frac{q}{t^{3/2}} - \frac{3 \arctan \left(\sqrt{\frac{t}{2q}}\right)}{\sqrt{2 q}} \right) - C. \end{equation} For the above expression we used the minus sign of \eqref{ex2solN} because, once more, this is what leads to a $\tau$ ranging from zero to plus infinity. The value of $C$ is set so that the limit of $\tau(t)$ at $t\rightarrow+\infty$, which is finite, becomes zero; thus, we obtain $C= -\frac{\pi}{w+1} \sqrt{\frac{1}{3 q}}$. For either of the two cases of \eqref{ineqex2}, the other limit leads to $\tau\rightarrow +\infty$. For example, in the first case, where $q>0$ and $t>0$, the limit of $t$ going to zero yields $\tau\rightarrow +\infty$, while in the second, where $q<0$ and $t>-2q$, the limit $t\rightarrow -2q$ yields again $\tau\rightarrow +\infty$. As a result, in both cases, the function \eqref{tauex2} takes values in the positive half-line. The Hubble function in this gauge is of course the same as before, $H=\sqrt{\frac{t}{6}}$. In Figure \ref{fig4}, we give the parametric plot of the latter with respect to $\tau(t)$ for various positive and negative values of $q$. \begin{figure}[ptb] \includegraphics{Example4H.eps}\caption{Parametric plot of the Hubble function with respect to the cosmic time for an equation of state parameter $w=\frac{1}{3}$. The first graph includes positive values of $q$ and the second negative. For comparison, the corresponding GR solution ($q=0$) is given by the dotted line. \label{fig4}} \end{figure} We notice that the deviation from the GR solution, which is depicted with the dotted line, becomes more pronounced at later times. We additionally observe some differences between the $q>0$ and $q<0$ cases. For negative values of $q$ we see in general higher expansion rates for the same order of magnitude of the parameter. For example, if we compare the graph for $q=10$ with that of $q=-10$, the latter leads to higher expansion rates for the same $\tau$.
Another difference is that for $q>0$ there appears to be a crossing from expansion rates lower than those of GR at early times to rates higher than those of GR at later instants of $\tau$; e.g. see how the line for $q=1$ crosses the $q=0$ line of General Relativity at a specific instant of time. The latter does not seem to happen for $q<0$, at least not for the values that are depicted in the graph. At this point we need to mention a limitation regarding these graphs. Due to the fact that $\tau\rightarrow 0$ corresponds to $t\rightarrow +\infty$, we cannot present plots that go arbitrarily close to $\tau = 0$, because that would require giving values to $t$ that go to infinity, which is practically impossible. \subsection{Second connection} Here, we assume that the non-zero components of the connection $\Gamma^\mu_{\kappa\lambda}$ are given by \eqref{common} and \eqref{con2}. From the latter set we see that $\gamma(t)$ cannot be zero. Unlike the previous case, this function now plays a dynamical role. The non-metricity scalar reads \begin{equation}\label{Qcon2} Q = - \frac{6 \dot{a}^2}{N^2 a^2}+ \frac{3\gamma}{ N^2} \left(\frac{3\dot{a}}{a} -\frac{\dot{N}}{N} \right) + \frac{3\dot{\gamma}}{N^2} \end{equation} and it clearly involves $\gamma$ in its expression. The field equations for the metric result in \begin{subequations} \label{feq12} \begin{align} \label{feq12a} & \frac{3 \dot{a}^2 f'(Q)}{a^2 N^2} +\frac{1}{2} \left(f(Q)-Q f'(Q)\right) + \frac{3 \gamma \dot{Q} f''(Q)}{2 N^2} = \rho, \\ \label{feq12b} & -\frac{2}{N} \frac{d}{dt} \left( \frac{f'(Q) \dot{a}}{N a} \right) - \frac{3 \dot{a}^2}{N^2 a^2} f'(Q) - \frac{1}{2} \left(f(Q)- Q f'(Q)\right) + \frac{3 \gamma \dot{Q} f''(Q)}{2 N^2} = p, \end{align} \end{subequations} while the one for the connection yields \begin{equation} \label{feq22} \dot{Q}^2 f'''(Q) + \left[\ddot{Q} + \dot{Q} \left( \frac{3 \dot{a} }{a}-\frac{\dot{N} }{N} \right) \right] f''(Q) =0 . \end{equation} In the previous case, we saw that the vacuum solutions become those of General Relativity with a cosmological constant and that the type of $f(Q)$ theory only affected the value of the effective cosmological constant. However, here, due to the dynamical involvement of the connection, we will see that solutions different from the de Sitter space can emerge in vacuum and that the choice of $f(Q)$ theory makes a difference in the resulting solution space. So, let us consider the vacuum case $p=\rho=0$, and as a base theory let us choose the function $f(Q)=Q^\mu$, which in the limit $\mu\rightarrow 1$ becomes General Relativity. Before proceeding, let us note that for a theory of the form $f(Q)=Q^\mu$, and as long as $\mu>2$, any combination of functions $a$, $N$ and $\gamma$ that results in $Q=0$ in \eqref{Qcon2} is trivially a solution of the equations. Due to the infinity of metrics that satisfy such a relation, we may again consider that this realization does not allow for making specific predictions about the results of the theory; thus, we shall refrain from considering this type of solution. In fact, the process that we follow below removes the possibility of arriving at solutions where $Q$ is a constant altogether. The constraint equation \eqref{feq12a} can be solved algebraically with respect to $N$. Once more we keep $Q$ as it is and we do not substitute it through expression \eqref{Qcon2}.
This is because we are again going to utilize the gauge fixing choice to make $Q$ a particular function of time, which significantly simplifies the resulting equations. Note that, from \eqref{Qcon2}, $Q$ is now not necessarily negative. So, we will just set it this time to be $Q(t)=t$, and from the end result we will later see for what domain of definition and for which range of the parameters we may have a solution of Lorentzian signature. By solving \eqref{feq12a} with respect to $N$ and substituting it inside the equation $Q=t$, where $Q$ is given by \eqref{Qcon2}, we obtain a differential equation that involves the second derivative of the scale factor \begin{equation} \label{secorda} \ddot{a}=\frac{1}{4} \left(\frac{(\mu -1) a^2 \left(t \dot{\gamma}-2 (\mu -1) \gamma\right)}{t^2 \dot{a}}+\frac{2 \dot{a} \left(2 t \dot{\gamma}+3(1-2 \mu ) \gamma\right)}{t \gamma }+\frac{8 (1-2 \mu ) \dot{a}^3}{(\mu -1) a^2 \gamma }+\frac{16 \dot{a}^2}{a}+\frac{6 (\mu -1) a \gamma}{t}\right). \end{equation} We use this equation to eliminate $\ddot{a}$ from \eqref{feq12b}, in which we have also substituted the expression for $N$ and set the gauge fixing choice $Q=t$. The result is a simple first order differential equation for $\gamma$ \begin{equation} 2 (2 \mu -1) t \dot{a}^2+(\mu -1) a^2 \left(2 (\mu -1) \gamma-t \dot{\gamma}\right) =0. \end{equation} This can be directly integrated to yield \begin{equation} \label{solgam} \gamma (t) = \frac{2(2\mu-1)t^{2(\mu-1)}}{(\mu-1)}\int\!\! t^{-2(\mu-1)}\frac{\dot{a}^2}{a^2} dt . \end{equation} Substitution of the above expression for $\gamma$ into \eqref{secorda} yields an integro-differential equation for the scale factor $a(t)$. However, if we isolate the integral term on one side and take the derivative of that expression, we obtain the following third order equation \begin{equation} \label{thirdord} \dddot{a} = -\frac{8 \dot{a}^3}{a^2}+\frac{2 \left((\mu -2) t \ddot{a} + (\mu -1) \dot{a}\right)}{t^2}+\frac{\dot{a} \left(9 t \ddot{a}-2 (\mu -5) \dot{a}\right)}{t a} . \end{equation} The problem of solving the previous equation can be addressed with the help of the theory of symmetries of differential equations. We avoid the technical details here and refer to well-known textbooks \cite{Olver,Stephani} for more information. We just mention that \eqref{thirdord} admits a three-dimensional algebra of Lie-point symmetry generators, two of which form an Abelian subalgebra. This implies that these can be used to generate a transformation which will both reduce the order of the equation and also make it autonomous. In our case this transformation is \begin{equation}\label{transf} a = e^{\frac{1}{6} (1-2 \mu ) \omega(s)+s}, \quad t = e^{\omega(s)}, \end{equation} where $\omega(s)$ and $s$ are the new dependent and independent variables that are going to replace $a(t)$ and $t$. Under the change of variables implied by \eqref{transf}, equation \eqref{thirdord} becomes \begin{equation}\label{tranthird} 3 \left(\frac{d^2\omega}{ds^2}\right)^2+ \frac{d\omega}{ds} \left(6 \frac{d^2\omega}{ds^2}-\frac{d^3\omega}{ds^3}\right) =0 . \end{equation} As we see, the new equation truly is autonomous, since now there is no explicit dependence on $s$, and it is also effectively a second order differential equation because no $\omega(s)$ term appears.
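Explicitly, in terms of the first derivative $u(s):=\frac{d\omega}{ds}$, equation \eqref{tranthird} assumes the second order form \begin{equation} 3 \left(\frac{du}{ds}\right)^2 + u \left(6 \frac{du}{ds} - \frac{d^2u}{ds^2}\right) = 0 , \end{equation} which makes the effective reduction of order manifest.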
The general solution of \eqref{tranthird} is \begin{equation} \label{solthird} \omega(s) = \lambda_3 + \lambda_2 \ln\left( \frac{1-\sqrt{A(s)} \sqrt{A(s)^2-3 A(s)+3}}{1+\sqrt{A(s)} \sqrt{A(s)^2-3 A(s)+3}} \right), \quad \text{where} \quad A(s) = 1 - \lambda_1 e^s \end{equation} and $\lambda_i$, $i=1,2,3$, are constants of integration. Of course, at this point, we need to use expression \eqref{solthird} in \eqref{transf} to obtain the resulting $a(t)$, which, together with the $\gamma(t)$ of \eqref{solgam}, are to be substituted in the original equations to see under which conditions they form a solution. This is necessary since we did not solve the actual equations, but another equation of higher order, so we expect to have in our expressions at least one redundant constant of integration. Through this process, we conclude that the constant of integration that results from the calculation of the integral in \eqref{solgam} needs to be set equal to zero. The desired triplet that we finally obtain is \begin{subequations} \label{sol2} \begin{align} \label{sol2N} N(t) & = \pm \frac{\sqrt{\frac{2}{3}} \sqrt{\kappa } \lambda t^{\frac{\lambda -3}{2}}}{\sqrt{\frac{\mu -1}{\mu }} \left(\kappa -t^{\lambda } \right)} , \\ \label{sol2a} a(t) & = \frac{a_0 t^{\frac{1}{6} (\lambda -2 \mu +1)}}{\left(\kappa -t^{\lambda } \right)^{\frac{1}{3}}}, \\ \gamma(t) & = \frac{\kappa (\lambda -2 \mu +1)^2-(\lambda +2 \mu -1)^2 t^{\lambda }}{18 (\mu -1) t \left(t^{\lambda }-\kappa \right)}, \end{align} \end{subequations} where, in order to simplify the expressions, we have re-parameterized the original constants of integration as $\lambda_2=\frac{1}{\lambda}$ and $\lambda_3= \frac{\ln(-\kappa)}{\lambda}$. The constant $\lambda_1$, together with the rest of the constants that appear multiplicatively in the expression for $a(t)$, can be normalized to any value through a constant scaling of the radial variable $r$ in the line element. We choose to depict this arbitrariness with the constant $a_0$ appearing in \eqref{sol2a}. As is evident from \eqref{sol2}, the case $\mu=1$ is excluded from this solution. This has to do with our gauge fixing choice of $Q=t$, which straightforwardly excludes the GR vacuum solution, where $Q=$constant. It can be directly checked that relations \eqref{sol2} not only solve the set of equations \eqref{feq12} and \eqref{feq22} for $f(Q)=Q^\mu$, but also, upon substitution in \eqref{Qcon2}, yield $Q=t$, which verifies the consistency of the result. Thus, we obtain the general solution for $f(Q)=Q^\mu$ theory in the time gauge where $Q=t$ is the time parameter. Of course, going to the cosmic time gauge, where $N(\tau)=1$, will not be possible for every value of the parameters, since the inverse of the transformation will in general not be expressible in terms of elementary functions. For example, by using \eqref{sol2N} we see that, for $\lambda\neq 1$ and $\frac{3\lambda-1}{2\lambda} \notin \mathbb{Z}_{-}\cup \{0\}$, \begin{equation} \label{2ndconpropt} \int N(t) dt = \tau + C \Rightarrow \pm \frac{2 \sqrt{\frac{2}{3}} \lambda t^{\frac{\lambda -1}{2}}}{\sqrt{\kappa } (\lambda -1) \sqrt{\frac{\mu -1}{\mu }}} { }_2F_1\left(1,\frac{\lambda -1}{2 \lambda };\frac{3\lambda-1}{2\lambda};\frac{t^{\lambda }}{\kappa }\right) = \tau + C, \end{equation} where ${ }_2F_1$ is the Gauss hypergeometric function.
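The hypergeometric antiderivative \eqref{2ndconpropt} is easy to cross-check numerically against a direct quadrature of the lapse \eqref{sol2N}. A small sketch follows, using the sample values $\kappa=1$, $\lambda=2$, $\mu=2$ and the positive branch; the names are ours and purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from mpmath import hyp2f1

kappa, lam, mu = 1.0, 2.0, 2.0            # sample values, t in (0, 1)

def N(t):
    # positive branch of eq. (sol2N)
    return (np.sqrt(2.0/3.0)*np.sqrt(kappa)*lam*t**((lam - 3.0)/2.0)
            / (np.sqrt((mu - 1.0)/mu)*(kappa - t**lam)))

def tau_closed(t):
    # eq. (2ndconpropt), positive branch, up to the additive constant C
    pref = (2.0*np.sqrt(2.0/3.0)*lam*t**((lam - 1.0)/2.0)
            / (np.sqrt(kappa)*(lam - 1.0)*np.sqrt((mu - 1.0)/mu)))
    return pref*float(hyp2f1(1.0, (lam - 1.0)/(2.0*lam),
                             (3.0*lam - 1.0)/(2.0*lam), t**lam/kappa))

t0, t1 = 0.2, 0.8
print(quad(N, t0, t1)[0])                 # direct integration of the lapse
print(tau_closed(t1) - tau_closed(t0))    # agrees with the quadrature
\end{verbatim}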
The values for which the integral of \eqref{sol2N} can be calculated in terms of elementary functions are specified by Chebyshev's theorem \cite{Rittbook}, which is used in cosmology to distinguish analytic solutions of the Friedmann equations \cite{Gibbons,Faraoni}. In order to comply with the requirements of the theorem, $\lambda$ needs only to be a rational number, i.e. $\lambda \in \mathbb{Q}$. Of course this still does not guarantee that the expression will be easily inverted to obtain $t(\tau)$. Acquiring the inverse, $t(\tau)$, is achievable for very specific values of the parameter $\lambda$. We can however use $\tau(t)$, as previously, to make parametric plots of the Hubble function $H(t)$ and get a glimpse of the time evolution in the cosmic time gauge for any $\lambda$. For the derived solution \eqref{sol2}, the Hubble function is (in the $t$ time variable) \begin{equation} H(t)= \frac{1}{Na} \frac{da}{dt} = \frac{1}{2 \lambda } \sqrt{\frac{\mu -1}{6 \kappa \mu }} \left[ \kappa (2 \mu -\lambda -1)+(1 -\lambda -2 \mu ) t^{\lambda } \right] t^{\frac{1-\lambda}{2}} . \end{equation} In Figure \ref{fig5} we present two plots of $H(t)$ with respect to $\tau(t)$ for two different sets of values of $\kappa$, $\lambda$ and for various values of $\mu$. Interestingly enough, the functional behaviour for different values of the integration constants, besides $\mu$, can be quite different. The first graph depicts the case $\kappa=1$, $\lambda=2$, where for the calculation of $\tau$ we used the positive sign branch of \eqref{sol2N}. The latter is a real function that takes values in the range $\tau \in (0,+\infty)$ as long as $t\in (0,1)$ (the constant $C$ is set to zero). Note that having $ 0< t<1$, with $\kappa=1$ and $\lambda=2$, does not really cause a problem in \eqref{sol2a}, in the sense of the latter taking imaginary values, because there is the arbitrary constant $a_0$ in the solution, inside which we can ``absorb'' any complex constant number. As we see in the first plot of Fig. \ref{fig5}, this solution yields a universe which initially expands, with a continuously diminishing expansion rate. Then, after a finite time, a contraction phase takes place. On the other hand, we obtain a completely different behaviour by setting $\lambda=-1$. In fact, for this value we can invert the expression for the cosmic time. For $\kappa=1$, $\lambda=-1$, the integral yielding the cosmic time is simply \begin{equation} \label{2ndconpropt2} \tau(t)=\int N(t) dt = \pm \frac{\sqrt{\frac{2}{3}} \log \left(\frac{t-1}{t}\right)}{\sqrt{\frac{\mu -1}{\mu }}} -C. \end{equation} We can eliminate the constant of integration $C$ and invert the above relation to get directly the function $H(\tau)$, which is (considering the positive branch of \eqref{2ndconpropt2}) \begin{equation} H(\tau)= \left(\frac{\mu -1}{6 \mu }\right)^{\frac{1}{2}} \frac{(\mu -1) e^{\sqrt{\frac{3}{2}} \sqrt{\frac{\mu -1}{\mu }} \tau}+1}{e^{\sqrt{\frac{3}{2}} \sqrt{\frac{\mu -1}{\mu }} \tau}-1} . \end{equation} The plot of this function is seen in the second part of Fig. \ref{fig5} and depicts an expanding universe with an ever slower expansion rate as time progresses. Unlike the previous case however, it never results in a contracting phase. \begin{figure}[ptb] \includegraphics{Example5H.eps}\caption{Graphs of the Hubble function with respect to the cosmic time for two different pairs of $\kappa$, $\lambda$ and for various different values of the power $\mu$ of $f(Q)=Q^\mu$.
\label{fig5}} \end{figure} \subsection{Third connection} Here, we make use of the connection with non-zero components given by \eqref{common} together with \eqref{con3}. This connection is also involved in the dynamics of the system. The non-metricity scalar assumes the form \begin{equation}\label{Qcon3} Q = - \frac{6 \dot{a}^2}{N^2 a^2}+ \frac{3\gamma}{ a^2} \left(\frac{\dot{a}}{a} +\frac{\dot{N}}{N} \right) + \frac{3\dot{\gamma}}{a^2} . \end{equation} The equations of motion for the metric are \begin{subequations}\label{feq13} \begin{align}\label{feq13a} & \frac{3 \dot{a}^2 f'(Q)}{a^2 N^2} +\frac{1}{2} \left(f(Q)-Q f'(Q)\right) - \frac{3 \gamma \dot{Q} f''(Q)}{2 a^2} = \rho, \\ \label{feq13b} & -\frac{2}{N} \frac{d}{dt} \left( \frac{f'(Q) \dot{a}}{N a} \right) - \frac{3 \dot{a}^2}{N^2 a^2} f'(Q) - \frac{1}{2} \left(f(Q)- Q f'(Q)\right) + \frac{ \gamma \dot{Q} f''(Q)}{2 a^2} = p \end{align} \end{subequations} and for the connection \begin{equation} \label{feq23} \dot{Q}^2 f'''(Q) + \left[\ddot{Q} + \dot{Q} \left(\frac{\dot{a}}{a} + \frac{\dot{N}}{N}+ \frac{2 \dot{\gamma}}{\gamma} \right) \right] f''(Q) =0 . \end{equation} Once more, we shall consider the vacuum case $p=\rho=0$ in the context of an $f(Q)=Q^\mu$ theory. We employ a similar strategy as before, but with a few modifications. First, we solve the constraint equation \eqref{feq13a} for the lapse $N(t)$. We substitute this result into equation \eqref{feq13b} and make the gauge fixing choice $Q=t$, the result being the following equation, which is algebraic in $a(t)$: \begin{equation} (2 \mu -1) t^2 a^2+3 \mu \left(t \dot{\gamma}+(2 \mu -3) \gamma\right) =0. \end{equation} Assuming that $\mu\neq \frac{1}{2}$, we may solve for $a$ and substitute this result, together with all the previous assertions, in the only remaining equation \eqref{feq23}. Subsequently, we arrive at the following third order equation for $\gamma$ \begin{equation} \label{gamthird} \begin{split} & 2 t^3 \gamma \left(t \dot{\gamma}-2 \gamma \right) \dddot{\gamma} - t^4 \gamma \ddot{\gamma}^2 + 4 t^2 \left(t^2 \dot{\gamma}^2 + (\mu -4) t \gamma \dot{\gamma} + (5-2 \mu ) \gamma^2\right) \ddot{\gamma} \\ & + 8 (\mu -2) t^3 \dot{\gamma}^3+4 ((\mu -15) \mu +23) t^2 \gamma \dot{\gamma}^2-8 (\mu (2 \mu -17)+22) t \gamma^2 \dot{\gamma} +4 (2 \mu -9) (2 \mu -3) \gamma^3 = 0. \end{split} \end{equation} The above equation is rather tedious; however, there do exist some symmetries that allow for its simplification. If we perform the transformation \begin{equation} \gamma = \exp\left[\int\!\!(1+2 s)\omega(s)ds\right], \quad t = \exp\left(\int\!\! s\omega(s)ds\right), \end{equation} with $s$, $\omega(s)$ the new independent and dependent variables respectively, then the equation is reduced to the first order Abel equation \begin{equation} 2 s \frac{d\omega}{ds}+ s^2 \left(\left(4 \mu ^2-1\right) s^2+4 (3 \mu -1) s+5\right) \omega^3-4 s (\mu s+2) \omega^2+5 \omega =0. \end{equation} Unfortunately, we did not manage to associate the latter with some known integrable class. However, we can report an exact particular solution of the original equation \eqref{gamthird}, which is in the form of a power law with respect to $t$.
The solution triplet is \begin{subequations} \label{sol3} \begin{align} N(t) & = \pm \left[\frac{3 \mu (4 \mu -3)}{(2 \mu +1) (1-\mu )}\right]^{\frac{1}{2}} \frac{2 \mu +1}{5 t^{3/2}} \\ a(t) & = \sigma \left[\frac{6 \mu (3-4 \mu )}{5 (2 \mu -1)}\right]^{\frac{1}{2}} t^{-\frac{1}{10} (2 \mu +1)} \\ \gamma(t) & = \sigma^2 t^\frac{9-2\mu}{5} , \end{align} \end{subequations} which is easily seen to be compatible with our gauge fixing choice $Q=t$. However, there are certain limits constraining the parameters in order to have a Lorentzian signature of the metric. One choice is that both the expressions inside the square roots appearing in $N$ and $a$ have to be positive; this leads to $-\frac{1}{2}<\mu<0$. The other possibility is to have only the expression appearing inside the square root in $N$ positive and the one in $a$ negative; then the imaginary unit appearing in $a(t)$ can be absorbed inside $\sigma$ by considering the latter to be a purely imaginary number. This leads to the restriction $\frac{3}{4}<\mu<1$ with $\sigma \in \mathrm{i}\mathbb{R}-\{0\}$. These restrictions however hold under the condition that $t$ is positive in \eqref{sol3}. Since it is quite convenient to work with a positive $t$, the easiest way to obtain a Lorentzian solution for the other values of the parameter $\mu$ is to go back and, instead of fixing the gauge to $Q=t$, fix it as $Q=-t$ (or equivalently make a change $t\rightarrow -t$ in \eqref{sol3}, with an appropriate reparametrization of $\sigma$). The end result can be written as \begin{subequations} \label{sol3sec} \begin{align} N(t) & = \pm \left[\frac{3 \mu (3-4 \mu)}{(2 \mu +1) (1-\mu )}\right]^{\frac{1}{2}} \frac{2 \mu +1}{5 t^{3/2}} \\ a(t) & = \sigma \left[\frac{6 \mu (4 \mu - 3 )}{5 (2 \mu -1)}\right]^{\frac{1}{2}} t^{-\frac{1}{10} (2 \mu +1)} \\ \gamma(t) & = \sigma^2 t^\frac{9-2\mu}{5} , \end{align} \end{subequations} and it now yields $Q=-t$. For a positive $t$, there are again two possibilities for a Lorentzian metric. The first is, like before, to have both expressions under the square roots positive. This leads to $0<\mu<\frac{1}{2}$ or $\mu>1$. The second option yields $\mu<-\frac{1}{2}$ or $\frac{1}{2}<\mu<\frac{3}{4}$, with the necessary supplementary condition $\sigma \in \mathrm{i}\mathbb{R}-\{0\}$. Both of the above solutions can be easily transformed into the cosmic time gauge. We just need to use \eqref{gentimetr} to derive the $t(\tau)$ relation; then the scalars $a(t)$ and $Q(t)$ can be easily calculated by a straightforward substitution. For $\gamma(t)$ however, we need to remind ourselves that this is not a scalar, so we cannot simply substitute $t(\tau)$ into it. Its transformation law can be derived from the general transformation law of a connection \begin{equation} \bar{\Gamma}^{\lambda}_{\;\mu\nu} = \frac{\partial \tilde{x}^\lambda}{\partial x^\rho} \frac{\partial x^\eta}{\partial \tilde{x}^\mu} \frac{\partial x^\sigma}{\partial \tilde{x}^\nu} \Gamma^\rho_{\;\eta\sigma} - \frac{\partial x^\rho}{\partial \tilde{x}^\nu} \frac{\partial x^\sigma}{\partial \tilde{x}^\mu} \frac{\partial^2 \tilde{x}^\lambda}{\partial x^\rho \partial x^\sigma}, \end{equation} which in our case results in $\gamma(\tau) = \gamma(t(\tau))\left(\frac{dt(\tau)}{d\tau}\right)^{-1}$.
With this taken into consideration, after making the appropriate calculations, both of the previous sets can be mapped to the following expressions for $a$ and $\gamma$ in the gauge $N(\tau)=1$ \begin{subequations} \label{sol3cosmic} \begin{align} \label{sol3cosmica} a(\tau) & = \bar{\sigma} \left[\frac{5 (\mu -1) (4 \mu -3)}{(2 \mu -1) (2 \mu +1) (3-4 \mu )}\right]^{\frac{1}{2}} \tau^{\frac{1}{5} (2 \mu +1)}, \\ \gamma(\tau) & = \bar{\sigma} \tau^{\frac{1}{5} (4 \mu -3)}, \end{align} \end{subequations} where $\bar{\sigma}$ is a new constant that we introduce to simplify the product of the multiplicative constants that appears in the expression of $\gamma(\tau)$ after the transformation. The $\bar{\sigma}$ can be either real or imaginary, depending on the sign of the expression inside the square root of \eqref{sol3cosmica}, so that $a^2$ remains positive and the signature of the metric is Lorentzian. The ensuing non-metricity scalar, $Q(\tau)$, is given by \begin{equation} \label{cosmicQ3} Q(\tau) = \frac{12 \mu (2 \mu +1) (3-4 \mu )}{25 (\mu -1) \tau^2}. \end{equation} It is straightforward to verify that \eqref{sol3cosmic} and \eqref{cosmicQ3} solve the equations for $f(Q)=Q^\mu$ in the cosmic time gauge $N=1$. The fact that we get a power-law solution for the scale factor in this time gauge is reminiscent of the solution that one gets in General Relativity in the presence of a perfect fluid which is characterized by a linear barotropic equation. Indeed, if we calculate the effective energy-momentum tensor, as defined by \eqref{Teff}, and consider $\mathcal{T}^{\mu}_{\phantom{\mu}\nu}= \mathrm{diag}(-\rho_{\text{eff}},p_{\text{eff}},p_{\text{eff}},p_{\text{eff}})$, then we see that for \eqref{sol3cosmic} and \eqref{cosmicQ3} we get \begin{equation} p_{\text{eff}} = \frac{(2 \mu +1) (7-6 \mu )}{25 \tau^2}, \quad \rho_{\text{eff}} = \frac{3 (2 \mu +1)^2}{25 \tau^2} \end{equation} with an effective equation of state parameter \begin{equation} w_{\text{eff}} =\frac{p_{\text{eff}}}{\rho_{\text{eff}}}= \frac{7-6 \mu }{3(2 \mu +1)} . \end{equation} The phantom divide line $w_{\text{eff}}=-1$ is only reached in the limit $\mu\rightarrow\pm\infty$. There is also a critical value $\mu=-\frac{1}{2}$, which is excluded by the solution, since it appears also in the denominator of the scale factor \eqref{sol3cosmica}. We notice that theories with $\mu>-\frac{1}{2}$ have $w_{\text{eff}}>-1$, while those with $\mu<-\frac{1}{2}$ correspond to $w_{\text{eff}}<-1$. We see how, in this case, the non-trivial connection affects the dynamics. We obtain a vacuum solution which has the same effect as that of a perfect fluid energy-momentum tensor in General Relativity, only that in this case the effective fluid contribution is related to the geometry and the non-trivial connection. Of course, since it is a flat connection, a coordinate system can be found in which $\Gamma^{\lambda}_{\;\mu\nu}$ becomes zero. However, such a transformation would also change the FLRW metric, introducing non-diagonal terms. Thus, we see in practice that assuming $\Gamma^{\lambda}_{\mu\nu}=0$ in the same coordinate system where the homogeneity and isotropy of the FLRW metric is obvious is not a necessity. There exist admissible non-zero connections in the coordinate system where the metric is given by \eqref{genlineel}, which lead to distinct solutions and affect the dynamics. \section{Spatially curved models} \label{sec5} For a non-zero spatial curvature $k$, the connection \eqref{conk1} is to be used.
Now, the non-metricity scalar acquires the form \begin{equation}\label{Qconk} Q = - \frac{6 \dot{a}^2}{N^2 a^2}+ \frac{3\gamma}{ a^2} \left(\frac{\dot{a}}{a} +\frac{\dot{N}}{N} \right) + \frac{3\dot{\gamma}}{a^2} + k \left[\frac{6}{a^2} + \frac{3}{\gamma N^2 } \left(\frac{\dot{N}}{N} + \frac{\dot{\gamma}}{\gamma} -\frac{3\dot{a}}{a} \right)\right]. \end{equation} We see that upon setting $k=0$, it reduces to that corresponding to the third connection of the $k=0$ case, as obtained in \eqref{Qcon3}. The equations of motion for the metric yield \begin{subequations}\label{feq1k} \begin{align}\label{feq1ka} & \frac{3 \dot{a}^2 f'(Q)}{a^2 N^2} +\frac{1}{2} \left(f(Q)-Q f'(Q)\right) - \frac{3 \gamma \dot{Q} f''(Q)}{2 a^2} + 3 k \left(\frac{ f'(Q)}{a^2}-\frac{ \dot{Q} f''(Q)}{2 \gamma N^2 }\right) = \rho, \\ \label{feq1kb} & -\frac{2}{N} \frac{d}{dt} \left( \frac{f'(Q) \dot{a}}{N a} \right) - \frac{3 \dot{a}^2}{N^2 a^2} f'(Q) - \frac{1}{2} \left(f(Q)- Q f'(Q)\right) + \frac{ \gamma \dot{Q} f''(Q)}{2 a^2} - k \left(\frac{f'(Q)}{a^2}+\frac{3 \dot{Q} f''(Q)}{2 \gamma N^2 }\right) = p , \end{align} \end{subequations} and the field equation for the connection becomes \begin{equation} \label{feq2k} \dot{Q}^2 f'''(Q) \left(1+\frac{k a^2}{N^2 \gamma^2}\right) + \left[\ddot{Q} \left(1+ \frac{k a^2}{N^2 \gamma^2}\right) + \dot{Q} \left(\left(1+ \frac{3 k a^2}{N^2 \gamma^2}\right)\frac{\dot{a}}{a} + \left(1-\frac{k a^2}{N^2 \gamma^2}\right) \frac{\dot{N}}{N}+ \frac{2 \dot{\gamma}}{\gamma} \right) \right] f''(Q) =0 . \end{equation} The situation with these equations is considerably more complicated, and the same trick we performed previously, of adopting a gauge fixing that utilizes $Q$ (or $-Q$) as the time parameter, is not so helpful here. However, we are able to disclose an exact solution in the case of a vacuum ($\rho=p=0$) $f(Q)=Q^\mu$ theory, at which we arrive in the manner that we subsequently describe. Of course, as previously stated, one can disclose infinitely many solutions by enforcing $Q=0$ and $\mu>2$, but as we explained, we are not interested in obtaining this type of solution. In order to proceed, it is first useful to remember that most cosmological solutions that we know from General Relativity, when $k\neq 0$, are expressible in terms of elementary functions in the conformal time gauge, i.e. when $N=a$. So, by now making this gauge fixing choice and additionally enforcing the restrictive condition that the function $\gamma$ is equal to a particular constant, namely $\gamma = \mp \sqrt{-k}$, we observe that the constraint equation \eqref{feq1ka}, with the substitution of $Q$ from \eqref{Qconk}, is easily integrated to give \begin{equation}\label{solk1} a(t) = a_0 e^{\pm \frac{\sqrt{-k} t}{2\mu -1}}, \end{equation} where $a_0$ is a constant which we can normalize to unity through a combined scaling transformation in $t$, $r$ and $k$. The conditions we have set, together with \eqref{solk1}, satisfy all equations \eqref{feq1k} and \eqref{feq2k}. Of course the solution is real only for a negative spatial curvature $k=-1$. In solution \eqref{solk1} we recognize a Milne-like universe. It can be easily seen that $a(\tau)\propto \tau$ in the gauge $N(\tau)=1$. However, the difference lies in the parameter $\mu$ of the theory. The solution corresponds to a Riemann-flat universe only when $\mu=1$, which is the Milne case.
The non-metricity scalar of the solution is given by \begin{equation} \label{Qwithkspsol} Q= \frac{24 k \mu ^2 }{(1-2 \mu )^2 a_0^2} e^{\mp \frac{2 \sqrt{-k} t}{2 \mu -1}} \end{equation} and it does not become a constant even in the limit where the solution of General Relativity is recovered ($\mu=1$). We need to note, however, that the $\gamma = \mp \sqrt{-k}$ condition for the function appearing in the connection is not necessary when $\mu=1$. In the latter case, any arbitrary function $\gamma(t)$ serves, together with $N=a=\exp\left(\pm\sqrt{-k} t\right)$. This arbitrariness is also carried into the value of $Q$; for $\mu=1$, the latter reads \begin{equation} Q = \frac{3 e^{\mp 2 \sqrt{-k} t }}{a_0^2 \gamma} \left[ \left(k +\gamma^2\right)\dot{\gamma} \pm 2 \sqrt{-k} \gamma^3 + 4 k \gamma^2 \pm 2 (-k)^{\frac{3}{2}} \gamma \right] \end{equation} in which $\gamma$ remains a free function. Substitution of $\gamma=\mp \sqrt{-k}$ results of course in the expression \eqref{Qwithkspsol} with $\mu=1$. However, as we stated, these values of $\gamma$ become necessary for the satisfaction of the field equations only when $\mu\neq1$. \section{Conclusion} \label{sec6} We studied the effect that different connections have on the dynamics of FLRW cosmology in the context of $f(Q)$ theory. The spatially flat case admits three different families of connections. The first corresponds to the most studied case in the literature, that of the coincident gauge. We managed, by using $Q$ as the time variable of the problem, to express the general solution with a perfect fluid for an arbitrary $f(Q)$ theory. The final solution involves an arbitrary function in the connection, which does not affect the gravitational equations. We need to mention, however, that it is not clear if this degeneracy can affect the motion of a particle in such a spacetime. In Riemannian geometry the auto-parallel and the extremal length curves coincide, but in theories with non-metricity this is not the case \cite{geo1,geo2}. If we are to assume that the geodesic equations are still given with respect to the metric compatible Levi-Civita connection (see the appendix of \cite{geotrin} for the interpretation of non-metricity in these equations), then the arbitrariness of the function $\gamma(t)$ should not affect them. The other two connections of the spatially flat model offer considerably more complicated dynamics. The existence and the derivation of solutions for these connections are rarely encountered in the literature. This is because most authors assume directly the coincident gauge in a FLRW universe in Cartesian coordinates, which is dynamically equivalent to the first connection we studied. However, we do see how distinct it is to assume these two connections in place of the first. Their equation of motion is not satisfied identically and both of them are also involved in the definition of $Q$. By using the same trick as in the first connection, namely choosing the non-metricity as the time variable, we managed to extract new solutions. For the second connection, we derived the general vacuum solution for a power-law $f(Q)$ theory, while for the third we were restricted to just obtaining a partial exact solution. However, we did manage to reduce the problem up to the integration of an Abel equation.
We need to stress that the choice of the non-metricity scalar $Q$ as the time variable of the system, apart from simplifying the equations, also served to guarantee that we would acquire solutions that go outside the scope of General Relativity. This is because the condition $Q\neq$const. automatically excludes the possibility of the theory becoming dynamically equivalent to General Relativity with a cosmological constant. This is exactly what we wanted to study in this work: to investigate the possibilities of going beyond General Relativistic solutions. From the particular examples which we studied in the flat case, it is clear that the theory has rich dynamics and can give various interesting behaviours: from bouncing solutions to inflationary expansions, and even the reproduction of power-law GR-type perfect fluid solutions in the absence of matter. The spatially non-flat metric leads to considerably more complicated equations. Surprisingly enough, in the presence of non-metricity, and for a power-law $f(Q)$ function, we derived a special solution which is reminiscent of the Milne solution in GR. For the future we plan to expand this study by including various types of matter for all possible cases of the admissible connections. \begin{acknowledgments} N. D. acknowledges the support of the Fundamental Research Funds for the Central Universities, Sichuan University Full-time Postdoctoral Research and Development Fund No. 2021SCU12117. \end{acknowledgments}
\section{Non-Fermi liquid behavior} \label{sec_nfl} \begin{figure}[t] \begin{minipage}{6cm} \vspace*{1cm} \includegraphics[width=5cm]{sdl1.eps} \end{minipage}\begin{minipage}{9cm} \includegraphics[width=9cm]{sdl2p.eps} \end{minipage} \caption{One loop corrections to the fermion propagator and the quark gluon vertex in dense QCD. The solid squares denote hard dense loop insertions.} \label{fig_sdl} \end{figure} At high baryon density the relevant degrees of freedom are particle and hole excitations which move with the Fermi velocity $v$. Since the momentum $p\sim v\mu$ is large, typical soft scatterings cannot change the momentum by very much and the velocity is approximately conserved. An effective field theory of particles and holes in QCD is given by \cite{Hong:2000tn} \be \label{l_hdet} {\cal L} =\psi_{v}^\dagger \left(iv\cdot D - \frac{1}{2p_F}D_\perp^2 \right) \psi_{v} + {\cal L}_{HDL} -\frac{1}{4}G^a_{\mu\nu} G^a_{\mu\nu}+ \ldots , \ee where $v_\mu=(1,\vec{v})$. The field $\psi_v$ describes particles and holes with momenta $p=\mu(0,\vec{v})+l$, where $l\ll\mu$. We will write $l=l_0+l_{\|}+l_\perp$ with $\vec{l}_{\|}=\vec{v}(\vec{l} \cdot \vec{v})$ and $\vec{l}_\perp = \vec{l}-\vec{l}_{\|}$. At energies below the screening scale $g\mu$ hard dense loops have to be resummed. The generating functional for hard dense loops in gluon $n$-point functions is given by \cite{Braaten:1991gm} \be \label{S_hdl} {\cal L}_{HDL} = -\frac{m^2}{2}\sum_v \,G^a_{\mu \alpha} \frac{v^\alpha v^\beta}{(v\cdot D)^2} G^b_{\mu\beta}, \ee where $m^2=N_f g^2\mu^2/(4\pi^2)$ is the dynamical gluon mass and the sum over patches corresponds to an average over the direction of $\vec{v}$. The hard dense loop action describes static screening of electric fields and dynamic screening of magnetic modes. Since there is no screening of static magnetic fields low energy gluon exchanges are dominated by magnetic modes. The resummed transverse gauge boson propagator is given by \be \label{d_trans} D_{ij}(k) = \frac{\delta_{ij}-\hat{k}_i\hat{k}_j}{k_0^2-\vec{k}^2+ i\eta |k_0|/|\vec{k}|} , \ee where $\eta=\frac{\pi}{2}m^2$ and we have assumed that $|k_0|<|\vec{k}|$. We observe that the gluon propagator becomes large in the regime $|\vec{k}|\sim (\eta k_0)^{1/3}\gg k_0$. This leads to an unusual scaling behavior of Green functions in the low energy limit. Consider a generic Feynman diagram and scale all energies by a factor $s$. Because of the functional form of the gluon propagator in the Landau damped regime gluon momenta scale as $|\vec{k}|\sim s^{1/3}$. This implies that the gluon momenta are much larger than the gluon energies. The quark dispersion relation is $k_0\simeq k_{||}+k_\perp^2/(2p_F)$. The only way a quark can emit a very spacelike gluon and remain close to the Fermi surface is if the gluon momentum is transverse to the Fermi velocity. We find \be k_0 \sim s, \hspace{0.5cm} k_{||}\sim s^{2/3},\hspace{0.5cm} k_\perp \sim s^{1/3}, \ee and $k_0\ll k_{||}\ll k_\perp$. These scaling relations have many interesting consequences. As an example we consider the fermion self energy, see Fig.~\ref{fig_sdl}. The one-loop self energy is \be \Sigma(p) \sim \int dk_0 \int dk_\perp^2 \ \frac{k_\perp}{k_\perp^2+i\eta k_0} \int dk_{||} \ \frac{\Theta(p_0+k_0)}{k_{||}+p_{||} -(k_\perp^2+p_\perp^2)/(2p_F)+i\epsilon} \sim p_0\log(p_0). 
\ee A more careful calculation gives \cite{Ipp:2003cj} \be \label{sig} \Sigma(\omega) = \frac{g^2}{9\pi^2} \left[ \omega \log\left( \frac{4\sqrt{2}m}{\pi|\omega|} \right) +\omega + i\frac{\pi}{2}|\omega| \right]. \ee There are no corrections of the form $g^{2n}\log^n(\omega)$ \cite{Schafer:2004zf}. Higher order corrections involve fractional powers $(\omega/m)^{1/3}$. Eq.~(\ref{sig}) shows that cold quark matter is not a Fermi liquid. The Fermi velocity vanishes on the Fermi surface and the specific heat scales as $T\log(T)$. \section{CFL Phase} \label{sec_cfl} If the temperature is sufficiently low, quark matter is expected to become a color superconductor and quarks acquire a gap due to pairing near the Fermi surface. In the following we will concentrate on the color flavor locked (CFL) phase, which is the ground state of three flavor quark matter at very high baryon density. Our starting point is the effective theory of the CFL phase derived in \cite{Casalbuoni:1999wu,Kryjevski:2004jw}. The effective lagrangian contains Goldstone boson fields $\Sigma$ and baryon fields $N$. The meson fields arise from chiral symmetry breaking in the CFL phase. The leading terms in the effective theory are \be \label{l_mes} {\cal L} = \frac{f_\pi^2}{4} {\rm Tr}\left( \nabla_0 \Sigma \nabla_0 \Sigma^{\dagger} - v_\pi^2 \vec{\nabla} \Sigma \vec{\nabla}\Sigma^\dagger \right). \ee Baryon fields originate from quark-hadron complementarity. The effective lagrangian is \bea \label{l_bar} {\cal L} &=& {\rm Tr}\left(N^\dagger iv^\mu D_\mu N\right) - D{\rm Tr} \left(N^\dagger v^\mu\gamma_5 \left\{ {\cal A}_\mu,N\right\}\right) - F{\rm Tr} \left(N^\dagger v^\mu\gamma_5 \left[ {\cal A}_\mu,N\right]\right) \nonumber \\ & & \mbox{} + \frac{\Delta}{2} \left\{ \left( {\rm Tr}\left(N_LN_L \right) - \left[ {\rm Tr}\left(N_L\right)\right]^2 \right) - \left( {\rm Tr} \left(N_RN_R \right) - \left[ {\rm Tr}\left(N_R\right)\right]^2 \right) + h. c. \right\}, \eea where $N_{L,R}$ are left and right handed baryon fields in the adjoint representation of flavor $SU(3)$, $v^\mu=(1,\vec{v})$ is the Fermi velocity, and $\Delta$ is the superfluid gap. We can think of $N$ as being composed of a quark and a diquark field, $N_L\sim q_L \langle q_Lq_L\rangle$. The interaction of the baryon field with the Goldstone bosons is dictated by chiral symmetry. The covariant derivative is given by $D_\mu N=\partial_\mu N +i[{\cal V}_\mu,N]$. The vector and axial-vector currents are \be {\cal V}_\mu = -\frac{i}{2}\left\{ \xi \partial_\mu\xi^\dagger + \xi^\dagger \partial_\mu \xi \right\}, \hspace{1cm} {\cal A}_\mu = -\frac{i}{2} \xi\left(\partial_\mu \Sigma^\dagger\right) \xi , \ee where $\xi$ is defined by $\xi^2=\Sigma$. The low energy constants $f_\pi,v_\pi,D,F$ can be calculated in perturbative QCD. Symmetry arguments can be used to determine the leading mass terms in the effective lagrangian. Bedaque and Sch\"afer observed that $X_L= MM^\dagger/(2p_F)$ and $X_R=M^\dagger M/(2p_F)$ act as effective chemical potentials and enter the theory like the temporal components of left and right handed flavor gauge fields \cite{Bedaque:2001je}. We can make the effective lagrangian invariant under this symmetry by introducing the covariant derivatives \bea \label{V_X} D_0N &=& \partial_0 N+i[\Gamma_0,N], \hspace{0.5cm} \Gamma_0 = -\frac{i}{2}\left\{ \xi \left(\partial_0+ iX_R\right)\xi^\dagger + \xi^\dagger \left(\partial_0+iX_L\right) \xi \right\}, \\ \nabla_0\Sigma &=& \partial_0\Sigma+iX_L\Sigma-i\Sigma X_R.
\eea Using eqs.~(\ref{l_bar}-\ref{V_X}) we can calculate the dependence of the gap in the fermion spectrum on the strange quark mass. For $m_s=0$ there are 8 quasi-particles with gap $\Delta$ and one quasi-particle with gap $2\Delta$. As $m_s$ increases some of the gaps decrease. The gap of the lowest mode is approximately given by $\Delta=\Delta_0-3\mu_s/4$ where $\mu_s=m_s^2/(2p_F)$ and $\Delta_0$ is the gap in the chiral limit. \begin{figure} \begin{center} \includegraphics[width=10cm]{kstar_mass_2_txt.eps} \vspace*{-1cm} \end{center} \caption{Screening mass in the CFL phase as a function of the effective chemical potential $\mu_s=m_s^2/(2p_F)$. The screening mass is defined as the second derivative of the energy density with respect to an isospin or hypercharge current. The blue and black curves show the result in the gapless CFL phase with (pKCFL) and without (gCFL) a supercurrent. } \label{fig_kst} \end{figure} For $\mu_s>4\Delta_0/3$ the system contains gapless fermions interacting with light or even massless Goldstone bosons. This situation is superficially similar to the normal phase discussed in Sect.~\ref{sec_nfl}, but the similarity does not hold up. The gapless CFL phase is unstable with respect to the formation of non-zero currents \cite{Huang:2004bg}. This can be seen by considering the dispersion relation in the presence of a hypercharge or isospin current. The dispersion relation of the lowest mode is \cite{Kryjevski:2005qq,Schafer:2005ym} \be \label{disp_ax} \omega_l = \Delta_0 +\frac{l^2}{2\Delta_0}-\frac{3}{4} \mu_s -\frac{1}{4}\vec{v}\cdot\vec{\jmath}_K, \ee where $l$ is the momentum relative to the Fermi surface and $\jmath_K$ is the current. The energy relative to the CFL phase is the kinetic energy of the current plus the energy of occupied gapless modes \be \label{efct} {\cal E} = \frac{1}{2}v_\pi^2f_\pi^2\vec{\jmath}_K^2 + \frac{\mu^2}{\pi^2}\int dl \int \frac{d\Omega}{4\pi} \;\omega_l \theta(-\omega_l) . \ee The energy functional can develop a minimum at non-zero $\jmath_K$ because the current lowers the energy of the fermions near one of the pole caps on the Fermi surface. Introducing the dimensionless variables $x=\jmath_K/(a\Delta)$ and $h=(3\mu_s-4\Delta)/(a\Delta)$ we can write \be \label{efct_x} {\cal E} = Cf_h(x), \hspace{0.5cm} f_h(x) = x^2-\frac{1}{x}\left[ (h+x)^{5/2}\Theta(h+x) - (h-x)^{5/2}\Theta(h-x) \right] , \ee where $C$ and $a$ are numerical constants. The functional eq.~(\ref{efct_x}) was analyzed in \cite{Son:2005qx,Kryjevski:2005qq,Schafer:2005ym}. There is a critical chemical potential $\mu_s= (4/3+ah_{crit}/3)\Delta$ above which the ground state contains a non-zero supercurrent $\jmath_K$. This current is canceled by a backflow of gapless fermions. The screening mass $m_V^2=(\partial^2{\mathcal E})/(\partial\jmath_K^2)$ is shown in Fig.~\ref{fig_kst}. Without the supercurrent an instability occurs for $\mu_s= 4\Delta/3$, but the instability is resolved by a non-zero current. The new phase is analogous to $p$-wave pion condensates at lower densities because the current is carried by Goldstone kaons in the CFL phase and the instability is caused by the $p$-wave interaction between kaons and fermions.
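The onset of the current can also be located numerically from the dimensionless functional in eq.~(\ref{efct_x}) alone. The following small numpy sketch (the names and the bisection bracket are ours) scans $f_h(x)$ for $x>0$ and bisects on the value of $h$ at which the $x>0$ minimum first drops below the energy of the current-free state, which is $f_h\to 0$ as $x\to 0$ for $h<0$.
\begin{verbatim}
import numpy as np

def f_h(x, h):
    # dimensionless energy functional of eq. (efct_x), x = j_K/(a Delta) > 0
    s = np.clip(h + x, 0.0, None)**2.5 - np.clip(h - x, 0.0, None)**2.5
    return x**2 - s/x

def fmin(h, xmax=4.0, n=4000):
    x = np.linspace(1e-6, xmax, n)
    return f_h(x, h).min()

# bisect on the depth of the minimum: the supercurrent switches on once the
# x > 0 minimum drops below the current-free energy
lo, hi = -1.0, 0.0            # fmin(-1) > 0, fmin(0) < 0
for _ in range(40):
    mid = 0.5*(lo + hi)
    lo, hi = (lo, mid) if fmin(mid) < 0.0 else (mid, hi)
print("h_crit ~", 0.5*(lo + hi))   # negative: the current develops already
                                   # before the naive gapless point h = 0
\end{verbatim}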
\section{Introduction}\label{sec:intro} Current versions of TCP \cite{rfc793} like TCP--CUBIC \cite{ha_rhee_xu:2008} and Compound--TCP \cite{tan_song_zhang_sridharan:2005} use elaborate techniques to avoid and to cope with network congestion, which is the main reason for packet losses in wired networks. However, the increasing deployment of wireless networks like IEEE-802.11 (WLAN) or IEEE-802.16 (WiMAX) imposes new challenges on TCP as a reliable transport protocol. In wireless networks, packet losses can occur due to effects like shadowing, interference, and multipath propagation, to mention only a few. The problem is that TCP in all its current implementations reacts to packet losses with some congestion avoidance strategy, thus reducing throughput and transmission round--trip time. During the last few years, network coding became a prominent field of interest in the communications and coding community. The main idea is to view packets as elements of an algebraic structure, e.g. a finite field or a vector space. Then, packet losses correspond to erasures and there exist well--known algebraic techniques to correct them. The most important example of this approach is probably random network coding \cite{ho_koetter_medard_karger_effros:2003} based on the Rank metric \cite{silva_kschischang_koetter:2007, silva_kschischang:2007a, koetter_kschischang:2008}, the corresponding codes are frequently called Rank-- or Gabidulin codes \cite{gabidulin:1985, gabidulin_bossert:2008, bossert_gabidulin:2009}. In this paper we follow a different approach for network coding, first proposed by Kabatiansky and Krouk \cite{kabatiansky_krouk_semenov:2005} and further developed and extended by Sidorenko et al. \cite{sidorenko_shen_krouk_bossert:2008}. It utilizes MDS codes and the traditional Hamming metric. With MDS codes, it is possible to correct errors and erasures; the tradeoff parameter between the two can be selected by the system designer. The idea of combining TCP with network coding was first presented by Sundararajan et al. \cite{sundararajan_sha_medard_mitzenmacher_barros:2008}, following the Rank metric approach. We complement their work by following the Hamming metric approach and by introducing a new {\em MDS Network Coding layer (NC layer)} between the IP and TCP layer. The NC layer is transparent to the established layers, which enables an easy deployment. The paper is organized as follows. In Section~\ref{sec:MDS} we explain MDS codes and show their most important property for network coding. In Section~\ref{sec:layer} we explain the NC layer together with algorithmic descriptions of the transmitter and receiver sides. The properties and capabilities of TCP with the NC layer are investigated in Section~\ref{sec:sim} by simulation. We also analyze how network coding--capable nodes affect a mixed network environment. \section{Maximum Distance Separable (MDS) Codes}\label{sec:MDS} Let $\vec{a}:=(a_0, \ldots, a_{n-1})$ and $\vec{b}:=(b_0, \ldots, b_{n-1})$ be two vectors over an extended Galois field $\ensuremath{\mathbbm{F}}_q$ with $q=2^m$ where $m$ is a positive integer. The {\em Hamming distance} $\Hd{\vec{a}, \vec{b}}$ is defined as the number of differing coordinates of $\vec{a}$ and $\vec{b}$, i.e. \begin{equation*} \Hd{\vec{a}, \vec{b}}:=\vert\{i:a_i\neq b_i, 0\leq i\leq n-1\}\vert. \end{equation*} Let $\mathcal C$ be a subspace of $\ensuremath{\mathbbm{F}}_q^n$ with dimension $k$.
Define the {\em minimum distance $\ensuremath{d_\mathrm{min}}$} as \begin{equation*} \ensuremath{d_\mathrm{min}}:=\min_{\substack{\vec{a}, \vec{b}\in {\mathcal C}\\ \vec{a}\neq\vec{b}}}\{\Hd{\vec{a}, \vec{b}}\}. \end{equation*} Then, $\ensuremath{\mathcal{C}}(q;n,k, \ensuremath{d_\mathrm{min}}) := {\mathcal C}$ is called a $q$-ary linear block code of length $n$, dimension $k$ and minimum distance $\ensuremath{d_\mathrm{min}}$. One basic result of Coding Theory is the {\em Singleton Bound}, which bounds the minimum distance for given code length and dimension. The following results were established in \cite{singleton:1964}, see also \cite{macwilliams_sloane:1992}. \begin{theorem}[Singleton Bound] The minimum distance of a block code $\ensuremath{\mathcal{C}}(q; n, k, \ensuremath{d_\mathrm{min}})$ is bounded by \begin{equation*} \ensuremath{d_\mathrm{min}}\leq n-k+1. \end{equation*} \end{theorem} \begin{proof} By definition of $\ensuremath{d_\mathrm{min}}$, two distinct codewords of $\ensuremath{\mathcal{C}}$ differ in at least $\ensuremath{d_\mathrm{min}}$ coordinates. Hence, deleting any fixed set of $\ensuremath{d_\mathrm{min}}-1$ coordinates still leaves all codewords distinct, so the number of codewords is at most the number of vectors of length $n-\ensuremath{d_\mathrm{min}}+1$, i.e. $q^k\leq q^{n-\ensuremath{d_\mathrm{min}}+1}$. \end{proof} An interesting subclass of the block codes consists of those that have the maximum possible minimum distance. \begin{definition}[MDS Code] A block code $\ensuremath{\mathcal{C}}(q; n, k, \ensuremath{d_\mathrm{min}})$ is called {\em Maximum Distance Separable (MDS)} if it fulfills the Singleton Bound with equality, i.e. \begin{equation*} \ensuremath{d_\mathrm{min}}= n-k+1. \end{equation*} \end{definition} It can be shown that there are no MDS codes over $\ensuremath{\mathbbm{F}}_2$ besides the repetition codes, the single parity check codes and the codes without any redundancy. Probably the most important MDS codes are the {\em Reed--Solomon (RS)} codes \cite{reed_solomon:1960, macwilliams_sloane:1992} over $\ensuremath{\mathbbm{F}}_q$ with $q>2$. In our considerations, we will always assume that $q$ is a power of two, i.e. $q=2^m$. In this case, elements of $\ensuremath{\mathbbm{F}}_q$ can be represented by binary vectors of length $m$. We do not use any properties of these codes in this paper except that they are MDS. The core of our proposed scheme is the following well--known property of MDS codes. \begin{theorem}\label{theorem:k-is-enough} If $\ensuremath{\mathcal{C}}(q; n, k, \ensuremath{d_\mathrm{min}})$ is an MDS code, then any $k$ coordinates of $\vec{c}\in\ensuremath{\mathcal{C}}$ unambiguously determine $\vec{c}$. \end{theorem} \begin{proof} We have $\ensuremath{d_\mathrm{min}}=n-k+1$, hence two distinct codewords coincide in at most $n-\ensuremath{d_\mathrm{min}}=k-1$ positions. Thus no two distinct codewords agree on $k$ positions, and any $k$ positions uniquely determine the codeword. \end{proof} This allows the following procedure. Assume that a binary message of length $km$ bit should be transmitted. Let us denote a symbol from $\ensuremath{\mathbbm{F}}_q$ as a {\em segment}. The binary message can be considered as an information vector containing $k$ segments. These segments can be encoded into $n\geq k$ segments using an MDS code $\ensuremath{\mathcal{C}}(q; n, k, \ensuremath{d_\mathrm{min}})$. The resulting $n$ segments are transmitted over a network. Then, by Theorem~\ref{theorem:k-is-enough}, the receiver is able to reconstruct all $n$ segments if it receives {\bf any} $k$ of them. 
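As a self-contained toy illustration of Theorem~\ref{theorem:k-is-enough} (ours, not an implementation used later in the paper), the following Python sketch realizes a systematic Reed--Solomon code by polynomial evaluation and interpolation. For brevity it works over the prime field of size $257$ instead of the extension fields $\ensuremath{\mathbbm{F}}_{2^m}$ used in the paper; the MDS property and the recovery argument are identical.

\begin{verbatim}
# Toy MDS (Reed-Solomon) code over the prime field GF(257).
# The k information segments are the values of a degree-(k-1) polynomial
# at the points 1..k; redundancy segments are its values at k+1..n.
P = 257

def lagrange_eval(pts, x):
    """Evaluate the unique degree < len(pts) polynomial through pts at x."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

msg = [72, 101, 108, 108, 111]                      # k = 5 information segments
k, n = len(msg), 9                                  # tolerates n - k = 4 erasures
support = list(enumerate(msg, start=1))             # segment i carries msg[i-1]
codeword = [(x, lagrange_eval(support, x)) for x in range(1, n + 1)]

survivors = [codeword[i] for i in (0, 3, 5, 6, 8)]  # ANY k of the n segments
recovered = [lagrange_eval(survivors, x) for x in range(1, k + 1)]
assert recovered == msg
\end{verbatim}

Any $k$ surviving (position, value) pairs determine the degree-$(k-1)$ polynomial and hence the whole codeword, exactly as Theorem~\ref{theorem:k-is-enough} asserts.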
\section{Network Coding Layer}\label{sec:layer} We are now ready to explain our proposed scheme for End--to--End algebraic network coding based on MDS codes. It works as a transparent new layer between the Internet and the Transport layer of the Internet protocol stack \cite{rfc1122:1989}, see Figure~\ref{fig:stack}. Transparent means that the functionality of the NC layer is hidden from the existing layers, which simplifies deployment in existing networks. \begin{figure}[htbp] \centering \includegraphics[width=150pt]{stack} \caption{The Internet protocol stack extended by the NC layer.} \label{fig:stack} \end{figure} In our considerations, we identify the Transport layer with the TCP layer and the Internet layer with the IP layer. For simplicity, we assume a half duplex communication, i.e.\ that payload data is only transmitted in one direction. Further we assume that the connection setup and termination are always initiated by the transmitter side TCP instance. The generalization to full duplex is straightforward and does not yield any additional insights. Connection management segments like SYN or FIN usually pass the NC layer in a transparent manner. However, such segments invoke special connection management handling routines; see Section~\ref{subsec:conman} for details about connection management. At the transmitter, data segments arriving from the TCP are buffered until $k$ segments are in the buffer. As soon as the buffer contains $k$ segments (the information vector), they are encoded\footnote{See Section~\ref{subsec:header} for details about which parts of the TCP segments are encoded.} into $n$ segments (the codeword) using an MDS code $\ensuremath{\mathcal{C}}(q; n, k, \ensuremath{d_\mathrm{min}})$. The field parameter $m$ must be chosen such that a symbol of $\ensuremath{\mathbbm{F}}_q$, i.e.\ $m$ bit, matches the data segment size arriving from the TCP. To cope with this restriction the NC instance has to change the \emph{Maximum Segment Size (MSS)} parameter \cite{rfc879} offered by the receiving TCP instance during the connection negotiation. If the transmitting TCP instance uses segment sizes smaller than the (modified) MSS, the NC layer has to pad the data. Another approach would be to implement a strategy for segmenting in the NC layer. For convenience we assume in the following that the TCP instance always uses segments with size equal to the MSS. After encoding, the $n$ segments are passed to the IP layer. At the receiver, the NC layer acknowledges every received segment from the IP layer. It collects $k$ segments in a buffer; this quantity is sufficient for decoding since $\ensuremath{\mathcal{C}}$ has the MDS property. After decoding, the NC layer hands the decoded $k$ segments (the segments which arrived from the TCP at the transmitter) to the receiver side TCP. Back at the transmitter, the NC layer counts the acknowledgments of coded segments. If their number is greater than some system parameter which we call the {\em speculative ACK threshold}, then the NC layer acknowledges all $k$ data segments to the TCP. This is a {\em speculative acknowledgment} since at this time the receiver side TCP did not necessarily receive all the $k$ data segments -- some might still be stuck within the NC layer. However, the receiver side NC layer will be able to reconstruct the missing data segments from the subsequently arriving coded segments. Our simulations show that this happens very fast if the system parameters are chosen appropriately. 
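How likely the receiver is to complete a codeword can be estimated with a simple back-of-the-envelope model (our rough estimate, not a simulation result from this paper): if each NC segment independently survives with probability $1-p$, a codeword is decodable as soon as at least $k$ of its $n$ segments arrive, i.e.\ with probability $\sum_{i=k}^{n}\binom{n}{i}(1-p)^i p^{n-i}$.

\begin{verbatim}
# Rough model (ours): probability that a codeword of an (n, k) MDS code
# is decodable when each segment is erased independently with probability p.
from math import comb

def p_decode(n, k, p):
    return sum(comb(n, i) * (1 - p)**i * p**(n - i) for i in range(k, n + 1))

for p in (0.05, 0.1, 0.2, 0.3):
    print(f"p = {p:.2f}:  plain segment survives {1 - p:.2f},  "
          f"(16, 8) codeword decodes {p_decode(16, 8, p):.4f}")
\end{verbatim}

Even at erasure rates where individual segments are frequently lost, a rate-$1/2$ codeword still decodes with probability close to one, which is why the speculative acknowledgment is usually justified.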
The NC layer behavior at the transmitter and receiver is shown in Algorithms~\ref{alg:transmitter} and \ref{alg:receiver}, respectively. Note that the procedures handle\_TCP\_handshake() and handle\_TCP\_teardown() are both capable of handling incoming connection management segments (SYN and FIN, respectively) from both TCP and IP. \begin{algorithm}[htbp] {% \While{true} { $\mathrm{buffer} \leftarrow [\;]$\; $\mathrm{ACKcount} \leftarrow 0$\; \While{$\vert \mathrm{buffer} \vert < k$} { receive TCP segment from TCP\; \If{TCP segment is SYN} { handle\_TCP\_handshake()\; } \ElseIf{TCP segment is FIN} { handle\_TCP\_teardown()\; reset NC layer\; } \ElseIf{TCP segment is RST} { forward segment to IP\; reset NC layer\; } \Else { put TCP segment into buffer\; } } encode $k$ TCP segments into $n$ NC segments\; hand over the $n$ NC segments to IP\; \While{$\mathrm{ACKcount} < s$} { receive NC acknowledgment from IP\; $\mathrm{ACKcount} \leftarrow \mathrm{ACKcount} + 1$\; } acknowledge all buffered segments to TCP\; } } \caption{Network Coding Layer at the Transmitter} \label{alg:transmitter} \end{algorithm} \begin{algorithm}[htbp] {% \While{true} { $\mathrm{buffer} \leftarrow [\;]$\; \While{$\vert \mathrm{buffer} \vert < k$} { receive NC segment from IP\; \If{NC segment is SYN} { handle\_TCP\_handshake()\; } \ElseIf{NC segment is FIN} { handle\_TCP\_teardown()\; reset NC layer\; } \ElseIf{NC segment is RST} { forward segment to TCP\; reset NC layer\; } \Else { put NC segment into buffer\; } hand over NC acknowledgment to IP\; } decode the $k$ buffered NC segments, reconstructing the $n$ codeword segments\; extract the $k$ information segments, hand over to TCP\; discard TCP acknowledgment\; } } \caption{Network Coding Layer at the Receiver} \label{alg:receiver} \end{algorithm} After this coarse description of the NC layer, we now focus on some details. The NC layer copies most of the TCP layer's functionalities, e.g. sequence numbers and the header structure. The following subsections deal with the relations, similarities and differences between the NC and the TCP layers. \subsection{Header Structure}\label{subsec:header} The NC layer protocol reuses parts of the TCP header without any modification. To reduce protocol overhead, these common header parts can be stripped off the TCP header before encoding of the TCP segments. They can be easily reconstructed at the receiver side by simple extraction from the NC header. The common and stripped--off header parts are source port, destination port, all control flags except ACK, and sequence number. See the notes about the latter in Section~\ref{subsec:seqnum}. The reused and stripped--off header fields are blue shaded in Figure~\ref{fig:headers}. The remaining TCP header fields are not required by the NC layer. Thus, they are not part of the NC layer header and become part of the encoded TCP segment, i.e. the NC layer payload data. The TCP header fields which are encoded include acknowledgment number, offset, reserved, window, checksum, urgent pointer and options. They are non-shaded on the left-hand side of Figure~\ref{fig:headers}. Besides the reused TCP header fields, the NC layer adds two additional header fields, i.e. symbol indicator (symb.) and NC options. The new fields are yellow shaded on the right-hand side of Figure~\ref{fig:headers}. The symbol indicator determines the position of a segment within an MDS codeword; for details see the following section. The NC options are not used in the current basic version of our protocol. They might be used to signal adaptations of code rate or speculative ACK threshold in later versions. 
\begin{figure}[htbp] \centering \includegraphics[width=252pt]{headers} \caption{Header structure of the TCP (left) and NC (right) layers.} \label{fig:headers} \end{figure} \subsection{Sequence Numbers and Acknowledgments}\label{subsec:seqnum} We have already mentioned the symbol indicator field in the previous section. In the TCP layer, the sequence number is responsible for ordering segments at the receiver side. The sequence number is incremented by one for each transmitted segment. Consider an information vector consisting of $k$ TCP segments. The position of each TCP segment is uniquely determined by its sequence number. But now the information vector is encoded into a codeword consisting of $n$ segments. The NC layer should be able to uniquely determine the position of each segment; this is achieved by the symbol indicator field. The procedure is as follows: The first $k$ segments of an MDS codeword simply copy the sequence numbers from TCP to their NC sequence number field. Their symbol indicator field is set to all zeros. The remaining $n-k$ segments share the sequence number of the $k$-th segment but their symbol indicator field is incremented by one for each segment. Simple concatenation of sequence number and symbol indicator provides unique identification of each segment's position within an MDS codeword. More precisely, let $\mu$ denote the TCP sequence number and $\nu$ the symbol indicator. Then the value $2^8 \mu+\nu$ uniquely determines the position of each segment. The length of the symbol indicator field is $8$ bit, and the indicator value zero is reserved for the information segments, hence an MDS codeword can contain at most $255$ redundancy segments. This bounds the minimal code rate by \begin{equation*} R=\frac{k}{n}=\frac{k}{k+(n-k)}\geq\frac{k}{k+255}\geq\frac{1}{256}, \end{equation*} which seems to be sufficient for all practical applications. No TCP acknowledgments are transmitted over an NC enabled network. On the receiver side, any TCP acknowledgments are discarded immediately. On the transmitter side, the TCP acknowledgments are ``transmitted'' only between the NC and the TCP layers. Hence, the ACK flag is the only flag which is not reused and stripped off before encoding. It is overwritten by the NC layer and used for NC acknowledgment segments, i.e. NC layer segments with empty payload transmitted from the receiver to the transmitter. \subsection{Connection Management}\label{subsec:conman} Besides the change of the MSS, the NC layer does not introduce any new routines for connection setup and teardown. These functions are inherited from the TCP layer. If any connection management segment is detected by the NC layer it starts the corresponding routine. In case of a TCP teardown or a connection reset, the NC layer schedules a self--reset immediately. As a result, the TCP and the NC layers share a common state for the full duration of a connection. \section{Simulation Results}\label{sec:sim} In this section we give simulation results to obtain some insights into the general behavior, advantages and disadvantages of the proposed end--to--end algebraic network coding scheme based on MDS codes. All simulations were done using the network simulator NS--2 \cite{ns-2} and a basic network topology with one bottleneck link, see Figure~\ref{fig:topology}. Traffic is generated by two FTP sources A1 and A2 which transmit to sinks S1 and S2, respectively. The bottleneck in this setup is link N3$\rightarrow$N4, with a reduced data rate of only 1 Mbit/s. 
NC layer segments are erased within the network with probability {\em PER (packet erasure rate)}. \begin{figure}[htbp] \centering \includegraphics[width=210pt]{topology} \caption{Basic network topology for throughput and fairness simulations.} \label{fig:topology} \end{figure} For all simulations we have used an algebraic code with rate $R=k/n=1/2$ and the MDS property. We refrain from giving the detailed code parameters here since this section is devoted to the general behavior of the NC scheme. Figure~\ref{fig:comparison} compares the achievable network throughput in TCP segments per second for a range of PER values. As expected, the NC--enabled TCP is outperformed by traditional TCP in channels with low PER. However, when the channel state deteriorates, the traditional TCP throughput quickly falls below the NC--enabled version's throughput, which remains more or less constant over the full PER range. This gives rise to a natural extension of the NC scheme, which adapts the coding rate according to the current state of the channel. The NC options field in the NC layer header can be utilized for this purpose, see Section~\ref{subsec:header}. \begin{figure}[htbp] \centering \includegraphics[width=252pt]{TCP_NC_comp} \caption{Throughput comparison between NC--enabled TCP and traditional TCP for a range of packet erasure probabilities.} \label{fig:comparison} \end{figure} The throughput gain of NC--enabled TCP can be interpreted as follows. In a traditional TCP environment, the protocol copes with increased PER by retransmission of non-acknowledged segments via {\em automatic repeat request (ARQ)}. This causes a significant amount of delay, especially when the PER is very high. The NC--enabled TCP's strategy to cope with increased PER is to send redundant segments in advance, such that the receiver does not have to wait for retransmissions even if segments are erased in the network. Depending on the round trip time and the selected code parameters, this enables the NC--enabled TCP to achieve a higher throughput than traditional TCP, as shown by our simulations. Besides throughput, the most important performance measure of the NC--enabled TCP is fairness, i.e. how fairly the available network capacity is distributed among the transmitting nodes of a network. We should distinguish between fairness \begin{itemize} \item of the NC--enabled TCP against traditional TCP and \item within a pure NC--enabled environment. \end{itemize} The fairness simulation results for both cases are shown in Figure~\ref{fig:nc_vs_tcp} and Figure~\ref{fig:nc_vs_nc}, respectively. In both figures, two connections are active over the bottleneck link. In terms of Figure~\ref{fig:topology} this means that node A1 is transmitting to node S1 and node A2 is transmitting to node S2. In the NC/TCP case of Figure~\ref{fig:nc_vs_tcp}, it can be seen that NC--enabled TCP does not behave fairly toward traditional TCP. It dominates the bottleneck link and at any time instant achieves a higher throughput than traditional TCP. However, traditional TCP can still use a constant, non-zero share of the bottleneck's capacity, and hence gradual deployment of NC--enabled TCP is still feasible. 
\begin{figure}[htbp] \centering \includegraphics[width=252pt]{nc_vs_tcp} \caption{Fairness comparison in a mixed environment where the traditional TCP transmission starts before the NC--enabled TCP transmission.} \label{fig:nc_vs_tcp} \end{figure} In the NC/NC case of Figure~\ref{fig:nc_vs_nc}, we observe that after a short balancing period, both NC--enabled TCP links receive approximately the same share of the bottleneck's capacity. Thus, our scheme can be considered fair in a homogeneous environment. \begin{figure}[htbp] \centering \includegraphics[width=252pt]{nc_vs_nc} \caption{Fairness comparison in an NC--enabled TCP environment.} \label{fig:nc_vs_nc} \end{figure} For reference, we also give fairness simulation results for a pure traditional TCP environment in Figure~\ref{fig:tcp_vs_tcp}. \begin{figure}[htbp] \centering \includegraphics[width=252pt]{tcp_vs_tcp} \caption{Fairness comparison in a traditional TCP environment.} \label{fig:tcp_vs_tcp} \end{figure} \section{Conclusions} In this paper we proposed a network coding scheme based on MDS codes for application in TCP/IP networks over wireless links. A new layer between the IP and TCP layers was introduced, which hides segment losses occurring in the network from the TCP. This is achieved by encoding a set of $k$ TCP segments into a larger set of $n$ encoded NC layer segments and transmitting the larger set. The receiver is then able to reconstruct all $n$ segments from an arbitrary subset of cardinality $k$, using the MDS property of the code. Simulations showed that the throughput of a connection is superior to that of traditional TCP if the packet erasure probability is sufficiently large. Furthermore, the fairness between two TCP streams was studied. It turns out that an NC--enabled TCP stream dominates a traditional TCP stream. However, the fairness property is retained if both streams use NC--enabled TCP. The proposed network coding scheme raises an abundance of new research questions. First, optimal code parameters should be derived for realistic settings, i.e. the symbol length should match the average segment size of TCP to avoid padding. Second, the scheme should be modified such that it can adapt to the channel conditions. It is counterproductive to encode segments when the packet erasure rate is small, since in this case the code rate bounds the achievable throughput. In an adaptive version of the scheme, the code rate might be adapted such that high rates (in the extreme case even code rate $R=1$, which means no encoding at all) are used at low packet erasure rates and smaller code rates are used in the opposite case. Third, one might think about exploiting the error correction capabilities of the utilized MDS codes, i.e. coping with maliciously introduced packet errors.
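Regarding the second research question above, a minimal sketch of such an adaptation rule (our illustration of the idea, not part of the proposed protocol) could pick, for an estimated segment erasure rate $p$, the smallest codeword length $n$ that reaches a target decoding probability; the search is capped at $n-k=255$ redundancy segments because of the $8$ bit symbol indicator field.

\begin{verbatim}
# Sketch (ours) of an adaptive rate rule: choose the smallest n >= k such
# that an (n, k) MDS codeword decodes with probability >= target, given an
# estimated segment erasure rate p.
from math import comb

def p_decode(n, k, p):
    return sum(comb(n, i) * (1 - p)**i * p**(n - i) for i in range(k, n + 1))

def choose_n(k, p, target=0.99):
    for n in range(k, k + 256):            # at most 255 redundancy segments
        if p_decode(n, k, p) >= target:
            return n
    return k + 255

for p in (0.0, 0.05, 0.1, 0.2, 0.3):
    n = choose_n(8, p)
    print(f"p = {p:.2f} -> n = {n:2d}, code rate R = {8 / n:.2f}")
\end{verbatim}

For $p=0$ the rule returns $n=k$, i.e.\ code rate $R=1$ (no encoding at all), and the rate decreases as the channel deteriorates, as envisaged above.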
\chapter*{Acknowledgement} \addtocontents{toc}{\protect\contentsline {chapter}{\protect\numberline {\ }Acknowledgement}{iii}} I want to thank all the people who have directly or indirectly contributed to this work and made my stay in Copenhagen possible, in particular: \begin{itemize} \item my supervisor Jan Ambj{\o}rn for the interesting project, for many discussions, for a good collaboration and for partial financial support of my visit in Santa Fe at the workshop New Directions in Simplicial Quantum Gravity, \item Jakob L.~Nielsen for many discussions and for a good collaboration, \item my collaborators Dimitrij Boulatov, George Savvidy and Yoshiyuki Watabiki for the good work done together, \item Martin Harris and Jakob L. Nielsen for proofreading the thesis, \item my parents and my wife for their support and encouragement, \item Konstantinos N.~Anagnostopoulos, Martin Harris, Lars Jensen, Jakob L.~Nielsen, Kaspar Olsen, J{\o}rgen Rasmussen and Morten Weis for some discussions and for creating the nice atmosphere in our office, \item the Niels Bohr Institute for warm hospitality, \item the Studienstiftung des deutschen Volkes for financial support. \end{itemize} \newpage\thispagestyle{empty} ~ \newpage \setcounter{chapter}{0} \chapter*{Introduction} \addtocontents{toc}{\protect\contentsline {chapter}{\protect\numberline {\ }Introduction}{1}} This Ph.D. thesis pursues two goals. The first is the study of the geometrical structure of two-dimensional quantum gravity, in particular of its fractal nature. To address these questions we review the continuum formalism of quantum gravity with special focus on the scaling properties of the theory. We discuss several concepts of fractal dimensions which characterize the extrinsic and intrinsic geometry of quantum gravity. This work is partly based on work done in collaboration with Jan Ambj{\o}rn, Dimitrij Boulatov, Jakob L. Nielsen and Yoshiyuki Watabiki \cite{Ambjorn:1997jf}. The second goal is the discussion of the discretization of quantum gravity and of the so-called quantum failure of Regge calculus. We review dynamical triangulation and show that it agrees with the continuum theory in two dimensions. Then we discuss Regge calculus and prove that a continuum limit cannot be taken in a sensible way and that it does not reproduce continuum results. This work is partly based on work done in collaboration with Jan Ambj{\o}rn, Jakob L.~Nielsen and George Savvidy \cite{Ambjorn:1997ub}. In chapter \sref{chap1} we introduce the main ingredients for the formulation of two-dimensional quantum gravity as a Euclidean functional integral over geometries. It contains a brief reminder of Liouville theory and of the technical issues in the continuum formalism. We use these techniques to discuss the extrinsic and intrinsic Hausdorff dimension and the spectral dimension of two-dimensional quantum gravity. Chapter \sref{disc} is a review of dynamical triangulation in two dimensions. We begin with an introduction of the main ideas of how to discretize two-dimensional quantum geometries. The scaling properties are illustrated by means of the two-point function of pure gravity and of branched polymers. In chapter \sref{regge} we discuss quantum Regge calculus which has been suggested as an alternative method to discretize quantum geometries. We prove by a simple scaling argument that a sensible continuum limit of this theory cannot be taken and that it disagrees with continuum results. 
\pagenumbering{arabic} \setcounter{chapter}{0} \chapter{Two-dimensional quantum gravity} \label{sec:chap1} Euclidean quantum gravity is an attempt to quantize general relativity based on Feynman's functional integral and on the Einstein-Hilbert action principle. One integrates over all Riemannian metrics on a $d$-dimensional manifold $M$. It is based on the hope that one can recover the Lorentzian signature after performing the integration, analogously to the Wick rotation in Euclidean quantum field theory. For a general discussion of further problems and for motivation of a theory of quantum gravity we refer to \cite{Hawking:1979zw,Hawking:1980gf,Isham:1995wr}. General relativity is a reparametrization invariant theory which can be formulated with no reference to coordinates at all. This diffeomorphism invariance is a central issue in the quantum theory. Its importance is most apparent in two dimensions, since the Einstein-Hilbert action is trivial and consists only of a topological term and the cosmological constant coupled to the volume of the spacetime. All the non-trivial dynamics of the two-dimensional theory of quantum gravity thus come from gauge fixing the diffeomorphisms while keeping the geometry exactly fixed. This is the famous representation of the functional integral over geometries as a Liouville field theory by Polyakov \cite{Polyakov:1981rd}. Based on this formulation the scaling exponents can be obtained \cite{Knizhnik:1988ak,David:1988hj,Distler:1989jt}. Any theory of quantum gravity must aim at answering questions about the geometrical structure of quantum spacetime. The interplay between matter and geometry is well known from general relativity. The quantum average over all geometries changes the dynamics of this interaction. It turns out that the quantum spacetime has a fractal nature and that even its dimension is a dynamical quantity. The characterization of the fractal nature of the quantum spacetime in two-dimensional quantum gravity is one of the central themes of this work. In section \sref{sqg} the main concepts in the continuum formalism are introduced. In the following section we review technical details about the factorization of the diffeomorphisms from the functional integral. In section \sref{fd} we introduce fractal dimensions to characterize the fractal nature of quantum spacetime. That section, and in particular section \sref{erg:sd}, is partly based on work presented in \cite{Ambjorn:1997jf,Ambjorn:1997pr}. Reviews about the continuum approach to two-dimensional quantum gravity can be found in \cite{Ginsparg:1993is} and the references therein. For an introduction to conformal field theory see \cite{Belavin:1984vu,Cardy:1987,Cardy:1988}. \section{Scaling in quantum gravity} \label{sec:sqg} In this section we introduce the basic concepts in two-dimensional quantum gravity. We begin with the partition function and the Hartle-Hawking wavefunctionals. The most natural object to address questions about the scaling behaviour of the theory is the geodesic two-point function. We discuss its scaling behaviour in detail and introduce the concept of the intrinsic fractal dimension which illustrates the fractal nature of the intrinsic geometry of the two-dimensional quantum space-time. \subsection{Partition function} Let $M$ be a two-dimensional, closed, compact, connected, orientable manifold of genus $h$. 
Then the partition function for two-dimensional quantum gravity can formally be written as the functional integral \begin{equation} \label{e1} Z(G,\Lambda) = \sum_{h\geq 0} \int\! {\cal D}[{\gmn}]\ e^{-S_{\text{EH}}(g;G,\Lambda)} \int {\cal D}_gX\ e^{-S_{\text{matter}}(g, X)}. \end{equation} Here \begin{equation} \label{e2} S_{\text{EH}}(g;G,\Lambda) = \Lambda \int_{M}\! d^2\xi \sqrt{g} -\frac{1}{4\pi G} \int_{M}\! d^2\xi \sqrt{g}\ {\cal R}(\xi) \end{equation} is the classical reparametrization invariant Einstein-Hilbert action \cite{Einstein:1916,Hilbert:1915} with the gravitational coupling constant $G$, the cosmological constant $\Lambda$ and the curvature scalar $\cal R$. According to the Gauss--Bonnet theorem, the last term in (\ref{e2}) is a topological invariant, called the Euler characteristic $\chi$ of $M$: \begin{equation} \label{e3} \chi(h) = \frac{1}{4\pi} \int_{M}\! d^2\xi \sqrt{g}\ {\cal R}(\xi) = 2-2h, \end{equation} while the first term is the volume $V_g$ of $M$ equipped with the metric ${\gmn}$: \begin{equation} \label{e4} V_g = \int_{M}\! d^2\xi \sqrt{g}. \end{equation} Therefore, \eref{e1} can be rewritten as \begin{equation} \label{e5} Z(G,\Lambda) = \sum_{h\geq 0} e^{\frac{\chi(h)}{G}} Z(\Lambda), \end{equation} where $Z(\Lambda)$ is defined as \begin{equation} \label{e5a} Z(\Lambda) = \int\! {\cal D}[{\gmn}]\ e^{-S({g}, \La)} \int\! {\cD}_gX\ e^{-S_{\text{matter}}(g, X)}, \end{equation} with $S({g}, \La) = \La V_g$. $S_{\text{matter}}(g, X)$ in \eref{e1} denotes any reparametrization invariant action for conformal matter fields $X$ with central charge $D$ coupled to gravity. A typical example is the coupling of $D$ free scalar matter fields $X^1, \ldots, X^D$ to gravity. In this case \begin{equation} \label{e2a} S_{\text{matter}}(g, X) = \frac{1}{8\pi} \int_M\! d^2\xi\sqrt{g}\ g^{\mu\nu} \partial_{\mu}X^a\partial_{\nu}X^a, \end{equation} which is diffeomorphism invariant and invariant under Weyl rescalings of the metric: \begin{equation} \label{e5b} S_{\text{matter}}(e^{\phi}g, X) = S_{\text{matter}}(g, X). \end{equation} Note that \eref{e2a} can also be interpreted as an embedding of $M$ in a $D$-dimensional Euclidean space, thus leading to an interpretation of \eref{e1} as bosonic string theory in $D$ dimensions \cite{Polyakov:1981rd}. The functional integration $\int\!\cD[\gmn]$ in \eref{e1} is an integration over all geometries, that means all diffeomorphism classes $[\gmn]$ of metrics $\gmn$ on the manifold $M$. This is often denoted formally as \begin{equation} \label{add1} \int\!\frac{\cD\gmn}{\text{Vol(Diff)}}, \end{equation} where $\text{Vol(Diff)}$ is the volume of the group of diffeomorphisms on $M$. Since this group is not compact the quotient \eref{add1} does not make sense beyond a formal level. We will derive expressions for the measures $\cD\gmn$, $\cD[\gmn]$ and $\cD_gX$ in section \sref{cc}. In \eref{e1} the sum goes over all topologies of two-dimensional manifolds, that means over all genera $h$. It is presently unknown how to define such a sum over topologies other than as a perturbative expansion in $h$. It comes as a bitter aspect of the theory that we only know how to perform the functional integration in \eref{e1} for a given manifold and thus only for a given fixed topology, while a summation over topologies has to be performed by hand. In four dimensions it is known that the topologies cannot even be classified. Therefore the role of topological fluctuations is still unclear in two and in higher dimensions. 
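As a simple illustration of \eref{e3} (a standard consistency check which we add for orientation): for the round two-sphere of radius $r$, which has genus $h=0$, the curvature scalar is ${\cal R}=2/r^2$ and the volume is $V_g=4\pi r^2$, so that
\[
\chi = \frac{1}{4\pi} \int_{M}\! d^2\xi \sqrt{g}\ {\cal R}(\xi)
     = \frac{1}{4\pi}\,\frac{2}{r^2}\, 4\pi r^2 = 2 = 2-2h,
\]
independently of $r$, as it must be for a topological invariant.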
In the remainder of this work we will always fix the topology and thereby disregard the sum over the genera in \eref{e1}. Sometimes it is useful to define the partition function in an ensemble of universes with fixed volume $V$: \begin{eqnarray} \label{e3a} Z(V) &=& \int\! {\cal D}[{\gmn}]\ \de(V-V_g) \int {\cal D}_gX\ e^{-S_{\text{matter}}(g, X)}\nonumber\\ &\equiv& \int\! {\cal D}[{\gmn}]_V\ \int {\cal D}_gX\ e^{-S_{\text{matter}}(g, X)}. \end{eqnarray} $Z(\La)$ is the Laplace transform of $Z(V)$ \footnote{That is also the reason why we denote both with the same symbol and distinguish between them by the names of their arguments.}, that means: \begin{eqnarray} \label{e3b} Z(\La) &=& \int_0^{\infty}\! dV\ e^{-\La V} Z(V),\\ Z(V) &=& \int_{\sigma-i\infty}^{\sigma+i\infty} \frac{d\La}{2\pi i}\ e^{\La V} Z(\La), \end{eqnarray} where $\sigma$ is a real constant which exceeds the real part of all the singularities of $Z(\La)$. Formula \eref{e3a} shows that although the action of pure two-dimensional quantum gravity is trivial and does not contain any propagating degrees of freedom, quantum gravity in two dimensions is a true quantum problem, since each equivalence class of metrics is counted once with the same weight. That means that there is no classical spacetime around which we expand and the theory is as ``quantum-like'' as it can get. Therefore it might contain important information about quantum gravity in higher dimensions. As we show in section \sref{cc} $Z(V)$ scales as \begin{equation} \label{e3c} Z(V) \propto V^{\ga-3}, \quad\text{or~} Z(\La) \propto \La^{2-\ga}, \end{equation} which defines a scaling exponent $\ga$ that depends on the matter coupled to gravity and on the topology of the universe. The computation of $\ga$ gives the result \cite{Knizhnik:1988ak,David:1988hj,Distler:1989jt} \begin{equation} \label{e3d} \ga = 2 + \frac{1-h}{12}\left(D-25 - \sqrt{(25-D)(1-D)}\right). \end{equation} This formula is often called KPZ-formula. For pure gravity ($D=0$) $\ga$ equals $-1/2$, for the Ising model coupled to gravity ($D=1/2$) we have $\ga = -1/3$ and at $D=1$, $\ga$ equals $0$ for spherical topology. Clearly this formula breaks down for conformal charges $D>1$, where $\ga$ assumes complex values. This has sometimes been called the $D=1$ barrier. There is now some evidence for the fact that two-dimensional quantum gravity coupled to matter with central charge $D>1$ is in a branched polymer phase~\cite{Harris:1996hk,David:1996vp} in which the surfaces collapse to tree-like objects. The expectation value of some observable $\cO(g,X)$ is defined as \begin{equation} \label{add2} \bra \cO(g,X)\ket_{\La} = \frac{1}{Z(\La)} \int\! {\cal D}[{\gmn}]\ e^{-\La V_g} \int\! {\cD}_gX\ e^{-S_{\text{matter}}(g, X)} \cO(g,X), \end{equation} or for fixed volume $V$ as \begin{equation} \label{add3} \bra \cO(g,X)\ket_{V} = \frac{1}{Z(V)} \int\! {\cal D}[{\gmn}]_V \int\! {\cD}_gX\ e^{-S_{\text{matter}}(g, X)} \cO(g,X). \end{equation} \subsection{Hartle-Hawking wavefunctionals} \label{sec:hh} Let us in the remainder of this section ignore the coupling of possible matter fields to quantum gravity. The definitions below can easily be expanded to the general case. Furthermore we concentrate on spherical topology only. Again the generalization is straightforward. Typical observables in two-dimensional quantum gravity are loop amplitudes, that means amplitudes for one-dimensional universes. Let $M$ be topologically equivalent to a sphere with $b$ holes. 
The induced gravity on the boundary is accounted for by the modified action \begin{equation} \label{e7} S(g,\La,Z_1, \ldots,Z_b) = \La V_g + \sum_{i=1}^b Z_i L_{g,i}, \end{equation} where $L_{g,i}$ denotes the length of the $i$-th boundary component in the metric ${\gmn}$. In this case the partition function is given by \begin{equation} \label{e8} W(\La, Z_1, \ldots, Z_b) = \int\! {\cal D}[{\gmn}]\ e^{-S(g, \La, Z_1, \ldots, Z_b)}. \end{equation} This can also be interpreted as the amplitude for $b$ one-dimensional universes of arbitrary lengths. Since the lengths of the boundary components are invariant under diffeomorphisms it makes sense to fix them to prescribed values $L_1, \ldots, L_b$. This yields the definition of the Hartle-Hawking wave functionals \cite{Hartle:1983ai}: \begin{equation} \label{e6} W(\La, L_1, \ldots, L_b) = \int\! {\cal D}[{\gmn}]\ e^{-S(g, \La)} \prod_{i=1}^b \delta(L_i-L_{g,i}). \end{equation} We note that (\ref{e8}) is the Laplace transform of (\ref{e6}), that means: \begin{equation} \label{e9} W(\La, Z_1, \ldots, Z_b) = \int_{0}^{\infty} \prod_{i=1}^b dL_i e^{-Z_iL_i}\ W(\La, L_1, \ldots, L_b). \end{equation} In the ensemble of universes with fixed volume $V$ the wave functionals are given by: \begin{equation} \label{e12} W(V,L_1, \ldots, L_b) = \int\! {\cal D}[{\gmn}]\ \delta(V-V_g)\prod_{i=1}^b\delta(L_i-L_{g,i}). \end{equation} The cosmological constant $\La$ and the volume $V$ are conjugate variables, that means: \begin{equation} \label{e13} W(\La, L_1, \ldots, L_b) = \int_0^{\infty}\! dV\ e^{-\La V} W(V, L_1, \ldots, L_b). \end{equation} In the case $b=0$ one recovers the partition functions \eref{e5a} and \eref{e3a}. \subsection{The two-point function} \label{sec:obs} To define reparametrization invariant correlation functions let $d_g(\xi,\xi')$ be the geodesic distance between two points $\xi$ and $\xi'$ with respect to the metric ${\gmn}$. Then the invariant two-point function is defined as \begin{equation} \label{e10} G(\La,R) = \int\! {\cal D}[{\gmn}]\ e^{-S(g,\La)} \int_M\! d^2\xi \sqrt{g(\xi)} \int_M\! d^2\xi'\sqrt{g(\xi')}\ \delta(R-d_g(\xi,\xi')). \end{equation} $G(\La,R)$ can also be interpreted as the partition function for universes with two marked points separated by a geodesic distance $R$. The integrated two-point function, called the susceptibility $\chi$, has the following behaviour: \begin{equation} \label{e10a} \chi(\La) = \int_0^{\infty}\! dR\ G(\La, R) = \frac{\partial^2 Z(\La)}{\partial \La^2} \sim \La^{-\ga}. \end{equation} Therefore $\ga$ is often called the susceptibility exponent. Yet another characterization of the exponent $\ga$ from the branching ratio of the two-dimensional universes will be explained in chapter \sref{disc}. For fixed volume of spacetime the definition of the two-point function is \begin{equation} \label{e16} G(V, R) = \int\! {\cal D}[{\gmn}]_V\ \int_M\! d^2\xi\sqrt{g(\xi)} \int_M\! d^2\xi'\sqrt{g(\xi')}\ \delta(R-d_g(\xi, \xi')), \end{equation} which is related to formula (\ref{e10}) via \begin{equation} \label{e17} G(\La,R) = \int_0^{\infty}\! dV\ e^{-\La V} G(V, R). \end{equation} Similarly to (\ref{e10}), a whole set of invariant correlation functions which depend on the geodesic distance can be defined by multiplying the local measures $d^2\xi\sqrt{g(\xi)}$ and $d^2\xi'\sqrt{g(\xi')}$ by powers of the invariant curvature scalars ${\cal R}(\xi)$ and ${\cal R}(\xi')$. Note that by (\ref{e10}) or \eref{e16} the concept of geodesic distance becomes meaningful even in quantum gravity. 
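As a quick consistency check (added here for orientation), the scaling \eref{e3c} and the definition \eref{e10a} fit together: inserting $Z(\La)\propto \La^{2-\ga}$ gives
\[
\chi(\La) = \frac{\partial^2 Z(\La)}{\partial \La^2}
          \propto (2-\ga)(1-\ga)\,\La^{-\ga},
\]
and, formally, $Z(\La)=\int_0^{\infty}\! dV\ e^{-\La V}\, V^{\ga-3} \propto \Gamma(\ga-2)\,\La^{2-\ga}$, where for $\ga<2$ the divergence of the integral at small $V$ only affects non-universal, analytic terms.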
The two-point function can be interpreted as the partition function of the ensemble of universes which have two marked points separated by a geodesic distance $R$. Both its short-distance and its long-distance behaviour reveal the fractal structure of the most important metrics that contribute to the functional integral (\ref{e1}). An important property of the two-point function \eref{e10} is the inequality \begin{equation} \label{e16a} G(\La, R_1+R_2) \geq \text{const}\times G(\La, R_1)\ G(\La, R_2), \end{equation} which follows from a simple gluing argument that is explained in chapter \sref{disc}. Up to a constant, \eref{e16a} is equivalent to the subadditivity of $-\log{G(\La, R)}$, from which the existence of the limit \begin{equation} \label{e16b} \lim_{R\rightarrow\infty} \frac{-\log{G(\La, R)}}{R} = M(\La) \end{equation} follows by general arguments (see for example \cite{Ruelle:1977}). Furthermore one can deduce that $M(\La)$ is an increasing function of $\La$ and $M(\La)>0$ for $\La>0$. At this stage one cannot prove that $M(\La)$ scales to zero as $\La$ goes to zero. However, let us assume that this is the case and verify it later by an explicit calculation of the two-point function in section \sref{fs}: \begin{equation} \label{e16c} M(\La) = c\ \La^{\nu}, \end{equation} with a dimensionless constant $c$. That means that for large $R\gg M(\La)^{-1}$ the two-point function falls off exponentially with a subleading correction: \begin{equation} \label{e16d} G(\La, R) \sim \La^{\nu-\ga}\ e^{-c\La^{\nu}R},\text{~for~} R\gg\frac{1}{M(\La)}. \end{equation} The power correction can be found by applying dimensional arguments to \eref{e10a}. We conclude that the average volume of two-dimensional universes with two marked points separated by a geodesic distance $R$ is proportional to $R$ if $R$ is large enough: \begin{equation} \label{e16e} \bra V\ket_R \equiv - \frac{\partial \log{G(\La, R)}}{\partial \La} \underset{R\gg M(\La)^{-1}}{\sim} \La^{\nu - 1} R. \end{equation} That means, for large $R$ typical universes have the shape of long thin tubes. On the other hand, for $R\sim\La^{-\nu}$ the exponential decay turns over into some power law and we get \begin{equation} \label{e16f} \bra V\ket_R \sim R^{\frac{1}{\nu}}, \text{~for~}R\sim\La^{-\nu}, \end{equation} by inserting into \eref{e16e}. By definition, the exponent of $R$ in this equation equals the (grand canonical) intrinsic Hausdorff dimension $d_H=\frac{1}{\nu}$. The large $R$ behaviour of $G(V, R)$ can be computed from \eref{e16d} by a saddle-point calculation. The result up to power corrections is: \begin{equation} \label{e16g} {G(V,R)} \sim e^{-\tilde{c}\left( \frac{R}{V^{\nu}}\right)^{\frac{1}{1-\nu}}}, \text{~for~}\frac{R}{V^{\nu}}\gg 1, \end{equation} where $\tilde{c}=\frac{1-\nu}{\nu}(\nu c)^{\frac{1}{1-\nu}}$. Another concept of fractal dimension can be applied in the ensemble of surfaces with fixed volume $V$ for small distances $R$. Let us define the average length of boundaries on a manifold $M$ with the metric $\gmn$, which have a geodesic distance $R$ from a marked point, as \begin{equation} \label{e17a} l(g,R) = \frac{1}{V_g} \int_{M}\! d^2\xi\sqrt{g(\xi)} \int_{M}\! d^2\xi'\sqrt{g(\xi')}\ \de(R-d_g(\xi,\xi')). \end{equation} Then $l(g,R)dR$ is the average volume of a spherical shell of radius $R$ and thickness $dR$. The quantum expectation value in the ensemble of universes with fixed volume is given by \begin{equation} \label{e17b} \bra l(g,R)\ket_V = \frac{1}{Z(V)} \int\! 
\cD[\gmn]_V\ l(g,R) = \frac{G(V,R)}{VZ(V)}. \end{equation} Now we conclude from \begin{equation} \label{e17ca} \int_0^{\infty}\! dR\ \bra l(g,R)\ket_V = V \end{equation} that \begin{equation} \label{add4} \text{dim}[V] = \text{dim}[R] + \text{dim}[\bra l(g,R)\ket_V]. \end{equation} Thus $\bra l(g,R)\ket_V$ has the scaling behaviour \begin{equation} \label{add5} \bra l(g,R)\ket_V=V^{1-\nu}\ F\left(\frac{R}{V^{\nu}}\right), \end{equation} where $F(x)$ is a function which falls off exponentially for large $x$, see \eref{e16g}. The (canonical) intrinsic Hausdorff dimension $d_h$ is now defined by the scaling of $\bra l(g,R)\ket_V\sim R^{d_h-1}$ for small $R$. To be precise we expand \eref{add5} around $R=0$ and get: \begin{equation} \label{e17e} \bra l(g,R)\ket_V \sim V^{1-\nu d_h}\ R^{d_h-1}, \text{~for~} R\ll V^{\frac{1}{d_h}}. \end{equation} For smooth $d$-dimensional manifolds we have $d_H=d_h=d$. If $\bra l(g,R)\ket_V$ stays nonzero and finite for $V\rightarrow \infty$ we have \begin{equation} \label{e17f} \nu d_h=1, \text{~that means $d_H=d_h$}. \end{equation} This requirement is called the smooth fractal condition. It means that the average circumference of circles with a small geodesic radius $R$ does not depend on the global volume of the universe. However it is well known that the smooth fractal condition is not fulfilled for the model of multicritical branched polymers \cite{Ambjorn:1990wp}, compare section \sref{bpo}. Therefore its validity is non-trivial and should not be taken for granted uncritically. It turns out that in two-dimensional quantum gravity the smooth fractal condition is fulfilled. That means that there is only one fractal dimension for short and for long distances together. For pure gravity, in the case of spherical topology, the two-point function $G(\La, R)$ can be calculated exactly with a transfer-matrix method \cite{Kawai:1993cj} or alternatively by a peeling method \cite{Watabiki:1995ym}. The result \begin{equation} \label{e17g} G(\La, R) \propto \La^{3/4} \frac{\cosh{c\La^{1/4}R}}{\sinh^3{c\La^{1/4}R}} \end{equation} fulfills the standard scaling relations, but with non-standard exponents. $G(\La, R)$ falls off as $e^{-2c\La^{1/4}R}$, from which we read off $\nu=1/4$, so that the Hausdorff dimension of pure two-dimensional quantum gravity equals {\em four}. Another scaling exponent, though not independent of $\ga$ and $\nu$, can be defined by: \begin{equation} \label{e17h} G(\La, R) \sim R^{1-\eta}, \text{~for $1\ll R\ll \La^{-\nu}$}. \end{equation} $\eta$ is called the anomalous scaling dimension. By expanding \eref{e17g} for small $R$ we get \begin{equation} \label{e17j} G(\La, R) = \frac{1}{R^{3}} - \frac{1}{15}\La R + O(R^3), \end{equation} and thus $\eta=4$. This is a notable result, since in ordinary statistical systems $\eta$ is always smaller than $2$. The so-called Fisher scaling relation \begin{equation} \label{e17i} \ga = \nu(2-\eta) \end{equation} relates the exponents defined above. It can be derived by applying dimensional arguments to \eref{e10a}. \section{Liouville theory: A brief reminder} \label{sec:cc} A central problem in the continuum approach to two-dimensional quantum gravity is posed by the diffeomorphism invariance of the theory. It is comparatively easy \cite{Mottola:1995sj} to derive an expression for the formal integration $\cD\gmn$ since one can define a natural scalar product on the cotangent space to the space of all metrics which defines a volume form in the same way as in finite dimensional Riemannian geometry. 
However, since the measure and the action are diffeomorphism invariant, these expressions are ill-defined. Therefore a gauge fixing and the factorization of the diffeomorphisms from the measure are required. This has been performed in \cite{Polyakov:1981rd} where the functional integration over geometries is expressed as a Liouville field theory. This theory has been developed and explained in \cite{Alvarez:1983zi,Moore:1986}. The measure for the Liouville mode is very complicated. Therefore two-dimensional quantum gravity has strictly speaking not been solved yet in the continuum approach. However, the critical scaling exponents could be obtained in the light-cone gauge \cite{Knizhnik:1988ak} and later in the conformal gauge by consistent scaling assumptions \cite{David:1988hj,Distler:1989jt}. \subsection{Functional measures} \label{sec:fm} Let $R(M)$ be the space of all positive definite Riemannian metrics on a $d$-dimensional manifold ${M}$ and let $T_gR(M)$ be its cotangent space at a point $\gmn\in R(M)$. To define the functional measure $\cD\gmn$ one makes use of the fact that the cotangent space $T_gR(M)$ can be naturally equipped with a diffeomorphism invariant scalar product $\bra\cdot,\cdot\ket_T$. This is used to define a measure $\cD_g\de\gmn$ on $T_gR(M)$ which descends to a measure on $R(M)$. The line element $ds^2$ for fluctuations $\de\gmn\in T_gR(M)$ of the metric can be written as \begin{equation} \label{meas1.1} ds^2 = \bra\de g,\de g\ket_T\equiv \int_{{M}}\! d^d\xi \sqrt{g}\ \dgmn \GMNAB \dgab, \end{equation} with the DeWitt (super)metric \cite{DeWitt:1967} \begin{equation} \label{meas1.3} \GMNAB = \frac{1}{2} ( g^{\mu\al} g^{\nu\be} + g^{\mu\be} g^{\nu\al} + C \gMN\gAB). \end{equation} Up to an overall normalization constant, \eref{meas1.1} is the only diffeomorphism invariant, ultralocal (that means dynamically neutral) distance on $T_gR(M)$. The constant $C$ in \eref{meas1.3} takes the value $-2$ in canonical quantum gravity \cite{DeWitt:1967}. In our framework $C$ cannot be computed. The choice of $C$ determines the signature of the metric $\bra\cdot,\cdot\ket_T$, as can be seen by splitting the fluctuation $\de\gmn$ into its trace part $\gmn\de c$ and its tracefree part $\de h_{\mu\nu}$. This decomposition is orthogonal with respect to $\bra\cdot,\cdot\ket_T$. On the tracefree subspace $\GMNAB$ has eigenvalue $1$, while it has eigenvalue $1+\frac{Cd}{2}$ on the trace sector. Thus, the DeWitt metric is positive definite for $C>-\frac{2}{d}$ and indefinite for $C<-\frac{2}{d}$. For $C=-\frac{2}{d}$, $\GMNAB$ is the orthogonal projection on the tracefree part of $\de\gmn$. The inverse of the DeWitt metric, which satisfies \begin{equation} \label{meas1.10} \Gmnab G^{\al\be,\rho\sig} = \frac{1}{2} \left( \de_{\mu}^{\rho} \de_{\nu}^{\sig} + \de_{\mu}^{\sig} \de_{\nu}^{\rho} \right), \end{equation} is given by \begin{equation} \label{meas1.11} \Gmnab = \frac{1}{2} \left( g_{\mu\al} g_{\nu\be} + g_{\mu\be} g_{\nu\al} -\frac{C}{1+\frac{Cd}{2}}{\gmn} g_{\al\be} \right), \end{equation} as can be verified by multiplication with \eref{meas1.3}. The line element \eref{meas1.1} can be rewritten as \begin{equation} \label{meas1.7} ds^2 = \int_{{M}}\! d^d\xi \int_{{M}}\! 
d^d\xi'\ \sum_{\mu\leq\nu}\sum_{\al\leq\be} \dgmn(\xi) \Om^{\mu\nu,\al\be}(\xi,\xi') \dgab(\xi'), \end{equation} with $\Om^{\mu\nu,\al\be}(\xi,\xi')= \sqrt{g(\xi)}\ \Ga^{\mu\nu,\al\be}(\xi)\ \de(\xi-\xi')$, and the $\frac{d(d+1)}{2}\times\frac{d(d+1)}{2}$ matrix $\Ga^{\mu\nu,\al\be}(\xi)$ is defined as \begin{equation} \label{meas1.7a} \Ga^{\mu\nu,\al\be}(\xi) = \left\{ \begin{array}{ll} 4\GMNAB(\xi), & \text{if $\mu<\nu$ and $\al<\be$},\\ 2\GMNAB(\xi), & \text{if $\mu<\nu$ and $\al=\be$, or $\mu=\nu$ and $\al<\be$},\\ \GMNAB(\xi), & \text{if $\mu=\nu$ and $\al=\be$}. \end{array} \right. \end{equation} This induces a volume form $\cD\dgmn$ on the cotangent space, which descends to a volume form $\cD\gmn$ on $R(M)$: \begin{eqnarray} \label{meas1.9} \cD\dgmn &=& \sqrt{\det \Om^{\mu\nu,\al\be}(\xi,\xi')} \prod_{\xi\in{M},\ \mu\leq\nu}\! d\dgmn(\xi),\\ \label{meas1.9a} \cD\gmn &=& \sqrt{\det \Om^{\mu\nu,\al\be}(\xi,\xi')} \prod_{\xi\in{M},\ \mu\leq\nu}\! d\gmn(\xi). \end{eqnarray} We first note that due to the $\de$-function the determinant of $\Om^{\mu\nu,\al\be}(\xi,\xi')$ factorizes into a product over all spacetime points $\xi\in{M}$. The local determinant of the $\frac{d(d+1)}{2}\times\frac{d(d+1)}{2}$ matrix $\sqrt{g(\xi)}\ \Ga^{\mu\nu,\al\be}(\xi)$ can be evaluated by variation with respect to $\de\log g=-\gMN\dgmn$: \begin{eqnarray} \label{meas1.12} \de \log\det\left(\sqrt{g}\ \Ga^{\mu\nu,\al\be}\right) &=& \tr\ \de\log\left(\Ga^{\mu\nu,\al\be}\right) + \tr\ \de\log\left(\sqrt{g}\ \id^{\mu\nu,\al\be}\right) \nonumber\\ &=& \Gmnab\de G^{\al\be,\mu\nu} + \frac{d(d+1)}{4} \de\log g \nonumber \\ &=& \frac{1}{4} (d+1)(d-4)\ \de\log g, \end{eqnarray} where it has been used that the inverse of $\Ga^{\mu\nu,\al\be}$ equals $\Gmnab$ restricted to all indices with $\mu\leq\nu$ and $\al\leq\be$. Therefore it follows that \begin{equation} \label{meas1.13} \det\left(\sqrt{g}\ \Ga^{\mu\nu,\al\be}\right) = \kappa g^{\frac{(d+1)(d-4)}{4}}, \end{equation} where the constant $\kappa$ can be determined by specializing to $\gmn=\de_{\mu\nu}$, which gives $\ka=2^{\frac{d(d-1)}{2}}\left( 1+\frac{Cd}{2} \right)$. Thus the measures \eref{meas1.9} and \eref{meas1.9a} equal \begin{eqnarray} \label{meas1.15} {\cal D}\dgmn &=& \text{const}\times \prod_{\xi\in{M}} \prod_{\mu\leq\nu} g(\xi)^{\frac{d-4}{4d}} d\dgmn(\xi),\\ \label{meas1.15a} {\cal D}\gmn &=& \text{const}\times \prod_{\xi\in{M}} \prod_{\mu\leq\nu} g(\xi)^{\frac{d-4}{4d}} d\gmn(\xi), \end{eqnarray} which we normalize such that \begin{equation} \label{meas1.15b} \int\! \cD\dgmn\ e^{-\frac{1}{2} \bra\de g,\de g\ket_T} = 1. \end{equation} Note that $\sig = \frac{d-4}{4d}$ is the only exponent of $g(\xi)$ in \eref{meas1.15} or \eref{meas1.15a} such that these measures are diffeomorphism invariant. With the measure \begin{equation} \label{meas1.15d} \prod_{\xi\in{M}}\prod_{\mu\leq\nu} g(\xi)^{\sig}d\dgmn(\xi), \end{equation} the Gaussian integral \eref{meas1.15b} is evaluated by computing the determinant of $g(\xi)^{\frac{1}{2}-2\sig}\Ga^{\mu\nu,\al\be}$. This can be done along the lines of \eref{meas1.12}. The result is \begin{equation} \label{meas1.15c} \det\left( g(\xi)^{\frac{1}{2}-2\sig}\Ga^{\mu\nu,\al\be}\right) = \text{const}\times g(\xi)^{\frac{(d+1)((1-4\sig)d-4)}{4}}, \end{equation} which is a diffeomorphism invariant constant if and only if $\sig = \frac{d-4}{4d}$. The same result can be derived more rigorously by using the BRS-symmetry associated with general coordinate transformations \cite{Fujikawa:1983im,Fujikawa:1984qk}. 
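The $g$-dependence of \eref{meas1.13} can be verified explicitly with a short computer algebra script (our check, not part of the original text). For $d=2$ and a diagonal metric it confirms the exponent $\frac{(d+1)(d-4)}{4}=-\frac{3}{2}$ together with the constant $\ka=2(1+C)$:

\begin{verbatim}
# Computer-algebra check (ours) of eq. (meas1.13) for d = 2 with a
# diagonal metric g = diag(a, b): det(sqrt(g) Gamma) = 2(1+C) g^(-3/2).
import sympy as sp

a, b, C = sp.symbols('a b C', positive=True)
g = sp.diag(a, b)                  # metric g_{mu nu}
ginv = g.inv()                     # inverse metric g^{mu nu}
detg = g.det()

pairs = [(0, 0), (0, 1), (1, 1)]   # index pairs with mu <= nu

def G(mu, nu, al, be):             # DeWitt supermetric, eq. (meas1.3)
    return (ginv[mu, al]*ginv[nu, be] + ginv[mu, be]*ginv[nu, al]
            + C*ginv[mu, nu]*ginv[al, be]) / 2

def w(p, q):                       # weights of eq. (meas1.7a)
    return (2 if p[0] < p[1] else 1) * (2 if q[0] < q[1] else 1)

Gamma = sp.Matrix(3, 3, lambda i, j:
                  w(pairs[i], pairs[j]) * G(*pairs[i], *pairs[j]))

det_full = sp.simplify((sp.sqrt(detg) * Gamma).det())
print(sp.simplify(det_full / detg**sp.Rational(-3, 2)))   # prints 2*C + 2
\end{verbatim}

The ratio is independent of $a$ and $b$, confirming the metric dependence claimed in \eref{meas1.13}.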
Analogously, the measures for scalar and vector fields can be derived. The corresponding metrics $\bra\cdot,\cdot\ket_S$ and $\bra\cdot,\cdot\ket_V$ are defined as \begin{eqnarray} \label{add10} \bra\phi,\phi\ket_S &=& \int_M\! d^d\xi \sqrt{g}\ \phi(\xi) \phi(\xi),\\ \bra \psi,\psi\ket_V &=& \int_M\! d^d\xi \sqrt{g}\ \psi_{\mu}(\xi) \gMN \psi_{\nu}(\xi). \end{eqnarray} The measures $\cD_g\phi$ and $\cD_g\psi_{\mu}$ turn out to be: \begin{eqnarray} \label{add11} \cD_g\phi &=& \prod_{\xi\in M} g(\xi)^{\frac{1}{4}}d\phi(\xi),\\ \label{add11a} \cD_g\psi_{\mu} &=& \prod_{\xi\in M} \prod_{\mu=1}^d g(\xi)^{\frac{d-2}{4d}}d\psi_{\mu}(\xi). \end{eqnarray} \subsection{Factorization of the diffeomorphisms} \label{sec:liouville} By construction, the functional measure and the action in \eref{e1} and \eref{e3a} are invariant under diffeomorphisms. Therefore, the measure overcounts physically equivalent configurations related by the group of diffeomorphisms. To avoid this overcounting of gauge equivalent metrics, the diffeomorphisms have to be factored from the measure. In two dimensions, this can be done in a covariant way in the conformal gauge. Let us first recall that the space of metrics on $M$ modulo diffeomorphisms and Weyl transformations is a finite dimensional compact space, called the moduli space $\cM$ of $M$, which has dimension $0$, $2$ and $6h-6$ for genus $0$, $1$ and $h\geq 2$ respectively. It is parametrized by the Teichm\"uller parameters $\tau=(\tau_1,\ldots,\tau_N)$. That means that if for each $\tau\in\cM$ we choose a fixed background metric $\hat{g}_{\mu\nu}(\tau)$, all other metrics $\gmn$ are contained in the orbits under diffeomorphisms and Weyl transformations: \begin{equation} \label{e18} \gmn = f^{\star}\left(e^{\phi} \hat{g}_{\mu\nu}(\tau)\right), \end{equation} where $f^{\star}$ denotes the action of the diffeomorphism $f:{M}\rightarrow {M}$. (Actually the background metrics $\hat{g}_{\mu\nu}$ can be chosen such that they all have constant curvature.) Any infinitesimal change $\de\gmn$ in the metric can be decomposed into an infinitesimal Weyl transformation $\de\phi$, the effect of an infinitesimal diffeomorphism $\xi$ and the effect of varying the Teichm\"uller parameters: \begin{equation} \label{z1} \de \gmn = f^{\star}\de\phi\ \gmn + (\nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu}) + \frac{\partial \gmn}{\partial\tau_i}\de\tau_i. \end{equation} Here $\nabla_{\mu}$ denotes the covariant derivative. We want to orthogonalize this decomposition with respect to the scalar product $\bra\cdot,\cdot\ket_T$. First note that the tracefree part of the effect of the diffeomorphism $\xi$ is given by the conformal Killing form \begin{equation} \label{z2} (P\xi)_{\mu\nu} = \nabla_{\mu}\xi_{\nu}+\nabla_{\nu}\xi_{\mu} -\gmn \nabla_{\al}\xi^{\al}, \end{equation} which maps vector fields into traceless symmetric tensors. The zero modes of $P$ are called conformal Killing vectors. For spherical topology there are six linearly independent conformal Killing vectors, for torus topology there are two while there are none for higher genus. The conformal Killing vectors are important because they induce a variation $\de\gmn$ in the direction of $\gmn$ which can thus be compensated by a Weyl rescaling. In other words, the decomposition \eref{z1} is in general not unique. To make it unique one chooses a diffeomorphism $\tilde{\xi}\in (\text{ker} P)^{\bot}$ orthogonal to the zero modes of $P$. 
That means the gauge will only be fixed up to the conformal Killing vectors and we expect a remaining, at most six-dimensional symmetry in the gauge-fixed expressions. It follows that all variations $\de\gmn\in T_gR(M)$ of the metric can be uniquely decomposed into the variation of the conformal factor, the action of a diffeomorphism orthogonal to the conformal Killing vectors and the variation of the Teichm\"uller parameters. This decomposition can be written as a mutually orthogonal sum of a trace part, a tracefree part orthogonal to the moduli deformations and these moduli deformations: \begin{eqnarray} \label{z3} \de\gmn &=& \lbrace \tilde{f}^{\star}\de\phi + \nabla_{\al}\tilde{\xi}^{\al} +\frac{1}{2}\gAB\frac{\partial \gab}{\partial\tau_i}\de\tau_i\rbrace\gmn +\lbrace P\tilde{\xi} + P(P^+P)^{-1}P^+k^i_{\mu\nu}\de\tau_i\rbrace\nonumber\\ &&+\left(1-P(P^+P)^{-1}P^+\right)k_{\mu\nu}^i\de\tau_i, \end{eqnarray} where \begin{equation} \label{z4} k_{\mu\nu}^i = \frac{\partial\gmn}{\partial\tau_i} - \frac{1}{2}\gmn \gAB \frac{\partial\gab}{\partial\tau_i}. \end{equation} The adjoint $P^+$ of $P$ is defined by the relation $\bra h,P\xi\ket_T = \bra P^+h,\xi\ket_V$. To prove that the decomposition \eref{z3} is orthogonal just note that $1-P(P^+P)^{-1}P^+$ is the projector on the zero modes $\psi_l$ of ${P^+}$. The tracefree part is orthogonal to the trace part by definition of the scalar product $\bra\cdot,\cdot\ket_T$. The change of variables $\gmn\rightarrow(\phi,\tilde{f},\tau)$ involves a Jacobian $J(\phi,\tau)$: \begin{equation} \label{z5} \int\!\cD[\gmn]\ \cF(g) = \int\!\cD f\int\!\frac{\cD\tilde{f}}{\cD f}\cD_{e^{\phi}\hat{g}}\phi d\tau\ J(\phi,\tau)\ \cF(e^{\phi}\hat{g}(\tau)). \end{equation} Here one integrates some reparametrization invariant functional $\cF$. To find $J$ one substitutes the orthogonal decomposition \eref{z3} into the normalization condition \begin{equation} \label{z6} 1=\int\!\cD\de\gmn\ e^{-\frac{1}{2}\bra\de g,\de g\ket_T} \end{equation} and performs the change of variables on the tangent space. The result of these Gaussian integrations is \begin{equation} \label{z7} 1=J(\phi,\tau) \bigg[\frac{\det\bra\psi_k,\psi_l\ket_T}{\det'(P^+P)}\bigg]^{\frac{1}{2}}\ \bigg[\det\bra\psi_m,\frac{\partial g}{\partial\tau_n}\ket_T\bigg]^{-1}. \end{equation} Finally, the change from the diffeomorphisms $f$ to $\tilde{f}$ and the conformal Killing vectors $\om_a$, which fulfill $P(e^{\phi}\om_a)=0$, has to be computed. The final result is \begin{eqnarray} \label{z8} \int\!\cD[\gmn]\ &&\hspace{-0.7cm}\cF(g) = \int\!\cD f_{\mu}\int\!\frac{d\tau}{v(\tau)} \det\bra\psi_m,\frac{\partial g}{\partial\tau_n}\ket_T\times \nonumber\\ && \int\!\cD_{e^{\phi}\hat{g}}\phi \bigg[\frac{\det'(P^+P)}{\det\bra\psi_k,\psi_l\ket_T \det\bra\om_a,\om_b\ket_V}\bigg]^{\frac{1}{2}}\ \cF(e^{\phi}\hat{g}(\tau)), \end{eqnarray} where $v(\tau)$ is the volume of the group generated by the conformal Killing vectors. Now the diffeomorphisms can be factored out. Since it turns out that $\frac{d\tau}{v(\tau)} \det\bra\psi_m,\frac{\partial g}{\partial\tau_n}\ket_T$ depends only on the Teichm\"uller parameters and not on $\phi$ we denote this as $\cD\tau$. The square root is called the Faddeev-Popov determinant and denoted as $Z_{\text{FP}}(e^{\phi}\hat{g})$. \subsection{Gravitational dressing of scaling exponents} \label{sec:gd} After factoring the diffeomorphisms from the measure the gauge-fixed expression for the partition function \eref{e3a} is: \begin{equation} \label{e18b} Z(V) = \int\! \cD\tau\!\int\! 
{\cal D}_{e^{\phi}\hat{g}}\phi\ \de(V-V_{e^{\phi}\hat{g}})\ Z_{\text{FP}}(e^{\phi}\hat{g}) Z_{\text{mat}}(e^{\phi}\hat{g}), \end{equation} where $Z_{\text{mat}}$ is defined as: \begin{equation} \label{add20} Z_{\text{mat}}(e^{\phi}\hat{g}) = \int\! {\cal D}_{e^{\phi}\hat{g}}X\ e^{-S_{\text{matter}}(e^{\phi}\hat{g}, X)}, \end{equation} with some matter action for matter with central charge $D$. The measure for the Liouville mode $\phi$ is defined by the line element \begin{equation} \label{e18c} d^2s = \bra\de\phi,\de\phi\ket_S = \int_{M}\! d^2\xi\sqrt{\hat{g}}\ e^{\phi}\ \de\phi(\xi) \de\phi(\xi), \end{equation} which depends on $\phi$ itself. Therefore the corresponding measure $\cD_{e^{\phi}\hat{g}}\phi$ is very complicated, and it is unknown how to perform the functional integration over $\phi$ with this measure. The idea used to overcome these problems is to go over to the translationally invariant measure $\cD_{\hat{g}}\phi$ and to shift all dependence on $\phi$ in \eref{e18b} into the action. For this transformation we use the relations \begin{eqnarray} \label{e19} Z_{\text{mat}}(e^{\phi}\hat{g}) &=& Z_{\text{mat}}(\hat{g}) e^{\frac{D}{48\pi}S_{\text{L}}(\hat{g}, \phi)},\\ \label{e18a} Z_{\text{FP}}(e^{\phi}\hat{g}) &=& Z_{\text{FP}}(\hat{g}) e^{-\frac{26}{48\pi}S_{\text{L}}(\hat{g}, \phi)},\\ \label{e21} {\cal D}_{e^{\phi}\hat{g}}\phi &=& {\cal D}_{\hat{g}}\phi\ e^{\frac{1}{48\pi}S_{\text{L}}(\hat{g}, \phi)}, \end{eqnarray} where $S_{\text{L}}(\hat{g}, \phi)$ is the Liouville action: \begin{equation} \label{e20} S_{\text{L}}(\hat{g}, \phi) = \int_{M}\! d^2\xi\sqrt{\hat{g}}\ \left(\frac{1}{2}\hat{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi + \hat{\cal R}\phi\right). \end{equation} Here $\hat{\cal R}$ is the curvature scalar in the metric $\hat{g}_{\mu\nu}$. The relation \eref{e19} for the matter partition function can be derived by several methods, such as the heat kernel regularization \cite{Brown:1977sj} or the zeta function regularization \cite{Hawking:1977ja}. Since the matter action is invariant under Weyl transformations, the whole contribution comes from the transformation of the measure $\cD_{e^{\phi}\hat{g}}X$ under conformal rescalings of the metric. This is also the origin of the trace anomaly of the energy momentum tensor. The transformation property \eref{e18a} for the Faddeev-Popov determinant can be derived in a similar way. However, the operators involved are much more complicated, compare \eref{z8}. The transformation \eref{e21} has not been derived rigorously by these or other methods. David \cite{David:1988hj} and Distler and Kawai \cite{Distler:1989jt} use a self-consistent bootstrap method to find this relation. Under the assumption that the factor takes the form of an exponential of a Liouville action with arbitrary coefficients, these coefficients are determined from general invariance considerations. In this calculation the Liouville mode $\phi$ is rescaled by a factor $\al$, which has to be determined. Taking all factors together (the exponents $\frac{D}{48\pi}$, $-\frac{26}{48\pi}$ and $\frac{1}{48\pi}$ from \eref{e19}, \eref{e18a} and \eref{e21} add up to $\frac{D-25}{48\pi}$), the partition function \eref{e18b} can be rewritten as: \begin{equation} \label{e22} Z(V) = \int\! \cD\tau\! \int\! {\cal D}_{\hat{g}}\phi\ Z_{\text{FP}}(\hat{g})\ Z_{\text{mat}}(\hat{g})\ e^{\frac{D-25}{48\pi}S_{\text{L}}(\hat{g}, \phi)}\ \de(V-V_{e^{\alpha \phi}\hat{g}}), \end{equation} where the measures depend only on the background metric $\hat{g}$.
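For later reference, the effect of a constant shift of the Liouville mode on \eref{e22} is easily made explicit. Using \eref{e20} and the Gauss-Bonnet theorem \eref{e3}, \begin{equation} S_{\text{L}}\Big(\hat{g},\phi+\frac{\log\la}{\al}\Big) = S_{\text{L}}(\hat{g},\phi) + \frac{\log\la}{\al}\int_{M}\! d^2\xi\sqrt{\hat{g}}\ \hat{\cal R} = S_{\text{L}}(\hat{g},\phi) + \frac{4\pi\chi}{\al}\log\la, \end{equation} since the kinetic term is insensitive to constant shifts, while the constraint transforms as $\de(\la V - V_{e^{\al\phi+\log\la}\hat{g}}) = \la^{-1}\de(V - V_{e^{\al\phi}\hat{g}})$.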
Thus the partition function scales under a rescaling of the volume as: \begin{equation} \label{e33} Z(\la V) = \la^{\frac{D-25}{12\al}\chi - 1} Z(V), \end{equation} where $\chi$ is the Euler characteristic of the manifold $M$. To find \eref{e33} we have shifted $\phi$ by $\frac{1}{\al}\log\la$ and used the Gauss-Bonnet theorem \eref{e3}. Now we have to compute the dressing exponent $\al$. It turns out that more generally we can compute the anomalous scaling dimensions for a whole family of operators. Consider the observable \begin{equation} \label{add30} \cO(g) = \int_{M}\! d^2\xi\sqrt{g}\ \Phi(g), \end{equation} where $\Phi$ is a spinless primary field of dimension $\De_0$. Typical simple examples are the volume $V_g=\int_M\! d^2\xi\sqrt{g}$ with $\De_0=0$ and the identity $1=\int_M\! d^2\xi\sqrt{g}\ \frac{1}{\sqrt{g}}\de(\xi)$ with $\De_0=1$. Then $\cO(g)$ transforms under a rescaling $\gmn\rightarrow\la\gmn$ of the metric as \begin{equation} \label{add31} \cO(\la g) = \la^{1-\De_0}\cO(g), \end{equation} if the geometry is kept fixed. Under the average over all geometries the scaling exponent $\De_0$ is in general changed to some other value $\De$: \begin{equation} \label{add32} \bra \cO(g)\ket_{\la V} = \la^{1-\De}\bra \cO(g)\ket_V. \end{equation} The coupling to gravity dresses the scaling behaviour of the field $\Phi$. Therefore we make the ansatz \begin{equation} \label{add33} \cO(\hat{g},\phi) = \int_M\! d^2\xi \sqrt{\hat{g}}\ e^{\be\phi}\Phi(\hat{g}) \end{equation} for the observable after the transition to the translationally invariant measure. Note that $\cO(\hat{g},\phi)$ is diffeomorphism invariant and fulfills $\cO(\hat{g},0) = \cO(\hat{g})$. Now observe that \eref{e19}, \eref{e18a}, \eref{e21} and $\cO(e^{\phi}\hat{g})$ are invariant under the transformation \begin{equation} \label{e23} \hat{g}_{\mu\nu}(\xi)\rightarrow e^{\sigma(\xi)}\hat{g}_{\mu\nu}(\xi), \quad \phi(\xi) \rightarrow\phi(\xi) - \sigma(\xi). \end{equation} Therefore $\cO(\hat{g}, \phi)$ also has to be invariant. This requirement leads to an expression for the gravitational dressing exponent $\be$ as follows. Under \eref{e23} the product $\sqrt{\hat{g}}\ \Phi(\hat{g})$ transforms as \begin{equation} \label{add34} \sqrt{\hat{g}(\xi)}\Phi(\hat{g}) \rightarrow e^{(1-\De_0)\sig(\xi)}\sqrt{\hat{g}(\xi)}\Phi(\hat{g}). \end{equation} Thus $e^{\be\phi(\xi)}$ has to transform as \begin{equation} \label{add35} e^{\be\phi(\xi)} \rightarrow e^{(\De_0-1)\sig(\xi)}e^{\be\phi(\xi)}. \end{equation} On the other hand we can determine the transformation of $e^{\be\phi(\xi)}$ by computing the Gaussian integral \begin{equation} \label{add36} F_{\phi}(\hat{g},\be) = \int\!\cD_{\hat{g}}\phi\ e^{\frac{D-25}{48\pi}S_{\text{L}}(\hat{g},\phi) + \be\phi(\xi)}, \end{equation} where $\be\phi(\xi)$ is interpreted as an additional $\de$-function source. If we define \begin{equation} \label{add37} \hat{J}(x) = \hat{\cR}(x) + \frac{48\pi\be}{D-25}\frac{1}{\sqrt{\hat{g}(x)}}\de(x-\xi), \end{equation} we get: \begin{eqnarray} \label{add38} \lefteqn{ F_{\phi}(\hat{g},\be) = \exp{\left[\frac{D-25}{96\pi}\int_M\! d^2x\sqrt{\hat{g}}\ \hat{J}(x)(\De_{\hat{g}}^{-1}\hat{J})(x)\right]}}\\ &=& F_{\phi}(\hat{g},0)\ \exp{\left[\be\int_M\! d^2x\sqrt{\hat{g}}\ \De_{\hat{g}}^{-1}(\xi,x) \hat{\cR}(x)\ +\ \frac{24\pi\be^2}{D-25} \De_{\hat{g}}^{-1}(\xi,\xi)\right]}.\nonumber \end{eqnarray} Under the transformation \eref{e23} the first term in the exponent gives an extra $-\be\sig(\xi)$.
The second term is an evaluation of the propagator at coinciding points, which is naively singular. Thus we have to renormalize this expression \cite{Polchinski:1986zf}. The renormalized propagator is defined as \begin{equation} \label{e30} \De_{\hat{g},\text{R}}^{-1}(\xi,\xi) = \lim_{x\rightarrow \xi} \left( \De_{\hat{g}}^{-1}(x,\xi) - \frac{1}{4\pi}\log{d_{\hat{g}}(x,\xi)^2}\right), \end{equation} where $d_{\hat{g}}(x,\xi)$ is the geodesic distance between $x$ and $\xi$. Therefore the second term in the exponent gains an extra $\frac{6\be^2}{25-D}\sig(\xi)$ under \eref{e23}. Taking both contributions together we conclude: \begin{equation} \label{e31} e^{\be\phi(x)} \rightarrow e^{-\be\sig(x) + \frac{6\be^2}{25-D}\sig(x)}\ e^{\be\phi(x)} \overset{!}{=} e^{(\De_0-1)\sig(x)}\ e^{\be\phi(x)}. \end{equation} The solution to this quadratic equation determines $\be$ and thus also $\al$: \begin{eqnarray} \label{e32} \be &=& \frac{1}{12}\left(25-D - \sqrt{(25-D)(1-D+24\De_0)}\right),\\ \al &=& \frac{1}{12}\left(25-D - \sqrt{(25-D)(1-D)}\right). \end{eqnarray} The sign of the square root is determined by requiring that in the classical limit $D\rightarrow -\infty$ the scaling of observables is the same as in a fixed geometry. With $\al$ inserted into \eref{e33} we can now verify the form \eref{e3d} for the susceptibility exponent $\ga$. For the expectation values of observables the dependence on the topology cancels, and we get: \begin{equation} \label{e35} \bra\cO(g)\ket_{\la V} = \la^{\frac{\be}{\al}} \bra\cO(g)\ket_V = \la^{1-\De} \bra\cO(g)\ket_V. \end{equation} $\De$ evaluates to \begin{equation} \label{add39} \De = \frac{\sqrt{1-D+24\De_0}-\sqrt{1-D}}{\sqrt{25-D}-\sqrt{1-D}}. \end{equation} The expectation value of products of observables scales with the sum of their scaling exponents. \section{Fractal dimensions} \label{sec:fd} Quantum gravity as defined above is a functional integration over geometries. Therefore we have to ask which geometrical concepts survive the quantum average and how geometry is changed by it. In any theory of quantum gravity the structure of spacetime is of primary interest, and even its dimensionality is a {\em dynamical} quantity. The transfer matrix method in \cite{Kawai:1993cj} applied to pure two-dimensional quantum gravity revealed that the spacetime has a self-similar structure on all scales with a fractal dimension {\em four}. An understanding of the quantum spacetime in two-dimensional quantum gravity might help to understand similar dynamical changes of the geometry in higher dimensions. In this section we introduce three different concepts of dimensionality to characterize the fractal structure of spacetime, each of which may describe different aspects of the geometry. The intrinsic Hausdorff dimension or fractal dimension $d_H$ has already been defined above by the scaling of observables such as the average circumference of circles with their ``geodesic'' radii. These measurements are performed within the intrinsic geometry of the universes. They are analogous to the usual ``clock and rod'' measurements in classical gravity. General results for $d_H$ from Liouville theory have been given in \cite{Watabiki:1993fk}. We can also characterize the fractal structure of spacetime by embedding it into a $D$-dimensional, typically Euclidean flat space. Measurements of, for example, the mean square extent of the universes are then performed with the metric of the embedding space.
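As an aside, the dressing formulas \eref{e32} and \eref{add39} are easy to evaluate numerically. The following minimal Python sketch (our own illustration; the function names are ours and not from the literature) checks the identity $1-\De=\be/\al$ implied by \eref{e35}: \begin{verbatim}
# Minimal sketch (our own illustration): evaluate the dressing exponents
# of (e32) and the dressed dimension Delta of (add39), and check the
# relation 1 - Delta = beta/alpha implied by (e35).
import math

def beta(D, Delta0):
    return (25.0 - D - math.sqrt((25.0 - D) * (1.0 - D + 24.0 * Delta0))) / 12.0

def alpha(D):
    return beta(D, 0.0)

def Delta(D, Delta0):
    return ((math.sqrt(1.0 - D + 24.0 * Delta0) - math.sqrt(1.0 - D))
            / (math.sqrt(25.0 - D) - math.sqrt(1.0 - D)))

for D in (-2.0, 0.0, 0.5):
    for Delta0 in (0.0, 0.5, 1.0):
        assert abs(1.0 - Delta(D, Delta0) - beta(D, Delta0) / alpha(D)) < 1e-12
        print(D, Delta0, Delta(D, Delta0))
# Delta0 = 0 stays 0 and Delta0 = 1 stays 1, as required for the volume
# and the identity operator.
\end{verbatim} For $D=0$ this reproduces $\al=\frac{5}{3}$, and the special cases $\De_0=0$ (the volume) and $\De_0=1$ (the identity) keep their dimensions under dressing, as they must.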
The finite size scaling of the mean square extent of the surfaces defines the extrinsic Hausdorff dimension $D_H$. Intuitively the embedding gives us an idea of what typical universes would ``look like''. Indeed it is known that the extrinsic Hausdorff dimension of two-dimensional quantum gravity is infinity \cite{Distler:1990jv,Kawai:1991qv}. That means that the surfaces are crumpled together in a complicated way. Instead of determining the metrical structure of the spacetime directly by putting out clocks and rods, one can also choose to observe the propagation of test particles. From such observations one can in the classical case determine the propagator and, via an inverse Laplace transformation, the associated heat kernel. Since all this can be done in a reparametrization invariant way, we can define corresponding quantum observables. This leads to the definition of the spectral dimension $d_s$ as the scaling exponent of the short time return probability distribution of diffusing matter. The first general analytical result for the spectral dimension of two-dimensional quantum gravity coupled to conformal matter with central charge $D$ was given in \cite{Ambjorn:1997jf}. \subsection{Extrinsic Hausdorff dimension} \label{sec:eHd} The extrinsic Hausdorff dimension $D_H$ of a fractal object embedded in a $D$-dimensional space is a measure for the extent of the object measured with the metric of the embedding space. $D_H$ can be defined for mathematical fractals as well as for random surfaces and for quantum gravity. Let us focus on quantum gravity. The partition function for two-dimensional quantum gravity with fixed volume $V$ of spacetime coupled to $D$ Gaussian matter fields is \begin{equation} \label{eh1} Z(V) = \int\!\cD[\gmn]_V \int\!\cD_gX_{\mu}\big\vert_{\text{cm}} e^{-\int_M\! d^2\xi\sqrt{g}\ \gAB\partial_{\al}X_{\mu}\partial_{\be}X_{\mu}}. \end{equation} Here $\cD_gX_{\mu}\big\vert_{\text{cm}}$ denotes the functional integration over the $D$ matter fields such that the centre of mass is fixed to zero, which amounts to dropping the zero modes of the Laplacian when integrating over the matter fields. This is a slight variation of \eref{e3a} which allows us to define the extrinsic Hausdorff dimension as \begin{equation} \label{eh2} \bra X^2\ket_V \sim V^{\frac{2}{D_H}}\text{,~for $V\rightarrow\infty$}, \end{equation} with the definition \begin{eqnarray} \label{eh3a} \bra X^2\ket_V &\equiv& \frac{1}{Z(V)} \int\!\cD[\gmn]_V\int\!\cD_gX_{\mu}\big\vert_{\text{cm}} e^{-\int_M\! d^2\xi\sqrt{g}\ \gAB\partial_{\al}X_{\mu}\partial_{\be}X_{\mu}} \nonumber \\ &&\times \frac{1}{DV}\int_M\!d^2\xi \sqrt{g}\ X_{\mu}^2(\xi), \end{eqnarray} which is equivalent to \begin{eqnarray} \label{eh3} \bra X^2\ket_V &\equiv& \frac{1}{Z(V)} \int\!\cD[\gmn]_V\int\!\cD_gX_{\mu}\big\vert_{\text{cm}} e^{-\int_M\! d^2\xi\sqrt{g}\ \gAB\partial_{\al}X_{\mu}\partial_{\be}X_{\mu}} \nonumber \\ &&\times \frac{1}{2DV^2}\int_M\!d^2\xi_1 \sqrt{g} \int_M\!d^2\xi_2 \sqrt{g}\ (X(\xi_1)-X(\xi_2))^2. \end{eqnarray} To understand the definition of $D_H$ note that the fields $X_{\mu}$ define an embedding of the manifold $M$ in a $D$-dimensional Euclidean space. That means that we measure the Euclidean mean-square extent of spacetime embedded in $D$ dimensions. In more conventional mathematical terms, the extrinsic Hausdorff dimension is defined by constructing a covering of the geometrical object embedded in some $D$-dimensional space by a union of $n$ small balls.
$n$ depends on the typical macroscopic scale $r$ of the system and scales with the dimension $D_H$; it is related to the mean square extent of the system by \begin{equation} \label{eh4} n\sim r^{D_H}\sim \bra X^2\ket^{\frac{D_H}{2}}. \end{equation} The extrinsic Hausdorff dimension for two-dimensional quantum gravity has been computed in \cite{Distler:1990jv,Kawai:1991qv}. Note that the treatment in \cite{Distler:1990jv} is based on a more general scaling assumption than is needed in two-dimensional quantum gravity. Their general result was specialized to quantum gravity in \cite{Kawai:1991qv}. In the following we briefly summarize the derivation. If we define the two-point function by \begin{equation} \label{eh5} G(p) = \bra \int_M\!d^2\xi_1\sqrt{g}\int_M\!d^2\xi_2\sqrt{g}\ e^{ip(X(\xi_1)-X(\xi_2))}\ket_V, \end{equation} it follows that \begin{equation} \label{eh6} \bra X^2\ket_V = -\frac{1}{2DV^2}\frac{\partial^2}{\partial p^2}G(p)\big\vert_{p=0}. \end{equation} In flat space the two-point function behaves as $V^{2-\De_0(p)}$ with $\De_0\sim p^2$. Coupled to gravity this acquires a gravitational dressing which can be computed to be: \begin{equation} \label{eh7} \De(p) = \frac{\sqrt{1-D+24\De_0(p)}-\sqrt{1-D}}{\sqrt{25-D}-\sqrt{1-D}}. \end{equation} If we differentiate $G(p)\underset{V\rightarrow\infty}{\sim} V^{2-\De(p)}$ twice with respect to $p$ at $p=0$ we get \begin{eqnarray} \label{eh8} \bra X^2\ket_V&\sim&\log V,\text{~for $D<1$},\\ \bra X^2\ket_V&\sim&\log^2 V,\text{~for $D=1$}, \end{eqnarray} for large volumes $V$. In both cases $\bra X^2\ket_V$ grows slower than any power of $V$. Thus we conclude that $D_H=\infty$. For spherical topology this has already been found for $D\rightarrow -\infty$ and to one-loop order in \cite{Jurkiewicz:1984pq}. \subsection{Spectral dimension} \label{sec:erg:sd} To probe the structure of classical spacetime one can study the properties of propagating particles, typically free particles or diffusing matter. Let $\Psi(\xi,T)$ be the wave function for diffusion on the $d$-dimensional compact manifold $M$. $\Psi$ depends on the diffusion time $T$ and on the points of $M$, and fulfills the diffusion equation \begin{equation} \label{erg:sd1} \frac{\partial}{\partial T} \Psi(\xi,T) = \De_g\Psi(\xi,T) \end{equation} for the initial condition \begin{equation} \label{erg:sd2} \Psi(\xi,0) = \frac{1}{\sqrt{g(\xi)}}\de(\xi_0-\xi). \end{equation} Here $\De_g$ is the Laplace-Beltrami operator corresponding to the metric $\gmn$ of $M$. The solution of the diffusion equation can be written as \begin{equation} \label{erg:sd2a} \Psi(\xi,T) = e^{T\De_g'}\Psi(\xi,0) = \int_M\! d^d \xi'\sqrt{g}\ K_g'(\xi,\xi';T) \Psi(\xi',0), \end{equation} which defines the probability distribution (or heat kernel) $K_g'(\xi,\xi';T)$ for diffusion on a compact manifold $M$ with metric $\gmn$. Here and in the following we take into account that the Laplace operator $\De_g$ on compact surfaces has zero modes, which should be projected out. This is indicated with a prime. $K_g'(\xi,\xi';T)$ is related to the massless scalar propagator $(-\De_g)^{-1}$ by \begin{equation} \label{erg:sd2b} \bra \xi'\vert(-\De_g)^{-1}\vert\xi\ket' = \int_{0}^{\infty}\! dT\ K_g'(\xi,\xi';T). \end{equation} It is known that the average return probability distribution \begin{equation} \label{erg:sd2ba} RP_g'(T) \equiv \frac{1}{V_g}\int_M\!
d^d\xi\sqrt{g}\ K_g'(\xi,\xi;T) \end{equation} at time $T$ for diffusion on a $d$-dimensional manifold with a smooth geometry $[\gmn]$ admits the following asymptotic expansion for small times $T$: \begin{equation} \label{erg:sd2c} RP_g'(T) \sim \frac{1}{T^{d/2}}\sum_{n=0}^{\infty} a_n T^n = \frac{1}{T^{d/2}}(1+O(T)), \end{equation} where the coefficients $a_n$ are {\em diffeomorphism invariant} integrals of polynomials in the metric and the curvature and in their covariant derivatives, see for example \cite{DeWitt:1979book}. This asymptotic expansion breaks down when $T$ is of the order $V^{\frac{2}{d}}$, when the exponential decay of the heat kernel becomes dominant. Since $RP_g'(T)$ is reparametrization invariant, we can define its quantum average over geometries for fixed volume $V$ of spacetime as: \begin{equation} \label{erg:sd5} RP_V'(T) \equiv \frac{1}{Z(V)}\int\!\cD[\gmn]_V\ e^{-S_{\text{eff}}(g)} RP_g'(T). \end{equation} Here $S_{\text{eff}}(g)$ denotes the effective action of quantum gravity after integrating out possible matter fields. The spectral dimension $d_s$ is now defined by the short time asymptotic behaviour of $RP_V'(T)$: \begin{equation} \label{erg:sd6} RP_V'(T) \sim \frac{1}{T^{d_s/2}}(1+O(T)). \end{equation} It is natural to expect that under the average \eref{erg:sd5} over all geometries the only remaining geometric invariant is the fixed volume $V$. Thus we expect that $RP_V'(T)$ has the form \begin{equation} \label{erg:sd7} RP_V'(T) = \frac{1}{T^{d_s/2}} F\left(\frac{T}{V^{2/d_s}}\right), \end{equation} where $F(0)>0$ and $F(x)$ falls off exponentially for $x\rightarrow\infty$. This scaling assumption is the main input into our derivation of the spectral dimension of two-dimensional quantum gravity. It is very well documented in numerical simulations \cite{Ambjorn1998ab,Ambjorn:1995rg}. For a fixed smooth geometry the spectral dimension is by definition equal to the dimension $d$ of the manifold. However, the quantum average can a priori change this behaviour. Actually we know that the intrinsic Hausdorff dimension of two-dimensional quantum gravity is different from two. In this sense generic geometries are fractal with probability one. For diffusion on fixed (often embedded) fractal structures it is well known that the spectral dimension can differ from the embedding dimension as well as from the fractal dimension. The diffusion law, measured in the embedding space, becomes anomalous: \begin{equation} \label{erg:sd7a} \bra r^2(T)\ket \sim T^{2/d_w}, \end{equation} with a gap exponent $d_w>2$. This slowing down of the transport is caused by the fractal ramification of the system. The gap exponent $d_w$ is related to the spectral dimension $d_s$ and the intrinsic Hausdorff dimension $d_h$ by \begin{equation} \label{erg:sd7b} d_s = \frac{2d_h}{d_w}. \end{equation} If the diffusion law is not anomalous, we have $d_w=2$ and $d_s=d_h$, analogous to $d_s=d$ for diffusion on fixed smooth $d$-dimensional geometries. For a review of diffusion on fractals see \cite{Havlin:1987}. While the above concepts are in principle valid for Euclidean quantum gravity in any dimension $d$, let us now specialize to two dimensions. The starting point is again the partition function \eref{eh1} for two-dimensional quantum gravity coupled to $D$ Gaussian matter fields $X_{\mu}$ for fixed volume $V$ of spacetime. We use \eref{eh3a} to define $\bra X^2\ket_V$.
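As a concrete illustration of definition \eref{erg:sd6}, the following minimal Python sketch (our own toy example, not one of the cited simulations) estimates $d_s$ for the simplest fixed geometry, the flat square lattice, by fitting the short time return probability of a random walk; the expected result is $d_s=2$: \begin{verbatim}
# Minimal sketch (our own toy example): estimate the spectral dimension
# d_s of a *fixed* geometry, the flat square lattice, from the short-time
# return probability RP(T) ~ T^(-d_s/2) of a simple random walk.
import math, random

def return_probability(t_max, walkers=100000):
    counts = [0] * (t_max + 1)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(walkers):
        x = y = 0
        for t in range(1, t_max + 1):
            dx, dy = random.choice(steps)
            x += dx
            y += dy
            if x == 0 and y == 0:
                counts[t] += 1
    return [c / walkers for c in counts]

rp = return_probability(100)
# least squares fit of log RP(T) vs log T on even times (RP vanishes at odd T)
pts = [(math.log(t), math.log(rp[t])) for t in range(10, 101, 2) if rp[t] > 0]
mx = sum(p for p, _ in pts) / len(pts)
my = sum(q for _, q in pts) / len(pts)
slope = (sum((p - mx) * (q - my) for p, q in pts)
         / sum((p - mx) ** 2 for p, _ in pts))
print("estimated d_s =", -2.0 * slope)   # should come out close to 2
\end{verbatim} The same fitting strategy, applied to the ensemble average \eref{erg:sd5}, underlies the numerical measurements quoted at the end of this section.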
The Gaussian action implies that: \begin{eqnarray} \label{erg:sd8} \bra X^2\ket_V &=& \frac{1}{DV} \frac{\partial}{\partial\om} \bra e^{\om\int_M\!d^2\xi\sqrt{g}\ X_{\mu}^2(\xi)}\ket_V\Big\vert_{\om=0}\nonumber\\ &=& \frac{1}{DVZ(V)}\frac{\partial}{\partial\om} \int\!\cD[\gmn]_V\ \big({\det}'(-\De_g-\om)\big)^{-D/2}\Big\vert_{\om=0}\nonumber\\ &=& \frac{1}{2VZ(V)}\int\!\cD[\gmn]_V\ \big({\det}'(-\De_g)\big)^{-D/2} \tr'\left[\frac{1}{-\De_g}\right]\nonumber\\ &=& \frac{1}{2V}\bra \tr'\left[\frac{1}{-\De_g}\right]\ket_V. \end{eqnarray} By inserting \eref{erg:sd2b} and \eref{erg:sd7} into this formula we get: \begin{eqnarray} \label{erg:sd9} \bra X^2\ket_V &=& \frac{1}{2V} \bra\int_0^{\infty}\! dT\ \int_M\! d^2\xi\sqrt{g}\ K_g'(\xi,\xi;T)\ket_V\nonumber\\ &=& \frac{1}{2}\int_{0}^{\infty}\! dT\ RP_V'(T) = \frac{1}{2}\int_{0}^{\infty}\! dT\ \frac{1}{T^{d_s/2}} F(\frac{T}{V^{2/d_s}})\nonumber\\ &\sim& V^{\frac{2}{d_s}-1}, \end{eqnarray} for $V\rightarrow\infty$. Comparing this with \eref{eh2} we conclude: \begin{equation} \label{erg:sd10} \frac{1}{d_s} = \frac{1}{D_H} + \frac{1}{2}. \end{equation} Using the result $D_H=\infty$ from the preceding section we arrive at: \begin{equation} \label{erg:sd11} d_s = 2,\text{~for all $D\leq 1$.} \end{equation} Strictly speaking, this assumes $d_s\leq 2$. However, at $D=-\infty$ the geometry is effectively fixed, which implies $d_s=2$ and $D_H=\infty$, in agreement with \eref{erg:sd10}, and we expect that a saddle point calculation around $D=-\infty$ is reliable, compare \cite{Jurkiewicz:1984pq}. Therefore $d_s$ equals $2$ in a neighbourhood of $D=-\infty$. The average over all geometries in \eref{erg:sd5} includes many degenerate geometries, for which $D_H<\infty$, for example the branched polymer-like geometries which are discussed in section \sref{bpo}. Thus, a priori we would expect that under the average over fluctuating geometries the spectral dimension decreases. Therefore it is reasonable to assume that $d_s\leq 2$. \footnote{If $d_s>2$, the integral in \eref{erg:sd9} diverges at $0$, and we have to introduce a cut-off $\eps$ for small times $T$. Then it is convenient to consider $\bra (X^2)^n\ket_V\sim V^{2n/D_H}$ instead, with $n=[d_s/2]+1$ for non-integer $d_s$. Then the leading large $V$ behaviour will be $V^{2n/d_s-1}$, and we get $\frac{1}{d_s} = \frac{1}{D_H} + \frac{1}{2n}$.} Thus we have shown that the spectral dimension of two-dimensional quantum gravity coupled to $D$ Gaussian fields equals two for $D\leq 1$ \cite{Ambjorn:1997jf}. Our derivation relies on the scaling form \eref{erg:sd7} of the averaged return probability $RP_V'(T)$ and on properties of Gaussian matter fields. To corroborate the result \eref{erg:sd11}, let us present another derivation completely within Liouville theory\footnote{Thanks to Jakob L.~Nielsen for showing me this argument.}. Let us define an observable \begin{equation} \label{erg:sd12} \cO(g) = \int_M\! d^2\xi_0\sqrt{g}\ \frac{1}{-\De_g} \frac{1}{\sqrt{g(\xi)}}\de(\xi_0-\xi)\Big\vert_{\xi=\xi_0}. \end{equation} Under a rescaling of the metric, $\cO$ behaves like $\cO(\la g) = \la\cO(g)$, that means it is a correlator with conformal weight $(-1,-1)$. Therefore we deduce from \eref{e35} that \begin{equation} \label{erg:sd13} \bra\cO(g)\ket_{\la V} = \la \bra\cO(g)\ket_V, \end{equation} or equivalently: \begin{equation} \label{erg:sd14} \bra\frac{1}{V}\cO(g)\ket_{\la V} = \bra\cO(g)\ket_V.
\end{equation} On the other hand we can write: \begin{eqnarray} \label{erg:sd15} \bra\frac{1}{V}\cO(g)\ket_{V} &=& \bra \frac{1}{V}\int_M\!d^2\xi_0\sqrt{g(\xi_0)}\ \frac{1}{-\De_g} \frac{1}{\sqrt{g(\xi)}}\de(\xi_0-\xi)\Big\vert_{\xi=\xi_0} \ket_V\nonumber\\ &=& \bra \frac{1}{V}\int_M\!d^2\xi_0\sqrt{g(\xi_0)}\ \int_0^{\infty}\! dT\ e^{T\De_g}\frac{1}{\sqrt{g(\xi)}}\de(\xi_0-\xi)\Big\vert_{\xi=\xi_0} \ket_V \nonumber\\ &=&\int_0^{\infty}\! dT\ RP_V'(T)\sim V^{\frac{2}{d_s}-1}, \end{eqnarray} using the scaling assumption \eref{erg:sd7}. From \eref{erg:sd14} and \eref{erg:sd15} it follows that $d_s=2$ in Liouville theory for all types of matter with central charge $D\leq 1$ coupled to gravity; beyond $D=1$ the continuum calculations break down. This situation is very remarkable: a generic geometry in the functional integral \eref{eh1} is a typical fractal, when looked at in the usual way by computing the fractal dimension $d_h$ from measurements of the circumference $\bra l(g,R)\ket_V$ of circles versus the geodesic distance $R$. Also the gap exponent $d_w$ is anomalous and larger than two. It is exactly equal to the intrinsic Hausdorff dimension $d_h$, and consequently the spectral dimension $d_s$ equals two. It takes the same value as for smooth two-dimensional geometries. The spectral dimension has received considerable attention also in numerical studies of the fractal structure of two-dimensional quantum gravity. In \cite{Ambjorn:1995rg} the return probability is computed for pure gravity and for gravity coupled to matter with central charge $\frac{1}{2}$ and $1$. They find that their results are consistent with $d_s=2$. In \cite{Ambjorn1998ab} $d_s$ is measured for central charge $-2$, $0$, $\frac{1}{2}$ and $\frac{4}{5}$. They get the results $d_s=2.00(3)$, $d_s=1.991(6)$, $d_s=1.989(5)$ and $d_s=1.991(5)$, respectively\footnote{Thanks to Konstantinos N.~Anagnostopoulos for showing me these data prior to the publication of \cite{Ambjorn1998ab}}. For $D>1$ it is generally believed that two-dimensional quantum gravity is in a branched polymer phase, see \cite{Harris:1996hk,David:1996vp} for recent analytical and \cite{Thorleifsson:1997ac} for some numerical evidence. The Gaussian fields define an embedding of these polymers in $\mathbb{R}^D$. The extrinsic Hausdorff dimension of generic branched polymers is $D_H=4$, thus we conclude from \eref{erg:sd10} that the spectral dimension equals $\frac{4}{3}$, the famous Alexander-Orbach value \cite{Alexander:1982}. The value $d_s=\frac{4}{3}$ and formula \eref{erg:sd10} for branched polymers have been derived in \cite{Cates:1984} by different methods, see also \cite{Jonsson:1997gk} for a recent complete proof of $d_s=\frac{4}{3}$. Furthermore, for the multicritical branched polymers \cite{Ambjorn:1990wp} it is known that $D_H=\frac{2m}{m-1}, m=2,3,\ldots$, where $m=2$ for the ordinary branched polymers. Thus we obtain $d_s = \frac{2m}{2m-1}$, in agreement with the analysis in \cite{Correia:1997gf}. For $m\rightarrow\infty$ the multicritical branched polymers approach ordinary random walks, for which $D_H=2$ and $d_s=1$. Let us close this section with a comment on a subtlety in the definition \eref{erg:sd5} of the averaged return probability $RP_V'(T)$. We have defined it as the quantum average of the return probability for fixed geometry over fluctuating geometries. Instead, we could have defined it as the limit for $R\rightarrow 0$ of \begin{eqnarray} \label{erg:sd100} K_V'(R,T) &=& \frac{1}{G(V,R)}\int\!\cD[\gmn]_V\ e^{-S_{\text{eff}}(g)} \\ &&\times \int_M\!
d^2\xi_1\sqrt{g} \int_M\! d^2\xi_2\sqrt{g}\ \de(R-d_g(\xi_1,\xi_2))K_g'(\xi_1,\xi_2;T),\nonumber \end{eqnarray} where $G(V,R)=V\bra l(g,R)\ket_VZ(V)$ is the partition function for universes with two marked points separated by a geodesic distance $R$ (``two-point function'') as defined (up to matter fields) in \eref{e16}, see also \eref{e17b}. $K_V'(R,T)$ is the average probability distribution for diffusing a geodesic distance $R$ in the time $T$. It is natural to identify $RP_V'(T)$ with $K_V'(0,T)$. However, it is unknown whether the limit $R\rightarrow 0$ of \eref{erg:sd100} commutes with the functional integration over geometries. Actually, for the two-point function $G(V,R)$ it is easy to see that this is not the case. If the limits do not commute, there are two inequivalent definitions of the return probability $RP_V'(T)$. However, the scaling form of $K_V'(0,T)$ is the same as the scaling form \eref{erg:sd7} of $RP_V'(T)$ \cite{Watabiki:1996ja}. Indeed, we have \begin{equation} \label{erg:sd101} \int_0^{\infty}\! dR\bra l(g,R)\ket_V\ K_V'(R,T) = 1, \end{equation} and using $\text{dim}[R]=\text{dim}[V^{\nu}]$ we arrive at the scaling form \begin{equation} \label{erg:sd102} K_V'(R,T) = \frac{1}{V}P\left(\frac{R}{V^{\nu}},\frac{T}{V^{\la}}\right), \end{equation} where $\la$ is chosen such that $\frac{T}{V^{\la}}$ is dimensionless. Expanding the return probability $K_V'(0,T)$ around $T=0$ we get \begin{equation} \label{erg:sd103} K_V'(0,T) \sim \frac{V^{\frac{\la d_s}{2}-1}}{T^{d_s/2}},\text{~for $T\sim 0$,} \end{equation} by definition of $d_s$. If this short-time asymptotics stays nonzero and finite in the infinite volume limit, we get $\la=\frac{2}{d_s}$, and $K_V'(0,T)$ scales as \begin{equation} \label{erg:sd104} K_V'(0,T) = \frac{1}{T^{d_s/2}} \tilde{F}\left(\frac{T}{V^{2/d_s}}\right), \end{equation} which has the same form as the scaling behaviour of $RP_V'(T)$. \subsection{Intrinsic Hausdorff dimension} \label{sec:erg:ihd} The intrinsic Hausdorff dimension $d_h$ of two-dimensional quantum gravity has been defined in section \sref{obs}. One way is to measure the average circumference $\bra l(g,R)\ket_V$ of circles with a small radius $R$, which scales as $\bra l(g,R)\ket_V\sim R^{d_h-1}$. This is analogous to the classical ``clock and rod'' procedure to study the geometrical properties of spacetime. Alternatively one can define the fractal dimension $d_H$ as the scaling exponent of the average volume of the universes: $\bra V\ket_R \sim R^{d_H}$, compare \eref{e16f}. In two-dimensional quantum gravity the first ``local'' and the second ``global'' concept of the intrinsic Hausdorff dimension agree. For pure gravity, we have $d_H=4$, see \eref{e17g}. The intrinsic Hausdorff dimension has been computed in \cite{Watabiki:1993fk} by studying the diffusion equation in Liouville theory. Although the mathematical status of this derivation is unclear and it is not consistent with the derivation of the spectral dimension in section~\sref{erg:sd}, the result is valid for a number of special cases and agrees with many numerical simulations. In~\cite{Watabiki:1993fk} the mean square distance $\bra R^2(T)\ket_V$ of diffusion in the fluctuating spacetime is defined as: \begin{eqnarray} \label{erg:id1} \bra R^2(T)\ket_V &\equiv& \frac{1}{VZ(V)} \int\!\cD[\gmn]_V\ e^{-S_{\text{eff}}(g)} \int_M\! d^2\xi_1\sqrt{g}\int_M\! d^2\xi_2\sqrt{g}\ \nonumber\\ && \times d_g^2(\xi_1,\xi_2)K_g'(\xi_1,\xi_2;T).
\end{eqnarray} $\bra R^2(T)\ket_V$ scales with the volume as \begin{equation} \label{erg:id1suppl} \bra R^2(T)\ket_V \sim V^{\frac{2}{d_h}}, \end{equation} which yields yet another way to define the intrinsic Hausdorff dimension $d_h$. On the other hand, the mean square distance for diffusion on a fixed smooth geometry $[\gmn]$ with the initial condition \eref{erg:sd2} can be expanded for small times $T$ as \cite{DeWitt:1965} \begin{equation} \label{erg:id4} \int_M\! d^2\xi\sqrt{g}\ d_g^2(\xi_0,\xi) K_g'(\xi_0,\xi;T) = 4T - \frac{2}{3}T^2\cR(\xi_0) + O(T^3), \end{equation} with the curvature scalar $\cR$. If we assume that this expansion commutes with the functional average over geometries in \eref{erg:id1}, we get \begin{equation} \label{erg32} \bra R^2(T)\ket_V \sim \text{const}\ \cdot T, \text{~for small times $T$.} \end{equation} The dimension of $T$ can be computed from Liouville theory by expanding the (unnormalized) return probability for small times to observe how this scales under the transformation $V\rightarrow\la V$: \begin{eqnarray} \label{erg:id5} \lefteqn{\!\!\!\!\!\!\!\!\!\!\!\!\bra\int_M\!d^2\xi_0\sqrt{g}\ K_g'(\xi_0,\xi_0;T)\ket_{\la V} =\ \bra\int_M\!d^2\xi_0\sqrt{g}\ e^{T\De_g}\Psi(\xi,0)\Big\vert_{\xi=\xi_0}\ket_{\la V} } \nonumber \\ &=& \bra\int_M\!d^2\xi_0\sqrt{g}\ \left[ \Psi(\xi_0,0) + T\De_g\Psi(\xi,0)\Big\vert_{\xi=\xi_0} + O(T^2)\right]\ket_{\la V}\nonumber\\ &=& 1 + T\bra\int_M\! d^2\xi_0\sqrt{g}\ \De_g \frac{1}{\sqrt{g(\xi)}}\de(\xi-\xi_0) \Big\vert_{\xi=\xi_0}\ket_{\la V} + O(T^2)\nonumber\\ &\overset{\eref{e35}}{=}& 1 + T\ \la^{\frac{\be}{\al}} + O(T^2), \end{eqnarray} where $\be$ is computed for $\De_0=2$. This calculation is based on the assumption that the expansion of the exponential commutes with the functional integration over geometries. Then we have: \begin{equation} \label{erg:id7} \text{dim}[T] = \text{dim}[V^{2/d_H}] = \text{dim}[V^{-\be/\al}], \end{equation} and thus \begin{equation} \label{erg:id8} d_H = -\frac{2\al}{\be} = 2\ \frac{\sqrt{25-D}+\sqrt{49-D}}{\sqrt{25-D}+\sqrt{1-D}}, \end{equation} for matter with central charge $D$ coupled to two-dimensional gravity. For $D=0$, that is, in the case of pure gravity, \eref{erg:id8} predicts $d_H=4$, in agreement with \eref{e17g}. In the semiclassical limit $D\rightarrow -\infty$ we expect $d_H=2$, which is also correctly predicted by \eref{erg:id8}. The validity of \eref{erg:id8} has been widely discussed. The main assumption of the derivation sketched above is that one is allowed to interchange the expansion of $e^{T\De_g}$ with the functional integration in \eref{erg:id5}. Further input is the scaling assumption for $\bra R^2(T)\ket_V$. In principle the same result for the intrinsic Hausdorff dimension should be obtained if one uses some power of the Laplace operator. However, this is not true: results obtained from higher orders of the expansion of $e^{T\De_g}$ contradict the first-order result. An alternative theoretical prediction based on the transfer matrix method has been given in \cite{Distler:1990jv,Ishibashi:1994sv,Ishibashi:1995in}. They get: \begin{equation} \label{erg:id9} d_H = -\frac{2}{\ga} = \frac{24}{1-D+\sqrt{(25-D)(1-D)}}. \end{equation} For $D=0$ this yields again $d_H=4$, while for $D\rightarrow-\infty$ \eref{erg:id9} predicts $d_H=0$. To settle this question, the authors of \cite{Ambjorn:1997kb,Ambjorn:1997sy} performed a high precision Monte Carlo analysis for $D=-2$.
Their result is $d_H=3.574(8)$, in agreement with the value $d_H=3.562$ from \eref{erg:id8} and in clear disagreement with $d_H=2$ from \eref{erg:id9}. This clearly rules out the validity of \eref{erg:id9} for matter with $D<0$ coupled to gravity. However, for unitary matter with $D>0$ coupled to gravity the situation is less clear. Most simulations report values $d_H\approx 4$ \cite{Bowick:1995,Ambjorn:1995rg,Ambjorn:1997wc}, and a back-reaction of the matter on the structure of the quantum spacetime cannot be observed. However, values extracted from the scaling of the matter correlation functions are higher, though less reliable. All data are consistent with the conjecture \cite{Bowick:1995,Ambjorn:1995rg}: \begin{equation} \label{erg:id10} d_H=4,\text{~for $0\leq D\leq 1$,} \end{equation} although the validity of \eref{erg:id8} for $D>0$ is not completely ruled out. \begin{table}[tb] \renewcommand{\baselinestretch}{1.2} \normalsize \begin{center} \begin{tabular}{lllllll} \hline\hline\hline $D=-5$ & $D=-2$ & $D=0$ & $D=\frac{1}{2}$ & $D=\frac{4}{5}$ & $D=1$ & ref.\\ \hline 3.236 & 3.562 & 4 & 4.212 & 4.421 & 4.828 & \eref{erg:id8}\\ 1.236 & 2 & 4 & 6 & 10 & $\infty$ & \eref{erg:id9}\\ \hline 3.36(4) & 3.574(8) & 4.05(15) & 4.11(10) & 4.01(9) & 3.8--4.0 & \cite{Anagnostopoulos:1998ab,Ambjorn:1997nf}\\ \hline & & & 3.96--4.38 & 3.97--4.39 & & \cite{Ambjorn:1997nf}\\ \hline\hline\hline \end{tabular} \renewcommand{\baselinestretch}{1.0} \normalsize \parbox[t]{\textwidth} { \caption[bv1] {\label{tab:erg:id1} \small Current status of numerical values for the intrinsic Hausdorff dimension $d_H$ compared to the two different theoretical predictions \eref{erg:id8} and \eref{erg:id9}. The values in the first part of the table are the theoretical predictions. In the second part of the table we summarize results from the scaling of the two-point function for the random surfaces, while in the last part the Hausdorff dimension is determined from the scaling of the matter correlation functions. See the references for technical details about the simulations and the data analysis. } } \end{center} \end{table} The current status of the numerical simulations is summarized in table \tref{erg:id1}. \chapter{Discretization of $2d$ quantum gravity} \label{sec:disc} The formulation of quantum field theory in terms of renormalized Euclidean functional integrals leads to an identification of quantum field theory with statistical mechanics. A key ingredient for this identification is the discretization of spacetime. Then the powerful techniques from the theory of critical phenomena become available; they have proved to be invaluable for the understanding of renormalization and of non-perturbative phenomena. Therefore it is natural to attempt a discretization of the continuum theory of quantum gravity. However, spacetime no longer plays the role of a mere background but is the dynamical variable itself. This, combined with the diffeomorphism invariance of the continuum theory, poses a problem for the discretization, which has been solved successfully by the method of dynamical triangulations~\cite{Ambjorn:1985az,David:1985tx,Kazakov:1985ea}. The main idea is to discretize geometry directly, with no reference to coordinate parametrizations. That means that the functional integral over equivalence classes of metrics on a manifold $M$ is replaced by a finite sum over piecewise linear spaces which are constructed by successively gluing $d$-simplices together.
The fixed edge length of the simplices introduces a reparametrization invariant cutoff into the theory. In this way quantum gravity can be formulated as an ensemble of discrete random manifolds, and the machinery of critical phenomena can be applied. The continuum theory is recovered at a critical point of this ensemble. Masses and continuum coupling constants are defined by the approach to the critical point in the scaling limit. Research in the formulation of classical gravity in purely geometrical terms was initiated in~1961 by T.~Regge~\cite{Regge:1961}, while the first attempt at a discretization of quantum gravity in the sense of this chapter was made by Weingarten in~1980~\cite{Weingarten:1980,Weingarten:1982}. A good review with some comments about the historical development of the subject is~\cite{Ambjorn1997}. Other reviews can be found in~\cite{David:1992,Ambjorn:1994,Ambjorn:1995aw}. Much of the material presented in this chapter is taken from these articles. In~\cite{DiFrancesco:1995nw} the relation between the matrix-model technique and dynamical triangulation is reviewed. In the first section of this chapter we introduce the method of dynamical triangulations and discuss the scaling properties of the theory. Branched polymers, which have been mentioned before, are discussed in some detail. In section~\sref{mm} we give a brief review of the matrix-model techniques which have been applied to solve two-dimensional quantum gravity. For more details and further issues we refer to the review articles given above. The explicit solution of discretized two-dimensional quantum gravity shows that the continuum theory is the scaling limit of dynamical triangulation. Therefore both theories can be identified. In section~\sref{fs} we outline how the fractal structure and the scaling properties of pure two-dimensional quantum gravity can be obtained. An alternative method for the discretization of quantum gravity, known as quantum Regge calculus, has been suggested. This is discussed in chapter~\sref{regge} of this thesis. \section{Dynamical triangulation} Dynamical triangulation is a discretization of quantum geometries. No reference to parametrizations has to be made. At the same time it provides a regularization of the continuum theory by introducing an explicit cutoff. In this section we can only outline the beauty and power of dynamical triangulations in two dimensions. We begin with a short introduction to the discretization of continuum manifolds and geometries. Then we define the partition function of two-dimensional dynamical triangulation and discuss its scaling properties. The susceptibility exponent $\ga$ is characterized through the fractal structure of the random spacetimes. We end this section with a discussion of the scaling properties of the two-point function, which are illustrated by means of the model of branched polymers. \subsection{Discretization of geometry} \label{sec:dg} Let us define an oriented $n$-simplex as an $(n+1)$-tuple of points modulo even permutations. Thus a $0$-simplex is a point, a $1$-simplex is a pair of points which can be identified with a line segment, a $2$-simplex is a triangle, a $3$-simplex a tetrahedron, etc. A subsimplex $\sig'$ of a simplex $\sig$ is defined as a subset of $\sig$: $\sig'\subseteq\sig$. By gluing simplices together one gets a simplicial complex. Simplicial complexes have no fixed dimension; in general they are not manifolds.
However, their structure is sufficient to define exterior calculus on them, which physically means that scalar fields, gauge fields, antisymmetric tensor fields etc.\ can be defined on them. A $d$-dimensional simplicial manifold is a simplicial complex such that the neighbourhood of each point in the complex is homeomorphic to a $d$-dimensional ball. In one and two dimensions each simplicial manifold can be obtained by gluing pairs of $d$-simplices along some of their $(d-1)$-dimensional faces. In higher dimensions this successive gluing yields in some cases only pseudo-manifolds, in which a neighbourhood of a point can also be homeomorphic to a topologically more complicated object. The structure of simplicial manifolds corresponds to a discretization of the continuous manifold structure. To define geometrical concepts we have to introduce a metric. A $d$-dimensional simplicial manifold can be equipped canonically with a Riemannian metric by demanding that the metric is flat inside each $d$-simplex, continuous when a $(d-1)$-subsimplex is crossed, and that each $(d-1)$-subsimplex is a flat linear subspace of the adjacent $d$-simplices. A simplicial manifold equipped with such a metric is called a piecewise linear space. The canonical metric can be defined by assigning lengths to the $N_l$ links ($1$-subsimplices) of the simplicial manifold. First note that any $d$-dimensional simplicial manifold $M$ can be covered with charts $(U,\phi)$ such that each $d$-simplex is parametrized by barycentric coordinates. Here \begin{equation} \label{r0} U=\lbrace\xi\in \mathbb{R}^d_+\vert\xi_1+\cdots +\xi_d<1\rbrace, \end{equation} and $\phi: U\rightarrow M$ is given by \begin{equation} \label{r0a} \phi(\xi) = \xi_1y_1 + \ldots + \xi_dy_d + (1-\xi_1-\ldots-\xi_d)y_{d+1}, \end{equation} where $y_1, \ldots, y_{d+1}$ are the coordinates of the vertices of the simplicial complex which live in some ambient space $\mathbb{R}^n$. Then on each chart the canonical metric is defined by \begin{equation} \label{r0b} \gmn(\xi) = \frac{\partial\phi}{\partial\xi^{\mu}}\cdot \frac{\partial\phi}{\partial\xi^{\nu}}. \end{equation} This metric is Euclidean inside each $d$-simplex and continuous if a $(d-1)$-face is crossed. It is compatible with the manifold structure and can be expressed solely in terms of the link lengths $l_1, \ldots, l_{N_l}$. For a triangle~\eref{r0b} can be written as \begin{equation} \label{r0cc} \gmn = \left( \begin{array}[c]{cc} l_1^2 & \frac{1}{2}(l_1^2+l_2^2-l_3^2) \\ \frac{1}{2}(l_1^2+l_2^2-l_3^2) & l_2^2 \end{array} \right). \end{equation} The intrinsic curvature of a two-dimensional piecewise linear space is located at the vertices, that is, at its $0$-subsimplices. The curvature can clearly not reside in the triangles, since they are flat. Furthermore the metric is continuous when crossing the edges of the triangles. Since we define curvature as an intrinsic concept of the piecewise linear space it should be independent of the embedding. We can bend the surface around an edge without changing the geometry -- an observation which the German mathematician C.~F.~Gauss made in his famous paper {\em Disquisitiones generales circa superficies curvas} (1828) and which he called the {\em theorema egregium}. Therefore it is intuitive that the curvature cannot reside in the edges. Rather, it is concentrated at the vertices or in general at the $(d-2)$-hinges of the piecewise linear space.
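Before quantifying this curvature, the canonical metric itself can be checked concretely. The following minimal Python sketch (our own example; the function names are ours) builds \eref{r0cc} from three given link lengths and verifies that $\frac{1}{2}\sqrt{\det \gmn}$ reproduces Heron's formula for the area of the triangle: \begin{verbatim}
# Minimal sketch (our own example): the canonical metric (r0cc) of a single
# triangle with link lengths l1, l2, l3, and a check that sqrt(det g)/2
# reproduces Heron's formula for its area.
import math

def triangle_metric(l1, l2, l3):
    off = 0.5 * (l1**2 + l2**2 - l3**2)
    return [[l1**2, off], [off, l2**2]]

def area_from_metric(g):
    return 0.5 * math.sqrt(g[0][0] * g[1][1] - g[0][1] * g[1][0])

def heron(l1, l2, l3):
    s = 0.5 * (l1 + l2 + l3)
    return math.sqrt(s * (s - l1) * (s - l2) * (s - l3))

g = triangle_metric(3.0, 4.0, 5.0)
print(area_from_metric(g), heron(3.0, 4.0, 5.0))   # both print 6.0
\end{verbatim} Here $\frac{1}{2}\sqrt{\det \gmn}$ plays the role of the discrete volume element, the counterpart of the continuum $\sqrt{g}$.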
To each vertex $v$ in the surface we can assign a deficit angle $\de_v$ as the difference between $2\pi$ and the sum of the angles meeting at $v$: \begin{equation} \label{di1} \de_v = 2\pi - \sum_{t:v\in t}\al_{v,t}, \end{equation} where the sum goes over all triangles $t$ containing $v$. $\al_{v,t}$ is the angle at $v$ inside $t$. The scalar curvature $\cR_v$ at $v$ is defined as \begin{equation} \label{meas2.5} \cR_v = 2\frac{\de_v}{dA_v},~dA_v = \frac{1}{3}\sum_{t:\, v\in t} A_t, \end{equation} where $A_t$ is the area of the triangle $t$. The scalar curvature $\cR$ in two dimensions equals two times the Gaussian curvature. With these definitions we finally have: \begin{equation} \label{di5} \text{total area of the surface}=\sum_{v=1}^{N_v}dA_v, \end{equation} and the Gauss-Bonnet theorem is now valid in the form \begin{equation} \label{di4} \sum_{v=1}^{N_v} \cR_vdA_v = 4\pi\chi, \end{equation} which is equivalent to the polyhedron formula $N_t-N_l+N_v=\chi$ of Descartes and Euler. \subsection{Definition of the model} The original proposal of Regge~\cite{Regge:1961} consists in constructing a sequence of piecewise linear spaces to approximate a given smooth manifold $M$ such that for any sufficiently smooth function $f$ \begin{equation} \label{di2} \sum_{v}dA_v f_v\rightarrow \int_M\! dA(\xi)\ f(\xi), \end{equation} where $f_v$ is the value of the function $f$ at the vertex $v$. The applicability of this point of view in the quantum case will be discussed in chapter~\sref{regge}. Here we take a somewhat different point of view. Our aim is to integrate over all equivalence classes of metrics on a given manifold $M$. We introduce a reparametrization invariant cutoff by assigning the length $a$ to each edge of the piecewise linear spaces. We construct all possible piecewise linear spaces by gluing equilateral triangles together. In this case the formulas above simplify considerably. Let $n_v$ be the order of the vertex $v$. Then we have: \begin{equation} \label{di6} dA_v = \frac{\sqrt{3}}{12}a^2n_v,\quad \cR_vdA_v = \frac{2\pi}{3}(6-n_v). \end{equation} For simplicity we absorb the factor $\frac{\sqrt{3}}{4}$ into $a^2$ and then set $a=1$, so that each triangle has unit area. Equation~\eref{di6} shows that two triangulations which cannot be mapped onto each other by a relabelling of the vertices lead to different local curvature assignments. Thus they define different metric structures. The set of combinatorially non-equivalent triangulations defines a grid in the space of diffeomorphism classes of metrics. The proposal described in this chapter relies on the hope that this grid becomes uniformly dense in the limit $a\rightarrow 0$. To define a regularized theory of quantum gravity we replace the action $S({g}, \La) = \La V_g$ in~\eref{e5a} by \begin{equation} \label{di7} S_T(\mu) = \mu N_t, \end{equation} where $\mu$ is the bare cosmological constant. The integration over equivalence classes of metrics on $M$ is replaced by a summation over all non-isomorphic equilateral triangulations $T$ with the topology of $M$. In the case of matter coupled to quantum gravity, the matter fields $X$ can be defined on the vertices, the links or on the triangles. In general the action $S_{\text{matter}}$ and the functional integration over the matter fields will depend on the triangulation $T$. The discretized version of the partition function can thus be written as \begin{equation} \label{di8} Z(\mu) = \sum_{T\in \cT} \frac{1}{C_T} e^{-S_T(\mu)}\int\!
\cD_T[X]\ e^{-S_{\text{matter}}(T,X)}, \end{equation} where $C_T$ denotes the symmetry factor of the triangulation $T$, which is equal to the order of the automorphism group of $T$. The summation goes over a suitable class $\cT$ of abstract triangulations $T$ defined by their vertices and a connectivity matrix. Appealing to universality, details of the chosen class of triangulations, such as whether closed two-loops are allowed or not, should not be important for the theory. This has been verified a posteriori. For simplicity we will ignore any possible matter fields coupled to gravity in the following equations. Then we can write: \begin{equation} \label{di9} Z(\mu) = \sum_{N=1}^{\infty} e^{-\mu N} Z(N),\quad Z(N) = \sum_{T\in \cT_N} \frac{1}{C_T}, \end{equation} where $\cT_N$ denotes the subset of triangulations of $\cT$ with $N$ triangles. Expectation values of observables $\cO$ are defined as: \begin{eqnarray} \label{di9a} \bra\cO\ket_{\mu} &=& \frac{1}{Z(\mu)} \sum_{T\in \cT} \frac{1}{C_T} e^{-S_T(\mu)} \cO(T),\\ \bra\cO\ket_{N} &=& \frac{1}{Z(N)}\sum_{T\in \cT_N} \frac{1}{C_T} \cO(T), \end{eqnarray} in the ``grand-canonical'' ensemble with fixed cosmological constant and in the ``canonical'' ensemble with fixed volume respectively. $\mu$ can be understood as the chemical potential for adding new triangles to the surfaces. $Z(N)$ can be interpreted as the number of triangulations in $\cT_N$. This number is exponentially bounded: \begin{equation} \label{di10} Z(N) = e^{\mu_c N}N^{\ga-3}\left(1+O(1/N)\right). \end{equation} The proof for spherical topology has been given in~\cite{Tutte:1962}. For a general proof by purely combinatorial methods in the spirit of Tutte see~\cite{Ambjorn1997}. It can be proved that the critical point $\mu_c$ does not depend on the topology. That means that the statistical ensemble defined by~\eref{di8} has a critical point $\mu_c$. $Z(\mu)$ is analytic for $\mu>\mu_c$ and contains non-analytic parts at the critical point. The latter are of primary interest to us, since they are the universal parts given by the large $N$ behaviour of $Z(N)$. The continuum limit should be defined as $\mu\rightarrow\mu_c$. Close to the critical point we have: \begin{equation} \label{di11} Z(\mu) = (\mu-\mu_c)^{2-\ga} + \text{less singular terms,} \end{equation} from a discrete Laplace transformation of~\eref{di10}. Many analytical and numerical investigations have revealed that $\ga$ assumes the same values as in the continuum theory of two-dimensional quantum gravity and is given by~\eref{e3d}. Thus the susceptibility exponent $\ga$ can also be characterized as the subleading power correction of the exponentially growing number of triangulations for a fixed topology. Furthermore, $\ga$ can be characterized by the branching ratio into minimal bottleneck baby universes~\cite{Jain:1992bs}, that is, parts of the universe which are connected to the rest by a closed loop of three links which do not form a triangle, see figure \fref{di1}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\linewidth]{FIGURES/baby.eps} \parbox[t]{.85\textwidth} { \caption[babyfig] { \label{fig:di1} \small Branching of a minimal bottleneck baby universe from the larger parent. } } \end{center} \end{figure} The smaller part is then called the minimal bottleneck baby universe and the larger part is called the parent.
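Before turning to the counting of baby universes, note that the step from \eref{di10} to \eref{di11} is easy to check numerically. The following minimal Python sketch (our own illustration) takes the pure gravity value $\ga=-\frac{1}{2}$, differentiates $Z(\mu)$ three times with respect to $\mu$ so that the singular part becomes divergent, and fits its exponent $-(\ga+1)$: \begin{verbatim}
# Minimal sketch (our own check): the large-N behaviour (di10),
# Z(N) ~ e^(mu_c N) N^(gamma-3), produces the singular part
# (mu - mu_c)^(2-gamma) of Z(mu) in (di11).  The factor e^(mu_c N)
# cancels in e^(-mu N) Z(N), and the third mu-derivative of Z(mu) is,
# up to sign, sum_N N^gamma e^(-dmu N) ~ dmu^(-(gamma+1)).
import math

gamma = -0.5                      # pure gravity value of gamma

def d3Z(dmu, nmax=200000):        # |d^3 Z / d mu^3| of the singular sum
    return sum(n**gamma * math.exp(-dmu * n) for n in range(1, nmax))

dmus = [0.002 * 2**k for k in range(5)]
vals = [d3Z(x) for x in dmus]
for x1, x2, f1, f2 in zip(dmus, dmus[1:], vals, vals[1:]):
    # local log-log slope; approaches -(gamma+1) = -0.5 for small dmu
    print(math.log(f2 / f1) / math.log(x2 / x1))
\end{verbatim}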
In the ensemble of universes with volume $N$ the average number $\bra\cN(V)\ket_N$ of baby universes with volume $V$ is given by \begin{equation} \label{di12} \bra\cN(V)\ket_N \sim \frac{3!}{Z(N)} V Z(V)\ (N-V)Z(N-V). \end{equation} The factors $Z(V)$ and $Z(N-V)$ are the weights of the baby universe and of the parent respectively. The volume factors $V$ and $N-V$ reflect the fact that the baby universe can be attached at any triangle. Finally, $3!$ is the number of ways the two boundaries of the parent and of the minimal bottleneck baby universe can be glued together. Equation~\eref{di12} is valid in the generic case where there are no additional symmetries. Assuming that $Z(N)$ is given by~\eref{di10} we get: \begin{equation} \label{di13} \bra\cN(V)\ket_N \sim N V^{\ga-2} \left(1-\frac{V}{N}\right)^{\ga-2}. \end{equation} For $V\ll N$ this reduces to $\bra\cN(V)\ket_N \sim N V^{\ga-2}$. It is interesting to note that $\ga$ is a function of the matter coupled to gravity. Thus it describes an aspect of the fractal structure of quantum spacetime that originates from the back-reaction of the matter on gravity. \subsection{The two-point function} \label{sec:tpf} In analogy to the continuum formalism in chapter~\sref{chap1} we can define a two-point function which in a natural way contains information about the fractal structure of the random surfaces. In principle, we could define a geodesic distance by using the canonical metric described in section~\sref{dg}. Alternatively, the use of simplified definitions has been suggested. The geodesic distance between two links $l_1$ and $l_2$ is defined as the length of the shortest path of triangles connecting $l_1$ and $l_2$. The geodesic distance between a link $l_1$ and a set of links $\cL$ is defined as the minimum of the geodesic distances between $l_1$ and the elements of $\cL$. Furthermore we define the geodesic distance between a loop $\cL_1$ and a loop $\cL_2$ to be $r$ if all links of $\cL_1$ lie a geodesic distance $r$ from the loop $\cL_2$. Note that this definition is not symmetric in $\cL_1$ and $\cL_2$. Similarly we could have defined the geodesic distance as the distance between vertices or between triangles. Then the two-point function is defined as: \begin{eqnarray} \label{di14} G(\mu,r) &=& \sum_{T\in \cT(2,r)} e^{-\mu N_t} = \sum_{N=1}^{\infty} e^{-\mu N} G(N,r),\\ G(N,r) &=& \sum_{T\in \cT_N(2,r)} 1, \end{eqnarray} where $\cT(2,r)$ denotes the class of triangulations $T$ with two marked links separated a geodesic distance $r$, and $\cT_N(2,r)$ the subclass where all triangulations consist of $N$ triangles. Note that $r$ is an integer in units of the lattice spacing. $G(\mu,r)$ is the discretized form of $G(\La,R)$ in~\eref{e10}, while $G(N,r)$ is the discretized form of $G(V,R)$ in~\eref{e16}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.75\linewidth]{FIGURES/glue.eps} \parbox[t]{.85\textwidth} { \caption[gluefig] { \label{fig:di2} \small Two surfaces with two marked links separated a geodesic distance $r_1$ and $r_2$ respectively can be glued together after cutting open one of the marked links each to form a surface with two marked links separated a geodesic distance $r_1+r_2$. } } \end{center} \end{figure} An important property of the two-point function is: \begin{equation} \label{di15} G(\mu,r_1+r_2)\geq \text{const}\times G(\mu,r_1)\ G(\mu,r_2), \end{equation} which follows from the fact that each term on the right hand side uniquely defines a term on the left hand side. This is demonstrated in figure \fref{di2}.
From this one can conclude that the limit \begin{equation} \label{di16} \lim_{r\rightarrow\infty}\frac{-\log G(\mu,r)}{r} = m(\mu) \end{equation} exists with $m(\mu)\geq 0$ and $m'(\mu)>0$, for $\mu>\mu_c$. Thus the two-point function falls off exponentially for $r\rightarrow\infty$. These relations are by now well known. However, the complete proofs require some technical arguments, compare~\cite{Ambjorn1997}. Below we will show that a continuum limit of the discretized theory can only exist if $m(\mu)$ scales to zero as $\mu\rightarrow\mu_c$. Let us assume that this is the case: \begin{equation} \label{di17a} m(\mu) \sim (\mu-\mu_c)^{\nu},~\text{for $\mu\rightarrow\mu_c$.} \end{equation} In general we expect that close to the critical point the exponential decay of $G(\mu,r)$ turns over into a power law. More precisely: \begin{eqnarray} \label{di18} G(\mu,r) &\sim& e^{-m(\mu) r},~\text{for}~1\ll m(\mu)r,\\ \label{di18a} G(\mu,r) &\sim& r^{1-\eta},~\text{for}~m(\mu)\ll m(\mu)r\ll 1. \end{eqnarray} The exponent $\eta$ is called the anomalous dimension in analogy to ordinary statistical systems. Furthermore we can define the susceptibility $\chi(\mu)$ in the discretized ensemble as \begin{equation} \label{di19} \chi(\mu) = \sum_{r=1}^{\infty} G(\mu,r). \end{equation} Close to the critical point, where triangulations with large $N_t$ dominate and symmetry factors play no role, we have \begin{equation} \label{di20} \chi(\mu) \sim \frac{\partial^2Z(\mu)}{\partial\mu^2} \sim (\mu-\mu_c)^{-\ga}. \end{equation} Similar to chapter~\sref{chap1} we can introduce several concepts of fractal dimensions to characterize the geometrical structure of the two-dimensional quantum spacetime. The Hausdorff dimension $d_H$ in the grand-canonical ensemble is defined by \begin{equation} \label{di20a} \bra N\ket_r \sim r^{d_H}, \text{~for $r\rightarrow\infty$ and $m(\mu)r=\text{const}$,} \end{equation} where the average $\bra N\ket_r$ is given by \begin{equation} \label{di20b} \bra N\ket_r = \frac{1}{G(\mu,r)} \sum_{T\in\cT(2,r)} N_te^{-\mu N_t} = -\frac{\partial \log G(\mu,r)}{\partial \mu}. \end{equation} If we take the derivative subject to the constraint in~\eref{di20a}, it follows that \begin{equation} \label{di20c} \bra N\ket_r\sim m'(\mu)r \sim r^{\frac{1}{\nu}}. \end{equation} Thus the Hausdorff dimension $d_H$ is related to the scaling exponent $\nu$ by: \begin{equation} \label{di20d} \nu = \frac{1}{d_H}. \end{equation} With~\eref{di20} we can derive Fisher's scaling relation \begin{equation} \label{di21} \ga = \nu (2-\eta), \end{equation} which relates the critical exponents defined above. From the long distance behaviour of $G(\mu,r)$ we can compute the long distance behaviour of $G(N,r)$ by a saddle point calculation: \begin{equation} \label{di22} G(N,r) \sim e^{-c\left(\frac{r}{N^{\nu}}\right)^{\frac{1}{1-\nu}}}, \end{equation} with $c=\left(\frac{1}{\nu}-1\right)\nu^{\frac{1}{1-\nu}}$. Analogously to the continuum formalism in chapter~\sref{chap1} we can define a Hausdorff dimension $d_h$ by the short distance scaling of the two-point function in the canonical ensemble with a fixed number $N$ of triangles. For $r=0$, $G(N,0)$ is the one-point function and behaves as \begin{equation} \label{di22a} G(N,0) \sim e^{\mu_c N} N^{\ga-2}. \end{equation} This is because for large $N$ the one-point function is proportional to $NZ(N)$, since it counts triangulations with one marked link. Now let $r\ll N^{1/d_h}$ and count the number $n(r)$ of triangles which lie a geodesic distance $r$ away from a marked link.
Analogously to the continuum formalism in chapter~\sref{chap1} we can define a Hausdorff dimension $d_h$ by the short distance scaling of the two-point function in the canonical ensemble with fixed number $N$ of triangles. For $r=0$, $G(N,0)$ is the one-point function and behaves as \begin{equation} \label{di22a} G(N,0) \sim e^{\mu_c N} N^{\ga-2}. \end{equation} This is because for large $N$ the one-point function is proportional to $NZ(N)$ since it counts triangulations with one marked link. Now let $r\ll N^{1/d_h}$ and count the number $n(r)$ of triangles which lie a geodesic distance $r$ away from a marked link. The average of $n(r)$ in the ensemble of surfaces with one marked link defines the Hausdorff dimension $d_h$ by \begin{equation} \label{di23} \bra n(r)\ket_N \sim \frac{G(N,r)}{G(N,0)} \sim r^{d_h-1}, ~\text{for $1\ll r\ll N^{1/d_h}$.} \end{equation} The first relation follows from the definition of $\bra n(r)\ket_N$. Together with~\eref{di22a} we conclude: \begin{equation} \label{di24} G(N,r) \sim r^{d_h-1} N^{\ga-2} e^{\mu_c N}, ~\text{for $1\ll r\ll N^{1/d_h}$.} \end{equation} Finally, the short distance behaviour of $G(\mu,r)$ follows by a discrete Laplace transformation. Close to the critical point we get: \begin{equation} \label{di25} G(\mu,r) \sim r^{\ga d_h-1}. \end{equation} Together with~\eref{di18a} this proves again Fisher's scaling relation. Furthermore it turns out that both definitions of the intrinsic Hausdorff dimension are equivalent in two-dimensional quantum gravity. However, counterexamples exist, as will be demonstrated in the next section. The mass $m(\mu)$ determines the scaling of the discretized theory. To see how continuum expressions can be approached in the discretized theory let us reintroduce dimensions. The renormalized cosmological constant $\La$ in terms of the bare cosmological constant $\mu$ is defined by: \begin{equation} \label{di26} \mu-\mu_c = \La a^2, ~\text{that means $a(\mu) \sim (\mu-\mu_c)^{1/2}$.} \end{equation} If the mass is to survive in the continuum limit, it has to be introduced as: \begin{equation} \label{di27} M = m(\mu)\ a(\mu)^{-2\nu} = c \La^{\nu}, \end{equation} with a constant $c$. If the two-point function is to survive, we must have \begin{equation} \label{di17} m(\mu) r = M R, \end{equation} where the continuum parameters $M$ and $R$ are held fixed and the number of steps $r$ goes to infinity, that means: \begin{equation} \label{di28} R = r\ a(\mu)^{2\nu}. \end{equation} From~\eref{di18} and~\eref{di18a} we see that the continuum two-point function is given by: \begin{equation} \label{di29} G(\La,R) = \lim_{\mu\rightarrow\mu_c}a(\mu)^{2\nu(1-\eta)} G(\mu,r), \quad m(\mu)r = MR. \end{equation} This scaling form can be explicitly verified by an exact calculation of $G(\mu,r)$ for pure gravity~\cite{Kawai:1993cj,Watabiki:1995ym}, compare section~\sref{fs}. \subsection{Branched polymers} \label{sec:bpo} An interesting simple example for the scenario described above is provided by the model of branched polymers, which is important in the understanding of the phases of two-dimensional quantum gravity. The critical indices have been computed in detail in~\cite{Ambjorn:1986dn,Ambjorn:1990wp}. The partition function is defined as \begin{equation} \label{bp1} Z(\mu) = \sum_{BP} \frac{1}{C_{BP}}e^{-\mu N_{l}}\prod_{v\in BP}f(n_v), \end{equation} where the sum goes over all branched polymers, that means all connected planar tree graphs. $N_l$ denotes the number of links in a branched polymer. The product goes over all vertices in the tree graphs and $n_v$ is the number of links joining at the vertex $v$, called the order of $v$. Usually, the (unnormalized) branching weight $f(n)$ is a non-negative function of the order of the vertices. Finally the symmetry factor $C_{BP}$ is chosen such that rooted branched polymers, that means branched polymers with the first link marked, are counted only once. The one-point function $G(\mu)$ is defined similarly as the sum over rooted branched polymers. 
In this case the symmetry factor drops out. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.70\linewidth]{FIGURES/bp.eps} \parbox[t]{.85\textwidth} { \caption[bpfig] { \label{fig:bp1} \small Illustration of the self-consistent equation for rooted branched polymers. The first link gives a factor $e^{-\mu}$. At the vertices the branching weight has to be included. } } \end{center} \end{figure} From figure \fref{bp1} it follows that $G(\mu)$ satisfies the equation \begin{equation} \label{bp2} G(\mu) = e^{-\mu}\left(1 + f(2) G(\mu) + f(3) G(\mu)^2 + \ldots \right). \end{equation} We can solve this relation for $e^{\mu}$ as a function of $G(\mu)$: \begin{equation} \label{bp3} e^{\mu} = \frac{1 + f(2) G(\mu) + f(3) G(\mu)^2 + \ldots }{G(\mu)} \equiv \frac{\cF(G)}{G} \equiv F(G). \end{equation} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\linewidth]{FIGURES/solution.eps} \parbox[t]{.85\textwidth} { \caption[solfig] { \label{fig:bp2} \small Graphical solution of~\eref{bp3}. The critical point is identified as the minimum of $e^{\mu}$ as a function of $G$. } } \end{center} \end{figure} This equation is illustrated in figure \fref{bp2}. The critical point is at the minimum of $F(G)$. Since all $f(n)$ are positive the minimum is unique and satisfies $F'(G(\mu_c))=0$ and $F''(G(\mu_c))>0$. Therefore we have: \begin{equation} \label{bp4} G(\mu) \sim G(\mu_c) - c (\mu-\mu_c)^{\frac{1}{2}}, ~\text{for $\mu\rightarrow\mu_c$,} \end{equation} where $c$ is some constant. Since $G(\mu)$ is the one-point function we would have expected a behaviour $G(\mu) \sim (\mu-\mu_c)^{1-\ga}$. Thus we conclude that the generic value of $\ga$ for branched polymers is $\frac{1}{2}$. If we allow some of the values $f(n)$ to be negative, we can fine tune the $f(n)$ such that the minimum of $F(G)$ satisfies \begin{equation} \label{bp5} F^{(k)}(G(\mu_c)) = 0, ~\text{for $k=1,\ldots,m-1$, and $F^{(m)}(G(\mu_c))\neq 0$.} \end{equation} This model is called the $m$'th multicritical branched polymer model. For $m=2$ we simply recover the ordinary branched polymer model. The critical behaviour of the one-point function is changed to \begin{equation} \label{bp6} G(\mu) \sim G(\mu_c) - c (\mu-\mu_c)^{\frac{1}{m}}, ~\text{for $\mu\rightarrow\mu_c$,} \end{equation} and we get $\ga=\frac{m-1}{m}$. To define the two-point function $G(\mu,r)$ for branched polymers, let the geodesic distance between two marked vertices $x$ and $y$ be the length of the shortest path between the two points; on a tree graph this path is of course unique. The two-point function is defined as the partition function of branched polymers with the constraint that there are two marked points separated a geodesic distance $r$. The path between the marked points can be viewed as a random walk where at each vertex a rooted branched polymer can grow out. If we think only in terms of intrinsic geometrical properties this leads to \cite{Ambjorn:1990wp}: \begin{equation} \label{bp7} G(\mu,r) \sim \left(e^{-\mu}\cF'(G(\mu))\right)^r = e^{-m(\mu)r}, \end{equation} from which we conclude, using $e^{-\mu}\cF'(G(\mu))= 1 - \frac{G(\mu)}{\chi(\mu)}$, that \begin{equation} \label{bp8} m(\mu) = \ka (\mu-\mu_c)^{\frac{m-1}{m}}, \end{equation} in the general case for $m\geq 2$, with a positive constant $\ka$. Together with~\eref{di20d} it follows that the intrinsic Hausdorff dimension $d_H$ equals $\frac{m}{m-1}$. For $m=2$ we have $d_H=2$ and the branched polymers have the same intrinsic dimension as smooth two-dimensional manifolds. 
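As a simple check of the generic case, consider the standard example (not needed in what follows) in which all branching weights equal one, $f(n)=1$. Then $\cF(G)=1/(1-G)$ and~\eref{bp2} becomes $G(1-G)=e^{-\mu}$, with the solution \[ G(\mu) = \frac{1}{2}\left(1-\sqrt{1-4e^{-\mu}}\right), \] so that $e^{\mu_c}=4$ and $G(\mu)\approx \frac{1}{2}-\frac{1}{2}(\mu-\mu_c)^{\frac{1}{2}}$ close to the critical point, in agreement with~\eref{bp4}. Expanding in powers of $e^{-\mu}$ shows that the number of rooted branched polymers with $N$ links is the Catalan number $C_{N-1}\sim 4^N N^{-3/2}$, consistent with $\ga=\frac{1}{2}$. 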
For $m\rightarrow\infty$, $d_H$ approaches one, in agreement with the fact that in this limit the branched polymers approach ordinary random walks. The two-point function $G(\mu,r)$ is given by \begin{equation} \label{bp8a} G(\mu,r) = \text{const}\times e^{-\ka r (\mu-\mu_c)^{\frac{m-1}{m}}}. \end{equation} We can also compute the canonical intrinsic Hausdorff dimension $d_h$ for branched polymers defined in the ensemble of branched polymers with a fixed number $N$ of links. The volume $\bra n(r)\ket_N$ of a spherical shell of geodesic radius $r$ is defined in analogy to~\eref{e17b} and~\eref{di23}. $Z(N)$ for branched polymers is given by an inverse Laplace transformation of~\eref{bp1} and scales as $Z(N) \sim e^{\mu_c N}N^{\ga-3}$. Similarly the two-point function $G(N,r)$ for graphs with fixed volume $N$ is given by an inverse Laplace transformation of $G(\mu,r)$. Inserting the scaling behaviour and taking only the leading orders we find for small $r$~\cite{Ambjorn:1997sy}: \begin{equation} \label{bp9} \bra n(r)\ket_N \sim \frac{r}{N^{1-\frac{2}{m}}}, ~\text{for $N^{1-\frac{2}{m}}\ll r\ll N^{1-\frac{1}{m}}$.} \end{equation} This shows that $d_h=2$ for all values of $m$. Thus for the generic branched polymers ($m=2$) the canonical and the grand-canonical definition of the intrinsic Hausdorff dimension give the same result. However, this is not true for the $m$'th multicritical branched polymer model with $m>2$, where $d_H=\frac{m}{m-1}$ and $d_h=2$. The fractal structure of branched polymers embedded in $\mathbb{R}^D$ is described by the extrinsic Hausdorff dimension $D_H$, which can be computed from the scaling of the embedded two-point function. The path between two marked points $x$ and $y$ is an embedded random walk from which branched polymers can grow out at each vertex. The integration over the embedding field is the same as for the ordinary random walk in $D$ dimensions. The branching at the vertices corresponds to a renormalization of $e^{-\mu}$. From this renormalization $D_H$ can be computed with the result $D_H=\frac{2}{\ga} = \frac{2m}{m-1}$~\cite{Ambjorn:1990wp}. Thus $D_H=4$ for ordinary branched polymers and $D_H\rightarrow 2$ for $m\rightarrow\infty$. \section{Matrix models} \label{sec:mm} Major advances in the theory of two-dimensional quantum gravity were made when it was realized that the theory is integrable in the discrete formulation, even for some types of matter coupled to the surfaces~\cite{Ambjorn:1985az,Boulatov:1987sb,David:1985tx,David:1985nj,David:1985et,Kazakov:1985ea,Kazakov:1986hu}. In the development of these results the formulation of dynamical triangulation in terms of matrix models has been of some importance. The Hermitean one-matrix model has a direct interpretation in terms of randomly triangulated surfaces, as is demonstrated at the beginning of this section. Also the Hartle-Hawking wavefunctionals have a direct natural formulation via matrix models. The Dyson-Schwinger equations for these models allow the treatment of the theory by methods of complex analysis. These so-called loop equations can alternatively be derived using only combinatorial methods~\cite{Ambjorn1997}. The explicit solution of the loop equations provides deep insight into the properties of the theory and allows the computation of the scaling limit of the Hartle-Hawking wavefunctionals. We cannot review the vast literature on numerical simulations in quantum gravity in this work. 
However, let us remark that all numerical simulations of two-dimensional dynamical triangulation are consistent with the scaling hypotheses, with universality, and -- most remarkably -- with the continuum theory of two-dimensional quantum gravity. \subsection{Dynamical triangulation by matrix models} The discretized partition function of two-dimensional quantum gravity can be represented as an integral over a Hermitean $N\times N$-matrix $\phi$. Consider the Gaussian integral \begin{equation} \label{mm1} \int\! d\phi\ e^{-\frac{1}{2}\tr\phi^2}\frac{1}{K!}\left(\frac{1}{3}\tr\phi^3\right)^K, \end{equation} where the measure $d\phi$ is defined as \begin{equation} \label{mm2} d\phi = \prod_{i\leq j} d\Re \phi_{ij} \prod_{i<j} d\Im \phi_{ij}. \end{equation} The propagator of $\phi$ is given by \begin{equation} \label{mm3} \bra\phi_{ij}\phi_{i'j'}\ket \equiv \frac{1}{Z(N,0)} \int\! d\phi\ e^{-\frac{1}{2}\tr\phi^2} \phi_{ij}\phi_{i'j'} = \de_{ij'}\de_{ji'}, \end{equation} with $Z(N,0) = \int\! d\phi\ e^{-\frac{1}{2}\tr\phi^2}$. Diagrammatically, the propagator can be represented as a double line where the two lines are oriented in opposite directions. The integral~\eref{mm1} is evaluated by carrying out all possible Wick contractions of the $K$ $\tr\phi^3$ vertices. In the dual picture this corresponds to gluing $K$ triangles together to form all possible closed, not necessarily connected surfaces of arbitrary topology. The contribution from a particular graph forming a closed surface is $N^{N_v}$, where $N_v$ is the number of vertices of the resulting triangulation, since each vertex contributes a factor $N$. If we make the substitution \begin{equation} \label{mm4} \tr\phi^3 \rightarrow \frac{g}{\sqrt{N}}\tr\phi^3, \end{equation} each closed surface with Euler characteristic $\chi$ contributes $g^KN^{N_v-K/2}=g^KN^{\chi}$, since a closed triangulation built from $K$ triangles has $N_l=3K/2$ links and therefore $\chi = N_v - N_l + K = N_v - K/2$. We get the sum over arbitrary closed surfaces with any number of triangles if we sum~\eref{mm1} over $K$: \begin{equation} \label{mm5} Z(N,g) = \int\! d\phi\ e^{-\frac{1}{2}\tr\phi^2 + \frac{g}{3\sqrt{N}}\tr\phi^3}. \end{equation} Taking the logarithm projects onto the connected surfaces only. Thus we see that \begin{equation} \label{mm6} Z(\mu,G) = \log{\frac{Z(N,g)}{Z(N,0)}}, \end{equation} with \begin{equation} \label{mm7} \frac{1}{G} = \log N, ~\text{and $\mu = -\log g$} \end{equation} is a formal definition of the discretized partition function of two-dimensional quantum gravity, including the sum over topologies. Here $G$ is the gravitational coupling constant. The matrix integral~\eref{mm5} is of course not convergent, but it has been suggested that a closed form like~\eref{mm5} might provide a non-perturbative definition of the sum over topologies after analytic continuation. In general, however, the integral will be complex~\cite{Ambjorn:1991pt,Ambjorn:1992km} and the problem of summing over topologies has not yet been solved. Equation~\eref{mm5} admits a $1/N^2$-expansion which is identical to an expansion over topologies. Here we will restrict ourselves to spherical topology, which corresponds to the leading order in $1/N^2$. It is convenient to generalize the matrix integral~\eref{mm5} to \begin{eqnarray} \label{mm8} Z(N,g_i) &=& \int\! d\phi\ e^{-N\tr V(\phi)},\\ V(\phi) &=& \sum_{n=1}^{\infty}\frac{g_n}{n}\phi^n, \end{eqnarray} where we have performed the rescaling $\phi\rightarrow \sqrt{N}\phi$. Equation~\eref{mm8} is interpreted as a perturbative expansion around a Gaussian integral. Thus the Gaussian coupling constant is positive, while the other coupling constants are chosen negative. 
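The identification of the $1/N^2$-expansion with the genus expansion follows from the standard 't~Hooft power counting, which we sketch here as a consistency check. In the rescaled variables the cubic model~\eref{mm5} has the action $N\left(\frac{1}{2}\tr\phi^2-\frac{g}{3}\tr\phi^3\right)$, so each propagator carries a factor $1/N$, each vertex a factor $N$, and each closed index loop a factor $N$. A connected fat graph with $K$ cubic vertices has $3K/2$ propagators and $N_v$ index loops and is therefore weighted by \[ g^K N^{K-\frac{3K}{2}+N_v} = g^K N^{\chi} = g^K N^{2-2h}, \] that means the expansion in $1/N^2$ orders the surfaces by their genus $h$. 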
The expectation value of some observable $\cO(\phi)$ is defined as \begin{equation} \label{mm8a} \bra\cO(\phi)\ket = \frac{1}{Z(N,g_i)} \int\! d\phi\ e^{-N\tr V(\phi)} \cO(\phi). \end{equation} Differentiating the logarithm of~\eref{mm8} with respect to the coupling constants $g_n$ defines expectation values of observables like $\tr \phi^{k_1} \cdots \tr\phi^{k_b}$. They are interpreted as the sum over all surfaces which have $b$ polygons with $k_i$ links as their boundaries. The generating function of the connected expectation values of these observables is given by \begin{eqnarray} \label{mm9} W(z_1,\ldots,z_b) &=& N^{b-2} \sum_{k_1,\ldots,k_b=0}^{\infty} \frac{\bra\tr\phi^{k_1}\cdots\tr\phi^{k_b}\ket_{\text{conn}}}{z_1^{k_1+1}\cdots z_b^{k_b+1}}\nonumber\\ &=& N^{b-2}\bra\tr\frac{1}{z_1-\phi}\cdots\tr\frac{1}{z_b-\phi}\ket_{\text{conn}}. \end{eqnarray} In the large-$N$ limit this is the generating function for the discretized Hartle-Hawking wavefunctionals, whose continuum counterparts have been defined in section~\sref{hh}. In fact, already the one-loop correlator $W(z)$ contains all necessary information, since the higher correlators can be computed from it by differentiation: \begin{equation} \label{mm10} W(z_1,\ldots,z_b) = \frac{d}{dV(z_b)}\frac{d}{dV(z_{b-1})}\cdots\frac{d}{dV(z_2)} W(z_1), \end{equation} where the loop insertion operator $\frac{d}{dV(z)}$ is defined as \begin{equation} \label{mm11} \frac{d}{dV(z)} \equiv -\sum_{n=1}^{\infty} \frac{n}{z^{n+1}} \frac{\partial}{\partial g_n}. \end{equation} Let us define the density $\rho(\la)$ of eigenvalues of the matrix integral~\eref{mm8} by \begin{equation} \label{mm12} \rho(\la) = \frac{1}{N}\bra \sum_{i=1}^N\de(\la_i-\la)\ket, \end{equation} where the $\la_i$ are the $N$ eigenvalues of the matrix $\phi$; the factor $1/N$ normalizes $\rho(\la)$ to unity, consistent with $W(z)=\frac{1}{z}+O(z^{-2})$. Then the one-loop correlator $W(z)$ can be written as: \begin{equation} \label{mm13} W(z) = \int_{-\infty}^{\infty}\! d\la\ \frac{\rho(\la)}{z-\la}. \end{equation} For polynomial potentials the support of $\rho(\la)$ is confined to one or several intervals on the real axis in the limit $N\rightarrow\infty$. The solution in the case of several cuts has been given recently~\cite{Akemann:1996zr}. Here we will only deal with a single cut $[\afr,\bfr]$ on the real axis. Then $W(z)$ will be analytic in the complex plane with a cut at the support $[\afr,\bfr]$ of $\rho(\la)$. It follows that \begin{equation} \label{mm14} 2\pi i\ \rho(\la) = \lim_{\eps\rightarrow 0} \left( W(\la-i\eps) - W(\la+i\eps)\right). \end{equation} \subsection{The loop equations} The matrix model~\eref{mm8} can be solved by many methods; the most convenient and systematic one uses the loop equations, the Dyson-Schwinger equations of the matrix model. Let us consider the transformation \begin{equation} \label{le1} \phi \rightarrow \phi + \eps\frac{1}{z-\phi}, \end{equation} which is well defined if $z$ is real and not an eigenvalue of $\phi$, so that the new matrix remains Hermitean. Then the measure and the action transform as \begin{eqnarray} \label{le2} d\phi &\rightarrow& \left(1 + \eps \tr\frac{1}{z-\phi}\tr\frac{1}{z-\phi}\right)d\phi,\\ \tr V(\phi) &\rightarrow& \tr V(\phi) + \eps \tr\frac{V'(\phi)}{z-\phi}, \end{eqnarray} to first order in $\eps$. Since the matrix integral~\eref{mm8} is invariant under such a redefinition of the integration variable we get: \begin{equation} \label{le3} \int\! d\phi\ e^{-N\tr V(\phi)}\ \left\lbrace \left(\tr\frac{1}{z-\phi}\right)^2 - N\tr\frac{V'(\phi)}{z-\phi}\right\rbrace = 0. 
\end{equation} The second term can be rewritten as a contour integral involving the one-loop correlator by using the eigenvalue density $\rho(\la)$, while the first term is related to the two-loop correlator. This leads to the standard form of the loop equations~\cite{David:1990ge}: \begin{equation} \label{le4} \oint_{\cC}\! \frac{d\om}{2\pi i}\ \frac{V'(\om)}{z-\om}W(\om) = W(z)^2 + \frac{1}{N^2} W(z,z). \end{equation} The contour $\cC$ goes around the cut $[\afr,\bfr]$ but does not enclose $z$. The loop equation can be solved systematically to all orders in $1/N^2$, that means for all genera of the random surfaces~\cite{Ambjorn:1992jf,Ambjorn:1993gw}. For spherical topology, that means in the limit $N\rightarrow\infty$,~\eref{le4} simplifies to \begin{equation} \label{le5} \oint_{\cC}\! \frac{d\om}{2\pi i}\ \frac{V'(\om)}{z-\om}W_0(\om) = W_0(z)^2, \end{equation} where the subscript $0$ denotes the genus of the surfaces. From the definition we know that $W_0(z) = \frac{1}{z} + O(z^{-2})$. Therefore~\eref{le5} can be solved by a deformation of the contour $\cC$ to infinity. For a polynomial action $V$ of degree $n$ we get: \begin{eqnarray} \label{le8} W_0(z) &=& \frac{1}{2}\left(V'(z) - \sqrt{(z-\afr)(z-\bfr)}\sum_{k=1}^{\infty}M_k[\afr,\bfr,g_i] (z-\bfr)^{k-1}\right)\nonumber \\ &=& \frac{1}{2}\oint_{\cC}\! \frac{d\om}{2\pi i}\ \frac{V'(\om)}{z-\om}\frac{\sqrt{(z-\afr)(z-\bfr)}}{\sqrt{(\om-\afr)(\om-\bfr)}}, \end{eqnarray} where the moments $M_k$ are defined as \begin{equation} \label{le9} M_k[\afr,\bfr,g_i] = \oint_{\cC}\! \frac{d\om}{2\pi i}\ \frac{V'(\om)}{(\om-\afr)^{\frac{1}{2}}(\om-\bfr)^{k+\frac{1}{2}}}, \end{equation} and vanish for $k>n-1$. The endpoints $\afr$ and $\bfr$ of the cut are self-consistently determined by the equations \begin{eqnarray} \label{le10} M_{-1}[\afr,\bfr,g_i] &=& 2,\\ M_0[\afr,\bfr,g_i] &=& 0, \end{eqnarray} which are a consequence of~\eref{le8} together with $W_0(z) = \frac{1}{z} + O(z^{-2})$. As a simple check, for the Gaussian potential $V(\phi)=\frac{1}{2}\phi^2$ the conditions~\eref{le10} give $\afr=-2$ and $\bfr=2$, the only nonvanishing moment is $M_1=1$, and \eref{le8} reduces to $W_0(z)=\frac{1}{2}\left(z-\sqrt{z^2-4}\right)$, that means, via~\eref{mm14}, to Wigner's semicircle law $\rho(\la)=\frac{1}{2\pi}\sqrt{4-\la^2}$. For the matrix model~\eref{mm5} which directly corresponds to triangulated surfaces, the solution for $W_0(z)$ is: \begin{eqnarray} \label{le10a} W_0(z) &=& \frac{1}{2}V'(z) + f(\mu,z),\nonumber \\ f(\mu,z) &=& \frac{g}{2}(z-\cfr)\sqrt{(z-\afr)(z-\bfr)}, \end{eqnarray} where $g=e^{-\mu}$ and $\cfr = \frac{1}{g}-\frac{\afr+\bfr}{2}$. In principle one could compute all higher loop correlators from~\eref{le8} by applying the loop insertion operator~\eref{mm11}, which gives a complete solution of two-dimensional dynamical triangulation for spherical topology. In fact, a closed expression for the $b$-loop correlators can be obtained if the small scale details of the theory are adjusted. Instead of the Hermitean matrix model one considers the complex matrix model with a potential \begin{equation} \label{le11} \tr V(\phi^+\phi) = \sum_{n=1}^{\infty}\frac{g_n}{n}\tr (\phi^+\phi)^n, \end{equation} and an integration measure \begin{equation} \label{le12} d\phi = \prod_{i,j=1}^N d\Re \phi_{ij}d\Im \phi_{ij}. \end{equation} The one-loop correlator is now defined as \begin{equation} \label{le13} W(z) = \frac{1}{N} \sum_{n=0}^{\infty}\frac{\bra\tr (\phi^+\phi)^n\ket}{z^{2n+1}}, \end{equation} and the higher loop correlators are defined analogously. The term $\tr(\phi^+\phi)^n$ can be interpreted as a $2n$-sided polygon whose links are alternately coloured black and white. These polygons can be glued together as in the Hermitean matrix model, with the additional constraint that black links have to be glued to white links~\cite{Morris:1991cq}. 
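The Gaussian check above is also easy to verify numerically. The following sketch (an illustration added for concreteness, not part of the original derivation; it assumes NumPy) samples the ensemble with weight $e^{-\frac{N}{2}\tr\phi^2}$ exactly, which avoids any Monte Carlo dynamics, and compares the eigenvalue density and the one-loop correlator with the semicircle formulas: \begin{verbatim}
import numpy as np

# Sample a Hermitean matrix with weight exp(-(N/2) tr phi^2):
# diagonal entries have variance 1/N, off-diagonal complex
# entries have total variance 1/N.
N = 1000
rng = np.random.default_rng(1)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
phi = (A + A.conj().T) / (2 * np.sqrt(N))
lam = np.linalg.eigvalsh(phi)

# Histogram of eigenvalues vs rho(lambda) = sqrt(4-lambda^2)/(2 pi):
hist, edges = np.histogram(lam, bins=40, range=(-2.0, 2.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
rho = np.sqrt(np.clip(4.0 - mid**2, 0.0, None)) / (2.0 * np.pi)
print("max deviation from semicircle:", np.abs(hist - rho).max())

# One-loop correlator W_0(z) = (z - sqrt(z^2-4))/2 off the cut:
z = 3.0
print(np.mean(1.0 / (z - lam)), 0.5 * (z - np.sqrt(z**2 - 4.0)))
\end{verbatim} Both printed pairs agree up to finite-$N$ fluctuations of order $1/N$. 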
Returning to the complex matrix model: such short distance details of the gluing should be unimportant in the continuum limit. The loop equations for this model have been derived in~\cite{Ambjorn:1990ji}. Because of the symmetry $\phi\rightarrow -\phi$ we have $\afr=-\bfr$ for the cut of the one-loop correlator. The solution for spherical topology is given by~\cite{Ambjorn:1990ji}: \begin{eqnarray} \label{le14} W_0(z) &=& \frac{1}{2} \left(V'(z) - M(z) \sqrt{z^2-\bfr^2}\right),\nonumber \\ M(z) &=& \oint_{\cC_{\infty}}\! \frac{d\om}{4\pi i}\ \frac{\om V'(\om)}{(\om^2-z^2)\sqrt{\om^2-\bfr^2}} = \sum_{k=1}^{\infty} M_k[\bfr,g_i](z^2-\bfr^2)^{k-1},\nonumber\\ M_k[\bfr,g_i] &=& \oint_{\cC}\! \frac{d\om}{4\pi i}\ \frac{\om V'(\om)}{(\om^2-\bfr^2)^{k+\frac{1}{2}}}. \end{eqnarray} Here $\cC_{\infty}$ is a contour around the cut pushed to infinity. The position of the cut is determined by the equation \begin{equation} \label{le14a} M_0[\bfr,g_i]=2. \end{equation} The higher loop correlators are given by the following expressions~\cite{Ambjorn:1990ji}: \begin{eqnarray} \label{le15} W_0(z_1,z_2) &=& \frac{1}{4(z_1^2-z_2^2)^2}\left(z_2^2\sqrt{\frac{z_1^2-\bfr^2}{z_2^2-\bfr^2}} + z_1^2\sqrt{\frac{z_2^2-\bfr^2}{z_1^2-\bfr^2}} -2z_1z_2\right),\nonumber \\ W_0(z_1,z_2,z_3) &=& \frac{\bfr^4}{16M_1}\frac{1}{\sqrt{(z_1^2-\bfr^2)(z_2^2-\bfr^2)(z_3^2-\bfr^2)}},\nonumber \\ W_0(z_1,\ldots,z_b) &=& \left(\frac{1}{M_1}\frac{d}{d\bfr^2}\right)^{b-3} \frac{1}{2\bfr^2M_1} \prod_{k=1}^b\frac{\bfr^2}{2(z_k^2-\bfr^2)^{3/2}}. \end{eqnarray} In these formulas all dependence on the coupling constants is hidden in $M_1$ and $\bfr$. Similar and only slightly more complicated statements hold for the Hermitean matrix model. \subsection{Scaling limit and computation of $\ga$} It has been discussed in section~\sref{tpf} that the ensemble of triangulated random surfaces has a critical point $\mu_c$. The continuum limit with a renormalized cosmological constant $\La$ is approached by $\mu\rightarrow\mu_c$ with $\mu-\mu_c=\La a^2$, where $a$ is the lattice spacing. In terms of the coupling constant $g$ for the model~\eref{mm5} this relation is $g_c-g\sim g_c\La a^2$, close to the critical point. For the generalized matrix model~\eref{mm8} with $n$ coupling constants we expect no qualitative changes except that the theory will be critical on an $(n-1)$-dimensional hypersurface. This hypersurface is identified by \begin{equation} \label{le16} M_1[\bfr(g_i),g_i] = 0, \end{equation} since the $b$-loop correlators $W_0(z_1,\ldots,z_b)$ in~\eref{le15} diverge precisely when $M_1=0$. Let us denote a point in the critical hypersurface by $g_{c,i},\ i=1,\ldots,n$ and the corresponding endpoint of the eigenvalue distribution by $\bfr_c$. If we move slightly away from the critical surface \begin{equation} \label{le17} g_{i} = g_{c,i} (1-\La a^2)=g_{c,i}+\de g_i, \end{equation} there will be a corresponding change $\bfr_c^2\rightarrow \bfr_c^2+\de \bfr^2$ which can be computed from~\eref{le14a} to be \begin{equation} \label{le18} (\de \bfr^2)^2 = -\frac{16}{3M_2[\bfr_c,g_{c,i}]}\La a^2 \sim \de g_i. \end{equation} We rescale the cosmological constant such that $\bfr^2 = \bfr_c^2 - a\sqrt{\La}$. Because $z_i$ appears in~\eref{le15} always in the combination $(z_i^2-\bfr^2)$ it is natural to introduce a scaling of $z_i$ by \begin{equation} \label{le19} z_i^2 = \bfr_c^2 + aZ_i. 
\end{equation} Since $W_0(z_1,\ldots,z_b)$ is the generating function for $b$-loop correlators, we can compute the transition amplitude for $b$ one-dimensional universes of lengths $n_1,\ldots,n_b$ from it by a multiple contour integration. The physical lengths of the loops will be $L_i = n_i a$. We see that in the continuum limit the number of links $n_i$ on the boundaries has to go to infinity as $a\rightarrow 0$ if the loops are to survive. By following this procedure one gets the generating functional for macroscopic $b$-loop amplitudes~\cite{Ambjorn:1990ji}: \begin{eqnarray} \label{le20} W_0(z_1,\ldots,z_b) &\sim& a^{5-\frac{7b}{2}} W_0(\La,Z_1,\ldots,Z_b), \nonumber\\ W_0(\La,Z_1,\ldots,Z_b) &=& \left(-\frac{d}{d\La}\right)^{b-3} \frac{1}{\sqrt{\La}} \prod_{k=1}^b (Z_k+\sqrt{\La})^{-\frac{3}{2}}, \end{eqnarray} for $b\geq 3$. The inverse Laplace transform of $W_0(\La,Z_1,\ldots,Z_b)$ in the variables $Z_i$ gives the Hartle-Hawking wavefunctionals of two-dimensional quantum gravity. It can be computed from~\eref{le20} to be: \begin{equation} \label{le21} W_0(\La,L_1,\ldots,L_b) = \left(-\frac{d}{d\La}\right)^{b-3} \frac{1}{\sqrt{\La}} \sqrt{L_1\cdots L_b}\ e^{-\sqrt{\La}(L_1 + \ldots + L_b)}. \end{equation} Since the $b$-point function in two-dimensional quantum gravity should scale as $\La^{2-b-\ga}$, we can directly read off from this expression that \begin{equation} \label{le22} \ga = -\frac{1}{2} \end{equation} for pure gravity, in agreement with the KPZ-formula~\eref{e3d}. These calculations can be generalized to arbitrary topology~\cite{David:1990ge,Ambjorn:1992jf,Ambjorn:1992xu,Ambjorn:1993gw} and to models with matter coupled to gravity~\cite{Kazakov:1989bc,Ambjorn:1990wg,Ambjorn1997} by considering multicritical models, which are obtained by a fine-tuning of the critical coupling constants such that higher moments vanish in addition to $M_1$. It is a major result that all values of the susceptibility exponent $\ga$ computed by these or other methods in the model of dynamically triangulated quantum gravity agree with the continuum formula~\eref{e3d} which has been derived in section~\sref{gd}. In fact, all calculations which can be done by dynamical triangulation and in the continuum approach have so far yielded the same results. Therefore we identify both theories as the theory of two-dimensional quantum gravity. Objects like the Hartle-Hawking wavefunctionals and other correlation functions can be obtained much more easily in the discretized approach, whose scaling limit yields the theory of two-dimensional quantum gravity. \section{The fractal structure of pure gravity} \label{sec:fs} Another major advance in two-dimensional quantum gravity was made by the explicit computation of the two-point function for pure quantum gravity by constructing a transfer matrix \cite{Kawai:1993cj} and later by using a peeling decomposition \cite{Watabiki:1995ym}. The two-point function is a natural object since it provides all details about the scaling properties of two-dimensional quantum gravity. The astonishing result of these investigations is that the intrinsic Hausdorff dimension of two-dimensional quantum gravity is {\em four}. That means that even the dimensionality of spacetime is a dynamical quantity. \subsection{The geodesic two-loop function} Let us denote by $\cT(l_1,l_2,r)$ the class of triangulations with an entrance loop $l_1$ with one marked link and an exit loop $l_2$, separated a geodesic distance $r$. We will also denote the number of links of $l_1$ and $l_2$ by the same symbols. 
Then the geodesic two-loop function is defined as \begin{equation} \label{tl1} G(\mu,r;l_1,l_2) = \sum_{T\in \cT(l_1,l_2,r)} e^{-\mu N_t}. \end{equation} For $r=0$ we have the initial condition: \begin{equation} \label{tl1a} G(\mu,0;l_1,l_2) = \de_{l_1,l_2}. \end{equation} We introduce the generating function for $G(\mu,r;l_1,l_2)$ by \begin{equation} \label{tl3} G(\mu,r;z_1,z_2) = \sum_{l_1,l_2=1}^{\infty} z_1^{-(l_1+1)}z_2^{-(l_2+1)} G(\mu,r;l_1,l_2), \end{equation} with the initial condition \begin{equation} \label{tl4} G(\mu,0;z_1,z_2) = \frac{1}{z_1z_2}\frac{1}{z_1z_2-1}. \end{equation} By a two-fold contour integration the geodesic two-loop function can be reconstructed: \begin{equation} \label{tl5} G(\mu,r;l_1,l_2) = \oint_{\cC_1}\! \frac{dz_1}{2\pi i}\ z_1^{l_1} \oint_{\cC_2}\! \frac{dz_2}{2\pi i}\ z_2^{l_2}\ G(\mu,r;z_1,z_2). \end{equation} It is an important observation that the two-point function $G(\mu,r)$ can be obtained from the geodesic two-loop function. Consider the two-loop function with an entrance loop of length $l_1=1$, which is equivalent to a marked link, and an exit loop of arbitrary length $l_2$ separated a geodesic distance $r$. We can close this surface by gluing a disc to the boundary $l_2$. The amplitude $W_0(l_2)$ of the disc can be computed from the one-loop correlator $W_0(z)$ by a contour integration. An additional factor of $l_2$ arises because we have to mark one of the $l_2$ links of the exit loop. Thus we get: \begin{eqnarray} \label{lt6} G(\mu,r) &=& \sum_{l_2=1}^{\infty}G(\mu,r;1,l_2)\ l_2 W_0(l_2) \nonumber\\ &=& \oint_{\cC} \! \frac{d\om}{2\pi i}\frac{1}{\om} \left[z^2 G(\mu,r;z,\frac{1}{\om})\right] \left[-\frac{\partial}{\partial\om}\om W_0(\om)\right] \Big\vert_{z=\infty}, \end{eqnarray} where $\cC$ is a contour around zero. Thus we see that all information about the scaling of the theory can be obtained if the generating function of geodesic two-loop amplitudes can be computed. By a step-by-step decomposition of triangulations in $\cT(l_1,l_2,r)$ a differential equation for $G(\mu,r;z,\om)$ can be obtained. It is intuitive that any triangulation in $\cT(l_1,l_2,r)$ can be decomposed into $r$ rings of thickness one \cite{Kawai:1993cj}. This leads to a transfer matrix formalism for the two-point function. Alternatively, one can decompose the triangulations by a peeling procedure~\cite{Watabiki:1995ym}. The differential equation one obtains is not exact but should be valid close to the critical point of the theory: \begin{equation} \label{lt7} \frac{\partial}{\partial r} G(\mu,r;z,\om) = -2 \frac{\partial}{\partial z} \left[f(\mu,z)G(\mu,r;z,\om)\right], \end{equation} where $f(\mu,z)$ is given by~\eref{le10a} for triangulated surfaces. \subsection{Scaling of the two-point function} The differential equation~\eref{lt7} can be solved by the method of characteristic equations. The result is: \begin{equation} \label{lt8} G(\mu,r;z,\om) = \frac{f(\mu,\hat{z})}{f(\mu,z)} \frac{\hat{z}\om}{\hat{z}\om-1}, \end{equation} where $\hat{z}$ is the solution to the characteristic equation \begin{equation} \label{lt9} \frac{d\hat{z}(z,r)}{dr} = 2 f(\mu,\hat{z}), \end{equation} given by \begin{equation} \label{lt10} \frac{1}{\hat{z}(z,r)} = \frac{1}{\cfr} - \frac{\de_1}{\cfr} \frac{1}{\sinh^2\left(-\de_0r+\sinh^{-1}\sqrt{\frac{\de_1}{1-\cfr/z}-\de_2}\right)+\de_2}. 
\end{equation} Here the positive constants $\de_i$ scale as \begin{eqnarray} \label{lt11} \de_0 &=& \frac{g}{2}\sqrt{(\cfr-\afr)(\cfr-\bfr)} = O((\mu-\mu_c)^{\frac{1}{4}}),\\ \de_1 &=& \frac{(\cfr-\afr)(\cfr-\bfr)}{\cfr(\bfr-\afr)} = O((\mu-\mu_c)^{\frac{1}{2}}),\\ \de_2 &=& -\frac{\afr(\cfr-\bfr)}{\cfr(\bfr-\afr)} = O((\mu-\mu_c)^{\frac{1}{2}}). \end{eqnarray} Now the two-point function $G(\mu,r)$ can be computed by inserting~\eref{lt8} into \eref{lt6}. Close to the critical point one obtains~\cite{Ambjorn:1995dg}: \begin{equation} \label{lt12} G(\mu,r) = \text{const}\times \de_0\de_1\frac{\cosh(\de_0 r)}{\sinh^3(\de_0 r)}(1+O(\de_0)). \end{equation} From~\eref{lt11} we get: \begin{equation} \label{lt13} G(\mu,r) = \text{const}\times (\mu-\mu_c)^{\frac{3}{4}} \frac{\cosh \left(c(\mu-\mu_c)^{\frac{1}{4}}r\right)}{ \sinh^3\left(c(\mu-\mu_c)^{\frac{1}{4}}r\right)}, \end{equation} where $c=\sqrt{6}e^{-\mu_c}$ is a nonuniversal constant. To read off the critical behaviour of the theory we only have to analyze the asymptotic behaviour of the two-point function for large and for small distances. We see that: \begin{itemize} \item $G(\mu,r)$ falls off exponentially as $e^{-2c(\mu-\mu_c)^{\frac{1}{4}}r}$, for $r\rightarrow\infty$. Thus the exponent $\nu$ defined in~\eref{di17a} equals $\frac{1}{4}$ and the intrinsic Hausdorff dimension $d_H$ of pure two-dimensional quantum gravity equals {\em four}. \item For $1\ll r\ll (\mu-\mu_c)^{-\frac{1}{4}}$ the two-point function behaves like $r^{-3}$, that means the anomalous scaling dimension is $\eta=4$. For ordinary statistical systems $\eta$ is smaller than two. It is remarkable that the critical exponents of the theory still satisfy Fisher's scaling relation $\ga = \nu(2-\eta)$. \item Any geodesic two-loop function $G(\mu,r;l_1,l_2)$, which can be computed from~\eref{tl5} and~\eref{lt8}, fulfills the same scaling relations as the two-point function, provided that $l_1$ and $l_2$ stay finite in the limit $\mu\rightarrow\mu_c$. \item The continuum limit of the two-point function is given by \begin{equation} \label{lt14} G(\La,R) = C\La^{\frac{3}{4}} \frac{\cosh\left(c\La^{\frac{1}{4}}R\right)}{ \sinh^3\left(c\La^{\frac{1}{4}}R\right)}, \end{equation} where $C$ is a constant. \end{itemize} \section{Conclusion} In this chapter we have demonstrated how to use discretized systems to describe two-dimensional quantum gravity. In two dimensions, the continuum theory of quantum gravity and the theory of dynamical triangulations agree with each other in the region where the continuum theory can be evaluated, that means for matter with central charge $D\leq 1$ coupled to gravity. The continuum limit of dynamical triangulations in two dimensions can therefore be identified with the continuum theory. In fact, the discretized approach is more powerful than the continuum approach in the sense that natural observables like the Hartle-Hawking wavefunctionals can be computed. Also the two-point function for pure quantum gravity can be obtained. This object is of central interest when questions about the scaling of the theory are to be addressed. On the technical side we have reviewed matrix models, which have become objects of interest in their own right, beyond dynamical triangulation, in string theory and in condensed matter theory. 
Since there is at present no successful way to find a continuum theory of quantum gravity in dimensions higher than two, it is natural to ask whether such a theory could be defined as the scaling limit of the corresponding higher dimensional discretized theory. It turns out that dynamical triangulation can be defined in three and four dimensions and that these theories have a phase transition similar to the two-dimensional case. While there has been some analytical progress in the understanding of entropy bounds on the number of triangulations in higher dimensions~\cite{Carfora:1997}, most work in this field is numerical. Questions about the nature of the phase transition in four dimensions are not settled yet and exciting research remains to be done. \chapter{The failure of quantum Regge calculus} \label{sec:regge} In the continuum path integral formulation of quantum gravity we are instructed to compute the integral \begin{equation} \label{in1} \int\!\cD[\gmn] e^{-S_{\text{EH}}(g)} \equiv \int\!\frac{\cD\gmn}{\text{vol(Diff)}} e^{-S_{\text{EH}}(g)} \end{equation} over diffeomorphism classes of metrics weighted with the exponential of the Einstein-Hilbert action, as has been discussed in chapter \sref{chap1}. Two different discretization schemes have been suggested. One, dynamical triangulation, provides a regularization of the functional integral~\eref{in1}: an explicit cutoff is introduced and one sums over all (abstract) triangulations with equilateral $d$-simplices of a $d$-dimensional manifold $M$, compare chapter \sref{disc}. A different attempt to discretize the functional integral~\eref{in1}, called quantum Regge calculus (QRC), has been suggested; see \cite{Hamber:1984tm,Williams:1992cd,Williams:1996jb} for reviews and references. This formalism is closely related to the classical coordinate independent Regge discretization of general relativity: one {\em fixes} a (suitably chosen) triangulation, while the dynamical degrees of freedom are the $L$ link lengths. Thus the functional integration in~\eref{in1} is replaced by \begin{equation} \label{in2} \int\! d\mu(l_1,\ldots,l_L)\equiv \int_0^{\infty}\! \prod_{i=1}^L dl_i\ J(l_1,\ldots,l_L)\ \de(\De), \end{equation} where $J(l_1,\ldots,l_L)$ is the Jacobian of the transformation $d\gmn(\xi)\rightarrow dl_i$. The integral in~\eref{in2} is over all link length assignments consistent with the triangle inequalities, as indicated by the delta function $\de(\De)$. While it provides a discretization of the integration in~\eref{in1}, this replacement does not provide a regularization of~\eref{in1}: contrary to dynamical triangulation, no cutoff has to be introduced~\cite{Hamber:1986gw}. One can of course choose to work with a cutoff which has to be taken to zero at the end of the computations; this will not make any difference in the argument below. The Jacobian in~\eref{in2} is very complicated and its form is presently unknown. For analytical and numerical investigations it is usually replaced by local measures of the form $\prod_{i=1}^L f(l_i)$. If a sensible continuum limit of this theory existed, it would be approached by taking the number $L$ of links in the triangulation to infinity at a critical point of the statistical ensemble defined by~\eref{in2} and by the discretized Einstein-Hilbert action. For this limit to make sense, the resulting continuum theory should not depend on details of the discretized theory like the chosen fixed triangulation or the local measure $f(l_i)$. 
One central objective of this chapter is to show analytically that such a continuum limit of quantum Regge calculus in its present formulation cannot be defined in any dimension $d>1$. Such a result has been indicated by simulations in two-dimensional quantum Regge calculus~\cite{Bock:1995mq,Holm:1995xr,Holm:1996fd}. These simulations revealed that the KPZ-exponents~\eref{e3d} for the susceptibility of conformal matter coupled to continuum quantum gravity could not be obtained. This has often been called the failure of quantum Regge calculus. Most discussion about this failure to reproduce continuum results in two dimensions has been centered around the choice of the measure. In the first part of this chapter some notation is introduced. In the second part we discuss the measures which have been suggested and used in the context of quantum Regge calculus. In the third part we show that in two dimensions all discussed measures for Regge calculus fail to reproduce continuum results; not even the concept of length can be defined~\cite{Ambjorn:1997ub}. In the conclusion we discuss the situation in higher dimensions and give a short review of an alternative approach, in which the continuum functional integral over metrics is restricted to piecewise linear metrics~\cite{Menotti:1996de}. This chapter is partly based on work presented in \cite{Ambjorn:1997ub}. \section{Formulation of quantum Regge calculus} Let us fix the connectivity of a triangulation with $V$ vertices\footnote{We denote the volume of manifolds with the same symbol $V$. No confusion should arise.}, $T$ triangles and $L$ links of lengths $l_1, \ldots, l_L$ of a two-dimensional closed manifold $M$ with Euler characteristic $\chi$. The number of links can also be expressed as $L=3V-3\chi$. We view the interior of the simplices as flat. As explained in section \sref{dg} the curvature of this piecewise linear space is concentrated at the vertices and each assignment of link lengths uniquely determines a metric on $M$. For a triangle this metric can be written as \begin{equation} \label{r0c} \gmn = \left( \begin{array}[c]{cc} x_1 & \frac{1}{2}(x_1+x_2-x_3) \\ \frac{1}{2}(x_1+x_2-x_3) & x_2 \end{array} \right), \end{equation} where the squared edge lengths are defined as $x_i=l_i^2,\ i=1, \ldots, L$. This is just the Gram matrix $\gmn = e_{\mu}\cdot e_{\nu}$ of the two edge vectors $e_1$, $e_2$ spanning the triangle, since the law of cosines gives $e_1\cdot e_2 = \frac{1}{2}(x_1+x_2-x_3)$. The area $A$ of a single triangle can be expressed as \begin{equation} \label{r0d} A = \int\! d^2\xi \sqrt{g(\xi)} = \frac{1}{2}\sqrt{g} = \frac{1}{2}\sqrt{x_1x_2-\frac{1}{4}(x_1+x_2-x_3)^2}. \end{equation} For an equilateral triangle, $x_1=x_2=x_3=l^2$, this reduces to the familiar $A=\frac{\sqrt{3}}{4}l^2$. Since in quantum Regge calculus the link lengths are the dynamical variables one attempts to replace the functional integration over equivalence classes of metrics by an integration over all link length assignments which correspond to different geometries. Clearly, this replacement \begin{equation} \label{r1} \int\! \cD[\gmn] = \int\! \frac{\cD \gmn}{\text{Vol(Diff)}} \rightarrow \int\!\prod_{i=1}^L dl_i\ J(l_1, \ldots, l_L) \de(\De) \equiv \int\! d\mu(l_1,\ldots,l_L) \end{equation} involves a highly non-trivial Jacobian $J(l_1,\ldots,l_L)$. The integration is over all link lengths compatible with the triangle inequalities. But not all assignments of link lengths define independent geometries, as can be seen in the flat case: all vertices can be moved around in the plane, changing the link lengths without changing the flat geometry. The Jacobian has to be such that this additional invariance is divided out of the integral. 
Thus, we obtain an $L=3V-3\chi$-dimensional subspace of the infinite dimensional space of equivalence classes of metrics. Quantum Regge calculus replaces the functional integration in~\eref{e1} by an integral over this finite dimensional subspace. It is hoped that in the limit $L\rightarrow\infty$ expectation values of observables converge in some suitable way to their continuum values. \section{Regge integration measures} Presently, the form of the Jacobian in~\eref{r1} is not known. However, by appealing to universality one might assume that the choice of this measure is not very important. A large number of measures have been suggested and tried out in numerical experiments to study whether quantum Regge calculus in its present form agrees with the results of continuum calculations and dynamical triangulations in two dimensions. The result of these investigations is negative. In this section we will define the most important and common of these measures and study some of their properties. \subsection{DeWitt-like measures} One possible way to construct a measure for quantum Regge calculus in terms of the link lengths is to repeat the construction in section \sref{fm}, restricted to those deformations of the metric which are allowed by quantum Regge calculus. For an alternative discussion see~\cite{Williams:1996jb}, where the corresponding metric is called the Lund-Regge metric, referring to unpublished work by Lund and Regge. However, since the starting point of our discussion is the DeWitt metric, we rather call the measure a DeWitt-like measure. Note however that it is not the DeWitt measure restricted to piecewise linear metrics; the latter will be given in section \sref{menotti}. For a single triangle the variation of the metric~\eref{r0c} in terms of the $\de x_i$ is given by \begin{equation} \label{dw1} \de\gmn = \left( \begin{array}[c]{cc} \de x_1 & \frac{1}{2}(\de x_1+\de x_2-\de x_3) \\ \frac{1}{2}(\de x_1+\de x_2-\de x_3) & \de x_2 \end{array} \right). \end{equation} Using this and the DeWitt metric, the scalar product $\langle\de g,\de g\rangle_T$, which has been defined in section \sref{fm}, can be computed for a single triangle with area $A$. It turns out that it simplifies considerably when we use the canonical value $-2$ for the parameter $C$ in the DeWitt metric: \begin{eqnarray} \label{dw2} \langle\de g,\de g\rangle_T &=& \int\! d^2\xi \sqrt{g(\xi)}\ \de\gmn(\xi) \GMNAB \de\gab(\xi)\nonumber\\ &=& \frac{A}{2}\left(2\de\gmn g^{\mu\al} g^{\nu\be} \de\gab + C \de\gmn\gMN\gAB\de\gab\right)\nonumber\\ &=& -2A\det(\de\gmn) \det(\gMN),~\text{for $C=-2$}. \end{eqnarray} Using~\eref{r0c} and~\eref{dw1} leads to \begin{eqnarray} \label{dw3} \langle\de g,\de g\rangle_T &=& [\,\delta x_1, \delta x_2, \delta x_3\,]\:\frac{1}{16A}\!\left[ \begin{array}{rrr} 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{array} \right] \left[ \begin{array}{c} \delta x_1 \\ \delta x_2 \\ \delta x_3 \end{array} \right]. \end{eqnarray} For a general two-dimensional triangulation this line element is given by the sum of~\eref{dw3} over all triangles: \begin{eqnarray} \label{dw4} \langle\de g,\de g\rangle_T &=& \sum_t\int d^2\xi \sqrt{g^t(\xi)}\,\delta g^t_{\mu\nu}(\xi)(G^t)^{\mu\nu\alpha\beta}\delta g^t_{\alpha\beta}(\xi) \nonumber \\ &=& [\,\delta x_1,\ldots,\delta x_L\,]\:{M}\left[ \begin{array}{c} \delta x_1 \\ \vdots \\ \delta x_L \end{array} \right]. \end{eqnarray} Here ${M}$ is an $L\times L$ matrix, whose element ${M}_{ij}$ corresponds to the links $l_i$ and $l_j$. 
If both $l_i$ and $l_j$ belong to the same triangle $t$ with area $A_t$ the corresponding non-diagonal entry is \begin{equation} \label{dw5} {M}_{ij} = {M}_{ji} = -\frac{1}{A_t}. \end{equation} All other off-diagonal entries are zero. It follows that ${M}$ has four nonvanishing off-diagonal entries in each row. Diagonal entries are given by \begin{equation} \label{dw6} M_{ii} = \frac{1}{A_{t_1}} + \frac{1}{A_{t_2}}, \end{equation} where $t_1$ and $t_2$ are the two triangles which contain $l_i$. From the $i$'th row of ${M}$ the product $(A_{t_1}A_{t_2})^{-1}$ can be factorized. Since each triangle has three sides, this amounts to factorizing $\prod_{k=1}^T A_k^{-3}$ from the determinant of ${M}$: \begin{eqnarray} \label{dw7} \det M &=& \prod_{k=1}^T A_k^{-3} \left\vert \begin{array}{ccccccc} A_1+A_2 & -A_2 & -A_2 & -A_1 & -A_1 & 0 & \ldots \\ \mc{7}{c}{\ldots} \end{array} \right\vert \nonumber \\ &=:& P(A_1,\ldots,A_T) \prod_{k=1}^T A_k^{-3}. \end{eqnarray} $P(A_1 , \ldots ,A_T)$ is a polynomial in the areas of the triangles which vanishes whenever two adjacent triangle areas vanish. It follows directly that $P$ is a highly nonlocal function of the areas, since each monomial of $P$ must contain at least half of the triangles. Therefore the DeWitt-like integration measure for quantum Regge calculus is given by \begin{equation} \label{dw8} d\mu(l_1,\ldots,l_L) = \text{const}\times \frac{\sqrt{P(A_1 , \ldots ,A_T)}}{\prod_{k=1}^T A_k^{3/2}} \prod_{j=1}^L l_jdl_j\ \de(\De). \end{equation} At first glance there are similarities with the continuum measure \eref{meas1.15}. Using $g(\xi) \sim A^2$ we see that the power of the ``local areas'' in both measures equals $-3/2$. However, reparametrization invariance fixes the continuum measure completely, while the discretized version~\eref{dw3} might be multiplied by the diffeomorphism invariant area factor $A^{1-2\be/3}$, leading to the measure \begin{equation} \label{dw9} d\mu(l_1,\ldots,l_L) = \text{const}\times \frac{\sqrt{P(A_1^{\frac{2\be}{3}} , \ldots ,A_T^{\frac{2\be}{3}})}}{\prod_{k=1}^T A_k^{\be}} \prod_{j=1}^L l_jdl_j\ \de(\De). \end{equation} This generalization stresses the fact that the power of the areas is not fixed in quantum Regge calculus by any obvious principle, since the areas are diffeomorphism invariant quantities with no direct local interpretation in the continuum. The measure~\eref{dw9} is highly nonlocal and thus not suited for numerical simulations. Below we will show that it does not reproduce continuum physics. \subsection{The DeWitt-like measure in other dimensions than two} \label{sec:dwone} We can perform the derivation of the DeWitt-like measure in one and in higher dimensions. A one-dimensional piecewise linear manifold consists of $L$ links of lengths $l_i~(x_i=l_i^2),\ i=1,\ldots,L$, which are glued together at their ends. The canonical metric is thus given by $\gmn^i = (x_i)$. Therefore the DeWitt metric gives: \begin{equation} \label{od1} \langle\de g,\de g\rangle_T = (1+\frac{C}{2})\sum_{i=1}^L x_i^{-\frac{3}{2}} (\de x_i)^2. \end{equation} Thus the DeWitt-like measure in one dimension is \begin{equation} \label{od2} d\mu(l_1,\ldots,l_L) = \text{const}\times \prod_{i=1}^L \frac{dl_i}{l_i^{\frac{1}{2}}}. \end{equation} In this case the measure is local. The factor $(1+\frac{Cd}{2})$, here with $d=1$, factorizes as in the continuum. Note that in $d=1$ we cannot use the canonical value $C=-2$ since the DeWitt metric would be singular. 
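The step from~\eref{od1} to~\eref{od2} is a useful consistency check, sketched here for completeness: the quadratic form~\eref{od1} is diagonal in the $\de x_i$, so the associated volume element is, up to a constant, $\prod_{i=1}^L x_i^{-3/4}\,dx_i$, and substituting $x_i=l_i^2$, $dx_i=2l_i\,dl_i$, gives $\prod_{i=1}^L l_i^{-3/2}\, 2l_i\, dl_i \propto \prod_{i=1}^L l_i^{-1/2}\, dl_i$, that means~\eref{od2}. 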
In higher dimensions the situation is more complicated. For a $d$-simplex the measure is derived from the scalar product \begin{equation} \label{hd1} \langle\de g,\de g\rangle_T = \de x\, {M}_d\, \de x, \end{equation} where $\de x$ is the $\frac{d(d+1)}{2}$-dimensional vector associated with the link length deformations of the $d$-simplex. As in two dimensions, the complete norm for the triangulation can be written as a straightforward superposition of the matrices ${M}_d$. However, in dimensions larger than two, the entries of this matrix ${M}_d^{\text{total}}$ depend on the $x_i$'s. In general, the measure is given by \begin{equation} \label{hd1a} d\mu(l_1,\ldots,l_L) = \text{const}\times\sqrt{\det{{M}_d^{\text{total}}}} \prod_{i=1}^L l_i dl_i\ \de(\De), \end{equation} where $\de(\De)$ now stands for the generalized triangle inequalities. As an example we have computed the measure for the simplest closed three-dimensional manifold, which consists of two tetrahedra glued together along the six links. The result is \begin{equation} \label{hd2} d\mu(l_1,\ldots,l_6) = \text{const}\times \frac{1}{V} \prod_{i=1}^6 l_idl_i\ \de(\De), \end{equation} where $V$ is the $3$-volume of the manifold. Note that also in four dimensions, where the DeWitt measure is simply $\prod_{\xi\in M}\prod_{\mu\leq\nu}d\gmn$, the determinant of ${M}_d^{\text{total}}$ does not evaluate to a constant, not even for the simplest $4$-geometries. \subsection{The DeWitt-like measures for special geometries} \label{sec:DWspecial} We have not been able to obtain a general closed expression for the polynomial $P$. However, in a number of special cases it is possible to find explicit expressions for the measure~\eref{dw8} which can immediately be generalized to the measure~\eref{dw9}~\cite{Ambjorn:1997ub}. The simplest case of a closed piecewise linear manifold consists of two triangles of area $A$ glued together along the three links. The measure is \begin{equation} \label{dw10} d\mu(l_1,l_2,l_3) = \text{const}\times\frac{1}{A^{\frac{3}{2}}} \prod_{j=1}^3 l_jdl_j\ \de(\De). \end{equation} A tetrahedron has six links and four faces. The determinant of the $6\times 6$ matrix ${M}$ can be computed. The measure is given by \begin{equation} \label{dw11} d\mu(l_1,\ldots,l_6) = \text{const}\times \frac{\sum_{i=1}^4 A_i}{\prod_{i=1}^4 A_i} \prod_{j=1}^6 l_jdl_j\ \de(\De). \end{equation} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.4\linewidth]{FIGURES/singular.eps} \parbox[t]{.85\textwidth} { \caption[sing] { \label{fig:hat} \small Building blocks for the hedgehog geometry which consists of $K$ of these blocks glued together along the links $l_{a_k}$ and $l_{b_k}$. } } \end{center} \end{figure} Several other more complicated examples are possible. As a last example we display the measure for a hedgehog geometry which consists of $K$ hat-like building blocks such as the one shown in figure \fref{hat}. This block consists of two triangles with areas $A_{2k-1}$ and $A_{2k}$ respectively, glued together along the links $l_{2k-1}$ and $l_{2k}$. The hedgehog geometry is built by gluing these elements together along the links $l_{b_k}$ and $l_{a_{k+1}}$ successively. After this gluing the vertices $x$ and $y$ have the order $2K$, while the other vertices are of order two. Although this geometry might seem a bit artificial, it plays an important role in the analysis of the phase transitions of two-dimensional quantum gravity coupled to matter fields~\cite{Ambjorn:1986dn}. The matrix ${M}$ for this geometry can be written in a block diagonal form. 
Its determinant can be computed, resulting in the measure \begin{eqnarray} \label{dw12} d\mu(l_1,\ldots,l_{3K}) & = & {\rm const}\times\frac{\prod_{j=1}^{3K} l_j dl_j}{\prod_{k=1}^{2K}A_k^{3/2}}\prod_{k=1}^K\left(A_{2k-1}+A_{2k} \right)^{1/2} \nonumber \\ & & \times \left|\,\prod_{k=1}^K A_{2k}+(-1)^{K-1}\prod_{k=1}^K A_{2k-1}\,\right|. \end{eqnarray} For an even number $K$ of elements this measure vanishes whenever the product over the even areas equals the product over the odd areas. For odd $K$, the measure is always positive. Clearly, this points to a severe sickness of the measure~\eref{dw8}. \subsection{Commonly used measures} \label{sec:cum} As has been mentioned above, the DeWitt-like measures~\eref{dw9} are not suited for numerical simulations. Also, we have demonstrated that the measure in quantum Regge calculus is not fixed as in the continuum case. Therefore it is natural to search for other, local measures for which a sensible continuum limit can be defined. The principle of universality should ensure that all reasonable choices of the measure result in the same continuum theory. For numerical simulations in two-dimensional quantum Regge calculus, a number of local measures have been suggested~\cite{Hamber:1984tm,Bander:1986kd,Gross:1990fq}. In all cases these are of the form \begin{equation} \label{a1} d\mu(l_1,\ldots,l_L) = \frac{1}{\prod_{j=1}^T A_j^{\be}} \prod_{i=1}^L l_i^{-\al}dl_i\ \de(\De), \end{equation} for some parameter $\al$ or $\be$, or derived from this\label{p1}\footnote{Sometimes the product over all triangle areas is replaced by the product over all areas assigned to the links. In our arguments this is not important and corresponds to choosing another value for the parameters $\al$ and $\be$.}. These measures are arrived at by intuitively translating the continuum measure $\prod_{\xi\in M}\prod_{\mu\leq\nu}d\gmn$ as $\prod_{i=1}^L l_i dl_i$ and by replacing the product $\prod_{\xi\in M} g(\xi)^{\sig}$ by the product of the ``local'' areas $\prod_{k=1}^T A_k^{2\sig}$. The most popular choices are the scale invariant measure $\prod_{i=1}^L\frac{dl_i}{l_i}$ and the uniform measure $\prod_{i=1}^Ldx_i\sim \prod_{i=1}^Ll_idl_i$. Motivation for the first of these measures is often suggested by an analogy to the scale invariant continuum measure \begin{equation} \label{a3} \cD\gmn = \prod_{x\in M} g(x)^{-\frac{d+1}{2}}\prod_{\mu\leq\nu} d\gmn(x), \end{equation} which has been advocated for four-dimensional quantum gravity in~\cite{Misner,FP}. However, this measure is not diffeomorphism invariant, as has been stressed in section \sref{fm}, although the opposite has been claimed and ``proved'' by incorrect formal arguments~\cite{FP,Hamber:1997ut}. The DeWitt measure is the only diffeomorphism invariant measure for continuum quantum gravity. Therefore the measure~\eref{a3} cannot replace the DeWitt measure in continuum quantum gravity and any translations to discretized versions should be avoided. Also the DeWitt-like measure~\eref{dw8} cannot be used to motivate~\eref{a1} in dimensions higher than one, since~\eref{dw8} is highly nonlocal while the measures~\eref{a1} are local. Note that also in four dimensions, where the DeWitt measure is simply \begin{equation} \label{a4} \prod_{x\in M}\prod_{\mu\leq\nu} d\gmn(x), \end{equation} the discrete DeWitt-like measure is very complicated and highly nonlocal and does not equal the uniform measure~\eref{a1} with $\al=-1$ and $\be=0$, not even for the simplest 4-geometries. 
This clearly shows that translations like \begin{equation} \label{a5} \prod_{x\in M}g(x)^{\sig}\prod_{\mu\leq\nu} d\gmn(x) \rightarrow \prod_{j=1}^T A_j^{2\sig}\prod_{i=1}^L l_idl_i\ \de(\De) \end{equation} or similar are far too simple-minded, since the Jacobian of this variable transformation is not taken into account. The correct treatment of the diffeomorphisms and the Jacobian leads to the results in section \sref{menotti}. However, it has been argued that, appealing to universality, the precise form of the measure need not be known. In the continuum limit, averages of observables should converge to their continuum values independently of details of the discretization like the exponents $\al$ and $\be$ in~\eref{a1} or the underlying connectivity of the piecewise linear space. But numerical simulations revealed that averages of observables in the context of quantum Regge calculus \begin{itemize} \item do not reproduce results of continuum quantum gravity. In two dimensions not even the KPZ exponents~\eref{e3d} could be obtained. \item show a significant dependence on the measure. \item show a significant dependence on the underlying triangulation. \end{itemize} These are serious flaws in the Regge approach to quantum gravity. \subsection{Numerical simulations in two-dimensional quantum Regge calculus} \label{sec:sim} Quantum Regge calculus has repeatedly been used for numerical simulations in two, three and four dimensions. However, its status as a theory of quantum gravity remained unclear even in two dimensions. To clarify this situation, two groups have recently performed large scale numerical investigations in two dimensions, where it is possible to compare with continuum results. Bock and Vink~\cite{Bock:1995mq} have performed a Monte Carlo simulation of two-dimensional Regge calculus applied to pure gravity. They added the term \begin{equation} \label{bv1} \be \int_{M}\! d^2\xi\sqrt{g}\ \cR^2 \end{equation} to the action. Such curvature square terms have been introduced in the context of four-dimensional quantum gravity to overcome the unboundedness from below of the Einstein-Hilbert action which is induced by conformal fluctuations~\cite{Gibbons:1978ab}. In two dimensions this term is the only nontrivial part of the action. Its discretized version can be written as \begin{equation} \label{bv2} \be \sum_{i=1}^V \frac{\de_i^2}{A_{(i)}}, \end{equation} where $\de_i$ is the deficit angle at the vertex $i$ and $A_{(i)}$ is the area assigned to the vertex $i$ by a barycentric division of the triangle areas around $i$. The addition of this term to the action allows a scaling analysis which gives the susceptibility exponent $\ga$. This exponent can for example be defined as an entropy exponent for the partition function \begin{equation} \label{bv2a} Z(V) \propto V^{\ga-3} e^{\La_c V}, \end{equation} where the dominant exponential is nonuniversal and the universal part is given by the subleading power correction. Continuum calculations show that for pure gravity $\ga$ is given by \begin{equation} \label{bv3} \ga = 2 - \frac{5}{2}(1-h), \end{equation} compare~\eref{e3d}. It has been shown in~\cite{David:1985nj,Boulatov:1986jd} that the addition of a small curvature square term does not change the value of $\ga$ in dynamical triangulation. To be conservative, Bock and Vink used the scale invariant measure~\eref{a1} with $\al=1$ and $\be=0$ for their simulations. 
This enabled them to compare their results with previous calculations~\cite{Gross:1990fq}, where agreement between quantum Regge calculus and the formula~\eref{bv3} has been reported. They computed the susceptibility exponent for the topology of a sphere ($h=0$), a torus ($h=1$) and a bitorus, i.e., a sphere with two handles ($h=2$). \begin{table} \renewcommand{\baselinestretch}{1.2} \normalsize \begin{center} \begin{tabular}{llll} \hline\hline\hline $h$ & $0$ (sphere) & $1$ (torus) & $2$ (bitorus)\\ \hline $\ga$ in Liouville theory& $-0.5$ & $2$ & $4.5$ \\ $\ga$ in QRC from~\cite{Bock:1995mq} & $\gtrapprox 5.5$ & $\approxeq 2.0$ & $\gtrapprox 5.5$\\ \hline\hline\hline \end{tabular} \renewcommand{\baselinestretch}{1.0} \normalsize \parbox[t]{0.85\textwidth} { \caption[bv1] {\label{tab:bv1} \small Values for the susceptibility exponent $\ga$ for pure two-dimensional gravity at different topologies as computed in Liouville theory compared to the values from the Monte Carlo simulation in~\cite{Bock:1995mq}. Bock and Vink do not give error bars for their values. However, the deviation from the Liouville values is significant. } } \end{center} \end{table} The results are compared with the continuum values in table \tref{bv1}. For spherical and bitoroidal topology they found that $\ga$ in quantum Regge calculus differs significantly from Liouville theory. This means that quantum Regge calculus does not reproduce the fundamental KPZ result~\eref{bv3}. Note that the torus topology is not well suited to test the KPZ formula~\eref{e3d}. For $h=1$, $\ga$ is independent of the conformal charge of the matter coupled to gravity. Furthermore, $\ga$ takes the same value $2$ in the classical case without metrical fluctuations ($c=-\infty$) and in the quantum case. The results of Bock and Vink were first confirmed~\cite{Holm:1994un} and later questioned~\cite{Holm:1995kq,Holm:1996kw} by Holm and Janke. Using a very careful finite-size scaling ansatz for pure quantum gravity on a fixed random triangulation of the sphere, they arrive at the value $\ga=-10(2)$ for the susceptibility exponent~\cite{Holm:1996fd}, in the same setup. This is in clear disagreement with the prediction from Liouville theory and with the computations in~\cite{Bock:1995mq,Gross:1990fq}. Using an alternative measure similar to~\eref{a1} with $\al=0$ and $\be=\frac{1}{2}$, it was found in~\cite{Holm:1994un} that a scaling can no longer be observed and one encounters numerical difficulties. The observables did not reach equilibrium values. A similar change in the scaling behaviour was observed in~\cite{Bock:1995mq}. The value of $\ga$ is changed when the scale invariant measure is replaced by $\prod_{i=1}^L \frac{dl_i}{l_i}l_i^{\zeta}$, for $\zeta\neq 0$. Furthermore it has been demonstrated in~\cite{Holm:1996fd} that the results of quantum Regge calculus depend on the chosen fixed triangulation. \section{The appearance of spikes} \label{sec:spikes} In this section we will show that two-dimensional quantum Regge calculus does not reproduce results of continuum quantum gravity with any of the proposed measures mentioned above~\cite{Ambjorn:1997ub}. Let us recall that the partition function $G(V,R)$ for closed two-dimensional universes with fixed volume $V$ and with two marked points separated a geodesic distance $R$ has the asymptotic behaviour~\eref{e16g} \begin{equation} \label{as1} G(V,R) \sim V^{-1/4} e^{-\left(\frac{R}{V^{1/4}}\right)^{\frac{4}{3}}},~\text{ for}~\frac{R}{V^{1/4}}\rightarrow\infty. 
\end{equation} Thus the expectation value of any power of the radius $R$ can be calculated as \begin{equation} \label{as2} \langle R^n \rangle = \int_0^{\infty}\! dr\ r^n G(V,r) \sim V^{n/4}. \end{equation} We can show that~\eref{as2} is not fulfilled in quantum Regge calculus. The physical reason is that, contrary to continuum quantum gravity in two dimensions, the volume, or equivalently the cosmological constant, does {\it not} set an intrinsic scale for the theory. It follows that the concept of length has no natural definition in this formalism and that every generic manifold degenerates into spikes. \subsection{General proof of appearance} \begin{theorem} \label{th:1} For any value of $\al$ or $\be$ in~\eref{dw9} and~\eref{a1} there exists an $n$ such that for any link $l$ in a given arbitrary triangulation \begin{equation} \label{as3} \langle l^n \rangle_V = \infty \end{equation} for any fixed value $V$ of the spacetime volume. Thus the average radius $\langle R\rangle$, or some suitable power $\langle R^n\rangle$, does not exist~\cite{Ambjorn:1997ub}. \end{theorem} \vspace{2mm} \noindent {\bf Proof:}~In the situation of figure \fref{supplfig}, \begin{figure}[htbp] \begin{center} \includegraphics[width=0.45\linewidth]{FIGURES/suppl.eps} \parbox[t]{.85\textwidth} { \caption[supplfig] { \label{fig:supplfig} \small Parametrization of the link lengths and the triangle areas in a part of the piecewise linear manifold $M$. The vertex $v$ is of order $N$. A spike can be formed by making the links $l_1,\ldots,l_N$ arbitrarily large. When the area of the surface is fixed, the link lengths $l_{N+1},\ldots,l_{2N}$ have to be small, of order $1/l_1$. } } \end{center} \end{figure} which displays a small part of the piecewise linear manifold $M$, the vertex $v$ has the arbitrary order $N$. The links connected to $v$ are labelled as $l_1,\ldots,l_N$, while the links opposite to $v$ are labelled as $l_{N+1},\ldots,l_{2N}$. We want to analyze the Regge integral in that part of the phase space where the vertex $v$ is pulled to infinity and forms a spike while the volume of $M$ is held bounded. In this situation the link lengths $l_1,\ldots,l_N$ are large, while the link lengths $l_{N+1},\ldots,l_{2N}$ have to be of the order $l_1^{-1}$. First we analyze the measure~\eref{a1} with $\be=0$. Let $\La$ be a large number. Then $l_1$ can be integrated freely between $\La$ and infinity, while the integration over $l_2,\ldots,l_N$ is restricted by triangle inequalities. The integral over $l_2,\ldots,l_N$ gives a factor $l_1^{(1-N)\al}(l_{N+1}\ldots l_{2N})^{\frac{N-1}{N}}$ after symmetrizing over $l_{N+1},\ldots,l_{2N}$. An additional factor $l_{N+1}\ldots l_{2N}$ comes from the triangle inequalities for the adjacent triangles which contain the links $l_{N+1},\ldots,l_{2N}$. Taking these factors together results in \begin{equation} \label{as} \int_{\La}^{\infty}\! dl_1\ l_1^{-N\al} \int_0^{\frac{\La}{l_1}}\! dl_{N+1}\ldots dl_{2N}\ (l_{N+1}\ldots l_{2N})^{\frac{2N-1}{N} - \al}, \end{equation} which exists if \begin{equation} \label{as5} \al < 3 - \frac{1}{N}. \end{equation} If that is fulfilled, the integral over $l_{N+1},\ldots,l_{2N}$ can be performed, which gives \begin{equation} \label{as6} \int_{\La}^{\infty}\! dl_1\ l_1^{-N\al} \left(\frac{\La}{l_1}\right)^{N\left(\frac{3N-1}{N}-\al\right)} \sim \int_{\La}^{\infty}\! dl_1\ l_1^{1-3N}. \end{equation} This means that $\langle l_1^{n}\rangle = \infty$ for all $n\geq 3N-2$, which proves theorem~\ref{th:1} for the measure~\eref{a1} with $\be=0$. 
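The power counting above is straightforward to verify symbolically. The following minimal sketch (Python with \texttt{sympy}; the function and variable names are ours and purely illustrative) performs the integrals over the small links in~\eref{as} for the scale invariant measure, $\al=1$, and confirms the $l_1^{1-3N}$ behaviour of the remaining integrand in~\eref{as6}:
\begin{verbatim}
import sympy as sp

l1, Lam = sp.symbols('l1 Lambda', positive=True)

def spike_integrand(N, alpha):
    # each of the N small links l_{N+1},...,l_{2N} is integrated from 0 to
    # Lambda/l1 with the exponent (2N-1)/N - alpha of Eq. (as)
    p = sp.Rational(2*N - 1, N) - alpha
    x = sp.symbols('x', positive=True)
    inner = sp.integrate(x**p, (x, 0, Lam/l1))**N
    return sp.simplify(l1**(-N*alpha) * inner)   # remaining l1 integrand

for N in (3, 4, 6):                              # order of the vertex v
    expr = spike_integrand(N, alpha=1)           # scale invariant measure
    # Eq. (as6) predicts expr ~ l1**(1-3N); the ratio must be l1-independent
    assert sp.simplify(sp.diff(expr * l1**(3*N - 1), l1)) == 0
    print("N =", N, ": integrand scales as l1**(", 1 - 3*N, ")")
\end{verbatim}
The same check, with the extra factors $(l_1\ldots l_N)^{-\be}(l_{N+1}\ldots l_{2N})^{-2\be}$ included, reproduces the exponent in~\eref{as8} below.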
To discuss the general form of~\eref{a1} we have to parametrize the areas $A_1,\ldots,A_{2N}$. In our situation these are given as the products of a small and a large link length if $v$ is pulled to infinity. Thus the product ${\prod_{j=1}^T A_j^{-\be}}$ gives an extra factor $(l_1\ldots l_N)^{-\be}(l_{N+1}\ldots l_{2N})^{-2\be}$. After integrating over $l_2,\ldots,l_N$ and after taking the triangle inequalities into account we get: \begin{equation} \label{as7} \int_{\La}^{\infty}\! dl_1\ l_1^{-N(\al+\be)} \int_0^{\frac{\La}{l_1}}\! dl_{N+1}\ldots dl_{2N}\ (l_{N+1}\ldots l_{2N})^{\frac{2N-1}{N}-\al-2\be}, \end{equation} which exists if $\al+2\be<3-\frac{1}{N}$. Then the integrals can be computed to give \begin{equation} \label{as8} \int_{\La}^{\infty}\! dl_1\ l_1^{1+N(\be-3)}. \end{equation} Thus for all $n\geq N(3-\be)-2$ the expectation value of $l_1^n$ is infinite. For the measure~\eref{dw9} the same analysis can be repeated if one notes that from the nonlocal factor $\sqrt{P(A_1^{2\be/3},\ldots,A_T^{2\be/3})}$ the product $\prod_{i=1}^L l_i^{\be/3}$ can be split off. It turns out that this does not change the result and we again obtain~\eref{as8}. This completes the proof. We refer to $\langle l^n\rangle=\infty$ as the appearance of spikes. \subsection{Spikes in special geometries} Clearly, if a discretized theory is to make any sense, the results should not depend on details like the chosen triangulation. In fact, however, the proof of theorem \ref{th:1} shows that $\langle l^n\rangle$ depends crucially on the order of the vertices to which the link $l$ is attached. To illustrate this dependence of quantum Regge calculus on the underlying triangulation further, we analyze some special geometries in the same way as in the proof of theorem \ref{th:1}. For some geometries even the partition function is ill-defined for some measures, while for other measures or geometries $\langle l^n\rangle$ might be finite for some values of $n$. \subsubsection{Spikes in a hexagonal geometry} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\linewidth]{FIGURES/tst.eps} \parbox[t]{.85\textwidth} { \caption[hex] { \label{fig:hex} \small Parametrization of a hexagonal geometry; $k\neq m$. All centers of the cells, which are marked by thicker lines, form spikes. } } \end{center} \end{figure} We analyze a regular triangulation with torus topology in which all vertices have the order $6$, consisting of $K$ hexagonal cells and parametrized as in figure \fref{hex}. We want to study the measure~\eref{a1} in that part of the configuration space where all centers of cells form spikes while the total area of the surface is held bounded to prevent exponential damping from the action. This means that the $6K$ link lengths $l_i$, $i=1,\ldots,6K$ are very large while the $3K$ link lengths $l_{a_k},l_{b_k},l_{c_k}$, $k=0,\ldots,K-1$ are very small, of order $l_1^{-1}$. In cell number $k$ one link, say $l_{6k+1}$, can be integrated freely from a large number $\La$ to infinity, while the integration over $l_{6k+2},\ldots,l_{6k+6}$ is then constrained by the triangle inequalities. Integrating these out and symmetrizing over the small links $l_{a_k},l_{b_k},l_{c_k}$, $k=0,\ldots,K-1$ gives \begin{equation} \label{sg1} \int_{\La}^{\infty}\! \prod_{k=1}^K dl_{6k+1}\, l_{6k+1}^{-6(\al+\be)} \int_0^{\frac{\La}{l_{6k+1}}}\! \prod_{k=1}^K (l_{a_k}l_{b_k}l_{c_k})^{\frac{5}{3}-\al-2\be} dl_{a_k}dl_{b_k}dl_{c_k}. \end{equation} Thus the integral over $l_{a_k},l_{b_k},l_{c_k}$ only exists if $\al+2\be<\frac{8}{3}$. 
Furthermore, $\langle l_1^n\rangle=\infty$ if $n\geq 7+3\al$. For the measure~\eref{dw9} we extract a factor $\prod_{i=1}^L l_i^{\frac{\be}{3}}$ from the nonlocal factor. Performing the analysis gives the bound $\be<\frac{11}{5}$ on $\be$. $\langle l_1^{n}\rangle$ is infinite if $n\geq 4-\be$. These results are summarized in table \tref{bounds}. \subsubsection{Spikes in a $12$-$3$ geometry} Similarly we can analyze the integration measures for quantum Regge calculus for a triangulation of the torus where two thirds of the vertices have order \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\linewidth]{FIGURES/twelthree.eps} \parbox[t]{.85\textwidth} { \caption[12-3] { \label{fig:12-3} \small Illustration of the $12$-$3$ geometry. All vertices of order $3$ form spikes such that the total area of the surface is held bounded from above, to prevent exponential damping from the action. } } \end{center} \end{figure} $3$ and one third of the vertices have order $12$, see figure \fref{12-3}. We consider that part of the configuration space where all vertices of order $3$ form spikes. The bounds on the exponents $\al$ and $\be$ in~\eref{a1} and~\eref{dw9} are depicted in table \tref{bounds}. The resulting bounds are sharper because the order of the vertices which form spikes is lower, which leaves less geometrical restriction. For $\al\leq -1$ the average of a link length cannot be defined with the measure~\eref{a1} (independent of $\be$). The same holds true for the measure~\eref{dw9} if $\be>0$. \subsubsection{Spikes in degenerate geometries} One can get sharper bounds for geometries with vertices of order two, like the hedgehog geometry introduced in section \sref{DWspecial}. As explained in that section, these geometries do occur in phases of simplicial quantum gravity in the framework of dynamical triangulation and it would be unnatural to forbid their occurrence as basically local parts of very large triangulations. As a first example we analyze the situation for the geometry shown \begin{figure}[htbp] \begin{center} \includegraphics[width=0.7\linewidth]{FIGURES/brilliant2.eps} \parbox[t]{0.85\textwidth} { \caption[brill] { \label{fig:degenerate} \small A degenerate geometry which has two vertices of high order while all other vertices are of order $4$. Spikes can be formed in several ways. In the text we analyze the behaviour in that part of the phase space where every second vertex of order $4$ forms a spike. } } \end{center} \end{figure} in figure \fref{degenerate}, which has two vertices of high order while all other vertices are of order $4$. This is slightly more regular than the hedgehog geometry associated with figure \fref{hat}. In table \tref{bounds} we present the results from our analysis of the situation where every second vertex of order $4$ forms a spike. While with the measure~\eref{a1} $\langle l^2\rangle$ is undefined for $\al\leq -1$, the DeWitt-like measure~\eref{dw8} does not even admit the computation of the average $\langle l\rangle$. For the hedgehog geometry these measures themselves are ill-defined, see table \tref{bounds}. The measure~\eref{a1} is ill-defined for $\al=-1$ (the uniform measure), while for instance for the scale invariant measure~\eref{a1} with $\al=1$ and $\be=0$ expectation values of $l^2$ and higher are infinite. 
\begin{table} \renewcommand{\baselinestretch}{1.2} \normalsize \begin{center} \begin{tabular}{lcll} \hline\hline\hline Geometry & Measure & Bounds & Values of $n$ \\ & & & s.t.~$\langle l^n\rangle=\infty$\\ \hline hexagonal, &\eref{a1}&$\al+2\be<\frac{8}{3}$, $-\frac{7}{3}<\al$&$n\geq 7+3\al$\\ see figure \fref{hex} &\eref{dw9}&$\be<\frac{11}{5}$&$n\geq 4-\be$\\ &&&\\ $12$-$3$, see &\eref{a1}&$\al+2\be<\frac{7}{3}$, $-\frac{5}{3}<\al$&$n\geq \frac{5}{2}+\frac{3}{2}\al$\\ figure \fref{12-3} &\eref{dw9}&$\be<2$&$n\geq 1-\frac{\be}{2}$\\ &&&\\ degenerate,&\eref{a1}&$\al+2\be<\frac{5}{2}$, $-2<\al$&$n\geq 4+2\al$\\ see figure \fref{degenerate}&\eref{dw9}&$\be<\frac{21}{10}$&$n\geq 2-\frac{2\be}{3}$\\ &&&\\ hedgehog, see&\eref{a1}&$\al+2\be<2$, $-1<\al$&$n\geq 1+\al$\\ section \sref{DWspecial}&\eref{dw9}&$\be<0$&$n\geq -\frac{\be}{3}$\\ \hline\hline\hline \end{tabular} \renewcommand{\baselinestretch}{1.0} \normalsize \parbox[t]{0.85\textwidth} { \caption[bounds] {\label{tab:bounds} \small Bounds for the exponents $\al$ and $\be$ for the measures ~\eref{a1} and~\eref{dw9} and values of $n$ such that $\langle l^n\rangle=\infty$ for various geometries. } } \end{center} \end{table} \section{Concluding remarks} \subsection{The appearance of spikes in higher dimensions} Although the derivation in the last section has been given only for the two-dimensional case, we expect that one has to deal with the same problems in higher-dimensional quantum Regge calculus. The measures \eref{a1} have obvious generalizations in higher dimensions. An extra problem in higher dimensions is the unboundedness of the action, which can be overcome by adding a cutoff involving for instance a curvature-squared term. However, if the cutoff is taken to zero, we expect the spikes to reemerge. Actually, this has been observed in numerical simulations \cite{Beirl:1992ap,Beirl:1994st}. In \cite{Beirl:1994st} four-dimensional quantum Regge calculus is explored on a fixed but non-regular triangulation of the $4$-torus. The triangulation is constructed from a regular triangulation with $31$ vertices following \cite{KuehnelLassmann} by inserting three additional vertices with low order. While in the conventional regular triangulation of the $4$-torus all vertices are of order $30$, the additional vertices are of order $5$. The Einstein-Hilbert action plus a cosmological constant term is discretized in the usual way. A cutoff is introduced by the requirement that the local fatness \begin{equation} \label{c1} \phi_s = 576 \frac{V_s^2}{\text{max}_{l_i\in s}(l_i^8)} \end{equation} for every $4$-simplex $s$ is held bounded away from zero, $\phi_s\geq f=\text{const}>0$. Here we have introduced the $4$-volume $V_s$ of the simplex $s$. The fatness is maximal for equilateral simplices and vanishes for collapsing ones. For the integration over the link lengths $l_i$ the measure \begin{equation} \label{c2} d\mu(l_1,\ldots,l_L) = \prod_{i=1}^L l_i^{2\sig-1} dl_i \de(\De) \end{equation} has been chosen for $\sig$ between $0$ and $1$. In figure 1 of \cite{Beirl:1994st} the expectation value of $l_i^2$ is depicted as a function of the cutoff $f$ at the values $f=2^m10^{-6},~m=9,\ldots,0$ for $\sig=1$, $\sig=\frac{1}{2}$ and $\sig=\frac{1}{10}$. For $\sig=1$ and $\sig=\frac{1}{2}$ this expectation value tends to infinity if the cutoff is removed, while for $\sig=\frac{1}{10}$ the situation cannot be decided from the numerical data in \cite{Beirl:1994st}. 
From the figure we deduce that the behaviour of $\langle l_i^2\rangle$ is a power of $f$: \begin{eqnarray} \label{c3} \langle l_i^2\rangle &\sim& f^{-0.26(2)},~\text{for $\sig=1$,}\\ \langle l_i^2\rangle &\sim& f^{-0.16(1)},~\text{for $\sig=\frac{1}{2}$}. \end{eqnarray} This clearly demonstrates the appearance of spikes in four dimensions and shows that, in four dimensions as well as in two, the physical results must be expected to depend on the choice of the measure and on the choice of the fixed triangulation. \subsection{The role of diffeomorphisms in two dimensions} \label{sec:menotti} While we have shown that quantum Regge calculus in its present form is not a suitable candidate for a theory of quantum gravity, it is an interesting question whether one can develop a theory of quantum gravity in the spirit of quantum Regge calculus. A promising alternative approach to the functional integration~\eref{e1} was studied in~\cite{Menotti:1995ih,Menotti:1996de,Menotti:1997tm}. The integration over all metrics modulo diffeomorphisms is replaced by the continuum functional integral over piecewise linear metrics with a finite number of singularities modulo diffeomorphisms. The singularities are the $V$ vertices of the piecewise linear space at which the curvature is located. This approach is remarkable since at each stage in the calculations diffeomorphisms are treated exactly. Thus it includes all metrics which are related to a piecewise linear metric by diffeomorphisms. It turns out that this is sufficient to reproduce a discretized form of the Liouville action. Therefore it is believed that for $V\rightarrow\infty$ this discretized version approaches the continuum expressions in some sense. No ad hoc assumption about the form of the measure has to be made. Note that the connectivity of the piecewise linear space is not fixed. In particular, all metrics of dynamical triangulation are contained in this approach. We will give a short review of these ideas for the two-dimensional case. As in the continuum calculations, the starting point is the DeWitt metric. Since the reduction of the degrees of freedom involves only geometries but not diffeomorphisms, the gauge fixing which leads to~\eref{z8} can be performed in the same way as in the continuum. Now the integration $\cD\phi$ over the conformal factor has to be restricted to those conformal factors which describe Regge geometries. For spherical topology, to which we confine the discussion here, there are no Teichm\"uller parameters and one can choose a unique background metric $\hat{g}$. Menotti et al.~adopt the usual stereographic projection of the piecewise linear surface onto the plane with $\hat{g}_{\mu\nu} = \de_{\mu\nu}$~\cite{Forster:1987bw}. Then the conformal factor for piecewise linear geometries can be parametrized as \begin{equation} \label{m1} e^{\phi} = e^{2\la_0}\prod_{i=1}^V \vert z-z_i\vert^{2(\al_i-1)}. \end{equation} Here $2\pi\al_i$ is the angular aperture at the vertex $i$, while the $z_i$ are the coordinates of the singularities in the complex plane. The sum of the deficit angles $1-\al_i$ has to equal the Euler characteristic, \begin{equation} \label{m2} \sum_{i=1}^V (1-\al_i) = 2, \end{equation} which means that $\al_V$ can be expressed in terms of the other angles. $\la_0$ is an overall scale factor. Note that this parametrization simply stems from a generalization of the common Schwarz-Christoffel mappings which map the complex upper half plane to an arbitrary polygon~\cite{Smirnov}. 
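Since the parametrization~\eref{m1} is completely explicit, its geometrical meaning can be checked numerically. The following minimal sketch (Python with \texttt{NumPy}; the vertex positions, the equal angles $\al_i=\frac{1}{3}$ and the probe radius are our own arbitrary choices consistent with~\eref{m2}) verifies that the ratio of geodesic circumference to geodesic radius of a small circle around a vertex tends to $2\pi\al_i$, i.e., that $2\pi\al_i$ is indeed the angular aperture at the singularity:
\begin{verbatim}
import numpy as np

# three conical singularities; Eq. (m2) demands sum(1 - alpha_i) = 2
zi    = np.array([0.0 + 0.0j, 1.0 + 0.0j, 0.3 + 0.8j])
alpha = np.array([1/3, 1/3, 1/3])
lam0  = 0.0

def edens(z):
    # length density e^{phi/2} of the metric e^phi |dz|^2, from Eq. (m1)
    return np.exp(lam0) * np.prod(np.abs(z - zi) ** (alpha - 1))

k, eps, al = 0, 1e-3, 1/3      # probe vertex z_0 at coordinate radius eps

# geodesic circumference of the coordinate circle |z - z_0| = eps
th = np.linspace(0.0, 2.0*np.pi, 2000, endpoint=False)
circ = sum(edens(zi[k] + eps*np.exp(1j*t)) for t in th) * eps * (th[1] - th[0])

# geodesic radius along a ray; substitute rho = t**(1/al) to tame the tip
t = np.linspace(1e-12, eps**al, 4000)
rho = t ** (1.0/al)
vals = np.array([edens(zi[k] + r) for r in rho]) * (1.0/al) * t**(1.0/al - 1.0)
radius = np.trapz(vals, t)

print(circ/radius, 2*np.pi*alpha[k])   # both approach the aperture 2*pi/3
\end{verbatim}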
One advantage of this parametrization over a parametrization via link lengths is that there are no triangle inequalities. Due to the presence of six conformal Killing vectors, compare section \sref{liouville}, the gauge fixing is not complete and the conformal factor $\phi$ is invariant under an $SL(2,\mathbb{C})$ transformation given by \begin{equation} \label{m3} z' = \frac{az+b}{cz+d},~ad-bc=1,~a,b,c,d\in\mathbb{C}. \end{equation} A short calculation gives \begin{equation} \label{m4} \phi'(z',\la_0, z_i, \al_i) = \phi(z',\la_0', z_i', \al_i), \end{equation} with \begin{equation} \label{m5} \la_0' = \la_0 - \sum_{i=1}^V (1-\al_i)\log\vert cz_i+d\vert. \end{equation} Geometrical invariants such as the angles $\al_i$ and the area \begin{equation} \label{m6} A=e^{2\la_0}\int\! d^2z\ \prod_{i=1}^V \vert z-z_i\vert^{2(\al_i-1)} \end{equation} are invariant under the transformation~\eref{m3}. A counting shows that, as in the usual parametrization in Regge calculus, there are $3V-6$ dynamical degrees of freedom. To compute the Faddeev-Popov determinant for these conformal factors one employs the same techniques as in the continuum. Namely one computes the change of \begin{equation} \label{m7} \log\frac{\det'(P^+P)}{\det\langle\om_a,\om_b\rangle_V} \end{equation} under a variation of the parameters $\la_0$, $z_i$ and $\al_i$ and integrates the result back. One gets~\cite{Menotti:1996de}: \begin{eqnarray} \label{m8} \lefteqn{ \log\sqrt{\frac{\det'(P^+P)}{\det\langle\om_a, \om_b\rangle_V}} =}\\ && {\frac{26}{12}\left\lbrace \sum_{i, j\neq i} \frac{(1-\al_i)(1-\al_j)}{\al_i} \log\vert z_i-z_j\vert + \la_0\sum_{i}(\al_i-\frac{1}{\al_i}) - \sum_i F(\al_i) \right\rbrace}.\nonumber \end{eqnarray} This is the discretized form of the Liouville action. In the limit $V\rightarrow\infty$ it approaches the continuum Liouville action. The function $F$ depends only on the angles $\al_i$. For $V\rightarrow\infty$, $\sum_{i=1}^VF(\al_i)$ goes to a topological invariant. The discretized Liouville action is invariant under the $SL(2,\mathbb{C})$ transformation~\eref{m3}. Indeed, it has been shown in~\cite{Menotti:1997xz} that this invariance induced by the conformal Killing vectors is sufficient to derive the form~\eref{m8} of the Liouville action up to the sum over the $F(\al_i)$. Note that the discretized Liouville action is highly nonlocal. Overcounting of equivalent locally flat geometries is avoided by expression~\eref{m8}. Indeed, an angle $\al_i=1$ simply does not contribute to~\eref{m8}. The integration measure for the conformal factor is defined by the scalar product $ds^2 = \int\! d^2z\ e^{\phi}\ \de\phi\de\phi$. The change to the variables $(q_1,\ldots,q_{3V})= ((z_1)_x,\ldots,(z_V)_y, \la_0, \al_1,\ldots,\al_{V-1})$ involves a Jacobian: \begin{equation} \label{m9} \cD_{e^{\phi}\de_{\mu\nu}}\phi = \prod_{i=1}^V d^2z_i \prod_{j=1}^{V-1}d\al_j d\la_0\ \sqrt{\det J}, \end{equation} which is given by the determinant of \begin{equation} \label{m10} J_{ij} = \int\! d^2z\ e^{\phi} \frac{\partial\phi}{\partial q_i} \frac{\partial\phi}{\partial q_j}, \end{equation} with the conformal factor $\phi$ defined by~\eref{m1}. The diagonal entries $J_{ii}$ for $i=1,\ldots,2V$ of this matrix require some regularization. The explicit form of the measure is unfortunately unknown even for the simplest geometries, which makes the approach presently inaccessible for numerical tests. While it has been possible to extract the dependence of the Jacobian on the scale factor $\la_0$, almost nothing is known about the other variables. 
It is an interesting open problem to show that the conformal anomaly arises from this measure. \subsection{Conclusion} We have shown that in two-dimensional quantum Regge calculus the expectation value of a suitable power of a single link length diverges. No natural scale is set by the measures \eref{a1} or \eref{dw9}. Even if the total area is kept bounded from above, the probability of having arbitrarily large link lengths is not exponentially suppressed. In two-dimensional quantum gravity, however, the probability of having two points separated a distance $R$ is exponentially suppressed as $e^{-R\sqrt{\La}}$ for a given cosmological constant $\La$ and suppressed as $e^{-R^{4/3}/V^{1/3}}$ if the area $V$ of spacetime is fixed. This means that quantum Regge calculus in two dimensions does not provide a formulation of quantum gravity. In addition, the mere existence of the partition function depends crucially on the chosen measure as well as on the chosen fixed triangulation. Clearly, this is a severe problem: if quantum Regge calculus were a realistic theory, these microscopic details should be unimportant and not affect physical expectation values. Furthermore, we have recalled that spikes have been observed in numerical studies of four-dimensional quantum Regge calculus. Numerical investigations demonstrate that also in four dimensions the results depend crucially on the underlying triangulation and on the measure. Thus our conclusion in four dimensions is the same as in two dimensions: quantum Regge calculus does not provide a theory of quantum gravity. A sensible continuum limit cannot be defined. Not even the concept of length can be defined unambiguously. \chapter*{Conclusion} \addtocontents{toc}{\protect\contentsline {chapter}{\protect\numberline {\ }Conclusion}{82}} \enlargethispage*{\baselineskip} In this work we have analyzed the fractal structure of two-dimensional quantum gravity. A central result is that the spectral dimension equals two for all types of conformal matter with central charge smaller than one coupled to gravity. Other aspects of the fractal nature of the spacetime have been introduced and discussed, such as the extrinsic and intrinsic Hausdorff dimension, the branching ratio into minimal bottleneck baby universes and the scaling of the two-point function. We have compared our results with the latest numerical data. Numerical simulations are performed in the framework of dynamical triangulation. We have discussed this method and illustrated its power by a computation of an exact expression for the two-point function in two-dimensional quantum gravity. Furthermore we have discussed quantum Regge calculus, which has been suggested as an alternative discretization of quantum gravity. We have shown that this theory does not provide a suitable discretization of quantum gravity since it disagrees with the continuum results. This point constitutes another central result of this work. There are several open questions which have to be addressed in future work. It would be very interesting to develop a method which would allow the solution of Liouville theory. More modestly, the computation of the geodesic distance or similar geometrical objects would be interesting. The derivation of the intrinsic Hausdorff dimension is not very well understood. An alternative derivation of this result would provide further insight into the theory. Related to this is the question of why the fractal dimension seems to equal four for unitary matter coupled to quantum gravity in two dimensions. 
Of course, the final triumph would be an analogous analysis of the fractal properties of the quantum spacetime of four-dimensional quantum gravity in the continuum approach and in the discretized approach. While this work settles the question about the viability of quantum Regge calculus in its present formulation, it would be interesting to develop different schemes to discretize quantum gravity. We have discussed one possibility in chapter \sref{regge}. It would be interesting to study further properties of this approach. \small \bibliographystyle{utphys} \addtocontents{toc}{\protect\contentsline {chapter}{\protect\numberline {\ }Bibliography}{83}}
\section{Introduction} \label{sect1} Black phosphorus (BP) is a layered structure of buckled atomic phosphorus sheets in which the layers are connected by weak van der Waals forces. \cite{Jami} It is the most stable phosphorus-based crystal at room temperature and for a wide range of pressure values. Crystalline BP is a semiconductor in which each $P$ atom is covalently bonded to three adjacent atoms. Such a structure exhibits strong anisotropy for an arbitrary number of layers, as well as hybrid electron and hole states in the vicinity of the band edge in phosphorene, featuring both Dirac-cone and conventional Schr\"{o}dinger electron behavior. While bulk BP is a semiconductor with a small bandgap of $0.3\,eV$, its monolayer counterpart is a semiconductor with a large direct bandgap $(\backsim 2\,eV)$; it is referred to as \textit{phosphorene} in analogy to graphene and can be exfoliated mechanically. \medskip \par It is not surprising that reliable information on the band structure of BP with a specified number of layers has been appreciated as being extremely important for device applications, including the field-effect transistor. \cite{Litran} This has stimulated a large number of first-principles calculations based on density functional theory, as well as the tight-binding model\,\cite{Rudenko}, group-theoretical calculations\,\cite{Appel} and the continuum model \cite{GGBpPRB,GGBpPRL} of BP structures, which recently received a significant amount of attention for their success in analyzing experimental data. \cite{transp1,transp2, transp3} The quality of all such models\,\cite{jpcmrud,japrud} depends on the accuracy of the exchange-correlation approximation. We expect numerous attempts to engineer new types of anisotropic electronic band structure in BP-based devices using various mechanisms, such as the electron-photon interaction with a dressing field. The corresponding effect of an imposed electrostatic field was addressed in Ref.~[\onlinecite{elfield}]. \medskip \par The possibility of an electronic topological transition and an electrically-tunable Dirac cone was theoretically predicted for multi-layer BP. \cite{Per17, Per18} The fact that the bandgap of BP is mainly determined by the number of layers was confirmed in experiment,\cite{buscema} i.e., the energy gap decreases strongly with an increasing number of layers and can be effectively neglected for $N_L > 5$. While demonstrating a Dirac cone with no effective mass or energy bandgap, such electrons still possess the strongly anisotropic properties of pristine black phosphorus. \,\cite{liklein} Consequently, non-symmetric Klein tunneling could be observed \,\cite{liklein} with shifted transmission peaks, similar to electron tunneling in graphene in the presence of a magnetic field. \cite{pe,mcc} \medskip \par Phosphorene is one of the most recently discovered members of a sizable group of low-dimensional structures with great potential for nanoelectronics applications. The most famous member of this family is graphene, fabricated in 2004. Because of its unique properties,\,\cite{gr1,gr2,gr3} graphene has initiated a new direction in electronic devices. A subsequent crucial advance was the discovery of the buckled honeycomb lattices such as silicene and germanene. Their most distinctive feature is the existence and tunability of two non-equivalent spin-orbit and sublattice-asymmetry bandgaps. The latter, $\Delta_{z}$, is directly modified by an external electrostatic field. 
\cite{ezawaprl,ezawa9prl,ezawa, SilMain} This is a result of the out-of-plane buckling, which is due to the larger radius of $Si$ and $Ge$ atoms compared to carbon and to the $sp^3$ hybridization. The most recently fabricated germanene possesses similar buckling features but with different bandgaps and Fermi velocity. \cite{G1,G2,G3,G4} \medskip \par Another important class of such materials are the \textit{transition metal dichalcogenides} (TMDC's), i.e., structures such as \texttt{MC}$_2$, where \texttt{M} denotes a metal such as $Mo$, $W$, and \texttt{C} is a chalcogen atom ($S$, $Se$, or $Te$). Molybdenum disulfide MoS$_2$, a typical representative\,\cite{habib,most} of TMDC's, exhibits a large energy bandgap $\backsimeq 1.9 \,eV$ and broken symmetry between the electron and hole subbands, so that for all experimentally accessible electron density values only one hole subband is doped. As a result, all the electronic, collective and transport properties differ significantly between electron and hole doping. However, all these low-dimensional materials exhibit almost complete (with a slight deviation for MoS$_2$) isotropy in the $x-y$ plane. Consequently, phosphorene is a genuinely new material with very unusual properties, and complete and thorough studies of these characteristics open an important new chapter in low-dimensional science and technology. \begin{figure} \centering \includegraphics[width=0.55\textwidth]{FIG1} \caption{(Color online)\ Schematics of a phosphorene layer(s) irradiated by external dressing light with either linear or circular polarization is shown in $(a)$. Panel $(b)$ presents the electron and hole energy bands $\varepsilon_{\Delta}^{\pm} ({\bf k})$ given by Eq.\eqref{disp01} along the $x$ ($\phi_0 = 0$, solid black lines) and $y$ ($\phi_0 = \pi/2$, blue and short-dashed) directions, and also for $\phi_0 = \pi/4$ (red and long-dashed curve). The negative (reflected) \textit{electron} dispersion $- \varepsilon_{\Delta}^{+} (k_x)$ is also presented for comparison to highlight the electron-hole asymmetry in phosphorene. } \label{FIG:1} \end{figure} \par \medskip \par With the newest achievements in laser and microwave science, it has become possible to achieve substantial control and tunability of the electronic properties of low-dimensional condensed-matter materials by subjecting them to strong off-resonant high-frequency periodic fields (so-called ``\textit{Floquet engineering}'', schematically shown in Fig.~\ref{FIG:1} $(a)$).\cite{fl1,fl2,fl4,fl5} If the electron-photon coupling strength is high, such a bound system could be considered as a single, holistic object and has been investigated using the methods of quantum optics and mechanics. These electrons with substantially modified energy dispersions, referred to as ``\textit{dressed states}'', have become a commonly used model in present-day low-dimensional physics. \cite{kibis10,kibis11, kibis12,kibis19, kibis23, kibissrep} One of the first significant achievements has been the demonstration of a metal-insulator transition in graphene \,\cite{kibis0}, which drastically affected the electron tunneling and the Klein paradox. \cite{m1,m2} Important collective properties such as exchange and correlation energies are also affected by the presence of an energy gap, \cite{mPQE} and the spin dynamics on the surface of a three-dimensional topological insulator \,\cite{kibisnjp} is also modified. \par \medskip \par The rest of our paper is organized in the following way. 
In Sec.~\ref{sect2}, we present our model, the low-energy Hamiltonian and the energy dispersions for phosphorene, i.e., single-layer black phosphorus with strong in-plane anisotropy and a large energy bandgap. The electron-photon dressed states for phosphorene are presented and discussed in Sec.~\ref{sect3}. Section~\ref{sect4} is devoted to calculating the dressed states for few-layer phosphorus, in which the electrons are anisotropic massless Dirac fermions without a gap. A mathematical generalization of this model with both on- and off-diagonal bandgaps is considered, and the corresponding dressed states are also obtained. Concluding remarks are provided in Sec.~\ref{sect5}. \section{Low-energy model for phosphorene} \label{sect2} Our calculations utilize the model for BP presented in Refs.~[\onlinecite{GGBpPRL, GGBpPRB}]. Being somewhat similar to the truly two-dimensional hexagonal structure of graphene, the atomic arrangement of single-layer BP results in a \textit{puckered} surface due to the $sp^3$ hybridization composed of the $3s$ and $3p$ orbitals. For silicene, such hybridization is responsible for the out-of-plane ``buckling'' displacement of the $Si$ atoms. \medskip \par The continuum $k$-dependent Hamiltonian is obtained from the tight-binding model. Close to the $\Gamma$ point, approximated up to second order in the wave vector components, it is given as \begin{equation} \mathbb{H}_{ph}^{\Delta} ({\bf k}) = \left( \mathbb{E}_i + \sum\limits_{i = x,y} \eta_i k_i^2 \right) \hat{\mathbb{I}}_{2\times 2}+ \left( \Delta_O + \sum\limits_{i = x,y} \gamma_i k_i^2 \right) \hat{\Sigma}_x- \chi k_y \hat{\Sigma}_y \, , \label{mosham} \end{equation} or in the matrix form, \begin{equation} \mathbb{H}_{ph}^{\Delta} ({\bf k}) = \left[ \begin{array}{cc} \mathbb{E}_i + \sum\limits_{i = x,y} \eta_i k_i^2 & \Delta_O + \sum\limits_{i = x,y} \gamma_i k_i^2 - i \chi k_y\\ \Delta_O + \sum\limits_{i = x,y} \gamma_i k_i^2 + i \chi k_y & \mathbb{E}_i + \sum\limits_{i = x,y} \eta_i k_i^2 \end{array} \right] \, . \label{h01} \end{equation} This Hamiltonian clearly displays a significantly different structure and properties compared to that of graphene. First, there are no linear $k$-terms except $\pm i \chi k_y$; in particular, there are no linear $k_x$ elements. As one of the most evident consequences of this structure, we note that circularly polarized irradiation, for which the $x$- and $y$-components are equally important, couples to such electrons only at the $\backsim { \bf A}^2$ level. \medskip \par Secondly, the energy bandgap enters in an off-diagonal $\hat{\Sigma}_x$ form, contributing to the asymmetry between the electron and hole states, in contrast to a $\hat{\Sigma}_z$-type gap. These properties, coming directly from the Hamiltonian structure, are new and have not been encountered previously. The energy dispersions are \begin{equation} \varepsilon_{\Delta}^{\pm} ({\bf k}) = \mathbb{E}_i + \sum\limits_{i = x,y} \eta_i k_i^2 \pm \left[\left( \Delta_O + \sum\limits_{i = x,y} \gamma_i k_i^2 \right)^2 + \chi^2 k_y^2 \right]^{1/2} \, , \label{disp01} \end{equation} where the upper and lower signs correspond to the electron and hole solutions, respectively. For small values of the wave vector, these dispersions are approximated as \begin{equation} \varepsilon_{\Delta}^{\pm} ({\bf k}) \backsimeq \mathbb{E}_i \pm \Delta_O + \left( \eta_x \pm \gamma_x \right) k_x^2 + \left[ \eta_y \pm \left( \gamma_y + \frac{\chi^2}{2 \Delta_O} \right) \right] k_y^2 \ . 
\end{equation} The effective masses, given by \,\cite{GGBpPRB} \begin{eqnarray} && m^{(e,h)}_{\,x} = \frac{\hbar^2}{2 \left( \eta_x \pm \gamma_x \right)} \, , \\ \nonumber && m^{(e,h)}_{\,y} = \frac{\hbar^2 / 2}{ \eta_y \pm \left( \gamma_y + \chi^2 / (2 \Delta_O) \right)} \, \end{eqnarray} are anisotropic, and this anisotropy is different for the electron and hole states as $\backsim \chi^2/\Delta_O$. \section{Electron dressed states in a single layer} \label{sect3} In this Section, we calculate the electron-light dressed states for phosphorene. As far as circularly polarized irradiation is concerned, one must consider second-order coupling in order to see how both components of the wave vector are modified. Such a consideration is critical for the vector potential given by Eq.\eqref{acirc}, but is clearly beyond the scope of conventional analytical methods. The mere presence of an off-diagonal energy gap $\Delta_O$ means that there is no electron/hole-symmetric solution of the type obtained in Refs.~[\onlinecite{kibissrep,kibisall}]. This situation also leads us to conclude that the Hamiltonian parameters, such as the energy gap, are affected at lower order than such parameters for Dirac fermions. Consequently, for monolayer BP, we focus on the case of linearly polarized irradiation. \subsection{Linear polarization of the dressing field and induced anisotropy} Since the electron energy dispersion relations and effective masses are intrinsically anisotropic for phosphorene, the direction of the dressing field polarization now plays a substantial role. We define this direction by an arbitrary angle $\theta_0$ from the $x$-axis and generalize the vector potential used in Ref.~[\onlinecite{kibisall}] so that it now has non-zero $x$- and $y$-components \begin{equation} {\bf A}^{L}(t) = \left\{\frac{E_0}{\omega} \cos \, \omega t\cos \theta_0; \,\frac{E_0}{\omega} \cos \, \omega t \sin \theta_0 \right\} \, . \label{Alin} \end{equation} \medskip \par The renormalized Hamiltonian for the dressed states is obtained by the canonical substitution $k_{x,y} \Rightarrow k_{x,y} - (e /\hbar)\, A_{x,y}$, where $e$ stands for the electron charge, yielding \begin{equation} \hat{\mathcal{H}}(k) = \mathbb{H}_{ph}^{\Delta} + \hat{h}_{0} + \hat{h}_{int} \, , \label{th} \end{equation} where $\mathbb{H}_{ph}^{\Delta}$ is the ``bare'', non-interacting electron Hamiltonian, given by Eq.\eqref{h01}. The zeroth-order, $k$-independent \textit{interaction Hamiltonian} may be expressed as \begin{equation} \hat{h}_{0} = \chi \, \frac{E_0 e}{\hbar \omega} \, \sin \theta_0 \, \cos \omega t \, \hat{\Sigma}_y = c_0 \, \left( \begin{array}{cc} 0 & -i \cos \omega t\\ i \cos \omega t & 0 \end{array}\right) \ , \end{equation} where $c_0 = \chi \,\sin \theta_0 \, E_0 \mathfrak{e}/(\hbar \omega) $, $\mathfrak{e} = \vert e \vert$. We conclude that the vector potential of linearly polarized light \eqref{Alin} must have a non-zero $y$-component in order to enable $\backsim {\bf A}^L$ coupling. 
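To get a feeling for the magnitude of this coupling, the following minimal sketch (Python; the values of $\chi$, $E_0$ and $\omega$ are purely illustrative assumptions of ours, not fitted phosphorene parameters) estimates the dimensionless ratio $c_0/(\hbar\omega)$ and shows that moderate dressing fields put us deep in the off-resonant regime $c_0 \ll \hbar\omega$:
\begin{verbatim}
import numpy as np

hbar   = 6.582e-16            # eV*s
chi    = 0.5                  # eV*nm, assumed magnitude of the chi parameter
E0     = 1.0e5                # V/m, dressing field amplitude (illustrative)
omega  = 2*np.pi*1.0e14       # rad/s, off-resonant dressing frequency
theta0 = np.pi/2              # y-polarization maximizes the coupling

# c0 = chi*sin(theta0)*e*E0/(hbar*omega); e*E0 enters as E0 expressed in eV/nm
c0 = chi*np.sin(theta0)*(E0*1e-9)/(hbar*omega)
print("c0 =", c0, "eV;  hbar*omega =", hbar*omega, "eV;  ratio =",
      c0/(hbar*omega))
\end{verbatim}
For these numbers $c_0/(\hbar\omega) \backsim 10^{-4}$, so that the dressing field indeed cannot be absorbed by the electrons.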
We now turn our attention to the interaction term, which is linear in $k_{x,y}$ and given by \begin{equation} \hat{h}_{int} = 2 \frac{e}{\hbar} \left( \sum\limits_{i = x,y} \eta_i k_i \, A_i^{L}(t) \, \mathbb{I}_{2 \times 2} + \sum\limits_{i = x,y} \gamma_i k_i \, A_i^{L}(t) \, \hat{\Sigma}_x \right) \, , \end{equation} or, introducing the simplifying notations $\epsilon_{\alpha} = \sqrt{\alpha_x^2 k_x^2 + \alpha_y^2 k_y^2}$, $\phi^{(\alpha)} = \tan^{-1} \left[ \alpha_y k_y / (\alpha_x k_x) \right]$ for $\alpha = \eta, \gamma$ and $c^{(2)} = (2 \mathfrak{e} E_0) / (\hbar \omega)$, we express it as \begin{equation} \hat{h}_{int} = c^{(2)} \, \cos \, \omega t \, \left[\begin{array}{cc} \epsilon_{\eta} \cos (\phi^{(\eta)} - \theta_0) & \epsilon_{\gamma} \cos (\phi^{(\gamma)} - \theta_0) \\ \epsilon_{\gamma} \cos (\phi^{(\gamma)} - \theta_0) & \epsilon_{\eta} \cos (\phi^{(\eta)} - \theta_0) \end{array} \right] \, . \end{equation} As the first step, we solve the time-dependent Schr\"odinger equation for ${\bf k} = 0$: \begin{equation} i \hbar \frac{\partial \Psi_0(t)}{\partial t} = \hat{h}_{0} \Psi_0(t) \, . \end{equation} We obtain the solution in a straightforward way to be \begin{equation} \Psi_0^{\beta = \pm 1}(t) = \frac{1}{\sqrt{2}} \, \left[ \begin{array}{c} 1 \,\\ \beta \, i \end{array} \right] \texttt{exp}\left\{ - i \beta \, \frac{c_0}{\hbar \omega} \, \sin \omega t \right\} \, . \label{str} \end{equation} It is noteworthy that even if both the energy bandgap $\Delta_O$ and the initial energy shift $\mathbb{E}_i$ are included, so that the Hamiltonian takes the form \begin{equation} \hat{h}_{0} = \mathbb{E}_i \, \mathbb{I}_{2 \times 2} + \Delta_O \, \hat{\Sigma}_x + c_0 \, \cos \omega t \, \hat{\Sigma}_y = \left( \begin{array}{cc} \mathbb{E}_i & \Delta_O -i c_0 \cos \omega t\\ \Delta_O + i c_0 \cos \omega t & \mathbb{E}_i \end{array} \right) \, , \end{equation} the solution could still be determined analytically as \begin{equation} \Psi_0^{\beta = \pm 1}(t) = \frac{1}{\sqrt{2}} \, \left[ \begin{array}{c} 1 \\ \beta \, i \end{array} \right] \texttt{exp}\left\{ - \beta \frac{i}{\hbar} \left[ \frac{c_0}{\omega} \, \sin \omega t - (\Delta_O - \beta \mathbb{E}_i) \, t \right] \right\} \, . \end{equation} We note that such a solution could only be obtained for equal diagonal energies $\mathbb{E}_i$, i.e., for the introduced non-$\Sigma_z$ type of energy gap. If the diagonal energies are not equivalent, being given by $\mathbb{E}_{1,2} = \mathbb{E}_i \pm \Delta_D$, complete symmetry between the two components of the wave function no longer exists and such a solution cannot be determined analytically. Consequently, we will use the basis set in Eq.\eqref{str} for the rest of the present calculation. Such a situation (a non-zero diagonal gap $\Delta_D$) appears if there is a relatively small non-zero vertical electrostatic field component and the symmetry between the vertically displaced phosphorus atoms is broken, similar to silicene. We will often use this mathematical generalization in the rest of our work. \par \medskip \par For finite wave vector, we present the eigenfunction as an expansion in terms of a basis set \,\cite{kibisall} \begin{equation} \Psi_{\bf k} (t) = \mathcal{F}^{\Uparrow}\, \Psi_0^{+}(t) + \mathcal{F}^{\Downarrow} \, \Psi_0^{-}(t) \, , \label{15} \end{equation} where $\mathcal{F}^{\Uparrow,\Downarrow} = \mathcal{F}^{\Uparrow,\Downarrow}(k_x,k_y \, \vert \, t)$ are scalar, time-dependent coefficients with anisotropic $k$-dependence. 
This equation immediately results in the two following identities \begin{equation} \label{16} i \hbar \frac{d \, \mathcal{F}^{\, \Uparrow,\Downarrow}}{dt} = \langle \Psi_0^{\pm}(t) \vert \delta\mathbb{H}_{ph}^{\Delta}({\bf k}) \vert \Psi_0^{+}(t) \rangle \, \mathcal{F}^{\, \Uparrow} + \langle \Psi_0^{\pm}(t) \vert\delta\mathbb{H}_{ph}^{\Delta}({\bf k}) \vert \Psi_0^{-}(t) \rangle \, \mathcal{F}^{\, \Downarrow} \, , \end{equation} where \begin{equation} \delta\mathbb{H}_{ph}^{\Delta}({\bf k}) = \mathbb{H}_{ph}^{\Delta}({\bf k}) + \hat{h}_{int} \end{equation} is the bandgap- and wave-vector-dependent portion of the total Hamiltonian \eqref{th}. This system becomes \begin{eqnarray} i \hbar \frac{d \, \mathcal{F}^{\, \Uparrow,\Downarrow}}{dt} && = \left[ \mathbb{E}_i + \eta_x k_x^2 + ( \eta_y k_y \pm \chi ) k_y + c^{(2)} \epsilon_{\eta} \cos (\phi^{(\eta)} - \theta_0 ) \, \cos{\omega t} \right] \, \mathcal{F}^{\, \Uparrow, \Downarrow} \pm \\ \nonumber && \pm \,\, i \left[ \mp i \Delta_D + \Delta_O + \gamma_x k_x^2 + \gamma_y k_y^2 + c^{(2)} \epsilon_{\gamma} \cos (\phi^{(\gamma)} - \theta_0 ) \, \cos{\omega t} \right] \, \mathcal{F}^{\, \Downarrow, \Uparrow} \, \texttt{exp}\left[ \pm 2 i \, \frac{c_0}{\hbar \omega} \, \sin \omega t \right] \, \end{eqnarray} \par \medskip \par The quasiparticle energy dispersion relations $\varepsilon_d ({\bf k})$ are calculated by using the Floquet theorem from the following substitution \begin{equation} \mathcal{F}^{\Uparrow, \Downarrow}(t) = \texttt{exp}\left[ - i \frac{\varepsilon_d ({\bf k}) }{\hbar} \, t \right] \sum\limits_{\lambda = - \infty}^{\infty} f_\lambda^{\Uparrow, \Downarrow} \, \texttt{e}^{i \lambda \omega t} \, , \end{equation} where the sum represents a periodic function of time with period $2 \pi/ \omega$. The nested exponential dependence is simplified using the Jacobi-Anger identity \begin{equation} \texttt{exp}\left[ \pm 2 i \, \frac{c_0}{\hbar \omega} \, \, \sin \omega t \right] = \sum\limits_{\nu = - \infty}^{\infty} \texttt{e}^{i \nu \omega t} \, J_\nu \left( \frac{\pm 2 c_0}{\hbar \omega}\right)\ . \end{equation} The orthonormality of the Fourier expansion results in the following infinite system of equations, $\mu = 0, \pm 1, \pm 2, \ldots$ \begin{eqnarray} \nonumber && \varepsilon_d ({\bf k}) \, f^{\Uparrow, \Downarrow}_\mu = \left[ \mu \, \hbar \omega + \mathbb{E}_i + \eta_x k_x^2 + ( \eta_y k_y \pm \chi ) k_y \right] \,f^{\Uparrow, \Downarrow}_\mu + \left[ \frac{c^{(2)}}{2} \epsilon_{\eta} \cos (\phi^{(\eta)} -\theta_0) \right] \, \left( f^{\Uparrow, \Downarrow}_{\mu+1} + f^{\Uparrow, \Downarrow}_{\mu-1} \right) + \sum\limits_{\lambda = -\infty}^{\infty} f^{\Downarrow, \Uparrow}_\lambda \times \\ && \times \, \left\{ \left[ \Delta_D \pm i \left( \Delta_O + \gamma_x k_x^2 + \gamma_y k_y^2 \right) \right] \, J_{\mu-\lambda} \left( \frac{\pm 2 c_0}{\hbar \omega} \right) + \left[ \frac{c^{(2)}}{2} \epsilon_{\gamma} \cos (\phi^{(\gamma)} - \theta_0) \right] \, \sum\limits_{\alpha = \pm 1} J_{\mu+ \alpha -\lambda} \left( \frac{\pm 2 c_0}{\hbar \omega} \right) \right\} \, . \label{main} \end{eqnarray} In our consideration, the frequency of the off-resonant dressing field is high enough so that only the diagonal elements in the eigenvalue equation \eqref{main} are retained. However, if we need to include the first-order electron-field coupling terms $\backsim c^{(2)}$, we must keep the terms with $\lambda = \mu \pm 1$. 
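Before truncating Eq.~\eqref{main}, it is instructive to test the Floquet machinery on the exactly solvable ${\bf k}=0$ problem. The following minimal sketch (Python with \texttt{NumPy}/\texttt{SciPy}; the units $\hbar = \omega = 1$ and the value of $c_0$ are our own illustrative choices) extracts the quasienergies from the one-period propagator and confirms that the pure dressing term in Eq.~\eqref{str} produces no quasienergy shift for any $c_0$, so that all nontrivial structure at finite ${\bf k}$ originates from the couplings in Eq.~\eqref{main}:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

hbar, omega, c0 = 1.0, 1.0, 0.7      # illustrative units, hbar = omega = 1
Sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])

# one-period propagator of h_0(t) = c0*cos(omega*t)*Sigma_y
T, steps = 2*np.pi/omega, 4000
dt = T/steps
U = np.eye(2, dtype=complex)
for n in range(steps):
    t = (n + 0.5)*dt                 # midpoint rule
    U = expm(-1j*c0*np.cos(omega*t)*Sy*dt/hbar) @ U

# quasienergies: eigenphases of U folded into the first Floquet zone
eps = -hbar*np.angle(np.linalg.eigvals(U))/T
print(eps)                           # -> [0, 0] up to numerical accuracy
\end{verbatim}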
\medskip \par In the simplest case, where only diagonal elements are kept, the quasiparticle energy dispersion relations are \begin{equation} \varepsilon_d ({\bf k}) = \mathbb{E}_i + \sum\limits_{i=x,y} \eta_i k_i^2 \pm \left\{ \chi^2 k_y^2 + \left[\Delta_D^2 + \left(\Delta_O + \sum\limits_{j=x,y} \gamma_j k_j^2 \right)^2 \right] \, J_0^2 \left( \frac{2 c_0}{\hbar \omega} \right) \right\}^{1/2} \, . \end{equation} This result is valid only if $c_0 = \chi \,\sin \theta_0 \, E_0 \mathfrak{e}/(\hbar \omega) \neq 0$, i.e., if there is a finite $y$-component of the polarization direction of the dressing field. For small wave vectors, this dispersion is approximated as \begin{equation} \varepsilon_d ({\bf k}) = \mathbb{E}_i \pm \left\{ \left(1 - \alpha_c^2\right) \left[ \Delta_D^2 + \Delta_O^2 \right] \right\}^{1/2} + \left[ \eta_x \pm \gamma_x \, \frac{\sqrt{1 - \alpha_c^2} \, \Delta_O}{ \sqrt{ \Delta_D^2 + \Delta_O^2}} \right] \, k_x^2 + \left[ \eta_y \pm \frac{\gamma_y \, \Delta_O \, \left(1 - \alpha_c^2\right) + \chi^2/2}{ \sqrt{ \left(1 - \alpha_c^2 \right) \left[ \Delta_D^2 + \Delta_O^2 \right]}}\right]\, k_y^2 \, , \end{equation} where $\alpha_c = 2 c_0 / (\hbar \omega)$ is a dimensionless coupling coefficient. The electron effective masses are now readily obtained. If there is no $\Sigma_z$-type energy bandgap $\Delta_D$, the expressions simplify to \begin{eqnarray} && m^{(e,h)}_{\,x} = \frac{\hbar^2}{2 \left( \eta_x \pm \tilde{\alpha}_c \, \gamma_x \right)} \, , \\ \nonumber && m^{(e,h)}_{\,y} = \frac{\hbar^2 / 2}{ \eta_y \pm \left[ \tilde{\alpha}_c \, \gamma_y + \chi^2 / (2 \Delta_O \tilde{\alpha}_c \,) \right]} \, , \end{eqnarray} where $ \tilde{\alpha}_c = \sqrt{1 - \alpha_c^2} \backsim 1 - \alpha_c^2 / 2$. The obtained energy dispersion relations are presented in Fig.~\ref{FIG:2}. It is interesting to note that both energy bandgaps are renormalized by the electron-photon interaction, showing a substantial decrease, while the diagonal terms for the initial effective masses of a ``bare'' electron, $\eta_{x,y}$, are unchanged. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{FIG2} \caption{(Color online)\ Energy dispersions of electron dressed states in phosphorene interacting with linearly-polarized light. The two panels correspond to the $x$- and $y$-components of the wave vector. For each plot, the chosen dimensionless electron-photon coupling constant is $\alpha_c = 0.0$ (black solid line), $\alpha_c = 0.15$ (red dashed curve) and $\alpha_c = 0.4$ (blue short-dashed curve).} \label{FIG:2} \end{figure} \section{Anisotropic massless fermions in few-layer phosphorus} \label{sect4} \par The central property of phosphorene, and the focus of research on it, is the anisotropy of the electron dispersion relations and effective masses. At the same time, BP-based materials have a band gap which is determined by the number of layers and varies from $0.6\,eV$ for five layers to $1.5\,eV$ for a single layer. Specifically, we consider anisotropic massless Dirac particles, which could be observed in special few-layer ($N_L > 5$) black phosphorus superlattices for a narrow range of energies. \medskip \par This anisotropic Dirac Hamiltonian \begin{equation} \mathbb{H}_{ml}^{\gamma_0} = \hbar v_F' \left( k_x \hat{\Sigma}_x + \gamma_0 k_y \, \hat{\Sigma}_y \right) \end{equation} leads to the following energy dispersion \begin{equation} \varepsilon_{\gamma_0}^{\pm} ({\bf k}) = \pm \hbar v_F' \sqrt{k_x^2 + (\gamma_0 k_y)^2} \, . 
\end{equation} For such fermions interacting with light with linear polarization along an arbitrary direction $\theta_0$, described by Eq.\eqref{Alin}, we obtain the following Hamiltonian \begin{equation} \hat{\mathcal{H}}({\bf k}) = \mathbb{H}_{ml}^{\gamma_0}({\bf k}) + \hat{h}_0 \, , \end{equation} where \begin{equation} \hat{h}_0 = \mathfrak{e} v_F' \, \mathbf{\hat{\Sigma}} \cdot {\bf A}^{L}(t) = \frac{\mathfrak{e} v_F' E_0}{\omega} \, \left( \begin{array}{cc} 0 & \texttt{e}^{ - i \theta_\gamma} \cos \omega t \\ \texttt{e}^{ i \theta_\gamma} \cos \omega t & 0 \end{array} \right) \label{h00} \end{equation} is the ${\bf k} = 0$ portion of the total Hamiltonian. Here, we also introduced $\theta_\gamma = \tan^{-1}\left[\gamma_0 \tan(\theta_0)\right]$ so that $\texttt{e}^{ \pm i \theta_\gamma} = \cos \theta_0 \pm i \gamma_0 \sin \theta_0$. We define $c_0 = \mathfrak{e} v_F' E_0 / \omega$ as the electron-photon interaction coefficient with the dimension of energy; for the frequency range considered, $c_0 \ll \hbar \omega$, so that the dressing field cannot be absorbed by the electrons. As before, we first solve the time-dependent Schr\"odinger equation for ${\bf k} = 0$ and the Hamiltonian $\hat{h}_0$. The eigenfunction is obtained in a straightforward way as \begin{equation} \Psi_0^{\beta = \pm 1}(t) = \frac{1}{\sqrt{2}} \, \left[ \begin{array}{c} 1 \,\\ \beta \texttt{e}^{i \theta_\gamma} \end{array} \right] \texttt{exp}\left\{ - i \beta \, \frac{c_0}{\hbar \omega} \, \sin \omega t \right\} \, . \label{setafm} \end{equation} \medskip \par In order to determine the solution for a finite wave vector, we once again employ the expansion \eqref{15} and solve Eq.~\eqref{16} for the time-dependent coefficients $\mathcal{F}^{\Uparrow,\Downarrow} = \mathcal{F}^{\Uparrow,\Downarrow}(k_x,k_y \, \vert \, t)$. In our case, this leads to \begin{equation} i \hbar \frac{d \, \mathcal{F}^{\, \Uparrow,\Downarrow}}{dt} = \pm \hbar v_F' k_\gamma \cos(\phi^{(\gamma)} - \theta_\gamma) \, \mathcal{F}^{\, \Uparrow,\Downarrow} \pm i \, \hbar v_F'k_\gamma \sin(\phi^{(\gamma)} - \theta_\gamma) \, \mathcal{F}^{\, \Downarrow,\Uparrow} \, \texttt{exp}\left[\pm 2 i \, \frac{c_0}{\hbar \omega} \, \, \sin \omega t \right] \, , \end{equation} where $\phi^{(\gamma)} = \tan^{-1}[\gamma_0 k_y/k_x] = \tan^{-1}[\gamma_0 \tan(\phi_0)]$ with $\tan \phi_0 = k_y/k_x$, and $k_\gamma = \sqrt{k_x^2 + (\gamma_0 k_y)^2}$. \par \medskip \par Now we again use the Floquet theorem to extract the quasiparticle energy $\varepsilon_d ({\bf k})$ and expand the remaining time-periodic functions as a Fourier series. The result is again an infinite system of equations, $\mu = 0, \pm 1, \pm 2, \ldots$ \begin{eqnarray} \varepsilon_d ({\bf k}) f^{\Uparrow, \Downarrow}_\mu = \sum\limits_{\lambda = -\infty}^{\infty} \left\{ \delta_{\mu, \lambda} \, \left[ \mu \, \hbar \omega \pm \hbar v_F' k_\gamma \cos (\phi^{(\gamma)} - \theta_\gamma) \right] \,f^{\Uparrow, \Downarrow}_\lambda \pm i \hbar v_F' k_\gamma \sin (\phi^{(\gamma)} - \theta_\gamma) \, J_{\mu-\lambda} \left( \frac{\pm 2 c_0}{\hbar \omega} \right) \, f^{\Downarrow, \Uparrow}_\lambda \right\} \, . \end{eqnarray} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{FIG3} \caption{(Color online)\ Angular dependence of the energy dispersion for anisotropic massless fermions (AMF's) subjected to a linearly polarized dressing field for chosen $\vert {\bf k} \vert = 1.0\,nm^{-1}$, shown as polar plots. 
Each panel corresponds to different polarization directions $\theta_0$ and Dirac cone anisotropy parameters $\gamma_0$. Plot $(a)$ shows the case when $\theta_0 = 0$ ($x$-axis linear polarization) and $\gamma_0 = 1/3$, while the inset $(i1)$ demonstrates the way in which the electron-field interaction affects the dispersion relations through a zeroth-order Bessel function. Panels $(b)$ and $(c)$ represent the angular dispersion for $\gamma_0 = 1$ (isotropic Dirac cone) and $\theta_0 = 0$ and $\pi/6$, respectively. Panel $(d)$ describes the situation for $\theta_0 = \pi/6$ and $\gamma_0 = 0.6$ for the main plot, and $\gamma_0 = 0.3$ in inset $(i2)$. For each panel, the electron-photon coupling parameter $\alpha = 2 c_0 /(\hbar \omega) = 0.0$ (no irradiation), $0.8$, $1.2$ and $1.6$. } \label{FIG:3} \end{figure} \par In the region of interest, i.e., for large frequency such that $\hbar \omega \gg \hbar v_F' k$ and $\hbar \omega \gg \varepsilon ({\bf k})$, we approximate $f^{\Uparrow, \Downarrow}_{\mu \neq 0} \backsimeq 0$. Finally, the eigenvalue equation becomes $\varepsilon_d ({\bf k}) f^{\Uparrow, \Downarrow}_0 = \tensor{K}(\Uparrow, \Downarrow \, \vert \, \gamma_0, {\bf k}) \times { \bf f}^{\Uparrow, \Downarrow}_0$, where \begin{equation} \tensor{K}(\Uparrow, \Downarrow \, \vert \, \gamma_0, {\bf k}) = \left[ \begin{array}{cc} \hbar v_F' k_\gamma \cos (\phi^{(\gamma)} - \theta_\gamma) & i \, \hbar v_F' k_\gamma \sin (\phi^{(\gamma)} - \theta_\gamma) \, J_0 \left[ 2 c_0/(\hbar \omega) \right] \\ - i \hbar v_F' k_\gamma \sin (\phi^{(\gamma)} - \theta_\gamma) \, J_0 \left[ - 2 c_0/(\hbar \omega) \right] & - \hbar v_F' k_\gamma \cos (\phi^{(\gamma)} - \theta_\gamma) \end{array} \right] \end{equation} The energy eigenvalues are given by \begin{equation} \varepsilon_d ({\bf k}) = \pm \hbar v_F' \sqrt{k_x^2 + (\gamma_0 k_y)^2} \, \left\{ \cos^2 (\phi^{(\gamma)} - \theta_\gamma) +\left[ \sin (\phi^{(\gamma)} - \theta_\gamma) \, J_0 \left( \frac{2c_0}{\hbar \omega} \right) \right]^2 \right\}^{1/2} \ . \end{equation} For small light intensity, $c_0 \ll \hbar \omega$, the zeroth-order Bessel function of the first kind behaves as \begin{equation} J_0 \left[ \frac{2c_0}{\hbar \omega} \right] \backsimeq 1 - \frac{c_0^2}{(\hbar \omega)^2} + \frac{c_0^4}{4 (\hbar \omega)^4} - \ldots \end{equation} and the energy dispersion is approximately \begin{equation} \varepsilon_d ({\bf k}) = \pm \hbar v_F' \left\{ \left[1 - \frac{c_0^2}{(\hbar \omega)^2} \sin^2 \theta_\gamma \right]^2 k_x^2 + \gamma_0^2 \, \left[1 - \frac{c_0^2}{(\hbar \omega)^2} \cos^2 \theta_\gamma \right]^2 k_y^2 + \frac{2\gamma_0 c_0^2}{(\hbar \omega)^2} k_x k_y \sin (2 \theta_\gamma) \right\}^{1/2} \, . \end{equation} If the light polarization is directed along the $x$-axis, then $\theta_0 = \theta_{\gamma} = 0$ and \begin{equation} \varepsilon_d ({\bf k}) = \hbar v_F' \sqrt{ k_x^2 + \gamma_0^2 \, \left[1 - \frac{c_0^2}{(\hbar \omega)^2} \right]^2 k_y^2 } \, . \end{equation} The angular dependence of the dressed-state dispersions is shown in Fig.~\ref{FIG:3}. We notice that the initially existing in-plane anisotropy is modified for all in-plane angles, in a way which depends on the dressing field polarization direction. For small intensity of the incoming radiation, polarized along the $x$-axis, the anisotropy coefficient is simply renormalized. 
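This renormalization is easy to make quantitative. The following minimal sketch (Python with \texttt{NumPy}/\texttt{SciPy}; the value $\gamma_0=0.6$ and the couplings match those used in Fig.~\ref{FIG:3}, while the units $\hbar v_F'=1$ are our own choice) evaluates the dressed dispersion along the two axes and confirms that for $x$-polarization the anisotropy coefficient is renormalized as $\gamma_0 \rightarrow \gamma_0 \, J_0\left[2c_0/(\hbar\omega)\right]$:
\begin{verbatim}
import numpy as np
from scipy.special import j0

hbar_vF, gamma0, theta0 = 1.0, 0.6, 0.0   # x-polarized dressing field

def eps_dressed(kx, ky, alpha):
    # dressed AMF dispersion; alpha = 2*c0/(hbar*omega) as in Fig. 3
    kgam = np.hypot(kx, gamma0*ky)
    phi = np.arctan2(gamma0*ky, kx)
    thg = np.arctan(gamma0*np.tan(theta0))
    return hbar_vF*kgam*np.sqrt(np.cos(phi - thg)**2
                                + (np.sin(phi - thg)*j0(alpha))**2)

for alpha in (0.0, 0.8, 1.2, 1.6):        # couplings used in Fig. 3
    ex = eps_dressed(1.0, 0.0, alpha)     # along the x-axis
    ey = eps_dressed(0.0, 1.0, alpha)     # along the y-axis
    print(alpha, ey/ex, gamma0*j0(alpha)) # last two columns coincide
\end{verbatim}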
\subsection{Circular Polarization} \medskip \par For circular polarization of the dressing radiation, the vector potential is \begin{equation} {\bf A}^C(t) = \{ A_{0,x}, \, A_{0,y} \} = \frac{E_0}{\omega} \{ \cos \, \omega t, \,\, \sin \, \omega t \} \, . \label{acirc} \end{equation} Being completely isotropic, this type of field is known to induce the metal-insulator transition in graphene,\cite{kibis0} resulting in the creation of a non-zero energy bandgap. If such a gap already exists, it could be increased or decreased depending on its initial value. \cite{kibisall} At the same time, the slope of the Dirac dispersion, known as the Fermi velocity, and the in-plane isotropy are not changed. The situation is necessarily different for the initially anisotropic Dirac cone of AMF's. \medskip \par The total Hamiltonian for the interacting quasiparticle now becomes \begin{equation} \hat{\mathcal{H}}({\bf k}) = \mathbb{H}_{ml}^{\gamma_0}({\bf k}) + \hat{h}_0^{(c)} \, , \end{equation} where the ${\bf k} = 0$ interaction term is \begin{equation} \hat{h}_0^{(c)} = \frac{\mathfrak{e} v_F E_0}{\omega} \, \mathbf{\hat{\Sigma}} \cdot {\bf A}^C (t) = \frac{c_0}{2} \, \left[ \begin{array}{cc} 0 & \sum\limits_{\alpha = \pm 1} (1 - \alpha \gamma_0) \, \texttt{e}^{i\alpha \omega t} \\ \sum\limits_{\alpha = \pm 1} (1 + \alpha \gamma_0) \, \texttt{e}^{i\alpha \omega t} & 0 \end{array} \right] \ . \end{equation} It seems rather surprising, although physically justified, that this problem is mathematically identical to that of an isotropic Dirac cone interacting with elliptically polarized light, addressed in Refs.~[\onlinecite{kibisall, goldprx}]. The interaction term can also be presented as \begin{eqnarray} && \hat{h}_0^{(c)} = \hat{\mathbb{S}}_\gamma \, \texttt{e}^{i \omega t} + \hat{\mathbb{S}}_\gamma^{\dagger} \, \texttt{e}^{ -i \omega t} \\ \nonumber && \hat{\mathbb{S}}_\gamma = \frac{c_0}{2} \sum\limits_{\alpha = \pm 1} (1 - \alpha \gamma_0) \hat{\Sigma}_\alpha \, , \end{eqnarray} where $\hat{\Sigma}_{\pm} = 1/2 \left(\hat{\Sigma}_x \pm i \hat{\Sigma}_y \right)$. This Hamiltonian represents an example of a wide class of periodically driven quantum systems. \cite{goldprx} Such problems are generally solved perturbatively, in powers of $c_0/(\hbar \omega)$, if the electron-field coupling is weak. The effective Hamiltonian for such a problem has been shown to be \begin{equation} \hat{\mathcal{H}}_{eff}({\bf k}) \backsimeq \mathbb{H}_{ml}^{\gamma_0}({\bf k}) + \frac{1}{\hbar \omega} \, \left[ \, \hat{\mathbb{S}}_\gamma, \hat{\mathbb{S}}^\dagger_\gamma \, \right] + \frac{1}{2 (\hbar \omega)^2} \, \left\{ \, \left[ \left[ \, \hat{\mathbb{S}}_\gamma, \mathbb{H}_{ml}^{\gamma_0}({\bf k}) \, \right], \hat{\mathbb{S}}^\dagger_\gamma \, \right] + h.c. \right\} + \cdots \end{equation} Evaluating this expression, we obtain \begin{equation} \hat{\mathcal{H}}_{eff}({\bf k}) = \hbar v_F' k_x \left( 1 -\frac{\gamma_0 \, c_0^2}{2 (\hbar \omega)^2} \right) \, \hat{\Sigma}_x + \hbar v_F' \, \gamma_0 k_y \left( 1 -\frac{c_0^2}{2 (\gamma_0 \, \hbar \omega)^2} \right) \, \hat{\Sigma}_y -\frac{c_0^2}{\hbar \omega} \,\gamma_0 \, \hat{\Sigma}_z \, . \end{equation} Finally, the energy dispersion is given by \begin{equation} \varepsilon_d ({\bf k}) = \pm \left\{ \left(\frac{\gamma_0 \, c_0^2}{\hbar \omega} \right)^2 + \hbar^2 v_F'^2 \left[ \left( 1 -\frac{\gamma_0 \, c_0^2}{2 (\hbar \omega)^2} \right)^2 k_x^2 + \gamma_0^2 \left( 1 -\frac{c_0^2}{2 (\gamma_0 \, \hbar \omega)^2} \right)^2 k_y^2 \right] \right\}^{1/2} \, .
\end{equation} This result is an approximation. As we mentioned, in the case of an isotropic Dirac cone, i.e., for electrons in graphene interacting with a circularly polarized dressing field, the energy gap was found to be \,\cite{kibis0,kibissrep} \begin{equation} \Delta_g/2 = \sqrt{\hbar^2 \omega^2 + 2 \, c_0^2 } - \hbar \omega \backsimeq \frac{c_0^2}{\hbar \omega} - \frac{c_0^4}{2 (\hbar \omega)^3} + ... \,\,\, , \end{equation} while the Fermi velocity $v_F$ is unaffected. \subsection{Gapped anisotropic fermions} \medskip \par We now present a generalization of the previously considered massless Dirac particles to the case of a finite energy bandgap. Two different gaps are added to the diagonal ($\Delta_D$) and off-diagonal ($\Delta_O$) terms of the Hamiltonian. Here, an anisotropic Dirac cone is combined with the energy gaps attributed to phosphorene, a single-layer structure. Even though this model does not exactly describe any of the fabricated black phosphorus structures, we consider it as an interesting mathematical generalization of the anisotropic Dirac fermions case, which may become relevant from a physical point of view. Apart from that, this is an intermediate case between phosphorene and few-layer gapless materials, which is expected to approximate the electronic properties of a system with a small number of phosphorus layers. We have \begin{eqnarray} \mathbb{H}_g = && \mathbb{E}_i \, \hat{\mathbb{I}}_{2\times 2} + \left( \hbar v_F' k_x + \Delta_O \right) \hat{\Sigma}_x + \gamma_0 \, \hbar v_F' k_y \hat{\Sigma}_y + \Delta_D \hat{\Sigma}_z = \\ \nonumber = && \left[ \begin{array}{cc} \mathbb{E}_i + \Delta_D & 0 \\ 0 & \mathbb{E}_i - \Delta_D \end{array} \right] + \Delta_O \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right] + \hbar v_F' \left[ \begin{array}{cc} 0 & k_x - i \gamma_0 k_y \\ k_x + i \gamma_0 k_y & 0 \end{array} \right] \, . \end{eqnarray} The corresponding energy dispersion is given by \begin{equation} \epsilon_{\gamma_0}^{\pm}(k) = \mathbb{E}_i \pm \left\{ \Delta_D^2 + \left[\Delta_O + \hbar v_F' k_x \right]^2 + (\gamma_0 \, \hbar v_F' k_y)^2 \right\}^{1/2} \ . \end{equation} We now address the interaction of these gapped Dirac electrons with the linearly polarized dressing field. The vector potential here is again specified by Eq.\eqref{Alin} and the new Hamiltonian is \begin{equation} \hat{\mathcal{H}}({\bf k}) = \mathbb{H}_{g}({\bf k}) + \hat{h}_0 \, , \end{equation} where $\hat{h}_0$ is identical to Eq.\eqref{h00}, since the $k$-independent gap terms do not couple to the radiation field. Following this approach, we expand the wave function for a finite wave vector $\Psi_{\bf k} (t)$ over the basis \eqref{setafm} and obtain the following equations for the expansion coefficients $\mathcal{F}^{\, \Uparrow,\Downarrow} (k_x, k_y \, \vert \, t)$ \begin{eqnarray} i \dot{\mathcal{F}}^{\, \Uparrow,\Downarrow} && = \pm \left\{ \pm \mathbb{E}_i + (\hbar v_F' k_x + \Delta_O)\cos \theta_\gamma + \gamma_0 \, \hbar v_F' k_y \sin \theta_\gamma \right\} \, \mathcal{F}^{\, \Uparrow,\Downarrow} + \\ \nonumber && + i \left\{ - i \Delta_D \pm \gamma_0 \, \hbar v_F' k_y \cos \theta_\gamma \mp (\hbar v_F' k_x + \Delta_O) \sin \theta_\gamma \right\} \, \mathcal{F}^{\, \Downarrow,\Uparrow} \, \texttt{exp}\left[ \pm 2 i \, \frac{c_0}{\hbar \omega} \, \, \sin \omega t \right] \, .
\end{eqnarray} Similar to our previous case, we introduce the following simplifying notation \begin{eqnarray} && \phi^{(O)} = \tan^{-1} \left\{ \frac{ \gamma_0 \, \hbar v_F' k_y }{ \hbar v_F' k_x + \Delta_O } \right\} \\ \nonumber && \epsilon_{O} = \sqrt{(\hbar v_F' k_x + \Delta_O)^2 + (\gamma_0 \, \hbar v_F' k_y)^2} \ . \end{eqnarray} After performing the Floquet-theorem substitution and expansions similar to the previously adopted procedure, we obtain \begin{eqnarray} \varepsilon_d ({\bf k}) f^{\Uparrow, \Downarrow}_\mu = && \left[ \mu \, \omega + \mathbb{E}_i \pm \epsilon_{O} \cos (\phi^{(O)} - \theta_\gamma) \right] \,f^{\Uparrow, \Downarrow}_\mu + \\ \nonumber + && \sum\limits_{\lambda = -\infty}^{\infty} \left[ \Delta_D \pm i \epsilon_{O} \sin (\phi^{(O)} - \theta_\gamma) \right] \, J_{\mu-\lambda} \left( \frac{\pm 2 c_0}{\hbar \omega} \right) \, f^{\Downarrow, \Uparrow}_\lambda \, . \end{eqnarray} It is interesting to note that the bandgap $\Delta_D$, which appears on the main diagonal of the Hamiltonian, now enters the off-diagonal coupling and is therefore affected by the Bessel function. The energy dispersions are now given by \begin{eqnarray} \varepsilon_d ({\bf k}) && = \mathbb{E}_i \pm \left\{ \left[\epsilon_{O} \, \cos (\phi^{(O)} - \theta_\gamma)\right]^2 +\left[ \Delta_D^2 + \epsilon_{O}^2 \, \sin^2 (\phi^{(O)} - \theta_\gamma)\right] \, J_0^2 \left( \frac{2c_0}{\hbar \omega} \right) \right\}^{1/2} \, , \label{1p} \\ \nonumber \epsilon_{O} && = \sqrt{(\hbar v_F' k_x + \Delta_O)^2 + (\gamma_0 \, \hbar v_F' k_y)^2} \,\, . \end{eqnarray} If the electron-photon interaction is small, i.e., $\alpha_c = 2 c_0/(\hbar \omega) \ll 1$, the energy dispersion relation is approximated by \begin{eqnarray} \label{2p} \left( \varepsilon_d ({\bf k}) - \mathbb{E}_i \right)^2 && \backsimeq \left(1 - \frac{\alpha_c^2}{2}\right) \, \Delta_D^2 + \left[1 - \frac{\alpha_c^2}{4} \sin^2 \theta_\gamma \right]^2 (\hbar v_F' k_x + \Delta_O)^2 + (\hbar v_F')^2 \, \gamma_0^2 \, \left[1 - \frac{\alpha_c^2}{4} \cos^2 \theta_\gamma \right]^2 k_y^2 + \\ \nonumber && + \,\, \frac{\gamma_0 \, \alpha_c^2}{2} \, \hbar v_F' (\hbar v_F' k_x + \Delta_O) k_y \, \sin (2 \theta_\gamma) \, . \end{eqnarray} From Eqs.~\eqref{1p} and \eqref{2p} we note that the diagonal $\Delta_D$ bandgap is decreased as $\backsim \alpha_c^2$, similar to that for gapped graphene or the transition metal dichalcogenides, \cite{kibisall} while the off-diagonal gap modification drastically depends on the direction of the radiation polarization. This behavior has no analogy in the previously considered structures. At the same time, the Fermi velocity components are modified similarly to the case of anisotropic massless fermions. \section{Concluding Remarks} \label{sect5} \medskip \par We have derived closed-form analytic expressions for electron-photon dressed states in one- and few-layer black phosphorus. The energy gap is determined by the number of layers comprising the system, reaching its largest value for a single layer (phosphorene) and effectively vanishing for a few-layer structure. The latter case gives rise to anisotropic massless fermions, which exhibit an anisotropic Dirac cone. Since the anisotropy is the most significant property common to all cases of BP, we focused on a linearly polarized dressing field in an arbitrary direction. As a result, we demonstrated that the Hamiltonian parameters are modified in an essentially different way compared to all isotropic Dirac systems. Anisotropy of the energy dispersion is modified in all directions, as well as all the electron effective masses.
If both diagonal and off-diagonal gaps are present, the latter one remains unaffected, but only for a specific light polarization direction. For AMF's interacting with circularly polarized light, the problem is mathematically identical to that of Dirac electrons irradiated by a field with elliptical polarization. In that case we found that an initially absent energy bandgap is created and the non-equivalent Fermi velocities in the various directions are renormalized. These results are expected to be of high importance for electronic device applications based on the recently discovered and fabricated black phosphorus. \medskip \acknowledgments D.H. would like to acknowledge the support from the Air Force Office of Scientific Research (AFOSR).
\section{Esakia duality for Heyting algebras}\label{s:preliminaries} We assume familiarity with the theory of distributive lattices; for background see, e.g., \cite[Chapters II-III]{BD74}. Recall that a \emph{Heyting algebra} is a bounded distributive lattice $A$ in which the operation $\wedge$ has a residual $\to$, that is, $a\wedge b\leq c$ iff $b\leq a\to c$ for all $a,b,c\in A$. An example of a Heyting algebra is provided by the lattice of opens of an arbitrary topological space. Notice that in a Heyting algebra of this form, the supremum of any subset exists (i.e., the Heyting algebra is \emph{complete}), which is not the case in all Heyting algebras. We next recall the basics of Esakia duality, which gives a topological representation for all Heyting algebras. See, e.g., \cite{Geh2014Esak} for more details. An \emph{Esakia space} is a partially ordered compact space $(X,\leq)$ such that: \emph{(i)} $X$ is totally order-disconnected, that is, whenever $x\not\leq y$ are elements of $X$, there is a clopen (=closed and open) $U\subseteq X$ that is an up-set for $\leq$ and satisfies $x\in U$ but $y\notin U$; and \emph{(ii)} $\d{C}$ is clopen whenever $C$ is a clopen subset of $X$. Given a Heyting algebra $A$, the set $X_A$ of prime filters of $A$ partially ordered by set-theoretic inclusion is an Esakia space when equipped with the Stone topology generated by the sets $\w{a}:=\{x\in X_A\mid a\in x\}$, for $a\in A$, and their complements. Moreover, if $h\colon A\to B$ is a homomorphism of Heyting algebras then $f := h^{-1}\colon X_B\to X_A$ is continuous, and a \emph{p-morphism}, i.e.\ $\u{f^{-1}(S)}=f^{-1}(\u{S})$ for every subset $S\subseteq X_A$. This correspondence yields a duality, known as Esakia duality \cite{Esak1974}, between the category of Heyting algebras and their homomorphisms, and the category of Esakia spaces and continuous p-morphisms. In particular, a Heyting algebra $A$ can be recovered, up to isomorphism, from its dual Esakia space as the algebra of clopen up-sets of $X_A$, where the assignment $a\mapsto \w{a}$ is a Heyting algebra isomorphism. In dealing with properties of $\mathrm{IPC}$, a key r\^ole is played by finitely generated free Heyting algebras and their dual spaces. Let $\F{\v}$ be the Heyting algebra free on a finite set $\v$, that is, the algebra of $\mathrm{IPC}$-equivalence classes of propositional intuitionistic formulae in the variables $\v$, and $\E{\v}$ its dual Esakia space. A Heyting algebra is \emph{finitely presented} if it is the quotient of $\F{\v}$ under a finitely generated congruence; such congruences can in fact always be generated by a single pair of the form $(\phi,\top)$. We call an Esakia space \emph{finitely copresented} if its Heyting algebra of clopen up-sets is finitely presented. Equivalently, an Esakia space is finitely copresented if it is order-homeomorphic to a clopen up-set of $\E{\v}$ for some finite $\v$. We recall two basic facts about such spaces in Proposition~\ref{p:properties-of-co-free-spaces}. The first item amounts to the completeness of $\mathrm{IPC}$ with respect to its canonical model, and the second item is the dualization of the universal property of free algebras. \begin{proposition}\label{p:properties-of-co-free-spaces} Let $\v=\{p_1,\ldots,p_l\}$ be any finite set of variables. \begin{enumerate} \item For any two formulae $\phi(\v)$ and $\psi(\v)$, $\phi \vdash_\mathrm{IPC} \psi$ if, and only if, $\w{\phi} \subseteq \w{\psi}$ as subsets of $\E{\v}$.
\item If $Y$ is an Esakia space and $C_1,\ldots,C_l$ are clopen up-sets of $Y$, there exists a unique continuous p-morphism $h_Y\colon Y\to \E{\v}$ satisfying $h_Y^{-1}(\w{p_i})=C_i$ for all $i\in\{1,\ldots, l\}$. \end{enumerate} \end{proposition} \begin{proof} For item $1$, we have $\phi \vdash_\mathrm{IPC} \psi$ iff $[\phi]\leq[\psi]$ in $\F{\v}$, which in turn is equivalent to $\w{\phi} \subseteq \w{\psi}$ because $\w{-}$ is an isomorphism of Heyting algebras. For item $2$, note that the choice of the clopen up-sets $C_1,\ldots,C_l$ gives a function from $\v$ to the algebra of clopen up-sets of $Y$. The dual map of the unique homomorphism lifting this function is $h_Y$. \end{proof} \section{Open maps and uniform interpolation}\label{s:interpolation-and-open-mapping-theorem} The main aim of this paper is to prove the following theorem. \begin{theorem}\label{thm:fin-gen} Every continuous p-morphism between finitely copresented Esakia spaces is an open map. \end{theorem} We show first that Pitts' uniform interpolation theorem follows in a straightforward manner from Theorem~\ref{thm:fin-gen} and the Craig interpolation theorem for $\mathrm{IPC}$ \cite{Sch62}. Throughout the paper, $\v$ will denote a finite set of variables, and $v$ a variable not in $\v$. \begin{theorem}[Pitts \cite{Pit1992}] Let $\phi(\vp)$ be a propositional formula. There exist propositional formulae $\phi_R(\v)$ and $\phi_L(\v)$ such that, for any formula $\psi(\v,\o{q})$ not containing $v$, \begin{align*} \phi \vdash_\mathrm{IPC} \psi &\iff \phi_R \vdash_\mathrm{IPC} \psi,\\ \psi \vdash_\mathrm{IPC} \phi &\iff \psi \vdash_\mathrm{IPC} \phi_L. \end{align*} \end{theorem} \begin{proof} By the Craig interpolation theorem for $\mathrm{IPC}$, it suffices to prove the statement for any formula $\psi$ whose variables are contained in $\v$ (cf., e.g., \cite[Prop.~3.5]{vGMT2017}). Since $\w{\phi}\subseteq\E{\vp}$ is a clopen up-set, it follows at once from Theorem~\ref{thm:fin-gen}, and the definitions of Esakia space and p-morphism, that $\f(\w{\phi})$ and $(\d\f(\w{\phi}^{\, c}))^c$ are clopen up-sets of $\E{\v}$. Thus there exist formulae $\phi_R(\v)$ and $\phi_L(\v)$ such that $\w{\phi_R}=\f(\w{\phi})$ and $\w{\phi_L}=(\d\f(\w{\phi}^{\, c}))^c$. It is easy to see, using the first part of Proposition~\ref{p:properties-of-co-free-spaces}, that $\phi_R$ and $\phi_L$ satisfy the conditions in the statement. \end{proof} As a first step towards proving Theorem~\ref{thm:fin-gen}, we will show that the theorem follows from a special case, namely Proposition~\ref{prop:open-mapping} below. Denote by $i$ the embedding of free Heyting algebras $\F{\v}\hookrightarrow\F{\vp}$ that is the identity on $\v$. Let $f\colon \E{\vp}\twoheadrightarrow\E{\v}$ be the continuous p-morphism dual to $i$. \begin{proposition}\label{prop:open-mapping} The map $\f\colon \E{\vp}\twoheadrightarrow\E{\v}$ is open. \end{proposition} \begin{proof}[Proof that Proposition~\ref{prop:open-mapping} implies Theorem~\ref{thm:fin-gen}.] Let $g\colon X_A\to X_B$ be any continuous p-morphism between Esakia spaces. If $X_A$ and $X_B$ are dual to finitely presented Heyting algebras $A$ and $B$, respectively, then (see, e.g.,\ \cite[Lemma 3.11]{vGMT2017}) there are finite presentations $j_A\colon \F{\v,\o{q}}\twoheadrightarrow A$ and $j_B\colon \F{\v}\twoheadrightarrow B$ such that $j_A\circ i=g^{-1}\circ j_B$, where $i\colon \F{\v}\hookrightarrow\F{\v,\o{q}}$ is the natural embedding.
Dually, we have the following commutative square \[\begin{tikzcd} \E{\v,\o{q}} \arrow[twoheadrightarrow]{r}& \E{\v} \\ X_A \arrow[hookrightarrow]{u} \arrow{r}{g} & X_B \arrow[hookrightarrow]{u} \end{tikzcd}\] where the top horizontal map is open by Proposition~\ref{prop:open-mapping}. Since the presentation $j_A$ is finite, the dual map identifies the Esakia space $X_A$ with a clopen up-set of $\E{\v,\o{q}}$, so that the left vertical map is open. Therefore $g\colon X_A\to X_B$ is also open. \end{proof} The connection between the existence of uniform interpolants and open maps can be explained in terms of adjoints. Indeed, it was already observed in \cite{Pit1992} that the uniform interpolation theorem is equivalent to the existence of both left and right adjoints for the embeddings $\F{\v}\hookrightarrow\F{\vp}$. In turn, it is not difficult to see that if a map between Esakia spaces is open, then its dual Heyting algebra homomorphism has left and right adjoints. Theorem~\ref{thm:fin-gen} implies that these properties always hold for homomorphisms between finitely presented Heyting algebras. The following example shows that the two properties are distinct in general. In this sense, our open mapping theorem establishes a slightly stronger property than uniform interpolation. \begin{example} We give an example of a Heyting algebra homomorphism $h\colon A\to B$ such that $h$ has both left and right adjoints, but its dual map is not open. For any natural number $n \geq 1$, denote by $\mathbb{n} = \{1 < \dots < n\}$ the finite chain with $n$ elements and the discrete topology. Let $X=\mathbb{1}+\mathbb{2}+\cdots$, the disjoint order-topological sum of countably many finite discrete chains, and let $\alpha X=X\cup\{\infty\}$ be its one-point compactification. Extend the partial order on $X$ to a partial order on $\alpha X$ by defining $x \leq \infty$ for all $x \in \alpha X$. Denote by $\alpha\N = \N \cup \{\infty\}$ the one-point compactification of a discrete countable space, partially ordered by $x \leq y$ iff $x = y$ or $y = \infty$. Then $\alpha X$ and $\alpha\N$ are both Esakia spaces. Define a function $f\colon \alpha X\to \alpha \N$ by $f(\infty) := \infty$, and, for any $x\in\mathbb{n} \subseteq \alpha X$, $f(x) := n$ if $x < n$ and $f(x) := \infty$ if $x = n$. Note that $f$ is a continuous p-morphism. Let $h \colon A \to B$ be the dual Heyting algebra homomorphism. If $U\subseteq \alpha X$ is a clopen up-set, then $f(U)$ is a clopen up-set, and if $V\subseteq \alpha X$ is a clopen down-set then $\d f(V)$ is a clopen down-set. Therefore, $h$ admits left and right adjoints. However, the map $f$ is not open. Indeed, for any $n \geq 2$, $\mathbb{n} \subseteq X$ is open, but its image $f(\mathbb{n}) = \{n, \infty\}$ is not. \end{example} \begin{remark} The viewpoint of adjoint maps establishes a link between uniform interpolation for $\mathrm{IPC}$ and the theory of \emph{monadic Heyting algebras}. Indeed, recall that a monadic Heyting algebra can be described as a pair $(H,H_0)$ of Heyting algebras such that $H_0$ is a subalgebra of $H$ and the inclusion $H_0\hookrightarrow H$ has left and right adjoints \cite[Theorem 5]{GBezh1}. The relation between adjointness of a Heyting algebra homomorphism, and openness of the dual map, was already investigated in this framework. See, e.g., \cite[p.\ 32]{GBezh2} where an example akin to the one above is provided.
\end{remark} \section{Clopen up-sets step-by-step}\label{s:step-by-step} The \emph{$\to$-degree} of a propositional formula $\phi$, denoted by $|\phi|$, is the maximum number of nested occurrences of the connective $\to$ in $\phi$; $\phi$ has $\to$-degree $0$ if the connective $\to$ does not occur in $\phi$. Fix a finite set of variables $\v$. For a point $x$ in $\E{\v}$ and $n\in\N$, we write $\T_n(x)$ for the \emph{degree $n$ theory of $x$}, that is \begin{align*} \T_n(x):=\{\phi(\v)\mid |\phi|\leq n \ \text{and} \ \phi\in x\}. \end{align*} We define a quasi-order $\leq_n$ on $\E{\v}$ by setting \begin{align*} x \leq_n y \stackrel{\mathrm{def}}{\iff} \T_n(x) \subseteq \T_n(y), \end{align*} and we standardly define an equivalence relation $\sim_n$ on $\E{\v}$ by: \begin{align*} x \sim_n y \stackrel{\mathrm{def}}{\iff} x \leq_n y \text{ and } y \leq_n x \iff \T_n(x)=\T_n(y). \end{align*} We remark that $\bigcap_{n\in\N}{\leq_n}={\leq}$, the natural order of $\E{\v}$. Moreover, for every $n\in \N$, there are, up to provable equivalence, only finitely many formulae of $\to$-degree at most $n$. In particular, $\sim_n$ has finite index. \begin{remark}\label{rmk:clopen-up-sets-n-upsets} Notice that: \emph{a subset $S\subseteq \E{\v}$ is of the form $\w{\phi}$ for some formula $\phi(\v)$ of $\to$-degree $\leq n$ if, and only if, it is an up-set with respect to $\leq_n$. Thus, $S$ is a clopen up-set if, and only if, it is an up-set with respect to $\leq_n$ for some $n\in\N$. Hence, in particular, $\sim_n$-equivalence classes are clopen.} In this sense, the quasi-orders $\leq_n$ yield the clopen up-sets of the space $\E{\v}$ `step-by-step'. \end{remark} The next proposition accounts for the Ehrenfeucht-Fraiss\'e games employed in \cite{GZ1995}. In our setting, these combinatorial structures reflect the interplay between the natural order of $\E{\v}$ and the quasi-orders $\leq_n$. \begin{proposition}\label{prop:n-plus-one-equiv} Suppose $x,y\in\E{\v}$ and $n\in \N$. The following equivalences hold. \begin{enumerate} \item $x \leq_0 y$ if, and only if, for each variable $p_i\in\v$, $p_i\in x$ implies $p_i\in y$; \item $x \leq_{n+1} y$ if, and only if, for each $y'\in\u{y}$ there exists $x' \in\u{x}$ such that $x'\sim_n y'$. \end{enumerate} \end{proposition} \begin{proof} Item 1 follows at once from the fact that every formula $\phi(\v)$ of $\to$-degree $0$ is equivalent to a finite disjunction of finite conjunctions of variables, along with the fact that $x,y$ are prime filters. In order to prove the left-to-right implication in item 2, assume $x \leq_{n+1} y$. Since $\sim_n$ has finite index, choose a finite set $\{y_1,\ldots,y_k\}\subseteq \u{y}$ such that each $y'\in\u{y}$ is $\sim_n$-equivalent to some $y_i$. It suffices to prove that for each $i\in\{1,\ldots,k\}$ there is $x_i\in\u{x}$ with $x_i\sim_n y_i$. To this aim, consider the following formula, $\phi$, of $\to$-degree $\leq n+1$, defined by \[ \phi:= \bigvee_{i=1}^k \left(\bigwedge{\T_n(y_i)}\to\bigvee{\T_n(y_i)^{c}}\right) \] where the complement is relative to the set of formulae of $\to$-degree at most $n$. It follows from the definitions of the logical connectives and of $\sim_n$ that, for every $z\in\E{\v}$, \[ \phi\notin z \iff \forall i\in\{1,\ldots,k\} \ \exists z_i\geq z \ \text{with} \ z_i\sim_n y_i. \] In particular, $\phi \not\in y$. Since $x \leq_{n+1} y$, also $\phi \not\in x$. Therefore, for each $i\in\{1,\ldots,k\}$ there is $x_i\in\u{x}$ satisfying $x_i\sim_n y_i$, as was to be shown.
For the right-to-left implication, it is enough to show that $\phi\to\psi\in y$ whenever $\phi(\v),\psi(\v)$ are formulae of $\to$-degree $\leq n$ such that $\phi\to\psi\in x$. This follows easily from the definitions and the assumption. \end{proof} \section{Reduction to finite Kripke models}\label{s:construction-finite-kripke-models} Fix a finite set of variables $\v$. The Esakia space $\E{\v}$ has a countable basis, and thus admits a compatible metric by the Urysohn metrization theorem, and even an ultrametric (see e.g.\ \cite[7.3.F]{Engelking}). We explicitly define such an ultrametric. Set \begin{equation*}\label{eq:metric} d\colon \E{\v}\times\E{\v}\to [0,1], \ \ (x,y)\mapsto 2^{-\min\{|\phi| \, \mid \, \phi \, \in \, x\bigtriangleup y\}} \end{equation*} where $x\bigtriangleup y$ denotes the symmetric difference of $x$ and $y$. We adopt the conventions $\min{\emptyset}=\infty$ and $2^{-\infty}=0$. It is immediate to check that $d$ is an ultrametric on the set $\E{\v}$, i.e.\ for all $x,y,z\in \E{\v}$ the following hold: \emph{(i)} $d(x,y)=0$ if, and only if, $x=y$; \emph{(ii)} $d(x,y)=d(y,x)$; \emph{(iii)} $d(x,z)\leq \max{(d(x,y),d(y,z))}$. Note that, for every $x,y \in \E{\v}$ and $n \in \mathbb{N}$, $x \sim_n y$ if, and only if, $d(x,y) < 2^{-n}$. Therefore, the open ball $B(x,2^{-n})$ of radius $2^{-n}$ centered at $x$ coincides with the equivalence class $[x]_n:=\{y\in\E{\v}\mid y\sim_n x\}$, which is clopen by Remark~\ref{rmk:clopen-up-sets-n-upsets}. \begin{lemma}\label{l:compatible-metric} The topology of the Esakia space $\E{\v}$ is generated by the clopen balls of the ultrametric $d$. \end{lemma} \begin{proof} Observe that, for any formula $\phi(\v)$, $\w{\phi}=\bigcup_{x \in \w{\phi}}[x]_{|\phi|}=\bigcup_{x\in\w{\phi}}{B(x,2^{-|\phi|})}$. Since the latter union is over finitely many clopen sets, it follows that $\w{\phi}$ is clopen in the topology induced by the ultrametric $d$. \end{proof} In order to prove that the map $\f\colon \E{\vp}\twoheadrightarrow\E{\v}$ is open, it is useful to see the spaces at hand as approximated by finite posets, in the following sense. For each $k\in \N$ consider the finite set of balls \begin{align*} X_k:=\{B(x,2^{-k})\mid x\in\E{\vp}\}=\{[x]_k \mid x \in \E{\vp}\}, \end{align*} partially ordered by $\leq_k$, and write $q_k\colon \E{\vp}\twoheadrightarrow X_k$ for the natural quotient $x\mapsto [x]_k$. For every $k'\geq k$, there is a monotone surjection $\rho_{k',k}\colon X_{k'}\twoheadrightarrow X_k$ sending $[x]_{k'}$ to $[x]_k$. Since $\f$ is non-expansive with respect to $d$, it can be `approximated' by the monotone map $\f_k\colon X_k\to Y_k$, $[x]_k\mapsto [\f(x)]_k$, where $Y_k:=\{B(y,2^{-k})\mid y\in\E{\v}\}$. \[\begin{tikzcd}[row sep=scriptsize, column sep=large] \E{\vp} \arrow{rrrr}{\f} \arrow[twoheadrightarrow]{d}[swap]{q_{k'}} & & & &\E{\v} \arrow[twoheadrightarrow]{d}{} \\ X_{k'} \arrow[dashed]{rrrr} \arrow[twoheadrightarrow]{d}[swap]{\rho_{k',k}} & & & & Y_{k'} \arrow[twoheadrightarrow]{d}{} \\ X_{k} \arrow{rrrr}{\f_k} & & & & Y_{k} \end{tikzcd}\] To prove the open mapping theorem for the dual spaces of free finitely generated Heyting algebras (i.e., Proposition~\ref{prop:open-mapping}), it is enough to show that for every clopen ball $B=B(x,2^{-n})$ in $\E{\vp}$, $f(x)$ lies in the interior of $f(B)$. This is equivalent to finding, for every $n$, a number $R(n)$ such that $B(f(x),2^{-R(n)}) \subseteq f(B(x,2^{-n}))$ for all $x \in \E{\vp}$.
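\par Incidentally, the quasi-orders $\leq_n$ underlying these finite approximants can be computed effectively: on a finite Kripke model, the inductive characterization of Proposition~\ref{prop:n-plus-one-equiv} can be implemented verbatim. The following minimal Python sketch does so; the three-point model is a hypothetical toy example included for illustration only.
\begin{verbatim}
# Step-by-step quasi-orders on a finite Kripke model, from the
# inductive characterization: x <=_0 y iff val(x) is a subset of
# val(y); x <=_{n+1} y iff every y' >= y has x' >= x with x' ~_n y'.
points = ["a", "b", "c"]
order  = {("a", "a"), ("b", "b"), ("c", "c"),  # reflexive partial order
          ("a", "c"), ("b", "c")}              # with a <= c and b <= c
val = {"a": frozenset(), "b": frozenset({"p"}), "c": frozenset({"p"})}

def up(x):                       # the up-set of a point
    return [y for y in points if (x, y) in order]

def step(le):                    # from <=_n to <=_{n+1}
    sim = {(x, y) for (x, y) in le if (y, x) in le}    # ~_n
    return {(x, y) for x in points for y in points
            if all(any((x2, y2) in sim for x2 in up(x))
                   for y2 in up(y))}

le = {(x, y) for x in points for y in points if val[x] <= val[y]}
for n in range(3):               # stabilizes quickly on finite models
    print(n, sorted(le))
    le = step(le)
\end{verbatim}
We now return to the proof of the openness of $\f$.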
Since $f(B(x,2^{-n}))$ is closed, it suffices to construct, for any $y$ with $y \sim_{R(n)} f(x)$, a sequence $(x^m)$ in $B(x,2^{-n})$ such that $f(x^m)$ converges to $y$. For the construction of such a sequence we will use Lemma~\ref{l:combinatorial-lemma}, which is a variant of the lemmas in \cite[Section 4]{GZ1995} and in \cite[Section 5]{Vis1996}. Before stating Lemma~\ref{l:combinatorial-lemma} and showing how it completes the above argument, we introduce some notation. Recall that a \emph{Kripke model} on the finite set of variables $\v$ (a \emph{$\v$-model}, for short) is a partially ordered set $(M,\leq)$ equipped with a monotone map $c_M\colon M\to 2^{\v}$. If $M$ is a finite $\v$-model, then by the second part of Proposition~\ref{p:properties-of-co-free-spaces} there is a unique p-morphism $h_M\colon M\to\E{\v}$ such that $h_M^{-1}(\w{p_i})=c_M^{-1}(\u{p_i})$ for every $p_i\in\v$. In Lemma~\ref{l:combinatorial-lemma} we will construct a $(\vp)$-model $M$ which is a sub-poset of $X_n \times Y_m$, where $m \geq n$. Given any sub-poset $M$ of $X_n \times Y_m$, we have a diagram \begin{equation}\label{eq:Mdiagram} \begin{tikzcd}[row sep=small] {} & M \arrow[bend right=20]{dl}[swap]{\pi_1} \arrow[bend left=20]{dr}{\pi_2} \arrow{dd}{\xi} & \\ X_n & & Y_m \\ {} & X_{m} \arrow[bend left=20]{ul}{\rho_{m,n}} \arrow[bend right=20]{ur}[swap]{\f_{m}} & \end{tikzcd} \end{equation} where $\xi\colon M\to X_{m}$ is defined as $\xi:=q_m\circ h_M$ and $\pi_1\colon M\to X_n$, $\pi_2\colon M\to Y_m$ are the natural projections. \begin{lemma}\label{l:combinatorial-lemma} Let $n\in \N$. There exists an integer $R(n)\geq n$ such that, for every $m\geq R(n)$, there is a finite $(\vp)$-model $M$ which is a sub-poset of $X_n\times Y_m$ and satisfies the following properties: \begin{enumerate} \item $\{([x]_n,[y]_m)\mid y\sim_{R(n)}\f(x)\}\subseteq M$; \item $\rho_{m,n}\circ \xi=\pi_1$; \item $\f_{m}\circ \xi=\pi_2$. \end{enumerate} In particular, items (2) and (3) together correspond to the commutativity of diagram \eqref{eq:Mdiagram}. \end{lemma} We prove Lemma~\ref{l:combinatorial-lemma} in the next section. We conclude by showing how Proposition~\ref{prop:open-mapping}, and hence Theorem~\ref{thm:fin-gen}, follows from it. \begin{proof}[Proof of Proposition~\ref{prop:open-mapping}] It suffices to prove that $B(f(x),2^{-R(n)})$ is contained in $f(B(x,2^{-n}))$ for every $x \in \E{\vp}$ and $n \in \mathbb{N}$. Let $y \sim_{R(n)} f(x)$. For every $m \geq R(n)$, $([x]_n,[y]_m) \in M$ by item 1 in Lemma~\ref{l:combinatorial-lemma}; we define $x^m := h_M([x]_n,[y]_m)$. By item 2 in Lemma~\ref{l:combinatorial-lemma}, $[x^m]_n = \rho_{m,n}(\xi([x]_n,[y]_m)) = [x]_n$, so $x^m \in B(x,2^{-n})$. By item 3 in Lemma~\ref{l:combinatorial-lemma} we have $[f(x^m)]_m = f_m(\xi([x]_n,[y]_m)) = [y]_m$, so that $f(x^m)$ converges to $y$. \end{proof} \section{Proof of Lemma~\ref{l:combinatorial-lemma}}\label{s:proof-of-combinatorial-lemma} Fix $n\in \N$. For every $x\in\E{\vp}$, define $r(x)$ to be the number of $\sim_n$-equivalence classes in $\E{\vp}$ above $x$, i.e., \[ r(x):= \#\{[x']_n \mid x'\in\u{x}\}=\#q_n(\u{x}). \] Moreover, set $R := R(n) = 2(\#X_n) - 1$. Fix an arbitrary integer $m\geq R$. For elements $(x,y)$ and $(x',y')$ in $\E{\vp}\times \E{\v}$, we say that $(x',y')$ \emph{is a witness for} $(x,y)$ if $x' \geq x$, $y' \leq y$, $x' \sim_n x$, $\f(x) \sim_{2r(x)-1} y'$, and $\f(x') \sim_{2r(x)-2} y$.
Note that, by definition, $f(x) \sim_{2r(x)-1} y$ if, and only if, $(x,y)$ is a witness for itself. Let $M:=\{([x]_n,[y]_m) \in X_n\times Y_m \mid \text{there exists a witness for} \ (x,y)\}$, and equip it with the product order. Defining $c_M\colon M\to 2^{(\vp)}$ by $c_M([x]_n,[y]_m):=\{u\in(\vp)\mid x\in\w{u} \, \}$ turns $M$ into a $(\vp)$-model. We prove that it satisfies the three required properties. \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item If an element $([x]_n,[y]_m) \in X_n\times Y_m$ satisfies $y\sim_R \f(x)$, then $(x,y)$ is a witness for itself because $2r(x)-1\leq 2(\#X_n)-1 = R$. Therefore $([x]_n,[y]_m) \in M$. \item Observe that $\rho_{m,n}\circ \xi=q_n\circ h_M$. Hence we must show that $h_M([x]_n,[y]_m) \sim_n x$. Assume, without loss of generality, that $(x,y)$ admits a witness. We will prove by induction on $k$ that, for any $0 \leq k \leq n$, \begin{equation}\label{eq:ind1}\tag{$P_k$} \forall ([x]_n,[y]_m)\in M, \ h_M([x]_n,[y]_m) \sim_k x. \end{equation} For $k = 0$, \eqref{eq:ind1} is true by definition of $c_M$. We prove \eqref{eq:ind1} holds for $k+1$ provided it holds for $k\in\{0,\ldots,n-1\}$. We will show that (a) $h_M([x]_n,[y]_m)\leq_{k+1}x$ and (b) $x\leq_{k+1} h_M([x]_n,[y]_m)$. \begin{description} \item[(a)] Consider an arbitrary $w\geq x$. In view of Proposition~\ref{prop:n-plus-one-equiv} it is enough to find $z\geq h_M([x]_n,[y]_m)$ such that $z\sim_k w$. Let $(x',y')$ be a witness for $(x,y)$. Then $x'\sim_n x$, so that there is $x''\geq x'$ with $x''\sim_{n-1} w$, whence $x''\sim_{k} w$. Now, two cases: \begin{description} \item[(i)] If $x''\sim_n x$, in view of the inductive hypothesis $h_M([x]_n,[y]_m)\sim_k x$ we have $h_M([x]_n,[y]_m)\sim_k x'' \sim_k w$. Thus we can set $z:=h_M([x]_n,[y]_m)$. \item[(ii)] Else, suppose $x''\not\sim_n x$. Since $\f(x')\sim_{2r(x)-2}y$ and $\f(x'')\geq \f(x')$, there exists $z'\geq y$ with $z' \sim_{2r(x)-3}\f(x'')$. Now, $x''\not\sim_n x$ entails $r(x'')<r(x)$, hence $2r(x'')-1\leq 2r(x)-3$, showing that $(x'',z')$ is a witness for itself. Setting $z:=h_M([x'']_n,[z']_m)$ we see that $z\geq h_M([x]_n,[y]_m)$ because $h_M$ is monotone, and $z\sim_k x''\sim_k w$ by the inductive hypothesis applied to $z$. \end{description} \item[(b)] Given an arbitrary $z\geq h_M([x]_n,[y]_m)$ we must exhibit $w\geq x$ such that $w\sim_k z$. Since $h_M$ is a p-morphism, there is $([x']_n,[y']_m)\geq ([x]_n,[y]_m)$ such that $h_M([x']_n,[y']_m)=z$. By the inductive hypothesis, $h_M([x']_n,[y']_m)\sim_k x'$. Now, $x\leq_n x'$ implies the existence of $w\geq x$ satisfying $w\sim_{n-1}x'$, therefore $w\sim_k x' \sim_k z$. \end{description} \item We first prove the following claim. {\bf Claim.} $\pi_2\colon M\to Y_m$ is a p-morphism. \begin{proof}[Proof of Claim] Pick $([x]_n,[y]_m)\in M$ and $z\in\E{\v}$ with $y\leq_m z$. We need to prove that there is $w\in\E{\vp}$ such that $([w]_n,[z]_m)\in M$. Suppose, without loss of generality, that $(x,y)$ admits a witness $(x',y')$.
Then $\f(x)\sim_{2r(x)-1} y'\leq y\leq_m z$ entails $\f(x)\leq_{2r(x)-1} z$ because $m\geq 2r(x)-1$. Since $f$ is a p-morphism, there exists $x''\geq x$ such that $\f(x'')\sim_{2r(x)-2}z$. We distinguish two cases, as above: \begin{description} \item[(i)] If $x''\sim_n x$, set $w:=x$. Then $(x'',y')$ is a witness for $(w,z)$. \item[(ii)] If $x''\not\sim_n x$, set $w:=x''$. It is easy to see, reasoning as in case (ii) of the proof of item $(2)$, that $(w,z)$ is a witness for itself.\qedhere \end{description} \end{proof} We use the claim to prove the identity $\f_{m}\circ \xi=\pi_2$. We show by induction that, for any $0 \leq k \leq m$, \begin{equation}\label{eq:ind2}\tag{$Q_k$} \forall ([x]_n,[y]_m)\in M, \ \f(h_M([x]_n,[y]_m))\sim_k y. \end{equation} For $k = 0$, \eqref{eq:ind2} is true because $y \sim_0 f(x)$. We prove \eqref{eq:ind2} holds for $k+1$ if it holds for $k\in\{0,\ldots,m-1\}$. As in item $2$, we prove that (a) $\f(h_M([x]_n,[y]_m))\leq_{k+1}y$ and (b) $y\leq_{k+1} \f(h_M([x]_n,[y]_m))$. \begin{description} \item[(a)] Pick $w\geq y$. By Proposition~\ref{prop:n-plus-one-equiv} it suffices to find $z\geq \f(h_M([x]_n,[y]_m))$ with $z\sim_k w$. Since by the Claim $\pi_2$ is a p-morphism and $w\geq_m y$, there is $([x']_n,[y']_m)\in M$ such that $([x']_n,[y']_m) \geq([x]_n,[y]_m)$ and $y'\sim_m w$. Define $z:=\f(h_M([x']_n,[y']_m))$. Then $z\geq \f(h_M([x]_n,[y]_m))$ because $\f$ and $h_M$ are monotone maps, and the inductive hypothesis applied to $z$ yields $z\sim_k y' \sim_k w$. \item[(b)] The argument is the same, mutatis mutandis, as in the previous item, and it hinges on the fact that both $h_M$ and $\f$ are p-morphisms.\qed \end{description} \end{enumerate} \section*{Concluding remarks} In this paper we have adopted a topological approach to the study of uniform interpolation for the intuitionistic propositional calculus. In particular, we have exposed the relation between uniform interpolation and open mapping theorems in topology. These kinds of connections between logical properties and topological ones are at the heart of duality theory. A well-known example is Rasiowa and Sikorski's proof \cite{RasSik1950} of G\"odel's completeness theorem for first-order classical logic, which exploited the Baire Category Theorem. It would be interesting to investigate further how Theorem~\ref{thm:fin-gen} compares to classical open mapping theorems in functional analysis (e.g.\ for Banach spaces) and in the theory of topological groups, which typically rely on an application of the Baire Category Theorem. Also, it would be important to understand if similar open mapping theorems hold for other propositional logics, and what are the underlying reasons --- from a duality-theoretic perspective --- for such theorems to hold.
\section{Introduction}\label{intro} A current-carrying string loop represents a simplified 1D model of magnetized-plasma structures \cite{Jac-Sot:2009:PHYSR4:}. The plasma may exhibit a string-like structure arising from either dynamics of the magnetic field lines in the plasma \cite{Sem-Ber:1990:ASS:,Chri-Hin:1999:PhRvD:,Sem-Dya-Pun:2004:Sci:} or thin isolated flux tubes produced in plasma \cite{Spr:1981:AA:,Cre-Stu:2013:PhRvE:,Cre-Stu-Tes:2013:PlasmaPhys:,Cre-Stu:2014:PlasmaPhys:}. The tension of such a string loop governs an outer barrier of the string loop motion, while its worldsheet current introduces an angular momentum barrier preventing the loop from collapse. Dynamics of an axially symmetric string loop along the axis of symmetry of Kerr black holes has been investigated in \cite{Jac-Sot:2009:PHYSR4:,Kol-Stu:2013:PHYSR4:}, and extended also to the case of Kerr naked singularities \cite{Kol-Stu:2013:PHYSR4:}. The string loop dynamics in the spherically symmetric spacetimes has been studied for the case of Schwarzschild---de Sitter (SdS) black holes \citep{Kol-Stu:2010:PHYSR4:} and braneworld black holes or naked singularities described by the Reissner-Nordstrom spacetimes \cite{Stu-Kol:2012:JCAP:}. Such a configuration was also studied in \citep{Lar:1994:CLAQG:,Fro-Lar:1999:CLAQG:}. Quite recently, it has been demonstrated that small oscillations of a string loop around stable equilibrium radii in the Kerr black hole spacetimes can be well applied in astrophysical situations related to the high-frequency quasiperiodic oscillations (QPOs) observed in microquasars, i.e., binary systems containing a black hole \cite{Stu-Kol:2014:PHYSR4:}. On the other hand, it has been proposed in \citep{Jac-Sot:2009:PHYSR4:} that the current-carrying string loops could be relevant in an inverse astrophysical situation, as a model of formation and collimation of relativistic jets in the field of compact objects. The acceleration of jets is possible due to the transmutation effect where the chaotic character of the string-loop motion around a central black hole enables transmission of the internal energy in the oscillatory mode to the kinetic energy of the linear mode \cite{Jac-Sot:2009:PHYSR4:,Kol-Stu:2010:PHYSR4:}. It has been demonstrated in \cite{Stu-Kol:2012:PHYSR4:,Stu-Kol:2012:JCAP:} that ultra-relativistic escaping velocities of the string loop can indeed be obtained even in spherically symmetric black hole spacetimes. Efficiency of the transmutation effect is slightly enhanced by the rotation of the central Kerr black hole as demonstrated in \cite{Kol-Stu:2013:PHYSR4:}. Enhancement of the efficiency of the transmutation process can be substantial in the innermost parts of the Kerr naked singularity spacetimes \cite{Kol-Stu:2013:PHYSR4:}, similarly to the acceleration process occurring in the particle collisions \cite{Stu-Sche:2013:CLAQG:}. String loops accelerated by the gravitational field of non-rotating black holes can thus potentially serve as a new model of ultra-relativistic jets observed in microquasars and active galactic nuclei. This is an important result since the standard models of jet formation are based on the Blandford-Znajek effect \cite{Bla-Zna:1977:MNRAS:} that requires rapidly rotating Kerr black holes \cite{Pun:2001:Springer:}.
The rotation of the central black hole is an important aspect also in the alternate model of ultra-relativistic jet formation based on the geodesic collimation along the rotation axis of the Kerr spacetime metric \cite{Gar-etal:2010:ASTRA:,Gar-Mar-San:2013:ApJ:}. It is generally assumed that magnetohydrodynamics (MHD) of plasmas in the combined gravitational and strong magnetic fields of compact objects enables understanding of the formation and energetics of jets in the accreting systems orbiting the compact objects. A magnetic field in the vicinity of the central black hole plays an important role in transporting energy from the plasma accreting into the black hole to the jets in the standard Blandford-Znajek model \cite{Bla-Zna:1977:MNRAS:}. The theoretical and observational data~\cite{Koo-Bic-Kun:1999:PASA:,Mil-etal:2006:NATUR:} show that there must exist a strong magnetic field in the close vicinity of black holes. These magnetic fields can be generated by the accretion of a plasma into the black hole or, if the black hole has a stellar mass, they can still contain the contribution from the field of the collapsed precursor star \cite{Fro:2012:PHYSR4:}. Understanding the dynamics of charged particles in the combined gravitational and magnetic fields is necessary for modeling the MHD processes. The single-particle dynamics is relevant also for collective processes modeled in the framework of kinetic theory \cite{Cre-Stu:2013:PhRvE:,Cre-Stu-Tes:2013:PlasmaPhys:,Cre-Stu:2014:PlasmaPhys:}. Charged particle motion in electromagnetic fields surrounding black holes or in the field of magnetized neutron stars has been studied in a large variety of works for both equatorial and general motion (see, e.g., \cite{Prasanna:1980:RDNC:,Fro-Sho:2010:PHYSR4:}). The special class of the off-equatorial circular motion of charged particles in combined gravitational and electromagnetic fields of compact objects has been studied in papers \cite{Kov-Stu-Kar:2008:CAQG:,Kov-etal:2010:CAQG:,Kop-etal:2010:ApJ:}. Detailed analysis of the astrophysics of rotating black holes with electromagnetic fields related to the Penrose process has been presented in \cite{Wagh-Dadhich:1989:PR:,Las-etal:2014:PYSR4:}. The Blandford-Znajek mechanism applied to a black hole with a toroidal electric current was investigated in \cite{Li-Xin:2000:PHYSR4:}. The oscillatory motion of charged particles around equatorial and off-equatorial circular orbits could be relevant in the formation of magnetized string loops \cite{Cre-Stu:2013:PhRvE:,Kov:2013:EPJP:}. For the model of ultra-relativistic jet formation based on the transmutation effect in the string-loop dynamics, it is important to clarify the role of the external influences, namely those based on the cosmic repulsion and the large-scale magnetic fields. It has been demonstrated in \cite{Stu-Kol:2012:PHYSR4:} that the escaping string loops will be efficiently accelerated by the cosmic repulsion behind the so-called static radius \cite{Stu:1983:BULAI:,Stu-Hle:1999:PHYSR4:} that plays an important role also in toroidal accretion structures \cite{Stu-Sla-Kov:2009:CLAQG:} or in the motion of gravitationally bound galaxies \cite{Stu-Sche:2011:JCAP:}. Here we shall study the role of an asymptotically uniform external magnetic field in the transmutational acceleration of an electrically charged current-carrying string loop in the field of non-rotating black holes. The acceleration of the string loop is assumed to be along the direction of the magnetic field strength vector.
Because of the axial symmetry of the investigated system of the string loop and the central black hole immersed in the magnetic field, we are able to describe the string loop motion by an effective potential similar to that of the charged particle motion. Using the results of our preceding paper \cite{Tur-etal:2013:PHYSR4:}, we will show that the presence of even a weak magnetic field can significantly increase the possibility and efficiency of the conversion of the internal oscillatory energy of the string loop into the kinetic energy of its linear translational motion. We focus our attention on the simple case of the asymptotically uniform magnetic field in order to illustrate the large-scale role of the magnetic field in the string loop motion. For the transmutation effect itself, the local strength of the magnetic field is relevant, but the subsequent motion is influenced by the large-scale structure of the magnetic field. Estimates of the magnitude of the magnetic field in astrophysically plausible situations, i.e., around neutron stars, stellar black holes and supermassive black holes in the galactic nuclei, are presented in the Appendix. According to these estimates, the magnetic field can always be considered as a test field, having negligible influence on the spacetime structure. \begin{figure} \includegraphics[width=0.85\hsize]{01} \caption{\label{string_clas} The schematic picture of the oscillations and the acceleration of the string loop near a black hole embedded in an external uniform magnetic field. Due to the axial symmetry of the system, the string trajectory can be presented by a curve lying on the plane, chosen to be the $x-y$ plane. The direction of the Lorentz force acting on the string loop depends on the orientation of the string loop current with respect to the external magnetic field. The initial position of the string loop is represented by the solid line, while its positions during the motion are represented by the dashed lines. } \end{figure} The paper is organized as follows. In Section~II, the general relativistic description of the string loop model is presented, and the fundamental quantities that characterize the electric current-carrying string loop through the action function and the Lagrangian formalism are given. In Section~III, the motion of the string loop in the combined electromagnetic and gravitational fields of a Schwarzschild black hole immersed in an asymptotically uniform magnetic field is studied. Due to the symmetries of the string loop and the combined electromagnetic and gravitational background, the string loop dynamics can be governed by a properly defined effective potential, which is compared to the effective potential governing the motion of charged particles. At the end of Section~III, the physical interpretation of the string loop model through the superconductivity phenomena of plasmas in accretion discs is discussed. In Section~IV, the transmutation effect and the acceleration of the string loop are studied. The dependence of the ejection speed (relativistic Lorentz $\gamma$-factor) on the intensity of the external uniform magnetic field is given. It is shown that the maximal acceleration of the string loop, up to ultra-relativistic velocities ($v\simeq c$, $\gamma \gg 1$), is possible for the special orientation of the electric current and the case of a strong magnetic field. The concluding remarks and discussions are presented in Section~V.
In Appendix~A, the estimation of the realistic magnetic field intensity is presented and discussed. In Appendix~B, the dimensional analysis of the characteristic parameters of the string loop model is presented, along with estimates of the physical quantities that characterize the string loop. \section{Relativistic electric current-carrying string loop} Generally, the string loops are assumed to be thin circular objects that carry an electric current and preserve their axial symmetry relative to the chosen axis, or the axis of the black hole spacetime. The string loops can oscillate, changing their radius in the loop plane, while propagating in the perpendicular direction as shown in Figure~\ref{string_clas}. Let us consider first a string loop moving in a spherically symmetric spacetime with the line element \begin{equation} \dif s^2 = g_{tt} \dif t^2 + g_{rr} \dif r^2 + g_{\theta\theta} \dif \theta^2 + g_{\phi\phi} \dif \phi^2. \label{axsymmet} \end{equation} We will specify the components of the metric tensor $g_{\alpha\beta}$ for the Schwarzschild black hole case in Section~III. The spherically symmetric Schwarzschild black hole is assumed to be immersed in an asymptotically uniform magnetic field; both the spacetime and the external magnetic field define an axis of symmetry that is considered to be the axis of the string loop. In order to give a relativistic description of the string loop motion, thereby enabling the derivation of the equations of motion, one may choose the action, which reflects the properties of both the string loop and the external fields. However, before giving the definition of the string action, one has to introduce the string worldsheet which is characterized by a scalar function $\varphi$ and by coordinates $X^{\alpha}(\sigma^{a})$, where $\alpha = 0,1,2,3$ and $a=\tau, \sigma$ \cite{Jac-Sot:2009:PHYSR4:}. The worldsheet is thus a two-dimensional subspace which characterizes the properties of the string loop in a given combined gravitational and electromagnetic background. Therefore, the worldsheet of a string loop is an analogue of the worldline of a test particle, i.e., it gives the loci of the events of existence of the string loop in the given background. Thus, we can introduce the worldsheet induced metric in the following way \cite{Jac-Sot:2009:PHYSR4:} \begin{equation} h_{ab}= g_{\alpha\beta}X^\alpha_{| a}X^\beta_{| b}, \end{equation} where $X_{| a} = \partial X /\partial a$. The current of the string loop localized on the worldsheet is described by a scalar field $\varphi$ which depends on the worldsheet coordinates $\tau$ and $\sigma$, but is independent of the choice of the spacetime coordinates. Dynamics of the string loop moving in the combined gravitational and electromagnetic field is described by the action $S$ that should contain, along with the term characterizing the freely moving string loop (see, e.g., \cite{Jac-Sot:2009:PHYSR4:}), also the term characterizing the interaction of the string loop with the external electromagnetic field. This implies the action and the Lagrangian density $\mathcal{L}$ in the form \cite{Lar:1993:CLAQG:}: \begin{eqnarray} S &=& \int \mathcal{L} \, \sqrt{-h} \, \dif \sigma \dif \tau,\\ \mathcal{L} &=& - \left[\tens/c + \frac{k}{2} \, h^{ab} (\varphi_{|a}+A_a)(\varphi_{|b}+A_b)\right], \label{akceEM} \end{eqnarray} where we use the projection \begin{equation} A_{a}=A_\gamma X^\gamma_{|a} , \end{equation} and $k$ is a constant fixed by the world constants.
For the electrically neutral current, the constant $k$ is chosen to be equal to unity, $k=1$. The first part of (\ref{akceEM}) represents the classical Nambu--Goto string action with the tension $\tens$, the second part describes the scalar field $\varphi$, rescaled according to \cite{Jac-Sot:2009:PHYSR4:,Kol-Stu:2013:PHYSR4:}, along with the potential $A_\alpha$ of the electromagnetic field, and their interaction. According to the pioneering paper of Goto \cite{GOTO:1971:POTP}, the constant $\mu/c^2$ can be interpreted as the uniform mass density which prevents expansion of the string loop beyond some radius, while the worldsheet current introduces an angular momentum barrier preventing the loop from collapse. This implies that the parameter $\mu>0$ characterizes the tension (self-force) of the string loop, which acts to shrink its radius. We take here the worldsheet coordinates to have the dimension of length. Hereafter, we use the geometric units with $c=G=1$. In these units, the constant $k$ is equal to unity as well. The complete dimensional analysis and full conversion of the quantities describing the string loop motion from the geometrized units to the Gaussian or CGS units is given in Appendix~B of the present paper, along with estimates of the string loop parameters in relation to realistic astrophysical conditions. In the spherically symmetric spacetimes (\ref{axsymmet}), the coordinate dependence of the worldsheet can be written in the form \cite{Lar:1994:CLAQG:} \begin{equation} X^\alpha(\tau,\sigma) = \left\{t(\tau),r(\tau),\theta(\tau),\sigma\right\}, \label{strcoord} \end{equation} in such a way that the new coordinates satisfy the relations \begin{equation} \dot{X}^\alpha = (t_{|\tau}\,,r_{|\tau}\,,\theta_{|\tau}\,,0\,), \quad {X'}^\alpha = (0,0,0,1), \label{XdotXprime} \end{equation} where we use the dot to denote derivative with respect to the string loop evolution time $\tau$, and the prime to denote derivative with respect to the space coordinate $\sigma$ of the worldsheet. From the relation (\ref{XdotXprime}), we can clearly see that the string loop does not rotate in the spherically symmetric spacetime (\ref{axsymmet}) combined with the external uniform magnetic field. The first order derivatives of the scalar field $\varphi$ with respect to the worldsheet coordinates determine the current on the string worldsheet, $\varphi_{| a} = j_a$. The axial symmetry of the string loop model and the conformal flatness of the two-dimensional worldsheet metric $h^{ab}$ allow us to write the scalar field $\varphi$ in the form \cite{Lar:1993:CLAQG:,Jac-Sot:2009:PHYSR4:} \begin{equation} \varphi = j_{\tau} \tau + j_{\sigma}\sigma. \label{phioverj} \end{equation} In the presence of the external electromagnetic field, the equations of evolution of the scalar field will be influenced by the four-vector potential $A_\mu$; we then define the total current, labelled by the tilde $\tilde{j}_a$, in the following form \begin{equation} \tilde{j}_{\tau} = j_\tau + A_\alpha X^\alpha_{| \tau}, \quad \tilde{j}_{\sigma} = j_\sigma + A_\alpha X^\alpha_{| \sigma}. \label{constJOn} \end{equation} The variation of the action (\ref{akceEM}) with respect to the scalar field $\varphi$ gives the equation of motion \begin{equation} \left( \sqrt{-h} \, h^{ab} \, \tilde{j}_a \right)_{| b} = 0.
\label{PHIevolution} \end{equation} The equation of motion (\ref{PHIevolution}) for the scalar field $\varphi$ and the string loop axisymmetry imply the existence of the conserved quantities $\tilde{j}_{\tau}$ and ${j}_{\sigma}$. The conserved quantities $\tilde{j}_{\tau}$ and ${j}_{\sigma}$ correspond to the parameters $\Omega$ and $n$ introduced in \cite{Lar:1993:CLAQG:}, up to the constant $k$. The quantities ${j}_{\tau}$ and $\tilde{j}_{\sigma}$ are generally not conserved during the string loop motion. Varying the action (\ref{akceEM}) with respect to the four-potential $A_\alpha$ \cite{Lar:1993:CLAQG:}, we obtain the electromagnetic current density \begin{equation} I^{\mu} = \frac{\delta \mathcal{L}}{\delta A_{\mu}} = k \tilde{j}_\tau \dot{X}^\mu - k \tilde{j}_\sigma {X'}^\mu, \end{equation} and we can identify the string loop electric charge density $q$, and the electric current density $j$, through the relations \begin{equation} q = k \tilde{j}_\tau, \quad j = k \tilde{j}_\sigma. \label{QIdef} \end{equation} Up to the constant $k$, we can consider the parameters $\tilde{j}_\tau$ and $\tilde{j}_\sigma$ to be related to the electric charge and the current densities. We can deduce from (\ref{PHIevolution}) that the electric charge density $q$ is conserved during the string loop evolution, but the current density $j$ is changing as it is influenced by the external electromagnetic field. It is important to specify the worldsheet stress-energy tensor which can be found by varying the action (\ref{akceEM}) with respect to the induced metric $h_{ab}$ \cite{Jac-Sot:2009:PHYSR4:} \begin{eqnarray} && \Sigma^{\tau\tau} = \frac{k}{2} \, \frac{\tilde{j}_{\tau}^2+\tilde{j}_{\sigma}^2}{g_{\phi\phi}} + \mu, \quad \Sigma^{\sigma\tau} = - \frac{ k \tilde{j}_{\tau} \tilde{j}_{\sigma}}{g_{\phi\phi}}, \nonumber\\ && \Sigma^{\sigma\sigma} = \frac{k}{2} \, \frac{\tilde{j}_{\tau}^2+\tilde{j}_{\sigma}^2}{g_{\phi\phi}} - \mu. \end{eqnarray} The string loop canonical momentum density is defined by the relation \begin{equation} \Pi_\mu \equiv \frac{\partial \mathcal{L}}{\partial \dot{X}^\mu} = \Sigma^{\tau a} g_{\mu\alpha} X^\alpha_{|a} + k \tilde{j}_\tau A_\mu . \label{canMomenta} \end{equation} From the Lagrangian density (\ref{akceEM}) we can obtain the Hamiltonian of the string loop dynamics in the combined gravitational and electromagnetic field in the form \cite{Lar:1993:CLAQG:,Car-Ste:2004:PHYSR4:} \begin{eqnarray} H &=& \frac{1}{2} g^{\alpha\beta} (\Pi_\alpha - q A_\alpha)(\Pi_\beta - q A_\beta) \nonumber \\ && + \frac{1}{2} g_{\phi\phi} \left[(\Sigma^{\tau\tau})^2 - (\Sigma^{\tau\sigma})^2 \right] . \label{HamEM} \end{eqnarray} The string loop dynamics is determined by the Hamilton equations \begin{equation} P^\mu \equiv \frac{\dif X^\mu}{\dif \zeta} = \frac{\partial H}{\partial \Pi_\mu}, \quad \frac{\dif \Pi_\mu}{\dif \zeta} = - \frac{\partial H}{\partial X^\mu}. \label{Ham_eqEM} \end{equation} The canonical momentum $\Pi^\mu$ (\ref{canMomenta}) is related to the mechanical momentum $P^\mu$ by the relation \begin{equation} \Pi^\mu = P^\mu + q A^\mu, \end{equation} where we use the string loop charge density definition (\ref{QIdef}). In the Hamiltonian (\ref{HamEM}), we use an affine parameter $\zeta$, related to the worldsheet coordinate $\tau$ by the transformation $\dif \tau = {\Sigma^{\tau\tau}} \dif \zeta$.
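To illustrate how the Hamilton equations (\ref{Ham_eqEM}) are used in practice, the following minimal Python sketch integrates them in the flat-spacetime limit ($g_{\alpha\beta}\to\eta_{\alpha\beta}$, $B=0$) of the Hamiltonian derived in the next section, where the energy boundary function reduces to $\mu x + J^2/x$; all parameter values below are purely illustrative.
\begin{verbatim}
# Flat-spacetime string loop: oscillation in x, uniform drift in y.
# Illustrative parameters; units with c = G = 1 and mu = 1.
import numpy as np
from scipy.integrate import solve_ivp

mu, J = 1.0, 2.0

def rhs(zeta, s):
    x, y, px, py = s
    dVdx = (mu * x + J**2 / x) * (mu - J**2 / x**2)  # d/dx of V(x)
    return [px, py, -dVdx, 0.0]   # dX/dzeta = dH/dP, dP/dzeta = -dH/dX

x0, py0 = 3.0, 0.5                # start at rest in x, drifting in y
sol = solve_ivp(rhs, (0.0, 20.0), [x0, 0.0, 0.0, py0],
                dense_output=True, rtol=1e-10, atol=1e-10)

E2 = (mu * x0 + J**2 / x0)**2 + py0**2     # from the constraint H = 0
for z in np.linspace(0.0, 20.0, 5):
    x, y, px, py = sol.sol(z)
    H2 = px**2 + py**2 + (mu * x + J**2 / x)**2      # conserved
    print(f"zeta={z:5.1f}  x={x:6.3f}  y={y:6.3f}  E^2-H2={E2-H2:+.1e}")
\end{verbatim}
The string loop oscillates in $x$ between the turning points of $\mu x + J^2/x$ while $P_y$ remains constant; as is evident already from these equations, the transmutation of oscillatory into translational energy requires a curved background.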
\section{Dynamics of string loop in uniform magnetic field around Schwarzschild black hole} In this section we apply the previous results to the case of the Schwarzschild black hole spacetime immersed in an axially symmetric magnetic field that is uniform at spatial infinity. The plane of the string loop is perpendicular to the magnetic field lines. The string loop moves along the axis which is chosen to be the $y$-axis, as shown in Figure~\ref{string_clas}. The oscillations of the string loop are restricted to the $x$-$z$ plane (represented by the $x$-axis due to the axisymmetry of the string loop), while its trajectory, because of the symmetry, is considered in the $x$-$y$ plane. The Schwarzschild black hole spacetime characterized by the mass parameter $M$ takes, in the standard spherical coordinates, the form \begin{equation} \label{metric} ds^2=-f(r) dt^2+f^{-1}(r) dr^2+r^2 \left(d\theta^2+\sin^2\theta d\phi^2\right)\ , \end{equation} where the metric ``lapse'' function $f(r)$ is defined by \begin{equation} f(r) = 1 - \frac{2 M}{r}. \end{equation} Due to the symmetries discussed above, it is convenient to describe the string loop motion in the Cartesian coordinates defined as \cite{Jac-Sot:2009:PHYSR4:,Kol-Stu:2010:PHYSR4:} \begin{equation} x = r \sin\theta \ , \quad y = r \cos\theta\ . \label{ccord} \end{equation} In our study we assume a static, axisymmetric and asymptotically uniform magnetic field. Since the Schwarzschild spacetime is flat at spatial infinity, the timelike $\xi_{(t)}$ and spacelike $\xi_{(\phi)}$ Killing vectors satisfy the equations $\Box\xi^{\alpha}=0$, which exactly correspond to the Maxwell equations \begin{equation} \Box A^{\alpha}=0, \end{equation} for the four-vector potential of the electromagnetic field. The solution of the Maxwell equations can then be written in the Lorentz gauge in the form~\cite{Wald:1974:PHYSR4:} \begin{equation} A^{\alpha}=C_{1}\xi^{\alpha}_{(t)}+C_{2}\xi^{\alpha}_{(\phi)}\ . \end{equation} The first integration constant has to be $C_1=0$ because of the asymptotic properties of the Schwarzschild spacetime (\ref{metric}), while the second integration constant takes the form $C_2 = B/2$, where $B$ is the strength of the homogeneous magnetic field at spatial infinity. The commuting Killing vector $\xi_{(\phi)} = \partial / \partial \phi$ generates rotations around the symmetry axis. Consequently, the only nonzero covariant component of the potential of the electromagnetic field takes the form~\cite{Wald:1974:PHYSR4:} \begin{equation} A_{\phi} = \frac{B}{2} r^2 \sin^2 \theta = \frac{B}{2} x^2\ . \label{aasbx} \end{equation} The symmetries of the considered background gravitational and magnetic fields, corresponding to the $t$ and $\phi$ components of the Killing vectors, imply the existence of two constants of the string loop motion \cite{Tur-etal:2013:PHYSR4:,Jac-Sot:2009:PHYSR4:} \begin{eqnarray} E &=& -\xi^{\mu}_{(t)} \Pi_{\mu} = - \Pi_{t}, \label{intE} \\ L &=& \xi^{\mu}_{(\varphi)} \Pi_{\mu} = - \tilde{j}_{\tau} \tilde{j}_{\sigma} + q A_{\varphi} = - \tilde{j}_{\tau} j_{\sigma}, \label{intL} \end{eqnarray} where we have already set $k=1$. The angular momentum $L$ is thus given by two other constants of the motion, $\tilde{j}_{\tau}$ and ${j}_{\sigma}$.
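The last equality in (\ref{intL}) follows directly from the definitions (\ref{QIdef}) and (\ref{constJOn}); a short numerical check (with arbitrary illustrative values) can be done as follows:

\begin{verbatim}
# check L = -jt_tau*jt_sigma + q*A_phi = -jt_tau*j_sigma
# for the Wald potential (aasbx)
B, x = 0.2, 5.0
A_phi = 0.5 * B * x**2          # Eq. (aasbx)
j_tau, j_sigma = 1.3, -0.7      # illustrative conserved densities
jt_tau = j_tau                  # tilde j_tau (A_t = 0)
jt_sigma = j_sigma + A_phi      # tilde j_sigma, Eq. (constJOn)
q = jt_tau                      # k = 1
assert abs((-jt_tau * jt_sigma + q * A_phi)
           - (-jt_tau * j_sigma)) < 1e-12
\end{verbatim}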
\begin{figure} \includegraphics[width=0.7\hsize]{02} \caption{ Four possible types of the boundaries of the string loop motion in the Schwarzschild black hole spacetime, and examples of string loop trajectories escaping to infinity along the $y$-axis. \label{cls} } \end{figure} In the spherically symmetric spacetime (\ref{metric}), the Hamiltonian (\ref{HamEM}) governing the string loop dynamics can be expressed in the form \cite{Tur-etal:2013:PHYSR4:} \begin{equation} H = \frac{1}{2} f(r) P_r^2 + \frac{1}{2r^2} P_\theta^2 - \frac{E^2}{2f(r)} + \frac{V_{\rm eff}}{2 f(r)}\ , \label{HamHam} \end{equation} with the effective potential for the string loop motion in the combined gravitational and magnetic fields \begin{equation} V_{\rm eff} = f(r) \left\{ \frac{B^2 x^3}{8} + \left(\frac{\Omega J B}{\sqrt{2}} + \mu \right) x + \frac{J^2}{x} \right\}^2 . \label{Sveff} \end{equation} In accord with \cite{Jac-Sot:2009:PHYSR4:}, we have introduced new parameters that are conserved during the string loop dynamics in the Schwarzschild spacetime combined with the uniform magnetic field \begin{equation} J^2 \equiv \frac{j_{\sigma}^2 +j_{\tau}^2}{2}, \quad \omega \equiv -\frac{j_\sigma}{j_\tau}, \quad \Omega \equiv \frac{ - \omega}{\sqrt{1+\omega^2}} , \label{Omegapar} \end{equation} where the parameter $J$ is always positive, $J>0$, the dimensionless parameter $\omega$ runs in the interval $-\infty<\omega<\infty$, and the dimensionless parameter $\Omega$ varies in the range $-1<\Omega<1$. For the uniform magnetic field ($A_t = 0$), we can relate the string loop charge density $q$ and the current density $j$ (\ref{QIdef}) to the string loop parameters $J$, $\Omega$, in the form \begin{equation} q = j_\tau, \quad j = j_\sigma + A_\phi = j_\sigma + \frac{B}{2} \, x^2, \quad j_\sigma = \sqrt{2} J \Omega. \label{eletricQJ} \end{equation} It is worth recalling that $j_\tau$ and $j_\sigma$ are conserved quantities in the uniform magnetic field. Note that in the absence of the external magnetic field, $B=0$, there exists a symmetry which allows for the interchange $\omega \leftrightarrow 1/\omega$; then the interval $-1<\omega<1$ covers all possible cases of the string loop motion \cite{Jac-Sot:2009:PHYSR4:,Kol-Stu:2010:PHYSR4:}. In the case of a non-vanishing magnetic field, such a symmetry does not exist, and $\omega$ generally has to range in the interval $(-\infty,\infty)$. The sign of the parameter $\Omega$ depends on the choice of the direction of the electric current with respect to the direction of the uniform magnetic field. The case $\Omega=0$ corresponds to zero current. The case with $\Omega > 0$ is, in principle, unstable, since even a small deviation of the string loop from the symmetry axis leads to the appearance of a torque proportional to the product of the current and the magnetic field, and the string loop can potentially be overturned to the stable configuration with $\Omega < 0$ \cite{Tur-etal:2013:PHYSR4:}. The condition $H=0$ determines the regions allowed for the string loop motion, and it implies the relation for the effective potential (or energy boundary function) governing the motion through the string loop and the background (spacetime and magnetic field) parameters \begin{equation} E^2 = V_{\rm eff}(x,y;J,B,\Omega). \label{StringEnergy} \end{equation} A detailed analysis of the boundaries of the string loop motion has been presented in \cite{Tur-etal:2013:PHYSR4:}; we only recall the main results here.
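The boundary classification recalled below can be reproduced directly from Eq. (\ref{StringEnergy}). A minimal numerical sketch in geometric units with $M=1$ (the default parameter values are purely illustrative):

\begin{verbatim}
import numpy as np

def v_eff(x, y, J=2.0, B=0.2, Omega=-1.0, mu=1.0):
    # energy boundary function, Eqs. (Sveff) and (StringEnergy);
    # x = r sin(theta), y = r cos(theta)
    r = np.hypot(x, y)
    f = 1.0 - 2.0 / r
    braces = (B**2 * x**3 / 8.0
              + (Omega * J * B / np.sqrt(2.0) + mu) * x + J**2 / x)
    return f * braces**2

# a loop of energy E can move only where E**2 >= v_eff(x, y);
# scanning this condition on an (x, y) grid reproduces the four
# boundary types shown in Fig. 2
\end{verbatim}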
The string loop motion boundaries, and the related types of the motion, can be distinguished into four classes according to the possibility of the string loop to escape to infinity or collapse to the central compact object. The first class of the boundaries corresponds to the absence of both inner and outer boundaries, i.e., the string loop can be captured by the black hole or escape to infinity. The second class corresponds to the situation with an outer boundary only -- the string loop must be captured by the black hole. The third class corresponds to the situation when both inner and outer boundaries exist -- the string loop is trapped in some region forming a potential ``lake'' around the black hole. The fourth class corresponds to an inner boundary only -- the string loop cannot fall into the black hole, but it must escape to infinity; see Fig. \ref{cls} for the details. For the purposes of the present paper, the first and fourth classes of boundaries, when the string loop can escape to infinity, are relevant. \subsection{Analogy with test particle motion} The influence of the electromagnetic field on the string loop dynamics can be clarified by comparing the effective potential of the string loop dynamics with that of the charged test particle motion on circular orbits around a Schwarzschild black hole immersed in the same magnetic field $B$. The Hamiltonian of the motion of a charged test particle with mass $m$ and charge $q$ is given by the relation \cite{Mis-Tho-Whe:1973:Gra:} \begin{equation} H_{\rm p} = \frac{1}{2} g^{\alpha\beta} (\Pi_\alpha - q A_\alpha)(\Pi_\beta - q A_\beta) + \frac{1}{2} \, m^2, \label{particleHAM} \end{equation} where the mechanical and canonical momenta are again related by \begin{equation} P^\mu = \Pi^\mu - q A^\mu. \end{equation} We suppose the particle moves on a circular orbit at radius $x$, with constant angular velocity $\omega = P^\phi/m$. In the homogeneous magnetic field $B$, the Hamiltonian of the test particle motion (\ref{particleHAM}) can be cast into the form (\ref{HamHam}), but with a modified effective potential containing a constant axial angular momentum $L$ \begin{equation} V_{\rm eff}^{\rm (p)} = f(r) \left[ \left\{ \frac{L}{x} - \frac{q B}{2} x \right\}^2 + m^2 \right]. \label{Pveff} \end{equation} The effective potentials of both the string loop (\ref{Sveff}) and the test particle (\ref{Pveff}) motion consist of the part given by the geometry, $f(r)$, and the part in braces depending only on the $x$ coordinate. We can compare the terms in braces with the same $x$ dependence in the string and particle cases, revealing their interpretation. The first and the second term in braces in the effective potential of the particle motion (\ref{Pveff}) represent the angular momentum contribution, $\sim x^{-1}$, and the Lorentz force contribution, $\sim x$, respectively. The first term in braces in the effective potential of the string loop motion (\ref{Sveff}) represents the pure contribution of the external magnetic field to the ``effective'' energy -- the energy density of the magnetic field is proportional to $B^2$, and the space volume to $x^3$. The second term in braces in (\ref{Sveff}), showing the same dependence, $\sim x$, as the Lorentz force, consists of two parts -- the pure tension $\mu$, and the part representing the interaction between the electric current carried by the string loop and the external magnetic field.
This interaction term can be associated with the interaction of two magnetic fields, where one of them is the ``global'', external magnetic field, $\sim B$, and the other is the ``local'', self-generated magnetic field related to the string loop, $\sim J\Omega/x$, in accord with (\ref{Omegapar}). The sign of this term can be either positive or negative, depending on the sign of $\Omega$, i.e., on the direction of the electric current on the string loop. We consider $\Omega$ to be negative if the direction of the vector of the self-generated magnetic field coincides with the direction of the vector of the external magnetic field, and positive if these vectors are directed oppositely. Since the direction of the external magnetic field is given as an initial condition, the direction of the Lorentz force acting on the string is determined by the direction of the current. The last term in braces of the effective potential (\ref{Sveff}) corresponds to the angular momentum generated by the current of the string loop. \subsection{Analogy with the superconducting string} \label{analogy} The string loop model with the current generated by the scalar field $\varphi$ demonstrates an interesting analogy with superconductivity. In the pioneering works \cite{Wit:1985:NuclPhysB:,Vil-She:1994:CSTD:}, it has been shown for the cosmic strings that in the case when the electromagnetic gauge invariance is broken, the string can be considered as a superconductor carrying large currents and charges, up to the order of the string mass scale. Under such circumstances, the carriers of the electric charge can be either bosons or fermions, depending on which is energetically favourable \cite{Kim:1999:JKPS}. In a series of papers \cite{Mart-Shel:1997:PRB,Car:1990:PHYSLB:}, the dynamics of the superconducting strings has been considered in the framework of the Nambu--Goto string theory (see, e.g., \cite{Zweibach:2004:CUP}), and the so-called ``vortex'' theory \cite{Mart-Shel:1997:PRB}. Similarly to the current of the string loop, the density of the superconducting electric current $j_s$ is proportional to the gradient of a scalar function $\Phi$, identified with the phase of the wave function of the superconducting Cooper pairs \cite{Ahm-Kag:2005:IJMPD:,Ahm-Fat:2005:IJMPD:} \begin{equation} j^{\rm s}_\alpha = \frac{2 \hbar n_s e}{m_s} \left(\partial_\alpha \Phi - \frac{2 e}{\hbar c} A_\alpha \right), \label{jELth} \end{equation} where $n_s$ and $m_s$ represent the concentration and mass of the Cooper pairs, $e$ is the charge of the electron and $\hbar$ is the reduced Planck constant. In addition to zero resistivity, the superconductors are characterized by the existence of the so-called Meissner effect, according to which the magnetic field is either expelled from a type-I superconductor, or penetrates into a type-II superconductor as an array of vortices. The phase transition between the superconducting and normal states can be caused by an increase of either the temperature or the magnetic field. The highest strength of the magnetic field at a given temperature under which a material remains superconducting is called the critical field strength. Further, the superconducting states are possible only if the temperature is lower than the critical temperature $T_c$, i.e., the temperature of the phase transition to the superconducting state. It has to be underlined that the value of the critical temperature $T_c$ strongly depends on the pressure, and, e.g.,
in the neutron star crust, it reaches values up to $10^9$--$10^{10}\,$K \cite{Cha-Hae:2008}. The critical magnetic field at any temperature below the critical temperature is given by the relation \begin{equation} B_c \approx B_c(0) \left[1-\left(\frac{T}{T_c}\right)^2\right], \end{equation} where $B_c(0)$ is the critical magnetic field at zero temperature. Suppose a superconducting material is placed in an external magnetic field $B<B_c$ at $T > T_c$, and we start lowering the temperature. When $T < T_c$, the medium separates into two phases: superconducting regions without magnetic flux (flux expulsion), and normal regions with a concentrated strong magnetic field that suppresses superconductivity. The magnetic flux can thus pass through the material, and, as discussed above, the nature of the non-superconducting regions depends on whether the superconductor is of type-I or type-II. Large-scale, ordered magnetic fields in the central parts of accretion disks around black holes are desirable for explaining, e.g., collimated jet production. However, inward advection of vertical flux in a turbulent accretion disk is problematic if there is an effective turbulent diffusivity. A new mechanism resolving this problem was proposed in \cite{Spr-Uzd:2005:APHJ:}. Turbulent flux expulsion leads to concentration of the large-scale vertical flux into small patches of strong magnetic field; because of their large field strengths, the patches experience a higher angular-momentum loss rate via magnetic braking and winds. As a result, the patches rapidly drift inward, carrying the vertical flux with them. The accumulated vertical flux aggregates in the central flux bundle in the inner part of the accretion disk, and the accretion flow through the bundle changes its character. It is necessary to underline that the related phenomenon called \textit{turbulent diamagnetism}, in which the magnetic field is expelled from regions of strong turbulence and concentrated between turbulent cells, was first predicted by Zeldovich \cite{Zel:1956:SOVP:} and Parker \cite{Par:1963:APJ:}. Hence, the possibility of the appearance of superconductors near the horizon of black holes, discussed in \cite{Spr-Uzd:2005:APHJ:}, has an important analogy with the turbulence in the accretion disc; namely, the mechanism of efficient inward transport of the large-scale external magnetic flux through a turbulent flow can play the role of superconductivity in the accretion disc, and, according to \cite{Spr-Uzd:2005:APHJ:}, the suppression of turbulence by a strong magnetic field is analogous to the lifting of superconductivity by an applied magnetic field. For an electric current-carrying string loop immersed in a magnetic field, there exists a similar effect of the vanishing of the electric current when the magnetic field reaches a critical value. The theory of the string loops contains no thermodynamic features, and the critical phenomena simply relate the magnetic field to the electric current (and angular momentum) parameters of the string loop. To find the critical value of the magnetic field, let us consider a stationary string loop located at a fixed radius $r_0$ in the flat spacetime. The energy per unit length of the string loop in an asymptotically uniform magnetic field $B$, given by Eq.
(\ref{StringEnergy}), where the lapse function reduces to $f(r)=1$ in the flat spacetime, reads \begin{equation} \frac{E}{r_0} = \mu + \frac{1}{2} \frac{q^2}{r_0^2} + \frac{1}{2} \frac{j^2}{r_0^2} = \mu +\frac{J^2}{r_0^2} + \frac{\Omega J B}{\sqrt{2}} + \frac{B^2 r_0^2}{8}. \label{ensucon} \end{equation} The final expression is the sum of the terms responsible for the string tension, the electric current, the Lorentz force, and the energy of the magnetic field, respectively. If one increases the strength of the background magnetic field $B$ while keeping the string loop at the initial position $r_0$ with fixed energy $E$, the electric current of the string loop has to be modified accordingly. Using the relation (\ref{eletricQJ}), we can write the electric current density $j$ in a form corresponding to that of the superconducting current (\ref{jELth}), and relate the current to the string loop parameters by \begin{equation} j = \varphi_{| \sigma} + A_{\phi} = \pm \sqrt{ - q^2 + 2 E r_0 - 2 \mu r_0^2}. \end{equation} The dependence of the current $j_{\sigma} = \varphi_{| \sigma}$ (being a constant of the motion) on the strength of the magnetic field $B$ is then determined by the relation \begin{equation} j_{\sigma} = j -\frac{B}{2} r_0^2 . \label{cursucon} \end{equation} According to Eq.~(\ref{cursucon}), the magnitude of the current $j_\sigma$ evidently decreases with increasing magnetic field $B$. Therefore, we have to consider the possibility of the disappearance of the electric current $j_\sigma$ when the magnitude of the magnetic field reaches a critical value. The critical magnitude of the magnetic field for a string loop located at a given radius $r_0$ with energy $E$ and charge density $q$ takes the form \begin{equation} B_{\rm cr} = \pm \frac{2}{r_0^2} \sqrt{ - q^2 + 2 E r_0 - 2 \mu r_0^2}. \end{equation} The existence of the critical magnetic field is guaranteed by the last term of Eq. (\ref{ensucon}), representing the energy density of the magnetic field. If the last term in Eq. (\ref{ensucon}) vanished, the current of the string loop could decrease to arbitrarily small values, but it could not reach zero. The presence of the last term of Eq. (\ref{ensucon}) also plays an important role for the superconductor--string-loop analogy. The so-called Meissner effect of the exclusion of the magnetic field from the superconductor means that the supercurrent generates a magnetic field whose strength is exactly the same as the strength of the external magnetic field. In other words, in the case of the superconducting string loop, the term proportional to the square of $B$ in Eq. (\ref{ensucon}) can be interpreted as corresponding to the self-magnetic field generated by the string loop.
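The critical field and the current decline (\ref{cursucon}) are easily evaluated; a minimal sketch in geometric units (the function names are illustrative):

\begin{verbatim}
import numpy as np

def current_j(E, q, r0, mu=1.0):
    # |j| from Eq. (ensucon): j^2 = 2 E r0 - 2 mu r0^2 - q^2
    j2 = 2.0 * E * r0 - 2.0 * mu * r0**2 - q**2
    if j2 < 0.0:
        raise ValueError("no static loop for these parameters")
    return np.sqrt(j2)

def j_sigma_of_B(B, E, q, r0, mu=1.0):
    # conserved current, Eq. (cursucon)
    return current_j(E, q, r0, mu) - 0.5 * B * r0**2

def B_critical(E, q, r0, mu=1.0):
    # field strength at which j_sigma vanishes
    return 2.0 * current_j(E, q, r0, mu) / r0**2
\end{verbatim}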
\section{String loop acceleration} The explanation of relativistic jets in Active Galactic Nuclei (AGN) and microquasars could be one of the most important astrophysical applications of the string loop model. It is enabled by the acceleration and fast ejection of the string loop from the black hole neighbourhood due to the transmutation effect, i.e., the transmission of the energy of the string loop oscillatory motion in the $x$-direction to the energy of the linear translational motion along the $y$-direction related to the string loop symmetry axis \cite{Lar:1994:CLAQG:,Jac-Sot:2009:PHYSR4:,Stu-Kol:2012:PHYSR4:}. In the analysis of the acceleration process, it is convenient to use dimensionless coordinates and string loop parameters. We thus rescale the coordinates, $x \rightarrow x/M$, $y \rightarrow y/M$, and the string loop and background parameters, \begin{eqnarray} J \rightarrow J / (\sqrt{\mu} M), \quad E \rightarrow E / (\sqrt{\mu} M), \quad B \rightarrow BM / \sqrt{\mu}. \label{dmnsnless} \end{eqnarray} We will return to the Gaussian units in Appendix~B. \begin{figure*} \includegraphics[width=\hsize]{03} \caption{String loop effective potential $V_{\rm flat}(x)$ for the flat spacetime with and without a uniform magnetic field. The effective potential is presented for three representative values of the string loop parameter $\Omega \in \{-1,0,1 \}$, keeping constant the current parameter $J=1$, and for various values of the strength of the magnetic field $B$ (denoted by numbers in the plot). The minimum of each effective potential is denoted by a small vertical line; the dotted line corresponds to the string loop energy in the non-magnetic case $B=0$. \label{Emin}} \end{figure*} \subsection{Maximal acceleration} Since the Schwarzschild{} spacetime is asymptotically flat, we have to study the string loop motion in the flat spacetime in order to understand the transmutation process. This enables a clear definition of the string loop acceleration process, and the determination of the maximal acceleration available in a given uniform magnetic field. The energy of the string loop (\ref{StringEnergy}) in the flat spacetime with the uniform magnetic field $B$, expressed in the Cartesian coordinates, reads \begin{equation} E^2 = \dot{y}^2 + \dot{x}^2 + V_{\rm flat}(x;B,J,\Omega) = E^2_{\mathrm y} + E^2_{\mathrm x}, \label{E2flat} \end{equation} where the dot denotes the derivative with respect to the affine parameter $\zeta$, and $V_{\rm flat}(x;B,J,\Omega)$ is the effective potential of the string loop motion in the flat spacetime \begin{equation} V_{\rm flat} = \left\{ \frac{B^2 x^3}{8} + \frac{\Omega J B x}{\sqrt{2}} + \left(1 +\frac{J^2}{x^2}\right)x \right\}^2. \label{e-flat} \end{equation} The energy related to the motion in the $x$- and $y$-directions is given by the relations \cite{Stu-Kol:2012:PHYSR4:} \begin{equation} E^2_{\mathrm y} = \dot{y}^2, \quad E^2_{\mathrm x} = \dot{x}^2 + V_{\rm flat}(x;B,J,\Omega) = E^2_{0}. \label{restenergy} \end{equation} The energy in the $x$-direction, $E_{0}$, can be interpreted as the internal energy of the oscillating string, consisting of the potential and kinetic parts; in the limiting case of coinciding minimal and maximal extension of the string loop motion, $x_{\rm i} = x_{\rm o}$, the internal energy has zero kinetic component. The string internal energy can thus, in a well defined way, represent the rest energy of the string moving in the $y$-direction in the flat spacetime \cite{Jac-Sot:2009:PHYSR4:,Stu-Kol:2012:PHYSR4:}. The final Lorentz factor of the translational motion along the $y$-axis of an accelerated string loop, as observed in the asymptotically flat region, is determined by the relation \citep{Jac-Sot:2009:PHYSR4:,Stu-Kol:2012:PHYSR4:} \begin{equation} \gamma = \frac{E}{E_0} , \label{gamma} \end{equation} where $E$ is the total energy of the string loop having the internal energy $E_{0}$ and moving in the $y$-direction with the velocity corresponding to the Lorentz factor $\gamma$.
Clearly, the maximal Lorentz factor of the translational motion of the string loop is related to the minimal internal energy that the string loop can have, i.e., the internal energy with vanishing kinetic component of the oscillatory motion \cite{Stu-Kol:2012:PHYSR4:} \begin{equation} \gamma_{\rm max} = \frac{E}{E_{\rm 0(min)}} \label{gammax}. \end{equation} It should be stressed that the rotation of the black hole (or naked singularity) is not a relevant ingredient of the acceleration of the string loop due to the transmutation effect \cite{Stu-Kol:2012:PHYSR4:}, contrary to the Blandford--Znajek effect \cite{Bla-Zna:1977:MNRAS:} usually considered in modelling the acceleration of jet-like motion in AGN and microquasars. Extremely large values of the gamma factor given by Eq. (\ref{gammax}) can be obtained by setting the initial energy $E$ very large, or by properly adjusting the string loop parameters $J$, $\Omega$ and the magnetic field strength $B$ in order to obtain a very low minimal internal energy related to infinity, $E_{\rm 0(min)}$. It is crucial to examine the properties of the energy function $E_0(x;J,\Omega,B)$, primarily its minimal allowed value, $E_{\rm 0(min)}(J,\Omega,B)$, given by the local minimum of the effective potential $V_{\rm flat}$ (\ref{e-flat}). \begin{figure*} \subfigure[\quad position of the effective potential minima $\widetilde{x}_{\rm min}(b)$ ]{\label{Emin1}\includegraphics[width=0.32\hsize]{04a}} \subfigure[\quad value of the energy at the minima $\widetilde{E}_{\rm 0(min)}(b)$ ]{\label{Emin2}\includegraphics[width=0.32\hsize]{04b}} \subfigure[\quad properties of the functions $\widetilde{x}_{\rm min}(b,\Omega)$, $\widetilde{E}_{\rm 0(min)}(b,\Omega)$ ]{\label{Emin3}\includegraphics[width=0.32\hsize]{04c}} \caption{Properties of the effective potential $V_{\rm flat}(x)$ for the string loop dynamics in the flat spacetime with a uniform magnetic field. The position of the $V_{\rm flat}(x)$ minima, $\widetilde{x}_{\rm min}$ (a), and the minimal energy, $\widetilde{E}_{\rm 0(min)}$ (b), are given as functions of $b$ for some characteristic values of the string loop parameter $\Omega$. In figure (c), properties of the functions $\widetilde{x}_{\rm min}(b,\Omega)$, $\widetilde{E}_{\rm 0(min)}(b,\Omega)$ are demonstrated.} \end{figure*} Recall that the non-magnetic case, $B=0$, implies quite simple properties of the effective potential $V_{\rm flat}$, as there is only one local minimum, with the location and the energy level determined by \cite{Stu-Kol:2012:PHYSR4:} \begin{equation} x_{\rm min} = J, \quad E_{\rm 0(min)} = 2 J. \label{nomaglimit} \end{equation} The effective potential $V_{\rm flat}$ is always positive in this case. Now we have to discuss in detail the properties of the effective potential $V_{\rm flat}$ for the string loop dynamics in the homogeneous magnetic field in the flat spacetime. It is enough to consider $B>0$, as the situation with the opposite sign of $B$ can be described by inverting the sign of $\Omega$. Naturally, we also take $x>0$. The effective potential diverges for $x \to 0 \,\, (\sim J^2/x)$ and $x \to \infty \,\, (\sim B^2 x^3)$.
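The location and the value of the minimum of $V_{\rm flat}$, and hence $\gamma_{\rm max}$, can also be obtained by direct numerical minimization; a minimal sketch in the dimensionless units (the helper names are ours):

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def v_flat(x, J, B, Omega):
    # flat-spacetime effective potential, Eq. (e-flat)
    return (B**2 * x**3 / 8.0 + Omega * J * B * x / np.sqrt(2.0)
            + (1.0 + J**2 / x**2) * x)**2

def gamma_max(E, J, B, Omega):
    # Eq. (gammax): gamma_max = E / E_0(min)
    res = minimize_scalar(lambda x: v_flat(x, J, B, Omega),
                          bounds=(1e-6, 1e3), method="bounded")
    return E / np.sqrt(res.fun)

# B = 0 check, Eq. (nomaglimit): gamma_max(25, 2, 0, 0) -> 25/(2*2) = 6.25
\end{verbatim}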
It is convenient to define a new parameter $b$, reflecting the interaction of the magnetic field and the string loop current, a modified space coordinate $\widetilde{x}$, and a modified energy $\widetilde{E}$, by the relations \begin{equation} b = BJ, \quad \widetilde{x} = \frac{x}{J}, \quad \widetilde{E} = \frac{E}{2 J}. \end{equation} The zero points of the effective potential are then governed by \begin{eqnarray} \widetilde{x}^{2}_{(z)\pm}(b,\Omega) &=& \frac{ -F \pm \sqrt{F^2 -8b^2} }{b^2}, \\ && F = 4 + 2\sqrt{2} b \Omega. \end{eqnarray} It is immediately clear that the solutions $\widetilde{x}^{2}_{(z)\pm}(b,\Omega)$ cannot be real and positive, being thus physically unrealistic. Therefore, the effective potential is always positive, $V_{\rm flat} > 0$. It has only one local minimum for all values of the string loop parameters and the magnetic field intensity $B>0$. The extremum, given by $\partial V_{\rm flat} / \partial x = 0$, is located at \begin{eqnarray} \widetilde{x}^2_{\rm min} &=& \frac{x^2_{\rm min}}{J^2} = \frac{2\sqrt{2}}{3 b^2}\left\{\sqrt{D}-\left( \sqrt{2} + b \Omega \right) \right\},\\ D &=& 3 b^2 + \left(\sqrt{2} + b \Omega\right)^2, \end{eqnarray} and exists for all considered values of the parameters $J,B,\Omega$, or $b,\Omega$. The minimal energy of the string loop in the flat spacetime with the homogeneous magnetic field $B$ is then given by \begin{equation} \widetilde{E}_{\rm 0(min)} \equiv \frac{E_{\rm 0(min)}}{2 J} = \frac{\sqrt[4]{2}}{3 \sqrt{3}}\, \frac{\sqrt{D} \left(\sqrt{2} + b \Omega\right) + 6 b^2 - D}{ b \sqrt{\sqrt{D} - \left( \sqrt{2}+ b \Omega \right)}}. \end{equation} The positions of the local minima of the effective potential, $\widetilde{x}^2_{\rm min}$, and the minimal energy, $\widetilde{E}_{\rm 0(min)}$, are illustrated as functions of $b$ for characteristic values of the string loop parameter $\Omega$ in Figs. \ref{Emin1} and \ref{Emin2}, respectively. The related local extrema of the $\widetilde{x}_{\rm min}(b)$, $\widetilde{E}_{\rm 0(min)}(b)$ functions are given by the relation \begin{equation} 2 \sqrt{D} \left( b \Omega +\sqrt{2} -\sqrt{D} \right) +b^2 \left(\Omega ^2+3\right)-b \sqrt{D} \Omega +\sqrt{2} b \Omega = 0, \label{tXextrema} \end{equation} for $\widetilde{x}_{\rm min}(b)$, and by the relation \begin{eqnarray} && b^4 \left(-2 \Omega ^4-3 \Omega ^2+9\right)+2 b^3 \Omega \left(\sqrt{D} \Omega ^2-\sqrt{2} \left(\Omega ^2-6\right)\right) \nonumber \\ && +b^2 \left(-9 \sqrt{2} \sqrt{D} +12 \Omega ^2+30\right)+4 b \left(5 \sqrt{2}-3 \sqrt{D} \right) \Omega \nonumber \\ && -8 \sqrt{2} \sqrt{D} +16 = 0 \label{tEextrema} \end{eqnarray} for $\widetilde{E}_{\rm 0(min)}(b)$. In the limit of $B \to 0$, we find the minimum location at $\widetilde{x}_{\rm min} = 1$ and the minimal energy $\widetilde{E}_{\rm 0(min)} = 1$, i.e., the values obtained for the empty flat spacetime, given by Eq. (\ref{nomaglimit}). However, these limit values are reached also for a special value of the magnetic field strength $B$, or of the parameter $b$, in dependence on the other string loop parameter $\Omega$. Therefore, it is relevant to discuss the conditions \begin{equation} \widetilde{x}_{\rm min}(b,\Omega) = 1, \quad \widetilde{E}_{\rm 0(min)}(b,\Omega) = 1 \label{eq11} \end{equation} in dependence on the interaction of the string loop and the magnetic field expressed by the parameter $b=BJ$. This enables us to distinguish qualitatively different string loop configurations from the point of view of the acceleration process. Namely, the condition $\widetilde{E}_{\rm 0(min)}(b,\Omega) = 1$ separates the string loop configurations for which the acceleration process in the magnetic field is more efficient than in the non-magnetic case from those where the efficiency is lower.
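These closed-form expressions are straightforward to evaluate numerically; the following sketch (with our helper names) reproduces the limits quoted in the text:

\begin{verbatim}
import numpy as np

def x_min_sq(b, Omega):
    # tilde-x_min^2(b, Omega)
    s = np.sqrt(2.0) + b * Omega
    D = 3.0 * b**2 + s**2
    return 2.0 * np.sqrt(2.0) * (np.sqrt(D) - s) / (3.0 * b**2)

def e_min(b, Omega):
    # tilde-E_0(min)(b, Omega)
    s = np.sqrt(2.0) + b * Omega
    D = 3.0 * b**2 + s**2
    return (2.0**0.25 / (3.0 * np.sqrt(3.0))
            * (np.sqrt(D) * s + 6.0 * b**2 - D)
            / (b * np.sqrt(np.sqrt(D) - s)))

# both tend to 1 as b -> 0, and x_min_sq(1/np.sqrt(2), -1) = 4/3,
# in accord with the values quoted in the text
\end{verbatim}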
The equations (\ref{eq11}) can be expressed in the form \begin{eqnarray} -3 b^2 -2\sqrt{2} b \Omega +2\sqrt{2} \sqrt{D}-4 &=& 0, \label{X0good} \\ 2 b^3 \left( \Omega^2 - 1 \right)^2 +2 \sqrt{2} b^2 \Omega \left(3 \Omega^2 + 5\right) &&\nonumber \\ + 12 b \Omega^2 + b+4 \sqrt{2} \Omega &=& 0. \label{E0good} \end{eqnarray} The numerically determined solutions of the equations governing the local extrema and the ``flat limit'' values of the position and energy functions, given in terms of the functions $b(\Omega)$, are represented in Fig. \ref{Emin3}. We can see that the solutions exist only in the range of the string loop parameter $\Omega\in\langle-1,0)$. We denote the solutions of equations (\ref{X0good}) and (\ref{E0good}) by $b_{1\rm(x)}(\Omega)$ and $b_{1\rm(E)}(\Omega)$, while $b_{E\rm(x)}(\Omega)$ and $b_{E\rm(E)}(\Omega)$ refer to the solutions of (\ref{tXextrema}) and (\ref{tEextrema}). For a string loop with fixed parameter $\Omega$, the maximal distance of the $\widetilde{x}_{\rm min}(b,\Omega)$ function from the $B=0$ position, $\widetilde{x}_{\rm min} = 1$, is given by the points in Fig. \ref{Emin1} denoted by $\widetilde{x}_{\rm min(max)}$. It is also useful to give the minimal energy $\widetilde{E}_{\rm 0(min)}(b,\Omega)$ of the effective potential in the extremal cases, assuming the parameter $\Omega$ fixed and $b$ given by the $b_{E\rm(E)}(\Omega)$ dependence. The results are illustrated by sequences of points in Fig. \ref{Emin2} -- the minimal values of the energy $\widetilde{E}_{\rm 0(min)}(b,\Omega)$ are denoted by circles, while the $\widetilde{E}_{\rm 0(min)}(b,\Omega)$ values obtained at the points of maximal distance, $\widetilde{x}_{\rm min(max)}$, are denoted by squares. The loci $\widetilde{x}_{\rm min}$ of the local minima of the effective potential $V_{\rm flat}$ depend on the string loop parameter $\Omega$, and on the parameter $b$ combining the role of the magnetic field intensity $B$ and the string loop parameter $J$. Keeping the string loop parameters $\Omega$ and $J$ constant, we obtain a unique position of the minimum $\widetilde{x}_{\rm min}$ for each magnitude of the magnetic field $B>0$ in the case of $\Omega\in\langle0,1\rangle$. Such minima are located at $\widetilde{x}_{\rm min} < 1$, i.e., closer to the coordinate origin as compared to the non-magnetic ($B=0$) case, see Fig. \ref{Emin1}. On the other hand, for $\Omega\in\langle-1,0)$ we can obtain a ``binary'' behaviour of the effective potential, if the other parameters are properly tuned. For $b<b_{1\rm(x)}(\Omega)$, we can obtain the same location of the effective potential minimum, $\widetilde{x}_{\rm min}$, for two different values of the parameter $b$. These two different string loop configurations at a given radius differ in the string loop energy $E$, and have to be located at $\widetilde{x}_{\rm min} > 1$, i.e., at a larger distance from the origin than the minimum of the effective potential in the $B=0$ case. Such ``binary'' string loop configurations can exist only for subcritical values of the magnetic parameter, \begin{equation} b < b_{\rm bin}=4\sqrt{2}/3, \end{equation} determined by the condition $\widetilde{x}_{\rm min}(b,\Omega=-1)=1$, i.e., $b_{\rm bin} = b_{1\rm(x)}(-1)$ -- see the point Y in Fig. \ref{Emin1}. The maximal difference of the location of the local minimum of the effective potential from the position $\widetilde{x}_{\rm min}=1$, corresponding to the non-magnetic case $B=0$, is given by the point X in Fig.
\ref{Emin1}, i.e., by the local maximum of the $\widetilde{x}_{\rm min}(b)$ function determined by Eq. (\ref{tXextrema}), taken for $\Omega=-1$. For this maximum we obtain \begin{equation} b_{\rm E(x)}(-1) = 1/\sqrt{2}, \quad \widetilde{x}^2_{\rm bin(max)} = \widetilde{x}^2_{\rm min}(1/\sqrt{2}) = 4/3. \end{equation} The extremal efficiency of the string loop transmutation process is governed by the minimum of the effective potential in the flat spacetime containing the homogeneous magnetic field, $\widetilde{E}_{\rm 0(min)}(b,\Omega)$, see Eq. (\ref{gammax}). If the minimal energy $E_{\rm 0(min)}$ is zero (for example, in the $J=0$, $B=0$ case), the maximal possible acceleration of the string loop diverges, $ \gamma_{\rm max} \rightarrow \infty$. For positive values of the string loop parameter $\Omega$, in the range $\Omega\in(0,1\rangle$, the minimal energy function $\widetilde{E}_{\rm 0(min)}(b)$ is monotonically increasing with increasing parameter $b$, and diverges for $b \to \infty$, see Fig. \ref{Emin2}. Since $\widetilde{E}_{\rm 0(min)} > 1$ for all values of $b>0$ and $\Omega\in(0,1\rangle$, such configurations are not advantageous for the string loop acceleration as compared to the non-magnetic case $B=0$. Configurations with $\widetilde{E}_{\rm 0(min)} < 1$, implying the possibility of more efficient string loop acceleration than in the non-magnetic case $B=0$, can exist only for negative values of the parameter $\Omega$. For $\Omega\in(-1,0)$, the minimal energy function $\widetilde{E}_{\rm 0(min)}(b)$ decreases with increasing $b$ for small enough values of the parameter $b$, reaches a minimum at the magnetic parameter $b_{\rm E(E)}$ (point Z in Fig. \ref{Emin2}), then increases with further increase of $b$, crossing the $\widetilde{E}_{\rm 0(min)} = 1$ line at $b_{\rm 1(E)}$ (point W in Fig. \ref{Emin2}), and for larger values of the parameter $b$ it grows towards infinity. In the special case of $\Omega=-1$, the minimal energy function $\widetilde{E}_{\rm 0(min)}(b)$ is monotonically decreasing and tends to zero as $b$ increases to infinity. For negative values of the string loop parameter, $\Omega\in\langle-1,0)$, and for a given string loop parameter $J$, we can thus always find a properly large value of the magnetic field $B$ giving the minimal energy ${E}_{\rm 0(min)}$ of the effective potential smaller than in the non-magnetic case, where $E_{\rm 0(min)} = 2J$. The ratio of the minimal energies $\widetilde{E}_{\rm 0(min)}$ in the magnetic and non-magnetic cases can be made arbitrarily close to zero for large enough values of the parameter $b$. This implies the possibility of an extremely efficient transmutation effect, leading to the acceleration of the string loop up to velocities corresponding to ultra-high Lorentz factors of the motion of the electric current-carrying string loop with $J>0$ in the combined Schwarzschild gravitational field and the uniform magnetic field -- see Fig. \ref{Emin3}. \begin{figure*} \includegraphics[width=\hsize]{05} \caption{ \label{fceJE} Plots of the $J_{\rm E \mp}(x;B,\Omega)$ function (left figure), the position of the extrema of $J_{\rm E \mp}(x;B,\Omega)$ in dependence on the string loop parameter $\Omega$ (central figure), and the dependence of the extrema on the strength of the magnetic field $B$ (right figure). \label{picJE} } \end{figure*} \subsection{Acceleration in combined gravitational and magnetic fields} Clearly, $E_{\rm x}=E_{0}$ and $E_{\rm y}$ are constants of the string loop motion in the flat spacetime, and no transmission between these energy modes is possible.
However, in the vicinity of black holes or naked singularities, the internal kinetic energy of the oscillating string can be transmitted into the kinetic energy of the translational linear motion (or vice versa), due to the chaotic character of the string loop dynamics \cite{Jac-Sot:2009:PHYSR4:,Stu-Kol:2012:PHYSR4:}. In order to get a strong acceleration, the string loop has to pass through the region of strong gravity near the black hole horizon, or in the vicinity of the naked singularity, where the string loop transmutation effect $E_{\rm x} \leftrightarrow E_{\rm y}$ can occur with maximal efficiency. However, during the acceleration process, not all the energy of the $E_{\rm x}$ mode can be transmitted into the $E_{\rm y}$ energy mode -- there always remains the inconvertible internal energy of the string, $E_{\rm 0(min)}$, being the minimal energy hidden in the $E_{\rm x}$ energy mode, corresponding to the minimum of the effective potential. The opposite case corresponds to the amplification of the oscillation amplitude in the $x$-direction and the deceleration of the linear motion in the $y$-direction; in this case the translational kinetic energy is partially converted to the internal oscillatory energy of the string. All the energy of the translational ($E_{\rm y}$) energy mode can be transmitted to the oscillatory ($E_{\rm x}$) energy mode -- the oscillations of the string loop in the $x$-direction and the internal energy of the string loop then increase maximally, while the string loop stops moving in the $y$-direction. We shall focus our attention on the case of the accelerated string loop. First, we have to discuss the properties of the effective potential of the string loop motion in the combined gravitational and magnetic fields. The effective potential is a simple combination of the lapse function $f(r)$ of the spacetime metric and the effective potential of the string loop motion in the flat spacetime with the uniform magnetic field, $V_{\rm flat}$; therefore, \begin{equation} V_{\rm eff} = f(r) V_{\rm flat}. \end{equation} The zero point of the effective potential is given by the vanishing of the lapse function, $f(r)=0$, i.e., at the Schwarzschild black hole horizon, $r=2$. The divergence occurs at infinity, $x \to \infty$, as in the flat spacetime with the homogeneous magnetic field. The local extrema of the effective potential cannot be located off the equatorial plane, as follows from the spherical symmetry of the spacetime and the uniformity of the magnetic field. In the equatorial plane, $y=0$, they are given by the condition \cite{Tur-etal:2013:PHYSR4:} \begin{eqnarray} \frac{3}{8} B^2 x^5 - \frac{5}{8} B^2 x^4 + \left(1+\frac{ B J \Omega}{\sqrt{2}}\right) \left( x^3 - x^2\right) - {} \nonumber\\ - J^2 x +3 J^2 = 0. \label{extreq} \end{eqnarray} The local extrema can thus be determined by the condition related to the angular momentum parameter of the string loop \begin{equation} J = J_{\rm E \pm}(x;B,\Omega) \equiv \frac{B \Omega x^2 (x-1) \mp x \sqrt{G}}{2 \sqrt{2} (x-3)}, \end{equation} where \begin{eqnarray} G &=& B^2 (x-1)^2 x^2 \Omega^2 + B^2 (x-3) (3 x-5) x^2 \nonumber \\ && +8 (x-3) (x-1). \label{fceGG} \end{eqnarray} There are no zero points of the functions $J_{\rm E \pm}(x;B,\Omega)$, because the condition \begin{equation} \frac{1}{8} B^2 x^4 (3x - 5) + x^{2} (x - 1) > 0 \end{equation} is satisfied at $x>2$ for all values of the parameter $B$.
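Both branches of the extrema condition are easily evaluated; a minimal numerical sketch (the names are ours; a complex square root is used so that regions with $G<0$ are simply flagged):

\begin{verbatim}
import numpy as np

def j_e_branches(x, B, Omega):
    # the two solutions J_{E +-}(x; B, Omega) of the quadratic
    # extrema condition (extreq); only real positive values are physical
    G = (B**2 * (x - 1.0)**2 * x**2 * Omega**2
         + B**2 * (x - 3.0) * (3.0 * x - 5.0) * x**2
         + 8.0 * (x - 3.0) * (x - 1.0))
    root = x * np.sqrt(G + 0j)
    a = B * Omega * x**2 * (x - 1.0)
    den = 2.0 * np.sqrt(2.0) * (x - 3.0)
    return (a - root) / den, (a + root) / den

# B = 0 check at x > 3: one branch reduces to x*sqrt((x-1)/(x-3)),
# the known Schwarzschild result
\end{verbatim}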
The positive branch of the solution, $J_{\rm E +}$, is real and positive above the radius of the photon circular geodesic, at $x>3$, for all combinations of the parameters $B$ and $\Omega$. For \begin{equation} \Omega < -\sqrt{\frac{3}{11}} \doteq -0.52, \quad B > \sqrt{\frac{3 + \sqrt{33}}{18}} \doteq 0.70, \label{bpocon} \end{equation} the solution $J_{\rm E +}$ can be real and positive also below the photon circular geodesic, at $x>2$. The solution $J_{\rm E -}$ can also be real and positive under the condition (\ref{bpocon}), but only in the region $2<x<3$. The behaviour of the functions $J_{\rm E \mp}(x;B,\Omega)$ is demonstrated in Fig. \ref{picJE}. A given current parameter $J$, represented by a horizontal line, can have multiple intersections with the function $J_{\rm E \mp}(x)$, which determine the positions of the local extrema of the effective potential $V_{\rm eff}(x)$ in the equatorial plane. The local extrema of the $J^2_{\rm E \mp}(x)$ function, given by the condition ${\partial_x J_{\rm E \mp}}~=~0$, enable us to distinguish the maxima and minima of the effective potential $V_{\rm eff}(x)$. There can exist only one local minimum, $J_{\rm E \mp (min)}$, for all combinations of the parameters $B$ and $\Omega$, see Fig. \ref{picJE}. For the string loop immersed in the combined uniform magnetic field and the spherically symmetric gravitational field described by the Schwarzschild{} spacetime, we can have two intersection points of the $J={\rm const}$ line with the function $J_{\rm E\mp}(x;B,\Omega)$ (maxima and minima of $V_{\rm eff}$) for the parameter $J>J_{\rm E \mp (min)}$, one intersection point (inflection point of $V_{\rm eff}$) for $J=J_{\rm E \mp (min)}$, and no intersection point for $J<J_{\rm E \mp (min)}$ (no extrema of $V_{\rm eff}$) -- the situation is the same as in the Schwarzschild{} spacetime without the magnetic field \citep{Kol-Stu:2010:PHYSR4:}. At the minima (maxima) of the effective potential, stable (unstable) equilibrium positions of the string loop occur. Note that the energy of the string loop at the stable equilibrium positions governs the oscillatory motion around the equilibrium state, but it is not relevant for the maximal acceleration of the string loop in the transmutation process -- the maximal acceleration is given by the local minimum of the effective potential in the flat spacetime \cite{Stu-Kol:2012:JCAP:}. \begin{figure*} \includegraphics[width=\hsize]{06} \caption{Different types of the string loop motion in the combined gravitational Schwarzschild field and the uniform magnetic field $B$. The string loop trajectories are represented for appropriately chosen characteristic cases (scattering, backscattering, collapse). The thick line represents the boundary of the motion given by the energy boundary function $E=E_{\rm b}(x,y)$; gray is the dynamical region below the black hole horizon. Sensitivity of the string loop motion to the initial conditions is observed in the neighbourhood of the unstable periodic orbits; an example is shown in the second and third figures (being enlargements of the first one) by the thick dashed line constructed for $y_{\rm s} \doteq 5.059$. We continuously vary the impact parameter $y_{\rm s}$ and determine by numerical calculations the resulting gamma factor $\gamma(y_{\rm s})$; for the characteristic values of $B$ and $\Omega$, the results are given in Fig. \ref{acceleration1} and Tabs. \ref{tab1}--\ref{tab3}.
\label{schemaACC}} \end{figure*} \begin{figure*} \includegraphics[width=\hsize]{07a} \includegraphics[width=\hsize]{07b} \includegraphics[width=\hsize]{07c} \includegraphics[width=\hsize]{07d} \caption{\label{acceleration1} The asymptotic Lorentz factor $\gamma$ obtained due to the transmutation of the string loop energy in the Schwarzschild background, with or without the uniform magnetic field $B$, is given for three characteristic values of the string loop parameter $\Omega$. The Lorentz factor $\gamma$ (vertical axis) is calculated for a string loop with energy $E=25$ and current $J=2$, starting from rest with varying initial position coordinate $y_{\rm s} \in (0,13)$ (horizontal axis), while the coordinate $x_{\rm s}$ of the initial position is calculated from Eq. (\ref{StringEnergy}). Eq. (\ref{gammax}) for the maximal acceleration implies the limiting gamma factor $\gamma_{\rm max} = 6.25$ for the non-magnetic case $B=0$. We give the topical Lorentz factor, $\gamma_{\rm top}$, numerically found in the sample, and also the efficiency of the transmutation effect expressed by the mean value of the Lorentz factor, $\gamma_{\rm mean}$, in the sample. The blue (red) colour depicts the results of the scattering (backscattering) of the string loop, while the gray colour depicts its collapse to the black hole. } \end{figure*} \begin{figure*} \includegraphics[width=\hsize]{08} \caption{The string loop effective potential $V_{\rm eff}(x)$, plotted for various combinations of the parameters $J,B,\Omega$ in the flat or Schwarzschild{} spacetime, illustrating the energy that can be used for the string loop acceleration during the transmutation process, in dependence on the magnetic field strength $B$. In the Schwarzschild{} spacetime we use the thick curve for $B=0$, and the thin curve for $B>0$; the case of the magnetic field in the flat spacetime is dashed. We demonstrate the influence of the magnetic field on the string loop acceleration in three scenarios of modifying the initial conditions with increasing $B$: 1) we keep the energy $E$ and the current $J$, but change the position $x_{\rm s}$; 2) we keep the position $x_{\rm s}$ and the energy $E$, but change the current $J$; 3) we keep the position $x_{\rm s}$ and the current $J$, but change the energy $E$. In all figures the string loop parameter $\Omega=0$. \label{situation}} \end{figure*} \subsection{Numerical simulations of the transmutation process} In the previous section, the maximal possible acceleration of the string loop has been determined, in dependence on the parameters $J,B,\Omega$, by finding the minima of the effective potential of the string loop dynamics in the uniform magnetic field in the flat spacetime, which reflects the asymptotic properties of the combined Schwarzschild gravitational field and uniform magnetic field. However, in realistic transmutation processes in the vicinity of the black hole horizon, the efficiency is usually lower than the maximal allowed efficiency, corresponding to the maximally accelerated string loop whose oscillatory motion is fully suppressed. Due to the chaotic character of the string loop equations of motion, even a tiny change in the initial conditions can completely change the character of the string loop trajectory.
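The simulations described below amount to integrating the Hamilton equations (\ref{Ham_eqEM}) for the Hamiltonian (\ref{HamHam}) and reading off the asymptotic Lorentz factor (\ref{gamma}). The following is a minimal sketch, not the production code behind the figures; in particular, the gradients of $H$ are taken here by central differences for brevity (dimensionless units):

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

J, B, Omega, E = 2.0, 0.2, -1.0, 25.0

def v_eff(r, th):
    x = r * np.sin(th)
    return (1.0 - 2.0 / r) * (B**2 * x**3 / 8.0
            + (Omega * J * B / np.sqrt(2.0) + 1.0) * x + J**2 / x)**2

def ham(s):                                # Eq. (HamHam)
    r, th, pr, pth = s
    f = 1.0 - 2.0 / r
    return 0.5 * f * pr**2 + 0.5 * pth**2 / r**2 \
           + (v_eff(r, th) - E**2) / (2.0 * f)

def rhs(zeta, s, eps=1e-6):                # Hamilton equations (Ham_eqEM)
    g = np.zeros(4)
    for i in range(4):                     # numerical gradient of H
        sp, sm = s.copy(), s.copy()
        sp[i] += eps; sm[i] -= eps
        g[i] = (ham(sp) - ham(sm)) / (2.0 * eps)
    return [g[2], g[3], -g[0], -g[1]]

capture = lambda zeta, s: s[0] - 2.05      # stop close to the horizon
capture.terminal = True

y_s = 6.0                                  # impact parameter; rest start:
x_s = brentq(lambda x: v_eff(np.hypot(x, y_s), np.arctan2(x, y_s)) - E**2,
             2.1, 40.0)                    # H = 0 fixes x_s, Eq. (StringEnergy)
s0 = [np.hypot(x_s, y_s), np.arctan2(x_s, y_s), 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 2000.0), s0, rtol=1e-10, atol=1e-10,
                events=capture)

r, th, pr, pth = sol.y[:, -1]              # final state (if not captured)
ydot = (1.0 - 2.0 / r) * pr * np.cos(th) - np.sin(th) * pth / r
gamma = E / np.sqrt(E**2 - ydot**2)        # Eqs. (restenergy), (gamma)
\end{verbatim}

Scanning $y_{\rm s}$ over the chosen interval with such an integrator yields the scattering functions $\gamma(y_{\rm s})$ discussed below.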
In order to demonstrate the effect of the magnetic field $B$ on the string loop acceleration, it is useful to compare sets of trajectories with and without the magnetic field $B$, in dependence on the string loop parameter $\Omega$, which introduces the qualitative differences in the character of the transmutation process, namely in its potential efficiency reflected by the maximal acceleration determined by $\gamma_{\rm max}$. The effect of the uniform magnetic field on the string loop acceleration will first be illustrated by sending the string loop with fixed current parameters $J$ and $\Omega$ towards the black hole from rest, with the initial position coordinate $x_{\rm s}$ adjusted in order to have a fixed string loop energy $E=25$, but with a freely varying impact parameter $y_{\rm s} \in (0,13)$ -- giving the displacement from the equatorial plane, see Fig. \ref{schemaACC}. Trajectories with large ejection velocities are expected to appear for large maximal Lorentz factors, $\gamma_{\rm max}$ -- due to Eq. (\ref{gammax}), this should occur for small values of the string loop current parameter $J$ and large values of the string loop energy $E$; we will use $J \sim 2$ and $E \sim 25$. Of course, we could test different initial parameters, but our choice is quite reasonable and illustrative \cite{Stu-Kol:2012:PHYSR4:}. The current $J$ should be minimized, but since the minimum of the effective potential $V_{\rm eff}$ is located at $x_{\rm min} \sim J$, it is difficult for the trajectories of the string loop with $J<2$ to jump over the black hole horizon (see Fig. \ref{schemaACC}), and many of the string loop trajectories will end inside the black hole. The initial starting point at $x_{\rm s}$ corresponds to an initial stretching of the string loop -- for larger $x_{\rm s}$, we start with a larger energy $E$, which could also provide a larger Lorentz factor $\gamma$ of the string loop far away from the black hole. For simplicity, we first study the role of the string loop parameter $\Omega$ in the transmutation process in a fixed magnetic field with intensity $B=0.2$, considering the characteristic values $\Omega = -1, 0, 1$, and compare the results to the case of vanishing magnetic field, $B=0$. The scattering function $\gamma(y_{\rm s})$, plotted in Fig. \ref{acceleration1}, demonstrates regular scattering regions (for example, $y_{\rm s}\in (5.2,11)$ in the $B=0$ case), where $\gamma$ depends on $y_{\rm s}$ in a quite regular way, combined with chaotic scattering regions (chaotic bands \cite{Ott:1993:book:}), where $\gamma$ depends on $y_{\rm s}$ in a completely chaotic way, and the final outcome cannot be predicted from neighbouring initial positions (for example, $y_{\rm s}\in (4.9,5.1)$ in the $B=0$ case). The reason why such irregular outcomes exist for some regions of the initial positions is the presence of unstable periodic orbits. An illustrative example of the unstable periodic orbits, and of the behaviour of the string loop trajectories in their vicinity, is presented in Fig. \ref{schemaACC} for our set of the initial conditions. The results of the study assuming fixed energy $E$ are reflected in Fig. \ref{acceleration1} by the values of the Lorentz factors $\gamma_{\rm top}$ and $\gamma_{\rm mean}$, giving the maximal and mean values of the Lorentz factor obtained from the considered sample of the initial conditions of the string loop motion in the given magnetic field.
The results confirm the expectation, based on the properties of the function $\gamma_{\rm max}(B,J,\Omega)$ given by Eq. (\ref{gammax}), that the transmutation process is most efficient for negative values of the string loop parameter $\Omega$. Notice that for the string loop in the magnetic field, the factor $\gamma_{\rm top}$ is higher than in the non-magnetic case $B=0$ only for $\Omega=-1$, being slightly lower even for $\Omega=0$, and substantially lower for $\Omega=1$. On the other hand, the value of $\gamma_{\rm mean}$ is higher in the magnetic field for all three cases of $\Omega$. Our study is very similar to the problem of chaotic scattering \cite{Ott:1993:book:} -- the initial vertical displacement, $y_{\rm s}$, plays the role of the impact parameter, while we use the resulting string loop Lorentz factor $\gamma$ instead of the scattering angle. One could assume that, due to the chaotic nature of the string loop motion, the numerical simulations will be able to find trajectories with the Lorentz factor reaching almost the maximal value, i.e., $\gamma_{\rm top} \sim \gamma_{\rm max}$. However, our numerical simulations show that such an assumption is not generally true, as we obtained only $\gamma_{\rm top} \sim 0.5 \gamma_{\rm max}$, or slightly larger values -- see Fig. \ref{acceleration1} or Tabs. \ref{tab1}--\ref{tab3}. This is a general effect of the string loop transmutation process in the field of black holes, related to the existence of the event horizon capturing the string loops entering the region of the most efficient transmutation -- in the naked singularity spacetimes, where the efficient transmutation occurs in regions containing no event horizon, and no capturing of the string loop occurs, we observe $\gamma_{\rm top} \sim \gamma_{\rm max}$ frequently \cite{Stu-Kol:2012:JCAP:,Kol-Stu:2013:PHYSR4:}. Now we have to discuss the influence of the magnitude of the magnetic field on the transmutation process, and compare the results to those related to the case of $B=0$. The physical reason for the difference in the transmutation process efficiency at the Schwarzschild{} black hole with or without the magnetic field $B$ comes from the behaviour of the minimum of the effective potential in the flat spacetime, $E_{\rm 0(min)}$, governing the behaviour of the accelerated string loop far away from the black hole. For $B=0$ we have the simple relation $E_{\rm 0(min)}=2J$, while for $B\neq 0$ the value of $E_{\rm 0(min)}$ is modified by the magnetic field intensity and the string loop parameter $\Omega$. However, the realistic transmutation process is also strongly influenced by the presence of the black hole horizon, since the region of the minimum of the effective potential, enabling high efficiency of the transmutation process, is close to the black hole horizon \cite{Stu-Kol:2012:PHYSR4:,Stu-Kol:2012:JCAP:}. Large values of the Lorentz factor $\gamma$ can be achieved by enlarging the (initial, and conserved) energy $E$, or by lowering the minimal string loop energy at infinity, $E_{\rm 0(min)}$, by lowering $J$ -- see Eq. (\ref{gammax}). We can achieve low values of the energy $E_{\rm 0(min)}$ (see Fig. \ref{Emin}), and hence large acceleration, by increasing the magnitude of the magnetic field $B$, or by using very low values of the current magnitude $J$ of the string loop with the parameter $\Omega < 0$. However, the corresponding minima are located very close to the black hole horizon, thus suppressing the probability of an observable acceleration process.
In testing the role of the magnetic field, we have to address the problem of the initial conditions, which have to be adjusted in order to enable the comparison with the case of $B=0$. The initial conditions for the string loop motion are given by the initial position and the initial speed, $x_{\rm s}, y_{\rm s}, \dot{x}_{\rm s}, \dot{y}_{\rm s}$, the internal string loop parameters $E, J, \Omega$, and the external parameter -- the intensity of the magnetic field $B$. We cannot choose all the parameters determining the initial conditions arbitrarily; for the string loop starting from rest, the parameters are related by Eq. (\ref{StringEnergy}). If we want to demonstrate the influence of the magnetic field on the string loop acceleration process by varying the external parameter $B$, we have to modify some of the internal string loop parameters $E,J,\Omega$, or the initial position and the initial speed. We will discuss three scenarios for the string loop starting from rest, distinguished according to which parameter is varied due to the increase of the external parameter $B$, assuming in all the scenarios the parameter $\Omega$ fixed: \begin{itemize} \item[1)] Initial position $x_{\rm s}$ is varied, $E$ and $J$ are fixed \item[2)] Current parameter $J$ is varied, $x_{\rm s}$ and $E$ are fixed \item[3)] String energy $E$ is varied, $x_{\rm s}$ and $J$ are fixed. \end{itemize} The behaviour of the effective potential in these three scenarios is represented in Fig. \ref{situation}. \begin{table}[!h] \begin{center} \begin{tabular}{l @{\quad} l @{\quad} | @{\quad} c @{\quad} c @{\quad} | @{\quad} c @{\quad} c} \hline & & $x_{\rm s}$ & $\gamma_{\rm max}$ & $\gamma_{\rm top}$ & $\gamma_{\rm mean}$ \\ \hline \hline $B=0$ & & $25.8$ & $6.3$ & $3.9$ & $1.6$ \\ \hline $B=0.1$ & $\Omega=-1$ & $19.5$ & $6.7$ & $4.5$ & $ 1.7$ \\ & $\Omega=0$ & $18.4$ & $6.2$ & $3.8$ & $ 1.7$ \\ & $\Omega=1$ & $17.3$ & $5.8$ & $3.3$ & $ 1.6$ \\ \hline $B=0.2$ & $\Omega=-1$ & $14.7$ & $7.2$ & $4.3$ & $ 1.9$ \\ & $\Omega=0$ & $13.7$ & $6.2$ & $3.7$ & $ 1.7$ \\ & $\Omega=1$ & $12.7$ & $5.5$ & $3.0$ & $ 1.7$ \\ \hline \end{tabular} \caption{ The characteristic values of the string loop asymptotic Lorentz factor, $\gamma_{\rm top}$ and $\gamma_{\rm mean}$, numerically obtained for the set of trajectories in the acceleration scenario 1, are compared to the maximal Lorentz factor $\gamma_{\rm max}$. In scenario 1 we keep the energy $E=25$ and the current $J=2$, while the coordinate $x_{\rm s}$ is varied according to the increase of the strength of the magnetic field $B$. } \label{tab1} \end{center} \end{table} \begin{table}[!h] \begin{center} \begin{tabular}{l @{\quad} l @{\quad} | @{\quad} c @{\quad} c @{\quad} | @{\quad} c @{\quad} c} \hline & & $J$ & $\gamma_{\rm max}$ & $\gamma_{\rm top}$ & $\gamma_{\rm mean}$ \\ \hline \hline $B=0$ & & $5.0$ & $2.5$ & $2.5$ & $1.4$ \\ \hline $B=0.01$ & $\Omega=-1$ & $7.2$ & $1.8$ & $1.8$ & $ 1.3$ \\ & $\Omega=0$ & $4.5$ & $2.8$ & $2.7$ & $ 1.4$ \\ & $\Omega=1$ & $2.8$ & $4.5$ & $3.2$ & $ 1.5$ \\ \hline $B=0.02$ & $\Omega=-1$ & $9.4$ & $1.4$ & $1.4$ & $ 1.1$ \\ & $\Omega=0$ & $2.3$ & $5.5$ & $4.0$ & $ 1.6$ \\ & $\Omega=1$ & $0.6$ & $23.4$ & $4.0$ & $ 1.6$ \\ \hline \end{tabular} \caption{ The characteristic values of the string loop asymptotic Lorentz factor, $\gamma_{\rm top}$ and $\gamma_{\rm mean}$, numerically obtained for the set of trajectories in the acceleration scenario 2, are compared to the maximal Lorentz factor $\gamma_{\rm max}$.
In scenario 2 we keep the coordinate $x_{\rm s}=25$ and the energy $E=25$ fixed, while the current $J$ is varied as the strength of the magnetic field $B$ increases. } \label{tab2} \end{center} \end{table} \begin{table}[!h] \begin{center} \begin{tabular}{ l @{\quad} l @{\quad} | @{\quad} c @{\quad} c @{\quad} | @{\quad} c @{\quad} c } \hline & & $E$ & $\gamma_{\rm max}$ & $\gamma_{\rm top}$ & $\gamma_{\rm mean}$ \\ \hline \hline $B=0$ & & $24.2$ & $6.0$ & $4.0$ & $1.5$ \\ \hline $B=0.1$ & $\Omega=-1$ & $39.6$ & $10.6$ & $6.6$ & $1.7$ \\ & $\Omega=0$ & $43.0$ & $10.7$ & $6.9$ & $1.7$ \\ & $\Omega=1$ & $46.4$ & $10.8$ & $6.6$ & $1.7$ \\ \hline $B=0.2$ & $\Omega=-1$ & $92.6$ & $26.8$ & $15.5$ & $2.2$ \\ & $\Omega=0$ & $99.4$ & $24.6$ & $14.8$ & $2.3$ \\ & $\Omega=1$ & $106.2$ & $23.3$ & $12.1$ & $2.1$ \\ \hline \end{tabular} \caption{ The characteristic values of the string loop asymptotic Lorentz factor, $\gamma_{\rm top}$ and $\gamma_{\rm mean}$, numerically obtained for the set of trajectories in acceleration scenario 3, compared to the maximal Lorentz factor $\gamma_{\rm max}$. In scenario 3 we keep the coordinate $x_{\rm s}=25$ and the current $J=2$ fixed, while the energy $E$ is varied as the strength of the magnetic field $B$ increases. } \label{tab3} \end{center} \end{table} For the first scenario (initial position $x_{\rm s}$ varied with $B$), the results of the numerical calculations of the Lorentz factor $\gamma$ are summarized in Tab. \ref{tab1} for two characteristic values of $B$. As we can see from Tab. \ref{tab1}, if we wish to keep the current $J$ and the energy $E$ fixed, the string loop is forced to start closer to the black hole horizon for $B > 0$: the initial position $x_{\rm s}$ decreases both with increasing $B$ and with increasing $\Omega$. There is a slight increase of the maximal allowed acceleration, $\gamma_{\rm max}$, with increasing parameter $B$ for $\Omega<0$, while it decreases for $\Omega>0$, in accord with the discussion of the properties of the function $E_{\rm 0(min)}(B,J,\Omega)$ in the previous section. For the string loop with $\Omega=-1$, the top value of the Lorentz factor, $\gamma_{\rm top}$, exceeds the value corresponding to the case $B=0$, while it is lower for $\Omega = 0, 1$. The value of $\gamma_{\rm top}$ decreases with increasing $B$ in all the cases $\Omega = -1, 0, 1$. For the second scenario (string loop current parameter $J$ varied with $B$), the results of the numerical simulations giving the Lorentz factor $\gamma$ are presented in Tab.~\ref{tab2} for two characteristic values of $B$ that are one order of magnitude smaller than in the previous case. We have to use smaller values of the magnetic field $B$, since there exists a critical value $B_{\rm crit}$ of the magnetic field intensity at which the current $J$ vanishes, see section \ref{analogy}. In this scenario, the parameter $J$ decreases significantly with increasing $\Omega$ and increasing $B$. We find an increase of the maximal allowed acceleration $\gamma_{\rm max}$ in comparison to the case $B=0$, caused by the decrease of the current $J$, for $\Omega = 0, 1$, while for $\Omega = -1$ the current $J$ increases and $\gamma_{\rm max}$ decreases. The value of $\gamma_{\rm top}$ decreases with increasing $B$ for the string loop with $\Omega = -1$, being lower than in the $B=0$ case, while it increases with increasing $B$ for $\Omega = 0, 1$ due to the strong decrease of $J$.
For the third scenario (string loop energy $E$ varied with $B$), the results of the numerical simulations for the Lorentz factor $\gamma$ are summarized in Tab. \ref{tab3} for the same two characteristic values of $B$ as in the first scenario. In comparison to the case of $B=0$, the energy $E$ has to be increased, and we observe a large increase of the maximal allowed acceleration $\gamma_{\rm max}$, which grows with increasing $\Omega$ for the smaller value $B=0.1$, but decreases with increasing $\Omega$ for $B=0.2$. The top value of the Lorentz factor, $\gamma_{\rm top}$, also demonstrates a substantial increase in comparison to the case $B=0$, especially for the larger magnetic field $B=0.2$. Large $\gamma$ factors are observed for the $\Omega = -1$ case, while for $\Omega = 0, 1$ the increase is smaller; in both cases it is caused by the large increase of the string energy $E$. \begin{figure*} \includegraphics[width=\hsize]{09a} \includegraphics[width=\hsize]{09b} \includegraphics[width=\hsize]{09c} \caption{\label{aa1} The Lorentz factor $\gamma$ (vertical axis), calculated for the string loop with current $J=2$, starting from rest at a fixed initial position with coordinates $x_0, y_0$, while the strength of the magnetic field $B$ is varied. The maximal acceleration given by Eq. (\ref{gammax}) is plotted as the black curve. The string loop energy $E$ is calculated from Eq. (\ref{StringEnergy}) and increases with increasing $B$. } \end{figure*} Within the third scenario we also present the numerically calculated Lorentz factors $\gamma$ as functions of the magnetic field intensity $B$ for the string loop parameters $\Omega = -1, 0, 1$ and fixed $J=2$, with a fixed initial position of the string loop starting from rest and the energy $E$ varied correspondingly. The data are plotted in Fig. \ref{aa1}; we observe an increase of the Lorentz factor $\gamma$ with increasing $B$. We also observe a combination of regions of regular behaviour of the function $\gamma(B)$ with regions of quite chaotic character, similar to those occurring in Fig. \ref{acceleration1}. Notice that $\gamma_{\rm top}$ takes the highest value for $\Omega = 1$, due to the largest increase of the energy $E$; on the other hand, it takes the lowest value for $\Omega = -1$. Moreover, in the latter case the efficient acceleration can occur only in fields with $B<0.3$ -- clearly, for larger values of $B$ the efficient transmutation processes occur very close to the horizon and the string loop is efficiently captured by the black hole. The transmutation between the oscillatory motion and the translational accelerated motion can be properly represented by the distribution of outcomes in the space of initial states $x_{\rm s}-y_{\rm s}$. The numerical simulations give the resulting Lorentz factor $\gamma$ for two characteristic values of the magnetic field, $B=0$ and $B=0.2$, two characteristic values of the string loop parameter, $J = 2$ and $J = 11$, and the three characteristic values of the string loop parameter $\Omega = -1, 0, 1$. The results are given in Fig. \ref{accelerationXY}, along with the $E={\rm const}$ levels for the considered cases of the internal and external parameters of the string loop in the combined gravitational and magnetic fields. We observe that strong acceleration and large final Lorentz factors $\gamma$ can be obtained. In astrophysically realistic situations, our results are valid up to the regions distant from the black hole where the magnetic field can be well approximated as uniform.
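The outcome classification behind Fig. \ref{accelerationXY} can be summarized by a small routine; the Python sketch below is illustrative only (the trajectory record is a hypothetical stand-in for simulation output), with the thresholds $r=1000$ and $\zeta=200$ taken from the figure caption.
\begin{verbatim}
# Sketch of the outcome classification used in Fig. (accelerationXY).
# A trajectory is assumed to carry a "captured" flag, its final radius,
# and the asymptotic Lorentz factor of the translational y-motion.
R_INFINITY = 1000.0  # radius representing "infinity" in the simulations
ZETA_MAX = 200.0     # maximal integration time

def classify(captured, r_final, gamma_y):
    """Map one simulated trajectory to the colour code of the figure."""
    if captured:
        return "grey"    # string loop collapsed to the black hole
    if r_final < R_INFINITY:
        return "white"   # still oscillating at zeta = ZETA_MAX
    return gamma_y       # escaped: colour the point by gamma

# Hypothetical examples:
print(classify(True, 2.0, None))       # grey
print(classify(False, 350.0, None))    # white
print(classify(False, 1200.0, 3.7))    # 3.7 -> coloured by gamma
\end{verbatim}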
Clearly, the condition of magnetic field uniformity, assumed to be valid in the black hole vicinity and up to reasonably large distances, will be violated in regions very distant from the black hole. A crucial question then arises: how will the string loop dynamics be influenced by the decreasing intensity of the magnetic field in the distant regions described by the flat spacetime? We plan to study this problem in a future paper. Nevertheless, we can expect that no change will occur far away from the black hole in the energy of the translational motion $E_{y}$. This energy has to be conserved, since no energy transmutation is possible in the flat spacetime and the energy mode $E_y$ is independent of the intensity of the magnetic field; i.e., the string loop translational velocity along the $y$-axis has to be conserved as well. However, we can expect a change in the energy of the string loop oscillations in the $x$-direction, since both the total energy, $E$, and the $x$-mode energy, $E_{x}$, will be modified by the decreasing intensity of the magnetic field. Another interesting question for future study arises if we assume a uniform magnetic field that is not parallel to the string loop axis. A new force then arises that turns the string loop axis parallel to the magnetic field. The non-parallel magnetic field case also opens up the question of the stability of string loops in the $\Omega>0$ configuration -- any perturbation may overturn the string loop into the $\Omega<0$ configuration, which is energetically more favourable. \begin{figure*} \includegraphics[width=\hsize]{10} \caption{ \label{accelerationXY} String loop acceleration in the Schwarzschild{} spacetime with and without the external homogeneous magnetic field with $B=0.2$, plotted for various starting points of the string loop and correspondingly modified energy. The string loop starts from rest at points in the region $x_{\rm s} \in (0.1, 30.1), y_{\rm s} \in (0.1, 30.1)$ with the angular momentum parameter fixed to the values $J=2$ (first two rows) or $J=11$ (last two rows), but with varying energy $E$ determined by the starting point (the energy levels are demonstrated in the first, $J=2$, and the third, $J=11$, row). In the distribution of the Lorentz factor $\gamma$, shown in the second row ($J=2$) and the fourth row ($J=11$), every point is coloured according to the asymptotic Lorentz factor of the translational string loop motion in the $y$-direction; the colour code is presented in the fourth row. Black denotes regions below the horizon, and grey regions correspond to string loops collapsed to the black hole. White regions correspond to string loops that do not reach ``infinity'', located at $r=1000$, within the integration time $\zeta=200$ and remain oscillating around the black hole. } \end{figure*} \section{Conclusion} We have investigated the acceleration of electric current-carrying string loops due to the transmutation process in the gravitational field of the Schwarzschild{} black hole combined with an asymptotically uniform magnetic field. We have pointed out a physical interpretation of the string loop model through the superconductivity phenomena of plasma in accretion discs. We have also given the correspondence of the parameters of the string loop model of jets to real physical quantities and estimated such quantities under realistic astrophysical conditions.
In the purely spherically symmetric gravitational field the string loop dynamics is degenerate, being independent of the string loop motion constant $\Omega$. In the combined gravitational and magnetic fields, which share only a common axial symmetry, the degeneracy is lifted and the dynamics depends strongly on the parameter $\Omega$. The effective potential of the string loop dynamics allows for one stable and one unstable equilibrium point of the string loop. The maximal acceleration of the string loop is, however, determined by the minima of the effective potential of the string loop dynamics in the uniform magnetic field immersed in the flat spacetime. The numerical analysis given in Tabs. \ref{tab1}-\ref{tab3} confirms significant acceleration (a large $\gamma$ factor) in the cases where a large $\gamma_{\rm max}$ is possible. The string loop acceleration is given by the transmutation process governed by two key ingredients: the possibility of the string loop escaping with a large ratio of the initial energy $E$ to the minimal energy at infinity $E_{\rm 0(min)}$, and the existence of a transmutation region of strong gravitational (and magnetic) fields where the chaotic regime of the string loop dynamics occurs and transmission of the energy of the oscillatory motion into the energy of the translational motion is possible. We have shown that the presence of the external homogeneous magnetic field $B$ allows the string loop to escape to infinity with a large Lorentz factor $\gamma$; the magnetic field can significantly increase the maximal acceleration given by $\gamma_{\rm max}$. We have demonstrated that for positive values of the parameter $\Omega$ the presence of the magnetic field decreases the efficiency of the transmutation effect, while it increases the efficiency for negative values of the parameter $\Omega$. We have shown that for a sufficiently high intensity of the magnetic field, string loops with negative parameter $\Omega$ can be strongly accelerated up to ultra-relativistic velocities of their translational motion. Therefore, string loops accelerated in the field of magnetized Schwarzschild{} black holes could serve as an acceptable model of the ultra-relativistic jets observed in active galactic nuclei. Fast rotation of the black hole is thus not necessary in the framework of the string loop acceleration model. One of the most important consequences of the present paper, considered from the point of view of astrophysics and observable phenomena, is that the magnetic field substantially increases the efficiency of the acceleration mechanism of the string loop. The ultra-relativistic acceleration necessary in modelling the jets observed in microquasars and active galactic nuclei is shown to be possible even for non-rotating black holes in the string loop model. This is in clear contrast to the model of ultra-relativistic jets based on the Blandford--Znajek process, which requires fast rotating black holes. Therefore, this difference can potentially provide a clear signature of the relevance of string loop models. \label{conclusion} \acknowledgments The authors would like to express their acknowledgements for the institutional support of the Faculty of Philosophy and Science of the Silesian University in Opava, the internal student grant of the Silesian University SGS/23/2013, and the EU grant Synergy CZ.1.07/2.3.00/20.0071. ZS and MK acknowledge the Albert Einstein Centre for Gravitation and Astrophysics supported by the Czech Science Foundation under grant No. 14-30786G.
Warm hospitality that has facilitated this work, extended to B.A. by the Faculty of Philosophy and Science, Silesian University in Opava (Czech Republic), and to A.T. and B.A. by the Goethe University, Frankfurt am Main, Germany, is thankfully acknowledged. The research of B.A. is supported in part by Projects No. F2-FA-F113, No. EF2-FA-0-12477, and No. F2-FA-F029 of the UzAS, by the ICTP through the OEA-PRJ-29 and OEA-NET-76 projects, and by the Volkswagen Stiftung (Grant No. 86 866). \begin{table*} \begin{tabular}{c c c c c} Quantity & Symbol & Gaussian & Geometrized & Conv. \\ \hline \\ Length & $r$ & $1$ cm & $1$ cm & $1$ \\ $\sigma$ coordinate & $\sigma$ & $1$ cm & $1$ cm & $1$ \\ $\tau$ coordinate & $[\tau]=[c t]$ & $1$ cm & $1$ cm & $1$ \\ Time & $t$ & $1$ s & $2.99\times10^{10}$ cm & $\rm{c}$ \\ Mass & $m$ & $1$ g & $7.42\times10^{-29}$ cm & $\rm{G/c^2}$ \\ Energy & $E$ & $1$ erg & $8.26\times10^{-50}$ cm & $\rm{G/c^4}$ \\ Tension & $\mu$ & $1$ dyn & $8.26\times10^{-50}$ & $\rm{G/c^4}$ \\ Neutral current & $k\varphi^2_{|a}$ & $1$ $\rm{g\cdot{s^{-1}}}$ & $2.48\times 10^{-39}$ & $\rm{G/c^3}$ \\ Electric current & $j_{\sigma}$ & $1$ statA & $9.59\times 10^{-36}$ & $\rm{\sqrt{G}/c^3}$ \\ Charge & $q$ & $1$ statC & $2.87\times 10^{-25} \rm{cm}$ & $\rm{\sqrt{G}/c^2}$ \\ Charge density & $j_{\tau}$ & $1$ $\rm{statC\cdot{cm^{-1}}}$ & $2.87\times 10^{-25}$ & $\rm{\sqrt{G}/c^2}$ \\ Magnetic field & $B$ & $1$ Gs & $8.16\times 10^{-15} \rm{cm^{-1}}$ & $\rm{\sqrt{G}/c}$ \\ \end{tabular} \caption{Units and dimensions of the physical quantities of the string loop in the Gaussian (CGS) and geometrized systems of units. \label{tabdim}} \end{table*}
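Several of the conversion factors in Tab. \ref{tabdim} can be cross-checked directly from the CGS values of $G$ and $c$; the following minimal Python sketch (the CGS constants are assumed values) reproduces a few of them:
\begin{verbatim}
# Sketch: geometrized-unit conversion factors of Tab. (tabdim),
# computed from the CGS constants (values assumed here).
G = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10   # speed of light [cm s^-1]

conversions = {
    "time   (s -> cm),     factor c":        c,
    "mass   (g -> cm),     factor G/c^2":    G / c**2,      # ~7.42e-29
    "energy (erg -> cm),   factor G/c^4":    G / c**4,      # ~8.26e-50
    "charge (statC -> cm), factor G^.5/c^2": G**0.5 / c**2, # ~2.87e-25
}
for name, factor in conversions.items():
    print(name, ":", format(factor, ".3e"))
\end{verbatim}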
\section{Introduction} Geometrically frustrated spin systems exhibit a low-temperature behavior that is fundamentally different from conventional (non-frustrated) spin systems.\cite{Ramirez:2001, Ramirez:2003} The incompatibility between local interactions and global symmetry in geometrically frustrated magnets leads to a macroscopic degeneracy that prevents these systems from ordering. In some cases this degeneracy is lifted by further neighbor interactions or by a symmetry-breaking lattice distortion, resulting in ordered spin structures at temperatures that are significantly lower than what would be expected simply from the strength of the nearest neighbor interaction. Since usually several different ordered configurations with comparable energy exist in these systems, a very rich low temperature phase diagram can be observed. Recently, it has been found in various magnetic spinel systems (general chemical formula: $AB_2X_4$) that the geometrical frustration among the $B$ sites in the spinel structure can give rise to pronounced effects due to \emph{spin-lattice coupling}. In ZnCr$_2$O$_4$ and CdCr$_2$O$_4$ the macroscopic degeneracy is lifted by a tetragonal lattice distortion, resulting in complicated non-collinear spin ordering.\cite{Lee_et_al:2000, Chung_et_al:2005} In addition, a pronounced splitting of certain phonon modes due to strong \emph{spin-phonon coupling} has been found in ZnCr$_2$O$_4$.\cite{Sushkov_et_al:2005,Fennie/Rabe:2006} Non-collinear spiral magnetic ordering at low temperatures has also been found in CoCr$_2$O$_4$ and MnCr$_2$O$_4$,\cite{Hastings/Corliss:1962, Menyuk/Dwight/Wold:1964} where the presence of a second magnetic cation on the spinel $A$ site lifts the macroscopic degeneracy. Such non-collinear spiral magnetic order can break spatial inversion symmetry and lead to the appearance of a small electric polarization and pronounced \emph{magneto-electric coupling}.\cite{Kimura_et_al_Nature:2003, Lawes_et_al:2005} Indeed, dielectric anomalies at the magnetic transition temperatures have been found in polycrystalline CoCr$_2$O$_4$, \cite{Lawes_et_al:2006} and recently a small electric polarization has been detected in single crystals of the same material.\cite{Yamasaki_et_al:2006} Magnetic spinels therefore constitute a particularly interesting class of frustrated spin systems exhibiting various forms of coupling between their magnetic and structural properties. Furthermore, both $A$ and $B$ sites in the spinel structure can be occupied by various magnetic ions and simultaneously the $X$ anion can be varied between O, S, or Se. This compositional flexibility opens up the possibility to chemically tune the properties of these systems. To understand the underlying mechanisms of the various forms of magneto-structural coupling, it is important to first understand the complex magnetic structures found in these systems. Such complex magnetic structures can be studied using model Hamiltonians for interacting spin systems, which can be treated either classically or fully quantum mechanically. For the cubic spinel systems, a theory of the ground state spin configuration has been presented by Lyons, Kaplan, Dwight, and Menyuk (LKDM)\cite{Lyons_et_al:1962} about 45 years ago. 
Using a model of classical Heisenberg spins and considering only $BB$ and $AB$ nearest neighbor interactions, LKDM could show that in this case the ground state magnetic structure is determined by the parameter \begin{equation} u = \frac{4 \tilde{J}_{BB} S_B}{3 \tilde{J}_{AB} S_A} \quad , \end{equation} which represents the relative strength of the two different nearest neighbor interactions $\tilde{J}_{BB}$ and $\tilde{J}_{AB}$.\cite{footnote1} For $u \leq u_0=8/9$ the collinear N{\'e}el configuration, i.e. all $A$-site spins parallel to each other and anti-parallel to the $B$-site spins, is the stable ground state. For $u > u_0$ it was shown that a ferrimagnetic spiral configuration has the lowest energy out of a large set of possible spin configurations and that it is locally stable for $u_0 < u < u''\approx 1.298$, while for $u > u''$ this ferrimagnetic spiral configuration is unstable. Therefore, it was suggested that the ferrimagnetic spiral is very likely the ground state for $u_0 < u < u''$, but can definitely not be the ground state for $u > u''$.\cite{Lyons_et_al:1962} On the other hand, it has been found that neutron scattering data for both CoCr$_2$O$_4$ and MnCr$_2$O$_4$ are well described by the ferrimagnetic spiral configuration suggested by LKDM, although a fit of the experimental data to the theoretical spin structure leads to values of $u \approx 2.0$ for CoCr$_2$O$_4$,\cite{Menyuk/Dwight/Wold:1964} and $u \approx 1.6$ for MnCr$_2$O$_4$,\cite{Hastings/Corliss:1962} which according to the LKDM theory correspond to the locally unstable regime. Surprisingly, the overall agreement of the fit is better for CoCr$_2$O$_4$ than for MnCr$_2$O$_4$, even though the value of $u$ for CoCr$_2$O$_4$ lies further within the unstable region than in the case of MnCr$_2$O$_4$. From this it has been concluded: i) that the ferrimagnetic spiral is a good approximation of the true ground state structure even for $u > u''$, ii) that the importance of effects not included in the theory of LKDM is probably more significant in MnCr$_2$O$_4$ than in CoCr$_2$O$_4$, and iii) that the ferrimagnetic spiral is indeed very likely to be the true ground state for systems with $u_0 < u < u''$.\cite{Hastings/Corliss:1962,Menyuk/Dwight/Wold:1964,Lyons_et_al:1962} Recently, Tomiyasu {\it et al.} fitted their neutron scattering data for CoCr$_2$O$_4$ and MnCr$_2$O$_4$ using a ferrimagnetic spiral structure similar to the one proposed by LKDM, but with the cone angles of the individual magnetic sublattices not restricted to the values imposed by the LKDM theory.\cite{Tomiyasu/Fukunaga/Suzuki:2004} As originally suggested by LKDM, they interpreted their results as indicative of a collinear N{\'e}el-like ferrimagnetic component exhibiting long-range order below $T_C$ and a spiral component, which exhibits only short-range order even in the lowest temperature phase. In order to assess the validity of the LKDM theory and to facilitate a better comparison with experimental data, an independent determination of the magnetic coupling constants in these systems is very desirable. Density functional theory (DFT, see Ref.~\onlinecite{Jones/Gunnarsson:1989}) provides an efficient way for the {\it ab initio} determination of such magnetic coupling constants that can then be used for an accurate modeling of the spin structure of a particular system.
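To make the role of $u$ concrete, the following minimal Python sketch (illustrative only; the critical values $u_0=8/9$ and $u''\approx 1.298$ and the fitted $u$ values are those quoted above) evaluates $u$ and the corresponding LKDM regime:
\begin{verbatim}
# Sketch: LKDM parameter u and the resulting ground-state regime.
U0 = 8.0 / 9.0   # Neel configuration stable for u <= u0
U_PP = 1.298     # ferrimagnetic spiral locally stable for u0 < u < u''

def lkdm_u(J_BB, J_AB, S_B, S_A):
    # u = 4 J~_BB S_B / (3 J~_AB S_A), with J~ the bare couplings.
    return 4.0 * J_BB * S_B / (3.0 * J_AB * S_A)

def regime(u):
    if u <= U0:
        return "collinear Neel ground state"
    if u < U_PP:
        return "ferrimagnetic spiral (locally stable)"
    return "ferrimagnetic spiral unstable; ground state unknown"

# Values obtained from fits to neutron data (see text):
for name, u in [("MnCr2O4", 1.6), ("CoCr2O4", 2.0)]:
    print(name, "u =", u, "->", regime(u))
\end{verbatim}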
DFT also offers a straightforward way to investigate the effect of structural distortions on the magnetic coupling constants, and is therefore ideally suited to study the coupling between magnetism and structural properties. Traditionally, insulating magnetic oxides represent a great challenge for DFT-based methods due to the strong Coulomb interaction between the localized $d$ electrons. However, recently the local spin density approximation plus Hubbard $U$ (LSDA+$U$) method has been very successful in correctly determining various properties of such strongly correlated magnetic insulators.\cite{Anisimov/Aryatesiawan/Liechtenstein:1997} In particular, it has been used for the calculation of magnetic coupling constants in a variety of transition metal oxides.\cite{Solovyev/Terakura:1998,Yaresko_et_al:2000,Baettig/Ederer/Spaldin:2005,Novak/Rusz:2005,Fennie/Rabe:2006} Here we present an LSDA+$U$ study of the magnetic coupling constants in the spinel systems CoCr$_2$O$_4$ and MnCr$_2$O$_4$. The goal of the present paper is to provide accurate values for the relevant coupling constants in these two systems, in order to test the assumptions made by LKDM and to resolve the uncertainties in the interpretation of the experimental data. In addition, we assess the general question of how accurately such magnetic coupling constants in complex oxides can be determined using the LSDA+$U$ method. We find that, in contrast to the assumptions of the LKDM theory, the coupling between the $A$ site cations is not necessarily negligible, but that the general validity of the LKDM theory should be better for CoCr$_2$O$_4$ than for MnCr$_2$O$_4$, in agreement with what has been concluded from the experimental data. However, in contrast to what follows from fitting the experimental data to the LKDM theory, the calculated $u$ for CoCr$_2$O$_4$ is smaller than the value of $u=2.0$ obtained from the experimental fit. In addition, we show that by analyzing the dependence of the magnetic coupling constants on the LSDA+$U$ parameters and on the lattice constant, the various mechanisms contributing to the magnetic interaction can be identified, and a quantitative estimate of the corresponding coupling constant can be obtained within certain limits. The present paper is organized as follows. In Sec.~\ref{methods} we present the methods we use for our calculations. In particular, we give a brief overview of the LSDA+$U$ method and the challenges in using this method as a quantitative and predictive tool. In Sec.~\ref{sec:results} we present our results for the lattice parameters, electronic structure, and magnetic coupling constants of the two investigated Cr spinels. Furthermore, we analyze in detail the dependence of the magnetic coupling constants on the lattice constant and the LSDA+$U$ parameters, and we discuss the reasons for the observed trends. We end with a summary of our main conclusions. \section{Methods} \label{methods} \subsection{LSDA+$U$} \label{sec:methods} The LSDA+$U$ method offers an efficient way to calculate the electronic and magnetic properties of complex transition metal oxides.
The idea behind the LSDA+$U$ method is to explicitly include the Coulomb interaction between strongly localized $d$ or $f$ electrons in the spirit of a mean-field Hubbard model, whereas the interactions between the less localized $s$ and $p$ electrons are treated within the standard local spin density approximation (LSDA).\cite{Anisimov/Zaanen/Andersen:1991} To achieve this, a Hubbard-like interaction term $E_U$, which depends on the occupation of the localized orbitals, is added to the LSDA total energy, and an additional double counting correction $E_\text{dc}$ is introduced to subtract that part of the electron-electron interaction between the localized orbitals that is already included in the LSDA: \begin{equation} \label{eq:ldau} E = E_\text{LSDA} + E_U - E_\text{dc} \quad. \end{equation} Here \begin{equation} \label{E_U} E_U = \frac{1}{2} \sum_{ \{ \gamma \} } \left( U_{\gamma_1 \gamma_3 \gamma_2 \gamma_4} - U_{\gamma_1 \gamma_3 \gamma_4 \gamma_2} \right) n_{\gamma_1 \gamma_2} n_{\gamma_3 \gamma_4} \end{equation} and \begin{equation} \label{eq:edc} E_\text{dc} = \frac{U}{2} n (n-1) - \frac{{\cal J}^H}{2} \sum_s n_s (n_s - 1) \quad , \end{equation} where $\gamma = (m,s)$ is a combined orbital and spin index of the correlated orbitals, $n_{\gamma_1\gamma_2}$ is the corresponding orbital occupancy matrix, $n_s = \sum_m n_{ms,ms}$ and $n = \sum_s n_s$ are the corresponding traces with respect to spin and both spin and orbital degrees of freedom, and $U_{\gamma_1\gamma_3 \gamma_2\gamma_4} = \langle m_1m_3|V_\text{ee}|m_2m_4 \rangle \delta_{s_1s_2}\delta_{s_3s_4}$ are the matrix elements of the screened electron electron interaction, which are expressed as usual in terms of two parameters, the Hubbard $U$ and the intra-atomic Hund's rule parameter ${\cal J}^H$ (see Ref.~\onlinecite{Anisimov/Aryatesiawan/Liechtenstein:1997}). The LSDA+$U$ method has been shown to give the correct ground states for many strongly correlated magnetic insulators, and thus represents a significant improvement over the LSDA for such systems.\cite{Anisimov/Aryatesiawan/Liechtenstein:1997} Furthermore, the LSDA+$U$ method is very attractive due to its simplicity and the negligible additional computational effort compared to a conventional LSDA calculation. It therefore has become a widely used tool for the study of strongly correlated magnetic insulators. Since the LSDA+$U$ method treats the interactions between the occupied orbitals only in an effective mean-field way, it fails to describe systems where dynamic fluctuations are important. For such systems, the local density approximation plus dynamical mean field theory (LDA+DMFT), which also includes local dynamic correlations, has been introduced recently.\cite{Anisimov_et_al:1997} However, the LDA+DMFT method is computationally rather demanding, and is currently too costly to be used for the calculation of magnetic characteristics of such complex materials as the spinels. On the other hand, for a large number of systems such fluctuations are only of minor importance, and for these systems the LSDA+$U$ method leads to a good description of the electronic and magnetic properties. However, in order to obtain reliable results, the use of the LSDA+$U$ method should be accompanied by a careful analysis of all the uncertainties inherent in this method. An additional goal of the present paper is therefore to critically assess the predictive capabilities of the LSDA+$U$ method for the determination of magnetic coupling constants in complex transition metal oxides. 
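As a numerical illustration of the double counting correction, Eq.~(\ref{eq:edc}), the short Python sketch below (with hypothetical occupation numbers) evaluates $E_\text{dc}$ for given $U$, ${\cal J}^H$, and spin-resolved occupations:
\begin{verbatim}
# Sketch: LSDA+U double counting correction, Eq. (eq:edc):
#   E_dc = U/2 * n*(n-1) - J_H/2 * sum_s n_s*(n_s - 1)
def e_dc(U, J_H, n_up, n_down):
    n = n_up + n_down
    return 0.5 * U * n * (n - 1.0) - 0.5 * J_H * (
        n_up * (n_up - 1.0) + n_down * (n_down - 1.0)
    )

# Hypothetical d^3 ion (Cr3+-like), fully spin polarized,
# with U = 3 eV and J_H = 1 eV:
print(e_dc(3.0, 1.0, n_up=3.0, n_down=0.0), "eV")  # 9.0 - 3.0 = 6.0 eV
\end{verbatim}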
Apart from the question about the general applicability of the LSDA+$U$ approach to the investigated system, and the unavoidable ambiguities in the definition of the LSDA+$U$ energy functional (Eqs.~(\ref{eq:ldau})-(\ref{eq:edc})),\cite{Solovyev/Dederichs/Anisimov:1994,Czyzyk/Sawatzky:1994} the proper choice of the parameters $U$ and ${\cal J}^H$ represents one of the main hurdles when the LSDA+$U$ method is used as a quantitative and predictive tool. $U$ and ${\cal J}^H$ can in principle be calculated using constrained density functional theory,\cite{Dederichs_et_al:1984} thus rendering the LSDA+$U$ method effectively parameter-free. In practice, however, the exact definition of $U$ and ${\cal J}^H$ within a solid is not obvious, and the calculated values depend on the choice of orbitals or the details of the method used for their determination.\cite{Hybertsen/Schlueter/Christensen:1989,Solovyev/Hamada/Terakura:1996,Pickett/Erwin/Ethridge:1998,Cococcioni/Gironcoli:2005} Therefore, parameters obtained for a certain choice of orbitals are not necessarily accurate for calculations using a different set of orbitals. In the present work we thus pursue a different approach. We choose values for $U$ and ${\cal J}^H$ based on a combination of previous constrained DFT calculations, experimental data, and physical reasoning, and these values are then varied within reasonable limits to study the resulting effect on the physical properties. In particular, for the spinel systems studied in this work the Hubbard $U$s on the transition metal sites are varied between 2\,eV and 6\,eV (in 1\,eV increments), with the additional requirement that $U_\text{Cr} \leq U_{A}$ ($A$ = Co, Mn). For the on-site Hund's rule coupling we use two different values, ${\cal J}^H$ = 0\,eV and ${\cal J}^H$ = 1\,eV, with ${\cal J}^H_\text{Cr} = {\cal J}^H_{A}$. The conditions $U_\text{Cr} \leq U_{A}$ and ${\cal J}^H_\text{Cr} = {\cal J}^H_{A}$ are motivated by constrained DFT calculations for a series of transition metal perovskite systems, which showed that the Hubbard $U$ increases continuously from V to Cu, whereas the on-site exchange parameter ${\cal J}^H$ is more or less constant across the series.\cite{Solovyev/Hamada/Terakura:1996} A similar trend for $U$ can be observed in the simple transition metal monoxides.\cite{Anisimov/Zaanen/Andersen:1991,Pickett/Erwin/Ethridge:1998} Although in the spinel structure the coordination and formal charge state of the $A$ cation are different from those of the $B$ cation, we assume that the relation $U_\text{Cr} \leq U_{A}$ is nevertheless valid, since the screening on the sixfold coordinated $B$ site is expected to be more effective than on the tetrahedral $A$ site. Further evidence for the validity of this assumption is given by the relative widths of the $d$ bands on the $A$ and $B$ sites obtained from the calculated orbitally resolved densities of states (see Fig.~\ref{fig:dos} and Sec.~\ref{DOS}).
The absolute values of $U$ used in this work are motivated by recent constrained DFT calculations using linear response techniques,\cite{Pickett/Erwin/Ethridge:1998,Cococcioni/Gironcoli:2005} which lead to significantly smaller values of $U$ than previous calculations using the linear muffin tin orbital (LMTO) method, where the occupation numbers are constrained by simply setting all transfer matrix elements out of the corresponding orbitals to zero.\cite{Anisimov/Gunnarsson:1991,Solovyev/Hamada/Terakura:1996} Typical values obtained for various transition metal ions in different chemical environments are between 3-6\,eV.\cite{Pickett/Erwin/Ethridge:1998,Cococcioni/Gironcoli:2005} For the Cr$^{3+}$ ion a value of $U \approx 3$\,eV, derived by comparing the calculated densities of states with photo-emission data, has been used successfully.\cite{Fennie/Rabe_CCS:2005,Fennie/Rabe:2006} We thus consider the values $U_A$ = 4-5\,eV and $U_\text{Cr}$ = 3\,eV as the most adequate $U$ parameters for our systems. Nevertheless, we vary these parameters here over a much larger range, in order to see and discuss the resulting trends in the calculated magnetic coupling constants. For the Hund's rule parameter ${\cal J}^H$ screening effects are less important, and calculated values for various systems are all around or slightly lower than 1\,eV.\cite{Anisimov/Zaanen/Andersen:1991,Solovyev/Hamada/Terakura:1996} On the other hand, a simplified LSDA+$U$ formalism is sometimes used, where the only effect of ${\cal J}^H$ is to reduce the effective Coulomb interaction $U_\text{eff} = U - {\cal J}^H$.\cite{Sawada_et_al:1997,Dudarev_et_al:1998,Ederer/Spaldin_2:2005} In this work we use the two values ${\cal J}^H$ = 0\,eV and ${\cal J}^H$ = 1\,eV to study the resulting effect on the magnetic coupling constants. \subsection{Other technical details} \label{sec:details} To determine the magnetic coupling constants corresponding to the closest neighbor magnetic interactions between the various sublattices, we calculate the total energy differences for four different collinear magnetic configurations: the N{\'e}el type ferrimagnetic order, the ferromagnetic configuration, and two different configurations with anti-parallel magnetic moments within the $A$ and $B$ sub-lattices respectively, and we then project the resulting total energies on a simple classical Heisenberg model, \begin{equation} E = - \sum_{i,j} \tilde{J}_{ij} \vec{S}_i \cdot \vec{S}_j = - \sum_{i,j} J_{ij} \hat{e}_i \cdot \hat{e}_j \quad , \label{eq:Heisenberg} \end{equation} where only the nearest neighbor coupling constants $J_{AB}$, $J_{BB}$, and $J_{AA}$ are assumed to be nonzero, and where we defined the coupling constants $J_{ij} = \tilde{J}_{ij} S_i S_j$ corresponding to normalized spin directions $\hat{e}_i$ of the magnetic ions. We note that even though for itinerant systems such as the elementary magnets Fe, Co, and Ni, the coupling constants obtained in this way can be different from the ones obtained for only small variations from the collinear configurations,\cite{Liechtenstein/Katsnelson/Gubanov:1984} the local magnetic moments of many insulating transition metal oxides, in particular the systems investigated in the present study, behave much more like classical Heisenberg spins and thus the simpler approach pursued in this work is justified. We point out that a determination of all possible further neighbor interactions is beyond the scope of this paper and is therefore left for future studies. 
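The projection onto the Heisenberg model, Eq.~(\ref{eq:Heisenberg}), amounts to a small linear solve. The Python sketch below is a minimal illustration: the bond-count coefficients per formula unit depend on the actual spin arrangements in the supercell and are entered here as hypothetical placeholders, and the energies are dummy values standing in for DFT total energies.
\begin{verbatim}
# Sketch: projecting four collinear total energies onto
#   E(config) = E0 - sum_ij J_ij e_i . e_j .
# Each row holds [1, c_AB, c_BB, c_AA], where c_ij is the signed sum
# of e_i . e_j over nearest-neighbor ij bonds in that configuration
# (hypothetical coefficients -- the real ones follow from the cell).
import numpy as np

coeffs = np.array([
    [1.0, -12.0,  6.0,  2.0],  # Neel: A antiparallel to B
    [1.0,  12.0,  6.0,  2.0],  # ferromagnetic
    [1.0,   0.0,  6.0, -2.0],  # A sublattice split antiparallel
    [1.0,   0.0, -2.0,  2.0],  # B sublattice split antiparallel
])
energies = np.array([-1.20, -1.05, -1.10, -1.08])  # dummy values (eV)

# E = E0 - c_AB*J_AB - c_BB*J_BB - c_AA*J_AA for each configuration
matrix = np.column_stack(
    [coeffs[:, 0], -coeffs[:, 1], -coeffs[:, 2], -coeffs[:, 3]])
E0, J_AB, J_BB, J_AA = np.linalg.solve(matrix, energies)
print(f"J_AB = {J_AB*1e3:.2f} meV, J_BB = {J_BB*1e3:.2f} meV, "
      f"J_AA = {J_AA*1e3:.2f} meV")
\end{verbatim}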
We perform calculations at both the experimentally determined lattice constants and the theoretical lattice parameters. The theoretical lattice parameters are obtained by a full structural relaxation within the LSDA for a collinear N{\'e}el-type magnetic configuration. The same LSDA lattice parameters are used in all our calculations with varying values of the LSDA+$U$ parameters $U$ and ${\cal J}^H$. In order to reduce the required computational effort, we do not perform relaxations for each individual set of LSDA+$U$ parameters. Except when noted otherwise, all calculations are performed using the ``Vienna Ab-initio Simulation Package'' (VASP) employing the projector augmented wave (PAW) method.\cite{Bloechl:1994,Kresse/Furthmueller_PRB:1996,Kresse/Joubert:1999} We use a plane wave energy cutoff of 450\,eV (550\,eV for relaxations) and a 5$\times$5$\times$5 $\Gamma$-centered mesh for Brillouin zone integrations. Increasing the mesh density by using an 8$\times$8$\times$8 mesh results only in negligible changes of the calculated total energy differences. Structural relaxations are performed until the forces are less than 10$^{-5}$ eV/\AA\ and all components of the stress tensor are smaller than 0.02 kbar. The electronic self-consistency cycle is iterated until the total energy is converged to better than 10$^{-8}$\,eV. In addition, we perform some test calculations using the full-potential linear-augmented-plane-wave (FLAPW) method.\cite{Wimmer:1981} For these calculations we use the Wien97 code\cite{Blaha:1990} with our own implementation of the LSDA+$U$ method. The plane-wave cut-off parameter is set to $223$\,eV in these calculations, and the Brillouin-zone integration is also carried out on a 5$\times$5$\times$5 $\Gamma$-centered mesh. The criterion for self-consistency is that the difference in the total energy between the last two iterations is less than $10^{-4}$\,Ry. \section{Results and discussion} \label{sec:results} \subsection{Structural relaxation} \begin{table} \caption{Structural parameters calculated in this work. $a$ is the lattice constant of the cubic spinel structure, and the internal structural parameter $x$ corresponds to the Wyckoff position 32e ($x$,$x$,$x$) of the oxygen sites. Columns ``theo.'' contain the values calculated in this work, while columns ``exp.'' contain experimental data.} \label{tab:struc} \begin{ruledtabular} \begin{tabular}{c|cc|cc} & \multicolumn{2}{c|}{CoCr$_2$O$_4$} & \multicolumn{2}{c}{MnCr$_2$O$_4$} \\ & exp. (Ref.~\onlinecite{Lawes_et_al:2006}) & theo. & exp. (Ref.~\onlinecite{Ram_private}) & theo. \\ \hline $a$ [\AA] & 8.335 & 8.137 & 8.435 & 8.242 \\ $x$ & 0.264 & 0.260 & 0.264 & 0.262 \\ \end{tabular} \end{ruledtabular} \end{table} Table~\ref{tab:struc} shows the structural parameters obtained in this work together with the corresponding experimental data. The theoretical lattice constants are obtained within the LSDA for N{\'e}el-type ferrimagnetic order, and are about 2.3~\% smaller than the corresponding experimental values for both materials. The calculated internal structural parameters $x$ are in very good agreement with experiment.
The underestimation of the lattice constant by a few percent is a typical feature of the LSDA in complex transition metal oxides.\cite{Neaton_et_al:2005,Fennie/Rabe:2006} \subsection{Electronic structure} \label{DOS} Fig.~\ref{fig:dos} shows the densities of states for both CoCr$_2$O$_4$ and MnCr$_2$O$_4$ calculated using the LSDA and the LSDA+$U$ method with $U_A$ = 5\,eV, $U_\text{Cr}$ = 3\,eV, ${\cal J}^H_A$ = ${\cal J}^H_\text{Cr}$ = 0\,eV, and a collinear N{\'e}el-type magnetic configuration at the experimental lattice constants. Both systems are insulating within the LSDA, although the LSDA energy gap for CoCr$_2$O$_4$ is very small, about 0.15\,eV. The LSDA gap is larger for MnCr$_2$O$_4$, since in this system the gap is determined by the relatively strong crystal-field splitting on the octahedral $B$ site and the equally strong magnetic splitting, whereas in CoCr$_2$O$_4$ the width of the LSDA gap is limited by the small crystal-field splitting on the tetrahedrally coordinated $A$ site. The use of the LSDA+$U$ method increases the width of the energy gap substantially and pushes the majority $d$ states on the $A$ site down in energy, leading to strong overlap with the oxygen 2$p$ states. In the LSDA the transition metal $d$ states are well separated from the oxygen $p$ manifold, whereas the LSDA+$U$ method increases the energetic overlap between these states. In all cases the gap is between occupied and unoccupied transition metal $d$ states. \begin{figure}[htbp] \centerline{\includegraphics*[width=0.9\columnwidth]{dos.eps}} \caption{Densities of states (in states/eV) for CoCr$_2$O$_4$ (left two panels) and MnCr$_2$O$_4$ (right two panels) calculated using the LSDA (upper two panels) and the LSDA+$U$ method with $U_A$ = 5\,eV, $U_\text{Cr}$ = 3\,eV, and ${\cal J}^H_A$ = ${\cal J}^H_\text{Cr}$ = 0\,eV (lower two panels). Calculations were done at the experimental lattice parameters for a collinear N{\'e}el-type ferrimagnetic structure where the direction of the Cr magnetic moments was defined as ``spin-up'', and corresponding ``spin-down'' states are shown with a negative sign. The gray shaded areas represent the total density of states, the curves shaded with diagonal lines represent the $d$ states on the $A$ site of the spinel lattice, and the thick black lines correspond to the Cr $d$ states. Zero energy separates the occupied from the unoccupied states.} \label{fig:dos} \end{figure} It can be seen that the bandwidth of the $d$-bands for the tetrahedrally coordinated $A$ site is indeed smaller than for the octahedral $B$ site. Thus, the $d$ states on the $A$ sites are more localized and one can expect a larger on-site Coulomb interaction than on the Cr $B$ site, in agreement with the assumption that $U_\text{Cr} \leq U_{A}$ (see Sec.~\ref{sec:methods}). \subsection{Magnetic coupling constants} Fig.~\ref{fig:cco1} shows the magnetic coupling constants calculated using the experimental lattice parameters, ${\cal J}^H_{A} = {\cal J}^H_\text{Cr} = 1$\,eV, and different values of the Hubbard $U$ on the $A$ and $B$ sites. All coupling constants are negative, i.e. antiferromagnetic, and decrease in strength when the Hubbard parameters are increased. The ``inter-sublattice'' coupling $J_{AB}$ depends similarly on both $U_A$ and $U_B$, whereas both ``intra-sublattice'' coupling constants $J_{BB}$ and $J_{AA}$ depend only on the Hubbard parameter of the corresponding sublattice. 
\begin{figure} \centerline{\includegraphics*[width=0.9\columnwidth]{expJ1.eps}} \caption{Magnetic coupling constants $J_{AB}$ (upper panels), $J_{BB}$ (middle panels), and $J_{AA}$ (lower panels) calculated for CoCr$_2$O$_4$ (left) and MnCr$_2$O$_4$ (right) as a function of $U_A$ for $U_\text{Cr}$ = 2\,eV (open circles), 3\,eV (filled squares), 4\,eV (open diamonds), and 5\,eV (filled triangles). All calculations were performed using the experimental lattice parameters and ${\cal J}^H_{A}$ = ${\cal J}^H_\text{Cr}$ = 1\,eV.} \label{fig:cco1} \end{figure} The $BB$ interaction in the spinel lattice is known to result from a competition between antiferromagnetic (AFM) direct cation-cation exchange and indirect cation-anion-cation exchange, which for the present case of a 90$^\circ$ cation-anion-cation bond angle gives rise to a ferromagnetic (FM) interaction.\cite{Goodenough:Book} The AFM direct interaction is expected to dominate at smaller volumes, whereas at larger volumes the FM indirect interaction should be stronger. Furthermore, it is important to note that even the pure direct cation-cation interaction is composed of two parts: (i) the ``potential exchange'' due to the standard Heitler-London exchange integral, which is always FM for orthogonal orbitals but is usually negligible, and (ii) the AFM ``kinetic exchange'', which results from a second order perturbation treatment of the electron hopping and is proportional to 1/$U$.\cite{Anderson:1963,Goodenough:Book} The observed $U$ dependence of $J_{BB}$ can thus be understood as follows: at small values of $U$ the AFM direct kinetic exchange is strongest, but it is suppressed as the value of $U$ is increased. The FM indirect cation-anion-cation exchange also decreases, but in addition increasing $U$ shifts the cation $d$ states down in energy and thus leads to enhanced hybridization with the anion $p$ states (see Sec.~\ref{DOS}). This enhanced hybridization partially compensates the effect of increasing $U$, so that the indirect exchange decreases more slowly than 1/$U$. Therefore, the FM indirect exchange is less affected by increasing $U$ than the AFM direct exchange, and thus gains in strength relative to the latter. This explains why the observed decrease of $J_{BB}$ is faster than $1/U$. In fact, for the larger experimental volumes and using ${\cal J}^H = 0$\,eV for the Hund's rule coupling (see discussion below) the $BB$ coupling in both systems even becomes slightly FM for large $U$. The $AB$ coupling in the spinels is mediated by a cation-anion-cation bond with an intermediate angle of $\sim$120$^\circ$, which makes it difficult to predict the sign of the coupling based on general considerations. A weak AFM interaction has been proposed for the case of empty $e_g$ orbitals on the $B$ site,\cite{Wickham/Goodenough:1959} in agreement with the present results. Comparing the values of $J_{AB}$ and $J_{BB}$ calculated for a constant set of LSDA+$U$ parameters shows that both are of the same order of magnitude. On the other hand, $J_{AA}$ is expected to be significantly weaker, since it corresponds to a cation-cation distance of $\sim 3.6$\,\AA\ with the shortest superexchange path along an $A$-O-O-$A$ bond sequence. Based on this assumption $J_{AA}$ was neglected in the theoretical treatment of LKDM.\cite{Lyons_et_al:1962} Nevertheless, in our calculations $J_{AA}$ is found to be of appreciable strength.
This is particularly striking for MnCr$_2$O$_4$, but also for CoCr$_2$O$_4$ the difference between $J_{AA}$ and $J_{AB}$ ($J_{BB}$) is less than an order of magnitude. From this it follows that $J_{AA}$ is definitely non-negligible in MnCr$_2$O$_4$ and can also lead to significant deviations from the LKDM theory in CoCr$_2$O$_4$. We point out that this conclusion holds true independently of the precise values of the LSDA+$U$ parameters used in the calculation. An appreciable value for $J_{AA}$ has also been found in a previous LSDA study of the spinel ferrite MnFe$_2$O$_4$.\cite{Singh/Gupta/Gupta:2002} As stated in Sec.~\ref{sec:details}, a full determination of all possible further neighbor interactions is beyond the scope of this paper. However, to obtain a rough estimate of the strength of further neighbor interactions in CoCr$_2$O$_4$, and to see whether this affects the values of $J_{AB}$, $J_{BB}$, and $J_{AA}$ obtained in this work, we perform some additional calculations using a doubled unit cell. This allows us to determine the coupling constant $J^{(3)}_{BB}$, corresponding to the third nearest neighbor $BB$ coupling. As shown in Ref.~\onlinecite{Dwight/Menyuk:1967}, due to the special geometry of the spinel structure, this third nearest neighbor coupling is larger than all other further neighbor interactions within the $B$ sublattice, and can be expected to represent the next strongest magnetic interaction apart from $J_{BB}$, $J_{AB}$, and $J_{AA}$. This coupling is mediated by a $B$-O-O-$B$ bond sequence and corresponds to a $BB$ distance of 5.89\,\AA. For comparison, the distances corresponding to $J_{BB}$, $J_{AB}$, and $J_{AA}$ are 2.94\,\AA, 3.46\,\AA, and 3.61\,\AA, respectively.\cite{distances} For these test calculations we use the experimental lattice parameters of CoCr$_2$O$_4$ and the LSDA+$U$ parameters $U_\text{Co}=5$\,eV, $U_\text{Cr}=3$\,eV, and ${\cal J}^H=0$\,eV. We obtain a value of $J^{(3)}_{BB} = 0.15$\,meV, corresponding to a weak FM coupling. However, the magnitude of $J^{(3)}_{BB}$ is small compared to $J_{AB}$ and $J_{BB}$, and we therefore continue to neglect further neighbor interactions in the following. Next we calculate the magnetic coupling constants using the lattice parameters obtained by a full structural optimization within the LSDA, and also by using ${\cal J}^H = 0$\,eV at both the experimental and the theoretical lattice parameters. Again, we vary $U_{A}$ and $U_\text{Cr}$ independently. The observed variation of the coupling constants with respect to the Hubbard parameters is very similar to the case shown in Fig.~\ref{fig:cco1}; only the overall magnitude of the magnetic coupling constants is changed. We therefore discuss only the results obtained for $U_\text{Cr} =3$\,eV and $U_{A} = 5$\,eV, which are physically reasonable choices for these parameters, as discussed in Sec.~\ref{sec:methods}. \begin{table} \caption{Calculated magnetic coupling constants $J_{AB}$, $J_{BB}$, and $J_{AA}$ for different lattice parameters and different values of the intra-atomic Hund's rule coupling parameter ${\cal J}^H$ for $U_A$ = 5\,eV and $U_\text{Cr}$ = 3\,eV. Lattice parameters ``exp.'' and ``theo.'' refer to the corresponding values listed in Table~\ref{tab:struc}.} \label{tab:coupling} \begin{ruledtabular} \begin{tabular}{cc|cccc} & ${\cal J}^H$ (eV) & 1.0 & 0.0 & 1.0 & 0.0 \\ & latt. param. & exp. & exp. & theo. & theo.
\\ \hline & $J_{AB}$ (meV) & $-$4.44 & $-$3.55 & $-$6.02 & $-$4.83 \\ CoCr$_2$O$_4$ & $J_{BB}$ (meV) & $-$3.33 & $-$1.04 & $-$6.90 & $-$4.34 \\ & $J_{AA}$ (meV) & $-$0.50 & $-$0.44 & $-$0.77 & $-$0.58 \\ \hline & $J_{AB}$ (meV) & $-$3.14 & $-$1.40 & $-$4.88 & $-$2.61 \\ MnCr$_2$O$_4$ & $J_{BB}$ (meV) & $-$2.91 & $-$0.74 & $-$5.22 & $-$2.74 \\ & $J_{AA}$ (meV) & $-$1.19 & $-$0.92 & $-$1.88 & $-$1.45 \\ \end{tabular} \end{ruledtabular} \end{table} Table~\ref{tab:coupling} shows the calculated magnetic coupling constants for the different cases. It is apparent that both volume and the intra-atomic exchange parameter ${\cal J}^H$ have a significant effect on the calculated results. The volume dependence can easily be understood. The smaller theoretical volume leads to stronger coupling between the magnetic ions. This is particularly significant for ${J}_{BB}$, since the direct exchange interaction between the $B$ cations is especially sensitive to the inter-cation distance. The corresponding coupling is therefore strongly enhanced (suppressed) by decreasing (increasing) the lattice constant. The indirect superexchange interaction also depends strongly on the inter-atomic distances. It can be seen from Table~\ref{tab:coupling} that ${\cal J}^H=0$ significantly decreases the strength of all magnetic coupling constants compared to ${\cal J}^H=1$\,eV. A strong ${\cal J}^H$ dependence of the magnetic coupling has also been observed in other Cr spinels with non-magnetic cations on the $A$ site.\cite{Craig:unpublished} Further calculations, with different values for ${\cal J}^H$ on the $A$ and $B$ sites respectively, show that it is mostly ${\cal J}^H_\text{Cr}$ which is responsible for this effect. On the other hand, varying ${\cal J}^H_{A}$ has a smaller effect on the magnetic coupling. This is consistent with the very strong ${\cal J}^H$ dependence of $J_{BB}$ and the weaker ${\cal J}^H$ dependence of $J_{AA}$ seen in Table~\ref{tab:coupling}. To understand the strong effect of ${\cal J}^H_\text{Cr}$ on the magnetic coupling constants we first take a look at the occupation numbers $n_{\gamma} \equiv n_{\gamma\gamma}$ of the Cr $d$ orbitals. The corresponding occupation numbers in CoCr$_2$O$_4$ are (calculated for a FM configuration at the experimental lattice parameters and using ${\cal J}^H = 0$): $n_{t_{2g},\uparrow} = 0.95$, $n_{t_{2g},\downarrow} = 0.05$, $n_{e_{g},\uparrow} = 0.32$, and $n_{e_{g},\downarrow} = 0.21$. As expected, the occupation of the $t_{2g}$ orbitals represents the formal $d^3$ valency with full spin-polarization, but in addition there is a sizable $e_g$ occupation, which contributes $\sim 0.2 \mu_\text{B}$ to the local spin moment of the Cr cation. This partial $e_g$ occupation, which is due to hybridization with the oxygen $p$ bands, gives rise to a FM interaction between the Cr sites, because the $e_g$ polarization is coupled to the $t_{2g}$ spins via the Hund's rule coupling.\cite{Goodenough:Book} This FM interaction between the Cr sites should therefore be proportional to the strength of the Hund's rule coupling. Thus, the stronger AFM interaction for ${\cal J}^H=1$\,eV compared to ${\cal J}^H=0$ (see Table~\ref{tab:coupling}) might be surprising at first. However, it is important to realize that even though the parameter ${\cal J}^H$ represents the strength of the Hund's rule coupling, its effect within the LSDA+$U$ framework is not to introduce a strong Hund's rule interaction. 
If one analyzes the LSDA+$U$ energy expression, Eq.~(\ref{eq:ldau}), in a somewhat simplified picture where the occupation matrix is diagonal and the Coulomb matrix elements are orbitally independent, one can see that the double counting correction, $E_\text{dc}$, exactly cancels the different potential shifts for orbitals with parallel and antiparallel spins that are caused by $E_U$ for ${\cal J}^H \neq 0$, if one of the $d$ orbitals is filled. Thus, $E_U-E_\text{dc}$ does not lead to an additional Hund's rule interaction compared to $E_\text{LSDA}$, even for ${\cal J}^H \neq 0$. It is generally assumed that this type of interaction is already well described on the LSDA level. The only effect of ${\cal J}^H$ is therefore an effective reduction of the on-site Coulomb repulsion. This can be seen in the following, where we write the simplified version of Eq.~(\ref{eq:ldau}) as (see Ref.~\onlinecite{Sawada_et_al:1997}): \begin{equation} \label{eq:ldau2} E = E_\text{LSDA} + \frac{U_\text{eff}}{2} \left( n - \sum_{\gamma} n_\gamma n_\gamma \right) \quad , \end{equation} with $U_\text{eff} = U -{\cal J}^H$. Within this simplified LSDA+$U$ version, the effect of ${\cal J}^H$ on the magnetic coupling constants can be understood as an effective reduction of the on-site Coulomb interaction. According to the previously discussed $U$ dependence of the magnetic coupling constants (see also Fig.~\ref{fig:cco1}), a reduced on-site Coulomb interaction leads to a stronger AFM interaction for all calculated magnetic coupling constants. From Fig.~\ref{fig:cco1} it can be seen that the magnetic coupling constants for CoCr$_2$O$_4$ using the experimental lattice parameters and the LSDA+$U$ parameters $U_\text{Co}=6$\,eV, $U_\text{Cr}=4$\,eV, and ${\cal J}^H=1$\,eV, i.e. $U_\text{eff,Co}=5$\,eV and $U_\text{eff,Cr}=3$\,eV, are: $J_{AB}=-3.26$\,meV, $J_{BB}=-2.12$\,meV, and $J_{AA}=-0.30$\,meV. The corresponding results for $U_\text{Co}=5$\,eV, $U_\text{Cr}=3$\,eV, and ${\cal J}^H=0$, i.e. for the same values of $U_\text{eff}$ but different ${\cal J}^H$, are: $J_{AB}=-3.55$\,meV, $J_{BB}=-1.04$\,meV, and $J_{AA}=-0.44$\,meV (see Table~\ref{tab:coupling}). Thus, the pure dependence on $U_\text{eff}$ seems to be approximately valid for $J_{AB}$ and $J_{AA}$, whereas there is a notable quantitative deviation from the simplified LSDA+$U$ model in the case of $J_{BB}$. Nevertheless, the overall trend can still be understood from the simplified LSDA+$U$ picture.
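To illustrate Eq.~(\ref{eq:ldau2}), the short Python sketch below evaluates the $U_\text{eff}$ correction term for the Cr $d$ occupations quoted above (a minimal example; the orbital degeneracies assume three $t_{2g}$ and two $e_g$ orbitals per spin channel):
\begin{verbatim}
# Sketch: correction term of the simplified LSDA+U functional,
# Eq. (eq:ldau2):  Delta E = (U_eff / 2) * (n - sum_gamma n_gamma^2).
# Occupations per spin-orbital taken from the Cr values in the text.
occ = (
    3 * [0.95] +  # t2g, spin up
    3 * [0.05] +  # t2g, spin down
    2 * [0.32] +  # eg, spin up
    2 * [0.21]    # eg, spin down
)

def ldau_correction(U_eff, occupations):
    n = sum(occupations)
    return 0.5 * U_eff * (n - sum(o * o for o in occupations))

# For U_eff = U - J_H = 3 eV - 0 eV (Cr with J_H = 0):
print(f"Delta E = {ldau_correction(3.0, occ):.2f} eV")  # ~1.58 eV
\end{verbatim}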
\begin{table} \caption{Magnetic coupling constants of CoCr$_2$O$_4$ and MnCr$_2$O$_4$ calculated using two different methods (FLAPW and PAW), different values for ${\cal J}^H$, and the experimental lattice parameters.} \label{tab:compare} \begin{ruledtabular} \begin{tabular}{llcccc} & & \multicolumn{2}{c}{FLAPW} & \multicolumn{2}{c}{PAW} \\ & ${\cal J}^H$ (eV) & 0.0 & 1.0 & 0.0 & 1.0 \\ \hline & $J_{AB}$ (meV) & $-$3.62 & $-$4.32 & $-$3.55 & $-$4.44 \\ CoCr$_2$O$_4$ & $J_{BB}$ (meV) & $-$1.32 & $-$3.09 & $-$1.04 & $-$3.33 \\ & $J_{AA}$ (meV) & $-$0.23 & $-$0.00 & $-$0.44 & $-$0.50 \\ \hline & $J_{AB}$ (meV) & $-$1.73 & $-$3.23 & $-$1.40 & $-$3.14 \\ MnCr$_2$O$_4$ & $J_{BB}$ (meV) & $-$1.32 & $-$3.21 & $-$0.74 & $-$2.91 \\ & $J_{AA}$ (meV) & $-$0.67 & $-$1.06 & $-$0.92 & $-$1.19 \end{tabular} \end{ruledtabular} \end{table} Finally, to assess the possible influence of the method used to solve the self-consistent Kohn-Sham equations on the calculated magnetic coupling constants, we perform additional tests using a different electronic structure code, employing the FLAPW method (see Sec.~\ref{sec:details}). The results are summarized and compared to the PAW results in Table~\ref{tab:compare}. There are some variations in the absolute values of the magnetic coupling constants obtained with the two different methods, but overall the agreement is rather good. The trends are the same in both methods, and in particular the strong effect of the LSDA+$U$ Hund's rule parameter ${\cal J}^H$ on the magnetic coupling constants is confirmed by the FLAPW calculations. One possible reason for the differences between the PAW and FLAPW results is that the radii of the projection spheres used in the PAW method are chosen differently from the radii of the muffin-tin spheres used to construct the FLAPW basis functions. \subsection{The LKDM parameter $u$} \begin{figure} \centerline{\includegraphics*[width=0.8\columnwidth]{u_bw.eps}} \caption{Dependence of the LKDM parameter $u$ on the Hubbard $U$ parameters of the different magnetic cations. Left panels correspond to CoCr$_2$O$_4$, right panels to MnCr$_2$O$_4$. From top to bottom the panels correspond to calculations for the exp. volume and ${\cal J}^H = 1$\,eV, the exp. volume and ${\cal J}^H = 0$\,eV, the theo. volume and ${\cal J}^H = 1$\,eV, and the theo. volume and ${\cal J}^H = 0$\,eV (open circles: $U_\text{Cr} = 2$\,eV, filled squares: $U_\text{Cr} = 3$\,eV, open diamonds: $U_\text{Cr} = 4$\,eV, filled triangles: $U_\text{Cr} = 5$\,eV). Dashed horizontal lines indicate the critical values $u_0 = 8/9$ and $u''\approx 1.298$.} \label{fig:u} \end{figure} Figure~\ref{fig:u} shows the variation of the LKDM parameter $u = \frac{4\tilde{J}_{BB}S_B}{3\tilde{J}_{AB}S_A} = \frac{4J_{BB}}{3J_{AB}}$ with the strength of the on-site Coulomb interactions for the different lattice parameters and values of ${\cal J}^H$ used in this work. The behavior of $u$ follows from the corresponding trends in the coupling constants $J_{AB}$ and $J_{BB}$ discussed in the previous section. Increasing $U_{A}$ decreases the strength of $J_{AB}$ but leaves $J_{BB}$ more or less unchanged, and thus increases the value of $u$. On the other hand, both $J_{AB}$ and $J_{BB}$ decrease with increasing $U_\text{Cr}$, but the decrease is stronger for $J_{BB}$, and therefore $u$ decreases with increasing $U_\text{Cr}$. Thus, the trends caused by the Hubbard parameters corresponding to the two different magnetic sites are opposite to each other.
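Using the values of Table~\ref{tab:coupling}, the parameter $u = 4J_{BB}/(3J_{AB})$ is easily evaluated; the following Python sketch reproduces the trends discussed below, with the coupling constants copied from the ${\cal J}^H = 1$\,eV columns of the table:
\begin{verbatim}
# Sketch: LKDM parameter u = 4 J_BB / (3 J_AB) from Tab. (tab:coupling),
# for U_A = 5 eV, U_Cr = 3 eV, J^H = 1 eV (couplings in meV).
couplings = {
    ("CoCr2O4", "exp."):  (-4.44, -3.33),  # (J_AB, J_BB)
    ("CoCr2O4", "theo."): (-6.02, -6.90),
    ("MnCr2O4", "exp."):  (-3.14, -2.91),
    ("MnCr2O4", "theo."): (-4.88, -5.22),
}
for (system, latt), (J_AB, J_BB) in couplings.items():
    u = 4.0 * J_BB / (3.0 * J_AB)
    print(f"{system} ({latt} lattice): u = {u:.2f}")
# Output: u = 1.00, 1.53, 1.24, 1.43 -- smaller at the experimental
# volumes, reflecting the strong volume dependence of J_BB.
\end{verbatim}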
As already pointed out in the previous section, changing the value of the LSDA+$U$ parameter ${\cal J}^H$ and using different lattice constants essentially just shifts the overall scale for the magnetic coupling constants without altering their $U$ dependence. Therefore, using the larger experimental volume decreases $u$ compared to the value obtained at the theoretical lattice parameters due to the very strong volume dependence of $J_{BB}$. Introducing the on-site Hund's rule coupling ${\cal J}^H$ increases $u$, since $J_{BB}$ is more strongly affected by this and thus increases relative to $J_{AB}$. For similar values of $U_A$ and $U_\text{Cr}$ the LKDM parameter $u$ is larger in MnCr$_2$O$_4$ than in CoCr$_2$O$_4$, except for ${\cal J}^H = 1$\,eV at the theoretical lattice parameters, where they are approximately equal. This is in contrast to what has been concluded by fitting the experimental neutron spectra to the spiral spin structure of the LKDM theory, which leads to the values $u$=1.6 for MnCr$_2$O$_4$ and $u$=2.0 for CoCr$_2$O$_4$,\cite{Hastings/Corliss:1962,Menyuk/Dwight/Wold:1964} i.e. the fitted value for CoCr$_2$O$_4$ is significantly larger than the value for MnCr$_2$O$_4$. We now try to give a quantitative estimate of $u$ in the two systems. The first question is whether using experimental or theoretical lattice constants leads to more realistic magnetic coupling constants. This question is not easy to answer in general. On the one hand, the LSDA underestimation of the lattice constant can lead to an overestimation of the magnetic coupling, since the cations are too close together and can therefore interact more strongly than at the experimental volume. On the other hand, the indirect cation-anion-cation interaction is intimately connected to the chemical bonding.\cite{Goodenough:Book} If the larger experimental lattice constant is used, this bonding is artificially suppressed and the corresponding magnetic coupling is possibly underestimated. It is therefore not obvious whether it is better to calculate the coupling constants at the experimental or the theoretical lattice parameters, but the two cases at least provide reasonable limits for the magnetic coupling constants. We note that using the LSDA+$U$ method for the structural relaxation usually leads to lattice parameters that are in slightly better agreement with the experimental values,\cite{Neaton_et_al:2005} which will decrease the corresponding uncertainty in the magnetic coupling constants. In the present paper we do not perform such structural relaxations for each combination of the LSDA+$U$ parameters, in order to reduce the required computational effort. In addition, this allows us to discuss the pure effect of $U$ and ${\cal J}^H$ on the magnetic coupling constants, without contributions due to varying lattice parameters. Fig.~\ref{fig:u} shows that for the physically reasonable parameters $U_\text{Cr} = 3$\,eV, $U_A=$ 4-5\,eV, and ${\cal J}^H = 1$\,eV the value of $u$ in CoCr$_2$O$_4$ calculated at the theoretical lattice constant is slightly larger than the critical value $u'' \approx 1.298$, where within the LKDM theory the ferrimagnetic spiral configuration becomes unstable. In MnCr$_2$O$_4$ the corresponding value is about equal to $u''$. 
At the experimental lattice constants the values of $u$ in both systems are smaller than at the theoretical lattice constants, with the stronger effect in CoCr$_2$O$_4$, where $u$ at the experimental lattice constant is about equal to $u_0=8/9$, the value below which, according to LKDM, a collinear ferrimagnetic spin configuration is the ground state. In MnCr$_2$O$_4$ the value of $u$ at the experimental lattice constant is between $u_0$ and $u''$. Thus, in all cases the calculated values of $u$ are consistent with the experimental evidence for noncollinear ordering. Since in MnCr$_2$O$_4$ the calculation predicts a rather strong $J_{AA}$, the validity of the LKDM theory is questionable for this system, but for CoCr$_2$O$_4$, where the magnitude of $J_{AA}$ is indeed significantly smaller than both $J_{AB}$ and $J_{BB}$, this theory should at least be approximately correct. However, for CoCr$_2$O$_4$ the calculated $u$, both at the experimental and at the theoretical lattice constants (and using physically reasonable values for the LSDA+$U$ parameters), is still significantly smaller than the value $u=2.0$ obtained by fitting the experimental data to the LKDM theory.\cite{Menyuk/Dwight/Wold:1964} It would therefore be interesting to study how the incorporation of $J_{AA}$ into a generalized LKDM theory alters the conclusions drawn from the experimental data. Obviously, a non-negligible $J_{AA}$ will further destabilize the collinear N{\'e}el configuration, but the possible influence of $J_{AA}$ on the ferrimagnetic spiral structure cannot be obtained straightforwardly. Of course it cannot be fully excluded that the discrepancy between the calculated value of $u$ for CoCr$_2$O$_4$ and the value extracted from the experimental data is caused by some deficiencies of the LSDA+$U$ method. For example, it was shown in Ref.~\onlinecite{Solovyev/Terakura:1998} that for MnO the LSDA+$U$ method does not offer enough degrees of freedom to correctly reproduce both nearest and next nearest neighbor magnetic coupling constants. Finally, we note that the observation that $J_{AA}$ is not negligible in MnCr$_2$O$_4$ but has a significantly smaller magnitude than $J_{AB}$ and $J_{BB}$ in CoCr$_2$O$_4$ is compatible with the fact that the overall agreement between the experimental data and the LKDM theory is better for CoCr$_2$O$_4$ than for MnCr$_2$O$_4$.\cite{Menyuk/Dwight/Wold:1964} However, a quantitative discrepancy between the value of $u$ for CoCr$_2$O$_4$ calculated in this work and the value derived from the experimental data remains. \section{Summary and Conclusions} \label{sec:summary} In summary, we have presented a detailed LSDA+$U$ study of magnetic coupling constants in the spinel systems CoCr$_2$O$_4$ and MnCr$_2$O$_4$. We have found that the coupling between the $A$ site cations, which is neglected in the classical theory of LKDM, is of appreciable size in CoCr$_2$O$_4$ and definitely not negligible in MnCr$_2$O$_4$. The calculated LKDM parameter $u$, which describes the relative strength of the $BB$ coupling compared to the $AB$ coupling and determines the nature of the ground state spin configuration in the LKDM theory, is found to be smaller than the values obtained by fitting experimental neutron data to the predictions of the LKDM theory. It remains to be seen whether this discrepancy is caused by the simplifications made in the LKDM theory, or whether it is due to deficiencies of the LSDA+$U$ method used in our calculations. 
In addition, we have shown that it is difficult, but possible, to arrive at quantitative predictions of magnetic coupling constants using the LSDA+$U$ method. Furthermore, by analyzing the $U$ and ${\cal J}^H$ dependence of the magnetic coupling constants it is possible to identify the various interaction mechanisms contributing to the overall magnetic coupling. The presence of two different magnetic cations with different charge states and different anion coordination makes the systems investigated in this work a very demanding test case for the predictive capabilities of the LSDA+$U$ method. Nevertheless, some insight can be gained by a careful analysis of all methodological uncertainties, and the magnitudes of the magnetic coupling constants can be determined to a degree of accuracy that allows us to establish important trends and predict the correct order of magnitude for the corresponding effects. \begin{acknowledgments} C.E. thanks Craig Fennie, Ram Seshadri, Nicola Spaldin, and Andrew Millis for useful discussions. This work was supported by the NSF's \emph{Chemical Bonding Centers} program, Grant No. CHE-0434567 and by the MRSEC Program of the NSF under the award number DMR-0213574. We also made use of central facilities provided by NSF-MRSEC Award No. DMR-0520415. \end{acknowledgments}
\section*{Introduction} According to Enriques' classification of smooth complex algebraic surfaces, the surfaces with Kodaira dimension zero can be divided into four classes: K3 surfaces, Enriques surfaces, abelian surfaces and bielliptic surfaces, see \cite{ven}. Moreover every Enriques surface is the quotient of a K3 surface and every bielliptic surface is the quotient of an abelian surface. The study of moduli spaces of sheaves on surfaces with Kodaira dimension zero has given rise to many interesting results, for example the construction of hyperk\"{a}hler varieties, that is, irreducible holomorphic symplectic manifolds, of higher dimension. By now there is a large body of literature on this topic in the case of K3 surfaces and abelian surfaces. There is also a fair amount of literature in the case of Enriques surfaces. But it seems that the case of bielliptic surfaces has not been studied extensively. This situation changed recently. On the one hand, Nuer studied stable sheaves and especially possible Chern characters of stable sheaves on bielliptic surfaces in detail, see \cite{nuer}. On the other hand, building on Beauville's work in the case of Enriques surfaces in \cite{beau}, Bergstr{\"o}m, Ferrari, Tirabassi and Vodrup studied the so-called Brauer map for bielliptic surfaces in \cite{tirab}. In this article we want to combine both directions by studying a certain version of noncommutative Picard schemes. In this context a noncommutative variety is a pair $(X,\mathcal{A})$ consisting of a classical complex algebraic variety $X$ and a sheaf of noncommutative $\mathcal{O}_X$-algebras $\mathcal{A}$ of finite rank as an $\mathcal{O}_X$-module. The algebras of interest in this article are Azumaya algebras. These are locally isomorphic to a matrix algebra $M_r(\mathcal{O}_X)$ with respect to the \'{e}tale topology and they are classified by the Brauer group $\Br(X)$ of $X$. A noncommutative Picard scheme $\Pic(\mathcal{A})$ is the moduli scheme $\M_{\mathcal{A}/X}$ of certain sheaves on $X$, which have the structure of a left $\mathcal{A}$-module. These moduli schemes were constructed by Hoffmann and Stuhler in \cite{hoff}. In \cite{hoff} the case of noncommutative K3 resp. abelian surfaces was studied. We studied the case of noncommutative Enriques surfaces in \cite{fr}. In this article we study the situation of bielliptic surfaces. We prove that certain bielliptic surfaces give rise to a noncommutative bielliptic surface $(X,\mathcal{A})$ with an Azumaya algebra $\mathcal{A}$ on $X$. The main results of this article can be summarized as follows: \begin{thm-non} Let $X$ be a bielliptic surface such that the Brauer map is injective. Let $\mathcal{A}$ be an Azumaya algebra on $X$ representing a nontrivial element in the Brauer group $\Br(X)$. \begin{enumerate}[i)] \item The moduli scheme $\M_{\mathcal{A}/X}$ of torsion free $\mathcal{A}$-modules of rank one is smooth. \item Every torsion free $\mathcal{A}$-module of rank one can be deformed into a locally projective $\mathcal{A}$-module, that is the locus $\M_{\mathcal{A}/X}^{lp}$ of locally projective $\mathcal{A}$-modules is dense in $\M_{\mathcal{A}/X}$. \end{enumerate} Let $\overline{X}$ be the canonical covering abelian surface and denote the pullback of the Azumaya algebra to $\overline{X}$ by $\overline{\mathcal{A}}$; then $\M_{\overline{\mathcal{A}}/\overline{X}}$ has a symplectic structure. 
For fixed Chern classes $c_1$ and $c_2$ we have \begin{enumerate}[i)] \setcounter{enumi}{2} \item $\M_{\mathcal{A}/X,c_1,c_2}$ is a finite \'{e}tale cover of a smooth projective subscheme $Y\subset \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. \item The subscheme $Y$ is Lagrangian if and only if the canonical cover of $X$ has degree two or $\dim(\M_{\mathcal{A}/X,c_1,c_2})=1$. \end{enumerate} \end{thm-non} In this article we work over the field of complex numbers $\mathbb{C}$. \section{Modules over an Azumaya algebra and cyclic Galois coverings}\label{1} Denote by $W$ a smooth projective variety of dimension $d$ together with a nontrivial $n$-torsion line bundle $L$, that is $n$ is the order of $L$ in $\Pic(W)$. By \cite[I.17]{hulek} there is a cyclic \'{e}tale Galois cover \begin{equation}\label{push} q: \overline{W} \rightarrow W\,\,\,\text{such that}\,\,\,q_{*}\mathcal{O}_{\overline{W}}\cong\bigoplus\limits_{i=0}^{n-1} L^i. \end{equation} \begin{rem} We make the following convention: for every coherent sheaf $E$ on $W$ we write $\overline{E}$ for the pullback to $\overline{W}$ along $q$, that is $\overline{E}:=q^{*}E$. \end{rem} \begin{defi} A sheaf of $\mathcal{O}_W$-algebras $\mathcal{A}$ is called an Azumaya algebra if it is locally free of finite rank and for every point $w\in W$ the fiber $\mathcal{A}(w)$ is a central simple algebra over the residue field $\mathbb{C}(w)$. Furthermore a coherent $\mathcal{O}_W$-module $E$ is said to be an Azumaya module or an $\mathcal{A}$-module if $E$ has the structure of a left $\mathcal{A}$-module. \end{defi} Azumaya algebras on $W$ are classified up to similarity by the Brauer group $\Br(W)$. Here similarity for two Azumaya algebras $\mathcal{A}$ and $\mathcal{B}$ is defined as follows: \begin{equation*} \mathcal{A}\sim \mathcal{B}\,\,\,\text{if}\,\,\, \mathcal{A}\otimes\mathcal{E}nd_W(\mathcal{E})\cong\mathcal{B}\otimes\mathcal{E}nd_W(\mathcal{F}), \end{equation*} where $\mathcal{E}$ and $\mathcal{F}$ are locally free $\mathcal{O}_W$-modules of finite rank. We say $\mathcal{A}$ is trivial if $\left[\mathcal{A} \right]=\left[\mathcal{O}_W\right]$ in $\Br(W)$. A quick computation shows that $\mathcal{A}$ is trivial if and only if $\mathcal{A}\cong \mathcal{E}nd_W(P)$ for some locally free sheaf $P$ of finite rank. From now on, if not otherwise stated, an Azumaya algebra $\mathcal{A}$ is always a nontrivial Azumaya algebra. Furthermore we assume that there is a nontrivial Azumaya algebra $\mathcal{A}$ on $W$ such that $\overline{\mathcal{A}}$ is nontrivial on $\overline{W}$. Also recall that the rank of an Azumaya algebra $\mathcal{A}$ is always a square, so that it makes sense to define the degree of such an algebra by: \begin{equation*} \deg(\mathcal{A}):=\sqrt{\rk(\mathcal{A})}. \end{equation*} \begin{lem}\label{hom} Assume $E$ and $F$ are $\mathcal{A}$-modules, then \begin{equation*} \Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{F})\cong \bigoplus\limits_{i=0}^{n-1}\Hom_{\mathcal{A}}(E,F\otimes L^i). \end{equation*} \end{lem} \begin{proof} The proof is the same as \cite[Lemma 1.4]{fr}. Using \eqref{push} we have an isomorphism \begin{equation*} q_{*}\mathcal{H}om_{\overline{\mathcal{A}}}(\overline{E},\overline{F}) \cong \bigoplus\limits_{i=0}^{n-1}\mathcal{H}om_{\mathcal{A}}(E,F\otimes L^i). \end{equation*} Taking global sections gives the result. \end{proof} \begin{cor}\label{homvan} Assume $E$ is an $\mathcal{A}$-module. 
$\overline{E}$ is a simple $\overline{\mathcal{A}}$-module if and only if $E$ is a simple $\mathcal{A}$-module and $\Hom_{\mathcal{A}}(E,E\otimes L^i)=0$ for $1\leqslant i\leqslant n-1$. \end{cor} \begin{proof} Lemma \ref{hom} shows \begin{equation*} \End_{\overline{\mathcal{A}}}(\overline{E})\cong\End_{\mathcal{A}}(E)\oplus\bigoplus\limits_{i=1}^{n-1}\Hom_{\mathcal{A}}(E,E\otimes L^i). \end{equation*} If $\overline{E}$ is a simple $\overline{\mathcal{A}}$-module, we have $\End_{\overline{\mathcal{A}}}(\overline{E})\cong\mathbb{C}$. Now as $\id_E\in \End_{\mathcal{A}}(E)$ we find $\End_{\mathcal{A}}(E)\cong\mathbb{C}$ and $\Hom_{\mathcal{A}}(E,E\otimes L^i)=0$ for $1\leqslant i \leqslant n-1$. The other direction is clear. \end{proof} \begin{prop}\cite[Proposition 3.5.]{hoff}\label{serre} Assume $E$ and $F$ are $\mathcal{A}$-modules, then there is the following variant of Serre duality: \begin{equation*} \Ext^i_{\mathcal{A}}(E,F)\cong \left( \Ext^{d-i}_{\mathcal{A}}(F,E\otimes\omega_{W})\right)^{\vee}. \end{equation*} \end{prop} Denote the $\mathcal{O}_W$-dual of $E$ by $E^{*}$, that is $E^{*}=\mathcal{H}om_W(E,\mathcal{O}_W)$. Note that $E^{*}$ is a right $\mathcal{A}$-module and that $E^{**}$ is again a left $\mathcal{A}$-module. \begin{lem}\label{extvan2} Assume $E$ is an $\mathcal{A}$-module. If $T$ is an $\mathcal{A}$-module with $\codim(\supp(T))\geqslant 2$ then \begin{equation*} \Ext^1_{\mathcal{A}}(T,E^{**})=0. \end{equation*} \end{lem} \begin{proof} We start with the right $\mathcal{A}$-module $E^{*}$. By \cite[Proposition 3.4]{hoff} we can find locally projective $\mathcal{A}$-modules $M_0$ and $M_1$ and an exact sequence of $\mathcal{A}$-modules \begin{equation*} \begin{tikzcd} M_1 \arrow{r} & M_0 \arrow{r} & E^{*} \arrow{r} & 0 . \end{tikzcd} \end{equation*} Dualizing this sequence and denoting the image of $M_0^{*}\rightarrow M_1^{*}$ by $R$, which is torsion free, we have an exact sequence of $\mathcal{A}$-modules \begin{equation*} \begin{tikzcd} 0 \arrow{r}& E^{**} \arrow{r} & M_0^{*} \arrow{r} & R \arrow{r} & 0 . \end{tikzcd} \end{equation*} Applying $\Hom_{\mathcal{A}}(T,-)$ to the last exact sequence gives \begin{equation*} \begin{tikzcd} \Hom_{\mathcal{A}}(T,R) \arrow{r} & \Ext^1_{\mathcal{A}}(T,E^{**}) \arrow{r} & \Ext^1_{\mathcal{A}}(T,M_0^{*}). \end{tikzcd} \end{equation*} As $R$ is torsion free we have $\Hom_{\mathcal{A}}(T,R)=0$. Furthermore, since $M_0^{*}$ is locally projective we get the following isomorphisms, using Serre duality, the local-to-global spectral sequence and the fact that $\codim(\supp(T))\geqslant 2$: \begin{equation*} \Ext^1_{\mathcal{A}}(T,M_0^{*})\cong \left(\Ext^{d-1}_{\mathcal{A}}(M_0^{*},T\otimes\omega_W)\right)^{\vee}\cong \left( \mathrm{H}^{d-1}(W,\mathcal{H}om_{\mathcal{A}}(M_0^{*},T\otimes\omega_W))\right)^{\vee}=0 \end{equation*} Thus we also must have $\Ext^1_{\mathcal{A}}(T,E^{**})=0$. \end{proof} \begin{lem}\label{double2} Assume $E$ is an $\mathcal{A}$-module which is torsion free as an $\mathcal{O}_W$-module. If $\overline{E^{**}}$ is a simple $\overline{\mathcal{A}}$-module, then for $1\leqslant i \leqslant n-1$: \begin{equation*} \Hom_{\mathcal{A}}(E,E^{**}\otimes L^i)=0. \end{equation*} \end{lem} \begin{proof} Note that there is an exact sequence of $\mathcal{A}$-modules \begin{equation}\label{double} \begin{tikzcd} 0 \arrow{r} & E \arrow{r} & E^{**} \arrow{r} & T \arrow{r} & 0 \end{tikzcd} \end{equation} with $\codim\supp(T)\geqslant 2$ as $E$ is torsion free. Again $\Hom_{\mathcal{A}}(T,E^{**})=0$ since $T$ is torsion. 
Furthermore we have $\Ext^1_{\mathcal{A}}(T,E^{**})=0$ by Lemma \ref{extvan2}. Applying $\Hom_{\mathcal{A}}(-,E^{**})$ to (\ref{double}) and using the vanishing results gives an isomorphism \begin{equation}\label{iso2} \End_{\mathcal{A}}(E^{**}) \cong \Hom_{\mathcal{A}}(E,E^{**}). \end{equation} Using the same argument for $\overline{E}$ shows that we also have an isomorphism \begin{equation}\label{iso3} \End_{\overline{\mathcal{A}}}(\overline{E^{**}}) \cong \Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{E^{**}}) \end{equation} since $\overline{E^{**}}\cong\overline{E}^{**}$ by \cite[0.6.7.6.]{gro}. The lemma now follows similarly to \cite[Lemma 1.7]{fr}. As $\overline{E^{**}}$ is simple, so is $E^{**}$ by Corollary \ref{homvan}. The isomorphisms \eqref{iso2} and \eqref{iso3} imply \begin{equation*} \Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{E^{**}})\cong \mathbb{C}\,\,\,\,\text{as well as}\,\,\,\Hom_{\mathcal{A}}(E,E^{**})\cong \mathbb{C}. \end{equation*} The vanishing now follows from Lemma \ref{hom}, which gives an isomorphism \begin{equation*} \Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{E^{**}})\cong \bigoplus\limits_{i=0}^{n-1} \Hom_{\mathcal{A}}(E,E^{**}\otimes L^i).\qedhere \end{equation*} \end{proof} Recall that the relative automorphism group of the \'{e}tale cyclic Galois cover $q: \overline{W} \rightarrow W$ is generated by a covering map $\iota$ of order $n$: \begin{equation*} \Aut(\overline{W}/W)=\left\langle \iota \right\rangle \cong\mathbb{Z}/n\mathbb{Z}. \end{equation*} As the group $\Aut(\overline{W}/W)$ is cyclic, the descent condition for a coherent sheaf $F$ on $\overline{W}$, see \cite[\href{https://stacks.math.columbia.edu/tag/0D1V}{Lemma 0D1V}]{stacks-project}, reduces to the existence of an isomorphism $\varphi_{\iota}: F\rightarrow \iota^{*}F$ such that the map \begin{equation*}\label{desccond} \psi:=\left( \iota^{n-1}\right)^{*}\varphi_{\iota}\circ\ldots\circ \iota^{*}\varphi_{\iota}\circ\varphi_{\iota}: F \rightarrow \left(\iota^n \right)^{*}F \cong F \end{equation*} is the identity map. If $F$ is simple, then any $\varphi_{\iota}$ satisfies $\psi\in \End_{\overline{W}}(F)=\mathbb{C}\cdot\id_F$. Since replacing $\varphi_{\iota}$ by $c\varphi_{\iota}$ with $c\in\mathbb{C}^{\times}$ rescales $\psi$ by the factor $c^n$, after multiplication with an appropriate scalar $\varphi_{\iota}$ satisfies the descent condition and $F$ descends, that is $F\cong \overline{E}$ for some coherent $\mathcal{O}_W$-module $E$. This standard result can be generalized to the noncommutative situation: \begin{thm}\label{desc} Assume $F$ is a simple $\overline{\mathcal{A}}$-module with an isomorphism $F\cong \iota^{*}F$ of $\overline{\mathcal{A}}$-modules, then there is an $\mathcal{A}$-module $E$ and an isomorphism of $\overline{\mathcal{A}}$-modules $F\cong\overline{E}$. \end{thm} The proof of this theorem is the same as the proof of \cite[Theorem 2.6]{fr}. One uses the Brauer-Severi varieties $p: Y \rightarrow W$ and $\overline{p}:\overline{Y}\rightarrow \overline{W}$ associated to $\mathcal{A}$ and $\overline{\mathcal{A}}$ together with equivalences of categories: \begin{equation*} \Coh_l(W,\mathcal{A}) \cong \Coh(Y,W)\,\,\,\text{as well as}\,\,\,\Coh_l(\overline{W},\overline{\mathcal{A}}) \cong \Coh(\overline{Y},\overline{W}). \end{equation*} (Here $\Coh_l(W,\mathcal{A})$ is the category of coherent left $\mathcal{A}$-modules and \begin{equation*} \Coh(Y,W)=\left\lbrace E\in \Coh(Y)\,\,|\,\, p^{*}p_{*}(E\otimes G^{*}) \xrightarrow{\cong} E\otimes G^{*} \right\rbrace \end{equation*} for some locally free sheaf $G$ on $Y$ with $p^{*}\mathcal{A}\cong \mathcal{E}nd_Y(G)^{op}$.) 
By functoriality of the Brauer-Severi variety and/or the functoriality of the \'{e}tale cyclic Galois cover (as a relative spectrum) we have the following cartesian diagram: \begin{equation*} \begin{tikzcd} \overline{Y} \arrow[r,"\overline{q}"] \arrow[d,swap,"\overline{p}"] & Y \arrow[d,"p"]\\ \overline{W} \arrow[r,"q"] & W \end{tikzcd} \end{equation*} The idea is to reduce the question about descent of $\overline{\mathcal{A}}$-modules on $\overline{W}$ to a descent argument for classical $\mathcal{O}_{\overline{Y}}$-modules on $\overline{Y}$. This works out well, as the morphism $\overline{q}: \overline{Y}\rightarrow Y$ is also a cyclic \'{e}tale Galois cover. It is induced by the $n$-torsion line bundle $p^{*}L$. The last fact follows from the injectivity of $p^{*}: \Pic(W) \rightarrow \Pic(Y)$, which in turn follows from the projection formula and $p_{*}\mathcal{O}_Y \cong \mathcal{O}_W$, see \cite[Lemma 1.6]{reede}. \section{Noncommutative bielliptic surfaces}\label{3} \begin{defi} A smooth projective minimal surface $X$ is called a bielliptic surface if: \begin{itemize} \item $\kappa(X)=0$, that is $X$ has Kodaira dimension zero, \item $q(X)=1$, that is $\mathrm{H}^1(X,\mathcal{O}_X)\cong\mathbb{C}$, and \item $p_g(X)=0$, that is $\mathrm{H}^2(X,\mathcal{O}_X)=0$. \end{itemize} \end{defi} It is well known that each such surface is of the form \begin{equation*} X=(E\times F)/G, \end{equation*} where $E$ and $F$ are elliptic curves and $G$ is a finite abelian group. $G$ acts via \begin{equation*} g.(e,f)=(e+g,\Psi(g)(f)), \end{equation*} where we understand $G\subset E$ as a finite subgroup and $\Psi: G \rightarrow \Aut(F)$ is an injective group homomorphism. Consequently, $E/G$ is again an elliptic curve and $F/G\cong\mathbb{P}^1$. It follows that $\omega_X\in \Pic(X)$ is a torsion element of order $n$ with $n\in \left\lbrace 2,3,4,6 \right\rbrace$. Using this structure theorem, Bagnera and de Franchis were able to classify all bielliptic surfaces. In fact, each such surface belongs to one of seven families, which can be found in the following table, see for example \cite[V.5]{hulek} or \cite[List VI.20]{beau4}: \begin{center} \begin{tabular}{cccc} \hline \noalign{\vskip 2mm} Type & G & order of $\omega_X$ & $\Br(X)$ \\ [0.5ex] \hline\noalign{\vskip 2mm} 1 & $\mathbb{Z}/2\mathbb{Z}$ & 2 & $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ \\ \noalign{\vskip 1mm} 2 & $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ & 2 & $\mathbb{Z}/2\mathbb{Z}$ \\\noalign{\vskip 1mm} 3 & $\mathbb{Z}/4\mathbb{Z}$ & 4 & $\mathbb{Z}/2\mathbb{Z}$ \\\noalign{\vskip 1mm} 4 & $\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}$ & 4 & 0 \\\noalign{\vskip 1mm} 5 & $\mathbb{Z}/3\mathbb{Z}$ & 3 & $\mathbb{Z}/3\mathbb{Z}$ \\\noalign{\vskip 1mm} 6 & $\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}$ & 3 & 0 \\\noalign{\vskip 1mm} 7 & $\mathbb{Z}/6\mathbb{Z}$ & 6 & 0 \\ [1ex] \hline \end{tabular} \end{center} \begin{rem} The Brauer group of a bielliptic surface can be found as follows: first note that according to \cite[Proposition 4]{beau3} there is an isomorphism \begin{equation*} \Br(X) \cong \mathrm{H}^3(X,\mathbb{Z})_{\mathrm{tor}}. \end{equation*} Poincar\'{e} duality identifies the last group with $\mathrm{H}_1(X,\mathbb{Z})_{\mathrm{tor}}$. But these groups were for example computed by Serrano in \cite[Page 531, Table 3]{ser}. \end{rem} The table shows that we only need to work with bielliptic surfaces of type $1,2,3$ and $5$ in the following, since we are interested in nontrivial Azumaya algebras. 
Next we want to study the nontrivial elements in the Brauer group of a bielliptic surface. For this we recall the notion of central simple cyclic division algebras. Let $K$ be a field, which contains a primitive $n$-th root of unity $\zeta$ and such that $\mathrm{char}(K)\nmid n$. Then for $a,b\in K^{\times}$ the $n$-symbol algebra $\left\langle a,b \right\rangle_n$ is the $K$-algebra generated by two elements $u,v$ with the relations \begin{equation*} u^n=a,\,\,\,v^n=b\,\,\,\text{and}\,\,\, uv=\zeta vu. \end{equation*} (For example, for $n=2$, $\zeta=-1$ and $K=\mathbb{R}$, the symbol algebra $\left\langle -1,-1 \right\rangle_2$ is the algebra of Hamilton quaternions.) If $x^n-a$ is irreducible in $K[x]$ then $\left\langle a,b \right\rangle_n$ is an example of a central simple cyclic algebra. If furthermore $x^n-b$ is also irreducible, then it is a central simple cyclic division algebra, see \cite[Page 284]{pierce}. These algebras satisfy \begin{equation*} \deg(\left\langle a,b \right\rangle_n)=n\,\,\,\,\text{and}\,\,\,\,\ord(\left\langle a,b \right\rangle_n)=n\,\,\text{in}\,\Br(K). \end{equation*} In fact, in this situation the $n$-symbol algebras are exactly the cyclic algebras, see \cite[Corollary 2.5.5]{gille}. \begin{prop}\label{repr} The nontrivial elements in the Brauer group of a bielliptic surface $X$ can be represented by Azumaya algebras $\mathcal{A}$ on $X$ that are generically central simple cyclic division $\mathbb{C}(X)$-algebras and such that \begin{equation*} \deg(\mathcal{A})=\begin{cases} 2 & \text{if}\,\,\, \ord\left( \left[\mathcal{A} \right] \right)=2\\ 3 & \text{if}\,\,\,\ord\left( \left[\mathcal{A} \right] \right)=3. \end{cases} \end{equation*} \end{prop} \begin{proof} Looking at the list of types of bielliptic surfaces we see that a nontrivial element $b\in\text{Br}(X)$ has order two or three. As $X$ is smooth, by \cite[Th\'{e}or\`{e}me 2.4.]{coll} the restriction to the generic point $\eta$ gives an injection \begin{equation*} r_\eta: \text{Br}(X)\hookrightarrow \text{Br}(\mathbb{C}(X)). \end{equation*} So the image $r_\eta(b)$ has order two resp. three in $\text{Br}(\mathbb{C}(X))$. Since every class in $\Br(\mathbb{C}(X))$ contains (up to isomorphism) a unique division algebra, see \cite[12.5. Proposition b]{pierce}, we may assume that $r_{\eta}(b)$ is represented by a central simple division algebra $A$. The field $\mathbb{C}(X)$ has transcendence degree two over $\mathbb{C}$. By a result of Artin and Tate, the element $r_\eta(b)$ can be represented by a division algebra $A$ of degree two resp. three over $\mathbb{C}(X)$, see \cite[Appendix]{artin}. A division algebra of degree two must be cyclic, since separable field extensions of degree two are cyclic. The fact that a division algebra of degree three is cyclic is a classical result due to Wedderburn, see \cite[15.6.]{pierce}. As the class $[A]=r_{\eta}(b)$ comes from $\text{Br}(X)$ it is unramified at every point of codimension one in $X$, and thus by \cite[Th\'{e}or\`{e}me 2.5.]{coll} there is an Azumaya algebra $\mathcal{A}$ on $X$ with $\mathcal{A}\otimes\mathbb{C}(X)=A$ such that $\left[ \mathcal{A}\right]=b$. \end{proof} Since the canonical bundle $\omega_X\in\Pic(X)$ is $n$-torsion, it induces a cyclic \'{e}tale Galois cover $\pi: \overline{X}\rightarrow X$ of degree $n$, the so-called canonical cover. This cover satisfies the property $\pi^{*}\omega_X\cong\omega_{\overline{X}}\cong\mathcal{O}_{\overline{X}}$. It is known that $\overline{X}$ is an abelian surface. 
More exactly, if $X=(E\times F)/G$ then we see by \cite[2.2.]{tirab}: \begin{equation*} \overline{X}=\begin{cases} E\times F & \text{if $X$ is of type}\,\,\,1,3,5\\ (E\times F)/H & \text{if $X$ is of type}\,\,\,2\,\,\text{with}\,\, H\cong \mathbb{Z}/2\mathbb{Z}. \end{cases} \end{equation*} The canonical cover induces a morphism $\pi^{*}: \Br(X)\rightarrow \Br(\overline{X})$, the so-called Brauer map. A natural question is whether the Brauer map is injective. Bergstr{\"o}m, Ferrari, Tirabassi and Vodrup give a complete answer to this question in \cite{tirab}. It turns out that the answer is quite complicated and subtle in some cases, and the results are not easily stated. Here we only record one example of these results, because it most closely resembles the case of Enriques surfaces treated by Beauville in \cite{beau}, see \cite[Theorem 5.3.]{tirab}: \begin{thm} Let $X$ be a bielliptic surface with $X=(E\times F)/G$. If the elliptic curves $E$ and $F$ are not isogenous, then the morphism $\pi^{*}: \Br(X) \rightarrow \Br(\overline{X})$ is injective. \end{thm} Since the property that two elliptic curves $E$ and $F$ are not isogenous is very general in the moduli of these curves, a very general bielliptic surface (in some ``moduli'' sense) has an injective Brauer map. Thus if $X$ is a bielliptic surface with injective Brauer map then the pullback $\overline{\mathcal{A}}$ on $\overline{X}$ of an Azumaya algebra $\mathcal{A}$ constructed in Proposition \ref{repr} represents a nontrivial class in $\Br(\overline{X})$. \section{Noncommutative Picard schemes and deformations}\label{4} In this section we start more generally with a smooth projective $d$-dimensional variety $W$ and an Azumaya algebra $\mathcal{A}$ on $W$. We can think of the pair $(W,\mathcal{A})$ as a noncommutative version of $W$. We want to study moduli schemes of sheaves on such noncommutative pairs. \begin{defi} A sheaf $E$ on $W$ is called a generically simple torsion free $\mathcal{A}$-module, if $E$ is a left $\mathcal{A}$-module such that \begin{itemize} \item $E$ is coherent and torsion free as an $\mathcal{O}_W$-module \item the stalk $E_{\eta}$ over the generic point $\eta\in W$ is a simple module over $\mathcal{A}_{\eta}$. \end{itemize} If furthermore $\mathcal{A}_{\eta}$ is a central simple division algebra over $\mathbb{C}(W)$ then such a module is also called a torsion free $\mathcal{A}$-module of rank one. \end{defi} \begin{rem} An $\mathcal{A}$-module is locally projective if and only if it is locally free as an $\mathcal{O}_W$-module. If $\mathcal{A}_{\eta}$ is a central simple division algebra then locally projective $\mathcal{A}$-modules of rank one can be thought of as line bundles on the noncommutative variety $(W,\mathcal{A})$. Furthermore a generically simple torsion free $\mathcal{A}$-module is simple, see the argument after Remark 1.1 in \cite{hoff}. \end{rem} By fixing the Hilbert polynomial $P$ of such sheaves (with respect to some choice of an ample line bundle), Hoffmann and Stuhler showed that these modules are classified by a moduli scheme, see \cite[Theorem 2.4. iii), iv)]{hoff}: \begin{thm} There is a projective moduli scheme $\M_{\mathcal{A}/W,P}$ classifying generically simple torsion free $\mathcal{A}$-modules with Hilbert polynomial $P$ on $W$. \end{thm} The moduli scheme of all generically simple $\mathcal{A}$-modules is given by \begin{equation*} \M_{\mathcal{A}/W}:= \coprod\limits_{P} \M_{\mathcal{A}/W,P}. 
\end{equation*} By the remark above $\M_{\mathcal{A}/W}$ can be understood as the Picard scheme $\Pic(\mathcal{A})$ of the noncommutative variety $(W,\mathcal{A})$ in case the generic stalk is a central simple division algebra. In the following we will use the decomposition \begin{equation*} \M_{\mathcal{A}/W}=\coprod\limits_{c_1,\ldots,c_d} \M_{\mathcal{A}/W,c_1,\ldots,c_d} \end{equation*} induced by fixing the Chern classes, see \cite[Page 379]{hoff}. We want to study these moduli schemes for noncommutative bielliptic surfaces $(X,\mathcal{A})$. Here $X$ is a bielliptic surface with injective Brauer map and $\mathcal{A}$ is an Azumaya algebra representing a nontrivial element in $\Br(X)$. Since the nontrivial elements in $\Br(X)$ can be represented by Azumaya algebras which are generically central simple division algebras, we can work with torsion free $\mathcal{A}$-modules of rank one in the following. Note that the $\mathcal{O}_X$-rank of a torsion free $\mathcal{A}$-module $E$ of rank one is $\rk(E)=\dim_{\mathbb{C}(X)}\mathcal{A}_{\eta}=\deg(\mathcal{A})^2$, since $E_{\eta}\cong\mathcal{A}_{\eta}$ for the division algebra $\mathcal{A}_{\eta}$, that is \begin{equation*} \rk(E)=\begin{cases} 4 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=2\\ 9 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=3. \end{cases} \end{equation*} We have an associated noncommutative abelian surface $(\overline{X},\overline{\mathcal{A}})$ defined by the canonical cover $\pi: \overline{X}\rightarrow X$. We first recall some facts about these moduli schemes for such pairs, see \cite[Theorem 3.6.]{hoff}: \begin{thm}\label{hoff} Let $\overline{X}$ be an abelian surface which is the canonical cover of a bielliptic surface $X$ with injective Brauer map. Let $\overline{\mathcal{A}}$ be an Azumaya algebra coming from an Azumaya algebra $\mathcal{A}$ on $X$, which represents a nontrivial element in $\Br(X)$. \begin{enumerate}[i)] \item The moduli scheme $\M_{\overline{\mathcal{A}}/\overline{X}}$ of torsion free $\overline{\mathcal{A}}$-modules of rank one is smooth. \item There is a nowhere degenerate alternating 2-form on the tangent bundle of $\M_{\overline{\mathcal{A}}/\overline{X}}$. \item Every torsion free $\overline{\mathcal{A}}$-module of rank one can be deformed into a locally projective $\overline{\mathcal{A}}$-module, that is the locus $\M_{\overline{\mathcal{A}}/\overline{X}}^{lp}$ of locally projective $\overline{\mathcal{A}}$-modules is dense in $\M_{\overline{\mathcal{A}}/\overline{X}}$. \item For fixed Chern classes $\overline{c_1}$ and $\overline{c_2}$ we have \begin{equation*} \dim \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}=\begin{cases} \,\frac{1}{4}\left( 8\overline{c_2}-3\overline{c_1}^2\right) -c_2(\overline{\mathcal{A}})+2 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=2\\ \noalign{\vskip9pt} \,\frac{1}{9}\left(18\overline{c_2}-8\overline{c_1}^2 \right) -c_2(\overline{\mathcal{A}})+2 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=3 \end{cases} \end{equation*} where $\overline{c_i}=\pi^{*}c_i$. \end{enumerate} \end{thm} For the rest of this section we need the following \begin{rem}\label{eflat} For a torsion free $\mathcal{A}$-module $E$ of rank one on $X$, the $\mathcal{A}$-modules $E^{**}$ and $E\otimes L$ for $L\in \Pic(X)$ are also torsion free of rank one. In addition $\overline{E}$ is a torsion free $\overline{\mathcal{A}}$-module of rank one on $\overline{X}$ since $\pi$ is flat. \end{rem} We can now prove the first main theorem of this article. \begin{thm}\label{thm2} Let $X$ be a bielliptic surface such that the Brauer map is injective. 
Let $\mathcal{A}$ be an Azumaya algebra on $X$ representing a nontrivial element in $\Br(X)$. \begin{enumerate}[i)] \item The moduli scheme $\M_{\mathcal{A}/X}$ of torsion free $\mathcal{A}$-modules of rank one is smooth. \item Every torsion free $\mathcal{A}$-module of rank one can be deformed into a locally projective $\mathcal{A}$-module, that is the locus $\M_{\mathcal{A}/X}^{lp}$ of locally projective $\mathcal{A}$-modules is dense in $\M_{\mathcal{A}/X}$. \item For fixed Chern classes $c_1$ and $c_2$ we have \begin{equation*} \dim \M_{\mathcal{A}/X,c_1,c_2}=\begin{cases} \,\frac{1}{4}\left(8c_2-3c_1^2 \right) -c_2(\mathcal{A})+1 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=2\\ \noalign{\vskip9pt} \,\frac{1}{9}\left(18c_2-8c_1^2 \right) -c_2(\mathcal{A})+1 & \text{if}\,\,\ord\left(\left[\mathcal{A} \right] \right)=3. \end{cases} \end{equation*} \end{enumerate} \end{thm} \begin{proof} \begin{enumerate}[i)] \item We note that, as in the classical case of $\mathcal{O}_X$-modules, there is a deformation theory for $\mathcal{A}$-modules, see \cite[Sect. 3]{hoff}. Thus for a given point $[E]\in \M_{\mathcal{A}/X}$ we have to show that all obstruction classes in $\Ext^2_{\mathcal{A}}(E,E)$ vanish. But by Proposition \ref{serre} we have: \begin{equation*} \Ext^2_{\mathcal{A}}(E,E)\cong \left( \Hom_{\mathcal{A}}(E,E\otimes\omega_X)\right)^{\vee}. \end{equation*} As $\overline{E}$ is a simple $\overline{\mathcal{A}}$-module, we get $\Hom_{\mathcal{A}}(E,E\otimes\omega_X)=0$ by Corollary \ref{homvan}. Thus all obstructions vanish and $\M_{\mathcal{A}/X}$ is smooth at $\left[E\right]$. \item The proof of \cite[Theorem 3.6.iii)]{hoff} carries over to our situation with one small change: the surjectivity of the connecting homomorphisms $\delta$ in the diagram: \begin{equation*} \begin{tikzcd} \Ext^1_{\mathcal{A}}(E,E) \arrow{r}{\delta} & \Ext^2_{\mathcal{A}}(T,E) \arrow{r}{\pi^{*}}\arrow{d}{\iota_{*}} & \Ext^2_{\mathcal{A}}(E^{**},E)\\ & \Ext^2_{\mathcal{A}}(T,E^{**}) \arrow[equal]{r} & \bigoplus\limits_{i=1}^l\Ext^2_{\mathcal{A}}(T_{x_i},E^{**}) \end{tikzcd} \end{equation*} follows from the fact that \begin{equation*} \Ext^2_{\mathcal{A}}(E^{**},E)=0. \end{equation*} This vanishing can be seen as follows: using Proposition \ref{serre} we have \begin{equation*} \Ext^2_{\mathcal{A}}(E^{**},E)\cong\left(\Hom_{\mathcal{A}}(E,E^{**}\otimes\omega_X) \right)^{\vee}. \end{equation*} But the last space is zero by Lemma \ref{double2}. The rest of the proof works unaltered. \item Using i) and ii) it suffices to compute the dimension of \begin{equation*} T_{\left[E\right]}\M_{\mathcal{A}/X} \cong \Ext^1_{\mathcal{A}}(E,E)\cong \mathrm{H}^1(X,\mathcal{E}nd_{\mathcal{A}}(E)) \end{equation*} for a locally projective $\mathcal{A}$-module $E$ of rank one. As $\End_{\mathcal{A}}(E)\cong\mathbb{C}$, $\Ext^2_{\mathcal{A}}(E,E)=0$ and $\chi(X,\mathcal{O}_X)=0$ we get our result by a Hirzebruch-Riemann-Roch computation, exactly as in \cite[Theorem 3.6.iv)]{hoff}. \end{enumerate} \end{proof} \begin{rem} The result in Theorem \ref{thm2} ii) is a new phenomenon in the noncommutative case. As noted in \cite[Remark 1.6]{hoff}, in the classical case $\mathcal{A}=\mathcal{O}_X$ modules that are merely torsion free and those that are locally projective lie in different connected components of the moduli scheme. The reason is that locally projective $\mathcal{A}$-modules do not satisfy the valuative criterion for properness if $\mathcal{A}$ is nontrivial. 
This is the reason why one has to include the merely torsion free $\mathcal{A}$-modules in order to obtain a proper noncommutative Picard scheme $\Pic(\mathcal{A})$. \end{rem} \section{Lagrangian subschemes} The covering map $\iota: \overline{X}\rightarrow \overline{X}$ induces an automorphism \begin{equation*} \iota^{*}: \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}} \rightarrow \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}},\,\,\, \left[F\right]\mapsto \left[\iota^{*}F\right]. \end{equation*} Moreover, using Remark \ref{eflat}, the projection $\pi:\overline{X}\rightarrow X$ induces a morphism \begin{equation*} \pi^{*}: \M_{\mathcal{A}/X,c_1,c_2} \rightarrow \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}},\,\,\,\, \left[E\right] \mapsto \left[\overline{E}\right]. \end{equation*} Our goal is to understand these morphisms. \begin{thm} The image of $\pi^{*}$ coincides with the fixed locus of $\iota^{*}$, that is we have $\Ima(\pi^{*})=\Fix(\iota^{*})$. The latter space is a smooth projective subscheme in $\M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. Furthermore the restriction of the symplectic form $\sigma$ on the tangent bundle of $\M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$ to $\Ima(\pi^{*})$ vanishes identically. \end{thm} \begin{proof} We certainly have $\Ima(\pi^{*})\subset \Fix(\iota^{*})$. By Theorem \ref{desc} we also have the inclusion $\Fix(\iota^{*})\subset \Ima(\pi^{*})$. So $\Ima(\pi^{*})=\Fix(\iota^{*})$. The subscheme $\Fix(\iota^{*})$ is projective and smooth by \cite[3.1,3.4]{edix}. We have seen in Theorem \ref{thm2} that $\Ext^2_{\mathcal{A}}(E,E)=0$ for all $[E]\in \M_{\mathcal{A}/X,c_1,c_2}$. Now the vanishing of the symplectic form follows, similarly to \cite[Proof of (3), p.92]{kim}, from the following commutative diagram: \begin{equation*} \begin{tikzcd} \Ext^1_{\mathcal{A}}(E,E) \times \Ext^1_{\mathcal{A}}(E,E) \arrow[r] \arrow[d,swap,shift right=3em,"\pi^{*}"]\arrow[d,shift left=3em,"\pi^{*}"] & \Ext^2_{\mathcal{A}}(E,E) \arrow[d,"\pi^{*}"] \\ \Ext^1_{\overline{\mathcal{A}}}(\overline{E},\overline{E}) \times \Ext^1_{\overline{\mathcal{A}}}(\overline{E},\overline{E}) \arrow[r] & \Ext^2_{\overline{\mathcal{A}}}(\overline{E},\overline{E}) \end{tikzcd} \end{equation*} using Mukai's description of the symplectic form on the tangent bundle of $\M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. \end{proof} \begin{rem} The vanishing of the symplectic form on $\Ima(\pi^{*})$ can also be seen by noting that $\iota^{*}$ is an antisymplectic automorphism of $\M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. More exactly we have $\iota^{*}\sigma=\zeta_n\sigma$ for a nontrivial $n$-th root of unity $\zeta_n$. This follows as in the proof of \cite[Lemma 4.7.]{fr}. One just has to note that $\iota^{*}$ acts as multiplication by $\zeta_n$ on $H^0(\overline{X},\omega_{\overline{X}})$, since $H^0(X,\omega_X)=0$. \end{rem} \begin{thm}\label{moduli} Let $X$ be a bielliptic surface such that the Brauer map is injective. Let $\mathcal{A}$ be an Azumaya algebra on $X$ representing a nontrivial element in $\Br(X)$. 
The pullback map \begin{equation*} \pi^{*}: \M_{\mathcal{A}/X,c_1,c_2} \rightarrow \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}} \end{equation*} realizes $\M_{\mathcal{A}/X,c_1,c_2}$ as a finite \'{e}tale cover of the smooth subscheme $\Fix(\iota^{*})\subset \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. \end{thm} \begin{proof} The previous theorem shows that $\pi^{*}$ factors through $\Fix(\iota^{*})$ giving rise to a surjective morphism \begin{equation*} \varphi: \M_{\mathcal{A}/X,c_1,c_2} \rightarrow \Fix(\iota^{*}). \end{equation*} Since $\pi: \overline{X}\rightarrow X$ is the canonical cover one has isomorphisms \begin{equation*} \overline{E\otimes\omega_X^j}\cong \overline{E} \,\,\,\text{for any $0\leqslant j \leqslant n-1$}. \end{equation*} By Corollary \ref{homvan} the $E\otimes\omega_X^j$ are pairwise non-isomorphic since $\overline{E}$ is simple. Now assume $\varphi(\left[E\right] )=\varphi(\left[F\right] )$ that is $\overline{E}\cong\overline{F}$ and $\Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{F})\cong \mathbb{C}$. Then Lemma \ref{hom} says \begin{equation*} \mathbb{C}\cong\Hom_{\overline{\mathcal{A}}}(\overline{E},\overline{F}) \cong \bigoplus\limits_{i=0}^{n-1} \Hom_{\mathcal{A}}(E,F\otimes\omega_X^i) \end{equation*} and so by \cite[Lemma 4.3]{fr} we have \begin{equation*} E\cong F\otimes\omega_X^j \end{equation*} for exactly one $j$ with $0\leqslant j\leqslant n-1$. So $\varphi$ is an unramified morphism of degree $n$. Moreover the computations also show that $\varphi$ is flat by \cite[Lemma, p.675]{schaps}, hence $\varphi$ is \'{e}tale. \end{proof} Since the symplectic form vanishes on $\Fix(\iota^{*})$ one may ask whether this is a Lagrangian subscheme of $\M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}$. This question can be answered by a simple dimension computation. \begin{lem} The subscheme $\Fix(\iota^{*})$ is Lagrangian if and only if the bielliptic surface $X$ is of type $1$ or $2$ or $\dim(\M_{\mathcal{A}/X,c_1,c_2})=1$. \end{lem} \begin{proof} By the previous theorem we need to check in which cases we have \begin{equation*} \dim\left( \M_{\overline{\mathcal{A}}/\overline{X},\overline{c_1},\overline{c_2}}\right) =2\dim\left( \M_{\mathcal{A}/X,c_1,c_2}\right). \end{equation*} It is therefore enough to check in which cases we have \begin{equation}\label{lagra} \ext^1_{\overline{\mathcal{A}}}(\overline{E},\overline{E})=2\ext^1_{\mathcal{A}}(E,E) \end{equation} for a locally projective $\mathcal{A}$-module $E$ of rank one. Since the canonical cover is finite \'{e}tale of degree $n$ we find using \cite[Lemma 1.3]{fr}: \begin{equation*} \chi(\mathcal{H}om_{\overline{\mathcal{A}}}(\overline{E},\overline{E}))=\chi(\pi^{*}\mathcal{H}om_{\mathcal{A}}(E,E))=n\chi(\mathcal{H}om_{\mathcal{A}}(E,E)). \end{equation*} Since $E$ and $\overline{E}$ are simple, with $\Ext^2_{\mathcal{A}}(E,E)=0$ by the proof of Theorem \ref{thm2}, while $\Ext^2_{\overline{\mathcal{A}}}(\overline{E},\overline{E})\cong\left(\End_{\overline{\mathcal{A}}}(\overline{E})\right)^{\vee}\cong\mathbb{C}$ by Proposition \ref{serre} and the triviality of $\omega_{\overline{X}}$, this translates into \begin{equation*} 2-\ext^1_{\overline{\mathcal{A}}}(\overline{E},\overline{E})=n\left( 1-\ext^1_{\mathcal{A}}(E,E)\right). \end{equation*} Inserting equation \eqref{lagra} into the last equation and simplifying gives: \begin{equation*} (n-2)\left(1-\ext^1_{\mathcal{A}}(E,E) \right)=0. \end{equation*} We conclude: $\Fix(\iota^{*})$ is a Lagrangian subscheme if and only if the canonical cover of $X$ has degree two, that is $X$ is of type 1 or 2, or in case $\dim(\M_{\mathcal{A}/X,c_1,c_2})=1$. \end{proof}
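As a small aside, the final factorization can be verified symbolically; the following sketch (a trivial consistency check, included only for convenience) reproduces the constraint derived in the proof above:
\begin{verbatim}
# chi is multiplied by n under the finite etale cover, which gives
# 2 - ext1_bar = n*(1 - e) with e = ext^1_A(E, E); the Lagrangian
# condition reads ext1_bar = 2*e.
import sympy as sp

n, e = sp.symbols('n e')
expr = (2 - 2 * e) - n * (1 - e)   # substitute ext1_bar = 2*e
print(sp.factor(expr))             # (e - 1)*(n - 2): zero iff n = 2 or e = 1
\end{verbatim}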
\section{Introduction} Most optical materials have positive (normal) dispersion, which means that the refractive index decreases at longer wavelengths. As a consequence, blue light is deflected more than red light by dielectric prisms [Fig. \ref{fig:1_concept}(a)]. Diffraction gratings are said to have negative dispersion because they disperse light similarly to hypothetical refractive prisms made of a material with negative (anomalous) dispersion [Fig. \ref{fig:1_concept}(b)]. For diffractive devices, dispersion is not related to material properties; it refers to the derivative of a certain device parameter with respect to wavelength. For example, the angular dispersion of a grating that deflects normally incident light by a positive angle $\theta$ is given by $\mathrm{d}\theta/\mathrm{d}\lambda=\tan(\theta)/\lambda$ (see~\cite{Born1999} and Supplementary Section S2). Similarly, the wavelength dependence of the focal length ($f$) of a diffractive lens is given by $\mathrm{d}f/\mathrm{d}\lambda=-f/\lambda$~\cite{Born1999,O'shea2004}. Here we refer to diffractive devices that follow these fundamental chromatic dispersion relations as ``\textit{regular}''. Achieving new regimes of dispersion control in diffractive optics is important both at the fundamental level and for numerous practical applications. Several distinct regimes can be differentiated as follows. Diffractive devices are dispersionless when the derivative is zero (i.e. $\mathrm{d}\theta/\mathrm{d}\lambda=0$, $\mathrm{d}f/\mathrm{d}\lambda=0$, shown schematically in Fig. \ref{fig:1_concept}(c)), have positive dispersion when the derivative has opposite sign compared to a regular diffractive device of the same kind (i.e. $\mathrm{d}\theta/\mathrm{d}\lambda<0$, $\mathrm{d}f/\mathrm{d}\lambda>0$) as shown in Fig. \ref{fig:1_concept}(d), and are hyper-dispersive when the derivative has a larger absolute value than a regular device (i.e. $|\mathrm{d}\theta/\mathrm{d}\lambda|>|\tan(\theta)/\lambda|$, $|\mathrm{d}f/\mathrm{d}\lambda|>|-f/\lambda|$) as seen in Fig. \ref{fig:1_concept}(e). Here we show that these regimes can be achieved in diffractive devices based on optical metasurfaces. Metasurfaces have attracted great interest in recent years~\cite{Kildishev2013Science,Yu2014NatMater,Koenderink2015Science,Jahani2016NatNano,Lalanne1998OptLett,Lalanne1999JOSAA,Fattal2010NatPhoton,Yin2013Science,Lee2014Nature,Silva2014Science} because they enable precise control of optical wavefronts and are easy to fabricate with conventional microfabrication technology in a flat, thin, and lightweight form factor. Various conventional devices such as gratings, lenses, holograms, and planar filter arrays~\cite{Lalanne1998OptLett,Lalanne1999JOSAA,Fattal2010NatPhoton,Ni2013LightSciApp,Vo2014IEEEPhotonTechLett,Lin2014Science,Arbabi2015NatCommun,Yu2015LaserPhotonRev,Arbabi2015OptExp,Decker2015AdvOptMat,Wang2016NatPhoton,Kamali2016LaserPhotonRev,Zhan2016ACSPhotonics,Arbabi2016NatCommun,Khorasaninejad2016Science,Horie2015OptExp,Horie2016OptExp}, as well as novel devices~\cite{Arbabi2015NatNano,Kamali2016NatCommun} have been demonstrated using metasurfaces. These optical elements are composed of large numbers of scatterers, or meta-atoms, placed on a two-dimensional lattice to locally shape optical wavefronts. Similar to other diffractive devices, metasurfaces that locally change the propagation direction (e.g. 
lenses, beam deflectors, holograms) have negative chromatic dispersion~\cite{Born1999,O'shea2004,Sauvan2004OptLett,Arbabi2016Optica}. This is because most of these devices are divided into Fresnel zones whose boundaries are designed for a specific wavelength~\cite{Faklis1995ApplOpt,Arbabi2016Optica}. This chromatic dispersion is an important limiting factor in many applications and its control is of great interest. Metasurfaces with zero and positive dispersion would be useful for making achromatic singlet and doublet lenses, and the larger-than-regular dispersion of hyper-dispersive metasurface gratings would enable high-resolution spectrometers. We emphasize that the devices with zero chromatic dispersion discussed here are fundamentally different from the multiwavelength metasurface gratings and lenses recently reported~\cite{Faklis1995ApplOpt,Eisenbach2015OptExp,Aieta2015Science,Khorasaninejad2015NanoLett,Arbabi2016Optica,Wang2016NanoLett,Arbabi2016OptExp,Zhao2016OptLett,Deng2016OptExp,Arbabi2016SciRep,Lin2016NanoLett}. Multiwavelength devices have several diffraction orders, which result in lenses (gratings) with the same focal length (deflection angle) at a few discrete wavelengths. However, at each of these focal distances (deflection angles), the multiwavelength lenses (gratings) exhibit the regular negative diffractive chromatic dispersion (see~\cite{Faklis1995ApplOpt,Arbabi2016Optica}, Supplementary Section S3 and Fig. S1). \section{Theory} Here we argue that simultaneously controlling the phase imparted by the meta-atoms composing the metasurface ($\phi$) and its derivative with respect to frequency $\omega$ ($\phi'=\partial \phi/\partial\omega$, which we refer to as chromatic phase dispersion, or dispersion for brevity) makes it possible to dramatically alter the fundamental chromatic dispersion of diffractive components. This, in effect, is equivalent to simultaneously controlling the ``effective refractive index'' and ``chromatic dispersion'' of the meta-atoms. We have used this concept to demonstrate metasurface focusing mirrors with zero dispersion~\cite{Arbabi2016CLEO_Displess} in the near-IR. More recently, the same structure as the one used in~\cite{Arbabi2016CLEO_Displess} (with titanium dioxide replacing $\alpha$-Si) was used to demonstrate achromatic reflecting mirrors in the visible~\cite{Khorasaninejad2017NanoLett}. Using the concept introduced in~\cite{Arbabi2016CLEO_Displess}, here we experimentally show metasurface gratings and focusing mirrors that have positive, zero, and hyper chromatic dispersions. We also demonstrate an achromatic focusing mirror with a highly diminished focal length chromatic dispersion, resulting in an almost threefold increase in its operation bandwidth. First, we consider the case of devices with zero chromatic dispersion. In general, for truly frequency-independent operation, a device should impart a constant delay for different frequencies (i.e. demonstrate a true time delay behavior), similar to a refractive device made of a non-dispersive material~\cite{Born1999}. 
Therefore, the phase profile will be proportional to the frequency: \begin{equation} \phi(x,y;\omega) = \omega T(x,y), \label{eq:displess_phase} \end{equation} where $\omega=2\pi c/ \lambda$ is the angular frequency ($\lambda$: wavelength, $c$: speed of light) and $T(x,y)$ determines the function of the device (for instance $T(x,y)=-x \sin{\theta_0}/c$ for a grating that deflects light by angle $\theta_0$; $T(x,y)=-\sqrt{x^2 +y^2 + f^2}/c$ for a spherical-aberration-free lens with a focal distance $f$). Since the phase profile is a linear function of $\omega$, it can be realized using a metasurface composed of meta-atoms that control the phase $\phi(x,y;\omega_0) = T(x,y)\omega_0$ and its dispersion $\phi'=\mathrm{\partial}\phi(x,y;\omega)/\mathrm{\partial}\omega = T(x,y)$. The bandwidth of dispersionless operation corresponds to the frequency interval over which the phase locally imposed by the meta-atoms is linear with frequency $\omega$. For gratings or lenses, a large device size results in a large $|T(x,y)|$, which means that the meta-atoms should impart a large phase dispersion. Since the phase values at the center wavelength $\lambda_0=2\pi c/\omega_0$ can be wrapped into the 0 to 2$\pi$ interval, the meta-atoms only need to cover a rectangular region in the \textit{phase-dispersion} plane bounded by $\phi=0$ and 2$\pi$ lines, and $\phi'=0$ and $\phi'_\mathrm{max}$ lines, where $\phi'_\mathrm{max}$ is the maximum required dispersion, which is related to the device size (see Supplementary Section S5 and Fig. S2). The required phase-dispersion coverage means that, to implement devices with various phase profiles, for each specific value of the phase we need various meta-atoms providing that specific phase, but with different dispersion values. Considering the simple case of a flat dispersionless lens (or focusing mirror) with radius $R$, we can get some intuition into the relations found for phase and dispersion. Dispersionless operation over a certain bandwidth $\Delta \omega$ means that the device should be able to focus a transform-limited pulse with bandwidth $\Delta \omega$ and carrier frequency $\omega_0$ to a single spot located at focal length $f$ [Fig. \ref{fig:2_Simulations}(a)]. To implement this device, part of the pulse hitting the lens at a distance $r$ from its center needs to experience a pulse delay (i.e. group delay $t_g = \partial\phi/\partial\omega$) smaller by $(\sqrt{r^2+f^2}-f)/c$ than part of the pulse hitting the lens at its center. This ensures that parts of the pulse hitting the lens at different locations arrive at the focus at the same time. The carrier delay (i.e. phase delay $t_p = \phi(\omega_0)/\omega_0$) should also be adjusted so that all parts of the pulse interfere constructively at the focus. Thus, to implement this phase delay and group delay behavior, the lens needs to be composed of elements, ideally with sub-wavelength size, that can provide the required phase delay and group delay at different locations. For a focusing mirror, these elements can take the form of sub-wavelength one-sided resonators, where the group delay is related to the quality factor $Q$ of the resonator (see Supplementary Section S7) and the phase delay depends on the resonance frequency. We note that larger group delays are required for lenses with larger radius, which means that elements with higher quality factors are needed. 
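As a minimal numerical sketch of this requirement (using the focusing mirror parameters discussed later in this paper; the script is illustrative only and not part of the design procedure), the relative group delay across the aperture can be evaluated directly:
\begin{verbatim}
# Relative group delay across a dispersionless focusing mirror: the part
# of the pulse hitting the mirror at radius r must be delayed less than
# the part hitting the center by (sqrt(r^2 + f^2) - f)/c.
import numpy as np

c, lam0 = 3e8, 1.52e-6        # speed of light (m/s), center wavelength (m)
f, r_max = 850e-6, 250e-6     # focal distance and mirror radius (m)
r = np.linspace(0.0, r_max, 101)
dt = (np.sqrt(r ** 2 + f ** 2) - f) / c
print(dt[-1] / (lam0 / c))    # ~24 carrier periods at the mirror edge
\end{verbatim}
This reproduces the group delay of $\sim$24~$\lambda_0/\mathrm{c}$ quoted below for the 500-$\mu$m-diameter mirror.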
If the resonators are single mode, the $Q$ imposes an upper bound on the maximum bandwidth $\Delta \omega$ of the pulse that needs to be focused. The operation bandwidth can be expanded by using one-sided resonators with multiple resonances that partially overlap. As we will show later in the paper, these resonators can be implemented using silicon nano-posts backed by a reflective mirror. To realize metasurface devices with non-zero dispersion of a certain parameter $\xi(\omega)$, phase profiles of the following form are needed: \begin{equation} \phi(x,y;\omega) = \omega T(x,y,\xi(\omega)). \label{eq:disp_phase} \end{equation} \noindent For instance, the parameter $\xi(\omega)$ can be the deflection angle of a diffraction grating $\theta(\omega)$ or the focal length of a diffractive lens $f(\omega)$. As we show in Supplementary Section S4, to independently control the parameter $\xi(\omega)$ and its chromatic dispersion $\mathrm{\partial}\xi/\mathrm{\partial}\omega$ at $\omega=\omega_0$, we need to control the phase dispersion at this frequency in addition to the phase. The required dispersion for a certain parameter value $\xi_0=\xi(\omega_0)$ and a certain dispersion $\mathrm{\partial}\xi/\mathrm{\partial}\omega|_{\omega=\omega_0}$ is given by: \begin{equation} \frac{\mathrm{\partial}\phi(x,y;\omega)}{\mathrm{\partial}\omega}|_{\omega=\omega_0} = T(x,y,\xi_0)+\partial\xi/\partial\omega|_{\omega=\omega_0}\omega_0\frac{\partial T(x,y,\xi)}{\partial\xi}|_{\xi=\xi_0}. \label{eq:disp_disp_phase} \end{equation} \noindent This dispersion relation is valid over a bandwidth where a linear approximation of $\xi(\omega)$ is valid. One can also use Fermat's principle to get results similar to Eq. (\ref{eq:disp_disp_phase}) for the local phase gradient and its frequency derivative (see Supplementary Section S6). We note that discussing these types of devices in terms of phase $\phi(\omega)$ and phase dispersion $\partial\phi/\partial\omega$, which we mainly use in this paper, is equivalent to using the terminology of phase delay ($t_p=\phi(\omega_0)/\omega_0$) and group delay ($t_g=\partial\phi/\partial\omega$). The zero dispersion case discussed above corresponds to a case where the phase and group delays are equal. Figures~\ref{fig:2_Simulations}(b) and \ref{fig:2_Simulations}(c) show the required phase and group delays for blazed gratings and focusing mirrors with various types of dispersion, demonstrating the equality of phase and group delays in the dispersionless case. In microwave photonics, the idea of using sets of separate optical cavities for independent control of the phase delay of the optical carrier and the group delay of the modulated RF signal has previously been proposed~\cite{Morton2009IEEEPhotonTechLett} to achieve dispersionless beam steering, resembling a true time delay system over a narrow bandwidth. For all other types of chromatic dispersion, the phase and group delays are drastically different, as shown in Figs.~\ref{fig:2_Simulations}(b) and \ref{fig:2_Simulations}(c). Assuming hypothetical meta-atoms that provide independent control of phase and dispersion up to a dispersion of $-150$~Rad$/\mu$m (to adhere to the commonly used convention, we report the dispersion in terms of wavelength) at the center wavelength of 1520~nm, we have designed and simulated four gratings with different chromatic dispersions (see Supplementary Section S1 for details). The simulated deflection angles as functions of wavelength are plotted in Fig. \ref{fig:2_Simulations}(d). 
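It is instructive to specialize Eq.~(\ref{eq:disp_disp_phase}) to a grating with $T(x,\theta)=-x\sin(\theta)/c$ and a target dispersion $\mathrm{d}\theta/\mathrm{d}\lambda=D\tan(\theta_0)/\lambda$, where $D=1$ corresponds to a regular grating, $D=0$ to a dispersionless one, $D=-1$ to positive dispersion, and $D=3$ to the hyper-dispersive case; a short calculation (our own consistency check, not the simulation code of Supplementary Section S1) then gives the required group delay $t_g(x)=(D-1)\,x\sin(\theta_0)/c$. The sketch below evaluates this at the edge of a 150-$\mu$m-wide design; note that only the group delay variation across the aperture matters, since an overall delay offset is immaterial:
\begin{verbatim}
# Required group delay at the edge of a 150-um-wide grating for the four
# dispersion regimes, from Eq. (3) with T(x, theta) = -x*sin(theta)/c and
# dtheta/dlambda = D*tan(theta_0)/lambda.
import numpy as np

theta0 = np.deg2rad(10.0)
lam0, c, x_edge = 1.52e-6, 3e8, 150e-6
for D, name in [(1, 'regular'), (0, 'zero'), (-1, 'positive'), (3, 'hyper')]:
    tg = (D - 1) * x_edge * np.sin(theta0) / c
    print(f'{name:9s}: {tg / (lam0 / c):+6.1f} lambda0/c')
\end{verbatim}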
All gratings are 150~$\mu$m wide, and have a deflection angle of 10 degrees at their center wavelength of 1520~nm. The positive-dispersion grating exhibits a dispersion equal in magnitude, but opposite in sign, to the negative dispersion of a regular grating with the same deflection angle. The hyper-dispersive design is three times more dispersive than the regular grating, and the dispersionless beam deflector shows almost no change in its deflection angle. Besides gratings, we have also designed focusing mirrors exhibiting regular, zero, positive, and hyper dispersions. The focusing mirrors have a diameter of $500$~$\mu$m and a focal distance of $850$~$\mu$m at 1520~nm. Hypothetical meta-atoms with a maximum dispersion of $-200$~Rad$/\mu$m are required to implement these focusing mirror designs. The simulated focal distances of the four designs are plotted in Fig. \ref{fig:2_Simulations}(e). The axial plane intensity distributions at three wavelengths are plotted in Figs. \ref{fig:2_Simulations}(f-i) (for intensity plots at other wavelengths see Supplementary Fig. S3). To relate to our previous discussion of dispersionless focusing mirrors depicted in Fig. \ref{fig:2_Simulations}(a), a focusing mirror with a diameter of 500~$\mu$m and a focal distance of 850~$\mu$m would require meta-atoms with a group delay of $\sim$24~$\lambda_0/\mathrm{c}$, with $\lambda_0$=1520~nm. To implement this device we used hypothetical meta-atoms with a maximum dispersion of $\sim-100$~Rad/$\mu$m, which corresponds to a group delay of $\sim$24~$\lambda_0/\mathrm{c}$. The hypothetical meta-atoms exhibit this almost linear dispersion over the operation bandwidth of 1450~nm to 1590~nm. \section{Metasurface design} An example of meta-atoms capable of providing 0 to 2$\pi$ phase coverage and different dispersions is shown in Fig. \ref{fig:3_DesignGraphs}(a). The meta-atoms are composed of a square cross-section amorphous silicon ($\mathrm{\alpha}$-Si) nano-post on a low refractive index silicon dioxide (SiO$_2$) spacer layer on an aluminum reflector; together these play the role of the multi-mode one-sided resonators mentioned in Section 2 [Fig. \ref{fig:2_Simulations}(a)]. They are located on a periodic square lattice [Fig. \ref{fig:3_DesignGraphs}(a), middle]. The simulated dispersion versus phase plot for the meta-atoms at the wavelength of $\lambda_0=1520$~nm is depicted in Fig. \ref{fig:3_DesignGraphs}(b), and shows a partial coverage up to the dispersion value of $\sim-100$~Rad$/\mu$m. The nano-posts exhibit several resonances, which enable high dispersion values over the 1450~nm to 1590~nm wavelength range. The meta-atoms are 725~nm tall, the SiO$_{2}$ layer is 325~nm thick, the lattice constant is 740~nm, and the nano-post side length is varied from 74 to 666~nm in 1.5~nm steps. Simulated reflection amplitude and phase for the periodic lattice are plotted in Figs. \ref{fig:3_DesignGraphs}(c) and \ref{fig:3_DesignGraphs}(d), respectively. The reflection amplitude over the bandwidth of interest is close to 1 for all nano-post side lengths. The operation of the nano-post meta-atoms is best intuitively understood as that of truncated multi-mode waveguides with many resonances in the bandwidth of interest \cite{Kamali2016NatCommun,Lalanne1999JOSAA_Multimode}. By going through the nano-post twice, light can obtain larger phase shifts compared to the transmissive operation mode of the metasurface (i.e. without the metallic reflector).
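A crude effective-index estimate already suggests why the double pass helps; the toy calculation below is our own rough sketch for intuition only (the assumed effective-index values are illustrative, and the actual design relies on full-wave simulations):

\begin{verbatim}
import numpy as np

lam0 = 1.52   # wavelength (um)
h = 0.725     # nano-post height (um)
k0 = 2 * np.pi / lam0

# Effective indices of the fundamental post mode roughly span ~1 (narrow
# posts) to ~3.5 (wide posts, approaching bulk amorphous silicon):
for n_eff in (1.0, 2.0, 3.5):
    phi = 2 * n_eff * k0 * h            # double-pass propagation phase (rad)
    print(n_eff, phi / (2 * np.pi))     # ~0.95, ~1.9, ~3.3 cycles
\end{verbatim}

In this simple picture the reflective geometry spans several multiples of $2\pi$ as the post width is varied, which is what makes the full phase coverage together with large dispersion values possible.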
The metallic reflector keeps the reflection amplitude high for all sizes, which makes the use of high quality factor resonances possible. As discussed in Section 2, high quality factor resonances are necessary for achieving large dispersion values, because, as we have shown in Supplementary Section S7, dispersion is given by $\phi'\approx -Q/\lambda_0$, where $Q$ is the quality factor of the resonance. Using the dispersion-phase parameters provided by this metasurface, we designed four gratings operating in various dispersion regimes. The gratings are $\sim$90~$\mu$m wide and have a 10-degree deflection angle at 1520~nm. They are designed to operate in the 1450 to 1590~nm wavelength range, and have regular negative, zero, positive, and hyper (three-times-larger negative) dispersion. Since the phase of the meta-atoms does not follow a linear frequency dependence over this wavelength interval [Fig. \ref{fig:3_DesignGraphs}(d), top right], we calculate the desired phase profile of the devices at 8 wavelengths in the range (1450 to 1590~nm at 20~nm steps), and form an 8$\times$1 complex reflection coefficient vector at each point on the metasurface. Using Figs. \ref{fig:3_DesignGraphs}(c) and \ref{fig:3_DesignGraphs}(d), a similar complex reflection coefficient vector is calculated for each meta-atom. Then, at each lattice site of the metasurface, we place a meta-atom whose reflection vector has the shortest weighted Euclidean distance to the desired reflection vector at that site. The weights allow for emphasizing different parts of the operation bandwidth, and can be chosen based on the optical spectrum of interest or other considerations. Here, we used an inverted Gaussian weight ($\exp((\lambda-\lambda_0)^2/2\sigma^2)$, $\sigma=300$~nm), which gives larger weight to wavelengths farther away from the center wavelength of $\lambda_0=1520$~nm. The same design method is used for the other devices discussed in the manuscript. The designed devices were fabricated using standard semiconductor fabrication techniques as described in Supplementary Section S1. Figures \ref{fig:3_DesignGraphs}(e) and \ref{fig:3_DesignGraphs}(f) show scanning electron micrographs of the nano-posts, and some of the devices fabricated using the proposed reflective meta-atoms. Supplementary Figure S5 shows the chosen post side lengths and the required as well as the achieved phase and group delays for the gratings with different dispersions. Required phases, and the values provided by the chosen nano-posts are plotted at three wavelengths for each grating in Supplementary Fig. S6. \section{Experimental results} Figures \ref{fig:4_Gratings}(a) and \ref{fig:4_Gratings}(b) show the simulated and measured deflection angles for the gratings, respectively. The measured values are calculated by finding the center of mass of the deflected beam 3~mm away from the grating surface (see Supplementary Section S1 and Fig. S8 for more details). As expected, the zero dispersion grating shows an apochromatic behavior resulting in a reduced dispersion, the positive grating shows positive dispersion in the $\sim$1490-1550~nm bandwidth, and the hyper-dispersive one shows an enhanced dispersion in the measurement bandwidth. This can also be viewed from the grating momentum point of view: a regular grating has a constant momentum set by its period, resulting in a constant transverse wave-vector. In contrast, the momentum of the hyper-dispersive grating increases with wavelength, while that of the zero and positive gratings decreases with it.
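This momentum picture can be made concrete with a few lines of code. The sketch below (an illustration we provide for the reader, not the design code) evaluates the local grating momentum $k_G(\lambda)=(2\pi/\lambda)\sin\theta(\lambda)$ implied by the linearized deflection angle $\theta(\lambda)\approx\theta_0+\nu D_0(\lambda-\lambda_0)$, where $D_0=\tan(\theta_0)/\lambda_0$ is the regular grating dispersion derived in Supplementary Section S2:

\begin{verbatim}
import numpy as np

lam0 = 1.52              # center wavelength (um)
th0 = np.deg2rad(10.0)   # deflection angle at lam0
D0 = np.tan(th0) / lam0  # regular grating dispersion dtheta/dlam (rad/um)

lam = np.linspace(1.45, 1.59, 5)   # wavelengths (um)
for nu in (1, 0, -1, 3):           # regular, zero, positive, hyper
    theta = th0 + nu * D0 * (lam - lam0)      # linearized theta(lambda)
    kG = 2 * np.pi * np.sin(theta) / lam      # grating momentum (rad/um)
    print(nu, np.round(kG, 4))
# nu = 1 gives a constant kG (fixed effective period); nu = 3 gives a kG
# that grows with wavelength; nu = 0 and nu = -1 give a shrinking kG.
\end{verbatim}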
This means that the effective period of the non-regular gratings changes with wavelength, resulting in the desired chromatic dispersion. Figures \ref{fig:4_Gratings}(e-h) show good agreement between the simulated intensities of these gratings versus wavelength and transverse wave-vector (see Supplementary Section S1 for details) and the measured beam deflection (black stars). The green line is the theoretical expectation of the maximum intensity trajectory. The change in the grating pitch with wavelength is clearer in Supplementary Fig. S6, where the required and achieved phases are plotted for three wavelengths. Measured deflection efficiencies of the gratings, defined as the power deflected by the gratings to the desired order divided by the power reflected from a plain aluminum reflector (see Supplementary Section S1 and Fig. S8 for more details), are plotted in Figs. \ref{fig:4_Gratings}(c) and \ref{fig:4_Gratings}(d) for TE and TM illuminations, respectively. A similar difference in the efficiency of the gratings for TE and TM illuminations has also been observed in previous works~\cite{Arbabi2015NatCommun,Kamali2016NatCommun}. As another example of diffractive devices with controlled chromatic dispersion, four spherical-aberration-free focusing mirrors with different chromatic dispersions were designed, fabricated, and measured using the same reflective dielectric meta-atoms. The mirrors are 240~$\mu$m in diameter and are designed to have a focal distance of 650~$\mu$m at 1520~nm. Supplementary Figure S7 shows the chosen post side lengths and the required as well as the achieved phase and group delays for the focusing mirrors with different dispersions. Figures \ref{fig:5_Lenses}(a) and \ref{fig:5_Lenses}(b) show simulated and measured focal distances for the four focusing mirrors (see Supplementary Figs. S9, S10, and S11 for detailed simulation and measurement results). The positive dispersion mirror is designed with a dispersion twice as large as that of a regular mirror with the same focal distance, and the hyper-dispersive mirror has a negative dispersion three and a half times larger than a regular one. The zero dispersion mirror shows a significantly reduced dispersion, while the hyper-dispersive one shows a highly enhanced dispersion. The positive mirror shows the expected dispersion in the $\sim$1470 to 1560~nm range. As an application of diffractive devices with dispersion control, we demonstrate a spherical-aberration-free focusing mirror with an increased operation bandwidth. For brevity, we call this device a dispersionless mirror. Since the absolute focal distance change is proportional to the focal distance itself, a relatively long focal distance is helpful for unambiguously observing the change in the device dispersion. Also, a higher NA value is preferred because it results in a shorter depth of focus, thus making the measurements easier. With these considerations in mind, we have chosen a diameter of 500~$\mu$m and a focal distance of 850~$\mu$m (NA$\approx$0.28) for the mirror, requiring a maximum dispersion of $\phi'_\mathrm{max}\approx-98$~Rad/$\mu$m, which is achievable with the proposed reflective meta-atoms. We designed two dispersionless mirrors with two $\sigma$ values of 300 and 50~nm. For comparison, we also designed a regular metasurface mirror for operation at $\lambda_0=1520$~nm and with the same diameter and focal distance as the dispersionless mirrors.
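The value of $\phi'_\mathrm{max}$ quoted above can be checked against the closed-form expression for dispersionless lenses derived in Supplementary Section S5 (Eq. \ref{eq:max_dispersion_simplified}); the following minimal Python check (illustrative only, not part of the design flow) reproduces it:

\begin{verbatim}
import numpy as np

lam0 = 1.52           # center wavelength (um)
R, f = 250.0, 850.0   # mirror radius and focal distance (um)

k0 = 2 * np.pi / lam0
NA = R / np.sqrt(R**2 + f**2)   # numerical aperture, ~0.28
phi_max = -(k0 * R / lam0) * (1 - np.sqrt(1 - NA**2)) / NA
print(NA, phi_max)              # ~0.28, ~-98 Rad/um
\end{verbatim}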
The simulated focal distance deviations (from the designed 850~$\mu$m) for the regular and dispersionless ($\sigma=300$~nm) mirrors are plotted in Fig. \ref{fig:5_Lenses}(c), showing a considerable reduction in chromatic dispersion for the dispersionless mirror. Detailed simulation results for these mirrors are plotted in Supplementary Fig. S12. Figures \ref{fig:5_Lenses}(d-g) summarize the measurement results for the dispersionless and regular mirrors (see Supplementary Section S1 and Fig. S8 for measurement details and setup). As Figs. \ref{fig:5_Lenses}(d) and \ref{fig:5_Lenses}(g) show, the focal distance of the regular mirror changes almost linearly with wavelength. The dispersionless mirror, however, shows a highly diminished chromatic dispersion. In addition, as seen from the focal plane intensity measurements, while the dispersionless mirrors are in focus in the 850~$\mu$m plane throughout the measured bandwidth, the regular mirror is in focus only from 1500 to 1550~nm (see Supplementary Figs. S13 and S14 for complete measurement results and the Strehl ratios). Focusing efficiencies, defined as the ratio of the optical power focused by the mirrors to the power incident on them, were measured at different wavelengths for the regular and dispersionless mirrors (see Supplementary Section S1 for details). The measured efficiencies were normalized to the efficiency of the regular metasurface mirror at its center wavelength of 1520~nm (which is estimated to be $\sim$80$\%$--90$\%$ based on Fig. \ref{fig:3_DesignGraphs}, measured grating efficiencies, and our previous works~\cite{Arbabi2015NatCommun}). The normalized efficiency of the dispersionless mirror is between 50$\%$ and 60$\%$ over the whole wavelength range and, in contrast to the regular metasurface mirror, shows no significant reduction across the band. \section{Discussion and conclusion} The reduction in efficiency compared to a mirror designed only for the center wavelength (i.e. the regular mirror) is caused by two main factors. First, the required region of the phase-dispersion plane is not completely covered by the reflective nano-post meta-atoms. Second, the meta-atom phase does not change linearly with respect to frequency in the relatively large bandwidth of 140~nm, as would be ideal for a dispersionless metasurface. Both of these factors result in deviation of the phase profiles of the demonstrated dispersionless mirrors from the ideal ones. Furthermore, dispersionless metasurfaces use meta-atoms supporting resonances with high quality factors, thus leading to a higher sensitivity of these devices to fabrication errors compared to regular metasurfaces. Equation \ref{eq:disp_disp_phase} is essentially a Taylor expansion of Eq. \ref{eq:disp_phase} kept to first order. As a result, this equation is accurate only over the range of linearity of the phase given in Eq. \ref{eq:disp_phase}. To increase the validity bandwidth, one can generalize the method to keep higher order terms of the series. Another method to address this issue is the Euclidean distance minimization method that was used in the design process of the devices presented here. In conclusion, we demonstrated that independent control over the phase and dispersion of meta-atoms can be used to engineer the chromatic dispersion of diffractive metasurface devices over continuous wavelength regions. This is in effect similar to controlling the ``material dispersion'' of meta-atoms to compensate, over-compensate, or increase the structural dispersion of diffractive devices.
In addition, we developed a reflective dielectric metasurface platform that provides this independent control. Using this platform, we experimentally demonstrated gratings and focusing mirrors exhibiting positive, negative, zero, and enhanced dispersions. We also corrected the chromatic aberrations of a focusing mirror, resulting in a $\sim$3 times bandwidth increase (based on a Strehl ratio $>0.6$, see Supplementary Fig. S14). In addition, the introduced concept of metasurface design based on dispersion-phase parameters of the meta-atoms is general and can also be used for developing transmissive dispersion engineered metasurface devices. \vspace{0.2in} \section*{Acknowledgements} This work was supported by Samsung Electronics. E.A. and A.A. were also supported by National Science Foundation award 1512266. A.A. and Y.H. were also supported by DARPA, and S.M.K. was supported as part of the Department of Energy (DOE) ``Light-Material Interactions in Energy Conversion'' Energy Frontier Research Center under grant no. DE-SC0001293. The device nanofabrication was performed at the Kavli Nanoscience Institute at Caltech. \noindent\textbf{Author contributions} E.A., A.A., and A.F. conceived the experiment. E.A., S.M.K., and Y.H. fabricated the samples. E.A., S.M.K., A.A., and Y.H. performed the simulations and measurements, and analyzed the data. E.A., A.F., and A.A. co-wrote the manuscript. All authors discussed the results and commented on the manuscript. \clearpage \begin{figure*}[htp] \centering \includegraphics{./Fig1_Concept_Fig_Rev_3_Horizontal.pdf} \caption{Schematic illustrations of different dispersion regimes. (a) Positive chromatic dispersion in refractive prisms and lenses made of materials with normal dispersion. (b) Regular (negative) dispersion in typical diffractive and metasurface gratings and lenses. (c) Schematic illustration of zero, (d) positive, and (e) hyper dispersion in dispersion-controlled metasurfaces. Only three wavelengths are shown here, but the dispersions are valid for any other wavelength in the bandwidth. The diffractive devices are shown in transmission mode for ease of illustration, while the actual devices fabricated in this paper are designed to operate in reflection mode.} \label{fig:1_concept} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./Fig2_ConceptFigure_main_Consice_ver_6.pdf} \caption{Required phase and group delays and simulation results of dispersion-engineered metasurfaces based on hypothetical meta-atoms. (a) Schematic of the focusing of a light pulse to the focal distance of a flat lens. The $E$ vs $t$ graphs schematically show the portions of the pulse passing through the center and through a point a distance $r$ away from the center, both before the lens and when arriving at the focus. The portions passing through different parts of the lens should acquire equal group delays, and should arrive at the focal point in phase for dispersionless operation. (b) Required values of group delay for gratings with various types of chromatic dispersion. The dashed line shows the required phase delay for all devices, which also coincides with the required group delay for the dispersionless gratings. The gratings are $\sim$90~$\mu$m wide, and have a deflection angle of 10 degrees at their center wavelength of 1520~nm. (c) Required values of group delay for aspherical focusing mirrors with various types of chromatic dispersion. The dashed line shows the required phase delay for all devices.
The mirrors are 240~$\mu$m in diameter, and have a focal distance of 650~$\mu$m at their center wavelength of 1520~nm. (d) Simulated deflection angles for gratings with regular, zero, positive, and hyper dispersions. The gratings are 150~$\mu$m wide and have a 10-degree deflection angle at 1520~nm. (e) Simulated focal distances for metasurface focusing mirrors with different types of dispersion. The mirrors are 500~$\mu$m in diameter and have a focal distance of 850~$\mu$m at 1520~nm. All gratings and focusing mirrors are designed using hypothetical meta-atoms that provide independent control over phase and dispersion (see Supplementary Section S1 for details). (f) Intensity in the axial plane for the focusing mirrors with regular negative, (g) zero, (h) positive, and (i) hyper dispersions plotted at three wavelengths (see Supplementary Fig. S3 for other wavelengths).} \label{fig:2_Simulations} \end{figure*} \clearpage \begin{figure}[htp] \centering \includegraphics[width=0.6\columnwidth]{./Fig3_DesignFigures_Main.pdf} \caption{High dispersion silicon meta-atoms. (a) A meta-atom composed of a square cross-section amorphous silicon nano-post on a silicon dioxide layer on a metallic reflector. Top and side views of the meta-atoms arranged on a square lattice are also shown. (b) Simulated dispersion versus phase plot for the meta-atom shown in (a) at $\lambda_0=$1520~nm. (c) Simulated reflection amplitude, and (d) phase as a function of the nano-post side length and wavelength. The reflection amplitude and phase along the dashed lines are plotted on the right. (e, f) Scanning electron micrographs of the fabricated nano-posts and devices.} \label{fig:3_DesignGraphs} \end{figure} \clearpage \begin{figure}[htp] \centering \includegraphics[width=1\columnwidth]{./Fig4_Gratings_Simulation_Measurement_Summary.pdf} \caption{Simulation and measurement results of gratings in different dispersion regimes. (a) Simulated deflection angles for gratings with different dispersions, designed using the proposed reflective meta-atoms. (b) Measured deflection angles for the same gratings. (c) Measured deflection efficiency for the gratings under TE, and (d) TM illumination. (e-h) Comparison between FDTD simulation results showing the intensity distribution of the diffracted wave as a function of normalized transverse wave-vector ($k_x/k_0$, $k_0=2\pi/\lambda_0$, and $\lambda_0=$1520~nm) and wavelength for different gratings, and the measured peak intensity positions plotted with black stars. All simulations here are performed with TE illumination. The green lines show the theoretically expected maximum intensity trajectories.} \label{fig:4_Gratings} \end{figure} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./Fig5_MeasurementResults_Main_Compact_Rotated.pdf} \caption{Simulation and measurement results for mirrors with different dispersion regimes. (a) Simulated focal distance for focusing mirrors with different dispersions, designed using the reflective meta-atoms (see Supplementary Fig. S9 for axial plane intensity distributions). The mirrors are 240~$\mu$m in diameter and have a focal distance of 650~$\mu$m at 1520~nm. (b) Measured focal distances of the same focusing mirrors (see Supplementary Figs. S10 and S11 for axial plane intensity distributions). (c) Simulated and (d) measured focal distance deviation from its design value of 850~$\mu$m as a function of wavelength for the dispersionless and regular mirrors (see Supplementary Figs.
S12 and S13 for extended simulation and measurement results). (e) Measured efficiency for the regular and dispersionless mirrors normalized to the efficiency of the regular device at its center wavelength of 1520~nm. (f) Measured intensity in the axial plane of the dispersionless metasurface mirror at five wavelengths (left). Intensity distributions measured in the desired focal plane (i.e. 850~$\mu$m away from the mirror surface) at the same wavelengths are shown in the center, and their one dimensional profiles along the $x$ axis are plotted on the right. (g) Same plots as in (f) but for the regular mirror. Scale bars: 2$\lambda$.} \label{fig:5_Lenses} \end{figure*} \clearpage \newcommand{\NatureFormatExtendedData}{% \setcounter{figure}{0} \renewcommand{\figurename}{\textbf{Extended Data Figure}} \renewcommand{\thefigure}{\textbf{\arabic{figure} $|$}}% } {\NatureFormatExtendedData \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS1_Multiwavelength_vs_Achromatic.pdf} \caption{Comparison of regular, multi-wavelength, and apochromatic lenses. (a) Schematic comparison of a regular, a multi-wavelength, and an apochromatic metasurface lens. The multi-wavelength lens is corrected at a short and a long wavelength to have a single focal point at a distance $f$, but it has two focal points at wavelengths in between them, none of which is at $f$. The apochromatic lens is corrected at the same short and long wavelengths, and at wavelengths between them it has a single focus very close to $f$. (b) Focal distances for three focal points of a multiwavelength lens corrected at three wavelengths, showing the regular dispersion (i.e. $f\propto 1/\lambda$) of each focus with wavelength. For comparison, the focal distance of the single focus of a typical apochromatic lens is also plotted.} \label{fig:S1_Multi_vs_Ach} \end{figure*} \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS2_MaxDispersionsLens_normalized_to_f.pdf} \caption{Maximum required dispersion of meta-atoms for lenses. (a) Maximum meta-atom dispersion necessary to control the dispersion of a spherical-aberration-free lens. The maximum dispersion is normalized to $-k_0 f/\lambda_0$ and is plotted on a logarithmic scale. (b) Normalized (to $-k_0 R/\lambda_0$) maximum dispersion required for a dispersionless lens. $R$ is the radius, $f$ is the focal distance, and NA is the numerical aperture of the lens.} \label{fig:S2_MaxDisps} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS3_MadeDispersionLensesAxialPlanes.pdf} \caption{Simulated axial intensity distribution for focusing mirrors with different dispersions designed using hypothetical meta-atoms. (a) Hyper-dispersive mirror. (b) Mirror with regular dispersion. (c) Mirror with zero dispersion. (d) Mirror with positive dispersion.} \label{fig:S3_MadeDispersionLensAxailPlanes} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS4_Supplemet_SEMs.pdf} \caption{Scanning electron micrographs of metasurface focusing mirrors with 850~$\mu$m focal distance. (a) Regular metasurface mirror. (b) Dispersionless metasurface mirror with $\sigma=300$~nm, and (c) $\sigma=50$~nm.
(d) Fabricated meta-atoms.} \label{fig:S4_SEMs} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS5_Grating_PostSizee_GroupDelay_PhaseDelay.pdf} \caption{Chosen nano-post side lengths for gratings and their corresponding phase and group delays. (a) The chosen nano-post side length (left), phase delay (center), and group delay (right) at 1520~nm for the fabricated regular grating. (b-d) Same as \textit{a} for the dispersionless, hyper-dispersive, and positive-dispersion gratings, respectively.} \label{fig:S5_GratingDelays} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS6_Grating_IdealVsAchievedPhase_3Wavelengths.pdf} \caption{Required and achieved phase values for gratings at three wavelengths. The phase delays are wrapped to the $-\pi$ to $\pi$ range. While the effective grating pitch is constant for the regular grating, it changes with wavelength in all other cases.} \label{fig:S6_GratingPhasesThreeWavelengths} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS7_Lens_PostSize_GroupDelay_PhaseDelay.pdf} \caption{Chosen nano-post side lengths for focusing mirrors and their corresponding phase and group delays. (a) The chosen nano-post side length (left), phase delay (center), and group delay (right) at 1520~nm for the fabricated 240~$\mu$m regular focusing mirror. (b-d) Same as \textit{a} for the dispersionless, hyper-dispersive, and positive-dispersion focusing mirrors, respectively.} \label{fig:S7_LensDelays} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS8_MeasurementSetup_Reflective.pdf} \caption{Measurement setups. (a) Schematic illustration of the setup used to measure the deflection angles of gratings, and focus patterns and axial plane intensity distributions of focusing mirrors at different wavelengths. To measure the efficiency of the focusing mirrors, the flip mirror, iris, and optical power meter were used. (b) The setup used to measure the efficiencies of the gratings. The power meter was placed at a long enough distance such that the other diffraction orders fell safely outside its active aperture area.} \label{fig:S8_MeasurementSetup} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS9_ActualPostsLensesAxialPlanes_650_Simulations.pdf} \caption{Simulated axial intensity distribution for focusing mirrors with different dispersions designed using the reflective $\mathrm{\alpha}$-Si nano-posts discussed in Fig. 5(a). (a) Hyper-dispersive mirror. (b) Mirror with regular dispersion. (c) Mirror with zero dispersion. (d) Mirror with a positive dispersion with an amplitude twice the regular negative dispersion.} \label{fig:S9_ActualPostsLensAxailPlanes_Sim} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS10_MeasurementResults_Axial_ExtraordinaryLenses.pdf} \caption{Measured axial intensity distributions for focusing mirrors with different dispersions designed using the reflective $\mathrm{\alpha}$-Si nano-posts discussed in Fig. 5(b). (a) Hyper-dispersive mirror. (b) Mirror with regular dispersion. (c) Mirror with zero dispersion.
(d) Mirror with a positive dispersion with an amplitude twice the regular negative dispersion.} \label{fig:S10_ActualPostsLensAxailPlanes_Meas} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS11_MeasurementResults_Axial_ExtraordinaryLenses_1D.pdf} \caption{One-dimensional cuts of the measured axial intensities plotted in Fig. \ref{fig:S10_ActualPostsLensAxailPlanes_Meas}. (a) Hyper-dispersive mirror. (b) Mirror with regular dispersion. (c) Mirror with zero dispersion. (d) Mirror with a positive dispersion with an amplitude twice the regular negative dispersion.} \label{fig:S11_ActualPostsLensAxailPlanes_Meas_1D} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS12_ActualPostsLensesAxialPlanes_850_Simulations.pdf} \caption{Extended simulation results for the regular and dispersionless mirrors discussed in Figs. 5(c-g). (a) Simulated axial plane (left) and focal plane (center) intensities for a regular metasurface focusing mirror designed using the proposed reflective dielectric meta-atoms. One-dimensional cross-sections of the focal plane intensity are plotted on the right. The focusing mirror has a diameter of 500~$\mu$m and a focal distance of 850~$\mu$m at 1520~nm. (b) Similar results for a focusing mirror with the same parameters designed to have a minimal dispersion in the bandwidth. Scale bars: 2$\lambda$.} \label{fig:S12_SupplementaryMeasurements} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS13_SupplementaryMeasurementsRotated_Displess.pdf} \caption{Complete measurement results for the dispersionless and regular mirrors discussed in Figs. 5(c-g). (a) Measured intensities for the regular metasurface mirror. The axial plane intensities are shown on the left, the measured intensities in the 850~$\mu$m plane are plotted in the middle, and one dimensional cuts of the focal plane measurements are shown on the right. (b) Same as (a) but for the dispersionless mirror design with $\sigma=300$~nm. (c) Measured intensities in the plane 850~$\mu$m away from the surface of the dispersionless mirror with $\sigma=50$~nm. One dimensional cuts of the measured intensities are shown on the right. Scale bars: 2$\lambda$.} \label{fig:S13_SupplementaryMeasurements} \end{figure*} \clearpage \begin{figure*}[htp] \centering \includegraphics[width=1\columnwidth]{./FigS14_StrehlRatios.pdf} \caption{Measured focal distances and Strehl ratios for the regular and dispersionless mirrors. (a) Measured focal distances for the regular and dispersionless ($\sigma =300$~nm) mirrors (same as Fig. 5(d)). (b) Measured focal distances for the regular and dispersionless ($\sigma =50$~nm) mirrors. (c) Strehl ratios calculated from the measured two dimensional modulation transfer functions (MTF) of the regular and dispersionless ($\sigma =300$~nm) metasurface mirrors. To find the Strehl ratio, the volume enclosed by the normalized two dimensional MTF is calculated at each wavelength. (d) The same graph as in (c), calculated and plotted for the $\sigma =50$~nm dispersionless mirror. In both cases, a clear flattening of the Strehl ratio, which is a measure of the contrast of an image formed by the mirror, is observed compared to the regular metasurface mirror.} \label{fig:S14_StrehlRatios} \end{figure*} \clearpage \begin{figure*}[ht] \centering \includegraphics[width=0.6\columnwidth]{./FigS15_aSiRefractiveIndex.pdf} \caption{Refractive index of amorphous silicon.
The refractive index values were obtained using spectroscopic ellipsometry.} \label{fig:S15_aSi_ind} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.7\columnwidth]{./FigS16_Fermat.pdf} \caption{Schematic of light deflection at a gradient phase surface. (a) A gradient phase surface between two materials with indices $\mathrm{n_1}$ and $\mathrm{n_2}$. At frequency $\omega$ a ray of light going from A to B passes the interface at a point with coordinate $x$. (b) The same structure with a ray of light at $\omega+\mathrm{d}\omega$ that goes from A to B'.} \label{fig:S16_Fermat} \end{figure*} \clearpage \begin{figure*}[ht] \centering \includegraphics[width=0.5\columnwidth]{./FigS17_DispersionCalcFig.pdf} \caption{Schematic of a generic metasurface. The metasurface is between two uniform materials with wave impedances of $\eta_1$ and $\eta_2$, and it is illuminated with a normally incident plane wave from the top side. Virtual planar boundaries $\Gamma_1$ and $\Gamma_2$ are used for calculating field integrals on each side of the metasurface.} \label{fig:S17_PlanarStruct} \end{figure*} \clearpage } \clearpage \section*{Supplementary Information} \section*{S1. Materials and Methods} \subsection*{Simulation and design.} The gratings with different dispersions discussed in Fig. 2(d) were designed using hypothetical meta-atoms that completely cover the required region of the phase-dispersion plane. We assumed that the meta-atoms provide 100 different phase steps from 0 to 2$\pi$, and that for each phase, 10 different dispersion values are possible, linearly spanning the 0 to $-150$~Rad$/\mu$m range. We assumed that all the meta-atoms have a transmission amplitude of 1. The design began by constructing the ideal phase masks at eight wavelengths equally spaced in the 1450 to 1590~nm range. This results in a vector of eight complex numbers for the ideal transmission at each point on the metasurface grating. The meta-atoms were assumed to form a two dimensional square lattice with a lattice constant of 740~nm, and one vector was generated for each lattice site. The optimum meta-atom for each site was then found by minimizing the Euclidean distance between the transmission vector of the meta-atoms and the ideal transmission vector for that site. The resulting phase mask of the grating was then found through a two-dimensional interpolation of the complex valued transmission coefficients of the chosen meta-atoms. The grating area was assumed to be illuminated uniformly, and the deflection angle of the grating was found by taking the Fourier transform of the field after passing through the phase mask, and finding the angle with maximum intensity. A similar method was used to design and simulate the focusing mirrors discussed in Figs. 2(e-i). In this case, the meta-atoms are assumed to cover dispersion values up to $-200$~Rad$/\mu$m. The meta-atoms provide 21 different dispersion values distributed uniformly in the 0 to $-200$~Rad$/\mu$m range. The focusing mirrors were designed and the corresponding phase masks were found in a similar manner to the gratings. A uniform illumination was used as the source, and the resulting field after reflection from the mirror was propagated in free space using a plane wave expansion method to find the intensity in the axial plane. The focal distances plotted in Fig. 2(e) show the distance of the maximum intensity point from the mirrors at each wavelength. The gratings and focusing mirrors discussed in Figs.
4(a), 5(a), and 5(c) are designed and simulated in exactly the same manner, except for using actual dielectric meta-atom reflection amplitudes and phases instead of the hypothetical ones. If the actual meta-atoms provided an exactly linear dispersion (i.e. if their phase was exactly linear with frequency over the operation bandwidth), one could use the required values of the phase and dispersion at each lattice site to choose the best meta-atom (knowing the coordinates of one point on a line and its slope would suffice to determine the line exactly). The phases of the actual meta-atoms, however, do not follow an exactly linear curve [Fig. 3(d)]. Therefore, to minimize the error between the required phases and the actual ones provided by the meta-atoms, we have used a minimum weighted Euclidean distance method to design the devices fabricated and tested in the manuscript: at each point on the metasurface, we calculate the required complex reflection at eight wavelengths (1450~nm to 1590~nm, at 20~nm steps). We also calculate the complex reflection provided by each nano-post at the same wavelengths. To find the best meta-atom for each position, we calculate the weighted Euclidean distance between the required reflection vector and the reflection vectors provided by the actual nano-posts. The nano-post with the minimum distance is chosen at each point. In this way, the chromatic dispersion is taken into account indirectly rather than directly. The weight function can be used to increase or decrease the importance of each part of the spectrum depending on the specific application. In this work, we have chosen an inverted Gaussian weight function ($\exp((\lambda-\lambda_0)^2/2\sigma^2)$, $\lambda_0=1520$~nm, $\sigma=300$~nm) for all the devices to slightly emphasize the importance of wavelengths farther from the center. In addition, we have also designed a dispersionless lens with $\sigma=50$~nm (the measurement results of which are provided in Figs. \ref{fig:S13_SupplementaryMeasurements} and \ref{fig:S14_StrehlRatios}) for comparison. The choice of 8 wavelengths to form and compare the reflection vectors is relatively arbitrary; however, the phases of the nano-posts versus wavelength are smooth enough that they can be well approximated by line segments in 20~nm intervals. In addition, performing the simulations at 8 wavelengths is computationally not very expensive. Therefore, 8 wavelengths are enough for a 140~nm bandwidth here, and increasing this number may not result in a considerable improvement in the performance. Reflection amplitude and phase of the meta-atoms were found using the rigorous coupled wave analysis technique~\cite{Liu2012CompPhys}. For each meta-atom size, a uniform array on a subwavelength lattice was simulated using a normally incident plane wave. The subwavelength lattice ensures the existence of only one propagating mode, which justifies the use of only one amplitude and phase for describing the optical behavior at each wavelength. In the simulations, the amorphous silicon layer was assumed to be 725~nm thick, the SiO$_{2}$ layer was 325~nm, and the aluminum layer was 100~nm thick. A 30-nm-thick Al$_{2}$O$_{3}$ layer was added between the Al and the oxide layer (this layer served as an etch stop layer to avoid exposing the aluminum layer during the etch process). Refractive indices were set as follows in the simulations: SiO$_{2}$: 1.444, Al$_{2}$O$_{3}$: 1.6217, and Al: 1.3139-$i$13.858.
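For concreteness, the selection step described above can be summarized in a short sketch (a minimal illustration assuming precomputed reflection data; the array names \texttt{r\_ideal} and \texttt{r\_atoms} are hypothetical, and this is not the actual design code):

\begin{verbatim}
import numpy as np

lams = np.linspace(1.45, 1.59, 8)   # the 8 sample wavelengths (um)
lam0, sigma = 1.52, 0.30            # center wavelength, weight width (um)
w = np.exp((lams - lam0)**2 / (2 * sigma**2))   # inverted Gaussian weights

def best_post(r_ideal, r_atoms):
    # r_ideal: desired complex reflection at one site, shape (8,)
    # r_atoms: simulated complex reflection of every post, shape (n, 8)
    d2 = np.sum(w * np.abs(r_atoms - r_ideal)**2, axis=1)
    return int(np.argmin(d2))   # index of the chosen nano-post
\end{verbatim}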
The refractive index of amorphous silicon used in the simulations is plotted in Fig. \ref{fig:S15_aSi_ind}. The FDTD simulations of the gratings (Figs. 4(e-h)) were performed using a normally incident plane-wave illumination with a Gaussian amplitude in time (and thus a Gaussian spectrum) in MEEP~\cite{Oskooi2010CompPhys}. The reflected electric field was saved in a plane placed one wavelength above the input plane at time steps of 0.05 of the temporal period. The results in Figs. 4(e-h) are obtained via Fourier transforming the fields in time and space resulting in the reflection intensities as a function of frequency and transverse wave-vector. \subsection*{Sample fabrication.} A 100-nm aluminum layer and a 30-nm Al$_{2}$O$_{3}$ layer were deposited on a silicon wafer using electron beam evaporation. This was followed by deposition of 325~nm of SiO$_{2}$ and 725~nm of amorphous silicon using the plasma enhanced chemical vapor deposition (PECVD) technique at $200\,^{\circ}{\rm C}$. A $\sim$300~nm thick layer of ZEP-520A positive electron-beam resist was spun on the sample at 5000 rpm for 1 min, and was baked at $180\,^{\circ}{\rm C}$ for 3 min. The pattern was generated using a Vistec EBPG5000+ electron beam lithography system, and was developed for 3 minutes in the ZED-N50 developer (from Zeon Chemicals). A $\sim$70-nm Al$_{2}$O$_{3}$ layer was subsequently evaporated on the sample, and the pattern was reversed with a lift-off process. The Al$_{2}$O$_{3}$ hard mask was then used to etch the amorphous silicon layer in a 3:1 mixture of $\mathrm{SF_6}$ and $\mathrm{C_4F_8}$ plasma. The mask was later removed using a 1:1 solution of ammonium hydroxide and hydrogen peroxide at $80^{\circ}$~C. \subsection*{Measurement procedure.} The measurement setup is shown in Fig. \ref{fig:S8_MeasurementSetup}(a). Light emitted from a tunable laser source (Photonetics TUNICS-Plus) was collimated using a fiber collimation package (Thorlabs F240APC-1550), passed through a 50/50 beamsplitter (Thorlabs BSW06), and illuminated the device. For grating measurements, a lens with a 50~mm focal distance was also placed before the grating at a distance of $\sim$45~mm to partially focus the beam and reduce the beam divergence after being deflected by the grating in order to decrease the measurement error (similar to Fig. \ref{fig:S8_MeasurementSetup}(b)). The light reflected from the device was redirected using the same beamsplitter, and imaged using a custom-built microscope. The microscope consists of a 50X objective (Olympus LMPlanFL N, NA=0.5), a tube lens with a 20 cm focal distance (Thorlabs AC254-200-C-ML), and an InGaAs camera (Sensors Unlimited 320HX-1.7RT). The grating deflection angle was found by calculating the center of mass of the deflected beam imaged 3~mm away from the grating surface. For efficiency measurements of the focusing mirrors, a flip mirror was used to send light towards an iris (2 mm diameter, corresponding to an approximately 40~$\mu$m iris in the object plane) and a photodetector (Thorlabs PM100D with a Thorlabs S122C head). The efficiencies were normalized to the efficiency of the regular mirror at its center wavelength by dividing the detected power through the iris by the power measured for the regular mirror at its center wavelength. The measured intensities were up-sampled using their Fourier transforms in order to achieve smooth intensity profiles in the focal and axial planes. To measure the grating efficiencies, the setup shown in Supplementary Fig.
\ref{fig:S8_MeasurementSetup}(b) was used, and the photodetector was placed $\sim$50~mm away from the grating, such that the other diffraction orders fall outside its active area. The efficiency was found by calculating the ratio of the power deflected by the grating to the power normally reflected by the aluminum reflector in areas of the sample with no grating. The beam-diameter on the grating was calculated using the setup parameters, and it was found that $\sim$84$\%$ of the power was incident on the 90~$\mu$m wide gratings. This number was used to correct for the lost power due to the larger size of the beam compared to the grating. \section*{S2. Chromatic dispersion of diffractive devices.} Chromatic dispersion of a regular diffractive grating or lens is set by its function. The grating momentum for a given order of a grating with a certain period is constant and does not change with wavelength. If we denote the size of the grating reciprocal lattice vector of interest by $k_G$, we get: \begin{equation} \sin(\theta) = \frac{k_G}{2\pi/\lambda} \Rightarrow \theta=\sin^{-1}(\frac{k_G}{2\pi/\lambda}), \label{eq:GratingDeflectionAngle}\\ \end{equation} where $\theta$ is the deflection angle at a wavelength $\lambda$ for a normally incident beam. The chromatic angular dispersion of the grating (${\mathrm{d}\theta}/{\mathrm{d}\lambda}$) is then given by: \begin{equation} \frac{\mathrm{d}\theta}{\mathrm{d}\lambda} = \frac{k_G/2\pi}{\sqrt{1-(k_G\lambda/2\pi)^2}}=\frac{\tan(\theta)}{\lambda}. \label{eq:GratingDispersion}\\ \end{equation} \noindent{and in terms of frequency:} \begin{equation} \frac{\mathrm{d}\theta}{\mathrm{d}\omega} = -\frac{\tan(\theta)}{\omega}. \label{eq:GratingDispersion_frequency}\\ \end{equation} Therefore, the dispersion of a regular grating only depends on its deflection angle and the wavelength. Similarly, the focal distance of one of the focal points of diffractive and metasurface lenses changes as ${\mathrm{d}f}/{\mathrm{d}\lambda}=-f/\lambda$ (thus ${\mathrm{d}f}/{\mathrm{d}\omega}=f/\omega$)~\cite{Born1999,Arbabi2016Optica,Faklis1995ApplOpt}. \section*{S3. Chromatic dispersion of multiwavelength diffractive devices.} As mentioned in the main text, multiwavelength diffractive devices~\cite{Arbabi2016Optica,Faklis1995ApplOpt,Aieta2015Science} do not change the dispersion of a given order in a grating or lens. They are essentially multi-order gratings or lenses, where each order has the regular (negative) diffractive chromatic dispersion. These devices are designed such that at certain distinct wavelengths of interest, one of the orders has the desired deflection angle or focal distance. If the blazing of each order at the corresponding wavelength is perfect, all of the power can be directed towards that order at that wavelength. However, at wavelengths in between the designed wavelengths, where the grating or lens is not corrected, the multiple orders have comparable powers, and show the regular diffractive dispersion. This is schematically shown in Fig. \ref{fig:S1_Multi_vs_Ach}(a). Figure \ref{fig:S1_Multi_vs_Ach}(b) compares the chromatic dispersion of a multi-wavelength diffractive lens to a typical refractive apochromatic lens. \section*{S4. Generalization of chromatic dispersion control to nonzero dispersions.} Here we present the general form of equations for dispersion-engineered metasurface diffractive devices.
We assume that the function of the device is set by a parameter $\xi (\omega)$, where we have explicitly shown its frequency dependence. For instance, $\xi$ might denote the deflection angle of a grating or the focal distance of a lens. The phase profile of a device with a desired $\xi (\omega)$ is given by \begin{equation} \phi(x,y,\xi(\omega);\omega) = \omega T(x,y,\xi(\omega)), \label{eq:arb_disp_phase}\\ \end{equation} which is the generalized form of Eq. (1). We are interested in controlling the parameter $\xi (\omega)$ and its dispersion (i.e. derivative) at a given frequency $\omega_0$. $\xi (\omega)$ can be approximated as $\xi(\omega)\approx\xi_0+\partial\xi/\partial\omega|_{\omega=\omega_0}(\omega-\omega_0)$ over a narrow bandwidth around $\omega_0$. Using this approximation, we can rewrite Eq. (\ref{eq:arb_disp_phase}) as \begin{equation} \phi(x,y;\omega) = \omega T(x,y,\xi_0+\partial\xi/\partial\omega|_{\omega=\omega_0}(\omega-\omega_0)). \label{eq:linear_disp_phase}\\ \end{equation} At $\omega_0$, this reduces to \begin{equation} \phi(x,y;\omega)|_{\omega=\omega_0} = \omega_0 T(x,y,\xi_0), \label{eq:phase_disp_phase}\\ \end{equation} and the phase dispersion at $\omega_0$ is given by \begin{equation} \frac{\mathrm{\partial}\phi(x,y;\omega)}{\mathrm{\partial}\omega}|_{\omega=\omega_0} = T(x,y,\xi_0)+\partial\xi/\partial\omega|_{\omega=\omega_0}\omega_0\frac{\partial T(x,y,\xi)}{\partial\xi}|_{\xi=\xi_0}. \label{eq:disp_disp_phase_S}\\ \end{equation} Based on Eqs. (\ref{eq:phase_disp_phase}) and (\ref{eq:disp_disp_phase_S}), the values of $\xi_0$ and $\partial\xi/\partial\omega|_{\omega=\omega_0}$ can be set independently if the phase $\phi(x,y,\omega_0)$ and its derivative $\partial\phi/\partial\omega$ can be controlled simultaneously and independently. Therefore, the device function at $\omega_0$ (determined by the value of $\xi_0$) and its dispersion (determined by $\partial\xi/\partial\omega|_{\omega=\omega_0}$) will be decoupled. The zero dispersion case is a special case of Eq. (\ref{eq:disp_disp_phase_S}) with $\partial\xi/\partial\omega|_{\omega=\omega_0}=0$. In the following we apply these results to the special cases of blazed gratings and spherical-aberration-free lenses (the results also hold for spherical-aberration-free focusing mirrors). For a 1-dimensional conventional blazed grating we have $\xi=\theta$ (the deflection angle), and $T = -x\sin(\theta)$. Therefore, the phase profile with a general dispersion is given by: \begin{equation} \phi(x;\omega) = -\omega x \sin[\theta_0+ D(\omega-\omega_0)], \label{eq:Generalized_grating_phase_linear_freq}\\ \end{equation} \noindent where $D = \partial\theta/\partial\omega|_{\omega=\omega_0} = \nu D_0$, and $D_0=-\tan(\theta_0)/\omega_0$ is the angular dispersion of a regular grating with deflection angle $\theta_0$ at the frequency $\omega_0$. We have chosen to express the generalized dispersion $D$ as a multiple of the regular dispersion $D_0$ with a real number $\nu$ to benchmark the change in dispersion. For instance, $\nu=1$ corresponds to a regular grating, $\nu=0$ represents a dispersionless grating, $\nu=-1$ denotes a grating with positive dispersion, and $\nu=3$ results in a grating three times more dispersive than a regular grating (i.e. hyper-dispersive). Various values of $\nu$ can be achieved using the method of simultaneous control of phase and dispersion of the meta-atoms, and thus we can break this fundamental relation between the deflection angle and angular dispersion.
The phase derivative necessary to achieve a certain value of $\nu$ is given by: \begin{equation} \frac{\partial\phi(x;\omega)}{\partial\omega}|_{\omega=\omega_0} = -x/c \sin(\theta_0)(1-\nu), \label{eq:Generalized_grating_disp_linear_freq}\\ \end{equation} \noindent or in terms of wavelength: \begin{equation} \frac{\partial\phi(x;\lambda)}{\partial\lambda}|_{\lambda=\lambda_0} = \frac{2\pi}{{\lambda_0}^2} x \sin(\theta_0)(1-\nu). \label{eq:Generalized_grating_disp_linear}\\ \end{equation} For a spherical-aberration-free lens we have $\xi=f$ and $T(x,y,f)=-\sqrt{x^2 +y^2 + f^2}/c$. Again we can approximate $f$ with its linear approximation $f(\omega)=f_0+D(\omega-\omega_0)$, with $D=\partial f/ \partial \omega|_{\omega=\omega_0}$ denoting the focal distance dispersion at $\omega=\omega_0$. The regular dispersion for such a lens is given by $D_0=f_0/\omega_0$. Similar to the gratings, we can write the more general form for the focal distance dispersion as $D=\nu D_0$, where $\nu$ is some real number. In this case, the required phase dispersion is given by: \begin{equation} \frac{\partial\phi(x,y;\omega)}{\partial\omega}|_{\omega=\omega_0} = -\frac{1}{c}[\sqrt{x^2 +y^2 + {f_0}^2}+\frac{\nu {f_0}^2}{\sqrt{x^2 +y^2 + {f_0}^2}}], \label{eq:Generalized_lens_disp_linear_freq}\\ \end{equation} \noindent which can also be expressed in terms of wavelength: \begin{equation} \frac{\partial\phi(x,y;\lambda)}{\partial\lambda}|_{\lambda=\lambda_0} = \frac{2\pi}{{\lambda_0}^2}[\sqrt{x^2 +y^2 + {f_0}^2}+\frac{\nu {f_0}^2}{\sqrt{x^2 +y^2 + {f_0}^2}}]. \label{eq:Generalized_lens_disp_linear}\\ \end{equation} \section*{S5. Maximum meta-atom dispersion required for controlling chromatic dispersion of gratings and lenses.} Since the maximum achievable dispersion is limited by the meta-atom design, it is important to find a relation between the maximum dispersion required for implementation of a certain metasurface device, and the device parameters (e.g. size, focal distance, deflection angle, etc.). Here we find these maxima for the cases of gratings and lenses with given desired dispersions. For the grating case, it results from Eq. (\ref{eq:Generalized_grating_disp_linear}) that the maximum required dispersion is given by \begin{equation} \mathrm{max}(\frac{\partial\phi(x;\lambda)}{\partial\lambda}|_{\lambda=\lambda_0}) = k_0 X \frac{\sin(\theta_0)}{\lambda_0}(1-\nu), \label{eq:Grating_max_dispersion}\\ \end{equation} where $X$ is the length of the grating, and $k_0=2 \pi/\lambda_0$ is the wavenumber. It is important to note that based on the value of $\nu$, the sign of the meta-atom dispersion changes. However, in order to ensure a positive group velocity for the meta-atoms, the dispersions should be negative. Thus, if $1-\nu>0$, a term should be added to make the dispersion values negative. We can always add a term of type $\phi_0=k L_0$ to the phase without changing the function of the device. This term can be used to shift the required region in the phase-dispersion plane. Therefore, it is actually the difference between the minimum and maximum of Eqs. \ref{eq:Generalized_grating_disp_linear} and \ref{eq:Generalized_lens_disp_linear} that sets the maximum required dispersion. 
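As a quick numerical illustration of Eq. \ref{eq:Grating_max_dispersion} (our own check, not design code), the required dispersion spans for the gratings considered in this work can be evaluated as follows:

\begin{verbatim}
import numpy as np

lam0 = 1.52              # center wavelength (um)
th0 = np.deg2rad(10.0)   # deflection angle
k0 = 2 * np.pi / lam0

for X in (90.0, 150.0):      # fabricated / hypothetical grating widths (um)
    for nu in (0, -1, 3):    # zero, positive, hyper dispersion
        dphi = k0 * X * np.sin(th0) / lam0 * (1 - nu)
        print(X, nu, round(dphi, 1))
# X = 90 um: spans of ~42, ~85 and ~-85 Rad/um, within the ~-100 Rad/um
# coverage of the reflective meta-atoms; X = 150 um: the positive and
# hyper cases need ~142 Rad/um, consistent with the -150 Rad/um assumed
# for the hypothetical meta-atoms.
\end{verbatim}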
Using a similar procedure, we find the maximum necessary dispersion for a spherical-aberration-free lens as \begin{equation} \phi'_{\mathrm{max}} = -\frac{k_0 f}{\lambda_0}\begin{cases} \frac{\Theta+\nu}{\sqrt{\Theta}}-1-\nu & \nu<1 \\ \frac{\Theta+\nu}{\sqrt{\Theta}}-2\sqrt{\nu} & 1<\nu<\sqrt{\Theta}\\ (1-\sqrt{\nu})^2 & \sqrt{\Theta}<\nu<\Theta\\ -(\frac{\Theta+\nu}{\sqrt{\Theta}}-1-\nu) &\Theta<\nu\\ \end{cases}, \label{eq:Lens_max_disp} \end{equation} where $f$ is the focal distance of the lens, and $\Theta=(f^2+R^2)/f^2=1/(1-\mathrm{NA}^2)$ ($R$: lens radius, NA: numerical aperture). $\log{[\phi'_{\mathrm{max}}/( -k_0 f/\lambda_0)]}$ is plotted in Fig. \ref{fig:S2_MaxDisps}(a) as a function of NA and $\nu$. In the simpler case of dispersionless lenses (i.e. $\nu=0$), Eq. (\ref{eq:Lens_max_disp}) can be further simplified to \begin{equation} \phi'_{\mathrm{max}} =-\frac{k_0 R}{\lambda_0}\frac{1-\sqrt{1-\mathrm{NA}^2}}{\mathrm{NA}}\approx -\frac{k_0 R \mathrm{NA}}{2\lambda_0} \label{eq:max_dispersion_simplified}\\ \end{equation} where $R$ is the lens radius and the approximation is valid for small values of NA. The maximum required dispersion for the dispersionless lens is normalized to $-k_0 R/\lambda_0$ and is plotted in Supplementary Fig. \ref{fig:S2_MaxDisps}(b) as a function of NA. \section*{S6. Fermat's principle and the phase dispersion relation.} Phase-only diffractive devices can be characterized by a local grating momentum (or equivalently phase gradient) resulting in a local deflection angle at each point on their surface. Here we consider the case of a 1D element with a given local phase gradient (i.e. $\phi_x=\partial\phi/\partial x$) and use Fermat's principle to connect the frequency derivative of the local deflection angle (i.e. chromatic dispersion) to the frequency derivative of $\phi_x$ (i.e. $\partial\phi_x/\partial\omega$). For simplicity, we assume that the illumination is close to normal, and that the element phase does not depend on the illumination angle (which is in general correct in local metasurfaces and diffractive devices). Considering Fig. \ref{fig:S16_Fermat}(a), we can write the phase acquired by a ray going from point A to point B and passing the interface at $x$ as: \begin{equation} \Phi(x,\omega) = \frac{\omega}{\mathrm{c}}[\mathrm{n_1}\sqrt{x^2+{y_A}^2}+\mathrm{n_2}\sqrt{{(d-x)}^2+{y_B}^2}]+\phi(x,\omega) \label{eq:FermatPhaseOmega}\\ \end{equation} \noindent{To minimize this phase we need:} \begin{equation} \frac{\partial\Phi(x,\omega)}{\partial x} = \frac{\omega}{\mathrm{c}}[\frac{\mathrm{n_1}x}{\sqrt{x^2+{y_A}^2}}-\frac{\mathrm{n_2}(d-x)}{\sqrt{{(d-x)}^2+{y_B}^2}}]+\phi_x=0. \label{eq:FermatPhaseDiffOmega}\\ \end{equation} \noindent{For this minimum to occur at point O (i.e. $x=0$)}: \begin{equation} \phi_x(\omega)=\frac{\omega}{\mathrm{c}}\frac{\mathrm{n_2}d}{r}=\frac{\mathrm{n_2}\omega}{\mathrm{c}}\sin(\theta(\omega)) \label{eq:FermatPhaseDiffOmegaAt0}\\ \end{equation} \noindent{which is a simple case of the diffraction equation, and where $r=\sqrt{d^2+{y_B}^2}$ is the OB length. At $\omega+\mathrm{d}\omega$, we get the following phase for the path from A to B' [Fig.
\ref{fig:S16_Fermat}(b)]:} \begin{equation} \begin{split} \Phi(x,\omega+\mathrm{d}\omega) = & \frac{\omega+\mathrm{d}\omega}{\mathrm{c}}[\mathrm{n_1}\sqrt{x^2+{y_A}^2}\\ & +\mathrm{n_2}\sqrt{{(d-x+\mathrm{d}x)}^2+{(y_B+\mathrm{d}y)}^2} ]+\phi(x,\omega+\mathrm{d}\omega) \end{split} \label{eq:FermatPhaseOmegadOmega} \end{equation} \noindent{where we have chosen B' such that OB and OB' have equal lengths. Minimizing the path passing through O:}\\ \begin{equation} \phi_x(\omega+\mathrm{d}\omega)=\frac{\omega+\mathrm{d}\omega}{\mathrm{c}}\frac{\mathrm{n_2}(d+dx)}{r}=\frac{\mathrm{n_2}(\omega+\mathrm{d}\omega)}{\mathrm{c}}\sin(\theta(\omega+\mathrm{d}\omega)) \label{eq:FermatPhaseDiffOmegadOmegaAt0}\\ \end{equation} \noindent{Subtracting Eq. \ref{eq:FermatPhaseDiffOmegaAt0} from Eq. \ref{eq:FermatPhaseDiffOmegadOmegaAt0}, and setting $\phi_x(\omega+\mathrm{d}\omega)-\phi_x(\omega)=\frac{\partial\phi_x}{\partial\omega}\mathrm{d}\omega$, we get:} \begin{equation} \frac{\partial\phi_x}{\partial\omega}=\frac{\mathrm{n_2}}{c}\sin(\theta(\omega))+\frac{\mathrm{d}\theta}{\mathrm{d}\omega}\frac{\mathrm{n_2}\omega}{c}\cos(\theta(\omega)). \label{eq:FermatPhaseDisp}\\ \end{equation} \noindent{One can easily recognize the similarity between Eqs. \ref{eq:FermatPhaseDisp} and \ref{eq:disp_disp_phase_S}.} \section*{S7. Relation between dispersion and quality factor of highly reflective or transmissive meta-atoms.} Here we show that the phase dispersion of a meta-atom is linearly proportional to the stored optical energy in the meta-atoms, or equivalently, to the quality factor of the resonances supported by the meta-atoms. To relate the phase dispersion of transmissive or reflective meta-atoms to the stored optical energy, we follow an approach similar to the one taken in chapter 8 of~\cite{Harrington2001} for finding the dispersion of a single-port microwave circuit. We start from the frequency-domain Maxwell's equations: \begin{align} \begin{split} &\nabla \times E = i \omega \mu H, \\ &\nabla \times H = -i \omega \epsilon E, \end{split} \label{align:Maxwell} \end{align} and take the derivative of Eqs. \ref{align:Maxwell} with respect to frequency: \begin{equation} \nabla \times \frac{\partial E}{\partial \omega} = i \mu H +i \omega \mu \frac{\partial H}{\partial \omega}, \label{eq:Deriv_1}\\ \end{equation} \begin{equation} \nabla \times \frac{\partial H}{\partial \omega} = -i \epsilon E -i \omega \epsilon \frac{\partial E}{\partial \omega}. \label{eq:Deriv_2}\\ \end{equation} Multiplying Eq. \ref{eq:Deriv_1} by $H^*$ and the conjugate of Eq. \ref{eq:Deriv_2} by $\partial E/\partial \omega$, and subtracting the two, we obtain \begin{equation} \nabla \cdot (\frac{\partial E}{\partial \omega} \times H^*) = i \mu |H|^2 +i \omega \mu \frac{\partial H}{\partial \omega} \cdot H^* -i \omega \epsilon \frac{\partial E}{\partial \omega} \cdot E^*. \label{eq:Div_1}\\ \end{equation} Similarly, multiplying Eq. \ref{eq:Deriv_2} by $E^*$ and the conjugate of Eq. \ref{eq:Deriv_1} by $\partial H/\partial \omega$, and subtracting the two we find: \begin{equation} \nabla \cdot (\frac{\partial H}{\partial \omega} \times E^*) = -i \epsilon |E|^2 -i \omega \epsilon \frac{\partial E}{\partial \omega} \cdot E^* +i \omega \mu \frac{\partial H}{\partial \omega} \cdot H^*. \label{eq:Div_2}\\ \end{equation} Subtracting Eq. \ref{eq:Div_2} from Eq. \ref{eq:Div_1} we get: \begin{equation} \nabla \cdot (\frac{\partial E}{\partial \omega} \times H^*-\frac{\partial H}{\partial \omega} \times E^*) = i \mu |H|^2 +i \epsilon |E|^2.
\label{eq:Diff}\\ \end{equation} Integrating both sides of Eq. \ref{eq:Diff}, and using the divergence theorem to convert the left side to a surface integral leads to: \begin{equation} \sideset{}{_{\partial V} }\oint(\frac{\partial E}{\partial \omega} \times H^*-\frac{\partial H}{\partial \omega} \times E^*)\cdot \mathrm{d}\mathbf{s} = i\sideset{}{_V}\int(\mu |H|^2 + \epsilon |E|^2)dv =2iU, \label{eq:Integral}\\ \end{equation} where $U$ is the total electromagnetic energy inside the volume $V$, and $\partial V$ denotes the surrounding surface of the volume. Now we consider a metasurface composed of a subwavelength periodic array of meta-atoms as shown in Fig. \ref{fig:S17_PlanarStruct}. We also consider two virtual planar boundaries $\Gamma_1$ and $\Gamma_2$ on both sides of the metasurface (shown with dashed lines in Fig. \ref{fig:S17_PlanarStruct}). The two virtual boundaries are considered far enough from the metasurface that the metasurface evanescent fields die off before reaching them. Because the metasurface is periodic with a subwavelength period and preserves polarization, we can write the transmitted and reflected fields at the virtual boundaries in terms of a single transmission coefficient $t$ and a single reflection coefficient $r$. The fields at these two boundaries are given by: \begin{align} \begin{split} E_1&=E+rE \\ H_1&=-\hat{z}\times(\frac{E}{\eta_1}-r\frac{E}{\eta_1}) \\ E_2&=tE \\ H_2&=-t\hat{z}\times\frac{E}{\eta_2} \end{split} \label{align:Fields} \end{align} where $E$ is the input field, $E_1$ and $E_2$ are the total electric fields at $\Gamma_1$ and $\Gamma_2$, respectively, and $\eta_1$ and $\eta_2$ are the wave impedances in the materials on the top and bottom of the metasurface. Inserting the fields from Eq. \ref{align:Fields} into Eq. \ref{eq:Integral}, and using the uniformity of the fields to perform the integration over a unit area, we get: \begin{equation} \frac{\partial r}{\partial \omega}r^*\frac{|E|^2}{\eta_1}+\frac{\partial t}{\partial \omega}t^*\frac{|E|^2}{\eta_2} =i\tilde{U} \label{eq:Total_Deriv}\\ \end{equation} where $\tilde{U}$ is the optical energy per unit area that is stored in the metasurface layer. For a lossless metasurface that is totally reflective (i.e. $t=0$ and $r= e^{i\phi}$), we obtain: \begin{equation} \frac{\partial \phi}{\partial \omega} = \frac{\tilde{U}}{P_\mathrm{in}}, \label{eq:Freq_disp}\\ \end{equation} where we have used $P_\mathrm{in}=|E|^2/\eta_1$ to denote the input power per unit area. Finally, the dispersion can be expressed as: \begin{equation} \frac{\partial \phi}{\partial \lambda} = \frac{\partial \phi}{\partial \omega} \frac{\partial \omega}{\partial \lambda}=-\frac{\omega}{\lambda}\frac{\tilde{U}}{P_\mathrm{in}}. \label{eq:Wavelength_disp}\\ \end{equation} We used Eq. \ref{eq:Wavelength_disp} throughout the work to calculate the dispersion from the solution of the electric and magnetic fields at a single wavelength, which reduced the simulation time by a factor of two. In addition, in steady state the input and output powers are equal ($P_\mathrm{out}=P_\mathrm{in}$), and therefore we have: \begin{equation} \frac{\partial \phi}{\partial \lambda} = -\frac{1}{\lambda}\frac{\omega\tilde{U}}{P_\mathrm{out}}=-\frac{Q}{\lambda} \label{eq:Disp_Q}\\ \end{equation} where we have assumed that almost all of the stored energy is in a single resonant mode, and $Q$ is the quality factor of that mode. Therefore, in order to achieve large dispersion values, resonant modes with high quality factors are necessary. \bibliographystyle{naturemag_noURL}
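As a quick numerical companion to these results, the piecewise maximum-dispersion formula of Eq. (\ref{eq:Lens_max_disp}) and its dispersionless limit, Eq. (\ref{eq:max_dispersion_simplified}), can be evaluated directly. The following Python sketch is purely illustrative: the parameter values are arbitrary, and we assume that $k_0$ denotes the free-space wavenumber at the center wavelength $\lambda_0$ and that $R=f\,\mathrm{NA}/\sqrt{1-\mathrm{NA}^2}$ relates the lens radius to the focal distance.

```python
import numpy as np

def phi_prime_max(nu, NA, k0, f, lam0):
    """Maximum necessary dispersion, piecewise in nu (Eq. Lens_max_disp)."""
    Theta = 1.0 / (1.0 - NA**2)
    pref = -k0 * f / lam0
    if nu < 1:
        branch = (Theta + nu) / np.sqrt(Theta) - 1 - nu
    elif nu < np.sqrt(Theta):
        branch = (Theta + nu) / np.sqrt(Theta) - 2 * np.sqrt(nu)
    elif nu < Theta:
        branch = (1 - np.sqrt(nu))**2
    else:
        branch = -((Theta + nu) / np.sqrt(Theta) - 1 - nu)
    return pref * branch

def phi_prime_max_dispersionless(NA, k0, R, lam0):
    """Dispersionless (nu = 0) closed form and its small-NA approximation."""
    exact = -(k0 * R / lam0) * (1 - np.sqrt(1 - NA**2)) / NA
    approx = -k0 * R * NA / (2 * lam0)
    return exact, approx

lam0, f, NA = 532e-9, 1e-3, 0.6
k0 = 2 * np.pi / lam0                    # assumed: free-space wavenumber at lambda_0
R = f * NA / np.sqrt(1 - NA**2)          # assumed lens radius / NA relation
print(phi_prime_max(0.0, NA, k0, f, lam0))            # nu = 0 branch
print(phi_prime_max_dispersionless(NA, k0, R, lam0))  # identical exact value, plus small-NA estimate
```

The first two printed values coincide, since for $\nu=0$ the first branch of Eq. (\ref{eq:Lens_max_disp}) reduces algebraically to Eq. (\ref{eq:max_dispersion_simplified}).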
\section*{Figure Captions} \noindent Fig.~1. Skyrmions of charge five to nine; on the left, baryon density isosurfaces (to scale), with five at the top and nine at the bottom; on the right, wireframe models of the corresponding solids (not to scale). \bigskip \end{document}
\section{Introduction} Despite the increasing power of computational resources and the availability of high quality observations, a precise description of geophysical flows across their whole range of dynamical scales remains beyond reach. The challenges appear as unlimited as the variety of dynamics and boundary conditions, with their broad range of spatial and temporal scales across the globe. To face these challenges, numerous efforts are underway to improve the quality, quantity, duration and integration of all observations, in situ and satellite. In parallel, simulation capabilities have largely improved: analyses can now be routinely carried out to more precisely characterize the variability in the global ocean, at scales of ten to hundreds of kilometers and one to hundreds of days. Yet, for these ocean models, the unresolved small scales and associated fluxes are always accounted for by simple mathematical models, {\rm i.e.} parameterizations. Although the development of more efficient sub-grid representations remains a very active research area, the possible separation between relatively low-frequency, large-scale patterns and transient, small-scale fluctuations strongly invites the consideration of stochastic representations of the geophysical dynamics \citep[e.g.][]{Hasselmann76,allen2002towards,penland2003stochastic,berner2011model,Franzke15}. As derived, such developments are meant to better describe the system's variability, especially including a mean drift, called the ``bolus'' velocity \citep{Gent90} or skew-diffusion \citep{Nakamura01,Vallis} in oceanography, and noise-induced velocity in climate sciences. In that context, several different strategies have been proposed \citep{Franzke15}. Among them, techniques motivated by physics have been devised. Those schemes aim to overcome a poor representation of the small-scale forcing and of its interactions with the large-scale processes. Two such schemes have been developed at ECMWF. The first one, the stochastic perturbation of the physical tendencies -- SPPT -- \citep{Buizza99}, implements a multiplicative random perturbation of parameterized physical tendencies. The random variables involved are correlated in space and time, and their characteristics are set from fine-grid simulations. The second one, the stochastic kinetic-energy backscatter -- SKEB -- \citep{Shutts05}, introduces a perturbation of the stream function and potential temperature. This scheme is based on earlier works on the modelling of energy backscattering through the introduction of random variables \citep{Mason92}. Numerous works showed a beneficial impact of the injected randomness on the mean and variability of weather and climate forecasts (see \citep{Berner15} and references therein) or in oceanography \citep{Brankart13,mana2014toward}. However, the amplitude of the perturbations to apply is difficult to specify. The non-conservative and variance-creating nature of those schemes is also problematic in that respect. Too large an amplitude, while significantly increasing the ensemble spread, may lead to unstable schemes for simulations that go beyond short-term forecast applications. A balance between the large-scale sub-grid diffusive tensor and the noise amplitude must thus be found to stabilize the system.
Also based on a separation of the state variables between slow and fast components, a mathematical framework -- referred to as MTV algorithms -- has been proposed to derive stochastic reduced-order dynamical systems for weather and climate modelling \citep{Franzke05,Franzke06,Majda99,Majda01,Majda03}. Considering a linear stochastic equation to describe the fast modes, the derivations have been rigorously studied \citep{Gottwald13,Melbourne11,Pavliotis08}. As demonstrated, the continuous fast dynamics converges in continuous time towards a Stratonovich noise, leading to a diffusion term when expressed in the corresponding Ito stochastic integral form. Likewise, stochastic superparametrization assumes a scale separation \citep{Grooms13,Grooms14}. The point approximation and Reynolds decompositions replace homogenization techniques. As in MTV methods, the small-scale evolution law is linearized and corrected with the introduction of noise and damping terms. The second-order moments of the solution are then known analytically and can feed the sub-grid tensor expressions of the mean deterministic large-scale evolution law. For such developments, the direct use of the Reynolds decomposition implicitly assumes that the small-scale components are differentiable. This theoretically prevents the use of Langevin-type equations for the small-scale evolution. Furthermore, in such a derivation, each scalar evolution law involves a different sub-grid tensor. Similarly to the definition of eddy viscosity and diffusivity models for Large-Eddy Simulation, the noise expressions of most stochastic fluid dynamics models are hardly inferred from physics. So, instantaneous diffusion and randomness may not be consistently related, even though some careful parametrizations of stationary energy fluxes couple them \citep{Grooms13,sapsis2013blending,Grooms14,sapsis2013statistically}. To overcome these difficulties, we propose to pursue a different strategy. As previously initiated \citep{Memin14}, the large-scale dynamics is not prescribed from a deterministic representation of the system's dynamics. Instead, a random variable, referred to as location uncertainty, is added to the Lagrangian expression of the flow. The resulting Eulerian expression then provides stochastic extensions of the material derivative and of the Reynolds transport theorem. An explicit expression of a noise-induced drift is further obtained. As also derived, a sub-grid stress tensor, describing the small-scale action on the large scales, does not resort to the usual Boussinesq eddy viscosity assumption, and, further, consistently appears throughout all the conservation equations of the system. Moreover, the advection by the unresolved velocity acts as a random forcing. As such, this framework provides a direct way to link the resulting material transport and the underlying dynamics. The well-posedness of these equations has been studied by \cite{mikulevicius2004stochastic} and \cite{flandoli2011interaction}. Recently, \cite{Holm2015} derived similar evolution laws from the inviscid and adiabatic framework of Lagrangian mechanics. Compared to models under location uncertainty, the stochastic transport of scalars is identical. However, the momentum evolution of \cite{Holm2015} involves an additional term which imposes helicity conservation but may increase the kinetic energy.
Starting with the description of the transport under location uncertainty (section 2), developments are then carried out to explore this stochastic framework for different classical geophysical dynamical models (section 3). \section{Transport under location uncertainty} \label{Transport under location uncertainty} \subsection{A 2-scale random advection-diffusion description} \label{Informal description} As often stated, ocean and atmospheric dynamics can be assumed to be split into two contributions with very distinct correlation times. This assumption can especially hold for the top layer of the ocean. For example, the larger-scale ocean geostrophic component generally varies on much slower time scales than motions at smaller spatial scales. From an observational perspective, current-generation satellite altimeter instruments are capable of resolving only the largest eddy scales, and the measurements can depend sensitively on the local kinetic energy spectrum of the unresolved flow \citep{poje2010resolution,keating2011diagnosing}. Satellite observations of the upper-ocean velocity field at higher resolution can also be obtained \citep[e.g.][]{chapron2005direct} but are certainly too sparse and possibly noisy. Accordingly, without loss of generality, observations of an instantaneous Eulerian velocity field are likely coarse-grained in time, and can be interpreted under a 2-scale framework. As such, the instantaneous Eulerian velocity is decomposed into a well-resolved smooth component, denoted $\mbs w$, continuous in time, and a rough small-scale one, rapidly decorrelating in time. This badly-resolved contribution, expressed as $\boldsymbol{\sigma} \dot{\mbs B}$, is then assumed Gaussian, correlated in space, but uncorrelated in time. This contribution can be inhomogeneous and anisotropic in space. Due to the irregularity of the flow, the transport of a conserved quantity, ${\varTheta}$, by the whole velocity, defined as \begin{eqnarray} {\varTheta} (\boldsymbol{X}_{t + \Delta t},t + \Delta t) &=& {\varTheta} (\boldsymbol{X}_{t},t), \end{eqnarray} corresponds to a random mapping. In this setup, the large-scale velocity possibly depends on the past history of the small-scale component. The latter being white in time, the two components are uncorrelated. Hence, the above conservation shall lead to a classical advection-diffusion evolution, with the introduction of an inhomogeneous and anisotropic diffusion coefficient matrix, $\mbs a$, solely defined by the one-point one-time covariance of the unresolved displacement per unit of time: \begin{eqnarray} \mbs a = \frac{ \mathbb{E} \left \{ \boldsymbol{\sigma} \dif \boldsymbol{B}_t \left( \boldsymbol{\sigma} \dif \boldsymbol{B}_t \right)^{\scriptscriptstyle T} \right \} }{{\mathrm{d}} t} . \label{balance} \end{eqnarray} The inhomogeneous structure of the small-scale motion variance shall create inhomogeneous spreading rates. More agitated fluid parcels spread faster than those over quiescent regions. Overall, the latter can be seen as ``attracting'' the large-scale gradients. This effect leads to a drift correction, anti-correlated with the variance gradient, or, from a multi-dimensional point of view, anti-correlated with the divergence of the covariance matrix.
Accordingly, the random advection under a 2-scale description can be expected to be expressed as: \begin{eqnarray} \partial_t {\varTheta} + \underbrace{ {\boldsymbol{w}}^\star\bcdot \boldsymbol{\nabla} {\varTheta} }_{\text{Corrected advection}} = \underbrace{ \boldsymbol{\nabla}\bcdot \left ( \tfrac{1}{2} \mbs a \boldsymbol{\nabla} {\varTheta} \right ) }_{\text{Diffusion}} - \underbrace{ \boldsymbol{\sigma} \dot{\mbs B} \bcdot \boldsymbol{\nabla} {\varTheta} }_{\text{Random forcing}}, \label{heuristic transport} \end{eqnarray} with a modified velocity given by \begin{eqnarray} \boldsymbol{w}^\star = \boldsymbol{w} - \tfrac{1}{2} ( \boldsymbol{\nabla}\bcdot \mbs a)^{\scriptscriptstyle T} + \boldsymbol{\sigma} (\bnabla\! \bcdot\! \boldsymbol{\sigma})^{\scriptscriptstyle T} . \end{eqnarray} We note that the conserved quantity is diffused by the small-scale random velocity. The random forcing expresses the advection by the unresolved velocity $\boldsymbol{\sigma} \dot{\mbs B}=\boldsymbol{\sigma}{\dif \boldsymbol{B}_t}/{{\mathrm{d}} t}$, and continuously backscatters random energy to the system. Because of this white-noise forcing term, the Eulerian conservation equation \eqref{heuristic transport} (which will be formally expressed in the following sections) intrinsically concerns a random non-differentiable tracer. Finally, the conserved quantity is also advected by an ``effective'' velocity, $\boldsymbol{w}^\star$, taking into account the possible spatial variation of the small-scale velocity variance, as well as the possible divergence of this velocity component. Taking the unresolved velocity and this effective drift, $\boldsymbol{w}^\star$, to be divergence-free, we shall see that this 2-scale development establishes an exact balance between the amount of diffusion and the random forcing. Subsequently, essential properties related to energy conservation and mean/variance tracer evolution directly result from this balance. \subsection{Uncertainty formalism} \label{The used stochastic model} In a Lagrangian stochastic form, the infinitesimal displacement associated with a particle trajectory $\boldsymbol{X}_t$ is: \begin{eqnarray} \label{particle_dX} {\mathrm{d}}\boldsymbol{X}_t &=& \boldsymbol{w}(\boldsymbol{X}_t,t) {\mathrm{d}} t+ \boldsymbol{\sigma}(\boldsymbol{X}_t,t) {\mathrm{d}}\boldsymbol{B}_t. \end{eqnarray} Formally, this is defined over the fluid domain, $\Omega$, from a $d$-dimensional Brownian function $\boldsymbol{B}_t$. Such a function can be interpreted as a white noise process in space and a Brownian process in time\footnote{Formally it is a cylindrical $I_d$-Wiener process (see \cite{DaPrato} and \cite{Prevot07} for more information on infinite-dimensional Wiener processes and cylindrical $I_d$-Wiener processes).}. The time derivative of the Brownian function, in a distribution sense, is denoted $\boldsymbol{\sigma} \dot{\mbs B} =\boldsymbol{\sigma} {\dif \boldsymbol{B}_t}/{{\mathrm{d}} t}$, and is a white noise distribution. The spatial correlations of the flow uncertainty are specified through the diffusion operator $\boldsymbol{\sigma}(.,t)$, defined for any vectorial function, $\mbs f $, through the matrix kernel $\breve{\boldsymbol{\sigma}} (.,.,t)$: \begin{equation} \boldsymbol{\sigma}(\boldsymbol{x},t)\mbs f \stackrel{\scriptscriptstyle\triangle}{=} \int_\Omega \breve{\boldsymbol{\sigma}}(\boldsymbol{x},\boldsymbol{z},t) \mbs f (\boldsymbol{z},t) {\mathrm{d}}\boldsymbol{z}.
\end{equation} This quantity is assumed to have a finite norm\footnote{More precisely, the operator $\boldsymbol{\sigma}$ is assumed to be Hilbert-Schmidt.} and a null boundary condition on the domain boundary\footnote{Note that periodic boundary conditions can also be envisaged.}. The resulting $d$-dimensional random field, $\boldsymbol{\sigma}(\boldsymbol{x},t) \dif \boldsymbol{B}_t$, is a centered vectorial Gaussian function, correlated in space and uncorrelated in time, with covariance tensor: \begin{eqnarray} \mbs {Cov} (\boldsymbol{x},\boldsymbol{y},t,t') &\stackrel{\triangle}{=}& \mathbb{E} \left \{ \left (\boldsymbol{\sigma}(\boldsymbol{x},t) {\mathrm{d}}\boldsymbol{B}_t \right ) \left ( \boldsymbol{\sigma}(\boldsymbol{y},t') {\mathrm{d}}\boldsymbol{B}_{t'} \right ) ^{\scriptscriptstyle T} \right \} ,\\ &=& \int_\Omega \breve \boldsymbol{\sigma}(\boldsymbol{x},\boldsymbol{z},t) \breve\boldsymbol{\sigma} ^{\scriptscriptstyle T} (\boldsymbol{y},\boldsymbol{z},t){\mathrm{d}}\boldsymbol{z} \ \delta(t-t'){\mathrm{d}} t. \end{eqnarray} For the sake of thoroughness, the uncertainty random field has a (mean) bounded norm\footnote{ This norm is finite since $\boldsymbol{\sigma}$ is Hilbert-Schmidt, ensuring the boundedness of the trace of the operator $Q$ -- defined by the kernel $(\boldsymbol{x},\boldsymbol{y}) \mapsto \boldsymbol{\sigma}(\boldsymbol{x},t) \boldsymbol{\sigma} ^{\scriptscriptstyle T} (\boldsymbol{y},t)$ --, and $ \forall t \leqslant T < \infty, \ \mathbb{E} \|\int_0^{t}\boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_{t'}\|^2_{L^2(\Omega)} = \int_0^{t}\int_\Omega \| \breve \boldsymbol{\sigma} (\bullet, \boldsymbol{z}) \|^2_{L^2(\Omega)}{\mathrm{d}} \boldsymbol{z} {\mathrm{d}} t' = \int_0^{t} \| \boldsymbol{\sigma} \|^2_{HS,L^2(\Omega)} {\mathrm{d}} t' = \int_0^{t} tr(\mbs Q) {\mathrm{d}} t' < \infty$, where the index HS refers to the Hilbert-Schmidt norm.}: $\mathbb{E} \| \int_0^t \boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_{t'}\|^2_{L^2(\Omega)} < \infty$ for any bounded time $t\leqslant T<\infty$. Hereafter, the diagonal of the covariance tensor, $\mbs a$, will be referred to as the variance tensor: \begin{eqnarray*} \mbs a(\boldsymbol{x},t)\delta(t-t'){\mathrm{d}} t = \mbs {Cov}(\boldsymbol{x},\boldsymbol{x},t,t') . \end{eqnarray*} By definition, it is a symmetric positive definite matrix at all spatial points, $\boldsymbol{x}$. This quantity, also denoted $\boldsymbol{\sigma} \boldsymbol{\sigma} ^{\scriptscriptstyle T}$, corresponds to the time derivative of the so-called quadratic variation process: \begin{eqnarray*} \boldsymbol{\sigma} \boldsymbol{\sigma} ^{\scriptscriptstyle T} \stackrel{\triangle}{=} \mbs a = \partial_t \left < \int_0^t \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_{s}, \left( \int_0^t \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_{r} \right) ^{\scriptscriptstyle T} \right >, \end{eqnarray*} where $\left<f,g\right>$ stands for the quadratic cross-variation process of $f$ and $g$ (see Appendix \ref{QuadVar}). Given this strictly defined flow, the corresponding material derivative expression of a given quantity can be introduced.
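To make these definitions concrete, the variance tensor can be estimated numerically. The following one-dimensional Python sketch is purely illustrative (the periodic grid, the homogeneous Gaussian kernel and all parameter values are our own choices, not taken from the text): sampling $\boldsymbol{\sigma}\,{\mathrm{d}}\boldsymbol{B}_t$ as a kernel-smoothed space-time white noise, the empirical one-point variance per unit time recovers $a(x)=\int_\Omega \breve{\sigma}^{2}(x,z)\,{\mathrm{d}} z$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, dt, m = 256, 2 * np.pi, 1e-3, 10_000   # grid size, domain, time step, sample size
x = np.linspace(0, L, n, endpoint=False)
dz = L / n

# Homogeneous Gaussian kernel sigma_breve(x, z) on the periodic domain (illustrative choice)
ell, amp = 0.3, 1.0
dist = (x[:, None] - x[None, :] + L / 2) % L - L / 2   # periodic distance x - z
K = amp * np.exp(-0.5 * dist**2 / ell**2)

# m samples of (sigma dB_t)(x) = int K(x, z) dB_t(z) dz, with dB_t white in space:
# on the grid, dB_t(z_j) ~ N(0, dt/dz) independently in each cell
dB = rng.standard_normal((m, n)) * np.sqrt(dt / dz)
noise = (dB @ K.T) * dz

a_empirical = noise.var(axis=0) / dt        # one-point variance per unit time
a_theory = (K**2).sum(axis=1) * dz          # a(x) = int K(x, z)^2 dz
print(np.abs(a_empirical / a_theory - 1).max())   # O(1/sqrt(m)) relative error
```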
\subsection{Material derivative} To derive the expression of the material derivative $ \rm D_t {\varTheta} \stackrel{\triangle}{=} \left ( {\mathrm{d}} \left ( {\varTheta} \left( \boldsymbol{X}_t,t \right ) \right ) \right )_{|_{\boldsymbol{X}_t = x}} $, also known as the Ito-Wentzell derivative or generalized Ito derivative in a stochastic flow context \citep[theorem 3.2.2]{Kunita}, let us introduce an operator, hereafter referred to as the stochastic transport operator: \begin{eqnarray} { \mathbb D}_t {\varTheta} \ &\stackrel{\triangle}{=}& \underbrace{ {\mathrm{d}}_t {\varTheta} }_{ \substack{ \stackrel{\triangle}{=} \ {\varTheta}(\boldsymbol{x},t+{\mathrm{d}} t) - {\varTheta}(\boldsymbol{x},t) \\ \text{Time increment} } } + \underbrace{ \left ({\boldsymbol{w}}^\star{\mathrm{d}} t + \boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_t \right)\bcdot \boldsymbol{\nabla} {\varTheta} }_{\text{Advection}} - \underbrace{ \boldsymbol{\nabla}\bcdot \left ( \tfrac{1}{2} \mbs a \boldsymbol{\nabla} {\varTheta} \right ) }_{\text{Diffusion}} {\mathrm{d}} t . \label{Mder} \end{eqnarray} This operator corresponds to a strict formulation of \eqref{heuristic transport}. More specifically, it involves a time increment term ${\mathrm{d}}_t {\varTheta}$ instead of a partial time derivative, as ${\varTheta}$ is non-differentiable. Contrary to the material derivative, the transport operator has an explicit expression (equation \eqref{Mder}). However, the material derivative is explicitly related to the transport operator (see proof in Appendix \ref{link Material-Der}): \begin{eqnarray} \label{link DD and material deriv} \left\{ \begin{array}{r c l} { \mathbb D}_t {\varTheta} &= & f_1 {\mathrm{d}} t + \mbs h_1 ^{\scriptscriptstyle T} {\mathrm{d}}\boldsymbol{B}_t, \label{Dt-eq1}\\ \rm D_t {\varTheta} &=& f_2 {\mathrm{d}} t + \mbs h_2 ^{\scriptscriptstyle T} {\mathrm{d}}\boldsymbol{B}_t, \end{array} \right. \Longleftrightarrow \left\{ \begin{array}{r c l} f_2 &=& f_1 + \mathrm{tr}\bigl(\left ( \boldsymbol{\sigma} ^{\scriptscriptstyle T} \bnabla \right ) \mbs h_1 ^{\scriptscriptstyle T} \bigr),\\ \mbs h_1 &=& \mbs h_2 . \end{array} \right. \end{eqnarray} Note that the material derivative, $\rm D_t$, has a clear physical meaning but no explicit expression, whereas the explicit expression of the transport operator offers elegant means to derive stochastic Eulerian evolution laws. Most often, both operators coincide and can be used interchangeably. As a matter of fact, in most cases, we deal with a null Brownian term $\mbs h_1$ in \eqref{link DD and material deriv}. This corresponds, for instance, either to the transport of a scalar, $\mathbb D_t {\varTheta}=0$, or to the conservation of an extensive property $\left( \int_{{\cal V} (t)} q\right)$ when the unresolved velocity component is solenoidal ($\bnabla\! \bcdot\! \boldsymbol{\sigma} \dif \boldsymbol{B}_t= 0$), which leads, as we will see, to $\rm D_t q = -\bnabla\! \bcdot\! \boldsymbol{w}^* q {\mathrm{d}} t$ (\eqref{th_transport}). In such a case, it is straightforward to infer from the system \eqref{link DD and material deriv} that ${\mathbb D}_t$ and $\rm D_t$ coincide. For this precise case, those operators lead to \begin{eqnarray} \mathbb D_t {\varTheta} (\boldsymbol{X}_t,t) = \rm D_t {\varTheta} (\boldsymbol{X}_t,t) = {\mathrm{d}}\left( {\varTheta} (\boldsymbol{X}_t,t) \right) = f_1 (\boldsymbol{X}_t,t){\mathrm{d}} t . \end{eqnarray} Going back to the Eulerian space, the classical calculus rules apply to the operator $\mathbb D_t$, e.g.
the product rule \begin{equation} \mathbb D_t(fg) (\boldsymbol{x},t) = \left( \mathbb D_t f \ g+ f \ \mathbb D_t g \right)(\boldsymbol{x},t), \label{product-rule} \end{equation} and the chain rule: \begin{equation} \mathbb D_t\bigl( \varphi\circ f \bigr) (\boldsymbol{x},t)= \mathbb D_t f (\boldsymbol{x},t) (\varphi'\circ f)(\boldsymbol{x},t). \label{chain-rule} \end{equation} Given these properties, an expression for the stochastic advection of a scalar quantity can be derived. \subsection{Scalar advection} \label{Passive scalar advection} The advection of a scalar ${\varTheta}$ thus reads: \begin{eqnarray} \label{eq Scalar advection} \mathbb D_t {\varTheta} = \rm D_t {\varTheta} = 0. \end{eqnarray} To analyze this stochastic transport equation, let us first consider that the effective drift and the unresolved velocity are both divergence-free. As shown later, these conditions ensure an isochoric stochastic flow (see \eqref{eq_incomp_sto}). With these conditions, the stochastic transport equation exhibits remarkable conservation properties. \subsubsection{Energy conservation} From (\ref{Mder}-\ref{eq Scalar advection}) and the Ito lemma, the scalar energy evolution is given by: \begin{align} {\mathrm{d}} \int_\Omega \tfrac{1}{2} {\varTheta}^2 & =\int_\Omega \left( {\varTheta} {\mathrm{d}}_t {\varTheta} + \tfrac{1}{2} {\mathrm{d}}_t \langle {\varTheta},{\varTheta}\rangle \right),\\ &= -\int_\Omega \tfrac{1}{2} \left( \boldsymbol{w}^*{\mathrm{d}} t + \boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_t \right) \bcdot \boldsymbol{\nabla} \left({\varTheta}^2\right) + \underbrace{ \int_\Omega {\varTheta} \boldsymbol{\nabla} \bcdot \left( \tfrac{1}{2} \mbs a \boldsymbol{\nabla} {\varTheta} \right) {\mathrm{d}} t }_{\text{Loss by diffusion}} + \underbrace{ \int_\Omega \tfrac{1}{2} \left( \boldsymbol{\nabla} {\varTheta}\right) ^{\scriptscriptstyle T} \mbs a \boldsymbol{\nabla} {\varTheta} {\mathrm{d}} t }_{\text{Energy intake from noise}} . \label{decomposition of the energy} \end{align} For suitable boundary conditions, the last two terms cancel out after integration by parts. The diffused energy is thus exactly compensated by the energy brought in by the noise. With divergence-free conditions for $\boldsymbol{w}^\star$ and $\boldsymbol{\sigma}$, another integration by parts gives \begin{eqnarray} {\mathrm{d}} \int_\Omega \tfrac{1}{2} {\varTheta}^2 &=& \int_\Omega \tfrac{1}{2} \boldsymbol{\nabla} \bcdot \left( \boldsymbol{w}^*{\mathrm{d}} t + \boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_t \right) {\varTheta}^2 =0. \label{E-cons} \end{eqnarray} The energy is thus conserved for each realization of the random scalar. The expectation of the energy -- the energy (ensemble) mean -- is therefore also conserved. Moreover, from the decomposition ${\varTheta} = \mathbb{E}({\varTheta}) + \bigl({\varTheta} -\mathbb{E}({\varTheta})\bigr)$ into the mean and the random anomaly component, we obtain a partition of this constant energy mean: \begin{equation} 0=\frac{{\mathrm{d}}}{{\mathrm{d}} t} \mathbb{E} \|{\varTheta}\|^2_{{\cal L}^2(\Omega)}= \frac{{\mathrm{d}}}{{\mathrm{d}} t} \|\mathbb{E}({\varTheta})\|^2_{{\cal L}^2(\Omega)} + \frac{{\mathrm{d}}}{{\mathrm{d}} t} \int_\Omega Var({\varTheta}) . \label{Eq-Exp-var} \end{equation} A decrease of the mean energy -- the energy of the (ensemble) mean -- is always associated with an (ensemble) variance increase. Similar energy mean budgets have recently been discussed by several authors. \cite{majda2015statistical} refers to this energy mean as the statistical energy.
The author derives the evolution law of this energy by adding the evolution equations of the mean energy and of the integrated variance, whereas our energy budget is obtained by evaluating the mean of the evolution law of the total energy, $ \|{\varTheta}\|^2_{{\cal L}^2(\Omega)}$. However, \cite{majda2015statistical} does not specify the random forcing. This is why the latter does not {\em a priori} balance the turbulent diffusion. \cite{Farrell14} also studied the energy mean of stochastic fluid dynamics systems, especially under quasi-linear approximations and with an additive Gaussian forcing. By the chain rule, all the tracer moments are also conserved: \begin{equation} {\mathbb D}_t {\varTheta}^p = p \ {\varTheta}^{p-1} {\mathbb D}_t {\varTheta} =0. \end{equation} Yet, the energy of the statistical moments is in general not conserved, as detailed in the following section. \subsubsection{Mean and variance fields of a passive scalar} Consider now that the expectation corresponds to a conditional expectation given the effective drift. This applies to passive scalar transport, for which the drift does not depend on the tracer. Terms in $\dif \boldsymbol{B}_t$ have zero mean, and the mean passive scalar evolution can be immediately derived by taking the conditional expectation of the stochastic transport: \begin{equation} \label{mean tracer} \partial_t \mathbb{E} ({\varTheta}) + \underbrace{ \boldsymbol{w}^\star \bcdot \boldsymbol{\nabla} \mathbb{E} ({\varTheta}) }_{\text{Advection}} = \underbrace{ \boldsymbol{\nabla}\bcdot \left(\tfrac{1}{2}\mbs a \boldsymbol{\nabla} \mathbb{E}({\varTheta})\right) }_{\text{Diffusion}} . \end{equation} Since $\mbs w^*$ is divergence-free, it has no influence on the energy budget. The mean-field energy decreases with time due to diffusion. As for the variance, its evolution equation, derived in Appendix \ref{variance tracer proof}, reads: \begin{equation} \label{variance tracer} \partial_t Var ({\varTheta}) + \underbrace{ \boldsymbol{w}^\star \bcdot \boldsymbol{\nabla} Var ({\varTheta}) }_{\text{Advection}} = \underbrace{ \boldsymbol{\nabla}\bcdot \left(\tfrac{1}{2}\mbs a \boldsymbol{\nabla} Var({\varTheta})\right) }_{\text{Diffusion}} + \underbrace{ \left(\boldsymbol{\nabla}\mathbb{E}({\varTheta}) \right )^{\scriptscriptstyle T} \mbs a \boldsymbol{\nabla} \mathbb{E}({\varTheta}) }_{\text{Variance intake}} . \end{equation} This is also an advection-diffusion equation, with an additional source term. Integrating this equation over the whole domain, with the divergence-free condition, and considering the divergence form of the first term on the right-hand side, we obtain \begin{equation} \frac{{\mathrm{d}}}{{\mathrm{d}} t}\int_\Omega Var ({\varTheta}) = \int_\Omega \left(\boldsymbol{\nabla}\mathbb{E}({\varTheta}) \right )^{\scriptscriptstyle T} \mbs a \boldsymbol{\nabla} \mathbb{E}({\varTheta}) \geqslant 0. \end{equation} It shows that the stochastic transport of a passive scalar creates variance. The dissipation that occurs in the mean-field energy equation is exactly compensated by a variance increase. This mechanism is very relevant for ensemble-based simulations. The uncertainty modeling directly incorporates a large-scale dissipating sub-grid tensor, and further encompasses a variance increase mechanism to balance the total energy dissipation. Such a mechanism is absent in ensemble-based data assimilation developments \citep{berner2011model,gottwald2013role,Snyder15}.
An artificial inflation of the ensemble variance is consequently required to avoid filter divergence \citep{Anderson99}. \subsubsection{Active tracers} For the more general case of an active tracer, the velocity depends on the tracer distribution, and additional energy transfers occur between the mean and the random tracer components \citep{sapsis2013attractor,sapsis2013blending,sapsis2013statistically,ueckermann2013numerical,majda2015statistical}. Though a complete analytical description is involved, these energy transfers are mainly due to the nonlinearity of the flow dynamics, and are hence more familiar. The models under location uncertainty involve both types of interactions: the ``usual'' nonlinear interactions and the random energy transfers previously described. As such, these two energy flux analyses are complementary. In deterministic turbulent dynamics with random initial conditions, energy is drained from the mean tracer toward several modes ({\em e.g.} Fourier modes) of the tracer random component, and is backscattered from other modes. The energy fluxes toward (from) random modes increase (decrease) the variance. In the case of the deterministic Navier-Stokes equations, \cite{sapsis2013attractor} analytically expressed the integrated variance. The molecular or turbulent diffusion decreases the variance, whereas the mean velocity may increase or decrease the random energy, by triad interactions. The modes receiving energy become unstable, whereas those giving energy are over-stabilized \citep{sapsis2013blending}. In ensemble data assimilation of large-scale geophysical flows, the solution is defined by a manifold sampled by a small ensemble of realizations. Those stabilizations and destabilizations are the reason for the alignment of ensembles along unstable directions \citep{trevisan2004assimilation,ng2011role}. It can lead to filter divergence \citep{gottwald2013role,bocquet2016degenerate}. In the absence of any mode truncation, the nonlinear interactions redistribute the energy between those modes. Otherwise, the missing energy fluxes can be parametrized with additional random terms \citep{sapsis2013blending,sapsis2013statistically}. To further describe the energy exchanges involved in the dynamics under location uncertainty of active tracers, we introduce the decomposition ${\varTheta}=\widetilde {\varTheta} + {\varTheta}'$ in terms of a slow component $\widetilde {\varTheta}$ and a highly oscillating component ${\varTheta}'$. The first one is time-differentiable, whereas the second is only continuous in time. Both components are random. This decomposition, the so-called semi-martingale decomposition, is unique \citep{Kunita}. For each component, the following coupled system of transport equations holds: \begin{align} &\partial_t \widetilde {\varTheta} + \boldsymbol{w}^\star \bcdot \boldsymbol{\nabla} {\varTheta} = \boldsymbol{\nabla}\bcdot \left(\tfrac{1}{2}\mbs a \boldsymbol{\nabla} {\varTheta}\right), \label{eq-dq-tilde}\\ &{\mathrm{d}}_t {\varTheta}' + \boldsymbol{\sigma}{\mathrm{d}}\boldsymbol{B}_t \bcdot \boldsymbol{\nabla} {\varTheta} =0. \label{eq-dq'} \end{align} At the initial time, the first component is deterministic (given the initial conditions) and the second one is zero.
The large-scale component becomes random through the oscillating component, which is characterized by a gradually increasing energy over time: \begin{equation} \mathbb{E} \|{\varTheta}'\|^2_{L^2(\Omega)} = \mathbb{E} \int_\Omega \langle {\varTheta}',{\varTheta}'\rangle = \mathbb{E} \int_0^t \int_\Omega \left(\boldsymbol{\nabla} {\varTheta} \right )^{\scriptscriptstyle T} \mbs a \boldsymbol{\nabla} {\varTheta} \; {\mathrm{d}} t \geqslant 0 . \end{equation} Note that the expectation is taken with respect to the law of the Brownian path. The energy mean of the non-differentiable component ${\varTheta}'$ is the mean of the energy intake provided by the noise \eqref{decomposition of the energy}. The same amount of mean energy is removed from the system by the diffusion \eqref{decomposition of the energy}. Once diffused, this energy is fed back to the small-scale tracer ${\varTheta}'$, the white noise velocity acting here as an energy bridge. Such an energy redistribution is a main issue in sub-grid modeling. Indeed, as explained above, large-scale flow simulations often fail to capture the energy fluxes between the mean and the random components, but also the energy redistribution from the unstable modes to the stable modes. Note that, even though the two components are orthogonal as functions of time (in a precise sense), they are not, in general, as functions of space: $\int_{\Omega}\widetilde{{\varTheta}}{\varTheta}'\neq 0$. In particular, it can be shown that those two components are indeed anti-correlated when the tracer is passive. \subsubsection{The homogeneous case and the Kraichnan model} A divergence-free isotropic random field for the small-scale velocity component corresponds to the Kraichnan model \citep{Kraichnan68,kraichnan1994anomalous,Gawedzki95,Majda-Kramer}. The variance tensor, $\mbs a$, becomes a constant diagonal matrix $\frac 1 d tr(\mbs a) \mbs{\mathbb{I}}_d$, where $d$ stands for the dimension of the spatial domain $\Omega$. The tracer evolution now involves a Laplacian diffusion \begin{align} {\mathrm{d}}_t {\varTheta} + \bigl(\boldsymbol{w} {\mathrm{d}} t + \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t\bigr ) \bcdot \boldsymbol{\nabla} {\varTheta} = \frac{tr(\mbs a)}{2d} \Delta {\varTheta} {\mathrm{d}} t. \end{align} Additionally, the original Kraichnan model considers a small molecular diffusion, $\nu$, and an external Gaussian forcing, $ f {\mathrm{d}} B_t'$, defined as a homogeneous random field uncorrelated in time and independent of the velocity component $\boldsymbol{\sigma} \dot{\mbs B}$ \citep{Gawedzki95}. In our framework, the Kraichnan model, which does not involve any large-scale drift term, reads: \begin{eqnarray} {\mathrm{d}}_t {\varTheta} + \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t \bcdot \boldsymbol{\nabla} {\varTheta} = \left( \nu + \frac{tr(\mbs a)}{2d} \right) \Delta {\varTheta} {\mathrm{d}} t + f {\mathrm{d}} B_t'. \end{eqnarray} As compared to the original model, this derivation directly identifies the eddy diffusivity contribution, which is only implicit in the Kraichnan model \citep{Gawedzki95,Majda-Kramer}. The usual formulation corresponds to the Stratonovich notation. The Ito calculus further offers means to infer the evolution of the tracer moments, \eqref{mean tracer} and \eqref{variance tracer}. The proposed development introduces an additional non-linearity through $\mbs w$ and possible non-uniform turbulence conditions.
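These properties can be illustrated numerically in the simplest homogeneous setting (our own construction, with illustrative parameter values): for a spatially uniform, time-white velocity and a constant variance tensor $a$, the Ito noise and diffusion terms combine into an exact random shift of the tracer, so each realization conserves its energy exactly, as in \eqref{E-cons}, while the ensemble mean obeys the diffusion equation \eqref{mean tracer}. A minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 256, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # angular wavenumbers
theta0 = np.exp(-8 * (x - np.pi)**2)              # initial tracer (illustrative)
th_hat0 = np.fft.fft(theta0)

a, T, m = 0.5, 1.0, 2000                          # constant variance tensor, horizon, ensemble size
mean = np.zeros(n)
for _ in range(m):
    # With spatially uniform noise only W_T matters, and the Ito equation
    #   d\Theta + sqrt(a) dW ∂_x\Theta = (a/2) ∂_x^2\Theta dt
    # is solved exactly by the random shift \Theta_0(x - sqrt(a) W_T)
    W = np.sqrt(T) * rng.standard_normal()
    th = np.fft.ifft(th_hat0 * np.exp(-1j * k * np.sqrt(a) * W)).real
    assert np.allclose((th**2).sum(), (theta0**2).sum())   # pathwise energy conservation
    mean += th / m

# The ensemble mean obeys the heat equation with diffusivity a/2
mean_theory = np.fft.ifft(th_hat0 * np.exp(-0.5 * a * k**2 * T)).real
print(np.abs(mean - mean_theory).max())           # decays as O(1/sqrt(m))
```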
\subsection{Transport of extensive properties} Hereafter, all fundamental conservation laws are formulated for extensive properties. \subsubsection{Stochastic Reynolds transport theorem} \label{subsubsection Stochastic Reynolds transport theorem} Similarly to the deterministic case, the stochastic Reynolds transport theorem describes the time differential of a scalar function, $q(\boldsymbol{x},t)$, within a material volume, ${\cal V}(t)$, transported by the random flow (\ref{particle_dX}): \begin{equation} \label{th_transport} {\mathrm{d}} \int_{{\cal V}(t)} q = \int_{{\cal V} (t)} \biggl[ \rm D_t q + \boldsymbol{\nabla} \bcdot \left( \boldsymbol{w}^\star {\mathrm{d}} t + \boldsymbol{\sigma} \dif \boldsymbol{B}_t \right )q + {\mathrm{d}} \left < \int_0^t D_{t'} q , \int_0^t \bnabla\! \bcdot\! \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_{t'} \right > \biggr]. \end{equation} This expression, rigorously derived in Appendix \ref{Reynolds}, was first introduced in a slightly different version by \cite{Memin14}. In most cases, the unresolved velocity component, $\boldsymbol{\sigma} \dot{\mbs B}$, is divergence-free and the source of variations of the extensive property $\int_{{\cal V}(t)}q$ is time-differentiable, {\rm i.e.} with a differential of the form $ {\mathrm{d}} \int_{{\cal V}(t)} q = \mathcal{F} {\mathrm{d}} t $. In such a case, for an arbitrary volume, the transport theorem takes the form $\rm D_t q = f {\mathrm{d}} t$, and according to equation \eqref{link DD and material deriv} the material derivative can be replaced by the stochastic transport operator, $\mathbb D_t q$, to provide an intrinsic expression of this stochastic transport theorem. \subsubsection{Jacobian} Taking $q=1$ characterizes the volume variations through the flow Jacobian, $J$: \begin{subequations} \begin{eqnarray} \int_{{\cal V} (t_0)}\!\!\!\!\!\!\!\!{\mathrm{d}} (J(\boldsymbol{X}_t(\boldsymbol{x}_0),t)) {\mathrm{d}} \boldsymbol{x}_0 &=& {\mathrm{d}} \!\!\int_{{\cal V} (t)} \!\!\!\!\!\!\! {\mathrm{d}} \boldsymbol{x} ,\\ &=& \int_{{\cal V} (t)} \!\!\!\!\!\!\!\! \boldsymbol{\nabla} \bcdot \left( \boldsymbol{w}^\star {\mathrm{d}} t + \boldsymbol{\sigma} \dif \boldsymbol{B}_t \right )(\boldsymbol{x},t) \;{\mathrm{d}} \boldsymbol{x} ,\\ &=& \int_{{\cal V} (t_0)} \bigg [ J \boldsymbol{\nabla}\bcdot \left( \boldsymbol{w}^\star {\mathrm{d}} t + \boldsymbol{\sigma} \dif \boldsymbol{B}_t \right ) \bigg ] (\boldsymbol{X}_t(\boldsymbol{x}_0),t) \; {\mathrm{d}} \boldsymbol{x}_0 . \label{th_transport-formal} \end{eqnarray} \end{subequations} Valid for an arbitrary initial volume ${\cal V}(t_0)$, it leads to a familiar form for the Lagrangian flow Jacobian evolution law: \begin{eqnarray} \rm D_t J - J \boldsymbol{\nabla}\bcdot \left( \boldsymbol{w}^\star {\mathrm{d}} t + \boldsymbol{\sigma} \dif \boldsymbol{B}_t \right ) = 0. \label{Jacobian eq} \end{eqnarray} \subsubsection{Incompressibility condition} The Jacobian evolution \eqref{Jacobian eq} provides a necessary and sufficient condition for the isochoric nature of the stochastic flow: \begin{eqnarray} \label{eq_incomp_sto} \boldsymbol{\nabla}\bcdot \boldsymbol{\sigma} = 0 \text{ and } \boldsymbol{\nabla}\bcdot \mbs w^* = 0. \end{eqnarray} If the large-scale flow component, $\boldsymbol{w}$, is solenoidal, this reduces to: \begin{equation} \label{incompressibility} \boldsymbol{\nabla}\bcdot \boldsymbol{\sigma}= 0 \text{ and } \boldsymbol{\nabla}\bcdot \boldsymbol{w} = \boldsymbol{\nabla}\bcdot \left( \boldsymbol{\nabla}\bcdot \mbs a \right) ^{\scriptscriptstyle T}=0.
\end{equation} Note that for an isotropic unresolved velocity, the last condition is naturally satisfied, as this unresolved velocity component is associated with a constant variance tensor, $\mbs a$. \subsection{Summary} An additional Gaussian, time-uncorrelated velocity modifies the expression of the material derivative. In most cases, the resulting stochastic transport operator, $\mathbb D_t$, coincides with the material derivative, $\rm D_t$. Yet, possible differences between $\mathbb D_t$ and $\rm D_t$ have simple analytic expressions. This stochastic transport operator leads to an Eulerian expression of the tracer transport. As obtained, the tracer is forced by a multiplicative noise and mixed by an inhomogeneous and anisotropic diffusion. Moreover, the advection drift is possibly modified by a correction term related to the spatial variation of the small-scale velocity variance. The random forcing, the dissipation and the effective drift correction are all linked. Accordingly, the energy is conserved for each realization, as the tracer energy dissipated by the diffusion term is exactly compensated by the energy associated with the random velocity forcing. For a passive tracer, the evolution laws for the mean and the variance make these energy exchanges precise. The unresolved velocity transfers energy from the mean part of the tracer to its random part. For an active tracer, this velocity component carries energy from the whole tracer field to its random non-differentiable component. \section{Stochastic versions of geophysical flow models} \label{section Stochastic versions of oceanic models} The stochastic version of the Reynolds transport theorem provides us with the flow Jacobian evolution law, as well as with the rate of change of any scalar quantity within a material volume. Together with the fundamental conservation laws of classical mechanics, it provides a powerful tool to derive stochastic flow models in a systematic way. Thanks to the bridge between the material derivative and the stochastic transport operator, this derivation closely follows the usual deterministic derivations. All along the following development, the small-scale random flow component will be assumed incompressible, {\rm i.e.} associated with a divergence-free diffusion tensor: \begin{eqnarray} \label{incompressiblity cond sigma} \boldsymbol{\nabla}\bcdot \boldsymbol{\sigma}=0. \end{eqnarray} This assumption remains realistic for the geophysical models considered in this study, and does not prevent the resolved velocity component (and therefore the whole field) from being compressible. \subsection{Mass conservation} Mass conservation for arbitrary volumes governs the stochastic transport of the fluid density, denoted $\rho$: \begin{equation} \label{Cont-eq} \mathbb D_t \rho + \rho \boldsymbol{\nabla}\bcdot \boldsymbol{w}^* {\mathrm{d}} t=0. \end{equation} As suggested in \ref{subsubsection Stochastic Reynolds transport theorem}, the material derivative, $\rm D_t$, is here replaced by $\mathbb D_t$, defined by Eq. \eqref{Mder}. Indeed, the mass variation is zero and thus time-continuous, and the stochastic transport operator coincides with the material derivative.
\subsection{Active scalar conservation law} The transport theorem \eqref{th_transport} applied to a quantity $\rho {\varTheta}$ describes the rate of change of the scalar ${\varTheta}$, which is generally balanced by a production/dissipation term, as: \begin{equation} \label{sto-scalar-cons-gen} \mathbb D_t (\rho{\varTheta}) +\rho {\varTheta} \boldsymbol{\nabla}\bcdot \boldsymbol{w}^* {\mathrm{d}} t = \rho{\cal F}_{\varTheta}({\varTheta}) {\mathrm{d}} t. \end{equation} Again, the stochastic transport operator, $\mathbb D_t$, is used instead of the material derivative, $\rm D_t$, since the source of variation $\int_0^t \left( \int_{{\cal V}(t)} \rho{\cal F}_{\varTheta} \right) {\mathrm{d}} t$ of the extensive property, $\int_{{\cal V}(t)} \rho{\varTheta}$, is time-differentiable (integral in ${\mathrm{d}} t$), as explained in \ref{subsubsection Stochastic Reynolds transport theorem}. Considering the product rule (\ref{product-rule}) and mass conservation \eqref{Cont-eq}, the transport evolution model for the scalar reads: \begin{equation} \label{sto-scalar-cons} \mathbb D_t{\varTheta} = {\cal F}_{\varTheta}({\varTheta}) {\mathrm{d}} t. \end{equation} For a negligible production/dissipation term, the scalar is conserved by the stochastic flow and follows the properties highlighted in section \ref{Transport under location uncertainty} -- e.g. the energy conservation of each realization and the dissipation of the mean field. As in the deterministic case, the 1$^{st}$ law of thermodynamics implies both temperature conservation (${\varTheta}=T$) and conservation of the amount of substance -- e.g. the conservation of salinity (${\varTheta}=S$): \begin{subequations} \begin{eqnarray} \label{transportTemp} \mathbb D_t T={\cal F}_T(T) {\mathrm{d}} t,\\ \label{transportSalinity} \mathbb D_t S ={\cal F}_S(S) {\mathrm{d}} t. \end{eqnarray} \end{subequations} The term $ {\cal F}_{\varTheta}({\varTheta})$ corresponds to diabatic terms such as the molecular diffusion process or the radiative heat transfer. \subsection{Conservation of momentum} To derive a stochastic representation of the Navier-Stokes equations, the pressure forcing is decomposed into a continuous component, $p$, and a white-noise term $\dot{p}_{\sigma} = {{\mathrm{d}}_t p_{\sigma}}/{{\mathrm{d}} t}$. The smooth component of the velocity is not only assumed continuous but also time-differentiable \citep{Memin14}. As demonstrated in Appendix \ref{Appendix Stochastic Navier-Stokes model}, the flow dynamics for an observer in a uniformly rotating coordinate frame reads: \\\!\\ \fcolorbox{black}{lightgray}{ \begin{minipage}{0.95\textwidth} \begin{center} \bf Navier-Stokes equations under location uncertainty in a rotating frame \end{center} \begin{subequations} \label{sto-NS-Rot} \begin{align} &\!\!\text{\em Momentum equations} \nonumber\\ &\;\;\;\; \label{Navier Stokes:momentum:variation finie} \partial_t \boldsymbol{w} + \left( \boldsymbol{w}^* \bcdot \boldsymbol{\nabla} \right) \boldsymbol{w} - \frac{1}{2\rho} \sum_{i,j} \partial_{i}\biggl (\rho a_{ij} \partial_j \boldsymbol{w} \biggr) + \mbs f \times \boldsymbol{w} = \mbs g - \frac{1}{\rho}\boldsymbol{\nabla} p + \frac{1}{\rho}{\cal F}(\boldsymbol{w}),\\ &\!\!\text{\em Effective drift} \nonumber\\ &\;\;\;\;\boldsymbol{w}^*= \boldsymbol{w} - \tfrac{1}{2} (\boldsymbol{\nabla}\bcdot \mbs a)^{\scriptscriptstyle T}, \\ &\!\!\text{\em Random pressure contribution} \nonumber\\ &\;\;\;\; \label{Navier Stokes:momentum:martingale} \boldsymbol{\nabla} {\mathrm{d}}_t p_{\sigma} =\!
\left( \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t \bcdot \boldsymbol{\nabla} \right) \boldsymbol{w} - \rho \mbs f \times \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t + {\cal F}(\boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_t),\\ &\!\!\text{\em Mass conservation} \nonumber\\ &\;\;\;\;\mathbb D_t \rho + \rho \boldsymbol{\nabla}\bcdot \boldsymbol{w}^* {\mathrm{d}} t=0, \;\;{\boldsymbol{\nabla}} \bcdot(\boldsymbol{\sigma} {\mathrm{d}}\boldsymbol{B}_t ) =0. \end{align} \end{subequations} \end{minipage} } \\\!\\\!\\ Similarly to the Reynolds decomposition, the dynamics associated with the drift component includes an additional stress term, and the large-scale velocity component is advected by an effective eddy drift velocity. The density is driven by a stochastic mass conservation equation or, alternatively, through the stochastic transport of temperature and salinity (\ref{transportTemp}-\ref{transportSalinity}), together with a state law. The random density constitutes a random forcing in the large-scale momentum equation. For incompressible flows, the pressure is then recovered from a modified Poisson equation: \begin{equation} \label{Sto-Pressure-Poisson} - \Delta p= \boldsymbol{\nabla}\bcdot \biggl(\rho \bigl(\boldsymbol{w}^*\bcdot \boldsymbol{\nabla} \bigr) \boldsymbol{w} + \rho \mbs f \times \boldsymbol{w} - \tfrac{1}{2}\sum_{ij} \partial_i (\; \rho a_{ij} \partial_j\boldsymbol{w})\biggr). \end{equation} The pressure acts as a Lagrangian penalty term to constrain the large-scale component to be divergence-free. This formalization can be compared to another stochastic framework based on a scale gap: Stochastic Super-Parametrization (SSP) \citep{Grooms13,Grooms14}. Both modelings enable a separation between the large-scale velocity \eqref{Navier Stokes:momentum:variation finie} and the small-scale contribution \eqref{Navier Stokes:momentum:martingale}. This is done through a differentiability assumption on the large-scale drift, $\boldsymbol{w}$, in the modeling under location uncertainty, and through the Reynolds decomposition and a point approximation assumption in SSP. However, it can be pointed out that no averaging procedure is involved in the modeling under location uncertainty. Furthermore, the transports of density, temperature and salinity involve random forcings. Unlike in SSP, the whole system to be simulated is thus random. This randomness is of main importance for Uncertainty Quantification (UQ) applications, as illustrated theoretically in section \ref{Transport under location uncertainty} and numerically in Part II of this set of papers \citep{resseguier2016geo2}. Another main difference between the two methods lies in the parametrization of the subgrid tensors. Each SSP scalar evolution law involves a different subgrid tensor, whereas there is a single one (related to the small-scale velocity) for all transports under location uncertainty. For both models, it can be noted that the small-scale velocity component is Gaussian conditionally on the large-scale properties. Unlike our models, the SSP proposes a simple evolution model for this unresolved velocity and hence for its statistics. This type of linear forced-dissipative evolution law, introduced by Eddy-Damped Quasi-Normal Markovian (EDQNM) models \citep{orszag1970analytical,Leith71,chasnov1991simulation}, could be used as well to specify the diffusion operator $\boldsymbol{\sigma}$ and close the models under location uncertainty. Yet, such a closure would also need to be parametrized.
\subsection{Atmosphere and Ocean dynamics approximations} Ocean and atmosphere dynamical models generally rely on several successive approximations. In the following, we review these approximations within the uncertainty framework. For ocean and atmosphere flows, a partition of the density and pressure is generally considered: \begin{subequations} \label{p-rho-decomposition} \begin{eqnarray} \rho &=& \rho_b + \rho_0(z) + \rho'(x,y,z,t),\\ p &=& \widetilde p(z) + p'(x,y,z,t). \end{eqnarray} \end{subequations} The fields $\widetilde \rho(z)= \rho_b + \rho_0(z)$ and $\widetilde p(z)$ correspond to the density and the pressure at equilibrium (without any motion), respectively; they are deterministic functions and depend on the height only. The pressure and density departures, $p'$ and $\rho'$, are random functions, depending on the uncertainty component. From the expression of the vertical velocity component (\ref{sto-NS-Rot}a), the equilibrium fields are related through a hydrostatic balance: \begin{equation} \label{H-bal} \frac{\partial \widetilde p}{\partial z} = - g\widetilde \rho(z). \end{equation} \subsubsection{Traditional approximation} This approximation allows neglecting the deflecting rotation forces associated with vertical motions. Considering the first moment conservation along the vertical direction of (\ref{sto-NS-Rot}), with the hydrostatic balance (\ref{H-bal}), it reads: \begin{equation} \partial_t w + \left( \boldsymbol{w}^* \bcdot \boldsymbol{\nabla} \right) w - \tfrac{1}{2} \sum_{i,j} \partial_{i}\biggl (a_{ij} \partial_j w\biggr) + f_x v - f_y u = - \frac{1}{\rho}\left [\rho' g+ \frac{\partial p'}{\partial z}\right] + {\cal F}(w). \end{equation} This approximation is justified when a hydrostatic assumption is employed. \subsubsection{Boussinesq approximation} For small density fluctuations ({\rm i.e.} the Boussinesq approximation), as observed in the ocean, the stochastic mass conservation reads \begin{equation} \label{approxBoussinesq_isochoric} 0 = \mathbb D_t \rho + \rho \boldsymbol{\nabla}\bcdot \boldsymbol{w}^* {\mathrm{d}} t \approx \rho_b \boldsymbol{\nabla}\bcdot \boldsymbol{w}^* {\mathrm{d}} t. \end{equation} This implies that the flow is volume-preserving. In an anelastic approximation, density variations dominate. It can be shown that we obtain the weaker constraint, associated with a horizontal uncertainty: \begin{eqnarray} \boldsymbol{\nabla}\bcdot \boldsymbol{w} - \tfrac{1}{2} \boldsymbol{\nabla}_H\bcdot(\boldsymbol{\nabla}_H\bcdot \mbs{a}_H)^{\scriptscriptstyle T} = \frac{g}{c^2\rho}(w\tilde \rho), \end{eqnarray} where $c$ denotes the velocity of the acoustic waves and the subscript $H$ indicates the set of horizontal coordinates. The classical anelastic constraint implicitly assumes a divergence-free condition on the variance tensor divergence (as obtained for homogeneous turbulence). According to equations \eqref{transportTemp} and \eqref{transportSalinity}, temperature and salinity are transported by the random flow. If those tracers do not oscillate too much, the density anomaly, $\rho - \rho_b$, can be approximated by a linear combination of these two properties. Thus, in the Boussinesq approximation, this anomaly is transported: \begin{equation} \label{Rhoz} 0 = \rm D_t (\rho-\rho_b) = \mathbb D_t (\rho-\rho_b) . \end{equation} \cite{Holm2015} obtained the very same stochastic transport of the density anomaly from a Lagrangian mechanics approach.
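For completeness, the linear-combination step behind \eqref{Rhoz} can be made explicit. Assuming a linearized equation of state with constant expansion/contraction coefficients $\alpha$ and $\beta$ and constant reference values $T_b$ and $S_b$ (our notation, introduced only for this illustration), the linearity of $\mathbb D_t$ and the transport equations (\ref{transportTemp}-\ref{transportSalinity}) with negligible diabatic terms give:
\begin{equation*}
\rho-\rho_b \approx \rho_b\bigl[-\alpha\,(T-T_b)+\beta\,(S-S_b)\bigr] \quad\Longrightarrow\quad \mathbb D_t (\rho-\rho_b) \approx \rho_b\bigl(-\alpha\,\mathbb D_t T+\beta\,\mathbb D_t S\bigr)=0.
\end{equation*}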
Using the same approximation, the contribution of the momentum material derivative associated with the density variation can be neglected. The Navier-Stokes equations coupling the Boussinesq and traditional approximations then read: \\\!\\ \fcolorbox{black}{lightgray}{ \begin{minipage}{0.95\textwidth} \begin{center} \bf Simple Boussinesq equations under location uncertainty \end{center} \begin{subequations} \label{sto-Boussinesq-buo1} \begin{align} &\!\!\text{\em Momentum equations} \nonumber\\ &\;\;\;\; \partial_t \boldsymbol{w} + \left( \boldsymbol{w}^* \bcdot \boldsymbol{\nabla} \right) \boldsymbol{w} - \tfrac{1}{2} \sum_{i,j} \partial_{i}\biggl (a_{ij} \partial_j \boldsymbol{w} \biggr) + f \mbs k \times \mbs u = b \; \mbs k - \frac{1}{\rho_b}\boldsymbol{\nabla} p' + {\cal F}(\boldsymbol{w}), \label{w-comp}\\ &\!\!\text{\em Effective drift} \nonumber\\ &\;\;\;\; \boldsymbol{w}^* = \begin{pmatrix} \mbs u^* \\ w^* \end{pmatrix} = \boldsymbol{w} - \tfrac{1}{2} (\boldsymbol{\nabla}\bcdot \mbs a)^{\scriptscriptstyle T} ,\\ &\!\!\text{\em Buoyancy equation} \nonumber\\ &\;\;\;\; \mathbb D_t b + N^2 \left( w^* {\mathrm{d}} t + (\boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t)_z \right)= \tfrac{1}{2} \boldsymbol{\nabla} \bcdot \left( \mbs a_{\bullet z} N^2 \right) {\mathrm{d}} t, \label{eq-buo}\\ &\!\!\text{\em Random pressure fluctuation} \nonumber\\ &\;\;\;\; \boldsymbol{\nabla} {\mathrm{d}}_t p_{\sigma} =\! - \rho_b \left ( \boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t \bcdot \boldsymbol{\nabla} \right ) \boldsymbol{w}^* - f\mbs k \times (\boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t)_H+ {\cal F}(\boldsymbol{\sigma} {\mathrm{d}} \boldsymbol{B}_t),\label{dp}\\ &\!\!\text{\em Incompressibility} \nonumber\\ &\;\;\;\; \boldsymbol{\nabla} \bcdot \boldsymbol{w} ={\boldsymbol{\nabla}} {\bcdot}\bigl(\boldsymbol{\sigma} \dot{ \boldsymbol{B}}\bigr) = \boldsymbol{\nabla}\bcdot\boldsymbol{\nabla} \bcdot \mbs a =0. \label{inc-cond-Boussinesq} \end{align} \end{subequations} \end{minipage} } \\\!\\\!\\ For this system, the thermodynamics equations are expressed through the buoyancy variable $b=-g \rho'/\rho_b$, and the stratification (Brunt-V\"ais\"al\"a frequency) $N^2(z)=-g /{\rho_b}\ \partial_z \rho_0(z)$ is introduced. The buoyancy term constitutes a random forcing of the vertical large-scale velocity component. Since the density anomaly, $\rho-\rho_b$, has been decomposed into a constant background slope and a residual, the multiplicative noise of equation \eqref{Rhoz} is split into an additive and a multiplicative noise in \eqref{eq-buo}. The additive noise drains random energy from the stratification toward the buoyancy. Therefore, the buoyancy energy is not conserved, due to the background stratification. \subsubsection{Buoyancy oscillations} To illustrate the effect of this additive noise in simple cases, we consider here a constant-along-depth buoyancy anomaly and stratification ($\partial_z b = 0$ and $\partial_z N =0$) and only a vertical motion component ({\em i.e.} $\mbs u =0$ and $(\boldsymbol{\sigma}{\mathrm{d}}\boldsymbol{B}_t)_H=0$) with no dependence on depth (due to the divergence constraint). Note that this latter constraint on the diffusion tensor implies that only $a_{zz}$ is non-zero, with no dependence on depth as well. Then, the Boussinesq equations read \begin{eqnarray} \partial_t w = b \text{ and } {\mathrm{d}}_t b = - N^2 (w {\mathrm{d}} t +(\boldsymbol{\sigma}\dif \boldsymbol{B}_t)_{z}).
\label{buo-strat} \end{eqnarray} Similarly to the deterministic case, we recognize an oscillatory system if $N^2>0$ and a diverging system if $N^2<0$ ({\rm i.e.} when lighter fluid lies below heavier fluid). The velocity and buoyancy are coupled by gravity and transport. However, in our stochastic framework, the density anomaly is also transported by a random velocity. This rapidly fluctuating velocity may be interpreted as the action of wind on the surface of the ocean. The interaction between this unresolved velocity component and the stratification acts as a random forcing on the oscillator: \begin{equation} {\mathrm{d}}_t \partial_t w + N^2 w {\mathrm{d}} t =-N^2 (\boldsymbol{\sigma}\dif \boldsymbol{B}_t)_{z}. \label{Oscillator randomly forced} \end{equation} To solve this equation, one can note that: \begin{eqnarray} {\mathrm{d}}_t \left( {\rm e}^{-2\rm i Nt} \partial_t ( {\rm e}^{\rm i Nt} w ) \right) = - N^2 {\rm e}^{-\rm i Nt} (\boldsymbol{\sigma} \dif \boldsymbol{B}_t )_z . \end{eqnarray} Then, by integrating twice, we get the solutions of the stochastic system \eqref{buo-strat}: \begin{align} w(t) &= \underbrace{ w(0) \cos(Nt) + \partial_t w(0)/N \sin(Nt) }_{=\mathbb{E} (w(t) )} - N\int_{0}^t \sin\bigl(N(t-r)\bigr) (\boldsymbol{\sigma}{\mathrm{d}} \boldsymbol{B}_r)_{z},\\ b(t) &= \underbrace{ \partial_t w(0) \cos(Nt) -w(0) N \sin(Nt) }_{=\mathbb{E} (b(t) )} -N^2 \int_0^t \cos\bigl(N(t-r)\bigr) (\boldsymbol{\sigma}{\mathrm{d}} \boldsymbol{B}_r)_{z}. \end{align} The ensemble means are the traditional deterministic solutions, whereas the random parts are continuous summations of sine waves with uncorrelated random amplitudes. At each time $r$, the additive random forcing introduces an oscillation. Without dissipative processes, the latter remains in the system, but the influence of past excitations is weighted by a sine wave due to the phase change. The buoyancy and the velocity are Gaussian random variables (as linear combinations of independent Gaussian variables). Therefore, their finite-dimensional laws ({\em i.e.} the multi-time probability density functions) are entirely defined by their mean and covariance functions. The variances can be computed through the It\^o isometry \citep{Oksendal98}. Then, the velocity covariance can be inferred from the SDE \eqref{Oscillator randomly forced}: \begin{eqnarray} \label{cov_buo-strat} \mathrm{Cov}_w(t, t+\tau) = \frac{a_{zz}N}{4} \cos(N \tau ) \left ( 2 N t - \sin( 2 N t) \right ) + \frac{ a_{zz}N}{4} \sin(N \tau ) \left ( 1 - \cos( 2 N t) \right ) . \end{eqnarray} The covariance of the buoyancy is similar. Since the interaction between the unresolved velocity component and the background density gradient cannot be resolved deterministically, uncertainties of the dynamics accumulate. Each time introduces a new, uncorrelated random excitation. This is why the buoyancy and velocity variances increase linearly with time. In contrast, in a deterministic oscillator with random perturbations of the initial conditions, the variance remains constant and depends solely on the initial velocity variance. This growth also illustrates, in a very simple case, the possible destabilization effects of the unresolved velocity in the models under location uncertainty. The first term of the covariance \eqref{cov_buo-strat} modulates the variance with a sine wave. The randomness of $w$ is generated by a set of sine waves which have coherent phases and interfere.
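This linear growth of the variance is easy to check numerically. The following minimal sketch (purely illustrative and not part of the derivation; the values $N=1$, $a_{zz}=0.1$, the time step and the ensemble size are arbitrary choices of ours) integrates the system \eqref{buo-strat} with a simple Euler--Maruyama-type scheme and compares the ensemble variance of $w$ with the prediction \eqref{cov_buo-strat} at $\tau = 0$:
\begin{verbatim}
import numpy as np

# Monte Carlo check of the randomly forced oscillator (illustrative values):
#   dw = b dt,   db = -N^2 (w dt + sigma dB_t),   with a_zz = sigma^2
rng = np.random.default_rng(0)
N, a_zz = 1.0, 0.1
sigma = np.sqrt(a_zz)
T, dt, n_ens = 20.0, 1.0e-3, 2000

w = np.zeros(n_ens)        # velocity, w(0) = 0
b = np.zeros(n_ens)        # buoyancy b = dw/dt, b(0) = 0
for _ in range(int(T / dt)):
    dB = rng.standard_normal(n_ens) * np.sqrt(dt)
    w += b * dt                          # dw = b dt
    b += -N**2 * (w * dt + sigma * dB)   # db = -N^2 (w dt + sigma dB)

var_theory = a_zz * N / 4 * (2 * N * T - np.sin(2 * N * T))
print(w.var(), var_theory)  # should agree up to sampling/discretization error
\end{verbatim}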
Returning to the covariance \eqref{cov_buo-strat}: when $N \tau = 0 \ [2 \pi]$ the noises with correlated amplitudes, $(\boldsymbol{\sigma}{\mathrm{d}} \boldsymbol{B}_r)_{z}$, in $w(t)$ and $w(t+\tau)$ are in phase, and thus the velocity covariance is large. When $N \tau = \pi \ [2 \pi]$ these correlated noises have opposite phases, which yields a negative velocity covariance. When $N \tau$ is close to $ \frac \pi 2 \ [ \pi]$, the noises are in quadrature and the first term of the velocity covariance vanishes. \subsection{Summary} The fundamental conservation laws (mass, momentum and energy) have been interpreted within the proposed stochastic framework. Usual approximations of fluid dynamics have been considered, leading to a stochastic version of the Boussinesq equations. As derived, the buoyancy is transported by a smooth large-scale velocity component and a small-scale random field, delta-correlated in time. Consequently, the buoyancy is forced by an additive and a multiplicative noise, uncorrelated in time but correlated in space. The additive noise encodes the interaction between the unresolved velocity and the background stratification. The resulting random buoyancy then appears as an additive, time-correlated random forcing in the vertical momentum equation. Both the momentum and thermodynamic equations then involve an inhomogeneous and anisotropic diffusion, and a drift correction, both of which depend on the unresolved velocity variance tensor, $\mbs a$. Assuming hydrostatic equilibrium in this stochastic Boussinesq model directly provides a stochastic version of the primitive equations. A solvable model has also been derived from this Boussinesq model. This toy model exemplifies how the random forcing continually increases the variance of the solution. \subsection{Guidelines for the derivation of models under location uncertainty} \label{Guidelines for the derivation of models under location uncertainty} The main steps of the derivation of dynamics under location uncertainty are sketched out below. \begin{enumerate}[(i)] \item The conservation laws of classical mechanics describe the variation of some extensive properties. As illustrated in Appendix \ref{Appendix Stochastic Navier-Stokes model} for the stochastic Navier-Stokes model, if the extensive property of interest (the linear momentum in that Appendix) has a component uncorrelated in time, the variations of this component must be balanced by a very irregular forcing, and can be discarded. \item The stochastic Reynolds transport theorem \eqref{th_transport} enables us to interpret the variation of the time-correlated component of the extensive property. The expression of the stochastic material derivative of an associated intensive quantity follows. \item The formulas \eqref{link DD and material deriv} relate this material derivative, $\rm D_t$, to the stochastic transport operator, $\mathbb D_t$. In most cases, these operators coincide. \item Gathering the equations from (ii) and (iii) provides an explicit Eulerian evolution law. \item Additional regularity assumptions can be used to separate the large-scale and small-scale components of the evolution law. As an example, the velocity component, $\boldsymbol{w}$, has been assumed to be differentiable with respect to time in this section, {\rm i.e.} the acceleration component, $\partial_t \boldsymbol{w}$, is correlated in time. Thus, there is no time-uncorrelated noise in the large-scale momentum evolution law, and the random pressure fluctuations appear in a separate equation. This separation is of great interest for deterministic LES-like simulations.
However, with this approximation, we lose the conservation of the kinetic energy \eqref{E-cons}. For Uncertainty Quantification (UQ) purposes, this separation is not necessary. \item With or without regularity assumptions, usual approximations ({\em e.g.} the Boussinesq approximation) can be made to further simplify the stochastic model. \end{enumerate} Let us point out that the corresponding models involve subgrid terms which generally cannot be neglected. When nondimensionalized, those subgrid terms are weighted by an additional dimensionless number whose value depends on the noise magnitude. For a weak noise, the approximate dynamical models take a random form that remains similar to their deterministic counterparts. Conversely, the system is generally significantly changed when considering a strong noise. A second companion paper (part II) \citep{resseguier2016geo2} describes random versions of the Quasi-Geostrophic (QG) and Surface Quasi-Geostrophic (SQG) models with a moderate influence of the subgrid terms, whereas the third one (part III) \citep{resseguier2016geo3} focuses on the same models with a stronger influence of the subgrid terms. The two dynamics are significantly different. \\ To close the stochastic system, the operator $\boldsymbol{\sigma}$ needs to be fully specified. Several solutions can be proposed for that purpose. The simplest specification consists in resorting to a homogeneous parametrization such as the Kraichnan model \citep{Kraichnan68,kraichnan1994anomalous,Gawedzki95,Majda-Kramer}. The companion paper \cite{resseguier2016geo2} relies on this type of random field, with a parameterization fixed from an idealized spectrum. When the small-scale velocity is observable, or at least partially observable, the structure of that operator can be estimated. For instance, in \cite{resseguier2015reduced} a nonparametric and inhomogeneous variance tensor $\mbs a (\boldsymbol{x}) = \boldsymbol{\sigma} (\boldsymbol{x}) \boldsymbol{\sigma} (\boldsymbol{x})^{\scriptscriptstyle T}$ is estimated from a sequence of observed velocity fields. Parametric and/or homogeneous models could also be specified. If no small-scale statistics are available, a closure can express $\boldsymbol{\sigma}$ as a function of large-scale quantities and similarity assumptions \citep{Kadri-Memin-16,Chandramouli16}. The unresolved velocity can also be defined as the solution of simple linearized equations subject to advection by the large-scale components, damping and an additive random forcing, as in {\em e.g.} quasi-linear approximations \citep{Farrell14} or stochastic super-parameterizations \citep{Grooms13,Grooms14}. Existing methodologies from the data assimilation literature would also be of great interest in this context. Several authors define models from estimations of observed correlation lengths or correlation deformations \citep{pannekoucke2008estimation,mirouze2010representation,Weaver-Courtier01}. Others specify the correlation matrices by diffusion equations \citep{michel2013estimatinga,michel2013estimatingb,pannekoucke2014modelling}. \section{Conclusion} In this paper, a random component is added to a smooth velocity field in order to model the effect of coarse-graining. The random component is chosen Gaussian and uncorrelated in time. Nevertheless, it can be inhomogeneous and anisotropic in space. With such a velocity, the expression of the material derivative is changed. To make this change explicit, we introduced the stochastic transport operator, $\mathbb D_t$.
The material derivative, $\rm D_t$, generally coincides with this operator, especially for tracer transport. Otherwise, the difference between these operators has a simple analytic expression. The stochastic transport operator involves an anisotropic and inhomogeneous diffusion, a drift correction and a multiplicative noise. These terms are specified by the statistics of the subgrid velocity. The diffusion term generalizes the Boussinesq eddy-viscosity assumption. Moreover, the link between the three previous terms ensures many desirable properties for tracers, such as energy conservation and a continuous increase of the variance. For a passive tracer, the PDEs of the mean and variance fields have been derived. The unresolved velocity transfers energy from the small scales of the mean field to the variance. This is very suitable for quantifying the uncertainty associated with subgrid dynamics. This randomized dynamics has been called transport under location uncertainty. A stochastic version of the Reynolds transport theorem has then been derived. It enables us to compute the time differentiation of extensive properties, and hence to interpret the conservation laws of classical mechanics in a stochastic sense. Applied to the conservation of linear momentum, the amount of substance and the first principle of thermodynamics, a stochastic version of the Navier-Stokes equations is obtained. Similarly to the deterministic case, a small-buoyancy assumption leads to random Boussinesq equations. The random transport of buoyancy involves both a multiplicative and an additive noise. The additive noise encodes the interaction between the unresolved velocity and the background stratification. We schematically presented the action of this forcing through a solvable model of the vertical oscillations of fluid parcels. \\ Under strong rotation and strong stratification assumptions, the stochastic Boussinesq representation simplifies to different mesoscale models depending on the scaling of the subgrid terms. The companion papers part II \citep{resseguier2016geo2} and part III \citep{resseguier2016geo3} describe such models. For a moderate influence of the noise-driven subgrid terms, the Potential Vorticity (PV) is randomly transported, up to three source terms \citep{resseguier2016geo2}. Assuming zero PV in the fluid interior yields the usual Surface Quasi-Geostrophic (SQG) relationship. The stochastic transport of buoyancy then yields a stochastic SQG model referred to as the SQG model under Moderate Uncertainty ($SQG_{MU}$). This two-dimensional nonlinear dynamics enables \cite{resseguier2016geo2} to numerically unveil the advantages of the models under location uncertainty in terms of the restoration of small-scale structures (in a single realization) and of ensemble model error prediction (with an improvement of one order of magnitude compared to perturbed deterministic models). To go beyond the framework of this paper, larger-scale random dynamics can be inferred by averaging the models under location uncertainty using singular perturbation or stochastic invariant manifold theories \citep{gottwald2013role}. Finally, a delta-correlated process and stochastic calculus may seem insufficient to model the smallest velocity scales. It\^o formulas deal with white-noise forcing and contain only second-order terms. For higher-order terms, such as hyperviscosity, more complete theories exist \citep{klyatskin2005stochastic}. \section*{Acknowledgments} The authors thank Guillaume Lapeyre, Aur\'elien Ponte, Jeroen Molemaker, Guillaume Roulet and Jonathan Gula for helpful discussions.
We also acknowledge the support of the ESA DUE GlobCurrent project (contract no. 4000109513/13/I-LG), the ``Laboratoires d'Excellence'' CominLabs, Lebesgue and Mer (grant ANR-10-LABX-19-01) through the SEACS project. \bibliographystyle{plainnat}
\section{\label{intro}Once upon a time in a classroom...} If there is one thing that can be quite hard for some students, it is understanding the reason behind those infamous sign rules associated with the basic operations involving positive and negative numbers. \begin{quote} \emph{\textquotedblleft Plus with plus is plus,\\ Plus with minus is minus,\\ Minus with plus is also minus and\\ Minus with minus is plus.\\ Why is it so?\textquotedblright} \end{quote} At least, that was the question that would not leave my head when I was twelve years old, some time after my introduction to negative numbers. I confess I cannot say that I had \textquotedblleft great\textquotedblright \hspace*{0.01cm} difficulties with Mathematics throughout my life since, from very early on, my grades were very high and I already liked it very much. And my taste for it was such that I suspect that some people who met me around the time I met my first numbers and their first sums must have found me a rather strange child in some sense. After all, I always found a way to surround myself with numbers on all sides, to the point of, for example, always carrying a pen and a notebook with me (even during vacations) just to keep writing numbers and inventing sums with the letters of the alphabet. In fact, seeing numbers and letters being added was very beautiful to me, and the certainty I already had, from the age of six, was that, when I \textquotedblleft grew up\textquotedblright , I would know how to do all possible calculations, especially all those that allowed one to understand the things of the Universe. However, although I really cannot claim that I had \textquotedblleft great\textquotedblright \hspace*{0.01cm} difficulties with Mathematics, I cannot say the same about most of the teachers who taught me Mathematics. The Mathematics they presented to me was very boring since, in their classes, they never proposed any interesting challenge and never tried to build a correspondence between that Mathematics and reality. Basically, the only thing they did was ask us to memorize multiplication tables and a bunch of rules for performing operations, without ever justifying them. And since that Mathematics was not the one I already loved, over time I grew discouraged; I stopped doing all the tasks that were not graded and only did the graded ones at the \textquotedblleft last minute\textquotedblright \hspace*{0.01cm} to minimize all that tedium in my life. My enthusiasm only truly returned when I was the same twelve years old as when I met the negative numbers, especially after all the classes started to focus on operations involving letters and numbers.
Although explaining all the logic behind the addition of fractions is something rather simple for a teacher to do, the same scenario does not seem to apply when, for example, that same teacher needs to explain why multiplications must be carried out before additions, or even why \textquotedblleft every\textquotedblright \hspace*{0.01cm} number raised to zero is always equal to one\footnote{I put quotation marks around \textquotedblleft every\textquotedblright \hspace*{0.01cm} because, even though most of the teachers who taught me in Basic Education always asserted that \textquotedblleft zero raised to zero is always zero\textquotedblright \hspace*{0.01cm} (even marking me wrong when, at the time, I did not assign any result to that operation), the truth is that the result of zero raised to zero \textbf{is not well defined} \cite{elon1,elon2}: after all, although some algebraists do work with the definition $ \boldsymbol{0^{0}} = \boldsymbol{1} $, from the point of view of Analysis this definition does not make much sense. That is, I had some reason not to assign any result to that operation.}. Most Mathematics teachers in Basic Education simply ask students to \textbf{memorize} all these rules and, consequently, end up grading those students only on their ability to comply with them, not to understand them. That is, from a pedagogical point of view, this situation can be interpreted as a fine example of the \textquotedblleft banking\textquotedblright \hspace*{0.01cm} model of education that Paulo Freire mentions in Ref. \cite{bancaria}, where teachers treat students as mere receptacles that must always be filled with some information, without leaving any room for reflection. In my case, for example, few Mathematics teachers escaped this \textquotedblleft banking\textquotedblright \hspace*{0.01cm} education and managed to explain to me the whys of some operations: among the thirteen teachers who taught me Mathematics in Basic Education, only \textbf{one} of them never dodged discussing these whys with me. The only thing the others did was hand out huge lists of very boring and repetitive exercises, where the watchwords for solving them were always \textquotedblleft memorize\textquotedblright \hspace*{0.01cm} and \textquotedblleft reproduce\textquotedblright . And that was exactly what happened when, at twelve years of age, I reached the (old) sixth grade of Basic Education and the teacher who taught me Mathematics began doing calculations with negative numbers, always repeating \begin{quote} \emph{\textquotedblleft Plus with plus is plus,\\ Plus with minus is minus,\\ Minus with plus is minus and\\ Minus with minus is plus\textquotedblright ,} \end{quote} without ever explaining the why of this \textquotedblleft mantra\textquotedblright \hspace*{0.01cm} to anyone. That tied a real \textquotedblleft knot\textquotedblright \hspace*{0.01cm} in my head! It simply would not go in! The consequence? I, who had always been one of the best students in every school I attended, found myself for the first time in a completely unprecedented situation: I had to take a remedial exam in Mathematics in the second term of that year (since, from that year on, a remedial period had been instituted at the end of each term).
In the following term, however, something different happened: that teacher decided to replace the first test with a huge list of exercises, precisely one of those lists I hated doing. I think there were more than two hundred calculations to be worked out on that list. And, to make my situation a bit worse, the list had to be done in groups. I confess I did not much like doing graded activities in groups at that time. And it was not for any antisocial reason: it was simply that it was not uncommon for some of my classmates to take advantage of the fact that I always got the best grades in the class in order not to do their share of such activities. That is, whenever I did these group activities, it was not at all unusual to find myself in a situation where I had to do everything on my own just so as not to receive a grade I felt I did not deserve. But the case of this list managed to be the worst of the worst. After all, although the teacher's idea was actually a good one (since her proposal was probably to set up a collaborative activity in which students would help and learn from one another), it was not a group assignment where students could form their own groups: all the groups were defined by the teacher herself through draws, and the draw that defined my group placed me precisely alongside the most problematic students in that class, who loved to bully everyone, especially me. That is, on top of all the problems I already had with those blessed negative numbers, I still had to deal with a \textquotedblleft new\textquotedblright \hspace*{0.01cm} problem, now of a human kind. The result: I could not get anything done with that group since, whether by disposition or out of fear of the bullies, nobody there was willing to collaborate with me. In fact, I did not even have access to the list while I was in that group, since only one copy was handed to each group and, in the case of the one given to my group, it was handed precisely to one of the bullies. I even reported the matter to the teacher, asking her to move me to another group, since the people in that group would not even give me access to the list so that I could solve it. But she told me she could not do that because, otherwise, members of other groups would end up making the same request and it would be a real mess. However, since my situation was somewhat \textquotedblleft atypical\textquotedblright , she ended up giving me an extra copy of the list and allowed me to hand everything in individually if I managed to. I spent almost a week away from school, alone at home while my mother was at work, just trying to solve that blessed list. As it happens, my mother, who worked at the same school where I studied, mentioned to me on one of those days that, the night before, that teacher had suggested that my mother hire a private Mathematics tutor to \textbf{solve} that list for me. Hearing my mother tell me that, I said \begin{quote} -- What for? -- I knew you were going to say that... -- Pay someone to solve that list, when I know I am capable of doing it all by myself? -- That is exactly what I told her.
\emph{\textquotedblleft If I know Fernanda well, she will never want to do that. Everything she has ever needed to do, she has always done on her own, and she has always managed. Leave her be at home and she will figure something out.\textquotedblright }. -- And why pay someone to solve that list for me when I am the one who has to do it? I am the one who has to learn! The right thing would be for her to explain things properly or, at least, to ask whether I need any help, or whatever... But never to say something like that! -- I told her that. I said that, instead of saying that, maybe she was the one who should look into what was going on, since you never had problems before and are only having problems in her subject. She went pale, apologized saying that she cannot do it because she has no time, said she is going to move to another city, that she is going to change schools... The only thing she asked me was whether I was not worried about you missing classes. I laughed. \emph{\textquotedblleft I have never, in my life, had to force Fernanda to come to school. She comes because she wants to. When she does not want to come, she does not come. And, if she is saying she wants to stay home to solve that list, I trust her, she knows what she is doing. Besides, she may not be coming to school, but she is at home, studying. She is smart, she will catch up on the other subjects later.\textquotedblright } -- I am going to solve that list! Even more so after what she said! It is a matter of honor now! -- I know you will, you always manage! \end{quote} And that is how I spent the whole week, at home, in front of that list and of a Mathematics book that seemed to be of no help at all. Until, one rather tedious afternoon, I noticed that, on the table where I studied, there was a hand mirror. And it was right in the middle of a little game I decided to play with that mirror, to clear my head a bit, that I ended up noticing something interesting: looking at the image that the mirror formed of a ruler that was also on that table, I realized that the image was \textbf{reversed}, as illustrated, for example, in Figure \ref{regua-e-imagem}. \begin{figure}[!t] \centering \includegraphics[viewport=360 10 0 144,scale=1.3]{regua-e-imagem.jpg} \caption{\label{regua-e-imagem}On the right we can see a ruler placed in front of a plane mirror (here represented by the line segment highlighted in black), more specifically in front of its mirrored surface (highlighted in cyan). On the left we can see the image that this plane mirror was able to form of that ruler.} \end{figure} And since placing the mirror perpendicular to that ruler, at its point $ \boldsymbol{0} $, made me realize that the juxtaposition of the ruler with its image corresponded to a stretch of the real number line, provided each number \reflectbox{$ \boldsymbol{Z} $} (that is, each reversed number $ \boldsymbol{Z} $ appearing in the image) was identified with $ - \boldsymbol{Z} $, that was exactly what allowed me to justify all those infamous sign rules involving positive and negative numbers. How?
\begin{figure}[!t] \centering \includegraphics[viewport=360 10 0 230,scale=1.3]{regua-e-reta.jpg} \caption{\label{regua-e-reta}\textbf{(a)} Juxtaposition of a ruler (on the right) with its image (on the left), obtained by placing a plane mirror (highlighted only in black, without any emphasis on its reflecting surface, merely to stress the symmetry of this figure) perpendicular to the ruler at its point $ \boldsymbol{0} $. \textbf{(b)} The real number line, highlighting the twenty nonzero integers closest to $ \boldsymbol{0} $.} \end{figure} The first thing I noticed (in view of the juxtaposition well illustrated by Figure \ref{regua-e-reta}(a)) was that, just as the number $ \boldsymbol{5} $ was farther from the number $ \boldsymbol{0} $ than the number $ \boldsymbol{3} $, the number \reflectbox{$ \boldsymbol{5} $} was also farther from $ \boldsymbol{0} $ than the number \reflectbox{$ \boldsymbol{3} $}. And this happened only because all the distances between the numbers of the real ruler were \textbf{preserved} by those of the reversed ruler. \begin{figure}[!t] \centering \includegraphics[viewport=360 10 0 305,scale=1.3]{soma-total.jpg} \caption{\label{soma-total}\textbf{(a)} Initial situation where an object (the black pawn) was placed on the number $ \boldsymbol{4} $ of the ruler (on the right) and its image (formed by the plane mirror, again highlighted in black without any emphasis on its reflecting surface) sits on the number \quatrorevertido . \textbf{(b)} Final situation where the same object and, consequently, its image were displaced to the numbers $ \boldsymbol{9} $ and \noverevertido \hspace*{0.01cm} respectively, characterizing both displacements as an addition operation since both the object and its image end up farther from $ \boldsymbol{0} $.} \end{figure} The second thing I noticed was that, when I placed an object (such as the pawn shown in Figure \ref{soma-total}) on position $ \boldsymbol{4} $ of the ruler and moved it to position $ \boldsymbol{9} $, the image of that object also left position \reflectbox{$ \boldsymbol{4} $} and arrived at position \reflectbox{$ \boldsymbol{9} $}. And since this process of leaving position $ \boldsymbol{4} $ of the ruler and arriving at position $ \boldsymbol{9} $ can be interpreted in terms of the \textbf{addition} \begin{equation*} \boldsymbol{4} + \boldsymbol{5} = \boldsymbol{9} \ , \end{equation*} there was no way not to interpret the process of leaving position \reflectbox{$ \boldsymbol{4} $} and arriving at position \reflectbox{$ \boldsymbol{9} $} also in terms of an addition \begin{equation*} \quatrorevertido + \cincorevertido = \noverevertido \ , \end{equation*} since both processes led to numbers that kept the same distance (of nine units) from the number $ \boldsymbol{0} $. \begin{figure}[!t] \centering \includegraphics[viewport=360 10 0 305,scale=1.3]{subtracao-total.jpg} \caption{\label{subtracao-total}\textbf{(a)} New initial situation where the same object of the previous figure (still on the right) was placed on the number $ \boldsymbol{7} $ of the ruler and, consequently, its image (formed by the same plane mirror of the previous figure) now sits on the number \seterevertido .
\textbf{(b)} New final situation where that object and, consequently, its image were displaced to the numbers $ \boldsymbol{2} $ and \doisrevertido \hspace*{0.01cm} respectively, which characterizes both displacements as a subtraction operation since both the object and its image are now closer to $ \boldsymbol{0} $.} \end{figure} Following this same line of reasoning, the third thing I observed was that, since the process of leaving position $ \boldsymbol{7} $ of the ruler and arriving at position $ \boldsymbol{2} $ can be interpreted in terms of the \textbf{subtraction} \begin{equation*} \boldsymbol{7} - \boldsymbol{5} = \boldsymbol{2} \ , \end{equation*} there was also no way for me not to interpret the process of leaving position \reflectbox{$ \boldsymbol{7} $} and arriving at position \reflectbox{$ \boldsymbol{2} $} in terms of a subtraction \begin{equation*} \seterevertido - \cincorevertido = \doisrevertido \ , \end{equation*} since both processes led to numbers that kept the same distance (of two units) from the number $ \boldsymbol{0} $. In view of all this, since it was quite clear to me that there was a correspondence between the real number line and another line, which could be physically built with the help of a semi-infinite ruler provided that \begin{itemize} \item[\textbf{(i)}] a plane mirror was positioned \textbf{perpendicular} to this ruler at its point $ \boldsymbol{0} $ and \item[\textbf{(ii)}] every number \reflectbox{$ \boldsymbol{Z} $} was identified with $ \boldsymbol{-Z} $, \end{itemize} all those sign rules became justified. After all, since the process that can be interpreted as an addition (that of Figure \ref{soma-total}) shows that the addition of a negative number is associated with a displacement to the left (highlighted by the red arrow in Figure \ref{soma-total}\textbf{(b)}), just as already happens with the subtraction of a positive number (also deliberately highlighted by a red arrow in Figure \ref{subtracao-total}\textbf{(b)}), it is possible to state that \begin{equation*} - \left( + \boldsymbol{Z} \right) = + \left( - \boldsymbol{Z} \right) = - \boldsymbol{Z} \ . \end{equation*} Analogously, since the displacement characterizing the subtraction of a negative number points to the right (as Figure \ref{subtracao-total}\textbf{(b)} well illustrates with the help of a blue arrow), just as also happens with the addition of a positive number (also deliberately highlighted, by an arrow of the same color, in Figure \ref{soma-total}\textbf{(b)}), it also becomes possible to state that \begin{equation*} - \left( - \boldsymbol{Z} \right) = + \left( + \boldsymbol{Z} \right) = + \boldsymbol{Z} \ . \end{equation*} After I saw this whole correspondence, everything became so trivial that I managed to do all the calculations on that list in a single afternoon. And, as I lived very close to the school where I studied, as soon as I finished I went there, in the early evening, to hand the solved list to my teacher before her classes ended. About two or three weeks later, with all the lists already graded, the teacher began a roll call, announcing everyone's grades out loud. \begin{quote} -- Maria Fernanda. -- Present. -- \textbf{Ten}, congratulations! \end{quote} At that moment, everyone in the classroom turned to me in a single motion, staring in astonishment.
You could even hear the sound of everyone turning at the same time. After all, after having had to take a remedial exam in Mathematics and having been bullied throughout the whole year precisely by the group the draw had put me in, I had left that group (something nobody there knew until then, since I had disappeared from all the classes), solved that entire exercise list alone, without anyone's help, and was moreover the only person to score \textbf{ten}, not only on that list but in that whole term. \begin{quote} -- What is it, have you never seen me before? \end{quote} After that, I went back to my normal self and never again got a low grade in Mathematics in Basic Education, just as I never again saw anyone in that classroom bully me or any of my friends around me. \section{But why use a plane mirror?} Actually, the fact that I used a plane mirror in that experiment, which made me understand all those sign rules, was not a matter of choice: it was the only mirror I had within reach. However, it is important to make very clear that plane mirrors are indeed the most suitable ones for this purpose. And to make clear why they really are the most suitable, it is worth turning our attention to \textquotedblleft other\textquotedblright \hspace*{0.01cm} types of mirrors whose name does not seem to refer to anything flat: the \textbf{spherical mirrors}. \subsection{\label{subsection:paraxiais}What are spherical mirrors?} Generally speaking, spherical mirrors get this name because they are manufactured with the help of some spherical structure. Figure \ref{mirror-balls} already provides a good example of such mirrors, since the two spheres shown there have surfaces capable of forming rather sharp images of various objects in their surroundings. \begin{figure}[!t] \centering \includegraphics[viewport=460 10 0 305,scale=1.0]{mirror-balls.jpg} \caption{\label{mirror-balls}Two spheres whose outer surfaces were prepared to work as mirrors. Note that, for someone looking at these two spheres head-on, the images that appear on their surfaces are quite sharp in the most central region, although somewhat deformed in the other regions due to the non-flatness of these surfaces.} \end{figure} Of course, it is worth noting that not every spherical mirror has the explicitly spherical shape of those appearing in Figure \ref{mirror-balls}, and a good example of this is the spherical mirror shown in Figure \ref{spherical-mirror}. \begin{figure}[!t] \centering \includegraphics[viewport=460 10 0 460,scale=1.0]{spherical-mirror.jpg} \caption{\label{spherical-mirror}Spherical mirror commonly used in situations where it is necessary to reduce the sizes of the images of nearby objects, thereby enlarging the field of view.} \end{figure} Nevertheless, although this last mirror does not have an explicitly spherical shape, what justifies its name is the fact that it has the shape of a \textbf{spherical cap}: that is, as Figure \ref{corte-esfera} well illustrates, its manufacturing process can be seen in terms of a cut made in a hollow sphere which had (at least) one of its surfaces prepared to reflect light and form sharp images.
\begin{figure}[!t] \centering \includegraphics[viewport=180 10 0 170,scale=1.1]{corte-esfera.jpg} \caption{\label{corte-esfera}Scheme illustrating the logic behind the cut made in a hollow sphere (highlighted in gray) with the help of a plane (highlighted in semitransparent blue) to form a spherical cap that can support a spherical mirror such as, for example, the one in Figure \ref{spherical-mirror}.} \end{figure} In view of this last observation, and regardless of whether a mirror was built using a complete sphere or just a piece of one, the fact is that every spherical mirror can be characterized by a radius $ \boldsymbol{R} $ -- namely, the radius of the sphere that gave rise to it. And a well-known relation between $ \boldsymbol{R} $ and the positions of the object, $ \boldsymbol{p_{\mathrm{ob}}} $, and of the image, $ \boldsymbol{p_{\mathrm{im}}} $, that this mirror forms of the object is \begin{equation} \frac{\boldsymbol{1}}{\boldsymbol{p_{\mathrm{ob}}}} + \frac{\boldsymbol{1}}{\boldsymbol{p_{\mathrm{im}}}} = \frac{\boldsymbol{2}}{\boldsymbol{R}} \ . \label{gauss-1} \end{equation} This is the so-called \textbf{Gauss equation}, which is popularly expressed as \begin{equation} \frac{\boldsymbol{1}}{\boldsymbol{p_{\mathrm{ob}}}} + \frac{\boldsymbol{1}}{\boldsymbol{p_{\mathrm{im}}}} = \frac{\boldsymbol{1}}{\boldsymbol{f}} \ , \label{gauss-2} \end{equation} where the ratio \begin{equation} \boldsymbol{f} = \boldsymbol{R} / \boldsymbol{2} \label{d-focal} \end{equation} is recognized as the \textbf{focal distance} of these spherical mirrors. This ratio is so named because it is the distance between the surface of a spherical mirror and the point toward which all \textbf{paraxial} light rays (that is, rays that strike this surface parallel to the symmetry axis and very close to it) converge. Note that this is precisely the situation illustrated in Figure \ref{light-reflection}, where we see a single light ray departing from point $ \boldsymbol{A} $ and striking the inner surface of a spherical cap: if this surface has been well prepared to work as a mirror, this ray will be reflected from a point $ \boldsymbol{B} $ of the cap toward a point $ \boldsymbol{F} $ belonging to the symmetry axis of the mirror (an axis which, from now on, will be called the \textbf{principal axis}). And since all paraxial light rays will always be reflected toward this same point $ \boldsymbol{F} $, it is precisely this point that can be recognized as the \textbf{focus} of the mirror, which remains at a distance $ \boldsymbol{f} $ from the surface in question. \begin{figure}[!t] \centering \includegraphics[viewport=330 10 0 180,scale=1.3]{light-reflection.jpg} \caption{\label{light-reflection}Side cut of a \textbf{concave} mirror (that is, of a mirror whose mirrored face (highlighted in cyan) is the inner surface of the cap that gave rise to it), with emphasis on its principal axis, which intercepts it at the center $ \boldsymbol{O} $ of its surface. Note that the incident light ray is being reflected toward the point $ \boldsymbol{F} $ (the focus) of this spherical mirror, a point which does not coincide with the center $ \boldsymbol{C} $ of the hollow sphere that gave rise to this mirror.
Since such a reflection always occurs with $ \boldsymbol{\alpha } = \boldsymbol{\beta } $, this focusing of all rays at $ \boldsymbol{F} $ only occurs when the angle $ \boldsymbol{\measuredangle BCO} $ is extremely small (that is, when all these rays are paraxial \cite{sutanto}).} \end{figure} \subsection{What is the relation between plane and spherical mirrors?} Although the proof of the validity of the relations (\ref{gauss-1}), (\ref{gauss-2}) and (\ref{d-focal}) can be found in several bibliographic sources \cite{hecht,savelyev,moyses}, a rather didactic demonstration is also given in Appendix \ref{appendix} of this article for the sake of completeness. However, given the purpose of this article, the question that begs to be asked is: why is it so important to be aware of all this information about spherical mirrors? Indeed, is there any relation between these mirrors and plane mirrors that explains why the latter are the most suitable for the task of understanding the sign rules of Mathematics? To understand the answers to all these questions it is important to note, first, that, sometimes, when we stand before a sphere with a gigantic radius (compared, for example, with our own dimensions), it becomes very hard to recognize that we are really standing before a sphere. That is the case, for example, of the Earth: depending on where we stand on it, it is impossible to recognize its spherical shape by sight alone. Now imagine a situation where the radius of a sphere tends to infinity: in this case it is \textbf{impossible} to distinguish the surface of the sphere from that of an infinite plane. And since, in this case where the radius of a sphere tends to infinity, expression (\ref{d-focal}) shows that \begin{equation} \lim _{\boldsymbol{R} \rightarrow \boldsymbol{\infty }} \left( \frac{\boldsymbol{1}}{\boldsymbol{f}} \right) = \boldsymbol{0} \ , \label{limite} \end{equation} substituting this result into (\ref{gauss-2}) leads us to \begin{equation} \boldsymbol{p_{\mathrm{im}}} = - \boldsymbol{p_{\mathrm{ob}}} \ . \label{resultado} \end{equation} The second thing we can do to understand the answers to these questions is to note that, when dealing with the Gauss equation (whether in the form (\ref{gauss-1}) or in the form (\ref{gauss-2})), the positions $ \boldsymbol{p_{\mathrm{ob}}} $ of an object and $ \boldsymbol{p_{\mathrm{im}}} $ of the image that a spherical mirror forms of this object end up being located with the help of the principal axis. That is, as Figure \ref{convencao} illustrates, \begin{figure}[!t] \centering \includegraphics[viewport=340 10 0 170,scale=1.4]{convencao.jpg} \caption{\label{convencao}Convention adopted to describe all the positions involved in the formation of images of objects by a spherical mirror: whenever an object and/or its image are in front of or behind the mirrored face of the mirror, their positions are represented by positive or negative real numbers, respectively. This is precisely the case of the object (the black knight) appearing on the right: since the mirrored face is turned to this right side, its position $ \boldsymbol{p_{f}} $ is described by a positive real number, which can be perfectly interpreted as the distance between this object and the mirror.
In the case of the other object (the black bishop) on the left (more specifically, behind the mirrored face), its position $ \boldsymbol{p_{a}} $ is described by a negative real number. This negative number can also be interpreted in terms of the distance between this last object and the mirror, only multiplied by $ - \boldsymbol{1} $. Here, it is worth noting that the objects on the two sides of this figure were chosen merely for the sake of illustration and, therefore, should not be interpreted as images of each other. Another thing worth noting is that, although the mirror appearing in this figure is plane, everything said here also holds for a spherical mirror, all the more so because plane mirrors can be interpreted as spherical mirrors with infinite radii.} \end{figure} if we place an object in front of a spherical mirror, these positions are located with the help of the metric assigned to this axis, which, as a matter of \textquotedblleft convention\textquotedblright , is in correspondence with the real number line. After all, according to what Figure \ref{convencao} shows us, while all positions in front of the mirror are described by positive numbers, all those behind the mirror are described by negative numbers. Moreover, due to this correspondence, it is worth noting that the origin $ \boldsymbol{0} $ of this metric sits on the point $ \boldsymbol{O} $ where the mirror and its principal axis intersect. In very general terms, one of the reasons justifying this \textquotedblleft convention\textquotedblright \hspace*{0.01cm} is, for example, the fact that spherical mirrors are able to form two types of images. One of them is what may be called a \textbf{virtual image}, which gets this name because it can only be seen inside the mirror: that is precisely the case, for example, of all the images already seen in Figures \ref{mirror-balls} and \ref{spherical-mirror}. The other type of image a spherical mirror is able to form is the \textbf{real image}, which gets this name because it can be seen outside the mirror (that is, in the \textquotedblleft real world\textquotedblright , right beside us), as Figure \ref{imagem-real} illustrates. \begin{figure}[!t] \centering \includegraphics[viewport=330 10 0 155,scale=1.5]{imagem-real.jpg} \caption{\label{imagem-real}Side cut of a \textbf{concave} mirror illustrating how a real image can be formed. For this purpose, a real object (a candle) was placed in front of the mirror, very close to its focus, and its real image (being also in front of the mirror) could be identified at a slightly more distant position. The two lines (one solid and one dashed) highlighted in this figure (in purple) represent two light rays that leave the real object in different directions and cross again at the place where the image is identified.
Further details about why these rays were highlighted can be found, for example, in Appendix \ref{appendix}.} \end{figure} That is, as intuitive as it may be to interpret $ \boldsymbol{p_{\mathrm{ob}}} $ as the distance between a real object and a mirror (since it is precisely this distance that we can measure in practice), the fact that the image of an object may (as a possibility) be formed either inside or outside a mirror seems to compel us to give a slightly different interpretation to the position $ \boldsymbol{p_{\mathrm{im}}} $, since $ \boldsymbol{p_{\mathrm{im}}} $ needs to be able to make this distinction. In these terms, since assigning such a metric scale to the principal axis allows us not only to distinguish the positions of real and virtual images using positive and negative real numbers, but also to interpret $ \left\vert \boldsymbol{p_{\mathrm{im}}} \right\vert $ (that is, the modulus or norm of $ \boldsymbol{p_{\mathrm{im}}} $) as the distance that this image (real or not) keeps from the mirror, the use of this scale turns out to be very welcome. Given all the observations just made, and especially given the result obtained in (\ref{resultado}), it is not at all hard to conclude that, if we build any line perpendicular to a plane mirror at an arbitrary point $ \boldsymbol{O} $ and give it a metric with its origin at $ \boldsymbol{O} $, this line will extend into the mirror. And since this metric will extend into the mirror without suffering any kind of deformation, owing to the result (\ref{limite}), it will also not be hard at all to verify that, for each point $ \boldsymbol{p_{\mathrm{ob}}} $ on the real line (that is, on the line we actually build in front of the mirrored surface), its image will be, behind the mirror, at the position $ \boldsymbol{p_{\mathrm{im}}} = - \boldsymbol{p_{\mathrm{ob}}} $ identifiable on this extended line. \section{Activities that can be proposed} If we look at this construction from a playful point of view, the act of placing a ruler, or any other object that can be interpreted as a number line, right in front of the mirrored surface of a plane mirror opens a sort of \textquotedblleft portal\textquotedblright \hspace*{0.01cm} through which it is possible to see the \textquotedblleft habitat\textquotedblright \hspace*{0.01cm} of the negative numbers. After all, note that, as Figure \ref{regua-e-reta}\textbf{(a)} has already well illustrated, for any number $ \boldsymbol{Z} $ chosen on the ruler in front of the mirror, it will always be possible to see its opposite \reflectbox{$ \boldsymbol{Z} $}, inside the mirror, at the same distance from its surface. Thus, since recognizing \reflectbox{$ \boldsymbol{Z} $} as $ - \boldsymbol{Z} $ is in full agreement with the property \begin{equation} \boldsymbol{Z} + \left( - \boldsymbol{Z} \right) = - \boldsymbol{Z} + \boldsymbol{Z} = \boldsymbol{0} \label{reais-1} \end{equation} of the real numbers (since it places the number $ \boldsymbol{0} $ as the midpoint between the numbers $ \boldsymbol{Z} $ and $ - \boldsymbol{Z} $), it is precisely this recognition that ratifies the interpretation of this juxtaposition of the ruler with its image, which appears in Figure \ref{regua-e-reta}\textbf{(a)}, as a number line.
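For teachers who also want a computational counterpart of this correspondence, the small sketch below (ours and purely illustrative; the function name \texttt{image} and the chosen numbers are not part of the proposed activity) encodes the plane mirror as the map $ \boldsymbol{Z} \mapsto - \boldsymbol{Z} $ and replays the displacements of Figures \ref{soma-total} and \ref{subtracao-total}:
\begin{verbatim}
def image(z: float) -> float:
    """Plane-mirror map: the image keeps the distance |z| to the mirror at 0."""
    return -z

# Distances to the mirror are preserved (Figure regua-e-reta):
for z in [3, 5]:
    assert abs(image(z)) == abs(z)

# Moving the object from 4 to 9 (the addition 4 + 5 = 9, Figure soma-total)
# moves the image from -4 to -9, so the image also gets farther from 0:
assert image(4 + 5) == image(4) + (-5)   # adding +5 to the object adds -5 to the image

# Moving the object from 7 to 2 (the subtraction 7 - 5 = 2, Figure subtracao-total)
# moves the image from -7 to -2, so both get closer to 0:
assert image(7 - 5) == image(7) - (-5)   # subtracting +5 subtracts -5 from the image

print("all sign-rule checks passed")
\end{verbatim}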
Another point reinforcing that the use of plane mirrors is welcome (within this context of justifying all the sign rules) follows from a behavior already explored in Figures \ref{soma-total} and \ref{subtracao-total}. After all, whenever an object is moved in front of the mirrored surface of such mirrors, its image moves in the \textbf{opposite} direction along the line joining the object to its image. And observing this behavior is excellent, since it already illustrates, for example, why the numbers $ \boldsymbol{Z} $ and $ - \boldsymbol{Z} $ are interpreted as opposites of each other. In fact, note that this opposition between $ \boldsymbol{Z} $ and $ - \boldsymbol{Z} $ can already be explored to justify, for example, the sign rule which says that \textquotedblleft plus with minus is minus\textquotedblright . After all, since the plane mirror sits on the number $ \boldsymbol{0} $, it becomes quite easy to instill the idea that $ \boldsymbol{0} $ is the midpoint between these two numbers. Thus, since the result of the simple average \begin{equation*} \frac{\boldsymbol{Z} + \left( - \boldsymbol{Z} \right) }{2} \end{equation*} must equal this $ \boldsymbol{0} $, this can be explored to make students understand that sign rule, given that this only happens if \begin{equation*} + \left( - \boldsymbol{Z} \right) = - \boldsymbol{Z} \ . \end{equation*} In any case, it is worth reinforcing that asking students to observe that an object and its image move in opposite directions seems to be the most playful route to a good understanding of these sign rules. After all, as already mentioned in Section \ref{intro}, since all the displacements characterizing the addition of negative numbers have the same direction as those characterizing the subtraction of positive numbers, it is not hard to use this to show the validity of the rule \begin{equation*} + \left( - \boldsymbol{Z} \right) = - \left( + \boldsymbol{Z} \right) = - \boldsymbol{Z} \ . \end{equation*} In these terms, since this last comment can be perfectly extended to show the validity of the rule \begin{equation*} - \left( - \boldsymbol{Z} \right) = + \left( + \boldsymbol{Z} \right) = + \boldsymbol{Z} \end{equation*} (since the displacements characterizing the addition of positive numbers and the subtraction of negative numbers have the same direction), a good activity that can be developed to make students effectively understand all these sign rules is: \begin{itemize} \item introduce them to the concept of \textquotedblleft opposite\textquotedblright , exploring activities involving the use of mirrors where they can observe the various images of objects being formed; \item encourage the students to check whether the distance between an object and a plane mirror is the same as the distance between the image and this mirror, by asking them to place a ruler perpendicular to the mirror; and \item make the students realize that placing a ruler perpendicular to this mirror, with its $ \boldsymbol{0} $ touching the mirrored surface, gives rise to a line that can be interpreted as the line containing the real numbers. \end{itemize} Once this is done, it is interesting to ask the students to draw a half-line and place it perpendicular to the mirrored surface of a plane mirror at its point $ \boldsymbol{0} $, as already illustrated in Figure \ref{regua-e-reta}.
Given this construction, the second part of the activity consists of: \begin{itemize} \item asking the students to place some objects on this half-line (erasers, pencil sharpeners etc.) and, by writing down the positions $ \boldsymbol{p_{\mathrm{im}}} $ and $ \boldsymbol{p_{\mathrm{ob}}} $ assumed by these objects and their images respectively, realize that $ \boldsymbol{p_{\mathrm{im}}} = - \boldsymbol{p_{\mathrm{ob}}} $; \item asking the students to move these objects along this half-line, and realize that the motions associated with the operations of addition/subtraction always move these objects and their images away from/toward the mirror; \item having them write down the directions of the motions associated with \begin{itemize} \item[\textbf{(a)}] additions of positive numbers, \item[\textbf{(b)}] subtractions of negative numbers, \item[\textbf{(c)}] additions of negative numbers and \item[\textbf{(d)}] subtractions of positive numbers \end{itemize} and identify which of them share the same direction. \end{itemize} As a consequence of the identification requested in this last item, all these sign rules can be justified. \subsection{A physical observation} However, a very important observation is in order here, one that has to do with something that is usually not well explored in Basic Education: \textbf{Mathematics is the language in which Physics is written}. Incidentally, although I knew, for example, that sending a rocket or a satellite into space required knowing how to do a great many calculations, I myself, in the first year of High School, still did not quite understand why my Mathematics teacher knew so many things about Physics. After all, he was just a Mathematics teacher, and that thing they called Physics, as it was presented to me until the end of High School, was not the Science I already loved: it was just another school subject, in which all the teachers merely asked me to memorize a pile of formulas without ever giving me any explanation of the why of any of them. And why is it important to make this observation about Mathematics being the language in which Physics is written? Because, when we turn our attention precisely to the context of this article, it is not wrong to state that the fact that the principal axis of a spherical mirror corresponds to the real number line is not a mere convention: this correspondence is \textbf{deliberate}, since Mathematics, being the language in which Physics is written, maintains a natural correspondence with reality, with everything that surrounds us. That is, although some people think that Mathematics is only a conception of the human brain, the fact is that it really can be observed in several situations in our reality, and one of the situations where this becomes very clear is, curiously, in front of a mirror. After all, if it were not so, we would never see the real number line by placing a simple ruler in front of a plane mirror. In these terms, an activity that can be carried out in the classroom, not only by Mathematics teachers in Basic Education, but (especially) by Physics teachers in the classes dealing with image formation by spherical mirrors, is exactly the one proposed in this article.
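In such a class, even the limit (\ref{limite}) behind the plane-mirror correspondence can be checked numerically. The short sketch below (ours and purely illustrative; the function name \texttt{image\_position} and the chosen values are arbitrary) solves the Gauss equation (\ref{gauss-2}) for $ \boldsymbol{p_{\mathrm{im}}} $ and shows the result (\ref{resultado}) emerging as $ \boldsymbol{R} $ grows:
\begin{verbatim}
def image_position(p_ob: float, R: float) -> float:
    """Solve 1/p_ob + 1/p_im = 2/R for p_im (spherical mirror, paraxial rays)."""
    return 1.0 / (2.0 / R - 1.0 / p_ob)

p_ob = 10.0                        # object 10 length units in front of the mirror
for R in [30.0, 1e3, 1e6, 1e9]:    # increasingly "flat" mirrors
    print(R, image_position(p_ob, R))
\end{verbatim}
As $R$ increases, the printed $p_{\mathrm{im}}$ approaches $-10$: the virtual image of a plane mirror, as far behind the mirror as the object stands in front of it.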
Of course, in a Physics class dealing with this image formation, the main concerns are different -- such as exploring the characteristics of the images, whether they are larger or smaller, inverted or upright, and so on. However, carrying out an activity that also has a somewhat more mathematical character in a Physics class, so that the students notice (at the very least) that the real number line can be constructed with the help of a ruler and a plane mirror, is very welcome. After all, besides this activity being quite useful for displaying (and exploring) the presence of Mathematics in a way the students do not expect, it is precisely this presence that ends up making clear why, for example, all the positions behind mirrors are described only by negative numbers, this being a very common question among students. \section{\label{conclusions}Final remarks} Although the story I have told throughout this article is a rather personal one (in which I also state, for example, my opinion that I found a good part of the Physics and Mathematics classes I had in basic education very boring), the fact is that this reality of \textquotedblleft memorizing\textquotedblright \hspace*{0.01cm} and \textquotedblleft reproducing\textquotedblright \hspace*{0.01cm} (which was precisely what I found so boring) is still quite recurrent in the classes of the vast majority of basic-education schools in Brazil. However, most of the time this reality is not even the direct fault of the teachers who teach all these subjects but, rather, of the educational system in which everyone is immersed, a system that is incapable of attracting good teachers and, just as much, of valuing those who are already teaching by offering them better training. A good example of this is the fact that a considerable portion of the teachers who taught me in basic education did not even have the proper qualifications to teach all the subjects they taught me. After all, I had Biology classes with a pharmacist, Chemistry classes with a second-year medical student, and almost all the teachers who taught me Mathematics in the (old) fifth, sixth and seventh grades held degrees in Biological Sciences, not in Mathematics. Indeed, that was also the background of the teacher who taught me Physics in the first year of high school: she held a degree in Biological Sciences, not in Physics. And when I asked her, in class, why there is a factor of $ \boldsymbol{1} / \boldsymbol{2} $ in the equation of motion \begin{equation*} \boldsymbol{s} \left( \boldsymbol{t} \right) = \boldsymbol{s_{0}} + \boldsymbol{v_{0} t} + \frac{\boldsymbol{1}}{\boldsymbol{2}} \ \boldsymbol{at^{2}} \end{equation*} of uniformly accelerated motion, she simply \textbf{laughed} (most likely to disguise the fact that she did not know the answer), and that was not all: besides laughing, she said that \textbf{I should not worry about it because never, in my whole life, would I need to know why that factor of $ \boldsymbol{1} / \boldsymbol{2} $ was there}. Some years later, I obtained my \textbf{doctorate in Physics}. Of course, at times I even feel like meeting that teacher again, just to remind her of this episode and to be able to explain to her the reason for that factor of $ \boldsymbol{1} / \boldsymbol{2} $. 
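Incidentally, the answer that was denied to me back then fits in just a few lines. For a motion with constant acceleration $ a $, the velocity grows linearly with time, and the position follows from a simple integration: \begin{equation*} v \left( t \right) = v_{0} + at \qquad \Longrightarrow \qquad s \left( t \right) = s_{0} + \int_{0}^{t} v \left( t' \right) dt' = s_{0} + v_{0} t + \frac{1}{2} \ a t^{2} \ . \end{equation*} That is, the factor of $ 1/2 $ is nothing more than the result of integrating $ t $ (or, for those who prefer to avoid calculus, the area of the triangle under the graph of $ v $ versus $ t $). 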
After all, besides it being quite possible that she still does not know the answer to the question I asked her, I gather that behind what she told me there may have been a certain prejudice: since a good portion of the students at that school were children of housemaids, bricklayers and parents with other professions that are (unfortunately) seen as lesser by present-day Brazilian society (as was also my case), she probably assumed that nobody in that classroom would go very far in their studies. In other words, however blameless most teachers may be for the possible gaps in their training, there is no reason whatsoever for them to doubt the students' capacity to learn, much less when that doubt happens to be steeped in some prejudice. Indeed, doubting the capacity that students possess, for whatever reason, or even discouraging any pertinent question they raise inside a basic-education classroom, is downright counterproductive in light of what is asserted, for example, by Jean Piaget's \textbf{theory of cognitive development} \cite{piaget1,piaget2,piaget3}. After all, it is precisely around the time students enter the second half of middle school that, according to this theory, most of them become capable of solving problems by creating concepts and ideas, as well as by making use of formal abstract reasoning. That is, even though there are answers that do require knowledge somewhat deeper than what is offered in basic education, these students are most likely perfectly capable not only of understanding any answer presented to them in an intelligible way, but even of obtaining such answers on their own, provided a discussion is fostered to that end (note that the true story I have just told in this text is a good example of this). And it is precisely within this context, of fostering a discussion, that this article, which you, reader, are now finishing, fits. Besides bringing a rather playful proposal through which a teacher can address this topic of sign rules, this same proposal can also foster various discussions that may even stray from the scope of this topic, while still remaining within a niche that belongs to Physics and Mathematics. Indeed, one of the other discussions that the use of mirrors in the teaching of Mathematics can foster concerns, for example, the mathematical concept of the \textbf{image set} associated with a function; after all, it is no accident that this set bears precisely the name of what a mirror is able to form. However, since any additional comment I might make about this would only lengthen this text (which is already a bit long) unnecessarily, I prefer to leave it for a future article. \section{Acknowledgments} This work was unfortunately not funded by any agency, private or governmental, because, contrary to what happens in some of the great nations, being a scientist and a teacher here in Brazil still seems to be an act of heroism and of defiance toward the system that has been established here for centuries. Nevertheless, I deeply thank all the people who took part, directly or indirectly, in this piece of my history that I have presented in this article. 
After all, had it not been for all of them, and for all the happy and unhappy challenges they placed in my path, I would never have obtained all the proof that I was indeed on the right track, toward \textquotedblleft knowing how to do every possible calculation, especially all those that allow one to understand the things of the Universe\textquotedblright , even though this system was already conspiring, back then, to keep other people like me from having access to such knowledge. This would never be my case because, besides my always having believed in my own capacity (and never having doubted it), another person also always believed: my mother, Celsina Jacinta de Araujo, whom I thank deeply not only for all her help in building my trajectory, but also for having (even) come to my defense, starring in one of the main scenes I recounted in this article. In any event, although I have not mentioned the names of the other characters, I must mention the names of two people who, in some way, took part (in)directly in the construction of this article. One of these people is Prof. Dr. Vinicio de Macedo Santos (FE-USP), given that it was during one of his classes of \textquotedblleft EDM0427 -- Metodologia do Ensino da Matemática I\textquotedblright \hspace*{0.01cm} that I, exactly sixteen years ago and still an undergraduate student, presented (for the first time) this approach of mine to the teaching of sign rules. The other person I must thank here is my great friend, Prof. Dr. Leandro Daros Gama (IFSP), for his careful reading of this article, as well as for the additional suggestions that made this text much better to read. Before I finish this text, however, it would not be fair to leave out some people who, although I do not even know them, made publicly available some extremely beautiful images that became part of the presentation of this article. The first of them are those who, under the usernames \textquotedblleft photosforyou\textquotedblright , \textquotedblleft fireboltbyl\textquotedblright \hspace*{0.01cm} and \textquotedblleft S. Hermann \& F. Richter\textquotedblright , made publicly available the photographs that appear in Figures \ref{mirror-balls}, \ref{spherical-mirror} and \ref{jaguar} respectively \cite{photosforyou,fireboltbyl,pixel2013}. The other people who also deserve my thanks here are those who, under the usernames \textquotedblleft JJuni\textquotedblright , \textquotedblleft OpenClipart-Vectors\textquotedblright \hspace*{0.01cm} and \textquotedblleft Clker-Free-Vector-Images\textquotedblright , also made publicly available some images that I ended up editing to create Figures \ref{regua-e-imagem}, \ref{regua-e-reta}, \ref{soma-total}, \ref{subtracao-total}, \ref{convencao} and \ref{imagem-real} with the help of some \LaTeX \hspace*{0.01cm} programming \cite{jjuni,openclip-1,openclip-2,clker-free}. And, since I have just mentioned \LaTeX , one last acknowledgment is also due to the person who, under the username \textquotedblleft user121799\textquotedblright , published the \textquotedblleft .tex\textquotedblright \hspace*{0.01cm} code found in Ref. \cite{user121799}, which I used to generate Figure \ref{corte-esfera}.
\section{Introduction and summary} Recently, it has become clear that the problem of moduli stabilization may find its resolution in the context of flux compactifications (see {\it e.g.} \cite{reva, revb, revc} for recent reviews). In most recent models (starting with \cite{kklt}) a crucial role is played by nonperturbative effects, which can generate a superpotential for the K\"{a}hler moduli. Within the context of M-theory compactifications on Calabi-Yau fourfolds, as was first noted in \cite{w}, the nonperturbative effects arise from fivebrane instantons wrapping internal divisors. In a dual IIB picture this setup is equivalent to compactifications on Calabi-Yau threefolds, with instantons arising from D3-branes wrapping internal divisors. In \cite{w} Witten showed that, in the absence of flux, a necessary condition for the generation of a superpotential is that the divisor which the fivebrane wraps possesses a certain topological property: its arithmetic genus must be equal to one. When there are exactly two fermion zeromodes (corresponding to rigid isolated cycles) a superpotential {is} indeed generated. If more zeromodes are present, cancellations may occur. The lift of the arithmetic genus criterion to F-theory in general, and to IIB orientifolds in particular, was given by Robbins and Sethi in \cite{rs}. Recently attention has been drawn to the possibility that the arithmetic genus criterion may be violated in the presence of flux \cite{ktt, saul, kall} (a discussion of the effects of flux was already presented in \cite{rs}). The authors of \cite{kall} defined a flux-dependent generalization of the arithmetic genus, $\chi_F$, to be discussed in more detail in the following. $\chi_F$ is not, strictly speaking, an index: it cannot be defined as the dimension of the kernel minus the dimension of the cokernel of some operator. At present it is not clear what the arithmetic genus criterion should be replaced by in the presence of fluxes. In particular, it is not clear whether the arithmetic genus criterion should simply be replaced by the condition $\chi_F=1$ or not. Moreover, it is conceivable that instantons with four or more fermionic zeromodes contribute to the superpotential\footnote{Instantons with more than two zeromodes are known to contribute to higher-derivative and/or multi-fermion couplings \cite{bw}. Here we examine whether such instantons can contribute to the {\it superpotential}.}, as there exist higher-order fermionic terms in the worldvolume action of the fivebrane which may be used in order to soak up the extra zeromodes. Clarifying these issues is crucial for realistic model-building. The computation of M-theory instantons goes back to the work of Becker et al.\ \cite{bbs}. These techniques were further elaborated by Harvey and Moore \cite{hm} in the context of $G_2$ compactifications. The subject of fivebrane instantons in M-theory has largely remained unexplored, mainly due to the exotic nature of the fivebrane worldvolume theory. Instanton effects in heterotic M-theory have been considered in \cite{ovru, lima, angu, buch}. Further progress beyond the computation of instantons with two zeromodes has been hindered by the lack of knowledge of the theta-expansions of the supervielbein and $C$-field in eleven-dimensional superspace. Recently there have been technical advances in this direction, reported in \cite{t}, which applies the normal-coordinates approach \cite{norcor} to the case of eleven-dimensional superspace. 
Using this method, the expression for linear backgrounds was derived to all orders in $\theta$, i.e.~up to and including terms of order $\theta^{32}$. This constitutes significant progress, taking into account the fact that previously this expansion was known explicitly only to order $\theta^2$ \cite{nicolai}. Results exact in the background fields were also presented up to and including terms of order $\theta^5$. It is the purpose of this paper to perform an explicit computation in the case of fivebrane instantons with four fermion zeromodes, in the context of M-theory compactifications on Calabi-Yau fourfolds in the presence of (normal) flux. We find that no superpotential is generated in this case. Therefore, our result does not rule out the possibility that in the presence of flux the arithmetic genus criterion should be replaced by the condition $\chi_F=1$. As this is a somewhat technical paper, in the following subsections of the introduction we have tried to put it in context and to summarize in a self-contained way the strategy and the result of the computation. \subsection{Review of the arithmetic genus criterion} In \cite{w} Witten argued that M-theory compactifications on Calabi-Yau fourfolds may generate a nonzero superpotential in three dimensions through fivebrane instantons wrapping divisors of arithmetic genus one. We will now review his argument: consider a supersymmetric M-theory background of the form $\mathbb{R}^{1,2}\times X$, where $X$ is a Calabi-Yau fourfold\footnote{Eventually we will work in Euclideanized eleven-dimensional space.}. Provided a certain topological condition is satisfied, this is a consistent M-theory background \cite{wittflux,wittseth}. Compactification on $X$ results in an ${\cal N}=2$ theory in three dimensions (four real supercharges). This theory is very similar to a supersymmetric ${\cal N}=1$ theory in four dimensions, and we may think of it (although this is not necessary) as a dimensional reduction from four to three dimensions. Similarly to the case in four dimensions, the kinetic terms are obtained by integration over the whole superspace, whereas the Yukawa couplings and the mass terms are obtained by integrating over half the superspace (F-terms). Crucially, powerful nonrenormalization theorems prevent radiative corrections to the F-terms. Let us now describe the structure of the so-called `linear multiplets', which play a distinguished role in the discussion of \cite{w} and in the following: the bosonic part of a linear multiplet in four dimensions consists of a second-rank antisymmetric tensor and a real scalar. The fact that the antisymmetric tensor is dual in four dimensions to a scalar can be promoted, at the level of superfields, to a duality between linear and chiral supermultiplets. Upon reduction to three dimensions the chiral multiplets give rise to chiral multiplets, whereas the linear multiplets become vector multiplets. In analogy to the situation in four dimensions, a vector in three dimensions is dual to a scalar {\it provided there is no Chern-Simons term} arising from the compactification on the fourfold. In the absence of fluxes there is indeed no Chern-Simons term which could obstruct the dualization, but this is generally no longer the case in the presence of fluxes \cite{haaca, haacb}. 
To be more explicit: upon compactification of M-theory on a Calabi-Yau fourfold, one obtains $b_2$ vectors from the threeform gauge field \begin{align} C=\sum_{I=1}^{b_2}A^I(x)\wedge \omega_I+\dots ~, \end{align} where $x$ is a (three-dimensional) spacetime coordinate and $\{\omega_I, ~I=1,\dots b_2\}$ is a basis of $H^{2}(X,\mathbb{R})$, which of course coincides with $H^{1,1}(X,\mathbb{R})$ for a Calabi-Yau fourfold. In the absence of a Chern-Simons term in three dimensions the $A^I$s can be dualized to $b_2$ scalars, which we will call the `dual scalars' $\phi^I_D$, $d\phi^I_D=\star dA^I$. Note that perturbatively there are Peccei-Quinn symmetries whereby the dual scalars are shifted by constants; as we will see in the following, these continuous symmetries can be broken by instantons to discrete subgroups thereof. In addition to the $\phi_D^I$s there are $b_2$ scalars, $\phi^I$, from the deformations of the K\"{a}hler form $J$, \begin{align} J=\sum_{I=1}^{b_2}\phi^I(x)\omega_I ~. \label{jexp} \end{align} After dualization, the bosonic fields of each vector multiplet in three dimensions (these are the `descendants' of the linear multiplets in four dimensions) consist of a pair of real scalars ($\phi^I$, $\phi^I_D$). The superpotential $W$ depends holomorphically on $\phi^I+i\phi^I_D$. Following \cite{w}, we note that all terms in the superpotential depend on the vector multiplets. Indeed, if there were any terms in the superpotential which did not depend on the vector multiplets, they could be computed by scaling up the metric of $X$ (since such terms would be independent of the K\"{a}hler class, which belongs to the vector multiplets). But in the limit where the metric is scaled up, M-theory reduces to supergravity and $\mathbb{R}^{1,2}\times X$ becomes an exact solution -- showing that there is no superpotential in this case. To look for instantons which may generate a superpotential, we note that the threeform gauge field is (magnetically) sourced by the fivebrane. Hence, a relevant instanton in three dimensions is seen from the eleven-dimensional point of view as a fivebrane wrapping a six-cycle $\Sigma$ in the Calabi-Yau fourfold. In order for the instanton to preserve half the supersymmetry (so that it may generate an F-term), the cycle $\Sigma$ must be a holomorphic divisor. This fact is re-derived in detail in section \ref{supersymmetriccycles}, in the presence of normal flux. As can be verified explicitly, the contribution of the instanton includes the classical factor \begin{align} \int d^2\theta_0 ~e^{-(\mathrm{Vol}_{\Sigma}+i\phi_D)}~, \label{gras} \end{align} where $\mathrm{Vol}_{\Sigma}$ is the volume (in units of the eleven-dimensional Planck length $l_P$) of the six-cycle the fivebrane is wrapping, and $\phi_D$ is the linear combination of dual scalars which constitutes the superpartner of $\mathrm{Vol}_{\Sigma}$. That is, the scalars ($\mathrm{Vol}_{\Sigma}$, $\phi_D$) form the real and imaginary parts of a chiral superfield, as is expected from the holomorphic property of the superpotential (which is, in its turn, a consequence of supersymmetry). For the generation of a superpotential, the fermionic terms in the fivebrane action should conspire so as to soak up all but two of the fermion zeromodes. The Grassmann integration in (\ref{gras}) above is the integration over the remaining fermionic zeromodes. As was then argued in \cite{w}, apart from the classical factor above, the superpotential should be independent of the K\"{a}hler class. 
This is because the dependence on $\phi_D$ is fixed by the magnetic charge of the instanton, and so the dependence on $\mathrm{Vol}_{\Sigma}$ is in its turn fixed by holomorphy. Apart from the classical factor above, the steepest-descent approximation of the path integral around the fivebrane instanton includes a one-loop determinant, which is independent of the K\"{a}hler class but depends holomorphically on the complex structure moduli. The one-loop result is in fact exact, as higher loops do not contribute to the superpotential. This can be seen as follows: higher loops would be proportional to positive powers of $l_P$ and would therefore scale as inverse powers of the volume; but, as already mentioned, apart from the classical factor the superpotential cannot depend on the K\"{a}hler class. A necessary criterion for a divisor $\Sigma$ to contribute to the superpotential is that its arithmetic genus $\chi$, \begin{align} \chi=\sum_{p=0}^3(-1)^p h^{p,0}(\Sigma)~, \end{align} is equal to one (for instance, a rigid isolated divisor with $h^{0,0}=1$ and $h^{1,0}=h^{2,0}=h^{3,0}=0$ satisfies $\chi=1$). This was arrived at in \cite{w} by the following line of argument: first note that, in the limit where $\Sigma$ is scaled up, the $U(1)$ rotations along the normal direction to $\Sigma$ inside the fourfold become an exact symmetry (dubbed `$W$-symmetry' in \cite{w}) of M-theory. On the other hand, in the absence of fluxes the worldvolume theory of the fivebrane has a one-loop $W$-anomaly equal to $\chi$. It must then be that the exponential in (\ref{gras}) has $W$-charge equal to $-\chi$.\footnote{Note that Witten's paper \cite{w} was written before the cancellation of the normal-bundle anomaly of the fivebrane was properly understood in \cite{hmm}. It would be interesting to derive this result directly using the techniques of \cite{hmm}.} Moreover, it is straightforward to see that the fermionic zeromode measure carries $W$-charge equal to one. It follows that a necessary condition for the generation of a superpotential is $\chi=1$; this is the arithmetic genus criterion. \subsection{Caveats to the arithmetic genus criterion} \label{caveats} As already anticipated in \cite{w}, the arithmetic genus criterion may be violated in cases where the assumption of $W$-symmetry fails. This can occur if there are couplings of the fermions to normal derivatives of the background fields (i.e.~normal to the divisor $\Sigma$ inside $X$). Indeed, in the presence of flux such couplings are present already in the `minimal' quadratic-fermion action $\theta\slsh{\cal D}\theta$, where $\slsh{\cal D}$ is a flux-dependent Dirac operator which we will define more precisely in the following. Even in the absence of flux, $W$-violating couplings will generally be present at higher orders in the fermions; they will, however, be suppressed in the large-volume limit. A further complication is the following: in the presence of flux, there is a Chern-Simons term in the three-dimensional low-energy supergravity, \begin{align} T_{IJ}d\phi^I\wedge A^J~, \label{cs} \end{align} which will a priori obstruct the straightforward dualization of the vectors $A^I$ to scalars $\phi_D^I$ \cite{haaca,haacb}. One may therefore worry about the fate of holomorphy, on which the derivation of the arithmetic genus criterion relied. (Recall that the holomorphic property of the superpotential allowed us to take the large-volume limit, in which the $W$-symmetry becomes exact.) 
The object $T_{IJ}$ which enters the Chern-Simons term above is a constant symmetric matrix given by \begin{align} T_{IJ}&:=\frac{\partial^2 T }{\partial\phi^I\partial\phi^J}=\int_X F\wedge\omega_I\wedge\omega_J\nonumber\\ T&:=\frac{1}{2}\int_X F\wedge J\wedge J=\frac{1}{2}T_{IJ}\phi^I\phi^J~, \label{tdef} \end{align} where $F$ is the internal component of the fourform flux. Its quantization condition is equivalent to the expansion \begin{align} F=\sum_{a=1}^{b_4}n^a\omega_a+\sum_{I=1}^{b_2}dA^I\wedge\omega_I~, \end{align} where $\{ \omega_a, ~a=1\dots b_4\}$ is a basis of $H^4(X,\mathbb{Z})$, and the $n^a$s are integers. An additional effect of the flux is the gauging of the Peccei-Quinn isometries. The gauging is completely determined by the constant matrix $T_{IJ}$. Contrary perhaps to the na\"{\i}ve expectation, the dualization of vectors to scalars can proceed more-or-less straightforwardly also in the case with fluxes. Let us assume for simplicity that we work in a basis of $H^{2}(X,\mathbb{R})$ such that $T_{IJ}$ is diagonal, and for the moment let us assume that the complex structure moduli are frozen. It then follows from the work of \cite{bhs} (which is based on general results on three-dimensional gauged supergravities \cite{whs}) that {\it (i)} the isometries $\phi_D^I\rightarrow \phi_D^I+\mathrm{constant}$ corresponding to zero eigenvalues of $T_{IJ}$ are {\it not} gauged and {\it (ii)} if $\phi_D^I\rightarrow \phi_D^I+\mathrm{constant}$ is an isometry which {\it does} get gauged, the superpotential cannot depend on $\phi_D^I$ (nor can it depend on the K\"{a}hler modulus $\phi^I$, by holomorphy).\footnote{On the other hand, if there are additional fields which are charged under the gauge potential, this conclusion may be relaxed \cite{haack}. We thank M. Haack for pointing this out. In the present context, such phenomena may presumably arise in the presence of M2 branes \cite{gano} and will not be examined here.} This picture is consistent with the conclusions of \cite{poortomasiello}, who find (in the context of IIA string theory) that those isometries which are gauged by the flux are protected from quantum corrections. \subsection{The results of the present paper} In the presence of fluxes, the scalar potential of the low-energy three-dimensional supergravity is still given in terms of the holomorphic superpotential $W$, but in addition will also generally depend on $T$. On the other hand the fermion bilinears \begin{align} \chi^I\chi^JD_ID_JW+\mathrm{c.c.}~, \label{fbils} \end{align} where $D_I$ is a K\"{a}hler-covariant derivative, depend solely on the holomorphic superpotential $W$, even in the presence of fluxes \cite{whs}. (Fermion mass terms of the form $\bar{\chi}^I\chi^J M_{IJ}$ do depend on $T$, as we will see in section \ref{gravitinokkreduction}.) Hence, a straightforward way to obtain instanton corrections to the superpotential is to compute the coupling (\ref{fbils}). For the purpose of examining the possible generation of a superpotential by instanton effects, it follows from the discussion in section \ref{caveats} that we need only examine whether the coupling (\ref{fbils}) is generated for fermions $\chi^I$ which correspond to zero eigenvalues of $T_{IJ}$ (we may consider a basis where $T_{IJ}$ is diagonal, for simplicity). 
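To illustrate the logic of {\it (i)} and {\it (ii)} with a toy case (the numbers here are purely illustrative and do not refer to any particular fourfold), suppose that $b_2=2$ and that in such a diagonal basis \begin{align} T_{IJ}=\begin{pmatrix} t & 0 \\ 0 & 0 \end{pmatrix}~,\qquad t\neq 0~.\nonumber \end{align} Then the isometry $\phi_D^1\rightarrow \phi_D^1+\mathrm{constant}$ is gauged, so that the superpotential can depend neither on $\phi_D^1$ nor, by holomorphy, on $\phi^1$; the shift of $\phi_D^2$, corresponding to the zero eigenvalue, remains ungauged, and it is only for the associated fermion $\chi^2$ that the coupling (\ref{fbils}) needs to be examined. 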
Hence, we may assume that the K\"{a}hler moduli corresponding to nonzero eigenvalues of $T_{IJ}$ are frozen to zero\footnote{Examples of fourfolds for which there are choices of fourform flux such that $T_{IJ}$ vanishes identically were examined in \cite{mayra}.}. In other words we can assume, as follows from (\ref{jexp},\ref{tdef}), that we are in the region of the K\"{a}hler moduli space where: \begin{align} \int_{X}F\wedge J\wedge\omega_I =0; ~~~~~I=1\dots b_2~. \label{wderiv} \end{align} If no such region exists, {\it i.e.} if $T_{IJ}$ has no zero eigenvalues, all isometries are gauged and there can be no superpotential dependence on the K\"{a}hler moduli: the superpotential is protected against instanton contributions. Moreover, condition (\ref{wderiv}) implies that \begin{align} \omega_I\lrcorner F=0~, \label{dkn} \end{align} for all $\omega_I$s corresponding to zero eigenvalues of $T_{IJ}$. This observation somewhat simplifies the rather tedious computational task of this paper. In particular, we may assume we are in the region of the K\"{a}hler moduli space where the flux is primitive: $J\lrcorner F=0$. Furthermore, for the purposes of the present computation we may assume that the complex structure moduli are frozen to values such that the internal fourform flux is of type (2,2). These are exactly the conditions which ensure that {\it the flux is compatible with supersymmetry}, as we will see in detail in section \ref{mtheoryonfourfolds}. Despite the fact that certain conceptual subtleties remain, there are clear rules for instanton computations in M-theory, first put forward in \cite{bbs} and subsequently elucidated in \cite{hm}. We will describe the procedure schematically here, relegating the details to the main body of the paper. In order to compute the instanton contribution to the coupling (\ref{fbils}), one first decomposes the eleven-dimensional gravitino in terms of three-dimensional fermions $\chi^I$, \begin{align} \Psi_m=\chi^I\otimes\Omega_{I,m}\xi~, \label{kkgravit} \end{align} where $\xi$ is the covariantly constant spinor of the Calabi-Yau fourfold\footnote{In the presence of flux, the internal space becomes a warped Calabi-Yau. As we will see, however, the effect of the warp factor can be ignored at leading order in the large-volume expansion.} and $\Omega_I$ is a one-form on $X$ valued in the Clifford algebra $Cl(TX)$. Next, from the fivebrane action one reads off the coupling of the eleven-dimensional gravitino to the fivebrane worldvolume fermion $\theta$, schematically: \begin{align} V=\sum_{n} c_n\Psi\theta^{2n+1} ~, \end{align} for some, possibly flux-dependent, `coefficients' $c_n$. The coupling $V$ is the `gravitino vertex operator'. Finally, to read off the coefficient $D_ID_JW$ in (\ref{fbils}) one evaluates the correlator $\langle VV\rangle$ in the worldvolume theory of the fivebrane. Note that the worldvolume fermions are valued in the normal bundle to the fivebrane, which is the sum of $T\mathbb{R}^3$ (after passing to Euclidean signature) and the normal bundle to the divisor inside the fourfold. Thus, each worldvolume fermion should be thought of as tensored with a two-component spinor of $Spin(3)$. The main result of the present paper is that {\it instantons with exactly four fermionic zeromodes do not contribute to the superpotential.} In deriving this result we have made the simplifying assumption that both the curvature of the worldvolume self-dual tensor and the pull-back of the threeform flux onto the worldvolume vanish. 
This is what we call the condition of `normal flux'. One major technical difficulty with the present computation is the explicit expansion of the fivebrane action in terms of the worldvolume fermion, the so-called `theta-expansion'. This, in its turn, stems from the theta-expansion of the eleven-dimensional background superfields on which the fivebrane action depends. Until recently, this expansion had only been fully worked out to quadratic order in the fermions. The present computation is now possible thanks to the recent results of \cite{t} in which, among other things, the theta-expansion of the eleven-dimensional superfields was computed explicitly to fifth order in the fermions. We should at this point elaborate on what we mean by `the fivebrane action'. The fivebrane dynamics was given in terms of covariant field equations in \cite{howea, howeb}. For the application we are interested in, however, one needs to work with an action. As is well known, the worldvolume theory of the fivebrane contains a self-dual antisymmetric tensor which renders the formulation of an action problematic. A covariant supersymmetric action for the fivebrane can be constructed with the help of an auxiliary scalar \cite{pst}. Alternatively, the auxiliary field can be eliminated at the expense of explicitly breaking Lorentz invariance \cite{schw}. The equivalence of all the different formulations was shown in \cite{equi}. Here we will use the covariant action of \cite{pst}. An important cautionary remark is in order. In \cite{wfive} Witten pointed out that a useful way to define the action of a self-dual field is in terms of a Chern-Simons theory in one dimension higher. This definition, for spacetime dimensions higher than two, involves a suitable generalization of the notion of spin structure -- on a choice of which the self-dual action depends. These issues have been recently clarified by Belov and Moore \cite{beloa, belob}. Unfortunately, the action of \cite{pst} does not take these topological aspects into account; it is, however, at present our only available covariant {\it supersymmetric action} for the fivebrane. \subsection{Outline} We now give a detailed plan of the rest of the paper. Section \ref{thetaexpansions} relies on \cite{t}, treating the theta-expansion of the various superfields of the eleven-dimensional background, with the aim of applying it to the worldvolume theory of the fivebrane. The theta-expansion of the sixform potential was not considered in \cite{t}, and this is addressed in section \ref{thetaexpansions1}. The worldvolume theory of the fivebrane is considered in section \ref{pst} in the framework of the covariant action of \cite{pst}. Eventually we make the simplifying assumption that the flux is `normal', {\it i.e.} that both the field-strength of the worldvolume antisymmetric tensor and the pull-back of the background threeform flux onto the fivebrane worldvolume vanish. The main result of this section is the form of the gravitino vertex operator in the case of normal flux, equation (\ref{grv}). Section \ref{mtheoryonfourfolds} considers M-theory backgrounds of the form of a warp product $\mathbb{R}^{1,2}\times_{w}X$, where $X$ is a Calabi-Yau fourfold. (Eventually we Wick-rotate to Euclidean signature and take the large-volume limit in which the warp factor becomes trivial.) Requiring ${\cal N}=2$ supersymmetry in three dimensions (four real supercharges) implies certain restrictions on the fourform flux, equation (\ref{gform}). 
Next we consider fivebrane instantons such that the worldvolume wraps a six-cycle ${\Sigma}\subset X$, and we assume that $X$ can be thought of as the total space of the normal bundle of ${\Sigma}$ inside $X$. As discussed in the introduction, this approximation becomes more accurate as the size of ${\Sigma}$ is scaled up. Imposing the normal-flux condition, the form of the background flux simplifies further, equations (\ref{fffn}, \ref{nfff}). In section \ref{supersymmetriccycles} we show that, in the case of normal flux, demanding that the instanton preserve one-half of the supersymmetry of the background implies that ${\Sigma}$ is an (anti)holomorphic cycle. Section \ref{zeromodes} treats the worldvolume fermion zeromodes of the flux-dependent Dirac operator, equation (\ref{dirac}). After decomposing the background fermion in terms of forms on the fivebrane, we derive the explicit expression of the fermion zeromodes (\ref{zm}). This result agrees with the analysis of \cite{saul, kall}, in the case of normal flux and provided the warp factor is trivial. This can consistently be taken to be the case in the large-volume limit, as explained in section \ref{mtheoryonfourfolds}. In section \ref{instantoncontributions} we finally come to the main subject of the paper, the instanton contributions to the superpotential. Section \ref{gravitinokkreduction} discusses the Kaluza-Klein Ansatz for the gravitino, equation (\ref{kkgr}). Next, the Kaluza-Klein ans\"{a}tze for the gravitino as well as for the fermion zeromodes are substituted into the expression (\ref{grv}) for the gravitino vertex operator. The result of the fermion zeromode integration in the case of two zeromodes is briefly discussed in section \ref{ofrzm}. In section \ref{frzm} it is shown that in the case of four fermion zeromodes the result of the zeromode integration is zero, i.e.~in this case the instanton contribution to the superpotential vanishes. The appendices contain several useful technical details. For quick reference, we have also included an index of our conventions and notation in section \ref{notation/conventions}. \section{Theta-expansions} \label{thetaexpansions} This section examines the theta-expansions of the various eleven-dimensional superfields. Except for the expansion of the sixform, which is given in section \ref{thetaexpansions1}, these were treated in reference \cite{t}, to which the reader is referred for further details. For reasons which are explained below (\ref{grv}), for our purposes we will not need the explicit form of the $\Psi^2$ contact terms. It also suffices to keep terms up to and including order $\theta^3$. Also note that we are using standard superembedding notation, whereby target-space indices are underlined. Further explanation of the notation can be found in appendix \ref{notation/conventions}. 
\subsection{Vielbein and threeform } Using the formul{\ae} in \cite{t}, to which the interested reader is referred for further details, we find \begin{alignat}{2} E_{m}{}^{\underline a}\def\unA{\underline A}&=e_{m}{}^{\underline a}\def\unA{\underline A} -\frac{i}{2}({\cal D}_m\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{24}({\cal D}_m\theta}\def\Th{\Theta}\def\vth{\vartheta\mathfrak{G}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{24}(\theta}\def\Th{\Theta}\def\vth{\vartheta{\cal R}_{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P}{\cal I}_m{}^{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)\nonumber\\ &-i(\Psi_m\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)+\frac{1}{6}(\Psi_m\mathfrak{G}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{6}(\Psi_{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P}{\cal I}_m{}^{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) +{\cal O}(\Psi^2, \theta}\def\Th{\Theta}\def\vth{\vartheta^5)~, \label{v} \end{alignat} \vfill\break where \begin{alignat}{2} (\mathfrak{G})_{\underline{\alpha}}{}^{\underline{\phantom{\alpha}}\!\!\!\beta}&:=\frac{1}{576}\Big\{ (\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B\underline c}\def\unC{\underline C\underline d}\def\unD{\underline D\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})^{\underline{\phantom{\alpha}}\!\!\!\beta} -2(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline e}\def\unE{\underline E})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B\underline c}\def\unC{\underline C\underline d}\def\unD{\underline D\underline e}\def\unE{\underline E})^{\underline{\phantom{\alpha}}\!\!\!\beta} -16(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline b}\def\unB{\underline B\underline c}\def\unC{\underline C\underline d}\def\unD{\underline D})^{\underline{\phantom{\alpha}}\!\!\!\beta}\nonumber\\ &+24(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline c}\def\unC{\underline C\underline d}\def\unD{\underline D})^{\underline{\phantom{\alpha}}\!\!\!\beta} \Big\}G_{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B\underline c}\def\unC{\underline C\underline d}\def\unD{\underline D}~, \label{fgdef} \end{alignat} \begin{alignat}{2} ({\cal I}_{m}{}^{\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! 
f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})_{\underline{\alpha}}{}^{\underline{\phantom{\alpha}}\!\!\!\beta}&:=-\frac{1}{48}\Big\{ (\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_m{}^{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})^{\underline{\phantom{\alpha}}\!\!\!\beta} +4(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{m\underline a}\def\unA{\underline A})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})^{\underline{\phantom{\alpha}}\!\!\!\beta} -4(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline b}\def\unB{\underline B\underline e}\def\unE{\underline E})^{\underline{\phantom{\alpha}}\!\!\!\beta}e_m{}^{\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F} \nonumber\\ &+6(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{m})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline e}\def\unE{\underline E\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F})^{\underline{\phantom{\alpha}}\!\!\!\beta} -12(\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A})_{\underline{\alpha}}(\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A\underline e}\def\unE{\underline E})^{\underline{\phantom{\alpha}}\!\!\!\beta}e_m{}^{\underline{\phantom{e}}\!\!\!\! f}\def\underline{F}}\def\unT{\underline{T}}\def\unR{\underline{R}{\underline F} \Big\} ~. \end{alignat} Using (\ref{v}) we find for the Green-Schwarz metric \begin{alignat}{2} g_{mn}&=G_{mn}-\frac{1}{4}({\cal D}_m\theta}\def\Th{\Theta}\def\vth{\vartheta\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)({\cal D}_n\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) -i({\cal D}_{(m}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta)+\frac{1}{12}({\cal D}_{(m}\theta}\def\Th{\Theta}\def\vth{\vartheta\mathfrak{G}\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta)\nonumber\\ &+\frac{1}{12}(\theta}\def\Th{\Theta}\def\vth{\vartheta{\cal R}_{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! q}{\cal I}_{(m}{}^{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! q}\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta) -2i(\Psi_{(m}\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{3}(\Psi_{(m}\mathfrak{G}\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta) \nonumber\\ &+\frac{1}{3}(\Psi_{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! q}{\cal I}_{(m}{}^{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! 
q}\C_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta) -(\Psi_{(m}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)({\cal D}_{n)}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)+{\cal O}(\Psi^2,\theta}\def\Th{\Theta}\def\vth{\vartheta^5) ~. \end{alignat} Similarly, for the pull-back of the three-form we find \begin{alignat}{2} C_{mnp}&= c_{mnp}-\frac{3i}{2}({\cal D}_{[m}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta)+\frac{1}{8}({\cal D}_{[m}\theta}\def\Th{\Theta}\def\vth{\vartheta\mathfrak{G}\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{8}(\theta}\def\Th{\Theta}\def\vth{\vartheta{\cal R}_{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! q}{\cal I}_{[m}{}^{\underline{\phantom{a}}\!\!\! p}\def\unP{\underline P\underline{\phantom{a}}\!\!\! q}\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta) \nonumber\\ &-\frac{3}{4}({\cal D}_{[m}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{n}{}^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)({\cal D}_{p]}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) -3i(\Psi_{[m}\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta)-(\Psi_{[m}\C_{n}{}^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)({\cal D}_{p]}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta )\nonumber\\ &-2(\Psi_{[m}\C^{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta)({\cal D}_{n}\theta}\def\Th{\Theta}\def\vth{\vartheta\C_{p]}{}_{\underline a}\def\unA{\underline A}\theta}\def\Th{\Theta}\def\vth{\vartheta) +\frac{1}{2}(\Psi_{[m}\mathfrak{G}\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta)+\frac{1}{2}(\Psi_{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! q}{\cal I}_{[m}{}^{\underline n}\def\unN{\underline N\underline{\phantom{a}}\!\!\! q}\C_{np]}\theta}\def\Th{\Theta}\def\vth{\vartheta) +{\cal O}(\Psi^2,\th^5)~. \end{alignat} \subsection{Sixform} \label{thetaexpansions1} The $\theta}\def\Th{\Theta}\def\vth{\vartheta$-expansion for $C_6$ was not given in \cite{t}, but the same methods can be applied in this case. First we note that the $C_6$-field satisfies \begin{align} 7\partial_{[\unM_1}C_{\unM_2\dots \unM_7\}}=G_{\unM_1\dots \unM_7}. 
\label{bianchi} \end{align} Up to a gauge choice, the following is a solution of the Bianchi identity (\ref{bianchi}) at each order in the $\theta$ expansion: \begin{alignat}{2} C^{(0)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_6}= C^{(0)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_5 \underline m}\def\unM{\underline M_1}&=\dots C^{(0)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1 \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_5}=0~,\nonumber\\ 7\partial_{[\underline m}\def\unM{\underline M_1}C^{(0)}_{\underline m}\def\unM{\underline M_2\dots \underline m}\def\unM{\underline M_7]}&=G^{(0)}_{\underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_7} \label{cexpa} \end{alignat} \vfill\break and \begin{alignat}{2} C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_6}&=\frac{1}{n+7}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_6}\nonumber\\ C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_5 \underline m}\def\unM{\underline M_1}&=\frac{1}{n+6}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_5 \underline m}\def\unM{\underline M_1}\nonumber\\ C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_4 \underline m}\def\unM{\underline M_1\underline m}\def\unM{\underline M_2}&=\frac{1}{n+5}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu_1\dots\underline {\phantom{\alpha}}\!\!\!\mu_4 \underline m}\def\unM{\underline M_1\underline m}\def\unM{\underline M_2}\nonumber\\ C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\underline {\phantom{\alpha}}\!\!\!\mu_2\underline {\phantom{\alpha}}\!\!\!\mu_3 \underline m}\def\unM{\underline M_1\underline m}\def\unM{\underline M_2 \underline m}\def\unM{\underline M_3}&=\frac{1}{n+4}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu_1\underline {\phantom{\alpha}}\!\!\!\mu_2\underline {\phantom{\alpha}}\!\!\!\mu_3 \underline m}\def\unM{\underline M_1\underline m}\def\unM{\underline M_2 \underline m}\def\unM{\underline M_3}\nonumber\\ C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu_1\underline {\phantom{\alpha}}\!\!\!\mu_2 \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_4}&=\frac{1}{n+3}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu_1\underline {\phantom{\alpha}}\!\!\!\mu_2 \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_4}\nonumber\\ C^{(n+1)}_{\underline {\phantom{\alpha}}\!\!\!\mu \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_5}&=\frac{1}{n+2}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}}\underline {\phantom{\alpha}}\!\!\!\mu \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_5}\nonumber\\ C^{(n+1)}_{\underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_6}&=\frac{1}{n+1}~\theta^{\underline{\lambda}} G^{(n)}_{{\underline{\lambda}} \underline m}\def\unM{\underline M_1\dots \underline m}\def\unM{\underline M_6} ~, ~~~~~n\geq 0~. 
\label{cexp} \end{alignat} Using the fact that \begin{equation} G_{\underline a_1\dots \underline a_5{\underline{\alpha}}_1{\underline{\alpha}}_2}=-i(\C_{\underline a_1\dots \underline a_5})_{{\underline{\alpha}}_1{\underline{\alpha}}_2} ~, \end{equation} we find for the right-hand sides of the equations (\ref{cexp}), \begin{alignat}{2} \theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu_1\dots\underline\mu_6}&= 6iE_{(\underline\mu_1}{}^{{\underline a}_1} \dots E_{\underline\mu_5}{}^{{\underline a}_5} E_{\underline\mu_6)}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu_1\dots\underline\mu_5 \underline m}&= -5i E_{\underline m}{}^{{\underline a}_1} E_{(\underline\mu_1}{}^{{\underline a}_2} \dots E_{\underline\mu_4}{}^{{\underline a}_5} E_{\underline\mu_5)}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\ &~~~+i E_{\underline\mu_1}{}^{{\underline a}_1} \dots E_{\underline\mu_5}{}^{{\underline a}_5} E_{\underline m}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu_1\dots\underline\mu_4 \underline m_1\underline m_2}&= 4i E_{\underline m_1}{}^{{\underline a}_1} E_{\underline m_2}{}^{{\underline a}_2} E_{(\underline\mu_1}{}^{{\underline a}_3} E_{\underline\mu_2}{}^{{\underline a}_4} E_{\underline\mu_3}{}^{{\underline a}_5} E_{\underline\mu_4)}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\ &~~~+2i E_{\underline\mu_1}{}^{{\underline a}_1} \dots E_{\underline\mu_4}{}^{{\underline a}_4} E_{[\underline m_1}{}^{{\underline a}_5} E_{\underline m_2]}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu_1\underline\mu_2\underline\mu_3 \underline m_1\underline m_2 \underline m_3}&= -3i E_{\underline m_1}{}^{{\underline a}_1} E_{\underline m_2}{}^{{\underline a}_2} E_{\underline m_3}{}^{{\underline a}_3} E_{(\underline\mu_1}{}^{{\underline a}_4} E_{\underline\mu_2}{}^{{\underline a}_5} E_{\underline\mu_3)}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\ &~~~+3i E_{\underline\mu_1}{}^{{\underline a}_1}E_{\underline\mu_2}{}^{{\underline a}_2} E_{\underline\mu_3}{}^{{\underline a}_3} E_{[\underline m_1}{}^{{\underline a}_4}E_{\underline m_2}{}^{{\underline a}_5} E_{\underline m_3]}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu_1\underline\mu_2 \underline m_1\dots \underline m_4}&= +2i E_{\underline m_1}{}^{{\underline a}_1} \dots E_{\underline m_4}{}^{{\underline a}_4} E_{(\underline\mu_1}{}^{{\underline a}_5} E_{\underline\mu_2)}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\ &~~~+4i E_{\underline\mu_1}{}^{{\underline a}_1}E_{\underline\mu_2}{}^{{\underline a}_2} E_{[\underline m_1}{}^{{\underline a}_3}E_{\underline m_2}{}^{{\underline a}_4}E_{\underline m_3}{}^{{\underline a}_5} E_{\underline m_4]}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}}\underline\mu \underline m_1\dots \underline m_5}&= -i E_{\underline m_1}{}^{{\underline a}_1} \dots E_{\underline m_5}{}^{{\underline a}_5} E_{\underline\mu}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\ &~~~+5i E_{\underline\mu}{}^{{\underline a}_1} E_{[\underline m_1}{}^{{\underline a}_2}\dots E_{\underline m_4}{}^{{\underline a}_5} E_{\underline m_5]}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} \nonumber\\
\theta^{\underline{\lambda}} G_{{\underline{\lambda}} \underline m_1\dots \underline m_6}&= 6i E_{[\underline m_1}{}^{{\underline a}_1}\dots E_{\underline m_5}{}^{{\underline a}_5} E_{\underline m_6]}{}^{\underline{\alpha}} (\C_{{\underline a}_1\dots {\underline a}_5}\theta)_{\underline{\alpha}} ~. \label{cexpr} \end{alignat}
In the following we will only need the part $\Delta C_6$ of $C_6$ which is linear in the gravitino. Plugging the expressions for the vielbein components given in \cite{t} into (\ref{cexpr}) we obtain \begin{alignat}{2} \Delta C_{m_1\dots m_6}= &-6i(\Psi_{[m_1}\C_{m_2\dots m_6]}\theta) +10(\Psi_{[m_1}\C^{\underline a}\theta)({\cal D}_{m_2}\theta\C_{m_3\dots m_6]\underline a}\theta)\nonumber\\ &+(\Psi_{[m_1}\mathfrak{G}\C_{m_2\dots m_6]}\theta) +(\Psi_{\underline p\underline q}{\cal I}_{[m_1}{}^{\underline p\underline q}\C_{m_2\dots m_6]}\theta)\nonumber\\ &-5(\Psi_{[m_1}\C_{m_2\dots m_5 \underline a}\theta)({\cal D}_{m_6]}\theta\C^{\underline a}\theta) +{\cal O}(\Psi^2,\theta^5)~. \label{dc6} \end{alignat}
\section{Fivebrane action} \label{pst} We are now ready to consider the application of the theta-expansion discussed in the previous section to the case of the fivebrane worldvolume action. As already mentioned in the introduction, we will adopt the covariant framework of \cite{pst}, to which the reader is referred for more details. The main result of this section is the gravitino vertex operator, equation (\ref{grv}) below. To improve the presentation, we have relegated the details of the derivation to appendix \ref{pstapp}. The fivebrane action is of the form \begin{alignat}{2} S=S_1+S_2+S_3~, \end{alignat} where \begin{alignat}{2} S_1&:=T_{M5}\int_{{\Sigma}} d^6x\sqrt{-det(g_{mn}+i\widetilde{H}_{mn} ) }\nonumber\\ S_2&:=T_{M5}\int_{{\Sigma}} d^6x\sqrt{-g}~ \frac{1}{4}{\widetilde{H}}_{mn} {H}^{mn}\nonumber\\ S_3&:=T_{M5}\int_{{\Sigma}} \Big(C_6+\frac{1}{2}F_3\wedge C_3 \Big)~ \label{action} \end{alignat} and $T_{M5}\sim l_P^{-6}$ is the fivebrane tension. Moreover, we have made the following definitions: \begin{alignat}{2} H_{mnp}&:=F_{mnp}-C_{mnp}\nonumber\\ H_{mn}&:=H_{mnp}v^p\nonumber\\ \widetilde{H}^{mn}&:=\frac{1}{6\sqrt{-g}}\epsilon^{mnpqrs}v_pH_{qrs}\nonumber\\ v_p&:=\frac{\partial_p a}{\sqrt{-g^{mn}\partial_ma\partial_n a}}~, \end{alignat} where $F_{mnp}$ is the field-strength of the world-volume chiral two-form and $a$ is an auxiliary world-volume scalar. It follows from the above definitions that \begin{alignat}{2} det(\delta_{m}{}^{n}+i\widetilde{H}_{m}{}^{n} )= 1+\frac{1}{2}tr\widetilde{H}^2+\frac{1}{8}(tr\widetilde{H}^2)^2-\frac{1}{4}tr\widetilde{H}^4~. \end{alignat}
\subsection{The gravitino vertex operator} \label{grvsec} In the case of normal flux, {\em i.e.} when the world-volume two-form tensor is flat ($F_{mnp}=0$) and the pull-back of the three-form potential onto the fivebrane vanishes ($c_{mnp}=0$), the expression for the gravitino vertex operator simplifies considerably. Skipping the details of the derivation, which can be found in appendix \ref{pstapp}, the final result reads: \begin{center} \fbox{\parbox{14.5cm}{ \begin{alignat}{2} V= T_{M5}\int_{{\Sigma}}d^6x\sqrt{-G}~ \Big\{ 2(\Psi_m\C^m\theta)+i(\Psi_{\underline m} V^{(2)\underline m}) &+\frac{i}{3}(\Psi_m\mathfrak{G}\C^m\theta) \nonumber\\ &+\frac{i}{3}(\Psi_{\underline p\underline q}{\cal I}_m{}^{\underline p\underline q}\C^m\theta) +{\cal O}(\Psi^2, \theta^5) \Big\} ~.\nonumber \end{alignat} }} \end{center} \begin{align}\label{grv}\end{align} We can now see why the $\Psi^2$ contact-terms can be neglected. As is easy to verify, $\Psi^2$ terms first appear in the $\theta$-expansion at order $\theta^4$. Consequently, a single vertex-operator insertion $V_{\Psi^2}$ is needed to saturate the four fermion zeromodes -- which is the case examined here. A single insertion, however, is proportional to $T_{M5}$ and is of order ${\cal O}(l_P^6)$ relative to two vertex-operator insertions: the latter give a contribution proportional to $T_{M5}^2$. Clearly, this analysis is valid provided the `radius' of the six-cycle is much larger than the Planck length, ${\rm Vol}_{\Sigma}\gg l^6_P$. As was shown in the case of the M-theory membrane \cite{hklt} and is also expected in the case of the fivebrane \cite{dk}, the first higher-order correction to the world-volume action occurs at order $l_P^4$. Hence it would be inconsistent to include contact terms without considering the higher-order derivative corrections to the world-volume action. Moreover, at order $l_P^6$ (eight derivatives) there are higher-order curvature corrections to the background supergravity action\footnote{The eleven-dimensional supergravity admits a supersymmetric deformation at order $l_P^3$ (five derivatives) \cite{ttt}. On a topologically-nontrivial spacetime $M$ such that $p_1(M)\neq 0$, this deformation can be removed by a $C$-field redefinition, at the cost of shifting the quantization condition of the four-form field strength.} which, as was explained in \cite{t}, modify the $\theta$-expansion of all superfields.
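To make the power counting explicit, note that each vertex-operator insertion carries a factor of $T_{M5}{\rm Vol}_{\Sigma}\sim {\rm Vol}_{\Sigma}/l_P^6$, so that, schematically,
\begin{align}
\frac{\langle V_{\Psi^2}\rangle}{\langle V\,V\rangle}\sim \frac{T_{M5}{\rm Vol}_{\Sigma}}{(T_{M5}{\rm Vol}_{\Sigma})^2}\sim\frac{l_P^6}{{\rm Vol}_{\Sigma}}\ll 1~,
\end{align}
in agreement with the estimate above.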
\subsection{Quadratic fermion terms} It follows from the preceding sections that in a bosonic background ($\Psi_{\underline m}^{\underline{\alpha}}=0$) the part of the Lagrangian quadratic in $\theta$ (this is the analogue of equations (38), (39) of \cite{tt}) is given by \begin{alignat}{2} {\cal L}^{(quad)}&= \frac{i}{2}\sqrt{det(A_i{}^j)}(A^{-1})^{(mn)}(\theta\C_{m}{\cal D}_n\theta)\nonumber\\ &-\frac{\epsilon^{lpqrs}{}_m}{6\sqrt{-G}}\sqrt{det(A_i{}^j)}(A^{-1})^{[mn]} (\theta\C_{(n}{\cal D}_{l)}\theta) a_p (F_{qrs}-c_{qrs})\nonumber\\ &-\frac{\epsilon^{klpqrs}}{24\sqrt{-G}}\sqrt{det(A_i{}^j)}(A^{-1})_{kl} a_p \nonumber\\ &~~~~~~~~~~~~~~~~~\times\Big\{ (F_{qrs}-c_{qrs})\Big[ a^ma^n(\theta\C_m{\cal D}_n\theta) +(\theta\C^{m}{\cal D}_m\theta) \Big] +3(\theta\C_{qr}{\cal D}_s\theta) \Big\} \nonumber\\ &-\frac{i\epsilon^{klpqrs}}{24\sqrt{-G}} a_ka^m(F_{lpq}-c_{lpq})\nonumber\\ &~~~~~~~~~~~~~~~~~\times\Big\{(F_{rst}-c_{rst}) \Big[a^ta^n (\theta\C_n{\cal D}_m\theta)+\frac{1}{2}(\theta\C^t{\cal D}_m\theta) \Big] +\frac{1}{2}(\theta\C_{rs}{\cal D}_m\theta) \Big\} \nonumber\\ &-\frac{i\epsilon^{klpqrs}}{48\sqrt{-G}} a_ka^n (F_{lpq}-c_{lpq}) (F_{rs}{}^{t}-c_{rs}{}^{t}) (\theta\C_n{\cal D}_t\theta)\nonumber\\ &-\frac{i\epsilon^{klpqrs}}{2\times 5!\sqrt{-G}} \Big\{ 15 a^ta_k(F_{lpt}-c_{lpt})(\theta\C_{qr}{\cal D}_s\theta) -10 a^ta_k(F_{lpq}-c_{lpq})(\theta\C_{rt}{\cal D}_s\theta)\nonumber\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - 5F_{klp}(\theta\C_{qr}{\cal D}_s\theta) -(\theta\C_{klpqr}{\cal D}_s\theta) \Big\} ~. \end{alignat} Note that ${\cal L}^{(quad)}$ is related to $V_{~~~\underline{\alpha}}^{(1)\underline m}$ in a simple way. \subsection*{Normal flux} In this case the part of the Lagrangian quadratic in the fermions simplifies to \begin{alignat}{2} {\cal L}^{(quad)}&= \frac{i}{2}\Big\{ (\theta\C^{m}{\cal D}_m\theta) +\frac{\epsilon^{klpqrs}}{ 5!\sqrt{-G}} (\theta\C_{klpqr}{\cal D}_s\theta) \Big\} ~.
\end{alignat} After Wick-rotating we obtain \begin{alignat}{2} {\cal L}^{(quad)}&= -(\theta\C^{m}{\cal D}_m\theta) ~, \label{ityu} \end{alignat} where we have taken (\ref{gammahodge}) into account, and we have noted that after gauge-fixing the physical fermion modes satisfy $P^+\theta=\theta$. \section{Supersymmetric cycles} This section is devoted to the analysis of the conditions for a supersymmetric six-cycle, and the derivation of the worldvolume fermionic zeromodes in the presence of (normal) flux. \subsection{M-theory on fourfolds} \label{mtheoryonfourfolds} We start by reviewing M-theory on a Calabi-Yau fourfold with flux. Let the eleven-dimensional metric be of the form \begin{align} ds^2=\Delta^{-1}ds_3^2+\Delta^{1/2}ds^2_8~, \end{align} where $ds_3^2$ is the metric of three-dimensional Minkowski space, $\Delta$ is a warp factor, and $ds^2_8$ is the metric on $X$. Let us also decompose the eleven-dimensional Majorana-Weyl supersymmetry parameter $\eta$ in terms of a real anticommuting spinor $\epsilon$ along the three-dimensional Minkowski space, and a real chiral spinor $\xi$ on $X$: \begin{align} \eta=\Delta^{-1/4}\epsilon\otimes\xi~. \label{ansa} \end{align} As was first shown in \cite{bb}, the requirement of ${\cal N}=1$ supersymmetry in three dimensions (two real supercharges) leads to the condition \begin{align} \nabla_m\xi=0~, \end{align} i.e. the `internal' spinor is covariantly constant with respect to the connection associated with the metric $g_{mn}$ on $X$. Under the Ansatz (\ref{ansa}), requiring ${\cal N}=2$ supersymmetry in three dimensions implies the existence of two real covariantly-constant spinors $\xi_{1,2}$ of the same chirality. It follows that $X$ is a Calabi-Yau fourfold. In the following we shall combine $\xi_{1,2}$ into a complex chiral spinor on $X$, $\xi:=\xi_1+i\xi_2$. An antiholomorphic $(0,4)$ fourform $\Omega$ and a complex structure $J$ on $X$ can be constructed as bilinears of $\xi$, as is discussed in detail in appendix \ref{sus}. Moreover, supersymmetry imposes the following conditions on the components of the fourform field-strength: \begin{align} G={\rm Vol}_3\wedge d\Delta^{-3/2}+F~, \label{gform} \end{align} where $F$ is a fourform on $X$ which is purely $(2,2)$ and traceless, $J\lrcorner F=0$, with respect to the complex structure $J$ on $X$. We have denoted by ${\rm Vol}_3$ the volume element of the three-dimensional Minkowski space. Finally, the warp factor is constrained by the Bianchi identities to satisfy \begin{align} d\star d~{\rm log}\Delta=\frac{1}{3}F\wedge F-\frac{2}{3}(2\pi)^4\beta X_8~, \label{x8} \end{align} where $\beta$ is a constant of order $l_P^6$, and the Hodge star is with respect to the metric on $X$. The second term on the right-hand side of the equation above is a higher-order correction related to the fivebrane anomaly. In general there will be other corrections of the same order which should also be taken into account. However, it can be argued that in the large-radius approximation it is consistent to only take the above correction into account (see \cite{pvw}, for example). In the large-volume limit $g^{CY}=tg_{0}^{CY}+\dots$, $t\rightarrow\infty$, the two terms on the right-hand side of (\ref{x8}) scale like $t^{-3}$ relative to the left-hand side and can be neglected. It is therefore consistent to take the warp factor to be trivial, $\Delta=1$ \cite{beck}.
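This scaling can be checked directly: under $g^{CY}=tg_{0}^{CY}$ the Hodge star acting on a one-form of the eight-dimensional $X$ scales as $\star\rightarrow t^{3}\star_0$, so that the left-hand side of (\ref{x8}) obeys
\begin{align}
d\star d~{\rm log}\Delta\ \rightarrow\ t^{3}\,d\star_0 d~{\rm log}\Delta~,
\end{align}
while $F$, whose periods are quantized, and the curvature two-forms entering $X_8$ are inert; the right-hand side is therefore of relative order $t^{-3}$, as stated.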
We will henceforth assume this to be the case. In particular, it follows from (\ref{gform}) that the fourform's only nonzero components are along the Calabi-Yau fourfold. Note that the integrated version of equation (\ref{x8}), \begin{align} \int_X F\wedge F+\frac{\beta}{12} ~\chi(X)=0~, \end{align} is the tadpole cancellation condition. Finally, note that the normal flux condition, together with the constraints of supersymmetry on the fourform flux explained in section \ref{mtheoryonfourfolds}, imply that $F$ is of the form \begin{align} F_{mnpq}=4\widetilde{F}_{[mnp}K_{q]}+4\widetilde{F}^*_{[mnp}K^*_{q]}~, \label{fffn} \end{align} where $\widetilde{F}$ obeys \begin{align} {J}\lrcorner \widetilde{F}=0; ~~~~~ \iota_K\widetilde{F}=\iota_{K^*}\widetilde{F}=0~ \label{nfff} \end{align} and $K$ is a complex vector field normal to the fivebrane worldvolume, see eq. (\ref{see}) below. The above results can be extended to include more general fluxes \cite{ms, teight}. In this case the internal manifold generally ceases to be Calabi-Yau. \subsection{Supersymmetric cycles} \label{supersymmetriccycles} Consider a bosonic superembedding of the fivebrane $(X^{\underline m}(\sigma),~\theta^{\underline\mu}(\sigma)=0)$ in a bosonic background ($\Psi_{\underline m}{}^{\underline{\alpha}}=0$), where $\sigma^m$ is the coordinate on the fivebrane worldvolume. The fivebrane action is invariant under superdiffeomorphisms \begin{alignat}{2} \delta_\zeta Z^{\underline M}=\zeta^{\underline A} E_{\underline A}{}^{\underline M} \label{a} \end{alignat} such that \begin{alignat}{2} {\cal L}_{\zeta}E_{\underline M}{}^{\underline A}= -(\partial_{\underline M}+\Omega_{\underline M\underline B}{}^{\underline A})\zeta^{\underline B} -\zeta^{\underline B}T_{\underline B\underline M}{}^{\underline A} =0~. \label{ui} \end{alignat} This can be seen by first noting that \begin{alignat}{2} {\cal L}_{\zeta}C_3=d(\iota_\zeta C_3)+\iota_\zeta G_4~. \end{alignat} The first term on the right-hand side pulls back to a total derivative on the fivebrane worldvolume, which can be compensated by a gauge transformation. The pull-back of the second term on the right-hand side vanishes for a bosonic background at $\theta=0$, as can be seen by (\ref{q}) below and by taking into account that the only nonzero components of $G_4$ are $G_{\underline a\underline b\underline{\alpha}\underline\beta}$ and $G_{\underline a\underline b\underline c\underline d}$. Similarly, the WZ term transforms under (\ref{a}) as \begin{alignat}{2} \int_{W_6} {\cal L}_{\zeta}(C_6+\frac{1}{2} F_3\wedge C_3)= \int_{W_6} {\iota}_{\zeta}(G_7+\frac{1}{2} H_3\wedge G_4)~, \end{alignat} where we have dropped a total derivative from the integrand. Again, this vanishes for a bosonic background at $\theta=0$. Finally, the Green-Schwarz metric is manifestly invariant under (\ref{a}, \ref{ui}). Condition (\ref{ui}) can be solved for $\zeta$, order by order in a $\theta$-expansion.
By taking the torsion constraints into account, it can be shown that \begin{alignat}{2} \zeta^{\underline{\alpha}}&=\eta^{\underline{\alpha}}(X)+{\cal O}(\theta^2)\nonumber\\ \zeta^{\underline a}&=i(\eta\C^{\underline a}\theta)+{\cal O}(\theta^3)~, \label{q} \end{alignat} where $\eta^{\underline{\alpha}}$ is a Killing spinor, \begin{alignat}{2} {\cal D}_{\underline m}\eta^{\underline{\alpha}}(X)=0~. \label{m} \end{alignat} Transformation (\ref{a}) corresponds to a zero mode iff it can be compensated by a $\kappa$-transformation, i.e. iff there exists $\kappa^{\underline{\alpha}}(\sigma)$ such that \begin{alignat}{2} \eta^{\underline{\alpha}}(X(\sigma))+\kappa^{\underline{\alpha}}(\sigma)=0~. \label{p} \end{alignat} On the other hand $\kappa$ satisfies $\kappa^{\underline\beta}\bar{\C}_{\underline\beta}{}^{\underline{\alpha}}=\kappa^{\underline{\alpha}}$, where \begin{alignat}{2} \bar{\C}(\sigma):= \frac{1}{\sqrt{det(\delta_r{}^s+i\widetilde{H}_r{}^s)}} \Big\{ \frac{1}{6!}\frac{\epsilon^{m_1\dots m_6}}{\sqrt{-g}}\C_{m_1\dots m_6} &+\frac{i}{2}\C_{mnp}\widetilde{H}^{mn}v^p\nonumber\\ &-\frac{1}{16}\frac{\epsilon^{m_1\dots m_6}}{\sqrt{-g}} \widetilde{H}_{m_1m_2}\widetilde{H}_{m_3m_4} \C_{m_5m_6} \Big\}~, \label{kproj} \end{alignat} so that $\bar{\C}^2=1$. Hence (\ref{p}) is equivalent to \begin{alignat}{2} \eta^{\underline\beta}(X(\sigma))(1-\bar{\C}(\sigma))_{\underline\beta}{}^{\underline{\alpha}}=0~, \label{k} \end{alignat} with $\bar{\C}(\sigma)$ evaluated for the bosonic fivebrane superembedding in the bosonic background. To summarize: the `global' zero modes are given by \begin{alignat}{2} \theta^{\underline{\alpha}}(\sigma)=\eta^{\underline{\alpha}}(X(\sigma))~, \end{alignat} where $\eta$ satisfies (\ref{m}), (\ref{k}). Consequently, $\theta^{\underline{\alpha}}$ is annihilated by ${\cal D}_m=\partial_mX^{\underline m}{\cal D}_{\underline m}$ and hence obeys the Dirac equation on the fivebrane: \begin{align} \C^m{\cal D}_m\theta=0~, \label{dirac} \end{align} which follows from the quadratic part of the fivebrane action (\ref{ityu}). I.e. `global' zero modes give rise to zero modes on the fivebrane. The converse is not generally true. \subsection*{Supersymmetric cycles in the case of normal flux} For a large six-cycle ${\Sigma}$, $X$ can be approximated by the total space of the normal bundle of ${\Sigma}$ in $X$ as in \cite{w}. Equivalently, ${\Sigma}$ can be specified by a complex vector field $K$ on $X$ such that \begin{align} ds^2(X) =G_{mn}d\sigma^{m}\otimes d\sigma^{n}+K\otimes K^* ~, \label{see} \end{align} where $G_{mn}(\sigma)$ is the metric of ${\Sigma}$, and $K^mG_{mn}=0$. We shall normalize $K$ as in appendix \ref{sus}, $|K|^2=2$, in which case the determinants of the metrics on $X$, ${\Sigma}$ are equal. The kappa-symmetry projector simplifies considerably in the case of normal flux.
Passing to the static gauge and Wick-rotating, condition (\ref{k}) can be seen to be equivalent to \begin{align} \Big(1-\frac{ K^mK^{*n}\epsilon_{mn}{}^{m_1\dots m_6}}{2\times 6!\sqrt{G}}\C_{m_1\dots m_6}\Big)\xi=0 ~. \label{iy} \end{align} Furthermore, using the formul{\ae} in the appendix, equation (\ref{iy}) can be rewritten as \begin{align} P^+\xi=\xi; ~~~~~ P^+:=\frac{1}{2}\Big(1+\frac{1}{2} K^mK^{*n}\Gamma_{mn}\Gamma_9 \Big) ~. \label{iyu} \end{align} The normal vector $K$ is not a priori holomorphic with respect to the complex structure of $X$. However, it is straightforward to see from (\ref{iyu}) that \begin{align} J_m{}^nK_n=-iK_m~. \end{align} It follows that in the case of normal flux, supersymmetric cycles are antiholomorphic cycles. \subsection{Zero modes} \label{zeromodes} We are now ready to come to the analysis of the fermionic zeromodes on the worldvolume of the fivebrane. The main result of this section is given in (\ref{zmeqs}) below. In the process we make contact with the earlier results of \cite{saul, kall}. The form of the Dirac operator in the linear approximation was derived in \cite{dira}. A note on notation: in the remainder of the paper, lower-case Latin letters from the middle of the alphabet ($m,n,\dots$) denote indices along $X$ (as opposed to indices along the fivebrane worldvolume). \subsection*{Spinors-forms correspondence on $X$} Using formul{\ae} (\ref{fierzsu}) in appendix \ref{sus} we can see that any chiral spinor $\lambda_+$ on $X$ can be expanded as \begin{align} \lambda_+=\Phi^{(0,0)}\xi+\Phi^{(2,0)}_{mn}\gamma^{mn}\xi+\Phi^{(4,0)}_{mnpq}\gamma^{mnpq}\xi ~, \end{align} where $\Phi^{(p,0)}$ is a $(p,0)$-form with respect to the complex structure $J$. I.e. $\Phi^{(2,0)}$ is in the $\bf{6}$ of $SU(4)$ and $\Phi^{(4,0)}$ is a singlet. Similarly in the case of an antichiral spinor $\lambda_-$ we can expand \begin{align} \lambda_-=\Phi^{(1,0)}_{m}\gamma^{m}\xi+\Phi^{(3,0)}_{mnp}\gamma^{mnp}\xi ~, \end{align} where $\Phi^{(1,0)}$ is in the $\bf{4}$ of $SU(4)$ and $\Phi^{(3,0)}$ is in the $\bar{\bf{4}}$. More succinctly, the equations above are nothing but the equivalence \begin{align} S_+&\cong \Lambda^{({\rm even}, 0)}\nonumber\\ S_-&\cong \Lambda^{({\rm odd}, 0)}~, \end{align} which can be shown to hold in the case of a Calabi-Yau manifold. \subsection*{Spinors-forms correspondence on the fivebrane} We will now assume that the fivebrane wraps a supersymmetric cycle, as described above. Ignoring the three flat directions for simplicity, after gauge-fixing the kappa-symmetry the fermions on the worldvolume of the fivebrane transform as sections of the tensor product \begin{align} S_+\otimes (S_+(N)\oplus S_-(N)) &\cong \Lambda^{(0,0)}\oplus\Lambda^{(2,0)}\oplus K \oplus(K\otimes\Lambda^{(2,0)}) \nonumber\\ &\cong \Lambda^{(0,0)}\oplus \Lambda^{(2,0)} \oplus\Lambda^{(0,1)}\oplus\Lambda^{(0,3)}~, \label{kloi} \end{align} where $S_{\pm}(N)$ are the positive-, negative-chirality spin bundles associated to the normal bundle $N$ of ${\Sigma}$ in $X$, $\Lambda^{(p,0)}$ is the bundle of $(p,0)$-forms on ${\Sigma}$, and $K$ is the canonical bundle of ${\Sigma}$. The first equivalence above can be shown by taking the adjunction formula into account, and the triviality of the canonical bundle of $X$. The second equivalence is proven by noting that $K\otimes\Lambda^{(3-p,0)}\cong \Lambda^{(0,p)}$, as can be seen by contracting with the antiholomorphic $(0,4)$-form on $X$.
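As a quick consistency check of (\ref{kloi}), the complex ranks on the two sides match:
\begin{align}
{\rm rk}\,\Big[S_+\otimes (S_+(N)\oplus S_-(N))\Big]=4\times(1+1)=8=1+3+3+1={\rm rk}\,\Big[\Lambda^{(0,0)}\oplus \Lambda^{(2,0)} \oplus\Lambda^{(0,1)}\oplus\Lambda^{(0,3)}\Big]~,
\end{align}
since on the six-dimensional worldvolume a chiral spinor has four complex components, while the spin bundles of the (complex) rank-one normal bundle contribute one component each.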
More explicitly, after gauge-fixing the kappa-symmetry, the physical fermion $\theta$ on the world-volume ${\Sigma}$ can be expanded as \begin{align} \theta=\epsilon\otimes P^+\sum_{p=0}^{4}\Phi^{(p,0)}_{i_1\dots i_p}\gamma^{i_1\dots i_p}\xi~, \label{ty} \end{align} where $\Phi^{(p,0)}\in \Lambda^{(p,0)}$ and $\epsilon$ is a two-component spinor in the noncompact directions. Expanding \begin{align} \Phi^{(p,0)}=\widehat{\Phi}^{(p,0)}+\frac{1}{p}K^*\wedge\widehat{\Psi}^{(p-1,0)}~, \end{align} where $\iota_K\widehat{\Phi}$, $\iota_K\widehat{\Psi}=0$, and substituting $P^+$, (\ref{ty}) reads \begin{align} \theta=\epsilon\otimes \Big( \widehat{\Phi}^{(0,0)}+ \widehat{\Phi}^{(2,0)}_{ij}\gamma^{ij} +\widehat{\Phi}^{(1,0)}_i\gamma^i +\widehat{\Phi}_{ijk}^{(3,0)}\gamma^{ijk} \Big)\xi ~, \label{koo} \end{align} where we have set \begin{align} \widehat{\Phi}^{(1,0)}_i&:=\widehat{\Psi}^{(0,0)}K^*_{i}\nonumber\\ \widehat{\Phi}_{ijk}^{(3,0)}&:=\widehat{\Psi}_{[ij}^{(2,0)}K^*_{k]}~. \label{leg} \end{align} Equation (\ref{koo}) above is the explicit form of (\ref{kloi}). \subsection*{Zero modes} The zero modes on the fivebrane satisfy the Dirac equation (\ref{dirac}) where, after gauge-fixing, $\theta$ has positive chirality along the fivebrane world-volume, $\theta=P^+\theta$. Having explained the spinor-form correspondence, we would now like to rewrite the Dirac equation in terms of forms on the fivebrane. First, it would be useful to note the following relations: \begin{align} (\Pi^{\parallel})_m^rF_{rnpq}\gamma^m\gamma^{npq}\theta_-&=0\nonumber\\ (\Pi^{\parallel})_m^rF_{rnpq}\gamma^m\gamma^{npq}\theta_+&=\frac{3}{4}F_{mnpq}\gamma^{mnpq}\theta_+~, \label{topi} \end{align} where $\theta_{\pm}$ denotes the chirality of $\theta$ along the normal directions, and $\Pi^{\parallel}$ is the projector onto the fivebrane worldvolume defined in appendix \ref{kixlh}. Since $\theta$ has positive chirality along the fivebrane world-volume, we have $\theta_{\pm}=\frac{1}{2}(1\pm\C_9)\theta$. It further follows that \begin{align} {\cal D}_m\theta^{(p,0)}=\epsilon\otimes \left\{ \begin{array}{ll} \nabla_m\widehat{\Phi}\xi~, & ~~~~~p=0\\ \nabla_m\widehat{\Phi}_r\ga^r\xi-\frac{1}{4}\widehat{\Phi}^rF_{rstm}\ga^{st}\xi~, & ~~~~~p=1\\ \nabla_m\widehat{\Phi}_{rs}\ga^{rs}\xi-\frac{1}{6}\widehat{\Phi}^{rn}F_{rstm}\ga^{st}{}_n\xi~, & ~~~~~p=2\\ \nabla_m\widehat{\Phi}_{rst}\ga^{rst}\xi-\frac{3}{4}\widehat{\Phi}^{rnp}F_{rstm}\ga^{st}{}_{np}\xi~, & ~~~~~p=3 \end{array}\right. ~, \label{koptz} \end{align} where we have denoted $\theta^{(p,0)}:=\epsilon\otimes \widehat{\Phi}^{(p,0)}_{i_1\dots i_p}\ga^{i_1\dots i_p}\xi$. Plugging (\ref{koptz}) into (\ref{dirac}), we obtain \begin{center} \fbox{\parbox{11cm}{ \begin{align} 0&=\Big\{(\nabla^{\parallel})_m\widehat{\Phi} +4(\nabla^{\parallel})^{ p}\widehat{\Phi}_{pm}\Big\}\gamma^m\xi\nonumber\\ 0&=\Big\{(\nabla^{\parallel})_{m}\widehat{\Phi}_{n}+6(\nabla^{\parallel})^{ p}\widehat{\Phi}_{pmn} -\frac{1}{2}F_{mn}{}^{pq}\widehat{\Phi}_{pq}\Big\}\Omega^{mnrs}\gamma_{rs}\xi^*\nonumber\\ 0&=\Big\{(\nabla^{\parallel})_{m}\widehat{\Phi}_{np}\Big\}\Omega^{mnpq}\gamma_q\xi^*\nonumber\\ 0&=\Big\{(\nabla^{\parallel})_{m}\widehat{\Phi}_{npq}\Big\}\Omega^{mnpq} ~,\nonumber \end{align} }} \end{center} \begin{align}\label{zmeqs}\end{align} where $(\nabla^{\parallel})_m:= (\Pi^{\parallel})_m^n\nabla_n$ is the covariant derivative projected along the fivebrane.
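Note that the flux enters the boxed system only through the term $-\frac{1}{2}F_{mn}{}^{pq}\widehat{\Phi}_{pq}$ in the second equation, which couples $\widehat{\Phi}^{(1,0)}$ and $\widehat{\Phi}^{(3,0)}$ to $\widehat{\Phi}^{(2,0)}$; for $F=0$ it reduces to
\begin{align}
\Big\{(\nabla^{\parallel})_{m}\widehat{\Phi}_{n}+6(\nabla^{\parallel})^{ p}\widehat{\Phi}_{pmn}\Big\}\Omega^{mnrs}\gamma_{rs}\xi^*=0~,
\end{align}
and the counting of zero modes reduces to ordinary Hodge theory, as the discussion below makes precise.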
Passing to complex coordinates, the above can be seen to be equivalent to equations (3.6-3.9) of \cite{saul}, or (3.10-3.13) of \cite{kall}. Following the analysis of \cite{kall}, the space of solutions to the above system of equations is spanned by harmonic forms\footnote{\label{foot} The forms $\widehat{\Phi}_{I_p}^{(p,0)}$, $p=1,3$, have a leg in the normal bundle, see definition (\ref{leg}). More precisely: they are in $H^{0}(\Sigma, K\otimes\Omega^{3-p})$, $p=1,3$. Out of these, we can construct harmonic forms in $H^{0,p}(\Sigma)\cong H^p(\Sigma, {\cal O})$, by contracting with the antiholomorphic fourform on $X$. This is just the statement of Serre duality. } $\{\widehat{\Phi}_{I_{p}}^{(p,0)}; ~p=0\dots 3\}$, where in addition the $\widehat{\Phi}^{(2,0)}$s satisfy the constraint \begin{align} \mathcal{H}\Big\{F_{mnpq}\widehat{\Phi}^{np}(\Pi^{\parallel})_r^qdx^r\Big\}=0 \label{hcon} \end{align} and we have denoted by $\mathcal{H}$ the projector onto the space of harmonic forms. The corresponding fermion zero modes are of the form \begin{align} \theta= \sum_{p=0}^3\sum_{I_p} \epsilon^{I_{p}}\otimes X_{I_p}\xi ~, \label{zm} \end{align} where (no summation over $p$) \begin{align} X_{I_p}=\left\{ \begin{array}{ll} \widehat{\Phi}_{I_{p}}^{(p,0)}\gamma_{(p)}, & ~~~p\neq2\\ \widehat{\Phi}_{I_{2}}^{(2,0)}\gamma_{(2)}+\delta\widehat{\Phi}^{(1,0)}_{I_{2}}\gamma_{(1)} +\delta\widehat{\Phi}^{(3,0)}_{I_{2}}\gamma_{(3)} , & ~~~p=2 \end{array}\right. ;~~~ I_p=\left\{ \begin{array}{ll} 1,\dots, h^{p,0}({\Sigma}), & ~~~p\neq2\\ 1,\dots,n, & ~~~p=2 \end{array}\right.~, \end{align} the $\widehat{\Phi}_{I_{p}}^{(p,0)}$s are harmonic and $\{\delta\widehat{\Phi}^{(1,0)}_{I_{2}}$, $\delta\widehat{\Phi}^{(3,0)}_{I_{2}}\}$ is a special solution of the inhomogeneous equation \begin{align} (\nabla^{\parallel})^+_{[m}\widehat{\Phi}_{n]}+6(\nabla^{\parallel})^{ p}\widehat{\Phi}_{pmn} =\frac{1}{2}F_{mn}{}^{pq}\widehat{\Phi}_{I_2,pq} ~. \label{opuio} \end{align} In the above, $n$ is the number of harmonic (2,0) forms on ${\Sigma}$ which in addition satisfy the constraint (\ref{hcon}); the $\epsilon^{I_{p}}$s are spinors in the $\bf{2}$ of $Spin(3)$ (after Wick-rotating to Euclidean signature). Note that (\ref{opuio}) implies condition (\ref{hcon}). The authors of \cite{kall} define a flux-dependent generalization of the arithmetic genus: \begin{align} \chi_F:=h^{0,0}-h^{1,0}+n-h^{3,0}~. \label{ketal} \end{align} \section{Instanton contributions} \label{instantoncontributions} We can now proceed to the computation of the instanton contributions to the coupling (\ref{fbils}). The main result of the paper is arrived at in this section: instantons with four fermionic zeromodes do not contribute to the superpotential. \subsection{Gravitino Kaluza-Klein reduction} \label{gravitinokkreduction} Before proceeding to integrate over the fermion zeromodes, we will need the Kaluza-Klein ansatz for the gravitino entering the vertex operator $V$ in (\ref{grv}). As already discussed in the introduction, only terms which depend on the descendants of the linear multiplets contribute to the superpotential. Hence, the relevant part of the Kaluza-Klein ansatz for the gravitino reads \begin{align} \left\{ \begin{array}{l} \Psi_{\mu}=i(\omega_I\cdot J)~\gamma_{\mu}\chi^I\otimes\xi^* +{\rm c.c.} \\ \Psi_m = \chi^I\otimes \omega_{I,mp}\ga^p\xi^*+{\rm c.c.}~; ~~~~~I=1,\dots b_2~, \end{array}\right. \label{kkgr} \end{align} where the $\chi^{I}$s are complex spinors in the $\bf{2}$ of $Spin(3)$, and $\omega_I\in H^{2}(X,\mathbb{R})$. 
As is straightforward to see, the eleven-dimensional gravitino equation, $\Gamma^M{\cal D}_{[M}\Psi_{N]}=0$, is satisfied if $\chi^{I}$ is a massless three-dimensional fermion, \begin{align} \slsh\nabla\chi^I=0~, \end{align} provided \begin{align} \omega_I\lrcorner F=0~. \label{34} \end{align} The implications of this condition were discussed extensively in the introduction. In this picture, $\chi^{I}$ is massless if it corresponds to a zero eigenvalue of the matrix $T_{IJ}$ (in a diagonal basis). Alternatively this can be seen as follows. The quadratic part of the three-dimensional action for the $\chi^I$s comes from the dimensional reduction of the quadratic-gravitino term in the eleven-dimensional supergravity action \begin{align} \int{ d^{11}x\sqrt{g_{11}} \Psi_M\C^{MNP}{\cal D}_N\Psi_P }~. \end{align} Plugging the Kaluza-Klein ansatz (\ref{kkgr}) into the action above, we obtain \begin{align} {\rm Vol}(X)\int{ d^{3}x\sqrt{g_{3}} \Big( D_{IJ}\bar{\chi}^I\slsh\nabla\chi^J-\frac{4}{9}T_{IJ}\bar{\chi}^I\chi^J \Big)}~, \label{3daction} \end{align} where \begin{align} D_{IJ}&:= \int_X \Big( \omega_I\wedge\star\omega_J+\frac{2}{3}~\omega_I\wedge\omega_J\wedge J\wedge J \Big) ~ \label{irw} \end{align} and the Hodge star is with respect to the metric of the Calabi-Yau fourfold. In the above we have made use of the identity \begin{align} \star(\omega_I\wedge\omega_J\wedge J\wedge J) =\frac{1}{2}\Big\{(\omega_I\cdot J)(\omega_J\cdot J)-2(\omega_I\cdot\omega_J)\Big\}~, \end{align} which can be proven with the help of (\ref{jids}). As advertised, massless fermions correspond to zero eigenvalues of $T_{IJ}$. We remark that in (\ref{3daction}) there is no coupling of the form \begin{align} {\rm Vol}(X)\int d^{3}x\sqrt{g_{3}}\Big( W_{IJ}\chi^I\chi^J +{\rm c.c.} \Big)~. \label{poten} \end{align} In the following we will investigate whether such a term is generated by instanton contributions. In the context of three-dimensional supersymmetric field theory, the fact that such a term can indeed be generated by instanton effects was demonstrated in \cite{wittenold}. \subsection{Two zeromodes} \label{ofrzm} Before coming to the subject of instantons with four fermionic zeromodes in the next subsection, we will briefly comment on the case of instantons with two zeromodes (corresponding to the fivebrane wrapping rigid, isolated cycles). As can be seen from (\ref{zm}), there are always two zero modes corresponding to $p=0$: \begin{align} \theta= \epsilon\otimes\xi~. \label{oipi} \end{align} These are the zero modes which come from the supersymmetry of the Calabi-Yau background\footnote{ In three-dimensional nomenclature the supersymmetry of the background is ${\cal N}=2$ (equivalently: ${\cal N}=1$ in four dimensions), i.e. four real supercharges. The instanton breaks half the supersymmetries, as can be seen from (\ref{iyu}). Note that $\xi$ in (\ref{oipi}) is complex and $\epsilon$ is a spinor in the $\bf{2}$ of $Spin(3)$. Henceforth we are complexifying our notation for $\theta$, $\Psi_m$, $V$. At any rate, $\theta$ must be complexified in order to pass to Euclidean signature.}. We would like to compute the instanton contribution of these zeromodes to the superpotential. First, we need to define the integration over fermion zeromodes: \begin{align} \int d^2\epsilon~\epsilon^{\alpha}\epsilon^{\beta}:=C^{\alpha\beta}~, \label{zint} \end{align} where $C$ in the equation above is the charge-conjugation matrix in three dimensions.
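Spelling out the spinor indices, with the notation $(\chi\psi)=\chi^{\alpha}C_{\alpha\beta}\psi^{\beta}$ and assuming the convention $C_{\alpha\beta}C^{\beta\gamma}=\delta_{\alpha}{}^{\gamma}$, this prescription implies
\begin{align}
\int d^2\epsilon~(\chi\epsilon)(\epsilon\psi)=\chi^{\alpha}C_{\alpha\beta}\Big(\int d^2\epsilon~\epsilon^{\beta}\epsilon^{\gamma}\Big)C_{\gamma\delta}\psi^{\delta} =\chi^{\alpha}C_{\alpha\beta}C^{\beta\gamma}C_{\gamma\delta}\psi^{\delta}=\chi^{\alpha}C_{\alpha\delta}\psi^{\delta}~.
\end{align}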
It follows that \begin{align} \int d^2\epsilon~(\chi\epsilon)(\epsilon\psi)=(\chi\psi)~, \end{align} for any two three-dimensional spinors $\chi$, $\psi$ in the $\bf{2}$ of $Spin(3)$. To simplify the presentation, we are using the notation $(\chi\psi):=(\chi^{Tr}C\psi)$. Integrating over the zeromodes using the above prescription, we find that the instanton induces a two-fermion coupling of the form \begin{align} \chi^I\chi^J\int [DZ'(\sigma)]v_Iv_{J} e^{-S_{PST}[Z(\sigma); g,C,\Psi]}+{\rm c.c.} ~, \label{2zm} \end{align} where \begin{align} v_I&:=2i\int_{{\Sigma}}J\wedge J \wedge\omega_I \label{vdef} \end{align} and the path integration above does not include the zeromodes. In (\ref{vdef}) all the forms should be understood as pulled-back to ${\Sigma}$. In particular the pull-back of the almost complex structure to ${\Sigma}$ can be identified with $\widehat{J}$, which is discussed from the point of view of the induced $SU(3)$ structure on ${\Sigma}$ in appendix \ref{kixlh}. Note that in the formula above the primitive part of $\omega_I$ is projected out. We are not going to elaborate on the one-loop determinants, as this lies outside the main focus of this paper. The result of the integration over the bosonic coordinates should be obtainable using techniques similar to \cite{hm}. The integration over the fermionic variables is proportional to the determinant of the flux-dependent Dirac operator $\gamma^m{\cal D}^{\parallel}_m$ (away from its kernel), as follows from equation (\ref{ityu}). \subsection{Four zeromodes} \label{frzm} In the presence of four zeromodes there are the following possibilities, which we will examine in turn: either $h^{0,0}=n=1$ (corresponding to $\chi_F=2$) or $h^{0,0}= h^{p,0}=1$, where $p$ is odd (corresponding to $\chi_F=0$). Recall that $n$ is the number of harmonic (2,0) forms on ${\Sigma}$ which in addition satisfy the constraint (\ref{hcon}). As we will see, no superpotential is generated in either case. Since $\chi_F\neq 1$ in all cases, we conclude that our result does not rule out the possibility that in the presence of flux the arithmetic genus criterion should be replaced by the condition $\chi_F=1$. $\bullet$ $h^{0,0}=n=1$ In this case we have $\chi_F=2$. Let us substitute the Kaluza-Klein ansatz (\ref{kkgr}) and the expression for the zeromodes, \begin{align} \theta= \epsilon\otimes\xi+ \zeta\otimes\Big( \widehat{\Phi}_{mn} \gamma^{mn}+\delta\widehat{\Phi}_{m}\gamma^{m} +\delta\widehat{\Phi}_{mnp}\gamma^{mnp} \Big)\xi~, \end{align} into equation (\ref{grv}) for the gravitino vertex operator. Integrating over the zeromodes using (\ref{zint}) we get, up to a total worldvolume derivative, \begin{align} \int d^2\epsilon ~d^2\zeta~V V = \chi^I\chi^J v_I w_{J}~, \label{4zm} \end{align} where $v_I$ was defined in (\ref{vdef}) above and \begin{align} w_{J}&:= \frac{2}{9}\int_{{\Sigma}} \widehat{\Theta}\wedge\widehat{\Phi}\wedge\omega_J ~. \label{ji} \end{align} The object $\widehat{\Theta}$ is defined by \begin{align} \widehat{\Theta}_{mn}:=\Omega_{mnpq}F^{pq}{}_{rs}\widehat{\Phi}^{rs} \label{thdefi} \end{align} and is a (0,2)-form on ${\Sigma}$. (Recall that in our conventions $\Omega$ is antiholomorphic.) In deriving this result, we had to perform some tedious but straightforward gamma-matrix algebra making repeated use of the formul\ae {} in the appendices \ref{gammapp}, \ref{sus}, especially equations (\ref{bfive}, \ref{usefids}).
Moreover, we have taken into account the normal flux condition and we have implemented (\ref{dkn}), as discussed in the introduction. In the following we show that the right-hand side of (\ref{ji}) vanishes; no instanton-induced superpotential is generated in this case. Before demonstrating this fact, however, let us note that the following group-theoretical reasoning can be used to gain insight into the result (\ref{4zm}). As follows from the form of the vertex operator, the integration over the zeromodes receives three kinds of contributions: \begin{align} \chi^I\chi^J v_I\otimes \omega_J\otimes F\otimes (\widehat{\Phi}^{(2,0)}+ \delta\widehat{\Phi}^{(1,0)}+ \delta\widehat{\Phi}^{(3,0)} )^{2\otimes_s} ~, \label{a111} \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\Psi\theta^3)F$, \begin{align} \chi^I\chi^J v_I\otimes \nabla\omega_J\otimes(\widehat{\Phi}^{(2,0)}+ \delta\widehat{\Phi}^{(1,0)}+ \delta\widehat{\Phi}^{(3,0)} )^{2\otimes_s}~, \label{a211} \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\nabla\Psi\theta^3)$, and \begin{align} \chi^I\chi^Jv_I\otimes \omega_J\otimes (\widehat{\Phi}^{(2,0)}+ \delta\widehat{\Phi}^{(1,0)}+ \delta\widehat{\Phi}^{(3,0)} ) \otimes \nabla (\widehat{\Phi}^{(2,0)}+ \delta\widehat{\Phi}^{(1,0)}+ \delta\widehat{\Phi}^{(3,0)} ) \label{a311}~, \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\Psi\theta^2\nabla\theta)$. Contributions of the type (\ref{a111}) transform in the\footnote{ In the following we are using the Dynkin notation for $A_3$.} $$ \Big((000)\oplus(101)\Big)\otimes (020)\otimes\Big( (010) \oplus(100)\oplus(001) \Big)^{2\otimes_s} $$ of $SU(4)$. There are exactly three scalars in the decomposition of the tensor product above. These we can write explicitly as: \begin{align} S_1&:=\chi^I\chi^Jv_I\omega_{J, mn}\Omega^{mpij} F_{ijqr}\widehat{\Phi}^{qr}\widehat{\Phi}_{p}{}^n\nonumber\\ S_2&:=\chi^I\chi^Jv_I(\omega_J\cdot J) \Omega^{mpij} F_{ijqr}\widehat{\Phi}^{qr}\widehat{\Phi}_{mp}\nonumber\\ S_3&:=\chi^I\chi^Jv_I\delta\widehat{\Phi}^i\delta\widehat{\Phi}^{jk}{}_m \Omega_{ijkn}F^{mnpq}\omega_{J, pq}~. \end{align} The last one, however, vanishes by virtue of equation (\ref{34}). Moreover, using equation (\ref{opuio}), the scalars $S_{1,2}$ can be expressed as a linear combination of $R_{1}, \dots, R_7$ defined in equation (\ref{taro}) below: \begin{align} S_1&=-2R_2+4R_5-4R_6\nonumber\\ S_2&=2R_4-8R_7 ~. \end{align} In deriving the above we have used the identity \begin{align} \delta\widehat{\Phi}_{qrs}\Omega^{rsmp}=-\frac{2}{3}\Omega^{ijk[m}(\Pi^+)_q{}^{p]} \delta\widehat{\Phi}_{ijk} ~, \end{align} which can be proved using (\ref{bfive}). A direct computation of the terms of the form (\ref{a111}) yields the contribution \begin{align} \frac{2i}{9}S_1-\frac{1}{18}S_2=-\frac{4i}{9}(R_2-2R_5+2R_6)- \frac{1}{9}(R_4-4R_7) \label{contr1} \end{align} to the zeromode integral (\ref{4zm}). The linear combination above can be written in a more elegant way by noting that \begin{align} iS_1-\frac{1}{4}S_2=\chi^I\chi^Jv_I \star (\widehat{\Theta}\wedge\widehat{\Phi}\wedge\omega_J)~, \label{525} \end{align} where the Hodge star is along ${\Sigma}$. In proving (\ref{525}) we have made use of equation (\ref{b0}).
Taking into account that $\omega_I$ is a harmonic (1,1) form and that therefore $(\omega_I\cdot J)$ is a constant\footnote{\label{ext} A direct computation reveals that it is in fact $\widehat{\omega}^I$ rather than $\omega_I$, where the hat denotes the pull-back to $\Sigma$, which appears in the various invariants of this section. However, using the inclusion map $$ \iota^*:~H^{p,q}(X,\mathbb{R})\longrightarrow H^{p,q}({\Sigma},\mathbb{R})~, $$ we can think of ${\omega}^I$ as the extension to $X$ of the harmonic form $\widehat{\omega}^I$ on $\Sigma$ \cite{gh}. In the text, we do not make an explicit distinction between $\omega_I$ and $\widehat{\omega}^I$. See also the next footnote. }, it follows that $\nabla\omega_I$ transforms in the $(201)\oplus(102)$ of $SU(4)$. Hence, contributions of the type (\ref{a211}) transform in the $$ \Big((201)\oplus(102)\Big)\otimes \Big( (010) \oplus(100)\oplus(001) \Big)^{2\otimes_s} $$ of $SU(4)$. As there are no scalars in the decomposition of the tensor product above, we conclude that these terms vanish. Taking into account that $\widehat{\Phi}^{(2,0)}$ is a harmonic (2,0) form on a K\"{a}hler manifold, it follows that $\nabla\widehat{\Phi}^{(2,0)}$ transforms in the $(110)$ of $SU(4)$. Similarly, $\nabla\delta\widehat{\Phi}^{(1,0)}$ transforms in the $(000)\oplus(200)\oplus(010)\oplus(101)$ of $SU(4)$. Finally, taking into account the last of equations (\ref{zmeqs}), it follows that $\nabla\delta\widehat{\Phi}^{(3,0)}$ transforms in the $(010)\oplus(101)\oplus(002)$ of $SU(4)$. Putting everything together, it follows that contributions of the type (\ref{a311}) transform in the \begin{align} \Big((000)\oplus(101)\Big)\otimes \Big((010)\oplus&(100)\oplus(001) \Big)\nonumber\\ \otimes &\Big( (110)\oplus(000)\oplus(200) \oplus 2(010)\oplus 2(101)\oplus(002) \Big)\nonumber \end{align} of $SU(4)$. There are exactly seven scalars in the decomposition of the tensor product above: one coming from $\nabla\widehat{\Phi}^{(2,0)}$, three from $\nabla\delta\widehat{\Phi}^{(1,0)}$ and three from $\nabla\delta\widehat{\Phi}^{(3,0)}$. These can be written explicitly as \begin{align} R_1&:=\chi^I\chi^Jv_I\nabla^m\widehat{\Phi}_{ij}\Omega^{ijpq}\delta\widehat{\Phi}_{p} \omega_{J,qm}\nonumber\\ R_2&:=\chi^I\chi^Jv_I\nabla_m\delta\widehat{\Phi}_n\Omega^{mnij}\omega_{J,ip} \widehat{\Phi}^{p}{}_{j}\nonumber\\ R_3&:=\chi^I\chi^Jv_I\nabla^m\delta\widehat{\Phi}_n \Omega^{nijk}\omega_{J,km}\widehat{\Phi}_{ij}\nonumber\\ R_4&:=\chi^I\chi^Jv_I(\omega_J\cdot J)\nabla_m\delta\widehat{\Phi}_n \Omega^{mnij}\widehat{\Phi}_{ij}\nonumber\\ R_5&:=\chi^I\chi^Jv_I\nabla^m\delta\widehat{\Phi}_{ijk}\Omega^{ijkq} \omega_{J,qp}\widehat{\Phi}^{p}{}_{m}\nonumber\\ R_6&:=\chi^I\chi^Jv_I\nabla^m\delta\widehat{\Phi}_{ijk}\Omega^{ijkq} \omega_{J,mp}\widehat{\Phi}^{p}{}_{q}\nonumber\\ R_7&:=\chi^I\chi^Jv_I(\omega_J\cdot J) \nabla^m\delta\widehat{\Phi}_{ijk}\Omega^{ijkq} \widehat{\Phi}_{qm} ~. \label{taro} \end{align} A direct computation of the terms of the form (\ref{a311}) yields the contribution \begin{align} -4i(R_1+R_3+2R_5) \label{contr2} \end{align} to the zeromode integral (\ref{4zm}). Putting the contributions (\ref{contr1}, \ref{contr2}) together, we arrive at equation (\ref{4zm}). Note that the invariants $R_4,\dots, R_7$, as well as the linear combinations $R_1+2R_2$ and $R_1+R_3$, can be written as total derivatives.
This can readily be seen by taking into account that $\Omega$ is covariantly constant while $\omega$, $\widehat{\Phi}$ are harmonic\footnote{ Note that in general the pull-back of the Christoffel connection from the total space $X$ to the base $\Sigma$, $(\nabla^{\parallel})_m$, {\it cannot} be identified with the Christoffel connection $\widehat{\nabla}_m$ associated with the metric on $\Sigma$. However if $\widehat{S}$ is an arbitrary $p$-form on $\Sigma$ whose extension to $X$ is $S$, we have $$ (\nabla^{\parallel})_m{S}^{mm_2\dots m_p}= \nabla_m{S}^{mm_2\dots m_p}= (\widehat{\nabla})_m\widehat{S}^{mm_2\dots m_p}~. $$ The first equality follows from (\ref{tbb}). The second equality follows from $\C_{mn}^n=g^{-1/2}\partial_mg^{1/2}$ and the fact that the determinants of the metrics $X$, $\Sigma$ are equal, as can be seen from the explicit form of the fibration (\ref{explfibr}). }. It follows that the total contribution can be cast in the form $\propto R_2$+total derivative. On the other hand, up to a total derivative, $R_2$ is proportional to the right-hand side of (\ref{525}), as follows from (\ref{contr1},\ref{525}). We are now ready to show that the right-hand side of (\ref{ji}) vanishes identically. First note that, as follows from (\ref{hcon}) or (\ref{opuio}), the projection of $\widehat{\Theta}$ onto the space of harmonic forms on ${\Sigma}$ vanishes: ${\cal H}\{\widehat{\Theta}\}=0$. It follows that \begin{align} \int_{{\Sigma}}\widehat{\Theta}\wedge\widehat{\Phi}\wedge J=0~, \label{gnv} \end{align} since $\widehat{\Phi}\wedge J$ is harmonic (this can be seen by noting that $\star\widehat{\Phi}=\widehat{\Phi}\wedge J$). Varying this equation with respect to the K\"{a}hler structure, $\phi^I\rightarrow \phi^I+\delta \phi^I$, we get \begin{align} \int_{\Sigma}\frac{\delta\widehat{\Theta}}{\delta\phi^I}\wedge\widehat{\Phi}\wedge J + \int_{{\Sigma}}\widehat{\Theta}\wedge\widehat{\Phi}\wedge \omega_I=0~. \label{vrtrr} \end{align} Furthermore, under a K\"{a}hler-structure variation the metric transforms as \begin{align} \delta g_{mn}&=\sum_I \delta\phi^I\omega_{I,mp}J_n{}^p~. \end{align} Note that the right-hand side above is automatically symmetric in the indices $m$, $n$. Taking the above into account, together with the fact that $S_2$ is a total worldvolume derivative, it follows that \begin{align} \int_{\Sigma}\frac{\delta\widehat{\Theta}}{\delta\phi^I}\wedge\widehat{\Phi}\wedge J=0~. \label{asd} \end{align} In the derivation we made use of the identity \begin{align} \widehat{\Phi}^{mn}\Omega_{mnpq}\widehat{\Phi}^{rs}F_{rs}{}^{qt}\omega_{I,pt}=- \widehat{\Phi}^{mn}\Omega_{mn}{}^{pq}\widehat{\Phi}_s{}^{t}F_{pq}{}^{sr}\omega_{I,rt} ~. \end{align} From (\ref{vrtrr}, \ref{asd}) it finally follows that the right-hand side of (\ref{ji}) vanishes, as advertised. No potential is generated in the remaining cases either, as we now show. $\bullet$ $h^{0,0}=h^{1,0}=1$ In this case we have $\chi_F=0$. As can be verified by direct computation, no potential is generated in this case. The easiest way to arrive at this result is by the following group-theoretical argument.
It follows from the form of the vertex operator that the integration over the zeromodes receives three kinds of contributions: \begin{align} \chi^I\chi^Jv_I\otimes \omega_J\otimes F\otimes\widehat{\Phi}^{(1,0)}\otimes\widehat{\Phi}^{(1,0)}~, \label{c111} \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\Psi\theta^3)F$, \begin{align} \chi^I\chi^Jv_I\otimes \nabla\omega_J\otimes\widehat{\Phi}^{(1,0)}\otimes\widehat{\Phi}^{(1,0)}~, \label{c211} \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\nabla\Psi\theta^3)$, and \begin{align} \chi^I\chi^Jv_I\otimes \omega_J\otimes\widehat{\Phi}^{(1,0)}\otimes \nabla\widehat{\Phi}^{(1,0)} \label{c311}~, \end{align} coming from terms of the form $VV\propto (\Psi_m\C^m\theta)(\Psi\theta^2\nabla\theta)$. Contributions of the type (\ref{c111}) transform in the $$ \Big((000)\oplus(101)\Big)\otimes (020)\otimes (100)^{2\otimes_s} $$ of $SU(4)$. As there are no scalars in the decomposition of the tensor product above, we conclude that these terms vanish. Taking into account that $\omega_I$ is a harmonic (1,1) form, it follows that $\nabla\omega_I$ transforms in the $(201)\oplus(102)$ of $SU(4)$. Hence, contributions of the type (\ref{c211}) transform in the $$ \Big((201)\oplus(102)\Big)\otimes (100)^{2\otimes_s} $$ of $SU(4)$. As there are no scalars in the decomposition of the tensor product above, we conclude that these terms vanish. Taking into account that $\widehat{\Phi}^{(1,0)}$ is harmonic, it follows that $\nabla\widehat{\Phi}^{(1,0)}$ transforms in the $(200)$ of $SU(4)$. Hence, contributions of the type (\ref{c311}) transform in the $$ \Big((000)\oplus(101)\Big)\otimes (100)\otimes (200) $$ of $SU(4)$. As there are no scalars in the decomposition of the tensor product above, we conclude that these terms vanish. $\bullet$ $h^{0,0}=h^{3,0}=1$ In this case we have $\chi_F=0$. As in the previous case, no potential is generated. This can be shown {\it e.g.} by the same type of group-theoretical reasoning as before. \section{Discussion} Taking advantage of the recent progress in explicit theta-expansions in eleven-dimensional superspace \cite{t}, we have performed a computation of the contribution of fivebrane instantons with four fermionic zeromodes in M-theory compactifications on Calabi-Yau fourfolds with (normal) flux. The calculus of fivebrane instantons in M-theory is still largely unexplored, and we hope that our computation will initiate a more extensive study of these phenomena directly in M-theory. We have found that no superpotential is generated in this case -- a result which is compatible with replacing the arithmetic genus criterion by the condition $\chi_F=1$, where $\chi_F$ is the flux-dependent `index' of \cite{kall}. It would be interesting to reexamine this statement when the condition of normal flux is relaxed. It would be desirable to explore the obvious generalizations of our computation: fivebrane instanton contributions to non-holomorphic couplings, and/or contributions to higher-derivative and multi-fermion couplings as in \cite{bw}. The expansions of \cite{t} can also be used to study instantons with more than four zeromodes.
So far the precise relation between instanton calculus in M-theory \cite{bbs, hm} and the rules of D-instanton computations in string theory put forward in \cite{gg, bill, blum} has not been clearly spelled out. Understanding this relation may help clarify some of the conceptual issues associated with the M-theory calculus, see {\it e.g.} \cite{hm}. This would be another interesting possibility for future investigation. Last but not least, it is important to address the reservations, discussed in the introduction, about the fivebrane action of \cite{pst} and to incorporate the topological considerations of \cite{beloa, belob} in a supersymmetric context\footnote{I would like to thank Greg Moore for correspondence on this point.}. \vfill\break \section*{Acknowledgment} I am indebted to Greg Moore for encouragement, correspondence and valuable comments on several previous versions of the manuscript. I am also grateful to Ralph Blumenhagen, Michael Haack, Peter Mayr and Henning Samtleben for useful discussions and correspondence.
\section{Introduction} {\em Introduction.--} Understanding quantum dynamics and control is essential to modern quantum technologies such as adiabatic quantum computation~\cite{AQC,Childs}. A quantum dynamical process is driven by its corresponding Hamiltonian, where the Hamiltonian specifies a physical realization. For instance, spin dynamics can be driven by the Zeeman Hamiltonian, which is physically realized by applying magnetic fields~\cite{Messiah}. Different realized dynamics, for example fast vs. adiabatically controlled passage~\cite{Rice1,Rice2,Berry09}, may seem remote from one another. However, in this paper we show that they \red{can} in fact be intimately related. For example, a physical realization of adiabatic quantum computation (AQC) suffers from its slowness, with the resultant destructive effects of decoherence and the occurrence of quantum phase transitions during dynamics~\cite{Adolfo,Torrontegui13,Jing13,JW2}. Here, we \red{prove rigorously} that different dynamics, described by two Hamiltonians defined on the same Hilbert space, can always be transformed into one another. As a consequence, the physical outcome of AQC can be made equivalent to the outcome of a dynamical process that can be extremely fast. This relationship between different dynamics is based on a straightforward but profound proposition described below, implying, for example, that an adiabatic process may be physically realized with a fast Hamiltonian. Similarly, it implies a hidden adiabaticity amongst rapid dynamics. {\em The Transformability proposition.--} Given any two Hamiltonians $\hat H$ and $\hat h$ in the same Hilbert space, which can be time-independent or time-dependent, the corresponding Schr\"odinger equations are \begin{equation}\label{e1} i\partial_t\hat U=\hat H(t)\hat U, \end{equation} and \begin{equation}\label{e2} i\partial_t\hat u=\hat h(t)\hat u, \end{equation} where $\hat U$ and $\hat u$ are the propagators of $\hat H(t)$ and $\hat h(t)$, respectively. Proposition: {\em Two Hamiltonians $\hat H$ and $\hat h$ can always be transformed into one another.} Mathematically, this claim can be expressed as follows: for given $\hat H$ and $\hat h$, there exists at least one unitary operator $\hat S$ such that \begin{equation}\label{e3} \hat h=\hat S^{\dagger} \hat H \hat S-i\hat S^{\dagger}\dot{\hat S}, \end{equation} and \begin{equation}\label{e4} \hat H=\hat S \hat h \hat S^{\dagger}-i\hat S\dot{\hat S}^{\dagger}, \end{equation} where the overdot indicates a time derivative. Proof: The operator $\hat S$ enables the transformation $\hat U=\hat S\hat u$. Substituting it into the Schr\"odinger equation~(\ref{e1}), we obtain Eq.~(\ref{e2}) with the {\em effective} Hamiltonian $\hat h=\hat S^{\dagger} \hat H \hat S-i\hat S^{\dagger}\dot{\hat S}$. Similarly, if we begin with the Schr\"odinger equation~(\ref{e2}) we transform it to Eq.~(\ref{e1}) by identifying its Hamiltonian with $\hat H=\hat S \hat h \hat S^{\dagger}-i\hat S\dot{\hat S}^{\dagger}$. Because the solutions $\hat u$ and $\hat U$ of the Schr\"odinger equations~(\ref{e1}) and (\ref{e2}) always exist, so does the product $\hat U\hat u^{\dagger}$. \red{By setting} $\hat S=\hat U\hat u^{\dagger}$, we can reproduce the Hamiltonians~(\ref{e3}) and (\ref{e4}) and therefore formally prove the \red{universal} existence of the \red{unitary} transformation $\hat S$.
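As a simple consistency check of the proof, consider time-independent $\hat H$ and $\hat h$, for which $\hat S=\hat U\hat u^{\dagger}=e^{-i\hat Ht}e^{i\hat ht}$. A one-line computation,
\begin{equation}
-i\hat S^{\dagger}\dot{\hat S}=-i\hat S^{\dagger}\left(-i\hat H\hat S+ie^{-i\hat Ht}\hat h e^{i\hat ht}\right)=-\hat S^{\dagger}\hat H\hat S+\hat h,
\end{equation}
then gives $\hat S^{\dagger}\hat H\hat S-i\hat S^{\dagger}\dot{\hat S}=\hat h$, in agreement with Eq.~(\ref{e3}).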
\red{In other words, there is {\em always} a unitary transformation that enables two arbitrarily given Hamiltonians in the same Hilbert space to be transformed into one another.} We term this property transformability, and the two Hamiltonians $\hat H$ and $\hat h$ are {\em transformable}. The special case of the proposition with $\hat h$ being time-independent was proven a quarter-century ago in Ref.~\cite{PLA93}. {\em Rapid adiabatic quantum computation.--} Adiabatic quantum computation is one of the most promising candidates to realize quantum computing~\cite{Lidar}. The approach is based on the adiabatic theorem: the solution to a computational problem of interest is encoded in the ground state of a potentially complicated Hamiltonian. To approach the solution, one prepares a system with a simpler Hamiltonian and initializes it in its ground state. By evolving the Hamiltonian sufficiently slowly towards the desired (complex) one, the adiabatic theorem guarantees that the system follows the instantaneous ground state, finally realizing the target ground state. Evidently, the slowness of AQC could be the main impediment to its utility for quantum algorithms. The universal transformability property suggests that a slow AQC process $\hat u$ can be mapped onto a fast quantum process $\hat U$ that is more controllable and suffers less decoherence during processing. Note that, as a convention, we use lower (upper) case to denote the slow (fast) dynamics throughout the paper. Consequently, AQC can be physically realized by a fast process. The eigenstate $|E(T)\rangle$ of the problem Hamiltonian at time $T$ is obtained by implementing the adiabatic process \begin{equation} |E(T)\rangle \sim \hat u|E(0)\rangle=\hat S^{\dagger} (T) \hat U (T) |E(0)\rangle. \label{eq:adia} \end{equation} The second equality suggests physically implementing $|E(T)\rangle$ by the following circuit: the first gate, $\hat U(T)$, is governed by $\hat H$; the transformation $\hat S^{\dagger}(T)$ then acts on the output. {\em Adiabatic algorithms and their fast counterparts.--} Consider now the proposition in the context of a realistic AQC, i.e. an ensemble of qubits described by a family of slowly-varying Hamiltonians, \begin{equation}\label{e10} \hat h=\Gamma(t) \sum_{i}\hat X_{i}+\hat h_P(\{\hat Z_i\}). \end{equation} Here, $\Gamma(t)$ is large at $t=0$ and slowly evolves towards zero at $t=T$. The Hamiltonian $\hat h_P(\{\hat Z_i\})$ depends on the $\hat Z_i$ components of the qubits. The solution of a {\em hard} problem is encoded within $\hat h_P$. For example, Grover's search problem~\cite{Lidar} is realized with \begin{equation}\label{e11} \hat h_P(\{\hat Z_i\})=\hat I-|B\rangle \langle B|, \end{equation} where $|B\rangle$ is the {\em marked} state, and $|B\rangle \langle B|$ is a function of the $\hat Z_i$. In the D-Wave system the Hamiltonian (\ref{e10}) is given by \begin{equation}\label{e12} \hat h_P(\{\hat Z_i\})=\sum_{i} h_{i}\hat Z_{i}+\sum_{ij} J_{ij}\hat Z_i \hat Z_j, \end{equation} with the parameters $h_i$ and $J_{ij}$. Applying a fast magnetic field, we can realize the corresponding fast-varying Hamiltonian \begin{equation}\label{e13} \hat H=\gamma(t)\sum_{i}\hat X_{i}+\hat h_P(\{e^{-i\phi_{i}(t)\hat X_{i}}\hat Z_{i}e^{i\phi_{i}(t)\hat X_{i}}\}), \end{equation} where $\gamma(t)=\Gamma(t)+\dot{\phi}$ (taking identical phases $\phi_i=\phi$) is a fast-varying function. Here, the transformation matrix is given by $\hat S(t)=\Pi_i e^{-i\phi_i(t)\hat X_i}$.
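Indeed, with $\hat S(t)=\Pi_i e^{-i\phi_i(t)\hat X_i}$ a direct computation gives
\begin{equation}
-i\hat S\dot{\hat S}^{\dagger}=\sum_{i}\dot{\phi}_{i}\hat X_{i},\qquad \hat S\,\hat h_P(\{\hat Z_i\})\,\hat S^{\dagger}=\hat h_P(\{e^{-i\phi_{i}(t)\hat X_{i}}\hat Z_{i}e^{i\phi_{i}(t)\hat X_{i}}\}),
\end{equation}
while the transverse-field term $\Gamma(t)\sum_i\hat X_i$ commutes with $\hat S$; substituting into Eq.~(\ref{e4}) with identical phases $\phi_i(t)=\phi(t)$ reproduces Eq.~(\ref{e13}).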
{\em Built-in adiabaticity.--} The discussion above focuses on implementing fast dynamics to achieve the adiabatic result, i.e., on replacing slow dynamics by fast dynamics. Here we show that the opposite is also the case. That is, fast dynamics can be shown to have a ``hidden adiabaticity''. As an example, consider a qubit under external fields, with the NMR-type Hamiltonian
\begin{equation}\label{e5} \hat H=\frac{\omega_0(t)}{2}\hat Z+g\left[\hat X\cos\phi(t)+\hat Y\sin\phi(t)\right]. \end{equation}
Here, $\hat X$, $\hat Y$ and $\hat Z$ are the Pauli operators, while $\omega_0(t)$ and $\phi(t)$ potentially depend on time and are allowed to be fast-varying. A unitary transformation $\hat S=\exp\left[i\frac{\theta(t)-\phi(t)}{2}\hat Z\right]$ brings $\hat H$ into $\hat h$,
\begin{equation}\label{e6} \hat h=\frac{\omega_0(t)+\dot{\theta}-\dot{\phi}}{2}\hat Z+g\left[\hat X\cos\theta(t)+\hat Y\sin\theta(t)\right]. \end{equation}
We assume that $g$ is a constant and that the newly introduced time-dependent parameter $\theta(t)$, as well as $\frac{\omega_0(t)+\dot{\theta}-\dot{\phi}}{2}$, are controlled such that they vary slowly. The transformation $\hat S$ thus brings the system into the adiabatic domain. In other words, a system driven by a fast-varying $\hat H$ has a built-in, hidden adiabaticity characterized by $\hat h$. In the particular case where $\phi(t)=\omega t$ and $\omega_0$, $\omega$ are constants, we can easily obtain the solution, that is, the time evolution operator corresponding to $\hat H$,
\begin{equation}\label{e7} \hat U=\exp\left(-i\frac{\omega \hat Z}{2} t\right)\exp\left(-i\frac{2g\hat X-\Omega \hat Z}{2}t\right), \end{equation}
where $\Omega=\omega-\omega_0$. We can control parameters and realize the function $\theta(t)=\Omega t$, resulting in
\begin{equation}\label{e8} \hat u=\exp\left(-i\frac{\Omega \hat Z}{2}t\right)\exp\left(-i\frac{2g\hat X-\Omega \hat Z}{2}t\right). \end{equation}
The instantaneous eigenstates of $\hat h=g\exp\left(-i\frac{\Omega \hat Z}{2}t\right)\hat X\exp\left(i\frac{\Omega \hat Z}{2}t\right)$ are
\begin{equation}\label{e9} |E_{\pm}(t)\rangle=\exp\left(-i\frac{\Omega \hat Z}{2} t \right)|\pm\rangle. \end{equation}
These states are proportional to the wave functions $\hat u|\pm\rangle$ ($\hat X|\pm\rangle=\pm|\pm\rangle$), as stated by the adiabatic theorem, in the adiabatic regime $g\gg \Omega$. Therefore, in order to physically realize $|E_{\pm}(T)\rangle$, say at $T=\pi/2\Omega$ when $\hat h(T)=g\hat Y$, one needs to implement two gates: $\hat U(T)$, then $\hat S^\dagger(T)= \exp(i\frac{\pi \omega_0}{4\Omega} \hat Z)$.
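This single-qubit construction is straightforward to verify numerically. The sketch below is our own illustration (arbitrary parameter values in the regime $g\gg\Omega$, $\hbar=1$): it applies the closed-form gate $\hat U(T)$ of Eq.~(\ref{e7}) followed by $\hat S^{\dagger}(T)$ and compares the result with the instantaneous eigenstate of Eq.~(\ref{e9}).
\begin{verbatim}
# Check of the single-qubit example (our illustration; parameters chosen
# arbitrarily in the adiabatic regime g >> Omega).
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

w0, w, g = 1.0, 1.3, 30.0             # omega_0, omega, g
Om = w - w0                           # Omega = omega - omega_0

def U(t):                             # closed form, Eq. (7)
    return expm(-1j * w * Z * t / 2) @ expm(-1j * (2*g*X - Om*Z) * t / 2)

T = np.pi / (2 * Om)                  # at this time h(T) = g Y
Sdag = expm(1j * np.pi * w0 / (4 * Om) * Z)           # S^dagger(T)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # X|+> = +|+>

psi = Sdag @ U(T) @ plus              # the two-gate circuit
E_plus = expm(-1j * Om * Z * T / 2) @ plus            # Eq. (9)
print(abs(np.vdot(E_plus, psi)))      # -> 1 up to O(Omega/g) corrections
\end{verbatim}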
We now come to a simple but nontrivial corollary following immediately from the transformability proposition.

{\em The transformability corollary at different times.--} Let $\hat H$ (the fast Hamiltonian) be a function of the normalized, or scaling, time $\tau=t/T$, where $T$ is a characteristic time of the dynamical system. Eq.~(\ref{e1}) can then be rewritten as
\begin{equation}\label{e14} i\partial_\tau\hat U(\tau)=T\hat H(\tau)\hat U(\tau). \end{equation}
Likewise,
\begin{equation}\label{e15} i\partial_\tau\hat u(\tau)=T'\hat h(\tau)\hat u(\tau), \end{equation}
where $\tau=t'/T'$ and the latter equation describes a slower process, so that $T<T'$; here $t'$ ($T'$) is the real time (characteristic time) of the Schr\"odinger equation~(\ref{e15}). The scaling times of the two equations may be identical or different. Here we set the same scaling time $\tau$ with the constraint $t'/T'=t/T$. As proved in the transformability proposition, mathematically there is at least one unitary operator $\hat S$ such that
\begin{equation}\label{e16} T'\hat h=\hat S^{\dagger} T\hat H \hat S-i\hat S^{\dagger}\partial_\tau{\hat S}, \end{equation}
and
\begin{equation}\label{e17} T\hat H=\hat S T'\hat h \hat S^{\dagger}-i\hat S\partial_\tau{\hat S}^{\dagger}, \end{equation}
for a given scaling time $\tau$. The simplest non-trivial example is $\hat S=1$, such that $\hat H=\frac {T'}{T}\hat h$ and $\hat U(\tau)=\hat u(\tau)$. The latter equality, rewritten as $\hat U(t)=\hat u(\frac{T'}{T}t)$ with $T<T'$, is an exact proof that the runtime of an adiabatic quantum process can be reduced $\frac {T'}{T}$ times -- an exact trade-off between energy and time. Specifically, Eq.~(\ref{e7}) can be rewritten as
\begin{equation}\label{e18} \hat U(\tau)=\hat u(\tau)=\exp(-i\pi \hat Z\tau)\exp\left(-i(T g\hat X-\pi \hat Z)\tau\right), \end{equation}
where $\tau=t/T=t'/T'$, $gT=g'T'$, and we have set $\omega_0=0$ and chosen $\omega T=2\pi$. Here $\hat U(\tau)$ ($\hat u(\tau)$) may denote a fast (adiabatic) evolution if $T'$ is in the adiabatic regime while $T$ is not. This result suggests a strategy for experimentally implementing an expedited adiabatic process: simply increase the strength of the driving Hamiltonian to its largest possible value. In general, the universal existence of $\hat S$ and the equality
\begin{equation}\label{e199} \hat u(\frac {t'}{T'})=\hat S^{\dagger}(\frac {t}{T}) \hat U(\frac {t}{T}) \end{equation}
show that an adiabatic quantum algorithm can always be mimicked by at most two fast gates, where $T' \gg T$ is in the adiabatic regime.

{\em Conclusion.--} Two arbitrarily given Hamiltonians within the same Hilbert space can always be transformed into each other via a unitary transformation. This seemingly simple but rigorous theorem is powerful: it allows one to implement a slowly varying evolution within a fast protocol, which is less susceptible to errors. We exemplified this result on a qubit system and on problems in the context of quantum adiabatic computing. The transformability of open-quantum-system Hamiltonians is left for future work.

\acknowledgments L.A. Wu acknowledges grant support from the Basque Government (Grant No. IT986-16) and the Spanish MICINN (Grant No. FIS2015-67161-P). D. S. acknowledges the Canada Research Chairs Program. We thank Professor P. Brumer for very helpful discussions.
\section{Introduction} Procurement auctions for awarding contracts to supply goods or services are prevalent in many modern resource allocation situations. In several of these scenarios, the buyer plays the role of an intermediary who purchases some goods or services from the suppliers and resells them in the consumer market. For example, in the retail sector, an intermediary procures products from different vendors (perhaps through an auction) and resells them in consumer markets for a profit. In a cloud computing setting \cite{abhinandan2013}, an intermediary buys cloud resources from different service providers (again through an auction) and resells these resources to requesters of cloud services. The objective of the intermediary in each of these cases is to maximize the profit earned in the process of reselling. Solving such problems via an optimal auction of the kind discussed in the auction literature \cite{myerson1981optimal} inevitably requires the assumption of a \emph{prior} distribution on the sellers' valuations. The requirement of a known prior distribution often imposes severe practical limitations. A prior distribution estimated from the past transactions of the bidders must be used with extreme care, as bidders can deliberately behave differently than they did before. Moreover, estimating the prior distribution ideally requires a large number of samples; in reality, we can only approximate it with a finite number of samples. Also, prior-dependent auctions are non-robust: if the prior distribution changes, we are compelled to repeat the entire computation for the new prior, which is often computationally hard. This motivates us to study \emph{prior-free} auctions. In particular, in this paper, we study profit-maximizing prior-free procurement auctions with one buyer and $n$ sellers.

\section{Prior Art and Contributions} The problem of designing a revenue-optimal auction was first studied by Myerson \cite{myerson1981optimal}. Myerson considers the setting of a seller trying to sell a single object to one of several possible buyers and characterizes all revenue-optimal auctions that are BIC (Bayesian Incentive Compatible) and IIR (Interim Individually Rational). Dasgupta and Spulber \cite{dasgupta1990managing} consider the problem of designing an optimal procurement auction where suppliers have unlimited capacity. Iyengar and Kumar \cite{iyengar2008optimal} consider the setting where the buyer purchases multiple units of a single item from the suppliers and resells them in the consumer market to earn a profit. We consider the same setting here; however, we focus on the design of \emph{prior-free} auctions, unlike the \emph{prior-dependent} optimal auction designed in \cite{iyengar2008optimal}.

\subsection{Related Work and Research Gaps} Goldberg et al. \cite{goldberg2001competitive} initiated work on the design of prior-free auctions and studied a class of single-round sealed-bid auctions for an item in unlimited supply, such as digital goods, where each bidder requires at most one unit. They introduced the notion of \emph{competitive} auctions and proposed prior-free randomized competitive auctions based on random sampling. In \cite{goldberg2006competitive}, the authors consider the non-asymptotic behavior of the random sampling optimal price auction (RSOP) and show that its performance is within a large constant factor of a prior-free benchmark. Alaei et al.
\cite{alaei2009random} provide a nearly tight analysis of RSOP which shows that it is 4-competitive for a large class of instances and 4.68-competitive for a much smaller class of remaining instances. The competitive ratio has further been improved to 3.25 by Hartline and McGrew \cite{hartline2005optimal} and to 3.12 by Ichiba and Iwama \cite{ichiba2010averaging}. Recently, Chen et al. \cite{chen2014optimal} designed an optimal competitive auction whose competitive ratio matches the lower bound of 2.42 derived in \cite{goldberg2006competitive} and therefore settles an important open problem in the design of digital goods auctions. Beyond the digital goods setting, Devanur et al. \cite{devanur2012envy} have studied prior-free auctions under several settings such as multi-unit, position auctions, downward-closed, and matroid environments. They design a prior-free auction with a competitive ratio of 6.24 for the multi-unit and position auctions using the 3.12-competitive auction for digital goods given in \cite{ichiba2010averaging}. They also design an auction with a competitive ratio of 12.5 for multi-unit settings by directly generalizing the random sampling auction given in \cite{goldberg2006competitive}. Our setting is different from the above works on forward auctions, as we consider a procurement auction with capacitated sellers. Conceptually closer to our work is budget-feasible mechanism design (\cite{singer2010budget}, \cite{bei2012budget}, \cite{chen2011approximability}, \cite{dobzinski2011mechanisms}), which models a simple procurement auction. Singer \cite{singer2010budget} considers single-dimensional mechanisms that maximize the buyer's valuation function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. There, the objective is to maximize the social welfare derived from the subset of items procured under a budget, and the benchmark considered is welfare optimal. On the other hand, our work considers maximizing the profit or revenue of the buyer, which is fundamentally different from the previous objective, and our benchmark is revenue optimal. A simple example can be constructed to show that Singer's benchmark is not revenue optimal, as follows. Suppose there is a buyer with budget \$100 and valuation function $V(k) = \$25k$ ($k$ is the number of items procured) and $5$ sellers with costs \$10, \$20, \$50, \$60, and \$70. Then Singer's benchmark will procure 3 items (with costs \$10, \$20, and \$50), earning a negative utility for the buyer; this is welfare optimal but not revenue optimal, as an omniscient revenue-maximizing allocation will procure 2 items (with costs \$10 and \$20), yielding a revenue or utility of \$20 to the buyer. Although the design of prior-free auctions has generated wide interest in the research community (\cite{alaei2009random}, \cite{chen2014optimal}, \cite{devanur2013prior}, \cite{devanur2012envy}, \cite{goldberg2006competitive}, \cite{goldberg2001competitive}, \cite{hartline2005optimal}, \cite{ichiba2010averaging}), most of the works have considered the forward setting. The reverse auction setting is subtly different from forward auctions, especially if the sellers are capacitated, and the techniques used for forward auctions cannot be trivially extended to the case of procurement auctions. To the best of our knowledge, the design of profit-maximizing prior-free multi-unit procurement auctions is yet unexplored.
Moreover, the existing literature on prior-free auctions is limited to the single-dimensional setting where each bidder has only one private type, namely the valuation per unit of an item. However, in a procurement auction, the sellers are often capacitated and strategically report their capacities to increase their utilities. Therefore, the design of bi-dimensional prior-free procurement auctions is extremely relevant in practice, and we believe this paper derives the first set of results in this direction.

\subsection{Contributions} In this paper, we design profit-maximizing prior-free procurement auctions where a buyer procures multiple units of an item from $n$ sellers and subsequently resells the units to earn revenue. Our contributions are three-fold. First, we look at unit-capacity sellers and define two benchmarks for analyzing the performance of any prior-free auction -- (1) an optimal single price auction $(\mathcal{F})$ and (2) an optimal multi-price auction $(\mathcal{T})$. We show that no prior-free auction can be constant-competitive against either of the two benchmarks. We then consider a slightly relaxed benchmark $(\mathcal{F}^{(2)})$, which is constrained to procure at least two units, and design a prior-free auction PEPA (Profit Extracting Procurement Auction) which is 4-competitive against $\mathcal{F}^{(2)}$ for any concave revenue curve. Second, we study a setting where the sellers have non-unit capacities that are common knowledge and derive similar results. In particular, we propose a prior-free auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is truthful for any concave revenue curve. Third, we obtain results in the inherently harder bi-dimensional case where the per-unit valuation as well as the capacity are private information of the sellers. We show that PEPAC is truthful and constant-competitive for the specific case of linear revenue curves. We believe the proposed auctions represent the first effort in single-dimensional and bi-dimensional prior-free multi-unit procurement auctions. Further, these auctions can be easily adapted to real-life procurement situations due to their simple rules and prior-independence.

\section{Sellers with Unit Capacities} \label{model1} We consider a single-round procurement auction setting with one buyer (retailer, intermediary, etc.) and $n$ sellers, where each seller has a single unit of a homogeneous item. The buyer procures multiple units of the item from the sellers and subsequently resells them in an outside consumer market, earning a revenue of $\mathcal{R}(q)$ from selling $q$ units of the item. We assume that the revenue curve of the outside market $\mathcal{R}(q)$ is concave with $\mathcal{R}(0) = 0$. This is motivated by the following standard argument from economics. According to the \emph{law of demand}, the quantity demanded decreases as the price per unit increases. It can be easily shown that the marginal revenue then falls with an increase in the number of units sold, and so the revenue curve is concave.

\subsection{Procurement Auction} We assume that the buyer (auctioneer) has \emph{unlimited demand}, but as each seller (bidder) has unit capacity, the number of units the buyer can procure and resell is limited by the total number of sellers. We make the following assumptions about the bidders: \begin{enumerate} \item Each bidder has a private valuation $v_i$ which represents the true minimum amount he is willing to receive to sell a single unit of the item.
\item Bidders' valuations are independently and identically distributed. \item The utility of a bidder is given as payment minus valuation. \end{enumerate} Invoking the \emph{revelation principle} \cite{myerson1981optimal}, we restrict our attention to single-round, sealed-bid, truthful auctions. We now define the notions of a single-round sealed-bid auction, a bid-independent auction, and a competitive auction for our setting.

\subsubsection*{Single-round Sealed-bid Auction ($\mathcal{A}$).} \begin{enumerate} \item The bid submitted by bidder $i$ is $b_i$. The vector of all the submitted bids is denoted by $\textbf{b}$. Let $\textbf{b}_{-i}$ denote the masked vector of bids where $b_i$ is removed from $\textbf{b}$. \item Given the bid vector $\textbf{b}$ and the revenue curve $\mathcal{R}$, the auctioneer computes an allocation $\textbf{x} = (x_1, \ldots, x_n)$ and payments $\textbf{p} = (p_1, \ldots, p_n)$. If bidder $i$ sells the item, $x_i = 1$ and we say bidder $i$ wins. Otherwise, bidder $i$ loses and $x_i = 0$. The auctioneer pays an amount $p_i$ to bidder $i$. We assume that $p_i \geq b_i$ for all winning bidders and $p_i = 0$ for all losing bidders. \item The auctioneer resells the units bought from the sellers in the outside consumer market. The profit of the auction (or auctioneer) is given by \begin{center} $\mathcal{A}(\textbf{b}, \mathcal{R}) = \mathcal{R}(\sum_{i=1}^{n}{x_i(\textbf{b}, \mathcal{R})}) - \sum_{i=1}^{n}{p_i(\textbf{b}, \mathcal{R})}$. \end{center} \end{enumerate} The auctioneer wishes to maximize her profit while satisfying IR (Individual Rationality) and DSIC (Dominant Strategy Incentive Compatibility). As bidding $v_i$ is a dominant strategy for bidder $i$ in a truthful auction, in the remainder of this paper we assume that $b_i = v_i$.

\subsubsection*{Bid-independent Auction.} An auction is \emph{bid-independent} if the payment offered to a bidder is independent of that bidder's bid. It can certainly depend on the bids of the other bidders and on the revenue curve. Such an auction is determined by a (possibly randomized) function $f$ which takes the masked bid vectors and the revenue curve as input and maps them to payments, which are non-negative real numbers. Let $\mathcal{A}_f(\textbf{b}, \mathcal{R})$ denote the bid-independent auction defined by $f$. For each bidder $i$, the allocation and payments are determined in two phases as follows: \begin{enumerate} \item Phase I: \begin{enumerate} \item $t_i \leftarrow f(\textbf{b}_{-i}, \mathcal{R})$. \item If $t_i < b_i$, set $x_i \leftarrow 0$, $p_i \leftarrow 0$, and remove bidder $i$. \end{enumerate} Suppose $n^\prime$ is the number of bidders left. Let $t_{[i]}$ denote the $i^{th}$ lowest value of $t_j$ among the remaining $n^\prime$ bidders. Let $x_{[i]}$ and $p_{[i]}$ be the corresponding allocation and payment. We now choose the allocation that maximizes the profit of the buyer. \item Phase II: \begin{enumerate} \item $k \leftarrow \underset{0 \leq i \leq n^\prime}{\operatorname{argmax}}\ (\mathcal{R}(i) - \sum_{j=1}^i{t_{[j]}})$. \item Set $x_{[i]} \leftarrow 1$ and $p_{[i]} \leftarrow t_{[i]}$ for $i \in \{1, \ldots, k\}$. \item Otherwise, set $x_{[i]} \leftarrow 0, p_{[i]} \leftarrow 0$. \end{enumerate} \end{enumerate} For any bid-independent auction, the allocation of bidder $i$ is non-increasing in the valuation $v_i$ and his payment is independent of his bid. It follows from Myerson's characterization \cite{myerson1981optimal} of truthful auctions that any bid-independent auction is truthful.
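To make the two-phase structure concrete, the following sketch implements it for unit-capacity sellers. It is our own illustration: the threshold function $f$ and the revenue curve are placeholder choices, since any bid-independent $f$ yields a truthful auction.
\begin{verbatim}
# Sketch of the two-phase bid-independent auction (unit capacities).
# The threshold rule f and the revenue curve R are placeholder choices.
import numpy as np

R = lambda q: 10.0 * np.sqrt(q)           # a concave R with R(0) = 0

def f(masked_bids, R):
    return float(np.median(masked_bids))  # placeholder threshold rule

def bid_independent_auction(bids, R):
    n = len(bids)
    x, p = [0] * n, [0.0] * n
    # Phase I: offer t_i = f(b_{-i}, R); drop bidder i if t_i < b_i
    t = [f(np.delete(bids, i), R) for i in range(n)]
    alive = sorted((i for i in range(n) if t[i] >= bids[i]),
                   key=lambda i: t[i])
    # Phase II: pick k maximizing R(k) minus the k lowest thresholds
    profits = [R(k) - sum(t[i] for i in alive[:k])
               for k in range(len(alive) + 1)]
    k = int(np.argmax(profits))
    for i in alive[:k]:
        x[i], p[i] = 1, t[i]              # winners are paid t_i >= b_i
    return x, p, profits[k]

print(bid_independent_auction(np.array([1.0, 2.0, 2.5, 4.0, 9.0]), R))
\end{verbatim}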
\subsubsection*{Competitive Auction.} In the absence of any prior distribution over the valuations of the bidders, we cannot compare the profit of a procurement auction with the average profit of the optimal auction. Rather, we measure the performance of a truthful auction on any bid vector by comparing it with the profit that would have been achieved by an omniscient optimal auction ($OPT$), the optimal auction which knows all the true valuations in advance without needing to elicit them from the bidders. \begin{definition} \emph{$\beta$-competitive auction ($\beta > 1$)}: An auction $\mathcal{A}$ is \emph{$\beta$-competitive} against $OPT$ if for all bid vectors $\textbf{b}$, the expected profit of $\mathcal{A}$ on $\textbf{b}$ satisfies \begin{center} $\mathbb{E}[\mathcal{A}(\textbf{b}, \mathcal{R})] \geq \displaystyle \frac{OPT(\textbf{b}, \mathcal{R})}{\beta}$. \end{center} We refer to $\beta$ as the \emph{competitive ratio} of $\mathcal{A}$. Auction $\mathcal{A}$ is \emph{competitive} if its competitive ratio $\beta$ is a constant. \end{definition}

\subsection{Prior-Free Benchmarks} \label{bench} As a first step in comparing the performance of any prior-free procurement auction, we need to come up with the right metric for comparison, that is, a benchmark. It is important that we choose such a benchmark carefully for the comparison to be meaningful. Here, we start with the strongest possible benchmark: the profit of an auctioneer who knows the bidders' true valuations. This leads us to consider the two most natural metrics for comparison -- the optimal multiple price and single price auctions. We compare the performance of truthful auctions to that of the optimal multiple price and single price auctions. Let $v_{[i]}$ denote the $i$-th lowest valuation.

\subsubsection*{Optimal Single Price Auction ($\mathcal{F}$).} Let $\textbf{b}$ be a bid vector. Auction $\mathcal{F}$ on input $\textbf{b}$ determines the value $k$ such that $\mathcal{R}(k) - kv_{[k]}$ is maximized. All bidders with bid $b_i \leq v_{[k]}$ win at price $v_{[k]}$; all remaining bidders lose. We denote the optimal procurement price for $\textbf{b}$, which gives the optimal profit, by $OPP(\textbf{b}, \mathcal{R})$. The profit of $\mathcal{F}$ on input $\textbf{b}$ is denoted by $\mathcal{F}(\textbf{b}, \mathcal{R})$. So we have \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq n} (\mathcal{R}(i) - iv_{[i]})$,\\ $OPP(\textbf{b}, \mathcal{R}) = \underset{v_{[i]}}{\operatorname{argmax}}\ (\mathcal{R}(i) - iv_{[i]})$. \end{center}

\subsubsection*{Optimal Multiple Price Auction ($\mathcal{T}$).} Auction $\mathcal{T}$ buys from each bidder at her bid value. So auction $\mathcal{T}$ on input $\textbf{b}$ determines the value $l$ such that $\mathcal{R}(l) - \sum_{i=1}^{l}{v_{[i]}}$ is maximized. The first $l$ bidders win at their bid values; all remaining bidders lose. The profit of $\mathcal{T}$ on input $\textbf{b}$ is given by \begin{center} $\mathcal{T}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq n} (\mathcal{R}(i) - \sum_{j=1}^{i}{v_{[j]}})$. \end{center} It is clear that $\mathcal{T}(\textbf{b}, \mathcal{R}) \geq \mathcal{F}(\textbf{b}, \mathcal{R})$ for any bid vector $\textbf{b}$ and any revenue curve $\mathcal{R}$.
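Both benchmarks are straightforward to compute from a sorted bid vector; a minimal sketch follows (the revenue curve and the bids are illustrative choices of ours).
\begin{verbatim}
# Computing the benchmarks F and T and the optimal procurement price OPP
# for a given bid vector (unit capacities; R and the bids are illustrative).
import numpy as np

R = lambda q: 10.0 * np.sqrt(q)

def benchmarks(bids, R):
    v = np.sort(bids)                     # v_[1] <= ... <= v_[n]
    n = len(v)
    F = [0.0] + [R(i) - i * v[i - 1] for i in range(1, n + 1)]  # single price
    T = [0.0] + [R(i) - v[:i].sum() for i in range(1, n + 1)]   # pay-your-bid
    kF = int(np.argmax(F))
    OPP = v[kF - 1] if kF > 0 else None
    return max(F), max(T), OPP

print(benchmarks(np.array([1.0, 2.0, 2.5, 4.0, 9.0]), R))  # T >= F always
\end{verbatim}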
However, $\mathcal{F}$ does not perform very poorly compared to $\mathcal{T}$; we now prove a bound relating the performance of $\mathcal{F}$ and $\mathcal{T}$. Specifically, we observe that in the worst case, the maximum ratio of $\mathcal{T}$ to $\mathcal{F}$ is logarithmic in the number $n$ of bidders. \begin{lemma} \label{benchmarkrelation} For any \normalfont \textbf{b} \textit{and any concave revenue curve $\mathcal{R}$}, \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) \geq \displaystyle \frac{\mathcal{T}(\textbf{b}, \mathcal{R})}{\mathrm{ln} \ n}$. \end{center} \end{lemma} \begin{pfof} We use the following property of a concave revenue curve with $\mathcal{R}(0)=0$: $\displaystyle \frac{\mathcal{R}(i)}{i} \geq \frac{\mathcal{R}(j)}{j}$ $\ \forall i \leq j$. Suppose $\mathcal{T}$ buys $k$ units and $\mathcal{F}$ buys $l$ units from the sellers. \begin{align*} \mathcal{T}(\textbf{b}, \mathcal{R}) &= \mathcal{R}(k) - \sum_{i=1}^{k}{v_{[i]}} = \sum_{i=1}^{k}\left({\frac{\mathcal{R}(k)}{k} - v_{[i]}}\right) \\ &\leq \sum_{i=1}^{k}\left({\frac{\mathcal{R}(i)}{i} - v_{[i]}}\right) \leq \sum_{i=1}^{k}{\frac{\mathcal{R}(l)- lv_{[l]}}{i}} \\ &\leq \mathcal{F}(\textbf{b}, \mathcal{R})(\mathrm{ln}\ n + O(1)), \end{align*} where the second inequality uses $\mathcal{R}(i) - iv_{[i]} \leq \mathcal{F}(\textbf{b}, \mathcal{R}) = \mathcal{R}(l) - lv_{[l]}$ and the last step uses $\sum_{i=1}^{k}1/i \leq \ln n + O(1)$. \end{pfof} The result implies that if an auction $\mathcal{A}$ is constant-competitive against $\mathcal{F}$, then it is $O(\ln n)$-competitive against $\mathcal{T}$. We now show that no truthful auction can be constant-competitive against $\mathcal{F}$, and hence none can be competitive against $\mathcal{T}$ either. \begin{theorem} \label{impossibility} For any truthful auction $\mathcal{A}_f$, any revenue curve $\mathcal{R}$, and any $\beta \geq 1$, there exists a bid vector \normalfont \textbf{b} \textit{such that the expected profit of $\mathcal{A}_f$ on} \normalfont \textbf{b} \textit{is at most} $\mathcal{F}($\textbf{b}$, \mathcal{R})/\beta$. \end{theorem} \begin{pfof} Consider a bid-independent randomized auction $\mathcal{A}_f$ on two bids, $r$ and $L < r$, where $r = \mathcal{R}(1)$. Suppose $g$ and $G$ denote the probability density function and the cumulative distribution function of the random variable $f(r, \mathcal{R})$. We fix one bid at $r$ and choose $L$ depending on the two cases. \begin{enumerate} \item Case I: $G(r) \leq 1/\beta$. We choose $L = \displaystyle \frac{r}{\beta}$. \\ Then $\mathcal{F}(\textbf{b}, \mathcal{R}) = r \left (1 - \displaystyle \frac{1}{\beta} \right)$; since the low bidder can be profitably procured only when $L \leq f(r,\mathcal{R}) \leq r$, an event of probability at most $G(r)$, and the resulting profit is at most $r - L$, we get $\mathbb{E}[\mathcal{A}_f(\textbf{b}, \mathcal{R})] \leq \displaystyle \frac{1}{\beta}\left( r - \frac{r}{\beta} \right) = \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\beta}$. \item Case II: $G(r) > 1/\beta$. We choose $L = \displaystyle r - \epsilon$ such that $G(r) - G(r - \epsilon) < 1/\beta$. As $G$ is a non-decreasing function and $G(r) > 1/\beta$, such a value of $\epsilon$ always exists. Then $\mathcal{F}(\textbf{b}, \mathcal{R}) = r - (r - \epsilon) = \epsilon$ and \begin{align*} \mathbb{E}[\mathcal{A}_f(\textbf{b}, \mathcal{R})] &= \displaystyle \int_{r - \epsilon}^{r} (r - y) g(y) \mathrm{d}y\\ &\leq \displaystyle r \int_{r - \epsilon}^{r} g(y) \mathrm{d}y - \displaystyle (r - \epsilon)\int_{r - \epsilon}^{r}g(y) \mathrm{d}y \\ &= \displaystyle \epsilon \int_{r - \epsilon}^{r} g(y) \mathrm{d}y = \displaystyle \epsilon (G(r) - G(r - \epsilon)) \\ &< \frac{\epsilon}{\beta} = \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\beta}. \end{align*} \end{enumerate} \end{pfof} Theorem \ref{impossibility} shows that we cannot match the performance of the optimal single price auction when the optimal profit is generated from the single lowest bid.
Therefore we present an auction that is \emph{competitive} against $\mathcal{F}^{(2)}$, the optimal single price auction that buys at least two units. Such an auction achieves a constant fraction of the profit of $\mathcal{F}^{(2)}$ on \emph{all inputs}.

\subsubsection*{Optimal Single Price Auction that Procures at least Two Units ($\mathcal{F}^{(2)}$).} Let $\textbf{b}$ be a bid vector. Auction $\mathcal{F}^{(2)}$ on input $\textbf{b}$ determines the value $k \geq 2$ such that $\mathcal{R}(k) - kv_{[k]}$ is maximized. The profit of $\mathcal{F}^{(2)}$ on input $\textbf{b}$ is \begin{center} $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \max \limits_{2 \leq i \leq n} (\mathcal{R}(i) - iv_{[i]})$. \end{center} Note that, though $\mathcal{F}^{(2)}$ is only slightly constrained, its performance can be arbitrarily bad in comparison to $\mathcal{F}$. We demonstrate this using a simple example where $\mathcal{F}$ procures only one unit, as follows. \begin{example} Consider the revenue curve $\mathcal{R}(k) = rk$ ($r>0$). Let $0 < \epsilon \ll r$ and bid vector $\textbf{b} = (\epsilon, r - \epsilon, r, \ldots, r)$. Then ${\mathcal{F}(\textbf{b}, \mathcal{R})} = r - \epsilon$ and $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = 2r - 2(r - \epsilon) = 2\epsilon$. Hence, $\displaystyle \frac{\mathcal{F}(\textbf{b}, \mathcal{R})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} = \frac{r - \epsilon}{2\epsilon} = \left(\frac{r}{2\epsilon} - \frac{1}{2}\right) \rightarrow \infty$ as $\epsilon \rightarrow 0$. \end{example} But if $\mathcal{F}$ chooses to buy at least two units, we have $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{F}(\textbf{b}, \mathcal{R})$. Thus, comparing auction performance to $\mathcal{F}^{(2)}$ is identical to comparing it to $\mathcal{F}$ if we exclude the bid vectors where only the lowest bidder wins in the optimal auction. From now on, we say an auction is \emph{$\beta$-competitive} if it is \emph{$\beta$-competitive} against $\mathcal{F}^{(2)}$.

\subsection{Profit Extracting Procurement Auction (PEPA)} \label{rspa} We now present a prior-free procurement auction based on random sampling. Our auction takes the bids from the bidders and partitions them into two sets, flipping a fair coin for each bid to decide which partition to assign it to. We then use each partition for market analysis and apply what we learn to run a profit-extraction sub-auction on the other partition, and vice versa. We extend the \emph{profit extraction} technique of \cite{goldberg2006competitive}. The goal of the technique is, given \textbf{b}, $\mathcal{R}$, and a target profit $P$, to find a subset of bidders who generate profit $P$.

\subsubsection*{Profit Extraction (PE$_P(\textbf{b}, \mathcal{R})$).} Given target profit $P$, \begin{enumerate} \item Find the largest value of $k$ for which $v_{[k]}$ is at most $(\mathcal{R}(k) - P)/k$. \item Pay these $k$ bidders $(\mathcal{R}(k) - P)/k$ each and reject the others. \end{enumerate}

\subsubsection*{Profit Extracting Procurement Auction (PEPA).} \begin{enumerate} \item Partition the bids \textbf{b} uniformly at random into two sets \textbf{b$^{\prime}$} and \textbf{b$^{\prime\prime}$}: for each bid, flip a fair coin, and with probability 1/2 put the bid in \textbf{b$^{\prime}$} and otherwise in \textbf{b$^{\prime\prime}$}.
\item Compute $F^\prime = \mathcal{F}(\textbf{b}^\prime, \mathcal{R})$ and $F^{\prime\prime} = \mathcal{F}(\textbf{b}^{\prime\prime}, \mathcal{R})$, the optimal single price profits for \textbf{b$^{\prime}$} and \textbf{b$^{\prime\prime}$}, respectively. \item Compute the auction results of PE$_{F^{\prime\prime}}$(\textbf{b$^{\prime}$}, $\mathcal{R})$ and PE$_{F^\prime}$(\textbf{b$^{\prime\prime}$}, $\mathcal{R})$. \item Run whichever of PE$_{F^{\prime\prime}}$(\textbf{b$^{\prime}$}, $\mathcal{R})$ and PE$_{F^\prime}$(\textbf{b$^{\prime\prime}$}, $\mathcal{R})$ gives the higher profit to the buyer. Ties are broken arbitrarily. \end{enumerate} The following lemmas can be easily derived. \begin{lemma} \label{pepa1} PEPA \textit{is truthful}. \end{lemma} \begin{lemma} \label{pepa2} PEPA \textit{has profit} $F^\prime$ \textit{if} $F^\prime = F^{\prime\prime}$; \textit{otherwise it has profit} $\min(F^\prime, F^{\prime\prime})$. \end{lemma} We now derive the competitive ratio of PEPA, first for a linear revenue curve and then for an arbitrary concave revenue curve. \begin{theorem} \label{thm1} PEPA is $4$-competitive if the revenue curve is linear, i.e., $\mathcal{R}(k) = rk$ with $r > 0$, and this bound is tight. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k) - kv_{[k]}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. We denote the $i^{th}$ lowest bid in \textbf{b$^{\prime}$} \ by $v_{[i]}^\prime$ and in \textbf{b$^{\prime\prime}$} \ by $v_{[i]}^{\prime\prime}$. Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. So we have $F^\prime \geq \mathcal{R}(k_1) - k_1v_{[k_1]}^\prime \geq \mathcal{R}(k_1) - k_1v_{[k]}$ and $F^{\prime\prime} \geq \mathcal{R}(k_2) - k_2v_{[k_2]}^{\prime\prime} \geq \mathcal{R}(k_2) - k_2v_{[k]}$. \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1) - k_1v_{[k]}, \mathcal{R}(k_2) - k_2v_{[k]})}{\mathcal{R}(k) - kv_{[k]}} \\ &= \frac{\min(rk_1 - k_1v_{[k]}, rk_2 - k_2v_{[k]})}{rk - kv_{[k]}} \\ &= \frac{\min(k_1, k_2)}{k}. \end{align*} Thus, writing $P$ for the profit of PEPA, the expected profit satisfies \begin{align} \displaystyle \frac{\mathbb{E}[P]}{\mathcal{F}^{(2)}} &\geq \frac{1}{k} \sum_{i=1}^{k-1}{\min(i, k-i){k \choose i} 2^{-k}} = \displaystyle \frac{1}{2} - {k-1 \choose \lfloor{k/2}\rfloor} 2^{-k}. \end{align} The right-hand side achieves its minimum of 1/4 for $k = 2$ and $k = 3$; as $k$ increases it approaches 1/2. \vspace{-2mm} \end{pfof} \begin{example} The bound on the competitive ratio is tight. Consider the revenue curve $\mathcal{R}(k) = 2lk$ ($l>0$) and a bid vector \textbf{b} which consists of two bids, $l - \epsilon$ and $l$, with all other bids very high compared to $l$. Then ${\mathcal{F}(\textbf{b}, \mathcal{R})} = \mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = 2l$. The expected profit of PEPA is $l \cdot \mathbb{P}$[the two low bids are split between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}] $= l/2 = \mathcal{F}(\textbf{b}, \mathcal{R})/4$. \end{example}
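PE$_P$ and PEPA are short enough to simulate directly. The following Monte Carlo sketch is our own illustration: it uses the revenue curve and the bids of the example above and reproduces the tight bound numerically.
\begin{verbatim}
# Monte Carlo sketch of PE_P and PEPA on the tight example above,
# with R(k) = 2lk (all choices here are our own illustration).
import numpy as np
rng = np.random.default_rng(0)

l = 1.0
R = lambda q: 2 * l * q

def F_single_price(bids):                 # optimal single-price profit
    v = np.sort(bids)
    return max([0.0] + [R(i) - i * v[i - 1] for i in range(1, len(v) + 1)])

def profit_extract(bids, P):              # PE_P: profit P if extractable
    v = np.sort(bids)
    for k in range(len(v), 0, -1):
        if v[k - 1] <= (R(k) - P) / k:    # k winners paid (R(k)-P)/k each
            return P
    return 0.0

def pepa(bids):
    split = rng.random(len(bids)) < 0.5   # fair coin per bid
    b1, b2 = bids[split], bids[~split]
    F1, F2 = F_single_price(b1), F_single_price(b2)
    return max(profit_extract(b1, F2), profit_extract(b2, F1))

bids = np.array([l - 1e-6, l, 100 * l, 100 * l])   # two low bids, rest high
est = np.mean([pepa(bids) for _ in range(20000)])
print(est, F_single_price(bids) / 4)      # both approach l/2
\end{verbatim}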
\begin{theorem} For any concave revenue curve, PEPA is $4$-competitive. \end{theorem} \begin{pfof} Using the notation defined above, \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1) - k_1v_{[k]}, \mathcal{R}(k_2) - k_2v_{[k]})}{\mathcal{R}(k) - kv_{[k]}}\\ &= \frac{\min(k_1(\frac{\mathcal{R}(k_1)}{k_1} - v_{[k]}), k_2(\frac{\mathcal{R}(k_2)}{k_2} - v_{[k]}))}{k(\frac{\mathcal{R}(k)}{k} - v_{[k]})} \\ &\geq \frac{\min(k_1(\frac{\mathcal{R}(k)}{k} - v_{[k]}), k_2(\frac{\mathcal{R}(k)}{k} - v_{[k]}))}{k(\frac{\mathcal{R}(k)}{k} - v_{[k]})} \\ &= \frac{\min(k_1, k_2)}{k}. \end{align*} Taking the expectation over the random partition, exactly as in Theorem \ref{thm1}, yields the bound $1/4$. \vspace{-5mm} \end{pfof}

\section{Sellers with Non-Unit Non-Strategic Capacities} \label{Nonstrategic} \subsection{Setup} We now consider the setting where sellers can supply more than one unit of an item. Seller $i$ has a valuation per unit $v_i$ and a maximum capacity $q_i$, where $v_i$ is a positive real number and $q_i$ is a positive integer. In other words, each seller can supply at most $q_i$ units of a homogeneous item. We assume that the sellers are strategic with respect to the valuation per unit only and always report their capacities truthfully. Let $x_i$ and $p_i$ denote the allocation and the per-unit payment to bidder $i$. Then the profit of the auction (or auctioneer) for bid vector $\textbf{b}$ is \begin{center} $\mathcal{A}(\textbf{b}, \mathcal{R}) = \mathcal{R}(\displaystyle \sum_{i=1}^{n}{x_i(\textbf{b}, \mathcal{R})}) - \displaystyle \sum_{i=1}^{n}{p_i(\textbf{b}, \mathcal{R})\cdot x_i(\textbf{b}, \mathcal{R})}$. \end{center} The auctioneer wants to maximize her profit while satisfying feasibility, IR, and DSIC. As before, we first define the notion of a bid-independent auction for this setting.

\subsubsection*{Bid-independent Auction.} For each bidder $i$, the allocation and payments are determined in two phases as follows. \begin{enumerate} \item Phase I: \vspace{-1mm} \begin{enumerate} \item $t_i \leftarrow f(\textbf{b}_{-i}, \mathcal{R})$. \item If $t_i < v_i$, set $x_i \leftarrow 0$, $p_i \leftarrow 0$, and remove bidder $i$. \item Let $n^\prime$ be the number of remaining bidders. \end{enumerate} \item Phase II: \vspace{-2mm} \begin{enumerate} \item $i^\prime \leftarrow \underset{0 \leq i \leq m^\prime}{\operatorname{argmax}}\ \left(\mathcal{R}(i) - \displaystyle\sum_{j=1}^{k-1}{q_{[j]}t_{[j]}} - (i - \displaystyle\sum_{j=1}^{k-1}{q_{[j]}})t_{[k]}\right)$, \\where $\ m^\prime = \sum_{j=1}^{n^\prime}{q_{[j]}}$ and $\sum_{j=1}^{k-1}{q_{[j]}} < i \leq \sum_{j=1}^{k}{q_{[j]}}$. \item Suppose $k^\prime$ satisfies $\sum_{j=1}^{k^\prime-1}{q_{[j]}} < i^\prime \leq \sum_{j=1}^{k^\prime}{q_{[j]}}$. \item Set $x_{[i]} \leftarrow q_{[i]}$ and $p_{[i]} \leftarrow t_{[i]}$ for $i \in \{1, \ldots, k^\prime - 1\}$. \item Set $x_{[k^\prime]} \leftarrow (i^\prime - \sum_{j=1}^{k^\prime-1}{q_{[j]}})$ and $p_{[k^\prime]} \leftarrow t_{[k^\prime]}$. \item Otherwise, set $x_{[i]} \leftarrow 0, p_{[i]} \leftarrow 0$. \end{enumerate} \end{enumerate} As the allocation is monotone in the bids and the payment is bid-independent, any bid-independent auction is truthful.

\subsection{Prior-Free Benchmark} We denote the $i$-th lowest valuation by $v_{[i]}$ and the corresponding capacity by $q_{[i]}$. Suppose $m = \sum_{i=1}^{n}{q_i}$.
Then we have \begin{center} $\mathcal{F}(\textbf{b}, \mathcal{R}) = \max \limits_{0 \leq i \leq m} (\mathcal{R}(i) - iv_{[j]})$, where $\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}$,\\ $OPP(\textbf{b}, \mathcal{R}) = \underset{v_{[j]}}{\operatorname{argmax}}\left( \underset{\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}}\max (\mathcal{R}(i) - iv_{[j]}) \right)$. \end{center} The first $j$ bidders are the winners, and they are allocated at their full capacity except possibly the last one. As no truthful auction can be constant-competitive against $\mathcal{F}$, we define $\mathcal{F}^{(2)}$ as the optimal single price auction that buys from at least two bidders. The profit of $\mathcal{F}^{(2)}$ on input vector \textbf{b} is \begin{center} $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \max \limits_{q_{[1]} < i \leq m} (\mathcal{R}(i) - iv_{[j]})$, where $\sum_{k=1}^{j-1}{q_{[k]}} < i \leq \sum_{k=1}^{j}{q_{[k]}}$. \end{center}

\subsection{Profit Extracting Procurement Auction with Capacity (PEPAC)} We now extend the random-sampling-based procurement auction presented in Section \ref{rspa} to this setting.

\subsubsection*{Profit Extraction with Capacity (PEC$_P(\textbf{b}, \mathcal{R})$).} \begin{enumerate} \item Find the largest value of $k^\prime$ for which $v_{[k]}$ is at most $(\mathcal{R}(k^\prime) - P)/k^\prime$, where $\sum_{i=1}^{k-1}{q_{[i]}} < k^\prime \leq \sum_{i=1}^{k}{q_{[i]}}$. \item Pay these $k$ bidders $(\mathcal{R}(k^\prime) - P)/k^\prime$ per unit and reject the others. \end{enumerate}

\subsubsection*{Profit Extracting Procurement Auction with Capacity (PEPAC).} PEPAC is the same as PEPA except that it invokes PEC$_P(\textbf{b}, \mathcal{R})$ instead of PE$_P(\textbf{b}, \mathcal{R})$.
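A compact sketch of PEC$_P$ makes the two steps explicit (our own illustration; the linear revenue curve, the target profit, and the bid list are arbitrary, and capacities are assumed to be truthfully reported):
\begin{verbatim}
# Sketch of PEC_P with (truthfully reported) capacities.
def pec(bids, R, P):
    # bids: (v_i, q_i) pairs, sorted here by per-unit valuation
    bids = sorted(bids)
    best_k, best_kp, cum = 0, 0, 0
    for k, (v, q) in enumerate(bids, start=1):
        for kp in range(cum + 1, cum + q + 1):  # unit totals in bracket k
            if v <= (R(kp) - P) / kp:           # all k cheapest sellers accept
                best_k, best_kp = k, kp         # keep the largest feasible k'
        cum += q
    if best_kp == 0:
        return 0.0, []                          # target profit not extractable
    price = (R(best_kp) - P) / best_kp          # per-unit payment to winners
    return P, [(bids[i][0], price) for i in range(best_k)]

R = lambda q: 3.0 * q                           # linear revenue curve
print(pec([(1.0, 2), (2.0, 3), (4.0, 1)], R, P=2.0))
\end{verbatim}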
Next we derive the performance of PEPAC through the following theorems. \begin{theorem} \label{pepac1} PEPAC is $4$-competitive for any concave revenue curve $\mathcal{R}$ if $q_i = q \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k^\prime) - k^\prime v_{[k]}$, where $\sum_{i=1}^{k-1}{q} < k^\prime \leq \sum_{i=1}^{k}{q}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. We denote the $i^{th}$ lowest bid in \textbf{b$^{\prime}$} \ by $v_{[i]}^\prime$ and in \textbf{b$^{\prime\prime}$} \ by $v_{[i]}^{\prime\prime}$. Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. As $F^\prime$ and $F^{\prime\prime}$ are optimal in their respective partitions, we have \begin{align*} F^{\prime} \geq \mathcal{R}(k_1q) - k_1qv_{[k_1]}^{\prime} \geq \mathcal{R}(k_1q) - k_1qv_{[k]},\\ F^{\prime\prime} \geq \mathcal{R}(k_2q) - k_2qv_{[k_2]}^{\prime\prime} \geq \mathcal{R}(k_2q) - k_2qv_{[k]}. \end{align*} The auction profit is $P = \min(F^\prime, F^{\prime\prime})$. Therefore, \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(k_1q) - k_1qv_{[k]}, \mathcal{R}(k_2q) - k_2qv_{[k]})}{\mathcal{R}(k^\prime) - k^\prime v_{[k]}} \\ &= \frac{\min(k_1q(\frac{\mathcal{R}(k_1q)}{k_1q} - v_{[k]}), k_2q(\frac{\mathcal{R}(k_2q)}{k_2q} - v_{[k]}))}{k^\prime(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]})} \\ &\geq \frac{\min(k_1q(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]}), k_2q(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]}))}{k^\prime(\frac{\mathcal{R}(k^\prime)}{k^\prime} - v_{[k]})} \\ & \displaystyle[\mbox{as} \ \frac{\mathcal{R}(k_1q)}{k_1q} \geq \frac{\mathcal{R}(k^\prime)}{k^\prime} \ \mbox{and}\ \frac{\mathcal{R}(k_2q)}{k_2q} \geq \frac{\mathcal{R}(k^\prime)}{k^\prime}\displaystyle]\\ &= \frac{\min(k_1q, k_2q)}{k^\prime} \geq \frac{\min(k_1q, k_2q)}{kq} \\ &= \frac{\min(k_1, k_2)}{k}. \end{align*} Thus, the expected profit satisfies \begin{align*} \displaystyle \frac{\mathbb{E}[P]}{\mathcal{F}^{(2)}} &\geq \frac{1}{k} \sum_{i=1}^{k-1}{\min(i, k-i){k \choose i} 2^{-k}} = \displaystyle \frac{1}{2} - {k-1 \choose \lfloor{k/2}\rfloor} 2^{-k}, \end{align*} which is the same bound as in Theorem \ref{thm1}. \end{pfof} \vspace{-2mm} \begin{theorem} \label{pepac2} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive if the revenue curve is linear and $q_i \in [q_{\min}, q_{\max}] \ \ \forall \ i \in \{1, \ldots, n\}$, and this bound is tight. \end{theorem} \begin{pfof} By definition, $\mathcal{F}^{(2)}$ on \textbf{b} buys from $k \geq 2$ bidders for a profit of $\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R}) = \mathcal{R}(k^\prime) - k^\prime v_{[k]}$, where $\sum_{i=1}^{k-1}{q_{[i]}} < k^\prime \leq \sum_{i=1}^{k}{q_{[i]}}$. These $k$ bidders are divided uniformly at random between \textbf{b$^{\prime}$} \ and \textbf{b$^{\prime\prime}$}. Let $k_1$ be the number of them in \textbf{b$^{\prime}$} \ and $k_2$ the number in \textbf{b$^{\prime\prime}$}. We denote the $i^{th}$ lowest bid (ordered by valuation) in \textbf{b$^{\prime}$} \ by $(v_{[i]}^\prime, q_{[i]}^\prime)$ and in \textbf{b$^{\prime\prime}$} \ by $(v_{[i]}^{\prime\prime}, q_{[i]}^{\prime\prime})$. Clearly, $v_{[k_1]}^\prime \leq v_{[k]}$ and $v_{[k_2]}^{\prime\prime} \leq v_{[k]}$. As $F^\prime$ and $F^{\prime\prime}$ are optimal in their respective partitions, we have \begin{align*} F^\prime \geq \mathcal{R}(\sum_{i=1}^{k_1}{q_{[i]}^\prime}) - (\sum_{i=1}^{k_1}{q_{[i]}^\prime})v_{[k]}, \ \ F^{\prime\prime} \geq \mathcal{R}(\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}}) - (\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}})v_{[k]}. \end{align*} Suppose $\displaystyle\sum_{i=1}^{k_1}{q_{[i]}^\prime} = q_{x}$, $\displaystyle\sum_{i=1}^{k_2}{q_{[i]}^{\prime\prime}} = q_{y}$, and $\displaystyle\sum_{i=1}^{k}{q_{[i]}} = q_{z}$. \begin{align*} \displaystyle \frac{\min(F^\prime, F^{\prime\prime})}{\mathcal{F}^{(2)}(\textbf{b}, \mathcal{R})} &\geq \frac{\min(\mathcal{R}(q_{x}) - q_{x}v_{[k]}, \mathcal{R}(q_{y}) - q_{y}v_{[k]})}{\mathcal{R}(k^\prime) - k^\prime v_{[k]}} \\ &= \frac{\min(r q_{x} - q_{x}v_{[k]}, r q_{y} - q_{y}v_{[k]})}{r k^\prime - k^\prime v_{[k]}} \\ &= \frac{\min(q_x, q_y)}{k^\prime} \geq \frac{\min(q_x, q_y)}{q_z} \\ &\geq \frac{\min(k_1 q_{\min}, k_2 q_{\min})}{k q_{\max}} \\ &= \left(\frac{q_{\min}}{q_{\max}}\right) \frac{\min(k_1, k_2)}{k}.
\end{align*} Taking the expectation over the random partition, as in Theorem \ref{thm1}, yields the claimed bound $\frac{q_{\min}}{4 \cdot q_{\max}}$. \end{pfof} \vspace{-2mm} \begin{theorem} \label{pepac3} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive for any concave revenue curve $\mathcal{R}$ if $q_i \in [q_{\min}, q_{\max}] \ \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem}

\section{Sellers with Non-Unit Strategic Capacities} \label{strategic} \subsection{Setup} In this case, seller $i$ can misreport his capacity $q_i$ in addition to misreporting his valuation per unit $v_i$ in order to maximize his gain from the auction. Here, we assume that sellers are not allowed to overbid their capacities. This can be enforced by declaring, as part of the auction, that if a seller fails to provide the number of units he has bid, he suffers a huge penalty (a financial or legal loss). Underbidding, however, may help a seller: depending on the mechanism, it may result in an increased payment, which can more than compensate for the loss due to a decreased allocation. Hence, as shown by Iyengar and Kumar \cite{iyengar2008optimal}, even when the bidders can only underbid their capacities, an auction that simply ignores the capacities of the bidders need not be incentive compatible. A small example can be constructed as follows. \begin{example} Suppose the $(v_i, q_i)$ values of the sellers are $(6, 100), (8, 100), (10, 200)$ and $(12, 100)$. Consider an external market with a maximum demand of $200$ units. The revenue curve is given by $\mathcal{R}(j) = 15j$ when $j \leq 200$ and $\mathcal{R}(j) = 15 \times 200$ when $j > 200$. Suppose the buyer conducts the classic $K$th price auction, where the per-unit payment to a winning seller equals the valuation of the first losing seller. Bidding one's valuation truthfully is a weakly dominant strategy for the sellers, but it does not deter them from possibly altering their capacities. If they report both $v_i$ and $q_i$ truthfully, the allocation will be $(100, 100, 0, 0)$ and the utility of the second seller will be $(10-8) \times 100 = 200$. If the second seller underbids his capacity to $90$, the allocation changes to $(100, 90, 10, 0)$ and the utility of the second seller will be $(12-8) \times 90 = 360$. So the $K$th price auction is clearly not incentive compatible. \end{example} Note that the actual values of $b_i = (v_i,q_i)$ are known only to seller $i$. From now on, the true type of each bidder is represented by $b_i = (v_i,q_i)$ and each reported bid is represented by $\hat{b}_i = (\hat{v}_i,\hat{q}_i)$. Accordingly, we denote the true type vector by \textbf{b} and the reported bid vector by $\hat{\textbf{b}}$. We denote the utility or \emph{true} surplus of bidder $i$ by $u_i(\hat{\textbf{b}}, \mathcal{R})$ and the \emph{offered} surplus by $\hat{u}_i(\hat{\textbf{b}}, \mathcal{R})$. The true surplus is the pay-off computed using the true valuation, and the offered surplus is the pay-off computed using the reported valuation. \begin{center} $u_i(\hat{\textbf{b}}, \mathcal{R}) = [p_i(\hat{\textbf{b}}, \mathcal{R})x_i(\hat{\textbf{b}}, \mathcal{R}) - v_ix_i(\hat{\textbf{b}}, \mathcal{R})]$,\\ $\hat{u}_i(\hat{\textbf{b}}, \mathcal{R}) = [p_i(\hat{\textbf{b}}, \mathcal{R})x_i(\hat{\textbf{b}}, \mathcal{R}) - \hat{v}_ix_i(\hat{\textbf{b}}, \mathcal{R})]$. \end{center}
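The capacity-underbidding incentive in the example above is easy to verify numerically. The sketch below is our own reading of the example's rule (the uniform per-unit payment is the valuation of the cheapest seller who receives no allocation) and reproduces its arithmetic:
\begin{verbatim}
# Numerical check of the K-th price example above (our own reading of
# the rule: the per-unit payment is the valuation of the cheapest
# seller who receives no allocation).
def kth_price(bids, demand=200):
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])
    alloc, left = [0] * len(bids), demand
    for i in order:                        # allocate greedily by valuation
        alloc[i] = min(bids[i][1], left)
        left -= alloc[i]
    price = next(bids[i][0] for i in order if alloc[i] == 0)
    return alloc, price

truthful = [(6, 100), (8, 100), (10, 200), (12, 100)]
alloc, price = kth_price(truthful)
print(alloc, price, (price - 8) * alloc[1])  # [100,100,0,0] 10, utility 200

shaded = [(6, 100), (8, 90), (10, 200), (12, 100)]
alloc, price = kth_price(shaded)
print(alloc, price, (price - 8) * alloc[1])  # [100,90,10,0] 12, utility 360
\end{verbatim}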
\subsection{A Characterization of DSIC and IR Procurement Auctions} \label{ch} Iyengar and Kumar \cite{iyengar2008optimal} have characterized all DSIC and IR procurement auctions, together with the payment rule that implements a given DSIC allocation rule. \begin{enumerate} \item \label{c1} A feasible allocation rule \textbf{x} is \textbf{DSIC} if, and only if, $x_i(((v_i,q_i),\hat{b}_{-i}), \mathcal{R})$ is non-increasing in $v_i$, $\forall q_i$, $\forall \hat{b}_{-i}$. \item \label{c2} A mechanism (\textbf{x}, \textbf{p}) is \textbf{DSIC} and \textbf{IR} if, and only if, the allocation rule \textbf{x} satisfies \ref{c1} and the ex-post offered surplus is \begin{center} $\hat{u}_i(\hat{b}_i, \hat{b}_{-i}, \mathcal{R}) = \displaystyle \int_{\hat{v}_i}^{\infty} x_i(u, \hat{q}_i, \hat{b}_{-i}, \mathcal{R})\mathrm{d}u$, \end{center} with $\hat{u}_i((\hat{v}_i, \hat{q}_i), \hat{b}_{-i}, \mathcal{R})$ non-negative and non-decreasing in $\hat{q}_i$ for all $\hat{v}_i$ and for all $\hat{b}_{-i}$. \end{enumerate} We use the same prior-free benchmark and the same random sampling procurement auction as defined in Section \ref{Nonstrategic} and extend our previous results to the strategic-capacity case. \begin{lemma} \label{pesc1} PEC$_P$ is truthful if $\mathcal{R}$ is linear. \end{lemma} \begin{pfof} Let $k_1$ and $k_2$ be the number of units PEC$_P$ procures from the bidders to achieve the target profit $P$ when the bidders report their capacities truthfully and when they misreport them, respectively. By our assumption, bidders are only allowed to underbid their capacities, so we have $k_2 \leq k_1$. Truthful reporting is a dominant strategy if no bidder is better off underbidding his capacity, i.e., if for all $\hat{q}_i \leq q_i$ we have $u_i(((v_i,\hat{q}_i),\hat{b}_{-i}), \mathcal{R}) \leq u_i(((v_i,q_i),\hat{b}_{-i}), \mathcal{R})$. Hence we require \begin{center} $\displaystyle -v_i\hat{q}_i + \hat{q}_i\frac{\mathcal{R}(k_2) - P}{k_2} \leq -v_iq_i + q_i\frac{\mathcal{R}(k_1) - P}{k_1}$. \end{center} A sufficient condition for the above inequality to hold is \begin{center} $\displaystyle \frac{\mathcal{R}(k_2) - P}{k_2} \leq \frac{\mathcal{R}(k_1) - P}{k_1} \ \ \forall k_2 \leq k_1$. \end{center} Clearly, a linear revenue curve satisfies this sufficient condition, since for $\mathcal{R}(k) = rk$ the per-unit price $(rk - P)/k = r - P/k$ is non-decreasing in $k$. \end{pfof} The following results immediately follow from the above lemma. \begin{theorem} \label{pepasc1} PEPAC is truthful if $\mathcal{R}$ is linear. \end{theorem} \begin{theorem} PEPAC is $4$-competitive if the revenue curve $\mathcal{R}$ is linear and $q_i = q \ \forall \ i \in \{1, \ldots, n\}$. \end{theorem} \begin{theorem} PEPAC is $4 \cdot \left( \displaystyle \frac{q_{\max}}{q_{\min}}\right)$-competitive for any linear revenue curve $\mathcal{R}$ if $q_i \in [q_{\min}, q_{\max}] \ \forall i \in \{1, \ldots, n\}$. \end{theorem}

\section{Conclusion and Future Work} \label{future} In this paper, we have considered a model of prior-free profit-maximizing procurement auctions with capacitated sellers and designed prior-free auctions for both single-dimensional and bi-dimensional sellers. We have shown that the optimal single price auction cannot be matched by any truthful auction. Hence, we have considered a slightly constrained single price auction as our benchmark in the analysis. We have presented procurement auctions based on profit extraction, PEPA for sellers with unit capacities and PEPAC for sellers with non-unit capacities, and proved upper bounds on their competitive ratios. For the bi-dimensional case, PEPAC is truthful for the specific case of linear revenue curves. Our major future work is to design a prior-free auction for bi-dimensional sellers which is truthful and competitive for all concave revenue curves.
Subsequently, we would like to design prior-free procurement auctions for the more general setting where each seller can announce discounts based on the volume of supply. \newpage
\section{Introduction} With the goal of better understanding the physics of glasses and of glass formation, there has been a continuing search for empirical correlations among various aspects of the phenomenology of glassformers. The most distinctive feature of glass formation being the rapid increase with decreasing temperature of the viscosity and relaxation times, correlations have essentially been sought between the characteristics of the latter and other thermodynamic or dynamic quantities. Angell coined the term ``fragility'' to describe the non-Arrhenius temperature dependence of the viscosity or (alpha) relaxation time and the associated change of slope on an Arrhenius plot \cite{Angell84}. He noticed the correlation between fragility and the amplitude of the heat-capacity jump at the glass transition. Earlier, the Adam-Gibbs approach was a way to rationalize the correlation between the viscosity increase and the configurational or excess entropy decrease as one lowers the temperature \cite{adam65}. Since then, a large number of empirical correlations between ``fragility'' and other properties of the liquid or of the glass have been found: for instance, larger fragility (\emph{i.e.}, stronger deviation from Arrhenius behavior) has been associated with (i) a stronger deviation of the relaxation functions from an exponential dependence on time (a more important ``stretching'') \cite{bohmer93}, (ii) a lower relative intensity of the boson peak \cite{sokolov93}, (iii) a larger mean square displacement at $T_g$ \cite{ngai00}, (iv) a smaller ratio of elastic to inelastic signal in the X-ray Brillouin spectra \cite{scopigno03}, (v) a larger Poisson ratio \cite{novikov04} and (vi) a stronger temperature dependence of the elastic shear modulus, $G_\infty$, in the viscous liquid \cite{dyre06}. Useful as they may be for putting constraints on proposed models and theories of the glass transition, such correlations can also be misleading by suggesting causality relations where there are no such things. It therefore seems important to assess the robustness of empirically established correlations. In this respect, we would like to emphasize a number of points that are most often overlooked:

1) Fragility involves a variation with temperature that \emph{a priori} depends on the thermodynamic path chosen, namely constant pressure (isobaric) versus constant density (isochoric) conditions. On the other hand, many quantities that have been correlated to fragility only depend on the thermodynamic state at which they are considered. This is not the case for the variation of the excess entropy or of the shear modulus, nor for the jump in heat capacity measured in differential scanning calorimetry, which are all path dependent; but the other properties are measured either at $T_g$, the glass-transition temperature, or in the glass, where they also relate to properties of the liquid as it falls out of equilibrium at $T_g$ (there may be a residual path dependence due to the nonequilibrium nature of the glass, but it is quite different from that occurring in the liquid). \emph{Which fragility then, isobaric or isochoric, should best be used in searching for correlations?}

2) The quantities entering the proposed correlations are virtually always considered at $T_g$. This is the case for the commonly used measure of fragility, the ``steepness index'', which is defined as the slope of the temperature dependence of the alpha-relaxation time on an Arrhenius plot with $T$ scaled by $T_g$ \cite{richert98}.
$T_g$ is of course only operationally defined as the point at which the alpha-relaxation time (or the viscosity) reaches a given value, say 100 seconds for dielectric relaxation. The correlated properties are thus considered at a given relaxation time or viscosity. \emph{What is the fate of the proposed correlations when one studies a different value of the relaxation time?}

3) Almost invariably, comparisons involve properties measured at atmospheric pressure, for which the largest amount of data is available. Since, as discussed in the preceding point, the properties are also considered at a given relaxation time, an obvious generalization consists in studying the validity of the reported correlations under ``isochronic'' (\emph{i.e.}, constant relaxation time) conditions, by varying the control parameters such that the relaxation time stays constant. \emph{How robust, then, are the correlations when one varies, say, the pressure along an isochrone?}

In light of the above, our contention is that any putative correlation between fragility and another property should be tested, as far as possible, by varying the reference relaxation time, by varying the thermodynamic state along a given isochrone, and by changing the thermodynamic path along which variations, such as that defining the fragility, are measured. A better solution would certainly be to correlate ``intrinsic'' properties of glassformers that do not depend on the chosen state point or relaxation time. A step toward defining such an ``intrinsic'' fragility was made when it was realized that the temperature and the density dependences of the alpha-relaxation time and viscosity of a given liquid could be reduced to the dependence on a single scaling variable, $X=e(\rho)/T$, with $e(\rho)$ an effective activation energy characteristic of the high-temperature liquid \cite{alba02,tarjus04}. The evidence is purely empirical, but it is supported by the work of several groups for a variety of glassforming liquids and polymers \cite{alba02,tarjus04,casalini04,roland05,dreyfus04,reiser05,floudas06}. The direct consequence of this finding is that the fragility of a liquid defined along an isochoric path is independent of density: the isochoric fragility is thus an intrinsic property, contrary to the isobaric fragility. Although one could devise ways to characterize the isochoric fragility in a truly intrinsic manner, independently of the relaxation time, the common measure through the steepness index (see above) still depends on the chosen isochrone. In looking for meaningful correlations with this isochoric steepness index, it is clear, however, that one should discard quantities that vary with pressure (or, equivalently, with temperature) under isochronic conditions. As we further elaborate in this article, the stretching parameter characterizing the shape of the relaxation function (or spectrum) is \emph{a priori} a valid candidate, as there is some experimental evidence that it does not vary with pressure along isochrones \cite{ngai05a}. The aim of the present work is to use the knowledge about the pressure and temperature dependences of the liquid dynamics to test the robustness of proposed correlations between fragility and other properties. This is a continuation of the work presented in Ref.~\cite{niss06}, where the focus was mainly on correlations between the fragility of the liquid and properties of the associated glass. In this paper we specifically consider the correlation between fragility and stretching.
The reported correlation between the two is indeed one of the bases of the common belief that both fragility and stretching are signatures of the cooperativity of the liquid dynamics. We present new dielectric spectroscopy data on the pressure dependence of the alpha relaxation of two molecular glassforming liquids, dibutyl phthalate (DBP) and m-toluidine. We express the alpha-relaxation time as a function of the scaling variable $X=e(\rho)/T$ and evaluate the density dependence of $e(\rho)$ as well as the isochoric fragility. We also study the spectral shape and its pressure dependence along isochronic lines. We discuss in some detail the methodological aspects of the evaluation of the fragility and of the stretching from experimental data, as well as the conversion from $P,T$ to $P,\rho$ data. This provides an estimate of the error bars that one should consider when studying correlations. Finally, by combining our data with literature data we discuss the robustness of the correlation between fragility and stretching along the lines sketched above. The paper is structured as follows. Section \ref{sec:back} introduces some concepts and earlier developments that are central for the discussion. In section \ref{sec:exp} we present the experimental technique. Section \ref{sec:relax} is devoted to the pressure, temperature and density dependence of the relaxation time. In section \ref{sec:spec} we analyze the spectral shape and its pressure and temperature dependence. Finally, in section \ref{sec:disc} we combine the current results with literature data to assess the relation between fragility and stretching, stressing the need to disentangle temperature and density effects. Two appendices discuss some methodological points. \section{Background\label{sec:back}} \subsection{Isochoric and isobaric fragilities}\label{sec:iso} The fragility is a measure of how much the temperature dependence of the alpha-relaxation time (or alternatively the shear viscosity) deviates from an Arrhenius form as the liquid approaches the glass transition. The most commonly used criterion is the so-called steepness index, \begin{eqnarray} \label{eq:angel} m_P=\left.\parti{\log_{10}(\tau_\alpha)}{\,T_g/T}\right|_P (T=T_g), \end{eqnarray} where the derivative is evaluated at $T_g$ and $\tau_\alpha$ is expressed in seconds. Conventionally, the liquid is referred to as strong if $m$ is small, that is in the range 17--30, and fragile if $m$ is large, meaning roughly above 60. In the original classification of fragility it was implicitly assumed that the relaxation time (or viscosity) was monitored at constant (atmospheric) pressure, as this is how the vast majority of experiments are performed. The conventional fragility is therefore the (atmospheric-pressure) isobaric fragility, and, as indicated in Eq. \ref{eq:angel}, the associated steepness index is evaluated at constant pressure. However, the relaxation time can also be measured as a function of temperature along other isobars, and this will generally lead to a change in $m_P$. Moreover, it is possible to define an isochoric fragility and the associated index, $m_\rho$, obtained by taking the derivative at constant volume rather than at constant pressure. The two fragilities are straightforwardly related via the chain rule of differentiation, \begin{eqnarray*} m_P = m_\rho +\left.\parti{\log_{10}(\tau_\alpha)}{\rho}\right|_T \left.\parti{\rho}{\,T_g/T}\right|_{P}(T=T_g), \end{eqnarray*} when both are evaluated at the same point $(T_g(P),\rho(P,T_g(P)))$. 
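In practice, the steepness index is extracted numerically from tabulated $(T,\tau_\alpha)$ data. The following minimal sketch (in Python, using synthetic VTF-like data rather than our measurements) illustrates the procedure of locating $T_g$ and differencing $\log_{10}\tau_\alpha$ with respect to $T_g/T$:
\begin{verbatim}
import numpy as np

# Synthetic (T, tau) data from a VTF-like law, for illustration only
T = np.linspace(178.0, 250.0, 200)            # temperature, K
log_tau = -13.0 + 600.0 / (T - 150.0)         # log10(tau_alpha / s)

# Tg: temperature at which tau_alpha = 100 s (log10 tau = 2)
Tg = np.interp(2.0, log_tau[::-1], T[::-1])

# steepness index m_P = d log10(tau_alpha) / d(Tg/T), evaluated at T = Tg
x = Tg / T
m = np.gradient(log_tau, x)
m_P = np.interp(1.0, x[::-1], m[::-1])
print(f"Tg = {Tg:.1f} K, m_P = {m_P:.0f}")
\end{verbatim}
As discussed in section \ref{sec:relax}, the value obtained in this way depends on the temperature range of the data used around $T_g$ and on the fitting or differencing procedure.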
In the chain-rule decomposition above, the isochoric fragility, $m_\rho$, describes the intrinsic effect of temperature, while the second term on the right-hand side incorporates the effect due to the change of density driven by temperature under isobaric conditions. It can be shown that the above relation can be rewritten as \begin{eqnarray} m_P = m_\rho (1-\alpha_P/\alpha_\tau) \label{eq:mpmrho} \end{eqnarray} where the unconventional $\alpha_{\tau}$ is the isochronic expansivity \cite{ferrer98}, \emph{i.e.}, the expansivity along a line of constant alpha-relaxation time $\tau_{\alpha}$ (the $T_g$ line being a specific isochrone). The above result is purely formal and contains no assumptions. The implication of the result is that $m_P$ is larger than $m_\rho$ if $\alpha_P>0$ and $\alpha_\tau<0$. It is well known that $\alpha_P>0$ in general. The fact that $\alpha_\tau$ is negative arises from the empirical result that the liquid volume always decreases when heating while following an isochrone. Within the last decade a substantial amount of relaxation-time and viscosity data has been collected at different temperatures and pressures/densities. On the basis of the existing data, it is reasonably well established that the temperature and density dependences of the alpha-relaxation time can be expressed in the scaling form \cite{alba02,tarjus04,casalini04,roland05,dreyfus04,reiser05,floudas06} \begin{eqnarray} \tau_\alpha(\rho,T)=F\left(\frac{e(\rho)}{T}\right). \label{eq:scaling} \end{eqnarray} It is seen directly from Eq. \ref{eq:scaling} that $X(\rho,T)=e(\rho)/T$, when evaluated at $T_g$, has the same value at all densities ($X_g=e(\rho)/T_g(\rho)$) if $T_g(\rho)$ is defined as the temperature where the relaxation time has a given value (e.g., $\tau_\alpha=100$ s). Exploiting this fact, it is easy to show \cite{tarjus04,alba06} that the scaling law implies that the isochoric fragility is independent of density. For instance, the isochoric steepness index, when evaluated at a $T_g$ corresponding to a fixed relaxation time, is given by \begin{equation} m_\rho=\left.\diff{\log_{10}(\tau_\alpha)}{\,T_g/T}\right|_{\rho}(T=T_g)=F^\prime (X_g) \diff{X}{T_g/T}(T=T_g) = X_g F^\prime (X_g). \end{equation} The fact that the relaxation time $\tau_\alpha$ is constant when $X$ is constant means that the isochronic expansion coefficient $\alpha_\tau$ is equal to the expansion coefficient at constant $X$. Using this and the general result $\left(\parti{\rho}{T} \right)_X\left(\parti{X}{\rho} \right)_T\left(\parti{T}{X} \right)_\rho=-1$, it follows that \begin{equation} \frac{1}{\alpha_\tau}=-T_g \diff{\log e(\rho)}{\log \rho}, \end{equation} which inserted in Eq. \ref{eq:mpmrho} leads to \begin{eqnarray} m_P = m_\rho \left(1+\alpha_P T_g \diff{\log e(\rho)}{\log \rho}\right), \label{eq:mpmrho2} \end{eqnarray} where $m_P$, $m_\rho$ and $\alpha_P$ are evaluated at $T_g$. When liquids have different isobaric fragilities, this can be attributed to two causes: a difference in the intrinsic isochoric fragility, $m_\rho$, or a difference in the relative effect of density, characterized by $\alpha_P T_g$ and the parameter $x=\diff{\log e(\rho)}{\log \rho}$. We analyze the data within this framework. \subsection{Relaxation-time dependent fragility}\label{sec:time} The following considerations hold for isochoric and isobaric conditions alike. The $\rho$ or $P$ subscripts are therefore omitted in this section. The fragility is usually characterized by a criterion evaluated at $T_g$, 
\emph{i.e.}, the temperature at which the relaxation time reaches $\tau_\alpha=100$ s--1000 s. The same criterion, e.g. the steepness index, can however equally well be evaluated at a temperature corresponding to another relaxation time, and this is in fact frequently done in the literature, mainly to avoid the extrapolation to long times. So defined, the ``fragility'' for a given system can be considered as a quantity which depends on the relaxation time at which it is evaluated: \begin{eqnarray} m(\tau)=\diff{\log_{10}(\tau_\alpha)}{T_\tau/T} (T=T_\tau), \label{eq:mtau} \end{eqnarray} where $\tau_\alpha(T_\tau)=\tau$ defines the temperature $T_\tau$. ($T_g$ is a special case with $\tau\approx100$ s--1000 s.) An (extremely) strong system is one for which the relaxation time has an Arrhenius behavior, \begin{eqnarray} \tau_\alpha(T)=\tau_\infty\exp\left(\frac{E_\infty}{T}\right), \end{eqnarray} where $E_\infty$ is a temperature- and density-independent activation energy (measured in units of temperature). Inserting this in the expression for the relaxation-time dependent steepness index (Eq. \ref{eq:mtau}) gives \begin{eqnarray} m_{strong}(\tau)=\log_{10}\left(\tau/\tau_\infty\right), \end{eqnarray} which gives the value $m_{strong}(\tau=100\,\mathrm{s})=15$ (assuming $\log_{10}(\tau_\infty/\mathrm{sec})=-13$) and decreases to $m_{strong}(\tau=\tau_\infty)=0$ as the relaxation time is decreased. This means that even for a strong system the steepness index is relaxation-time dependent. In order to get a proper measure of departure from Arrhenius behavior it could therefore be more adequate to use the steepness index normalized by that of a strong system: \begin{eqnarray} m_n(\tau)=\frac{m(\tau)}{m_{strong}(\tau)}=\frac{\diff{\log_{10}(\tau_\alpha)}{T_\tau/T}}{\log_{10}\left(\tau/\tau_\infty\right)}. \label{eq:mtaun} \end{eqnarray} $m_n(\tau)$ will take the value $1$ at all relaxation times in a system where the relaxation time has an Arrhenius behavior. Such a normalized measure of fragility has been suggested before \cite{schug98,granato02,dyre04}. For instance, Olsen and coworkers \cite{dyre04} have introduced the index \begin{eqnarray} I=-\diff{\log E(T)}{\log T}, \end{eqnarray} where $E(T)$ is a temperature-dependent activation energy defined by $E(T)=T\ln(\tau_\alpha/\tau_\infty)$. The relation between the steepness index and the Olsen index is \cite{dyre04} \begin{equation} \label{eq:mI2} I(\tau)=\frac{m(\tau)}{\log_{10}\left(\frac{\tau}{\tau_\infty}\right)}-1=m_n(\tau)-1. \end{equation} $I(\tau)$ takes the value 0 for strong systems at all relaxation times. Typical glassforming liquids display an approximate Arrhenius behavior at high temperatures and short relaxation times; in this limit $I(\tau)=0$, and it increases as the temperature dependence starts departing from the Arrhenius behavior. Typical values of $I$ at $T_g$ ($\tau=100$ s) range from $I=3$ to $I=8$, corresponding to steepness indices of $m=47$ to $m=127$. Finally, we note in passing that relaxation-time independent measures of fragility can be formulated through fitting formulae: this is the case for instance of the fragility parameter $D$ in the Vogel-Tammann-Fulcher (VTF) formula or of the frustration parameter $B$ in the frustration-limited domain theory \cite{kivelson98}. \section{Experimental setup}\label{sec:exp} The dielectric cell is composed of two gold-coated electrodes separated by small Teflon spacers. The electrode separation, set by the spacers, is $0.3$ mm and the electrode area is 5.44 cm$^{2}$, giving an empty capacitance of 16 pF. 
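As a quick consistency check on these cell dimensions (a minimal sketch; the parallel-plate formula neglects fringe fields and the spacers), the quoted empty capacitance follows from $C=\epsilon_0 A/d$:
\begin{verbatim}
# Parallel-plate estimate of the empty-cell capacitance, C = eps0 * A / d
eps0 = 8.854e-12      # vacuum permittivity, F/m
area = 5.44e-4        # electrode area, m^2 (5.44 cm^2)
gap = 0.3e-3          # electrode separation, m (0.3 mm)

C_empty = eps0 * area / gap
print(f"C_empty = {C_empty * 1e12:.1f} pF")  # ~16.1 pF, matching the quoted 16 pF
\end{verbatim}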
The electrodes are totally immersed in the liquid sample, which is sealed from the outside by a Teflon cell. The electric contacts are pinched through the Teflon. The compression is performed using liquid pentane, which surrounds the Teflon cell from all sides. The Teflon cell has one end with a thickness of $0.5$ mm in order to ensure that the pressure is well transmitted from the pentane to the liquid sample. The pressure is measured by using a strain gauge. The cooling is performed by a flow of thermostated cooling liquid running inside the autoclave. The temperature and the temperature stability are monitored by two PT100 sensors placed 2 cm and 0.3 cm from the sample. The temperature is held stable within $\pm 0.1$ K for a given isotherm. The temperature during the time it takes to record a spectrum is stable within $\pm 0.01$ K. The setup ensures a hydrostatic pressure because the sample is compressed from all sides. It is moreover possible to take spectra both under compression and decompression. By doing so and returning to the same $P$-$T$ condition after several different thermodynamic paths, we have verified that there was no hysteresis in the pressure dependence of the dynamics. This serves to confirm that the liquid is kept in thermodynamic equilibrium at all stages. The capacitance was measured using an HP 4284A LCR meter which covers the frequency range from 100 Hz to 1 MHz. The low-frequency range from 1 Hz to 100 Hz is covered using an SR830 lock-in amplifier. The samples, dibutyl phthalate (DBP) and m-toluidine, were acquired from Sigma-Aldrich. The m-toluidine was twice distilled before usage. The DBP was used as acquired. Liquid m-toluidine was measured on one isotherm at 216.4 K. DBP was measured along 4 different isotherms, 205.5 K, 219.3 K, 236.3 K and 253.9 K, at pressures up to 4 kbar. DBP was moreover measured at different temperatures along two isobars: atmospheric pressure and 230 MPa. The pressure was continuously adjusted in order to compensate for the decrease of pressure which follows from the contraction of the sample due to decreasing temperature. It is of course always possible to reconstruct isobars based on experiments performed under isothermal conditions. However, such a procedure mostly involves interpolation of the data, which is avoided by performing a strictly isobaric measurement. For DBP we have obtained relaxation-time data at times shorter than $10^{-6.5}$ s by using the high-frequency part of the spectrum and assuming time-temperature and time-pressure superposition (TTPS). Although TTPS is not followed to a high precision (see section \ref{sec:shapeDBP}), the discrepancies lead to no significant error on the determination of the relaxation time. This is verified by comparison to atmospheric-pressure data from the literature (see figure \ref{fig:dbpIsob}). \section{Alpha-relaxation time and fragility}\label{sec:relax} \subsection{Dibutyl phthalate}\label{sec:relaxDBP} The DBP data at atmospheric pressure are shown in figure \ref{fig:dbpIsob} along with literature results. $T_g(P_{\mathrm{atm}})=177$ K, when defined as the temperature at which $\tau_\alpha=100$ s. We also present the data taken at $P=230$ MPa in this figure. It is clearly seen that $T_g$ increases with pressure. An extrapolation of the data to $\tau_\alpha=100$ s gives $T_g=200$ K for $P=230$ MPa, corresponding to $dT_g/dP\approx0.1$ K\,MPa$^{-1}$. 
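The quoted slope is just the two-point estimate between the two isobars (a trivial arithmetic check, using the numbers given above):
\begin{verbatim}
# Two-point estimate of the slope of the glass-transition line, dTg/dP
Tg_atm, P_atm = 177.0, 0.1   # K, MPa (atmospheric pressure)
Tg_hi, P_hi = 200.0, 230.0   # K, MPa (high-pressure isobar)

dTg_dP = (Tg_hi - Tg_atm) / (P_hi - P_atm)
print(f"dTg/dP = {dTg_dP:.2f} K/MPa")  # ~0.10 K/MPa
\end{verbatim}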
This slope corresponds well to the pressure dependence of $T_g$ (at $\tau_\alpha=1$ s) reported by Sekula \emph{et al.} \cite{sekula04}, based on measurements taken at pressures higher than $600$ MPa. The dependence is however stronger than that reported by Fujimori \emph{et al.} \cite{fujimori97} based on isothermal calorimetry, for which $dT_g/dP=0.06$ K\,MPa$^{-1}$. This indicates that the calorimetric and the dielectric relaxations may have somewhat different dependences on pressure. In figure \ref{fig:fragi} we illustrate the determination of $T_g$ and of the steepness index $m_P$ for the atmospheric-pressure data, using the part of the data of figure \ref{fig:dbpIsob} with a relaxation time longer than a millisecond. Along with the data we show the VTF fit from Sekula \emph{et al.} \cite{sekula04} extrapolated to low temperatures, which gives $T_g=177.4$ K and $m_P=84$. We have also performed a new VTF fit restricted to the data in the $10^{-6}$ s--$10^{2}$ s region. The result of this fit yields $T_g=176.1$ K and $m_P=79$. Finally, we have made a simple linear estimate of $\log_{10}\tau_\alpha$ as a function of $1/T$ in the temperature range shown in the figure. This linear slope fits the data close to $T_g$ better than any of the VTF fits. The corresponding glass transition temperature and steepness index are $T_g=176$ K and $m_P=65$. This illustrates that the determination of $T_g$ is rather robust, while this is less so for the steepness index. The latter depends on how it is obtained, and the use of extrapolated VTF fits can lead to an overestimation. (Of course, a VTF fit made over a very narrow range, e.g. $10^{-2}$ s--$10^{2}$ s, will agree with the linear fit, because it becomes essentially linear over the restricted range.) The fragility of DBP has earlier been reported to be $m_P=69$ \cite{bohmer93}, based on the data of Dixon \emph{et al.} \cite{dixon90}. We take $m_P=67$ as a representative value. The relaxation-time data along four different isotherms are displayed as a function of pressure in figure \ref{fig:dbpP}. In order to separate the relative effects of density and temperature it is convenient to express the relaxation time as a function of density and temperature rather than pressure and temperature. To do this, we need the pressure and temperature dependences of the density. However, for liquid DBP such data are only available at high temperature \cite{bridgman32}. In order to extrapolate the equation of state to low temperature we have applied the following scheme. When calculated from the data in Ref. \cite{bridgman32}, the expansion coefficient $\alpha_P$ shows a weak decrease with decreasing temperature. We therefore assume that the temperature dependence of $\alpha_P$ is linear over the whole temperature range and integrate with respect to temperature to obtain the density along the atmospheric-pressure isobar. In the whole temperature range of Ref. \cite{bridgman32}, the pressure dependence of the density is well described by fits to the Tait equation with temperature-dependent adjustable parameters $c$ and $b$ \cite{cook93} (which are directly related to the compressibility and its first-order pressure derivative). We have linearly extrapolated the temperature dependence of these parameters and used the Tait equation to calculate the pressure dependence along each isotherm. Extrapolating the derivatives rather than the density itself is expected to lead to smaller errors on the latter. 
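In code, the extrapolation scheme amounts to a few lines. The sketch below assumes the standard form of the Tait equation, $\rho(P,T)=\rho(0,T)/[1-c\ln(1+P/b)]$, with linear temperature extrapolation of the fitted $c(T)$ and $b(T)$; all numerical values are hypothetical placeholders, not our fitted parameters:
\begin{verbatim}
import numpy as np

def tait_density(P, rho_0, c, b):
    """Tait equation: rho(P) = rho(0) / (1 - c*ln(1 + P/b)), P and b in MPa."""
    return rho_0 / (1.0 - c * np.log(1.0 + P / b))

def linear_extrapolation(T, T_ref, v_ref, slope):
    """Linear-in-T extrapolation of a fitted parameter below the data range."""
    return v_ref + slope * (T - T_ref)

# hypothetical placeholder values, for illustration only
T = 205.5                                         # K, lowest isotherm
c = linear_extrapolation(T, 298.0, 0.090, 2e-5)   # dimensionless
b = linear_extrapolation(T, 298.0, 190.0, -0.5)   # MPa
rho_0 = 1.115   # g/cm^3 at P = 0, from integrating alpha_P(T) down in T

print(tait_density(np.array([0.0, 200.0, 400.0]), rho_0, c, b))
\end{verbatim}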
In addition, we have checked that this procedure gives physically reasonable pressure and temperature dependences of the expansivity and of the compressibility \cite{terminassian88}. Figure \ref{fig:dbpRho} shows the density dependence of the alpha-relaxation time along the four different isotherms, the atmospheric-pressure isobar and the 230 MPa isobar. We have also included the room-temperature dielectric data of Paluch \emph{et al.} \cite{paluch03}. For DBP the viscosity data and the dielectric relaxation time do not decouple under pressure \cite{sekula04}, and we have therefore also included the room-temperature viscosity data of Cook \emph{et al.} \cite{cook93}. In figure \ref{fig:dbpScaling} we show the data of figure \ref{fig:dbpRho} plotted as a function of the scaling variable $\rho^x/T$, choosing for $x$ the value that gives the best collapse for the data of this work. This corresponds to testing the scaling in equation \ref{eq:scaling} by assuming that $e(\rho)$ is a power law. The data taken at low density collapse quite well with $x=2.5$, while this is not true for the data of Paluch \emph{et al.} \cite{paluch03} taken at densities higher than approximately 1.2 g/cm$^3$. It is possible to make all the data collapse by allowing $e(\rho)$ to have a stronger density dependence at higher densities. In figure \ref{fig:dbpScaling2} we show the data as a function of $e(\rho)/T$, where we have constructed the density dependence of $e(\rho)$ in order to get a good overlap of all the data (we did not look for the best collapse, but merely evaluated the change of the isochronic expansivity: see section \ref{sec:back}). The resulting density dependence of $e(\rho)$ is shown in figure \ref{fig:dbpScaling2} along with the $\rho^{2.5}$ power law. Note that the quality of the data collapse depends only on the density dependence of $e(\rho)$, not on its absolute value. The constructed $e(\rho)$ has an apparent ``power-law'' exponent $x(\rho)=\mathrm{d}\log e(\rho)/\mathrm{d}\log\rho$ that increases from 1.5 to 3.5 with density in the range considered. In any case, the absence of collapse in figure \ref{fig:dbpScaling} cannot be explained by errors in estimating the PVT data: this is discussed in more detail in Appendix \ref{sec:densAp}. As a last note regarding the $e(\rho)/T$ scaling in figure \ref{fig:dbpScaling2}, we want to stress that we cannot test the scaling (Eq. \ref{eq:scaling}) in the density range above $1.25$ g/cm$^3$, where there is only one set of data. (This is why we did not attempt to fine-tune $e(\rho)$ to find the best collapse, see above.) Indeed, with a unique set of data in a given range of density it is always possible to construct $e(\rho)$ in this range to make the data overlap with data taken in other density ranges. We have determined the ratio between the isochoric fragility and the isobaric fragility at atmospheric pressure by calculating $\alpha_\tau$ along the isochrone of $100$ s and inserting it in Eq. \ref{eq:mpmrho}. This leads to $m_P/m_\rho\approx 1.2$, when $m_P$ is evaluated at atmospheric pressure. In figure \ref{fig:dbpIsob2} we show the isobaric data taken at atmospheric pressure and at 230 MPa scaled by their respective $T_g(P)$. No significant pressure dependence of the isobaric fragility is observed when going from atmospheric pressure to 230 MPa, which is consistent with the result of reference \cite{sekula04}. 
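The step from $\alpha_\tau$ to the fragility ratio is a direct application of Eq. \ref{eq:mpmrho}; a minimal sketch (with hypothetical points on the 100 s isochrone and an assumed $\alpha_P$, for illustration only) reads:
\begin{verbatim}
import numpy as np

# Hypothetical (T, rho) points along the tau_alpha = 100 s isochrone;
# illustrative values only, not the actual DBP isochrone.
T_iso = np.array([177.0, 185.0, 193.0, 200.0])     # K
rho_iso = np.array([1.115, 1.147, 1.179, 1.207])   # g/cm^3

# isochronic expansivity alpha_tau = -(1/rho) drho/dT along the isochrone (< 0)
alpha_tau = -np.gradient(rho_iso, T_iso) / rho_iso
alpha_P = 7.0e-4   # assumed isobaric expansivity at Tg(Patm), 1/K

ratio = 1.0 - alpha_P / alpha_tau[0]   # Eq. (mpmrho): m_P / m_rho at Patm
print(f"m_P/m_rho = {ratio:.2f}")      # ~1.2 with these numbers
\end{verbatim}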
The pressure independence of $m_P$ is connected to the relatively low value of $m_P/m_\rho=1.2$ (typical values are 1.1--2 \cite{tarjus04}); $m_\rho$ is pressure independent and the ratio $m_P/m_\rho$ cannot be lower than one (see Eq. \ref{eq:mpmrho}), so that $m_P$ can at most decrease by $20\%$ from its atmospheric-pressure value. Such a change would almost be within the error bar of the determination of $m_P$ from the data at $230$ MPa (see the discussion earlier in this section). \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure1.eps} \caption{Temperature dependence of the alpha-relaxation time (from dielectric measurements, $\tau_\alpha=1/\omega_{peak}$) of liquid DBP at atmospheric pressure and at 230 MPa (Arrhenius plot). Data at atmospheric pressure from other groups are also included: unpublished data from Nielsen \cite{nielsen06}, the VTF fit of \cite{sekula04} shown in the range where it can be considered as an interpolation of the original data, and data taken from figure 2(a) in reference \cite{dixon90}.}\label{fig:dbpIsob} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure2.eps} \caption{Atmospheric-pressure data of figure \ref{fig:dbpIsob} with relaxation times longer than a millisecond (symbols). Also shown are the VTF fit from reference \cite{sekula04} extrapolated to low temperatures (dashed-dotted line), a new VTF fit made by using data in the $10^{-6}$ s--$10^{2}$ s region (dashed line), and the estimated slope of the data in the long-time region (full line). The $T_g$'s estimated from these three methods are very similar, whereas the fragility varies significantly, from $m=65$ to $m=85$. }\label{fig:fragi} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure3.eps} \caption{Alpha-relaxation time of DBP (from dielectric measurements, $\tau_\alpha=1/\omega_{peak}$) as a function of pressure along 4 different isotherms (log-linear plot).}\label{fig:dbpP} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure4.eps} \caption{Logarithm of the alpha-relaxation time of DBP versus density (see the text regarding the calculation of density). Included are data from this work along with dielectric data from figure 3 in reference \cite{paluch03}, and viscosity data from reference \cite{cook93}. The viscosity data are shifted arbitrarily on the logarithmic scale in order to make the absolute values correspond to the dielectric data of reference \cite{paluch03}, which are taken at the same temperature.}\label{fig:dbpRho} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure5.eps} \caption{The alpha-relaxation times shown in figure \ref{fig:dbpRho} plotted as a function of $\rho^{2.5}/T$.}\label{fig:dbpScaling} \end{figure} \begin{figure}[htbp] \centering a) \includegraphics[scale=0.4]{figure6a} b) \includegraphics[scale=0.4]{figure6b.eps} \caption{(a) The alpha-relaxation times shown in figure \ref{fig:dbpRho} plotted as a function of $X=e(\rho)/T$, with $\mathrm{d}\log e(\rho)/\mathrm{d}\log\rho$ increasing as $\rho$ increases. (b) Density-dependent activation energy $e(\rho)$ (dashed line) used in the scaling variable $X=e(\rho)/T$ for collapsing the data in (a) (the associated $x(\rho)=\mathrm{d}\log e(\rho)/\mathrm{d}\log\rho$ increases from 1.5 to 3.5 in the density range under study). We also display the power law giving the best scaling at low density, $\rho^{2.5}$ (full line). 
}\label{fig:dbpScaling2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure7.eps} \caption{Arrhenius plot of the alpha-relaxation time of DBP at atmospheric pressure and at 230 MPa, with the temperature scaled by the pressure-dependent $T_g$, $T_g(P_{\mathrm{atm}})=176$ K and $T_g(230\,\mathrm{MPa})=200$ K. As in figure \ref{fig:dbpIsob}, data from other groups are also included: unpublished data from Nielsen \cite{nielsen06}, the VTF fit of \cite{sekula04} shown in the range where it can be considered as an interpolation of the original data, and data taken from figure 2(a) in reference \cite{dixon90}.}\label{fig:dbpIsob2} \end{figure} \subsection{m-Toluidine} The glass transition temperature at atmospheric pressure is $T_g=187$ K (for $\tau_\alpha=100$ s) and the isobaric fragility based on dielectric spectra is reported to be $m_P=82\pm3$ \cite{mandanici05,alba99}. (There has been some controversy about the dielectric relaxation in m-toluidine, see reference \cite{mandanici05} and references therein.) In the inset of figure \ref{fig:mTrho} we show the pressure-dependent alpha-relaxation time at $216.4$ K. Extrapolating the data to $\tau_\alpha=100$ s leads to $P_g=340\pm10$ MPa, which is in agreement with the slope, $dT_g/dP=0.45$ K\,MPa$^{-1}$, reported for the calorimetric glass transition in \cite{alba97}. This indicates that the decoupling between the timescales of dipole relaxation and of calorimetric relaxation which appears under pressure in the case of DBP is not present in m-toluidine in this pressure range. As for DBP, we wish to convert the temperature and pressure dependences of the relaxation time to temperature and density dependences. Density data are available along four isotherms in the $278.4$ K--$305.4$ K range for pressures up to 300 MPa \cite{wurflinger}. Tait fits and the thermal expansivity in this range were extrapolated by using the scheme described above for DBP in order to determine the density both as a function of temperature down to $T_g$, and as a function of pressure on the $216.4$ K isotherm. In figure \ref{fig:mTrho} we show the alpha-relaxation time as a function of density. The data taken at atmospheric pressure and the data taken along the 216.4 K isotherm cover two different ranges in density. It is therefore not possible from these data to verify the validity of the scaling in $X=e(\rho)/T$. We thus assume that the scaling holds. Moreover, due to the paucity of the data we describe $e(\rho)$ by a simple power law, $e(\rho)\propto\rho^x$. We find the exponent $x$ by exploiting the fact that the scaling variable $X=e(\rho)/T$ is uniquely fixed by the value of the relaxation time; applying this at $T_g$, namely setting $X_g(P_{\mathrm{atm}})=X_g(216.4\,\mathrm{K})$, leads to $x=2.3$ and gives a ratio of $m_P/m_\rho=1.2$. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure8.eps} \caption{Logarithm of the alpha-relaxation time of m-toluidine as a function of density along the isotherm $T=216.4$ K (symbols). The VTF fit of the atmospheric-pressure data of reference \cite{mandanici05} is also shown in the range where the fit can be considered as an interpolation of the data (dashed line). The inset shows the alpha-relaxation time of m-toluidine as a function of pressure along the isotherm $T=216.4$ K. 
}\label{fig:mTrho} \end{figure} \section{Spectral shape and stretching}\label{sec:spec} The shape of the relaxation function (or spectrum), most specifically its distinctly nonexponential (or non-Debye) character in the viscous regime, is taken as one of the important features of glassforming materials. Characterizing and quantifying this effect is however not fully straightforward and has led to diverging interpretations. First of all, the shape of the relaxation function or spectrum may change with the experimental probe considered. Even when restricting the comparison to a single probe, here dielectric relaxation, there is no consensus on how to best characterize the shape. We discuss in appendix \ref{sec:shape} various procedures that are commonly used and we test their validity on one representative spectrum. For reasons detailed in that appendix, we focus in the following on the Cole-Davidson fitting form. \subsection{Dibutyl phthalate}\label{sec:shapeDBP} The frequency-dependent dielectric loss for a selected set of different pressures and temperatures is shown in figure \ref{fig:dbpimag}. The first observation is that cooling and compressing have a similar effect, as both slow down the alpha relaxation and separate the alpha relaxation from higher-frequency beta processes. The data displayed are chosen so that different combinations of temperature and pressure give almost the same relaxation time. However, the correspondence is not perfect. In figure \ref{fig:dbpimag2} we have thus slightly shifted the data, by at most 0.2 decade, in order to make the peak positions overlap precisely. This allows us to compare the spectral shapes directly. It can be seen from the figure that the shape of the alpha peak itself is independent of pressure and temperature for a given value of the alpha-relaxation time (\emph{i.e.}, of the frequency of the peak maximum), while this is not true for the high-frequency part of the spectra, which is strongly influenced by the beta-relaxation peak (or high-frequency wing). When comparing datasets that have the same alpha-relaxation time one finds that the high-frequency intensity is higher for the pressure-temperature combination corresponding to high pressure and high temperature. In figure \ref{fig:dbpttszoom} we show all the datasets of figure \ref{fig:dbpimag} superimposed and we zoom in on the region of the peak maximum. The overall shape of the alpha relaxation is very similar at all pressures and temperatures. However, looking at the data in more detail, one finds a significantly larger degree of collapse between spectra which have the same relaxation time, whereas a small broadening of the alpha peak is visible as the relaxation time is increased. At long relaxation times there is a perfect overlap of the alpha-relaxation peaks which have the same relaxation time. At shorter relaxation times, $\log_{10}(\omega_{max})\approx 5$, the collapse is not as good: the peak gets slightly broader when pressure and temperature are increased along the isochrone. In all cases, the alpha peak is well described by a Cole-Davidson (CD) shape. The $\beta_{CD}$ goes from 0.49 to 0.45 on the isochrone with the shortest relaxation time and decreases to about 0.44 close to $T_g$ at all pressures. On the other hand, a Kohlrausch-Williams-Watts (KWW) fit close to $T_g$ gives $\beta_{KWW}=0.65$. A detailed discussion of the fitting procedures and of the relation between the CD and KWW descriptions is given in appendix \ref{sec:shape}. 
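For reference, the Cole-Davidson loss used in these fits is $\epsilon''(\omega)=-\mathrm{Im}\left[\Delta\epsilon/(1+i\omega\tau_{CD})^{\beta_{CD}}\right]$; a minimal sketch (with illustrative parameter values) shows its asymmetric peak:
\begin{verbatim}
import numpy as np

def cole_davidson_loss(omega, delta_eps, tau_cd, beta_cd):
    """Dielectric loss of the Cole-Davidson function
    eps*(omega) = eps_inf + delta_eps / (1 + 1j*omega*tau_cd)**beta_cd."""
    return -np.imag(delta_eps / (1.0 + 1j * omega * tau_cd) ** beta_cd)

omega = np.logspace(-3, 5, 400)   # rad/s; illustrative grid
loss = cole_davidson_loss(omega, delta_eps=1.0, tau_cd=1.0, beta_cd=0.44)

# the CD peak sits at omega*tau = tan(pi/(2*(1 + beta))), about 1.9 for beta = 0.44
print(f"peak at omega*tau = {omega[np.argmax(loss)]:.2f}")
\end{verbatim}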
\begin{figure}[htbp] \centering \includegraphics[scale=0.5]{figure9.eps} \caption{Log-log plot of the frequency-dependent dielectric loss of DBP. The curves can be sorted in 4 groups, each group having roughly the same peak frequency; from right to left: red dashed-dotted curve: T=253.9 K, P=320 MPa; black dots: T=236.3 K and, from right to left, P=153 MPa, P=251 MPa, P=389 MPa; full blue line: T=219.3 K and, from right to left, P=0 MPa, P=108 MPa, P=200 MPa, P=392 MPa; magenta dashed curve: T=206 K and, from right to left, P=0 MPa, P=85 MPa, P=206 MPa.}\label{fig:dbpimag} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{figure10.eps} \caption{Same dielectric-loss data of DBP as in figure \ref{fig:dbpimag} with a slight shift of the peak frequencies (less than 0.2 decade) to make the data taken under quasi-isochronic conditions precisely coincide. The symbols are the same as in figure \ref{fig:dbpimag}, but the data at T=206 K, P=206 MPa and T=219.3 K, P=392 MPa are not shown.}\label{fig:dbpimag2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{figure11.eps} \caption{Same dielectric-loss data as in figures \ref{fig:dbpimag} and \ref{fig:dbpimag2}, with the frequency and intensity now scaled by the values at the maximum. We show only $1.5$ decades in frequency in order to magnify the details. Notice a small broadening as the characteristic relaxation time increases. Blue dashed-dotted lines are three different data sets with $\log_{10}\nu_{max}\approx 2.6$ (P=320 MPa, T=253.9 K; P=153 MPa, T=236.3 K; and P=0 MPa, T=219.3 K). Red full lines are three data sets with $\log_{10}\nu_{max}\approx 4.1$ (P=251 MPa, T=236.3 K; P=108 MPa, T=219.3 K; and P=0 MPa, T=205.6 K). Green dashed lines are three data sets with $\log_{10}\nu_{max}\approx 5.2$ (P=339 MPa, T=236.3 K; P=200 MPa, T=219.3 K; and P=85 MPa, T=205.6 K). }\label{fig:dbpttszoom} \end{figure} \subsection{m-toluidine} The frequency-dependent dielectric loss of m-toluidine for several pressures along the T=216.4 K isotherm is shown in figure \ref{fig:mtolimag}. The data are then superimposed by scaling the intensity and the frequency by the intensity and the frequency of the peak maximum, respectively: this is displayed in figure \ref{fig:mtoltts}. When zooming in (figure \ref{fig:mtoltts}(b)) we still see almost no variation of the peak shape. For the present set of data, time-pressure superposition is thus obeyed to a higher degree than in DBP, and the changes are too small to give any pressure dependence in the parameters when fitting the spectra. The Cole-Davidson fit to the m-toluidine spectra gives $\beta_{CD}=0.42$ (see also appendix \ref{sec:shape}). Mandanici and coworkers \cite{mandanici05} have reported a temperature-independent value of $\beta_{CD}=0.45$ for data taken at atmospheric pressure in the temperature range 190 K--215 K, a value that is compatible with ours. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure12.eps} \caption{Log-log plot of the frequency-dependent dielectric loss of m-toluidine at T=216.4 K and pressures 0 MPa, 59 MPa, 79 MPa, 105 MPa, 122 MPa, 142 MPa, 173 MPa and 191 MPa. The peak shifts left as pressure is applied. 
Lines are guides to the eye.}\label{fig:mtolimag} \end{figure} \begin{figure}[htbp] \centering (a) \includegraphics[scale=0.4]{figure13a.eps} (b) \includegraphics[scale=0.4]{figure13b.eps} \caption{Same dielectric-loss data as in figure \ref{fig:mtolimag}, now with the intensity and the frequency scaled by the values at the peak maximum. Panel (b) shows a zoom of the data in (a) to focus on the alpha-relaxation region near the peak maximum.}\label{fig:mtoltts} \end{figure} \section{Discussion}\label{sec:disc} \subsection{Correlations with fragility} As discussed in the Introduction, the temperature dependence of the alpha-relaxation time (or of the viscosity) is usually considered as the most important phenomenon to understand in glass science. The isobaric fragility is then often used to characterize the viscous slowing down, and its measures, such as the steepness index, are then considered as fundamental parameters. Many studies have been aimed at investigating which other properties of the liquid and of the associated glass correlate with fragility. Such correlations have been empirically established by comparing rather large sets of systems covering a wide spectrum of fragilities. In the literature, the finding of a correlation between fragility and some other property is always interpreted as indicating that the property in question is related to the effect of \emph{temperature} on the structural relaxation. However, when cooling a liquid isobarically two effects contribute to the slowing down of the dynamics: the decrease of temperature and the associated increase of density. Hence, the isobaric fragility is a combined measure of the two effects. It is of course the underlying goal that the proposed correlations be used as guidelines and tests in the development of theories and models for the glass transition. It is therefore important to clarify if the correlations result from, and consequently unveil information on, the intrinsic effect of temperature on the relaxation time, the effect of density, or a balanced combination of the two. Eqs. \ref{eq:mpmrho2} and \ref{eq:mpmrho} show how the isobaric fragility can be decomposed into two contributions, that of temperature being given by $m_\rho$ and the relative effect of density on the relaxation time being characterized by $\alpha_P T_g\frac{\mathrm{d}\log e(\rho)}{\mathrm{d}\log\rho}$. Isobaric measurements give access neither to $m_\rho$ nor to $\alpha_P T_g\frac{\mathrm{d}\log e(\rho)}{\mathrm{d}\log\rho}$ independently, but the relevant information can be obtained from data taken under pressure, as we have shown for the data presented here. From this information it becomes possible to revisit the correlations between fragility and other properties \cite{niss06}. The underlying idea is that a property supposed to correlate with the effect of temperature on the relaxation time should more specifically correlate with the isochoric fragility, $m_\rho$, than with the isobaric one, $m_P$. As also stressed in the Introduction, it is instructive to consider the evolution of the empirically established correlations with pressure. As shown in section \ref{sec:iso}, $m_\rho$ is constant, \emph{i.e.}, is independent of density and pressure, when it is evaluated at a pressure- (or density-) dependent $T_g$ corresponding to a given relaxation time. Nonetheless, it follows from Eq. \ref{eq:mpmrho2} that the isobaric fragility will in general change due to the pressure dependence of $\alpha_P T_g \frac{\mathrm{d}\log e(\rho)}{\mathrm{d}\log\rho}$. 
As $T_g$ increases with pressure, $\alpha_P T_g(P)$ decreases, whereas $\frac{\mathrm{d}\log e(\rho)}{\mathrm{d}\log\rho}=x$ is often to a good approximation constant (the DBP case at high pressure discussed in section \ref{sec:relaxDBP} is one exception). As a result, the pressure dependence of $m_P$ is nontrivial. DBP, which we have studied here, shows no significant pressure dependence of the isobaric fragility, while the general behavior seen from the data compiled by Roland \emph{et al.} \cite{roland05} is that the isobaric fragility decreases or stays constant with pressure, with few exceptions. This seems to indicate that the decrease of $\alpha_P T_g(P)$ usually dominates over the other factors. The properties that are correlated with fragility will \emph{a priori} also depend on pressure or density. However, if a property is related to the pure effect of temperature on the relaxation time, and therefore correlates with $m_\rho$, then it should be independent of density when evaluated along an isochrone (usually the glass transition line $T_g$), as $m_\rho$ itself does not depend on density. \subsection{Stretching and fragility}\label{sec:betam} One of the properties that has been suggested to correlate with fragility is the nonexponential character of the relaxation function, usually expressed in terms of the stretching parameter $\beta_{KWW}$. The data we have reported here confirm the earlier finding \cite{ngai05a} that the spectral shape of the alpha relaxation does not vary when pressure is increased while keeping the relaxation time constant. This leads us to suggest that, if a correlation between fragility and stretching does exist, the stretching should correlate better with the isochoric fragility, which is likewise independent of pressure, than with the isobaric fragility. To test this hypothesis we have collected data from the literature reporting the isobaric fragility and the stretching of the relaxation at $T_g$. We consider here a description of the shape of the relaxation function in terms of the KWW stretching parameter $\beta_{KWW}$. This choice is made because it is convenient to use a characterization with only one parameter for the shape (see appendix \ref{sec:shape} for a discussion and the connection with the Cole-Davidson description used above) and because $\beta_{KWW}$ is the most commonly reported shape parameter for the liquids where $m_\rho$ is also available. The compilation of these data is shown in table \ref{table} and in figures \ref{fig:mP} and \ref{fig:mrho}, where both the isobaric fragility at atmospheric pressure (Fig. \ref{fig:mP}) and the isochoric fragility (Fig. \ref{fig:mrho}) are plotted against the stretching parameter. There is a great deal of scatter in both figures. There is however an observable trend, the fragilities appearing to decrease as the stretching exponent $\beta_{KWW}$ increases. The relative effect of density (over that of temperature) on the slowing down of the relaxation is characterized by the term $\alpha_P T_g \frac{\mathrm{d}\log e(\rho)}{\mathrm{d}\log\rho}=m_P/m_\rho-1$. In figure \ref{fig:ratio} we show the ratio $m_P/m_\rho$ as a function of $\beta_{KWW}$. Clearly, no correlation is found between this ratio and the stretching. \begin{landscape} \begin{table}\label{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Compound & $m_{P}$ & Refs. & $m_{\rho}$ & Refs. & $\beta_{KWW}$ & Refs. 
\\ \hline \emph{o}-terphenyl & 82, 81, 76, 84 & \cite{alba04,dixon88,huang01,paluch01} & 45 & \cite{alba04} & 0.57, 0.52 & \cite{dixon88,tolle01} \\ {\small Dibutyl phthalate} & 67 & this work & 56 & this work & 0.56, 0.65 & \cite{dixon90}, this work \\ PC & 104, 93, 90 & \cite{qin06,richert03,paluch01} & 57, 65$^*$ & \cite{casalini05b,reiser05} & 0.73 & \cite{paluch01} \\ BMPC & 70 & \cite{casalini05c} & 26 & \cite{casalini05c} & 0.6 & \cite{hensel02b}\\ BMMPC & 58 & \cite{casalini05b} & 25 & \cite{casalini05b} & 0.55 & \cite{casalini03} \\ DEP A & 95 & \cite{roland04} & 57 & \cite{roland04} & 0.38 & \cite{paluch03b} \\ KDE & 64, 73, 68 & \cite{casalini05b,paluch01,roland03} & 34 & \cite{casalini05b} & 0.75 & \cite{paluch01} \\ DHIQ & 163, 158 & \cite{casalini06,richert03} & 117 & \cite{casalini06} & 0.36 & \cite{richert03} \\ Cumene & 80$^*$ & \cite{barlow66} & 53$^*$ & \cite{barlow66,bridgman49} & 0.66 & \cite{nissU} \\ Salol & 68, 73, 63 & \cite{roland05,paluch01,laughlin72} & 36 & \cite{roland05} & 0.6, 0.53 & \cite{sidebottom89,bohmer93} \\ Glycerol & 40, 53 & \cite{alba04,birge86} & 38 & \cite{alba04} & 0.65, 0.7, 0.75 & \cite{birge86,ngai90,dixon90} \\ Sorbitol & 128 & \cite{casalini04} & 112 & \cite{casalini04} & 0.5 & \cite{ngai91} \\ \textit{m}-fluoroaniline & 70 & \cite{alba99} & 51$^*$ & \cite{reiser05} & 0.35, 0.64 & \cite{cutroni94,hensel05} \\ \textit{m}-toluidine & 84, 79 & \cite{mandanici05,alba99} & 68 & this work & 0.57 & this work \\ Polyisobutylene & 46 & \cite{plazek91} & 34$^*$ & \cite{audePHD} & 0.55 & \cite{plazek91} \\ Polyvinylchloride & 160, 191 & \cite{huang02,plazek91} & 140 & \cite{huang02} & 0.25 & \cite{plazek91} \\ Polyvinylacetate & 130, 95, 78 & \cite{huang02,alba04,roland05} & 130, 61, 52 & \cite{huang02,alba04,roland05} & 0.43 & \cite{plazek91} \\ Polystyrene & 77, 139 & \cite{huang02,plazek91} & 55 & \cite{huang02} & 0.35 & \cite{plazek91} \\ Polymethylacrylate & 102, 122, 102 & \cite{huang02,roland04,plazek91} & 80, 94 & \cite{huang02,roland04} & 0.41 & \cite{plazek91} \\ \hline \end{tabular} \caption{Fragilities and KWW stretching exponents of molecular liquids and polymers. The $^*$ indicates that the value is not given in the corresponding reference but is calculated from the data therein. The following abbreviations are used for the names of the liquids: PC = propylene carbonate, BMPC = 1,1'-bis(p-methoxyphenyl)cyclohexane, BMMPC = 1,1'-di(4-methoxy-5-methylphenyl)cyclohexane, KDE = cresolphtalein-dimethyl-ether, DEP A = diglycidylether of bisphenol A, and DHIQ = decahydroisoquinoline.} \end{table} \end{landscape} The correlation between stretching and fragility is not strikingly different in figures \ref{fig:mP} and \ref{fig:mrho}. However, both on theoretical grounds (focusing on the intrinsic effect of temperature) and on phenomenological ones (isochoric fragility and stretching do not appear to vary as one changes pressure along an isochrone), our contention is that one should prefer using the isochoric fragility. In the above we have considered only fragility and stretching at the conventional glass transition temperature, that is around $\tau_\alpha=100$ s. However, we have pointed out in the Introduction that both the steepness index characterizing fragility and the stretching parameter depend on the relaxation time. Although still debated, there seems to be a qualitative trend toward a decrease of the stretching (an increase in $\beta_{KWW}$) and of the steepness index as the relaxation time decreases and one approaches the ``normal'' liquid regime. 
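To make the degree of correlation in figures \ref{fig:mP}--\ref{fig:ratio} concrete, one can compute simple correlation coefficients from the table. The sketch below uses first-listed values for the molecular liquids (polymers and the ambiguous m-fluoroaniline entry omitted), so the numbers are indicative only:
\begin{verbatim}
import numpy as np

# (m_P, m_rho, beta_KWW), first-listed values from the table
data = {
    "o-terphenyl": (82, 45, 0.57),  "DBP":      (67, 56, 0.56),
    "PC":          (104, 57, 0.73), "BMPC":     (70, 26, 0.60),
    "BMMPC":       (58, 25, 0.55),  "DEP A":    (95, 57, 0.38),
    "KDE":         (64, 34, 0.75),  "DHIQ":     (163, 117, 0.36),
    "cumene":      (80, 53, 0.66),  "salol":    (68, 36, 0.60),
    "glycerol":    (40, 38, 0.65),  "sorbitol": (128, 112, 0.50),
    "m-toluidine": (84, 68, 0.57),
}
mP, mrho, beta = map(np.array, zip(*data.values()))

for label, m in (("m_P", mP), ("m_rho", mrho), ("m_P/m_rho", mP / mrho)):
    r = np.corrcoef(m, beta)[0, 1]
    print(f"corr({label}, beta_KWW) = {r:+.2f}")
\end{verbatim}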
It would certainly be very valuable to obtain more data in order to study how the correlation between fragility and stretching evolves as a function of the relaxation time. \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure14.eps} \caption{Isobaric fragility as a function of the stretching parameter. Diamonds: molecular liquids; circles: polymers. See table \ref{table} for numerical values and references. }\label{fig:mP} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure15.eps} \caption{Isochoric fragility $m_\rho$ as a function of the stretching parameter. Diamonds: molecular liquids; circles: polymers. See table \ref{table} for numerical values and references. }\label{fig:mrho} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.4]{figure16.eps} \caption{Ratio $m_P/m_\rho$ between the isobaric and the isochoric fragility as a function of the stretching parameter. Diamonds: molecular liquids; circles: polymers. See table \ref{table} for numerical values and references. }\label{fig:ratio} \end{figure} \section{Conclusion} In this article we have stressed the constraints that one should put on the search for (meaningful) empirical correlations between the fragility of a glassformer, which characterizes the temperature dependence of the slowing down, and other dynamic or thermodynamic properties. Among such constraints is the check that the proposed correlations, often established at $T_g$ and at atmospheric pressure, are robust when one changes the reference relaxation time (in place of that defining $T_g$) as well as when one varies the pressure under isochronic conditions. Important also is the fact that fragility depends on the thermodynamic path considered (constant pressure versus constant density) and that, contrary to the isobaric fragility, the isochoric one appears as an intrinsic property of the glassformer, characterizing the pure effect of temperature. We have reported dielectric relaxation spectra under pressure for two molecular liquids, m-toluidine and DBP. We have combined these data with the available thermodynamic data and analyzed the respective effects of density and temperature on the dynamics. Our results are consistent with a general picture in which the isochoric fragility is constant on an isochrone. The shape of the relaxation function, as expressed e.g. by the stretching parameter $\beta_{KWW}$, has also been found constant along isochrones. We have finally discussed the possible correlation between fragility and stretching, suggesting that a meaningful correlation is to be looked for between stretching and isochoric fragility, as both seem to be constant under isochronic conditions and thereby reflect the intrinsic effect of temperature. On the practical side, the correlation is however no stronger with the isochoric fragility than with the isobaric one. On top of the large error bars that may be present, and that we have addressed in some detail, this reflects the fact that correlations are rather statistical in nature, emerging from a comparison of a large number of glassformers, rather than one-to-one correspondences between properties of the materials. \section{Acknowledgment} We would like to thank A. W\"urflinger for the PVT data on m-toluidine and Albena Nielsen and coworkers for making her dielectric data on DBP available prior to publication. We are grateful to Denis L'H\^ote and Fran{\c c}ois Ladieu for having lent us the SR830 lock-in amplifier. 
Moreover, we acknowledge the work of Jo\"el Jaffr\'e, who built the autoclave for the dielectric measurements under pressure. This work was supported by the CNRS (France) and grant No. 645-03-0230 from Forskeruddannelsesraadet (Denmark).
\section{Introduction} In a signalling game, a privately informed \emph{sender} (for instance a student) observes their type (e.g. ability) and chooses a signal (e.g. education level) that is observed by a \emph{receiver} (such as an employer), who then picks an action without observing the sender's type. These signalling games can have many perfect Bayesian equilibria, which are supported by different specifications of how the receivers would update their beliefs following the observation of ``off-path'' signals that the equilibrium predicts will never occur. These off-path beliefs are not pinned down by Bayes' rule, and solution concepts such as perfect Bayesian equilibrium and sequential equilibrium place no restrictions on them. This has led to the development of equilibrium refinements like \citet{cho_signaling_1987}'s Intuitive Criterion and \citet{banks_equilibrium_1987}'s divine equilibrium that reduce the set of equilibria by imposing restrictions on off-path beliefs, using arguments about how players should infer the equilibrium meaning of observations that the equilibrium says should never occur. This paper uses a learning model to provide a micro-foundation for off-path beliefs in signalling games, then uses this foundation to deduce restrictions on which Nash equilibria can emerge from learning. Our learning model has a continuum of agents, with a constant inflow of new agents who do not know the prevailing distribution of strategies and a constant outflow of equal size. This lets us analyze learning in a deterministic stationary model where social \emph{steady states} exist, even though individual agents learn. To give agents adequate learning opportunities, we assume that their expected lifetimes are long, so that most agents in the population live a long time. And to ensure that agents have sufficiently strong incentives to experiment, we suppose that they are very patient. This leads us to analyze what we call the ``\emph{patiently stable}'' steady states of our learning model. Our agents are Bayesians who believe they face a time-invariant distribution of opponents' play. As in much of the learning-in-games literature and most laboratory experiments, these agents only learn from their personal observations and not from other sources such as newspapers, parents, or friends.\footnote{As we explain in Corollary \ref{cor:1}, the results extend to the case where some fraction of the population has access to data about the play of others.} Therefore, patient young senders will rationally try out different signals to see how receivers react. This implies that some ``off-path'' signals that have probability zero in a given equilibrium will occur with small but positive probabilities in the steady states that approximate it. For this reason, we can use Bayes' rule to derive restrictions on the receivers' typical posterior beliefs following these rare but positive-probability observations. As we will show, differences in the payoff functions of the sender types generate different incentives for these experiments. As a consequence, we can prove that patiently stable steady states must be a subset of Nash equilibria where the receiver's off-path beliefs satisfy a \emph{compatibility criterion} restriction. This provides a learning-based justification for eliminating certain ``unintuitive'' equilibria in signalling games. These results also suggest that learning theory could be used to control the rates of off-path play and hence generate equilibrium refinements in other games. 
\footnote{It is interesting to note that \citet{spence_job_1973} also interprets equilibria of the signalling game as a steady state (or ``nontransitory configuration'') of a learning process, though he does not explicitly specify what sort of process he has in mind.} \subsection{Outline and Overview of Results} Section \ref{sec:Model} lays out the notation we will use for signalling games and introduces our learning model. Sections \ref{sec:sender_side} and \ref{sec:receiver_side} then separately analyze the learning problems of the senders and of the receivers, respectively. There we define and characterize the \emph{aggregate responses} of the senders and of the receivers, which are the analogs of the best-response functions in the one-shot signalling game. Finally, Section \ref{sec:two_sided} turns to steady states of the learning model, which can be viewed as pairs of mutual aggregate responses, analogous to the definition of Nash equilibrium. Section \ref{sec:sender_side} defines the type-compatibility orders. We say that type $\theta'$ is \emph{more type-compatible with signal $s'$} than type $\theta''$ if whenever $s'$ is a weak best response for $\theta''$ against some receiver behavior strategy, it is a strict best response for $\theta'$ against the same strategy. To relate this static definition to the senders' optimal dynamic learning behavior, we show that under our assumptions the senders' learning problem is formally a multi-armed bandit, so the optimal policy of each type is characterized by the Gittins index. Theorem \ref{prop:index} shows that the compatibility order on types is equivalent to an order on their Gittins indices: $\theta'$ is more type-compatible with signal $s'$ than type $\theta''$ if and only if whenever $s'$ has the (weakly) highest Gittins index for $\theta''$, it has the strictly highest index for $\theta'$, provided the two types hold the same beliefs and have the same discount factor. Lemma \ref{prop:compatible_comonotonic} then uses a coupling argument to extend this observation to the aggregate sender response, proving that types who are more compatible with a signal send it more often in aggregate. Section \ref{sec:receiver_side} considers the learning problem of the receivers. Intuitively, we would expect that when receivers are long-lived, most of them will ``learn'' the type-compatibility order, and we show that this is the case. More precisely, we show that most receivers best respond to a posterior belief whose likelihood ratio of $\theta'$ to $\theta''$ dominates the prior likelihood ratio of these two types whenever they observe a signal $s$ with which $\theta'$ is more type-compatible than $\theta''$. Lemma \ref{prop:receiver_learning} shows this is true for any signal that is sent ``frequently enough'' relative to the receivers' expected lifespan, using a result of \citet*{fudenberg_he_imhof_2016} on updating posteriors after rare events. Lemma \ref{lem:nondom_message} then shows that any \emph{equilibrium undominated} signal (see Definition \ref{def:J}) gets sent ``frequently enough'' in steady state when senders are sufficiently patient and long-lived. Combining the three lemmas discussed above, we establish our main result: any patiently stable steady state must be a Nash equilibrium satisfying the additional restriction that the receivers best respond to certain \emph{admissible beliefs} after every off-path signal (Theorem \ref{thm:PS_is_compatible}). 
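To fix ideas about the Gittins-index characterization invoked above, the following sketch (not part of the paper's formal analysis) computes the index of a single Bernoulli arm, e.g. a signal whose binary payoff consequence the sender tracks with a Beta posterior, by the standard retirement-value calibration; the effective discount factor plays the role of $\gamma\delta$, and all numerical values are hypothetical:
\begin{verbatim}
from functools import lru_cache

BETA = 0.9   # effective discount factor, gamma*delta in the model
DEPTH = 60   # truncation depth of the dynamic program

def gittins_index(a0, b0, beta=BETA, tol=1e-4):
    """Gittins index of a Bernoulli arm with Beta(a0, b0) posterior,
    via calibration against a constant retirement reward lam: the
    index is the lam at which the agent is indifferent between
    retiring forever and pulling the arm at least once more."""
    def continuation_value(lam):
        @lru_cache(maxsize=None)
        def V(a, b):
            retire = lam / (1.0 - beta)
            if (a - a0) + (b - b0) >= DEPTH:   # truncate: retire from here on
                return retire
            p = a / (a + b)                    # posterior mean of the arm
            pull = p * (1.0 + beta * V(a + 1, b)) + (1 - p) * beta * V(a, b + 1)
            return max(retire, pull)
        return V(a0, b0)

    lo, hi = 0.0, 1.0   # the index lies in [0, 1] for rewards in {0, 1}
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if continuation_value(lam) > lam / (1.0 - beta) + 1e-12:
            lo = lam    # experimenting beats retiring, so the index exceeds lam
        else:
            hi = lam
    return 0.5 * (lo + hi)

# Same posterior mean, but the less-sampled arm carries more option value:
print(gittins_index(1, 1), gittins_index(10, 10))
\end{verbatim}
The comparison in the last line illustrates why patient senders experiment: a signal whose consequences are uncertain can have a higher index than a better-sampled one with the same expected payoff.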
As an example, consider the beer-quiche game studied by \citet{cho_signaling_1987}, where it is easy to verify that the strong type is more compatible with ``beer'' than the weak type. Our results imply that the strong types will in aggregate send this signal at least as often as the weak types do, and that a strong type will send it ``many times'' when it is very patient. As a consequence, when senders are patient, long-lived receivers are unlikely to revise the probability of the strong type downwards following an observation of ``beer.'' Thus the ``both types eat quiche'' equilibrium is not a patiently stable steady state of the learning model, as it would require receivers to interpret ``beer'' as a signal that the sender is weak. \medskip{} \subsection{Related Work} The most closely related work is that of \citet{fudenberg_steady_1993} and \citet{fudenberg_superstition_2006}, which studied a similar learning model. A key issue in this work, and more generally in studying learning in extensive-form games, is characterizing how much agents will experiment with actions that are not myopically optimal. If agents do not experiment at all, then non-Nash equilibria can persist, because players can maintain incorrect but self-confirming beliefs about off-path play. \citet{fudenberg_steady_1993} showed that patient long-lived agents will experiment enough at their on-path information sets to learn if they have any profitable deviations, thus ruling out steady states that are not Nash equilibria. However, more experimentation than that is needed for learning to generate the sharper predictions associated with backward induction and sequential equilibrium. \citet{fudenberg_superstition_2006} showed that patient rational agents need not do enough experimentation to imply backward induction in games of perfect information. We say more below about how the models and proofs of those papers differ from ours. This paper is also related to the Bayesian learning models of \citet{kalai_rational_1993}, which studied two-player games with one agent on each side, so that every self-confirming equilibrium is path-equivalent to a Nash equilibrium, and \citet{esponda_berknash_2016}, which allowed agents to experiment but did not characterize when and how this occurs. It is also related to the literature on boundedly rational experimentation in extensive-form games (e.g. \citet*{fudenbergKreps1988}, \citet*{jehiel_learning_2005}, \citet*{fudenberg_learning_1995}, \citet*{laslier_stubborn_2014}), where the experimentation rules of the agents are exogenously specified. We assume that each sender's type is fixed at birth, as opposed to being i.i.d. over time. \citet*{dekel_learning_2004} show some of the differences this can make using various equilibrium concepts, but they do not develop an explicit model of non-equilibrium learning. For simplicity, we assume here that agents do not know the payoffs of other players and have full-support priors over the opposing side's behavior strategies. Our companion paper \citet{FudenbergHe2017TCE} supposes that players assign zero probability to dominated strategies of their opponents, as in the Intuitive Criterion \citep{cho_signaling_1987}, divine equilibrium \citep{banks_equilibrium_1987}, and rationalizable self-confirming equilibrium \citep*{dekel1999payoff}. There we analyze how the resulting micro-founded equilibrium refinement compares to those in past work. 
\section{Model \label{sec:Model}} \subsection{Signalling Game Notation} A \emph{signalling game} has two players, a sender (player 1, ``she'') and a receiver (player 2, ``he''). The sender's type is drawn from a finite set $\Theta$ according to a prior $\lambda\in\Delta(\Theta)$ with $\lambda(\theta)>0$ for all $\theta$.\footnote{Here and subsequently $\Delta(X)$ denotes the collection of probability distributions on the set $X.$} There is a finite set $S$ of signals for the sender and a finite set $A$ of actions for the receiver.\footnote{To lighten notation we assume that the same set of actions is feasible following any signal. This is without loss of generality for our results, as we could let the receiver have very negative payoffs when he responds to a signal with an ``impossible'' action.} The utility functions of the sender and receiver are $u_{1}:\Theta\times S\times A\to\mathbb{R}$ and $u_{2}:\Theta\times S\times A\to\mathbb{R}$ respectively. When the game is played, the sender knows her type and sends a signal $s\in S$ to the receiver. The receiver observes the signal, then responds with an action $a\in A$. Finally, payoffs are realized. A \emph{behavior strategy for the sender}, $\pi_{1}=(\pi_{1}(\cdot|\theta))_{\theta\in\Theta}$, is a type-contingent mixture over signals $S$. Write $\Pi_{1}$ for the set of all sender behavior strategies. A \emph{behavior strategy for the receiver}, $\pi_{2}=(\pi_{2}(\cdot|s))_{s\in S}$, is a signal-contingent mixture over actions $A$. Write $\Pi_{2}$ for the set of all receiver behavior strategies. \subsection{Learning by Individual Agents \label{subsec:learning1}} We now build a learning model with a given signalling game as the stage game. In this subsection, we explain an individual agent's learning problem. In the next subsection, we complete the learning model by describing a society of learning agents who are randomly matched to play the signalling game every period. Time is discrete and all agents are rational Bayesians with geometrically distributed lifetimes. They survive between periods with probability $0\le\gamma<1$ and further discount future utility flows by $0\le\delta<1$, so their objective is to maximize the expected value of $\sum_{t=0}^{\infty}(\gamma\delta)^{t}\cdot u_{t}$. Here, $0\le\gamma\delta<1$ is the effective discount factor, and $u_{t}$ is the payoff $t$ periods from today. At birth, each agent is assigned a role in the signalling game: either as a sender with type $\theta$ or as a receiver. Agents know their role, which is fixed for life. Every period, each agent is randomly and anonymously matched with an opponent to play the signalling game, and the game's outcome determines the agent's payoff that period. At the end of each period, agents observe the outcomes of their own matches, that is, the signal sent, the action played in response, and the sender's type. They do not observe the identity, age, or past experiences of their opponents.\footnote{The receiver's payoff reveals the sender's type for generic assignments of payoffs to terminal nodes. If the receiver's payoff function is independent of the sender's type, then the receivers' beliefs about the type (and hence equilibrium refinements) are irrelevant. If the receivers do care about the sender's type but observe neither the sender's type nor their own realized payoff, a great many outcomes can persist, as in \citet*{dekel_learning_2004}.
} Importantly, a sender only observes the receiver's response to the signal she sent, and not how the receiver would have reacted had she sent a different signal. Agents update their beliefs and play the signalling game again with new random opponents next period, provided they are still alive. Agents believe they face a fixed but unknown distribution of opponents' aggregate play, so they believe that their observations are exchangeable. We feel that this is a plausible first hypothesis in many situations, so we expect that agents will maintain their belief in stationarity when it is approximately correct, but will reject it given clear evidence to the contrary, as when there is a strong time trend or a high-frequency cycle. The environment will indeed be constant in the steady states that we analyze. Formally, each sender is born with a prior density function over the aggregate behavior strategy of the receivers, $g_{1}:\Pi_{2}\to\mathbb{R}_{+}$, which integrates to 1. Similarly, each receiver is born with a prior density over the senders' aggregate behavior strategy\footnote{Note that the agent's prior belief is over opponents' \emph{aggregate} play (i.e. $\Pi_{1}$ or $\Pi_{2}$) and not over the prevailing distribution of behavior strategies in the opponent population (i.e. $\Delta(\Pi_{2})$ or $\Delta(\Pi_{1})$), since under our assumption of anonymous random matching these are observationally equivalent for our agents. For instance, a receiver cannot distinguish between a society where all type $\theta$ senders randomize 50-50 between signals $s_{1}$ and $s_{2}$ each period, and another society where half of the type $\theta$ senders always play $s_{1}$ while the other half always play $s_{2}$. Note also that because agents believe the system is in a steady state, they do not care about calendar time and do not have beliefs about it. \citet*{fudenbergKreps1994learning} suppose that agents append a non-Bayesian statistical test of whether their observations are exchangeable to a Bayesian model that presumes that they are.}, $g_{2}:\Pi_{1}\to\mathbb{R}_{+}$. We denote the marginal distribution of $g_{1}$ on signal $s$ as $g_{1}^{(s)}$, so that $g_{1}^{(s)}(\pi_{2}(\cdot|s))$ is the density of the new senders' prior over how receivers respond to signal $s$. Similarly, we denote the $\theta$ marginal of $g_{2}$ as $g_{2}^{(\theta)}$, so that $g_{2}^{(\theta)}(\pi_{1}(\cdot|\theta))$ is the new receivers' prior density over $\pi_{1}(\cdot|\theta)\in\Delta(S)$. \renewcommand{\theenumi}{(\alph{enumi})} It is important to remember that $g_{1}$ and $g_{2}$ are beliefs over opponents' strategies, not strategies themselves. A newborn sender expects the response to $s$ to be $\int\pi_{2}(\cdot|s)\cdot g_{1}(\pi_{2})d\pi_{2}$, while a newborn receiver expects type $\theta$ to play $\int\pi_{1}(\cdot|\theta)\cdot g_{2}(\pi_{1})d\pi_{1}$. We now state a regularity assumption on the agents' priors that will be maintained throughout. \begin{defn} \label{def:regular_prior} A prior $g=(g_{1},g_{2})$ is \textbf{regular} if: \end{defn} \begin{enumerate} \item {[}\emph{independence}{]} $g_{1}(\pi_{2})=\underset{s\in S}{\prod}g_{1}^{(s)}(\pi_{2}(\cdot|s))$ and $g_{2}(\pi_{1})=\underset{\theta\in\Theta}{\prod}g_{2}^{(\theta)}(\pi_{1}(\cdot|\theta))$.
\item {[}$g_{1}$ \emph{non-doctrinaire}{]} $g_{1}$ is continuous and strictly positive on the interior of $\Pi_{2}$. \item {[}$g_{2}$ \emph{nice}{]} For each type $\theta$, there are positive constants $\left(\alpha_{s}^{(\theta)}\right)_{s\in S}$ such that \[ \pi_{1}(\cdot|\theta)\mapsto\frac{g_{2}^{(\theta)}(\pi_{1}(\cdot|\theta))}{\prod_{s\in S}\pi_{1}(s|\theta)^{\alpha_{s}^{(\theta)}-1}} \] is uniformly continuous and bounded away from zero on the relative interior of $\Pi_{1}^{(\theta)}$, the set of behavior strategies of type $\theta$. \end{enumerate} Independence ensures that a receiver does not learn how type $\theta$ plays by observing the behavior of some other type $\theta^{'}\ne\theta$, and that a sender does not learn how receivers react to signal $s$ by experimenting with some other signal $s^{'}\ne s$. For example, this means that in \citet{cho_signaling_1987}'s beer-quiche game the sender does not learn how receivers respond to beer by eating quiche.\footnote{One could imagine learning environments where the senders believe that the responses to various signals are correlated, but independence is a natural special case.} The non-doctrinaire nature of $g_{1}$ and $g_{2}$ implies that the agents never see an observation to which they assigned zero prior probability, so they have a well-defined optimization problem after any history. Non-doctrinaire priors also imply that a large enough data set can outweigh prior beliefs \citep{diaconis_uniform_1990}. The technical assumption about the boundary behavior of $g_{2}$ in (c) ensures that the prior density function $g_{2}$ behaves like a power function near the boundary of $\Pi_{1}$. Any continuous density that is strictly positive on $\Pi_{1}$ satisfies this condition (take each $\alpha_{s}^{(\theta)}=1$), as does the Dirichlet distribution, which is the prior associated with fictitious play \citep{fudenberg1993learning}. The set of histories for an age $t$ sender of type $\theta$ is $Y_{\theta}[t]:=(S\times A)^{t}$, where each period the history records the signal sent and the action that her receiver opponent took in response. The set of all histories for a type $\theta$ is the union $Y_{\theta}\coloneqq\bigcup_{t=0}^{\infty}Y_{\theta}[t]$. The dynamic optimization problem of type $\theta$ has an optimal policy function $\sigma_{\theta}:Y_{\theta}\to S$, where $\sigma_{\theta}(y_{\theta})$ is the signal that a type $\theta$ with history $y_{\theta}$ would send the next time she plays the signalling game. Analogously, the set of histories for an age $t$ receiver is $Y_{2}[t]:=(\Theta\times S)^{t}$, where each period the history records the type of his sender opponent and the signal that she sent. The set of all receiver histories is the union $Y_{2}\coloneqq\bigcup_{t=0}^{\infty}Y_{2}[t]$. The receiver's learning problem admits an optimal policy function $\sigma_{2}:Y_{2}\to A^{S}$, where $\sigma_{2}(y_{2})$ is the pure strategy that a receiver with history $y_{2}$ would commit to the next time he plays the game.\footnote{Because our agents are expected utility maximizers, it is without loss of generality to assume each agent uses a deterministic policy rule. If more than one such rule exists, we fix one arbitrarily. Of course, the optimal policies $\sigma_{\theta}$ and $\sigma_{2}$ depend on the prior $g$ as well as the effective discount factor $\delta\gamma$. Where no confusion arises, we suppress these dependencies.
} \subsection{Random Matching and Aggregate Play \label{subsec:learning2}} We analyze learning in a deterministic stationary model with a continuum of agents, as in \citet{fudenberg_steady_1993,fudenberg_superstition_2006}. One innovation is that we let lifetimes follow a geometric distribution instead of the finite and deterministic lifetimes assumed in those earlier papers, so that we can use the Gittins index. The society contains a unit mass of agents in the role of receivers and mass $\lambda(\theta)$ in the role of type $\theta$ for each $\theta\in\Theta$. As described in Subsection \ref{subsec:learning1}, each agent has $0\le\gamma<1$ chance of surviving at the end of each period and complementary chance $1-\gamma$ of dying. To preserve population sizes, a mass $(1-\gamma)$ of new receivers and a mass $\lambda(\theta)(1-\gamma)$ of new type-$\theta$ senders are born into the society every period. Each period, agents in the society are matched uniformly at random to play the signalling game. In the spirit of the law of large numbers, each sender has probability $(1-\gamma)\gamma^{t}$ of matching with a receiver of age $t$, while each receiver has probability $\lambda(\theta)(1-\gamma)\gamma^{t}$ of matching with a type $\theta$ of age $t$. A \emph{state} $\psi$ of the learning model is described by the mass of agents with each possible history. We write it as \[ \psi\in\left(\times_{\theta\in\Theta}\Delta(Y_{\theta})\right)\times\Delta(Y_{2}). \] We refer to the components of a state $\psi$ by $\psi_{\theta}\in\Delta(Y_{\theta})$ and $\psi_{2}\in\Delta(Y_{2})$. Given the agents' optimal policies, each possible history for an agent completely determines how that agent will play in the next match. The sender policy functions $\sigma_{\theta}$ are maps from sender histories to signals\footnote{Remember that we have fixed deterministic policy functions.}, so they naturally extend to maps from distributions over sender histories to distributions over signals. That is, given the policy function $\sigma_{\theta}$, each state $\psi$ induces an aggregate behavior strategy $\sigma_{\theta}(\psi_{\theta})\in\Delta(S)$ for each type $\theta$ population, where we extend the domain of $\sigma_{\theta}$ from $Y_{\theta}$ to distributions on $Y_{\theta}$ in the natural way: \begin{equation} \sigma_{\theta}(\psi_{\theta})(s)\coloneqq\psi_{\theta}\left\{ y_{\theta}\in Y_{\theta}:\sigma_{\theta}(y_{\theta})=s\right\} .\label{eq:induced_sender_strat} \end{equation} Similarly, state $\psi$ and the optimal receiver policy $\sigma_{2}$ together induce an aggregate behavior strategy $\sigma_{2}(\psi_{2})$ for the receiver population, where \[ \sigma_{2}(\psi_{2})(a|s)\coloneqq\psi_{2}\left\{ y_{2}\in Y_{2}:\sigma_{2}(y_{2})(s)=a\right\} . \] We will study the steady states of this learning model, to be defined more precisely in Section \ref{sec:two_sided}. Loosely speaking, a steady state is a state $\psi$ that reproduces itself indefinitely when agents use their optimal policies. Put another way, a steady state induces a time-invariant distribution over how the signalling game is played in the society. Suppose society is at a steady state today and we measure what fraction of type $\theta$ sent a certain signal $s$ in today's matches. After all agents modify their strategies based on their updated beliefs and after all births and deaths take place, the fraction of type $\theta$ playing $s$ in tomorrow's matches will be the same as today.
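To make the moving parts of the model concrete, the following is a minimal Monte Carlo sketch of the random-matching society, written in Python. It uses the payoffs of the beer-quiche game from Example \ref{exa:beer-quiche} below, and it substitutes simple myopic, count-based rules for the optimal policies $\sigma_{\theta}$ and $\sigma_{2}$; these stand-in rules, the finite population size, and all numeric parameters are illustrative assumptions only, not part of the model's formal definition. \begin{verbatim}
# Minimal Monte Carlo sketch of the random-matching learning model.
# Myopic count-based policies stand in for the optimal (Gittins)
# policies sigma_theta / sigma_2 -- an illustrative assumption.
import random

GAMMA = 0.95                              # survival probability gamma
LAMBDA = {"strong": 0.9, "weak": 0.1}     # prior over sender types
TYPES, SIGNALS, ACTIONS = tuple(LAMBDA), ("B", "Q"), ("F", "NF")

# Beer-quiche payoffs: u1 = v(theta, s) + z(a), z(F) = 0, z(NF) = 2.
V = {("strong", "B"): 1, ("strong", "Q"): 0,
     ("weak", "B"): 0, ("weak", "Q"): 1}
Z = {"F": 0, "NF": 2}
U2 = {("strong", "NF"): 1, ("strong", "F"): 0,
      ("weak", "F"): 1, ("weak", "NF"): 0}

def born(theta=None):
    """Newborn agent with uniform Dirichlet pseudo-counts."""
    if theta is None:   # receiver: counts of signals sent by each type
        return {"counts": {t: {s: 1.0 for s in SIGNALS} for t in TYPES}}
    return {"theta": theta,  # sender: counts of actions after each signal
            "counts": {s: {a: 1.0 for a in ACTIONS} for s in SIGNALS}}

def sender_policy(snd):                   # myopic stand-in for sigma_theta
    def value(s):
        c = snd["counts"][s]; n = sum(c.values())
        return V[snd["theta"], s] + sum(c[a] / n * Z[a] for a in ACTIONS)
    return max(SIGNALS, key=value)

def receiver_policy(rcv, s):              # best response to plug-in posterior
    post = {t: LAMBDA[t] * rcv["counts"][t][s] / sum(rcv["counts"][t].values())
            for t in TYPES}
    return max(ACTIONS, key=lambda a: sum(post[t] * U2[t, a] for t in TYPES))

random.seed(0)
senders = [born(random.choices(TYPES, weights=list(LAMBDA.values()))[0])
           for _ in range(1000)]
receivers = [born() for _ in range(1000)]
for period in range(200):
    random.shuffle(receivers)             # uniform random matching
    for snd, rcv in zip(senders, receivers):
        s = sender_policy(snd)
        a = receiver_policy(rcv, s)
        snd["counts"][s][a] += 1          # sender observes (s, a)
        rcv["counts"][snd["theta"]][s] += 1   # receiver observes (theta, s)
    # geometric deaths; newborns keep each population's mass constant
    senders = [x if random.random() < GAMMA else born(x["theta"]) for x in senders]
    receivers = [x if random.random() < GAMMA else born() for x in receivers]

for t in TYPES:   # aggregate play of each type, as in the induced strategy
    grp = [x for x in senders if x["theta"] == t]
    print(t, sum(sender_policy(x) == "B" for x in grp) / len(grp))
\end{verbatim} With these stand-ins, the printed aggregate frequencies of B illustrate the kind of type-dependent aggregate play that the next section characterizes exactly for optimal learners.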
\section{Senders' Optimal Policies and Type Compatibility \label{sec:sender_side}} This section studies the senders' learning problem. We will prove that differences in the payoff structures of the various sender types generate certain restrictions on their behavior in the learning model. Subsection \ref{subsec:sender_problem} notes that the senders face a multi-armed bandit, so the Gittins index characterizes their optimal policies. Subsection \ref{subsec:sender_APR} defines the notion of an aggregate sender response, which describes the aggregate distribution over sender strategies that is induced by a fixed aggregate distribution of receiver play. In Subsection \ref{subsec:compatible}, we define \emph{type compatibility}, which formalizes what it means for type $\theta^{'}$ to be more ``compatible'' with any given signal $s$ than type $\theta^{''}$ is. The definition of type compatibility is static, in the sense that it depends only on the two types' payoff functions in the one-shot signalling game. Subsection \ref{subsec:gittins} relates type compatibility to the Gittins index, which applies to the dynamic learning model. We use this relationship to show in Subsection \ref{subsec:sender_reply} that whenever type $\theta^{'}$ is more compatible with $s$ than type $\theta^{''}$, type $\theta^{'}$ sends signal $s$ relatively more often in the learning model. \subsection{Optimal Policies and Multi-Armed Bandits \label{subsec:sender_problem}} Each type-$\theta$ sender thinks she is facing a fixed but unknown aggregate receiver behavior strategy $\pi_{2}$, so each period when she sends signal $s$, she believes that the response is drawn from some $\pi_{2}(\cdot|s)\in\Delta(A)$, i.i.d. across periods. Because her beliefs about the responses to the various signals are independent, her problem is equivalent to a discounted multi-armed bandit with the signals $s\in S$ as the arms, where the reward from pulling arm $s$ is $u_{1}(\theta,s,a)$ with $a$ drawn from the unknown $\pi_{2}(\cdot|s)$. Let $\nu_{s}\in\Delta(\Delta(A))$ be a belief over the space of mixed replies to signal $s$, and let $\nu=(\nu_{s})_{s\in S}$ be a profile of such beliefs. Write $I(\theta,s,\nu,\beta)$ for the Gittins index of signal $s$ for type $\theta$, with beliefs $\nu$ over the receiver's play after various signals, so that \[ I(\theta,s,\nu,\beta)\coloneqq\sup_{\tau>0}\dfrac{\mathbb{E}_{\nu_{s}}\left\{ \sum_{t=0}^{\tau-1}\beta^{t}\cdot u_{1}(\theta,s,a_{s}(t))\right\} }{\mathbb{E}_{\nu_{s}}\left\{ \sum_{t=0}^{\tau-1}\beta^{t}\right\} }. \] Here, $a_{s}(t)$ is the receiver's response that the sender observes the $t$-th time she sends signal $s$, $\tau$ is a stopping time\footnote{That is, whether or not $\tau=t$ depends only on the realizations of $a_{s}(0),a_{s}(1),...,a_{s}(t-1)$.} and the expectation $\mathbb{E}_{\nu_{s}}$ over the sequence of responses $\{a_{s}(t)\}_{t\ge0}$ depends on the sender's belief $\nu_{s}$ about responses to signal $s$. The Gittins index theorem \citep{gittins1979bandit} implies that after every positive-probability history $y_{\theta}$, the optimal policy $\sigma_{\theta}$ for a sender of type $\theta$ sends the signal that has the highest Gittins index for that type under the profile of posterior beliefs $(\nu_{s})_{s\in S}$ induced by $y_{\theta}$.
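Since each type's problem is a discounted bandit, her indices can be computed numerically. The sketch below is a minimal illustration, not the paper's construction: it computes the Gittins index, in the per-period units of the definition above, of an arm whose reward takes values in $\{0,1\}$ (two possible receiver responses, with payoffs normalized to 0 and 1) under a Beta posterior, via the standard retirement-option calibration; the horizon truncation and tolerance are arbitrary numerical choices. \begin{verbatim}
# Gittins index (per-period units) of an arm with rewards in {0, 1}
# and a Beta(a, b) posterior, via retirement calibration: the index
# is the per-period retirement reward m at which retiring immediately
# first becomes optimal. Finite-horizon truncation; all numerical
# choices here are illustrative.
from functools import lru_cache

def gittins_bernoulli(a, b, beta, horizon=200, tol=1e-5):
    def gain_over_retiring(m):
        retire = m / (1 - beta)          # value of retiring forever
        @lru_cache(maxsize=None)
        def V(i, j, t):                  # posterior Beta(a+i, b+j), t rounds left
            if t == 0:
                return retire
            p = (a + i) / (a + i + b + j)        # predictive P(reward = 1)
            cont = (p * (1 + beta * V(i + 1, j, t - 1))
                    + (1 - p) * beta * V(i, j + 1, t - 1))
            return max(retire, cont)
        return V(0, 0, horizon) - retire  # > 0 iff experimenting beats retiring
    lo, hi = 0.0, 1.0                    # the index lies in the reward range
    while hi - lo > tol:                 # bisect on the indifference point
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gain_over_retiring(mid) > 1e-12 else (lo, mid)
    return (lo + hi) / 2

# With a uniform Beta(1,1) prior the index strictly exceeds the myopic
# expected reward 0.5: the arm carries exploration value.
print(gittins_bernoulli(1, 1, beta=0.9))
\end{verbatim}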
\subsection{The Aggregate Sender Response \label{subsec:sender_APR}} Next, we define the \emph{aggregate sender response} (ASR) $\mathscr{R}_{1}:\Pi_{2}\to\Pi_{1}$. Loosely speaking, the ASR is the learning analog of the sender's best-response function in the static signalling game. If we fix the aggregate play of the receiver population at $\pi_{2}$ and run the learning model period after period from an arbitrary initial state, the distribution of signals sent by each type $\theta$ will approach $\mathscr{R}_{1}[\pi_{2}](\cdot|\theta)$. We will subsequently define the aggregate receiver response and then use these functions to characterize the steady states of the system. To formalize the definition of the aggregate sender response, we first introduce the one-period-forward map. \begin{defn} The \emph{one-period-forward map for type $\theta$}, $f_{\theta}:\Delta(Y_{\theta})\times\Pi_{2}\to\Delta(Y_{\theta})$, is \[ f_{\theta}[\psi_{\theta},\pi_{2}](y_{\theta},(s,a)):=\psi_{\theta}(y_{\theta})\cdot\gamma\cdot\boldsymbol{1}\{\sigma_{\theta}(y_{\theta})=s\}\cdot\pi_{2}(a|s) \] and $f_{\theta}[\psi_{\theta},\pi_{2}](\emptyset):=1-\gamma$. \end{defn} If the distribution over histories in the type $\theta$ population is $\psi_{\theta}$ and the receiver population's aggregate play is $\pi_{2}$, the resulting distribution over histories in the type $\theta$ population is $f_{\theta}[\psi_{\theta},\pi_{2}]$. Specifically, there will be a $1-\gamma$ mass of newborn type-$\theta$ senders who have no history. Also, if the optimal first signal of a newborn type $\theta$ is $s^{'}$, that is, if $\sigma_{\theta}(\emptyset)=s^{'}$, then a mass $f_{\theta}[\psi_{\theta},\pi_{2}](s^{'},a^{'})=\gamma\cdot(1-\gamma)\cdot\pi_{2}(a^{'}|s^{'})$ of newborn senders send $s^{'}$ in their first match, observe action $a^{'}$ in response, and survive. In general, a type $\theta$ who has history $y_{\theta}$ and whose policy $\sigma_{\theta}(y_{\theta})$ prescribes playing $s$ has $\pi_{2}(a|s)$ chance of having subsequent history $(y_{\theta},(s,a))$ provided she survives until next period; the factor $\gamma\cdot\boldsymbol{1}\{\sigma_{\theta}(y_{\theta})=s\}$ combines the survival probability $\gamma$ with this prescription. Write $f_{\theta}^{T}$ for the $T$-fold application of $f_{\theta}$ on $\Delta(Y_{\theta})$, holding fixed some $\pi_{2}$. Note that for arbitrary states $\psi$ and $\psi^{'}$, if $(y_{\theta},(s,a))$ is a length-1 history (i.e. $y_{\theta}=\emptyset$), then $\psi_{\theta}(y_{\theta})=\psi_{\theta}^{'}(y_{\theta})$ because both states must assign mass $1-\gamma$ to $\emptyset$, so $f_{\theta}^{1}[\psi_{\theta},\pi_{2}]$ and $f_{\theta}^{1}[\psi_{\theta}^{'},\pi_{2}]$ agree on $Y_{\theta}[1]$. Iterating, for $T=2$, $f_{\theta}^{2}[\psi_{\theta},\pi_{2}]$ and $f_{\theta}^{2}[\psi_{\theta}^{'},\pi_{2}]$ agree on $Y_{\theta}[2]$, because each history in $Y_{\theta}[2]$ can be written as $(y_{\theta},(s,a))$ for $y_{\theta}\in Y_{\theta}[1]$, and $f_{\theta}^{1}[\psi_{\theta},\pi_{2}]$ and $f_{\theta}^{1}[\psi_{\theta}^{'},\pi_{2}]$ match on all $y_{\theta}\in Y_{\theta}[1]$. Proceeding inductively, we conclude that $f_{\theta}^{T}[\psi_{\theta},\pi_{2}]$ and $f_{\theta}^{T}[\psi_{\theta}^{'},\pi_{2}]$ agree on $Y_{\theta}[t]$ for all $t\le T$, for any two type $\theta$ states $\psi_{\theta}$ and $\psi_{\theta}^{'}$. This means $\lim_{T\to\infty}f_{\theta}^{T}[\psi_{\theta},\pi_{2}]$ exists and is independent of the initial $\psi_{\theta}$. Denote this limit as $\psi_{\theta}^{\pi_{2}}$. It is the long-run distribution over type-$\theta$ histories induced by starting at an arbitrary state and fixing the receiver population's play at $\pi_{2}$.
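Concretely, both $\psi_{\theta}^{\pi_{2}}$ and the ASR mass defined next can be approximated by iterating $f_{\theta}$ on truncated histories. The sketch below does this in Python; the policy argument is an arbitrary callable standing in for the optimal policy $\sigma_{\theta}$, and the truncation horizon and pruning threshold are numerical conveniences. \begin{verbatim}
# Iterate the one-period-forward map f_theta on histories of length
# <= T, then read off the mass of agents who will send each signal in
# their next match. `sigma` maps histories (tuples of (signal, action)
# pairs) to signals; it stands in for the optimal policy. Histories of
# maximal length T carry the leftover tail mass gamma^T.
def asr_mass(sigma, pi2, signals, actions, gamma, T=25, prune=1e-10):
    psi = {(): 1.0}                      # arbitrary initial state
    for _ in range(T):
        nxt = {(): 1.0 - gamma}          # newborns enter with empty history
        for y, mass in psi.items():
            if mass < prune:             # drop negligible branches
                continue
            s = sigma(y)                 # signal prescribed at history y
            for a in actions:            # survive, play s, observe a
                nxt[y + ((s, a),)] = mass * gamma * pi2[s][a]
        psi = nxt
    out = {s: 0.0 for s in signals}
    for y, mass in psi.items():
        out[sigma(y)] += mass
    return out

# Toy check: receivers play F after B and NF after Q; the stand-in
# policy tries B twice, then plays Q forever. Most lifetime mass ends
# up on Q.
pi2 = {"B": {"F": 1.0, "NF": 0.0}, "Q": {"F": 0.0, "NF": 1.0}}
sigma = lambda y: "B" if sum(s == "B" for s, _ in y) < 2 else "Q"
print(asr_mass(sigma, pi2, ("B", "Q"), ("F", "NF"), gamma=0.9))
\end{verbatim}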
\begin{defn} The \emph{aggregate sender response (ASR)} $\mathscr{R}_{1}:\Pi_{2}\to\Pi_{1}$ is defined by \[ \mathscr{R}_{1}[\pi_{2}](s|\theta):=\psi_{\theta}^{\pi_{2}}\left(y_{\theta}:\sigma_{\theta}(y_{\theta})=s\right). \] \end{defn} That is, $\mathscr{R}_{1}[\pi_{2}](s|\theta)$ is the mass of type $\theta$ who will send $s$ in their next match, according to the measure $\psi_{\theta}^{\pi_{2}}$. \begin{rem} \label{rem:ASR_details} Technically, $\mathscr{R}_{1}$ depends on $g_{1},\delta,$ and $\gamma$, just like $\sigma_{\theta}$ does. When relevant, we will make these dependencies clear by adding the appropriate parameters as superscripts to $\mathscr{R}_{1}$, but we will mostly suppress them to lighten notation. \end{rem} \begin{rem} \label{rem:sender_APR} Although the ASR is defined at the aggregate level, $\mathscr{R}_{1}[\pi_{2}](\cdot|\theta)$ also describes the probability distribution of the play of a single type-$\theta$ sender over her lifetime when she faces receiver play drawn from $\pi_{2}$ every period.\footnote{Observe that $f_{\theta}[\psi_{\theta},\pi_{2}]$ restricted to $Y_{\theta}[1]$ gives the probability distribution over histories for a type $\theta$ who uses $\sigma_{\theta}$ and faces play drawn from $\pi_{2}$ for one period: it puts weight $\pi_{2}(a^{'}|s^{'})$ on history $(s^{'},a^{'})$ where $s^{'}=\sigma_{\theta}(\emptyset)$. Similarly, $f_{\theta}^{T}[\psi_{\theta},\pi_{2}]$ restricted to $Y_{\theta}[t]$ for any $t\le T$ gives the probability distribution over histories for someone who uses $\sigma_{\theta}$ and faces play drawn from $\pi_{2}$ for $t$ periods. Since $\psi_{\theta}^{\pi_{2}}$ assigns probability $(1-\gamma)\gamma^{t}$ to the set of histories $Y_{\theta}[t]$, $\mathscr{R}_{1}[\pi_{2}](\cdot|\theta)=\sigma_{\theta}(\psi_{\theta}^{\pi_{2}})$ is a weighted average over the distributions of period-$t$ play ($t=1,2,3,...$) of someone using $\sigma_{\theta}$ and facing $\pi_{2}$, with weight $(1-\gamma)\gamma^{t-1}$ given to the period-$t$ distribution.} \end{rem} \subsection{\label{subsec:compatible}Type compatibility in signalling games} We now introduce a notion of comparative compatibility with a signal in the one-shot signalling game. The key result of this section is that this static definition of compatibility actually imposes restrictions on types' dynamic behavior in the ASR. \begin{defn} Signal $s^{'}$ is \emph{more type-compatible} with $\theta^{'}$ than $\theta^{''}$, written as $\theta^{'}\succ_{s^{'}}\theta^{''}$, if for every $\pi_{2}\in\Pi_{2}$ such that \[ u_{1}(\theta^{''},s^{'},\pi_{2}(\cdot|s^{'}))\ge\max_{s^{''}\ne s^{'}}u_{1}(\theta^{''},s^{''},\pi_{2}(\cdot|s^{''})), \] we have \[ u_{1}(\theta^{'},s^{'},\pi_{2}(\cdot|s^{'}))>\max_{s^{''}\ne s^{'}}u_{1}(\theta^{'},s^{''},\pi_{2}(\cdot|s^{''})). \] \end{defn} In words, $\theta^{'}\succ_{s^{'}}\theta^{''}$ means that whenever $s^{'}$ is a weak best response for $\theta^{''}$ against some receiver behavior strategy $\pi_{2}$, it is also a strict best response for $\theta^{'}$ against $\pi_{2}$. The following proposition says the compatibility order is transitive and essentially asymmetric. Its proof is in the Appendix. \begin{prop} \label{prop:properties_of_comp_relation} \end{prop} \begin{enumerate} \item $\succ_{s^{'}}$ is transitive. \item Except when $s^{'}$ is either strictly dominant for both $\theta^{'}$ and $\theta^{''}$ or strictly dominated for both $\theta^{'}$ and $\theta^{''}$, $\theta^{'}\succ_{s^{'}}\theta^{''}$ implies $\theta^{''}\not\succ_{s^{'}}\theta^{'}$.
\end{enumerate} To check the compatibility condition, one must consider all strategies in $\Pi_{2}$, just as the belief restrictions in divine equilibrium involve all the possible mixed best responses to various beliefs. However, when the sender's utility function is separable in the sense that $u_{1}(\theta,s,a)=v(\theta,s)+z(a)$, as in \citet{spence_job_1973}'s job market signalling game and in \citet{cho_signaling_1987}'s beer-quiche game (given below), a sufficient condition for $\theta^{'}\succ_{s^{'}}\theta^{''}$ is \[ v(\theta^{'},s^{'})-v(\theta^{''},s^{'})>\max_{s^{''}\ne s^{'}}\left[v(\theta^{'},s^{''})-v(\theta^{''},s^{''})\right]. \] This can be interpreted as saying that $s^{'}$ is the least costly signal for $\theta^{'}$ relative to $\theta^{''}$. In the Online Appendix, we present a sufficient condition for $\theta^{'}\succ_{s^{'}}\theta^{''}$ that applies to general payoff functions. \begin{example} \noindent \label{exa:beer-quiche} (\citet{cho_signaling_1987}'s beer-quiche game) The sender (P1) is either strong ($\theta_{\text{strong}}$) or weak ($\theta_{\text{weak}}$), with prior probability $\lambda(\theta_{\text{strong}})=0.9$. The sender chooses to either drink beer or eat quiche for breakfast. The receiver (P2), observing this breakfast choice but not the sender's type, chooses whether to fight the sender. If the sender is $\theta_{\text{weak}}$, the receiver prefers fighting. If the sender is $\theta_{\text{strong}}$, the receiver prefers not fighting. Also, $\theta_{\text{strong}}$ prefers beer for breakfast while $\theta_{\text{weak}}$ prefers quiche for breakfast. Both types prefer not being fought over having their favorite breakfast. \noindent \begin{center} \includegraphics[scale=0.4]{BQ1-eps-converted-to.pdf} \noindent \end{center} This game has separable sender utility with $v(\theta_{\text{strong}},B)=v(\theta_{\text{weak}},Q)=1$, $v(\theta_{\text{strong}},Q)=v(\theta_{\text{weak}},B)=0$, $z(F)=0$ and $z(NF)=2$. The sufficient condition holds for $B$: $v(\theta_{\text{strong}},B)-v(\theta_{\text{weak}},B)=1>-1=v(\theta_{\text{strong}},Q)-v(\theta_{\text{weak}},Q)$. So we have $\theta_{\text{strong}}\succ_{B}\theta_{\text{weak}}$. \hfill{} $\blacklozenge$ \end{example} It is easy to see that in every Nash equilibrium $\pi^{*}$, if $\theta^{'}\succ_{s^{'}}\theta^{''}$, then $\pi_{1}^{*}(s^{'}|\theta^{''})>0$ implies $\pi_{1}^{*}(s^{'}|\theta^{'})=1$. By Bayes rule, this implies that the receiver's equilibrium belief $p$ after every \emph{on-path} signal $s^{'}$ satisfies the restriction $\dfrac{p(\theta^{''}|s^{'})}{p(\theta^{'}|s^{'})}\le\dfrac{\lambda(\theta^{''})}{\lambda(\theta^{'})}$ whenever $\theta^{'}\succ_{s^{'}}\theta^{''}$. Thus in every Nash equilibrium of the beer-quiche game, if the sender chooses B with positive ex-ante probability, then the receiver's odds ratio that the sender is strong after seeing this signal cannot be less than the prior odds ratio. Our main result, Theorem \ref{thm:PS_is_compatible}, essentially shows that for any strategy profile that can be approximated by steady state outcomes with patient and long-lived agents, the same compatibility-based restriction is satisfied even for \emph{off-path} signals. In particular, this allows us to place restrictions on the receiver's belief after seeing B in equilibria where no type of sender ever plays this signal. \subsection{Type compatibility and the Gittins index \label{subsec:gittins}} We now establish a link between the compatibility order $\theta^{'}\succ_{s^{'}}\theta^{''}$ and the two types' Gittins indices for $s^{'}$.
\begin{thm} \noindent \label{prop:index} $\theta^{'}\succ_{s^{'}}\theta^{''}$ if and only if for every $\beta\in[0,1)$ and every $\nu$, $I(\theta^{''},s^{'},\nu,\beta)\ge\max_{s^{''}\ne s^{'}}I(\theta^{''},s^{''},\nu,\beta)$ implies $I(\theta^{'},s^{'},\nu,\beta)>\max_{s^{''}\ne s^{'}}I(\theta^{'},s^{''},\nu,\beta)$. \end{thm} That is, $\theta^{'}\succ_{s^{'}}\theta^{''}$ if and only if whenever $s^{'}$ has the (weakly) highest Gittins index for $\theta^{''}$, it has the strictly highest index for $\theta^{'}$, provided the two types hold the same beliefs and have the same discount factor. The key to the proof is that every stopping time $\tau$ for sequential experiments with signal $s$ induces a discounted time average over the receiver actions observed before stopping, which we denote by $\pi_{2,s}(\tau,\nu_{s},\beta)$ and interpret as a mixture over receiver actions. To illustrate the construction, suppose $\nu_{s}$ is supported on two pure receiver strategies after $s$: either $\pi_{2}(a^{'}|s)=1$ or $\pi_{2}(a^{''}|s)=1$, with both strategies equally likely. Consider the stopping time $\tau$ that specifies stopping after the first time the receiver plays $a^{''}$. Then the discounted time-average frequency of $a^{''}$ is: \[ \frac{\sum_{t=0}^{\infty}\beta^{t}\cdot\mathbb{P}_{\nu_{s}}[\tau\ge t\text{ and receiver plays }a^{''}\text{ in period }t]}{\sum_{t=0}^{\infty}\beta^{t}\cdot\mathbb{P}_{\nu_{s}}[\tau\ge t]}=\frac{0.5}{1+\sum_{t=1}^{\infty}\beta^{t}\cdot0.5}=\frac{1-\beta}{2-\beta}. \] So $\pi_{2,s}(\tau,\nu_{s},\beta)(a^{''})=\frac{1-\beta}{2-\beta}$, and similarly we can calculate that $\pi_{2,s}(\tau,\nu_{s},\beta)(a^{'})=\frac{1}{2-\beta}$, so that $\pi_{2,s}$ indeed corresponds to a mixture over receiver actions. Moreover, we can show that when the optimal stopping problem that defines the Gittins index of $s$ is evaluated at $\tau$, it yields the sender's payoff from playing $s$ against $\pi_{2,s}(\tau,\nu_{s},\beta)$ in the one-shot signalling game. Thus type $\theta$'s Gittins index of $s$ is $u_{1}(\theta,s,\pi_{2,s}(\tau_{s}^{\theta},\nu_{s},\beta))$, where $\tau_{s}^{\theta}$ is the optimal stopping time of type $\theta$ in the stopping problem defining the Gittins index of $s$. This links the Gittins index to the signalling game payoff structure, which lets us apply the compatibility definition to establish the desired equivalence. \subsection{Type compatibility and the ASR \label{subsec:sender_reply}} The next lemma shows how restrictions on the Gittins indices generate restrictions on the aggregate sender response. \begin{lem} \label{prop:compatible_comonotonic}Suppose $\theta^{'}\succ_{s^{'}}\theta^{''}$. Then for any regular prior $g_{1}$, $0\le\delta,\gamma<1$, and any $\pi_{2}\in\Pi_{2}$, we have $\mathscr{R}_{1}[\pi_{2}](s^{'}|\theta^{'})\ge\mathscr{R}_{1}[\pi_{2}](s^{'}|\theta^{''})$. \end{lem} Theorem \ref{prop:index} showed that when $\theta^{'}\succ_{s^{'}}\theta^{''}$ and the two types share the same beliefs, if $\theta^{''}$ plays $s^{'}$ then $\theta^{'}$ must also play $s^{'}$. But even though newborn agents of both types start with the same prior $g_{1}$, their beliefs may quickly diverge during the learning process, because $\sigma_{\theta^{'}}$ and $\sigma_{\theta^{''}}$ prescribe different experiments.
This lemma shows that compatibility still imposes restrictions on the aggregate play of the sender population: regardless of the aggregate play $\pi_{2}$ in the receiver population, the frequencies with which $s^{'}$ appears in the aggregate responses of different types are always comonotonic with the compatibility order $\succ_{s^{'}}$. It is natural to expect that the comonotonicity condition in Lemma \ref{prop:compatible_comonotonic} will be reflected in the beliefs of most receivers when the receivers live a long time and so have many observations. Lemma \ref{prop:receiver_learning} in the next section shows that for signals $s^{'}$ that are sent sufficiently often, most receivers have posterior beliefs $p$ such that $\dfrac{p(\theta^{''}|s^{'})}{p(\theta^{'}|s^{'})}\le\dfrac{\lambda(\theta^{''})}{\lambda(\theta^{'})}$ whenever $\theta^{'}\succ_{s^{'}}\theta^{''}$. Lemma \ref{lem:nondom_message} then shows that signals that are not equilibrium dominated are played sufficiently often by patient senders. To gain some intuition for Lemma \ref{prop:compatible_comonotonic}, consider two newborn senders with types $\theta_{\text{strong}}$ and $\theta_{\text{weak}}$ who are learning to play the beer-quiche game from Example \ref{exa:beer-quiche}. Suppose they have uniform priors over the responses to each signal, and that they face a sequence of receivers programmed to play F after B and NF after Q. Since observing F is the worst possible news about a signal's payoff, the Gittins index of a signal decreases when F is observed. Conversely, the Gittins index of a signal increases after each observation of NF.\footnote{This follows from \citet{bellman1956problem}'s Theorem 2 on Bernoulli bandits.} So, there are $n_{1},n_{2}\ge0$ such that type $\theta_{\text{strong}}$ will play B for $n_{1}$ periods (and observe $n_{1}$ instances of F) and then play Q forever after, while type $\theta_{\text{weak}}$ will play B for $n_{2}$ periods before switching to Q forever after. Now we claim that $n_{1}\ge n_{2}$. To see why, suppose instead that $n_{1}<n_{2}$, and let $\nu$ be the posterior belief about the receivers' aggregate play induced by $n_{1}$ periods of observing F after B. After $n_{1}$ periods, both types would share the belief $\nu$. At belief $\nu$, type $\theta_{\text{weak}}$ would play B while type $\theta_{\text{strong}}$ would play Q, so signal B would have the highest Gittins index for $\theta_{\text{weak}}$ but not for $\theta_{\text{strong}}$. But this contradicts Theorem \ref{prop:index}, since $\theta_{\text{strong}}\succ_{B}\theta_{\text{weak}}$. The proof of Lemma \ref{prop:compatible_comonotonic} relies on the similar idea of fixing a particular ``programming'' of receiver play and studying the induced paths of experimentation for different types. In the aggregate learning model, the sequence of responses that a given sender encounters in her life depends on the realization of the random matching process, because different receivers have different histories and respond differently to a given signal. We can index all possible sequences of random matching realizations using a device we call the ``pre-programmed response path.'' To show that more compatible types play a given signal more often, it suffices to show this comparison holds on each pre-programmed response path, thus coupling the learning processes of types $\theta^{'}$ and $\theta^{''}$. We will show that the intuition above extends to signalling games with any number of signals and to any pre-programmed response path.
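The next definition formalizes the replay device. As a concrete preview, the sketch below replays a fixed response path against two illustrative stand-in policies; in the actual proof the policies are the Gittins policies $\sigma_{\theta}$, and the comparison between types follows from Theorem \ref{prop:index} rather than from any particular stand-in. \begin{verbatim}
# Replay a fixed pre-programmed response path: the j-th time the
# sender plays s she observes a_{j,s}, regardless of calendar time.
# Returns the discounted lifetime frequency of each signal, with
# weight (1 - gamma) * gamma^t on the period-(t+1) signal. `policy`
# is an arbitrary callable standing in for sigma_theta.
def replay(policy, path, signals, gamma, horizon=2000):
    counts = {s: 0 for s in signals}     # times each signal sent so far
    freq = {s: 0.0 for s in signals}
    history = ()
    for t in range(horizon):
        s = policy(history)
        freq[s] += (1 - gamma) * gamma ** t
        a = path[s][counts[s]]           # programmed response a_{j, s}
        counts[s] += 1
        history += ((s, a),)
    return freq

# Two stand-in "types" facing the same path of discouraging responses
# to B: the type that tolerates more failures on B ends up sending B
# with higher discounted frequency, which is the comparison the lemma
# establishes for actual Gittins learners.
path = {"B": ["F"] * 2000, "Q": ["NF"] * 2000}
stubborn = lambda y: "B" if sum(s == "B" for s, _ in y) < 6 else "Q"
timid = lambda y: "B" if sum(s == "B" for s, _ in y) < 2 else "Q"
print(replay(stubborn, path, ("B", "Q"), 0.95)["B"])   # larger
print(replay(timid, path, ("B", "Q"), 0.95)["B"])      # smaller
\end{verbatim}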
\begin{defn} A \emph{pre-programmed response path} $\mathfrak{a}=(a_{1,s},a_{2,s},\ldots)_{s\in S}$ is an element of $\times_{s\in S}\left(A^{\infty}\right)$. \end{defn} A pre-programmed response path is an $|S|$-tuple of infinite sequences of receiver actions, one sequence for each signal. For a given pre-programmed response path $\mathfrak{a}$, we can imagine starting with a newborn type $\theta$ and generating receiver play each period in the following programmatic manner: when the sender plays $s$ for the $j$-th time, respond with receiver action $a_{j,s}$. (If the sender sends $s^{''}$ 5 times and then sends $s^{'}\ne s^{''}$, the response she gets to $s^{'}$ is $a_{1,s^{'}}$, not $a_{6,s^{'}}$.) For a type $\theta$ who applies $\sigma_{\theta}$ each period, $\mathfrak{a}$ induces a deterministic history of experiments and responses, which we denote $y_{\theta}(\mathfrak{a})$. The induced history $y_{\theta}(\mathfrak{a})$ can be used to calculate $\overline{\mathscr{R}}_{1}[\mathfrak{a}](\cdot|\theta)$, the distribution of signals over the lifetime of a type $\theta$ induced by the pre-programmed response path $\mathfrak{a}$. Namely, $\overline{\mathscr{R}}_{1}[\mathfrak{a}](\cdot|\theta)$ is simply a mixture over all signals sent along the history $y_{\theta}(\mathfrak{a})$, with weight $(1-\gamma)\gamma^{t-1}$ given to the signal in period $t$. Now consider a type $\theta$ facing actions generated i.i.d. from the receiver behavior strategy $\pi_{2}$ each period, as in the interpretation of $\mathscr{R}_{1}$ in Remark \ref{rem:sender_APR}. This data-generating process is equivalent to drawing a random pre-programmed response path $\mathfrak{a}$ at time 0 according to a suitable distribution (each $a_{j,s}$ drawn i.i.d. from $\pi_{2}(\cdot|s)$), then producing all receiver actions using $\mathfrak{a}$. That is, $\mathscr{R}_{1}[\pi_{2}](\cdot|\theta)=\int\overline{\mathscr{R}}_{1}[\mathfrak{a}](\cdot|\theta)d\pi_{2}(\mathfrak{a})$, where we abuse notation and use $d\pi_{2}(\mathfrak{a})$ to denote the distribution over pre-programmed response paths associated with $\pi_{2}$. Importantly, any two types $\theta^{'}$ and $\theta^{''}$ face the same distribution over pre-programmed response paths, so to prove the lemma it suffices to show $\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{'})\ge\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{''})$ for all $\mathfrak{a}$. \begin{proof} For $t\ge0$, write $y_{\theta}^{t}$ for the truncation of the infinite history $y_{\theta}$ to the first $t$ periods, with $y_{\theta}^{\infty}:=y_{\theta}$. Given a finite or infinite history $y_{\theta}^{t}$ for type $\theta$, the signal counting function $\#(s|y_{\theta}^{t})$ returns how many times signal $s$ appears in $y_{\theta}^{t}$. (We need this counting function because the receiver play generated by a pre-programmed response path each period depends on how many times each signal has been sent so far.) As discussed above, we need only show $\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{'})\ge\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{''})$. Let $\mathfrak{a}$ be given and write $T_{j}^{\theta}$ for the period in which type $\theta$ sends signal $s^{'}$ for the $j$-th time in the induced history $y_{\theta}(\mathfrak{a})$. If no such period exists, then set $T_{j}^{\theta}=\infty$.
Since $\overline{\mathscr{R}}_{1}[\mathfrak{a}](\cdot|\theta)$ is a weighted average over signals in $y_{\theta}(\mathfrak{a})$ with decreasing weights given to later signals, to prove $\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{'})\ge\overline{\mathscr{R}}_{1}[\mathfrak{a}](s^{'}|\theta^{''})$ it suffices to show that $T_{j}^{\theta^{'}}\le T_{j}^{\theta^{''}}$ for every $j$. Towards this goal, we will prove a sequence of statements by induction: \texttt{Statement $j$}: Provided $T_{j}^{\theta^{''}}$ is finite, $\#\left(s^{''}\ |\ y_{\theta^{'}}^{T_{j}^{\theta^{'}}}(\mathfrak{a})\right)\le\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{j}^{\theta^{''}}}(\mathfrak{a})\right)$ for all $s^{''}\ne s^{'}$. For every $j$ where $T_{j}^{\theta^{''}}<\infty$, \texttt{Statement $j$} implies that the number of periods type $\theta^{'}$ spent sending each signal $s^{''}\ne s^{'}$ before sending $s^{'}$ for the $j$-th time is no more than the number of periods $\theta^{''}$ spent doing the same. Therefore $\theta^{'}$ sent $s^{'}$ for the $j$-th time no later than $\theta^{''}$ did, that is, $T_{j}^{\theta^{'}}\le T_{j}^{\theta^{''}}$. Finally, if $T_{j}^{\theta^{''}}=\infty$, then evidently $T_{j}^{\theta^{'}}\le\infty=T_{j}^{\theta^{''}}$. It now remains to prove the sequence of statements by induction. \texttt{Statement 1} is the base case. By way of contradiction, suppose $T_{1}^{\theta^{''}}<\infty$ and \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{T_{1}^{\theta^{'}}}(\mathfrak{a})\right)>\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{1}^{\theta^{''}}}(\mathfrak{a})\right) \] for some $s^{''}\ne s^{'}$. Then there is some earliest period $t^{*}<T_{1}^{\theta^{'}}$ at which \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{t^{*}}(\mathfrak{a})\right)>\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{1}^{\theta^{''}}}(\mathfrak{a})\right); \] in period $t^{*}$, type $\theta^{'}$ played $s^{''}$, that is, $\sigma_{\theta^{'}}(y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a}))=s^{''}$. But by construction, by the end of period $t^{*}-1$ type $\theta^{'}$ has sent $s^{''}$ exactly as many times as type $\theta^{''}$ has sent it by the end of period $T_{1}^{\theta^{''}}-1$, so that \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a})\right)=\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{1}^{\theta^{''}}-1}(\mathfrak{a})\right). \] Furthermore, neither type has sent $s^{'}$ yet, so also \[ \#\left(s^{'}\ |\ y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a})\right)=\#\left(s^{'}\ |\ y_{\theta^{''}}^{T_{1}^{\theta^{''}}-1}(\mathfrak{a})\right). \] Therefore, type $\theta^{'}$ holds the same posterior over the receiver's reaction to signals $s^{'}$ and $s^{''}$ at period $t^{*}-1$ as type $\theta^{''}$ does at period $T_{1}^{\theta^{''}}-1$. So\footnote{In the following equation and elsewhere in the proof, we abuse notation and write $I(\theta,s,y)$ to mean $I(\theta,s,g_{1}(\cdot|y),\delta\gamma)$, which is the Gittins index of type $\theta$ for signal $s$ at the posterior obtained from updating the prior $g_{1}$ using history $y$, with effective discount factor $\delta\gamma$.} by Theorem \ref{prop:index}, \begin{equation} s^{'}\in\underset{\hat{s}\in S}{\arg\max}\ I\left(\theta^{''},\hat{s},y_{\theta^{''}}^{T_{1}^{\theta^{''}}-1}(\mathfrak{a})\right)\implies I(\theta^{'},s^{'},y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a}))>I(\theta^{'},s^{''},y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a})).\label{eq:core_dp1} \end{equation} However, by construction of $T_{1}^{\theta^{''}}$, we have $\sigma_{\theta^{''}}\left(y_{\theta^{''}}^{T_{1}^{\theta^{''}}-1}(\mathfrak{a})\right)=s^{'}$.
By the optimality of the Gittins index policy, the antecedent of (\ref{eq:core_dp1}) is satisfied. But then, again by the optimality of the Gittins index policy, the conclusion of (\ref{eq:core_dp1}) contradicts $\sigma_{\theta^{'}}(y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a}))=s^{''}$. This proves \texttt{Statement 1}. Now suppose \texttt{Statement $j$} holds for all $j\le K$. We show \texttt{Statement $K+1$} also holds. If $T_{K+1}^{\theta^{''}}$ is finite, then $T_{K}^{\theta^{''}}$ is also finite. The inductive hypothesis then shows \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{T_{K}^{\theta^{'}}}(\mathfrak{a})\right)\le\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{K}^{\theta^{''}}}(\mathfrak{a})\right) \] for every $s^{''}\ne s^{'}$. By way of contradiction, suppose there is some $s^{''}\ne s^{'}$ such that \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{T_{K+1}^{\theta^{'}}}(\mathfrak{a})\right)>\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{K+1}^{\theta^{''}}}(\mathfrak{a})\right). \] Together with the previous inequality, this implies type $\theta^{'}$ played $s^{''}$ for the $\left[\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{K+1}^{\theta^{''}}}(\mathfrak{a})\right)+1\right]$-th time sometime between playing $s^{'}$ for the $K$-th time and playing $s^{'}$ for the ($K+1$)-th time. That is, if we put \[ t^{*}\coloneqq\min\left\{ t:\#\left(s^{''}\ |\ y_{\theta^{'}}^{t}(\mathfrak{a})\right)>\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{K+1}^{\theta^{''}}}(\mathfrak{a})\right)\right\} , \] then $T_{K}^{\theta^{'}}<t^{*}<T_{K+1}^{\theta^{'}}$. By the construction of $t^{*}$, \[ \#\left(s^{''}\ |\ y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a})\right)=\#\left(s^{''}\ |\ y_{\theta^{''}}^{T_{K+1}^{\theta^{''}}-1}(\mathfrak{a})\right), \] and also \[ \#\left(s^{'}\ |\ y_{\theta^{'}}^{t^{*}-1}(\mathfrak{a})\right)=K=\#\left(s^{'}\ |\ y_{\theta^{''}}^{T_{K+1}^{\theta^{''}}-1}(\mathfrak{a})\right). \] Therefore, type $\theta^{'}$ holds the same posterior over the receiver's reaction to signals $s^{'}$ and $s^{''}$ at period $t^{*}-1$ as type $\theta^{''}$ does at period $T_{K+1}^{\theta^{''}}-1$. As in the base case, we can invoke Theorem \ref{prop:index} to show that it is impossible for $\theta^{'}$ to play $s^{''}$ in period $t^{*}$ while $\theta^{''}$ plays $s^{'}$ in period $T_{K+1}^{\theta^{''}}$. By induction, \texttt{Statement $j$} is true for every $j$. \end{proof} \section{The Aggregate Receiver Response \label{sec:receiver_side}} Each newborn receiver thinks he is facing a fixed but unknown aggregate sender behavior strategy $\pi_{1}$, with belief over $\pi_{1}$ given by his regular prior $g_{2}$. He thinks that each period a sender type $\theta$ is drawn according to $\lambda$, and that this type $\theta$ then sends a signal according to $\pi_{1}(\cdot|\theta)$. To maximize his expected utility, the receiver must learn to infer the type of the sender from the signal, using his personal experience. Unlike the senders, whose optimal policies may involve experimentation, the receivers face a purely passive learning problem. Since the receiver observes the same information in a match regardless of his own action, the optimal policy $\sigma_{2}(y_{2})$ simply best responds to the posterior belief induced by the history $y_{2}$. \begin{defn} The \emph{one-period-forward map for receivers} $f_{2}:\Delta(Y_{2})\times\Pi_{1}\to\Delta(Y_{2})$ is \[ f_{2}[\psi_{2},\pi_{1}](y_{2},(\theta,s)):=\psi_{2}(y_{2})\cdot\gamma\cdot\lambda(\theta)\cdot\pi_{1}(s|\theta) \] and $f_{2}[\psi_{2},\pi_{1}](\emptyset):=1-\gamma$.
\end{defn} As with the one-period-forward maps $f_{\theta}$ for senders, $f_{2}[\psi_{2},\pi_{1}]$ describes the new distribution over receiver histories tomorrow if the distribution over histories in the receiver population today is $\psi_{2}$ and the sender population's aggregate play is $\pi_{1}$. We write $\psi_{2}^{\pi_{1}}:=\lim_{T\to\infty}f_{2}^{T}[\psi_{2},\pi_{1}]$ for the long-run distribution over $Y_{2}$ induced by fixing the sender population's play at $\pi_{1}$. \begin{defn} The \emph{aggregate receiver response (ARR)} $\mathscr{R}_{2}:\Pi_{1}\to\Pi_{2}$ is \[ \mathscr{R}_{2}[\pi_{1}](a|s):=\psi_{2}^{\pi_{1}}\left(y_{2}:\sigma_{2}(y_{2})(s)=a\right). \] \end{defn} We are interested in the extent to which $\mathscr{R}_{2}[\pi_{1}]$ responds to inequalities of the form $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$ embedded in $\pi_{1}$, such as those generated when $\theta^{'}\succ_{s^{'}}\theta^{''}$. To this end, for any two types $\theta^{'},\theta^{''}$ we define $P_{\theta^{'}\triangleright\theta^{''}}$ as the set of beliefs whose odds ratio of $\theta^{'}$ to $\theta^{''}$ weakly exceeds the prior odds ratio, that is, \begin{equation} P_{\theta^{'}\triangleright\theta^{''}}:=\left\{ p\in\Delta(\Theta):\frac{p(\theta^{''})}{p(\theta^{'})}\le\frac{\lambda(\theta^{''})}{\lambda(\theta^{'})}\right\} .\label{eq:odds_ratio_P} \end{equation} If $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$, $\pi_{1}(s^{'}|\theta^{'})>0$, and the receiver knows $\pi_{1}$, then the receiver's posterior belief about the sender's type after observing $s^{'}$ falls in the set $P_{\theta^{'}\triangleright\theta^{''}}$. The next lemma shows that, provided additionally that $\pi_{1}(s^{'}|\theta^{'})$ is ``large enough'' and the receivers are sufficiently long-lived, $\mathscr{R}_{2}[\pi_{1}]$ will best respond to $P_{\theta^{'}\triangleright\theta^{''}}$ with high probability when $s^{'}$ is sent. For $P\subseteq\Delta(\Theta)$, we let\footnote{We abuse notation here and write $u_{2}(p,s,a^{'})$ to mean $\sum_{\theta\in\Theta}u_{2}(\theta,s,a^{'})\cdot p(\theta)$.} $\text{BR}(P,s)\coloneqq\bigcup_{p\in P}\left(\underset{a'\in A}{\arg\max}\ u_{2}(p,s,a')\right)$; this is the set of best responses to $s$ supported by some belief in $P$. \begin{lem} \label{prop:receiver_learning}Let regular prior $g_{2}$, types $\theta^{'},\theta^{''}$, and signal $s^{'}$ be fixed. For every $\epsilon>0$, there exist $C>0$ and $\underline{\gamma}<1$ such that for any $0\le\delta<1$, $\underline{\gamma}\le\gamma<1$, and $n\ge1$, if $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$ and $\pi_{1}(s^{'}|\theta^{'})\ge(1-\gamma)nC$, then \[ \mathscr{R}_{2}[\pi_{1}](\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})\ |\ s^{'})\ge1-\frac{1}{n}-\epsilon. \] \end{lem} This lemma gives a lower bound on the probability that $\mathscr{R}_{2}[\pi_{1}]$ best responds to $P_{\theta^{'}\triangleright\theta^{''}}$ after signal $s^{'}$. Note that the bound only applies for survival probabilities $\gamma$ that are close enough to 1, because when receivers have short lifetimes they need not get enough data to outweigh their prior.
Note also that more of the receivers learn the compatibility condition when $\pi_{1}(s^{'}|\theta^{'})$ is large compared to $(1-\gamma)$, and almost all of them do in the limit $n\to\infty$. To interpret the condition $\pi_{1}(s^{'}|\theta^{'})\ge(1-\gamma)nC$, recall that an agent with survival chance $\gamma$ has an expected lifespan of $\frac{1}{1-\gamma}$. If $\pi_{1}$ describes the aggregate play in the sender population, then on average a type $\theta^{'}$ plays $s^{'}$ for $\frac{1}{1-\gamma}\cdot\pi_{1}(s^{'}|\theta^{'})$ periods in her life. So when a typical type $\theta^{'}$ plays $s^{'}$ for $nC$ periods, this lemma provides a bound of $1-\frac{1}{n}-\epsilon$. It is important that this hypothesis does not require that $\pi_{1}(s^{'}|\theta^{'})$ be bounded away from 0 as $\gamma\to1$: although the absolute number of periods that $\theta^{'}$ experiments with $s^{'}$ might be large, the fraction of her life spent on such experiments could still be negligible if $n$ grows slowly relative to $\gamma$. The proof relies on Theorem 2 of \citet*{fudenberg_he_imhof_2016} about updating Bayesian posteriors after rare events. Specialized to our setting and notation, the result says: \begin{quote} \emph{Let regular prior $g_{2}$ and signal $s^{'}$ be fixed. Let $0<\epsilon,h<1$. There exists $C$ such that whenever $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$ and $t\cdot\pi_{1}(s^{'}|\theta^{'})\ge C$, we get} \emph{\[ \psi_{2}^{\pi_{1}}\left(y_{2}\in Y_{2}[t]:\frac{p(\theta^{''}|s^{'};y_{2})}{p(\theta^{'}|s^{'};y_{2})}\le\frac{1}{1-h}\cdot\frac{\lambda(\theta^{''})}{\lambda(\theta^{'})}\right)/\psi_{2}^{\pi_{1}}(Y_{2}[t])\ge1-\epsilon \]} \emph{where $p(\theta|s;y_{2})$ refers to the conditional probability that a sender of $s$ is type $\theta$ according to the posterior belief induced by history $y_{2}$.} \end{quote} That is, if at age $t$ a receiver would have observed in expectation $C$ instances of type $\theta^{'}$ sending $s^{'}$, then the beliefs of at least a $1-\epsilon$ fraction of age-$t$ receivers (essentially) fall in $P_{\theta^{'}\triangleright\theta^{''}}$ after seeing the signal $s^{'}$. The proof of Lemma \ref{prop:receiver_learning} calculates what fraction of receivers meets this ``age requirement.'' \begin{proof} We will actually show the following stronger result: Let regular prior $g_{2}$, types $\theta^{'},\theta^{''}$, and signal $s^{'}$ be fixed. For every $\epsilon>0$, there exists $C>0$ such that for any $0\le\delta,\gamma<1$ and $n\ge1$, if $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$ and $\pi_{1}(s^{'}|\theta^{'})\ge(1-\gamma)nC$, then \[ \mathscr{R}_{2}[\pi_{1}](\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})\ |\ s^{'})\ge\gamma^{\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }-\epsilon. \] The lemma follows because we may pick a large enough $\underline{\gamma}<1$ so that $\gamma^{\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }>1-\frac{1}{n}$ for all $n\ge1$ and $\gamma\ge\underline{\gamma}$. For each $0<h<1$, define $P_{\theta^{'}\triangleright\theta^{''}}^{h}:=\left\{ p\in\Delta(\Theta):\frac{p(\theta^{''})}{p(\theta^{'})}\le\frac{1}{1-h}\cdot\frac{\lambda(\theta^{''})}{\lambda(\theta^{'})}\right\} $, with the convention that $\frac{0}{0}=0$. Then each $P_{\theta^{'}\triangleright\theta^{''}}^{h}$, as well as $P_{\theta^{'}\triangleright\theta^{''}}$ itself, is a closed subset of $\Delta(\Theta)$.
Also, $P_{\theta^{'}\triangleright\theta^{''}}^{h}\to P_{\theta^{'}\triangleright\theta^{''}}$ as $h\to0$. Fix an action $a\in A$. If for all $\bar{h}>0$ there exists some $0<h\le\bar{h}$ such that $a\in\text{BR}(P_{\theta^{'}\triangleright\theta^{''}}^{h},s^{'})$, then $a\in\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})$ as well, because the best-response correspondence has a closed graph. This means that for each $a\notin\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})$, there exists $\bar{h}_{a}>0$ such that $a\notin\text{BR}(P_{\theta^{'}\triangleright\theta^{''}}^{h},s^{'})$ whenever $0<h\le\bar{h}_{a}$. Let $\bar{h}:=\min_{a\notin\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})}\bar{h}_{a}$. Let $\epsilon>0$ be given and apply Theorem 2 of \citet*{fudenberg_he_imhof_2016} with $\epsilon$ and $\bar{h}$ to find the constant $C$. When $\pi_{1}(s^{'}|\theta^{'})\ge\pi_{1}(s^{'}|\theta^{''})$ and $\pi_{1}(s^{'}|\theta^{'})\ge(1-\gamma)nC$, consider an age $t$ receiver for $t\ge\left\lceil \frac{1}{n(1-\gamma)}\right\rceil $. Since $t\cdot\pi_{1}(s^{'}|\theta^{'})\ge C$, Theorem 2 of \citet*{fudenberg_he_imhof_2016} implies that with probability at least $1-\epsilon$ this receiver's belief about the types who send $s^{'}$ falls in $P_{\theta^{'}\triangleright\theta^{''}}^{\bar{h}}$. By construction of $\bar{h}$, $\text{BR}(P_{\theta^{'}\triangleright\theta^{''}}^{\bar{h}},s^{'})=\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})$, so at least a $1-\epsilon$ fraction of age-$t$ receivers have a history $y_{2}$ with $\sigma_{2}(y_{2})(s^{'})\in\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})$. Since agents survive between periods with probability $\gamma$, the mass of the receiver population aged $\left\lceil \frac{1}{n(1-\gamma)}\right\rceil $ or older is $(1-\gamma)\cdot\sum_{t=\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }^{\infty}\gamma^{t}=\gamma^{\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }$. This shows \[ \mathscr{R}_{2}[\pi_{1}](\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})\ |\ s^{'})\ge\gamma^{\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }\cdot(1-\epsilon)\ge\gamma^{\left\lceil \frac{1}{n(1-\gamma)}\right\rceil }-\epsilon, \] as desired. \end{proof} \section{Steady State Implications for Aggregate Play\label{sec:two_sided}} Sections \ref{sec:sender_side} and \ref{sec:receiver_side} have separately examined the senders' and receivers' learning problems. In this section, we turn to the two-sided learning problem. We will first define steady state strategy profiles, which are signalling game strategy profiles $\pi^{*}$ where $\pi_{1}^{*}$ and $\pi_{2}^{*}$ are mutual aggregate responses, and then characterize the steady states using our previous results. \subsection{Steady states, $\delta$-stability, and patient stability} We begin by defining steady states using the one-period-forward maps $f_{\theta}$ and $f_{2}$ introduced in Sections \ref{sec:sender_side} and \ref{sec:receiver_side}. \begin{defn} A state $\psi^{*}$ is a \emph{steady state} if $\psi_{\theta}^{*}=f_{\theta}[\psi_{\theta}^{*},\sigma_{2}(\psi_{2}^{*})]$ for every $\theta$ and $\psi_{2}^{*}=f_{2}[\psi_{2}^{*},(\sigma_{\theta}(\psi_{\theta}^{*}))_{\theta\in\Theta}]$. The set of all steady states for regular prior $g$ and $0\le\delta,\gamma<1$ is denoted $\Psi^{*}(g,\delta,\gamma)$. \end{defn} The strategy profiles associated with steady states represent time-invariant distributions of play.
\begin{defn} For regular prior $g$ and $0\le\delta,\gamma<1$, the set of steady state strategy profiles is $\Pi^{*}(g,\delta,\gamma):=\{\sigma(\psi^{*}):\psi^{*}\in\Psi^{*}(g,\delta,\gamma)\}$. \end{defn} We now give an equivalent characterization of $\Pi^{*}(g,\delta,\gamma)$ in terms of $\mathscr{R}_{1}$ and $\mathscr{R}_{2}$. The proof is in Appendix \ref{subsec:pf_mutual_AR}. \begin{prop} \label{prop:mutual_AR} $\pi^{*}\in\Pi^{*}(g,\delta,\gamma)$ if and only if $\mathscr{R}_{1}^{g,\delta,\gamma}[\pi_{2}^{*}]=\pi_{1}^{*}$ and $\mathscr{R}_{2}^{g,\delta,\gamma}[\pi_{1}^{*}]=\pi_{2}^{*}$. \end{prop} (Here we make the dependence of $\mathscr{R}_{1}$ and $\mathscr{R}_{2}$ on the parameters $(g,\delta,\gamma)$ explicit to avoid confusion.) Analogous to a Nash equilibrium as a pair of mutual best replies, a steady state strategy profile is a pair of aggregate distributions, each of which is an aggregate reply to the other. The next proposition guarantees that there always exists at least one steady state strategy profile. \begin{prop} \label{thm:existence}$\Pi^{*}(g,\delta,\gamma)$ is non-empty and compact in the norm topology. \end{prop} The proof is in the Online Appendix. We establish that $\Psi^{*}(g,\delta,\gamma)$ is non-empty and compact in the $\ell_{1}$ norm on the space of distributions, which immediately implies the same properties for $\Pi^{*}(g,\delta,\gamma)$. Intuitively, if lifetimes are finite, then the set of histories is finite, so the set of states is finite-dimensional; in that case the one-period-forward map $f$ is continuous, and the usual version of Brouwer's fixed-point theorem applies. With geometric lifetimes, very old agents are rare, so truncating the agents' lifetimes at some large $T$ yields a good approximation. Instead of using these approximations directly, our proof shows that under the $\ell_{1}$ norm $f$ is continuous, and that (because of the geometric lifetimes) the feasible states form a compact locally convex Hausdorff space. This lets us appeal to a fixed-point theorem for that domain. Note that in a steady state, the information lost when agents exit the system exactly balances the information agents gain through learning. We now focus on the iterated limit \[ \text{``}\lim_{\delta\to1}\lim_{\gamma\to1}\Pi^{*}(g,\delta,\gamma)\text{''}, \] that is, the set of steady state strategy profiles for $\delta$ and $\gamma$ near 1, where we first send $\gamma$ to 1 holding $\delta$ fixed, and then send $\delta$ to 1. \begin{defn} For each $0\le\delta<1$, a strategy profile $\pi^{*}$ is \emph{$\delta$-stable under $g$} if there is a sequence $\gamma_{k}\to1$ and an associated sequence of steady state strategy profiles $\pi^{(k)}\in\Pi^{*}(g,\delta,\gamma_{k})$ such that $\pi^{(k)}\to\pi^{*}$. A strategy profile $\pi^{*}$ is \emph{patiently stable under $g$} if there is a sequence $\delta_{k}\to1$ and an associated sequence of strategy profiles $\pi^{(k)}$, where each $\pi^{(k)}$ is $\delta_{k}$-stable under $g$, such that $\pi^{(k)}\to\pi^{*}$. A strategy profile $\pi^{*}$ is \emph{patiently stable} if it is patiently stable under some regular prior $g$. \end{defn} Heuristically speaking, patiently stable strategy profiles are the limits of learning outcomes when agents become infinitely patient (so that senders are willing to perform many experiments) and long-lived (so that agents on both sides can learn enough for their data to outweigh their priors).
As in past work on steady state learning \citep{fudenberg_steady_1993,fudenberg_superstition_2006}, the reason for this order of limits is to ensure that most agents have enough data that they stop experimenting and play myopic best responses.\footnote{If agents did not eventually stop experimenting as they age, then even if most agents have approximately correct beliefs, aggregate play need not be close to a Nash equilibrium because most agents would not be playing a (static) best response to their beliefs.} We do not know whether our results extend to the other order of limits; we explain the issues involved below, after sketching the intuition for Proposition \ref{thm:nash}. \subsection{Preliminary results on $\delta$-stability and patient stability} When $\gamma$ is near 1, agents correctly learn the consequences of the strategies they play frequently. But for a fixed patience level they may choose to rarely or never experiment, and so can maintain incorrect beliefs about the consequences of strategies that they do not play. The next result states this formally; it parallels \citet{fudenberg_steady_1993}'s result that $\delta$-stable strategy profiles are self-confirming equilibria. \begin{prop} \label{thm:fixed_delta}Suppose strategy profile $\pi^{*}$ is $\delta$-stable under a regular prior. Then for every type $\theta$ and signal $s$ with $\pi_{1}^{*}(s|\theta)>0$, $s$ is a best response to some $\pi_{2}\in\Pi_{2}$ for type $\theta$, and furthermore $\pi_{2}(\cdot|s)=\pi_{2}^{*}(\cdot|s)$. Also, for any signal $s$ such that $\pi_{1}^{*}(s|\theta)>0$ for at least one type $\theta$, $\pi_{2}^{*}(\cdot|s)$ is supported on pure best responses to the Bayesian belief generated by $\pi_{1}^{*}$ after $s$. \end{prop} We prove this result in the Online Appendix. The idea of the proof is the following: If signal $s$ has positive probability in the limit, then it is played many times by the senders, so the receivers eventually learn the correct posterior distribution for $\theta$ given $s.$ As the receivers have no incentive to experiment, their actions after $s$ will be a best response to this correct posterior belief. For the senders, suppose $\pi_{1}^{*}(s|\theta)>0$, but $s$ is not a best response for type $\theta$ to any $\pi_{2}\in\Pi_{2}$ that matches $\pi_{2}^{*}(\cdot|s)$. Then there exists $\xi>0$ such that $s$ is not a $\xi$-best response to any strategy that differs by no more than $\xi$ from $\pi_{2}^{*}(\cdot|s)$ after $s$. Yet, by the law of large numbers and the \citet{diaconis_uniform_1990} result that with non-doctrinaire priors the posteriors converge to the empirical distribution at a rate that depends only on the sample size, if a sender has played $s$ many times, then with high probability her belief about $\pi_{2}^{*}(\cdot|s)$ is $\xi$-close to $\pi_{2}^{*}(\cdot|s)$. So when a sender who has played $s$ many times chooses to play it again, she is not doing so to maximize her current period's expected payoff. This implies that type $\theta$ has a persistent option value for signal $s$, which contradicts the fact that this option value must converge to 0 with the sample size. \begin{rem} This proposition says that each sender type is playing a best response to a belief about the receiver's play that is correct on the equilibrium path, and that the receivers are playing an aggregate best response to the aggregate play of the senders. 
Thus the $\delta$-stable outcomes are a version of self-confirming equilibrium where different types of sender are allowed to have different beliefs.\footnote{\citet*{dekel_learning_2004} define type-heterogeneous self-confirming equilibrium in static Bayesian games. To extend their definition to signalling games, we can define the ``signal functions'' $y_{i}(a,\theta)$ from that paper to respect the extensive form of the game. See also \citet{fudenberg_kamada_2016}. } \end{rem} \begin{example} \noindent Consider the following game: \noindent \begin{center} \includegraphics[scale=0.4]{signal_selfconfirming-eps-converted-to.pdf} \noindent \end{center} Note the receiver is indifferent between all responses. Fix any regular prior $g_{2}$ for the receiver and let the sender's prior $g_{1}^{(s^{'})}$ be given by a Dirichlet distribution with weights 1 and 3 on $a^{'}$ and $a^{''}$ respectively. Fix any regular prior $g_{1}^{(s^{''})}$. We claim that it is $\delta$-stable when $\delta=0$ for both types of senders to play $s^{''}$ and for the receiver to play $a^{'}$ after every signal, which is a type-heterogeneous rationalizable self-confirming equilibrium. However, the behavior of ``pooling on $s^{''}$'' cannot occur even in the usual self-confirming equilibrium, where both types of the sender must hold the same beliefs about the receiver's response to $s^{'}$. \emph{A fortiori}, this pooling behavior cannot occur in a Nash equilibrium. To establish this claim, note that since $\delta=0$ each sender plays a myopically optimal signal after every history. For any $\gamma$, there is a steady state where the receivers' policy responds to every signal with $a^{'}$ after every history, type $\theta^{''}$ senders play $s^{''}$ after every history and never update their prior belief about how receivers react to $s^{'}$, and type $\theta'$ senders with fewer than 6 periods of experience play $s^{'}$ but switch to playing $s^{''}$ forever starting at age 7. The behavior of the $\theta^{'}$ agents comes from the fact that after $k$ periods of playing $s^{'}$ and seeing a response of $a^{'}$ every period, the sender's expected payoff from playing $s^{'}$ next period is \[ \frac{1+k}{4+k}(-1)+\frac{3}{4+k}(2)=\frac{5-k}{4+k}. \] This expression is positive when $0\le k\le4$, zero at $k=5$ (so that playing $s^{'}$ is still myopically optimal), and negative when $k=6$. The fraction of type $\theta^{'}$ aged 6 and below approaches 0 as $\gamma\to1$, hence we have constructed a sequence of steady state strategy profiles converging to the strategy profile where the two types of senders both play $s^{''}$. This example illustrates that even though all types of senders start with the same prior $g_{1}$, their learning is endogenously determined by their play, which is in turn determined by their payoff structures. Since the two different types of senders play differently, their beliefs regarding how the receiver will react to $s^{'}$ eventually diverge. \hfill{} $\blacklozenge$ \end{example} We will now show that only Nash equilibrium profiles can be steady-state outcomes as $\delta$ tends to 1. Moreover, this limit also rules out strategy profiles in which the sender's strategy can only be supported by the belief that the receiver would play a dominated action in response to some of the unsent signals. 
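As an aside, the switching-age arithmetic in the example above is easy to verify mechanically. Here is a minimal sketch (assuming, as the example implicitly does, that $s^{''}$ gives type $\theta^{'}$ a safe payoff of 0; the payoffs $-1$ and $2$ and the Dirichlet weights $(1,3)$ are taken from the example):
\begin{verbatim}
# Expected payoff of playing s' for a type theta' sender who has seen
# the response a' in each of her k previous plays of s', under a
# Dirichlet(1, 3) prior over the receiver's responses (a', a'').
def expected_payoff(k):
    p_a1 = (1 + k) / (4 + k)       # posterior mean probability of a'
    p_a2 = 3 / (4 + k)             # posterior mean probability of a''
    return p_a1 * (-1) + p_a2 * 2  # simplifies to (5 - k) / (4 + k)

for k in range(8):
    print(k, expected_payoff(k))
# Positive for k <= 4, zero at k = 5, negative from k = 6 onward,
# so a myopic (delta = 0) sender stops playing s' at age 7.
\end{verbatim}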
\begin{defn} \label{def:PBE_hetero} In a signalling game, a \emph{perfect Bayesian equilibrium with heterogeneous off-path beliefs }is a strategy profile $(\pi_{1}^{*},\pi_{2}^{*})$ such that: \begin{itemize} \item For each $\theta\in\Theta,$ $u_{1}(\theta;\pi^{*})=\max_{s\in S}u_{1}(\theta,s,\pi_{2}^{*}(\cdot|s))$. \item For each on-path signal $s$, $u_{2}(p^{*}(\cdot|s),s,\pi_{2}^{*}(\cdot|s))=\underset{\hat{a}\in A}{\max}\ u_{2}(p^{*}(\cdot|s),s,\hat{a})$. \item For each off-path signal $s$ and each $a\in A$ with $\pi_{2}^{*}(a|s)>0$, there exists a belief $p\in\Delta(\Theta)$ such that $u_{2}(p,s,a)=\underset{\hat{a}\in A}{\max}\ u_{2}(p,s,\hat{a})$. \end{itemize} Here $u_{1}(\theta;\pi^{*})$ refers to type $\theta$'s payoff under $\pi^{*},$ and $p^{*}(\cdot|s)$ is the Bayesian posterior belief about the sender's type after signal $s$, under strategy $\pi_{1}^{*}$. \end{defn} The first two conditions imply that the profile is a Nash equilibrium. The third condition resembles that of perfect Bayesian equilibrium, but is somewhat weaker, as it allows the receiver's play after an off-path signal $s$ to be a mixture over several actions, each of which is a best response to a different belief about the sender's type. This means $\pi_{2}^{*}(\cdot|s)\in\Delta(\text{BR}(\Delta(\Theta),s))$, but $\pi_{2}^{*}(\cdot|s)$ itself may not be a best response to any single belief about the sender's type. \begin{prop} \label{thm:nash} If strategy profile $\pi^{*}$ is patiently stable, then it is a perfect Bayesian equilibrium with heterogeneous off-path beliefs. \end{prop} \begin{proof} In the Online Appendix, we prove that if $\pi^{*}$ is patiently stable, then it is a Nash equilibrium. We now explain why a patiently stable profile $\pi^{*}$ must satisfy the third condition in Definition \ref{def:PBE_hetero}. After observing any history $y_{2}$, a receiver who started with a regular prior thinks every signal has positive probability in his next match. So, his optimal policy prescribes for each signal $s$ a best response to that receiver's posterior belief about the sender's type upon seeing signal $s$ after history $y_{2}$. For any regular prior $g$, $0\le\delta,\gamma<1$, and any sender aggregate play $\pi_{1}$, we thus deduce that $\mathscr{R}_{2}^{g,\delta,\gamma}[\pi_{1}](\cdot|s)$ is entirely supported on $\text{BR}(\Delta(\Theta),s)$. This means the same is true of the aggregate receiver response in every steady state, and hence in every patiently stable strategy profile. \end{proof} The proof that patiently stable strategy profiles are Nash equilibria follows the proof strategy of \citet{fudenberg_steady_1993}, which derived a contradiction via excess option values. We provide a proof sketch here: Suppose $\pi^{*}$ is patiently stable but not a Nash equilibrium. From Proposition \ref{thm:fixed_delta}, the receiver strategy is a best response to the aggregate strategy of the senders, and the senders optimize given correct beliefs about the responses to on-path signals. So there must be an unsent signal $s^{'}$ that would be a profitable deviation for some type $\theta'$. 
Because priors are non-doctrinaire, they assign a non-negligible probability to receiver responses that would make $s^{'}$ a better choice for type $\theta'$ than the signals she sends with positive probability under $\pi^{*}$, so when type $\theta^{'}$ is very patient she should perceive a persistent option value to experimenting with $s^{'}.$ But this contradicts the fact that the option values evaluated at sufficiently long histories must go to 0.\footnote{The option values for the receivers are all identically equal to 0, as they get the same information regardless of their play.} In \citet{fudenberg_steady_1993}, this argument relies on the finite lifetime of the agents only to ensure that ``almost all'' histories are long enough, by picking a large enough lifetime. We can achieve the analogous effect in our geometric-lifetime model by picking $\gamma$ close to 1. Our proof uses the fact that if $\delta$ is fixed and $\gamma\to1,$ then the number of experiments that a sender needs to exhaust her option value is negligible relative to her expected lifespan, so that most senders are playing approximate best responses to their current beliefs. The same conclusion does not hold if we fix $\gamma$ and let $\delta\to1,$ even though the optimal sender policy only depends on the product $\delta\gamma$. This is because for a fixed sender policy the induced distribution on sender play depends on $\gamma$ but not on $\delta.$ Thus our results strictly speaking only apply to the case where $\gamma$ goes to one much more quickly than $\delta$ does. \subsection{Patient stability implies the compatibility criterion } We will now prove our main result: patient stability selects a strict subset of the Nash equilibria, namely those that satisfy the \emph{compatibility criterion}. \begin{defn} \label{def:J}For a fixed strategy profile $\pi^{*}$, let $u_{1}(\theta;\pi^{*})$ denote the payoff to type $\theta$ under $\pi^{*},$ and let \begin{eqnarray*} J(s,\pi^{*}) & \coloneqq & \left\{ \theta\in\Theta:\underset{a\in A}{\max}\ u_{1}(\theta,s,a)>u_{1}(\theta;\pi^{*})\right\} \end{eqnarray*} be the set of types for which \emph{some} response to signal $s$ is better than their payoff under $\pi^{*}.$ \end{defn} Note that the reverse strict inequality would mean that $s$ is ``equilibrium dominated'' for $\theta$ in the sense of \citet{cho_signaling_1987}. \begin{defn} The \emph{admissible beliefs at signal $s$} \emph{under profile} $\pi^{*}$ are \[ P(s,\pi^{*})\coloneqq\bigcap\left\{ P_{\theta^{'}\triangleright\theta^{''}}:\theta^{'}\succ_{s}\theta^{''}\text{ and }\theta^{'}\in J(s,\pi^{*})\right\} \] where $P_{\theta^{'}\triangleright\theta^{''}}$ is defined in Equation (\ref{eq:odds_ratio_P}). \end{defn} That is, $P(s,\pi^{*})$ is the joint belief restriction imposed by the family of $P_{\theta^{'}\triangleright\theta^{''}}$ for pairs $(\theta^{'},\theta^{''})$ satisfying two conditions: $\theta^{'}$ is more type-compatible with $s$ than $\theta^{''},$ and furthermore the more compatible type $\theta^{'}$ belongs to $J(s,\pi^{*})$. If there are no pairs $(\theta^{'},\theta^{''})$ satisfying these two conditions, then (by the convention of intersection over no elements) $P(s,\pi^{*})$ is defined as $\Delta(\Theta)$. In any signalling game and for any $\pi^{*}$, the set $P(s,\pi^{*})$ is always non-empty because it always contains the prior $\lambda$. \begin{defn} Strategy profile $\pi^{*}$ \emph{satisfies the compatibility criterion }if $\pi_{2}^{*}(\cdot|s)\in\Delta(\text{BR}(P(s,\pi^{*}),s))$ for every $s$. 
\end{defn} Like divine equilibrium, but unlike the Intuitive Criterion or \citet{cho_signaling_1987}'s $D1$ criterion, the compatibility criterion says only that some signals should not increase the relative probability of ``implausible'' types, as opposed to requiring that these types have probability 0. One might imagine a version of the compatibility criterion where the belief restriction $P_{\theta^{'}\triangleright\theta^{''}}$ applies whenever $\theta^{'}\succ_{s}\theta^{''}$. To understand why we require the additional condition that $\theta^{'}\in J(s,\pi^{*})$ in the definition of admissible beliefs, recall that Lemma \ref{prop:receiver_learning} only gives a learning guarantee in the receiver's problem when $\pi_{1}(s|\theta^{'})$ is ``large enough'' for the more type-compatible $\theta^{'}$. In the extreme case where $s$ is a strictly dominated signal for $\theta^{'}$, she will never play it during learning. If $s$ is only equilibrium dominated for $\theta^{'}$, then $\theta^{'}$ may still not experiment very much with $s$. On the other hand, the next lemma provides a lower bound on the frequency with which $\theta^{'}$ experiments with $s^{'}$ when $\theta^{'}\in J(s^{'},\pi^{*})$ and $\delta$ and $\gamma$ are close to 1. \begin{lem} \label{lem:nondom_message} Fix a regular prior $g$ and a strategy profile $\pi^{*}$ where for some type $\theta^{'}$ and signal $s^{'}$, $\theta^{'}\in J(s^{'},\pi^{*})$. There exist a number $\epsilon$ and functions $N\mapsto\delta(N)$ and $(N,\delta)\mapsto\gamma(N,\delta)$, all valued in $(0,1),$ such that whenever: \begin{itemize} \item $\delta\ge\delta(N)$, $\gamma\ge\gamma(N,\delta)$ \item $\pi\in\Pi^{*}(g,\delta,\gamma)$ \item $\pi$ is no further away than $\epsilon$ from $\pi^{*}$ in $\ell_{1}$ norm \end{itemize} we have $\pi_{1}(s^{'}|\theta^{'})\ge(1-\gamma)\cdot N.$ \end{lem} By the $\ell_{1}$ norm we mean the norm that induces the metric \begin{equation} d(\pi,\pi^{*})=\sum_{\theta\in\Theta}\sum_{s\in S}|\pi_{1}(s|\theta)-\pi_{1}^{*}(s|\theta)|+\sum_{s\in S}\sum_{a\in A}|\pi_{2}(a|s)-\pi_{2}^{*}(a|s)|.\label{eq:l1} \end{equation} Note that since $\pi_{1}(s|\theta^{'})$ is between 0 and 1, we know that $(1-\gamma(N,\delta))\cdot N<1$ for each $N$. The proof of this lemma is in Section OA5 of the Online Appendix. To gain intuition for it, suppose that not only is $s^{'}$ equilibrium undominated in $\pi^{*},$ but furthermore $s^{'}$ can lead to the highest signalling game payoff for type $\theta^{'}$ under some receiver response $a^{'}$. Because the prior is non-doctrinaire, the Gittins index of the signal $s^{'}$ in the learning problem approaches its highest possible payoff in the stage game as the sender becomes infinitely patient. Therefore, for every $N\in\mathbb{N}$, when $\gamma$ and $\delta$ are close enough to 1, a newborn type $\theta^{'}$ will play $s^{'}$ in each of the first $N$ periods of her life, regardless of what responses she receives during that time. These $N$ periods account for roughly a fraction $(1-\gamma)\cdot N$ of her life, proving the lemma in this special case. It turns out that even if $s^{'}$ does not lead to the highest potential payoff in the signalling game, long-lived players will have a good estimate of their steady state payoff. So, type $\theta'$ will still play any $s^{'}$ that is equilibrium undominated in strategy profile $\pi^{*}$ at least $N$ times in any steady state that is sufficiently close to $\pi^{*}$, though these $N$ periods may not occur at the beginning of her life. 
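For concreteness, here is a minimal sketch of the two objects just introduced, the $\ell_{1}$ distance in (\ref{eq:l1}) and the set $J(s,\pi^{*})$ of Definition \ref{def:J}, with container conventions of our own choosing (nested dictionaries of probabilities and payoffs, not anything specified in the text):
\begin{verbatim}
# pi = (pi_1, pi_2) with pi_1[theta][s] and pi_2[s][a] probabilities.
def l1_distance(pi, pi_star):
    (p1, p2), (q1, q2) = pi, pi_star
    d = sum(abs(p1[th][s] - q1[th][s]) for th in p1 for s in p1[th])
    d += sum(abs(p2[s][a] - q2[s][a]) for s in p2 for a in p2[s])
    return d

# u1[theta][s][a] = sender payoff; eq_payoff[theta] = u_1(theta; pi*).
# J(s, pi*) collects the types for which some response to s beats
# their payoff under pi*.
def J(s, u1, eq_payoff):
    return {th for th in u1 if max(u1[th][s].values()) > eq_payoff[th]}
\end{verbatim}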
\begin{thm} \label{thm:PS_is_compatible} Every patiently stable strategy profile $\pi^{*}$ satisfies the compatibility criterion. \end{thm} The proof combines Lemma \ref{prop:compatible_comonotonic}, Lemma \ref{prop:receiver_learning}, and Lemma \ref{lem:nondom_message}. Lemma \ref{prop:compatible_comonotonic} shows that types that are more compatible with $s^{'}$ play it more often. Lemma \ref{lem:nondom_message} says that types for whom $s^{'}$ is not equilibrium dominated will play it ``many times.'' Finally, Lemma \ref{prop:receiver_learning} shows that the ``many times'' here is sufficiently large that most receivers correctly believe that more compatible types play $s^{'}$ more than less compatible types do, so their posterior odds ratio for more versus less compatible types exceeds the prior ratio. \begin{proof} Suppose $\pi^{*}$ is patiently stable under regular prior $g$. Fix a signal $s^{'}$ and an action $\hat{a}\notin\text{BR}(P(s^{'},\pi^{*}),s^{'})$. Let $h>0$ be given. We will show that $\pi_{2}^{*}(\hat{a}|s^{'})<h$. Since the choices of $s^{'}$, $\hat{a}$, and $h>0$ are arbitrary, we will have proven the theorem. \textbf{Step 1}: Setting some constants. In the statement of Lemma \ref{prop:receiver_learning}, for each pair $\theta^{'},\theta^{''}$ such that $\theta^{'}\succ_{s^{'}}\theta^{''}\text{ and }\theta^{'}\in J(s^{'},\pi^{*})$, put $\epsilon=\frac{h}{2|\Theta|^{2}}$ and find $C_{\theta^{'},\theta^{''}}$ and $\underline{\gamma}_{\theta^{'},\theta^{''}}$ so that the result holds. Let $C$ be the maximum of all such $C_{\theta^{'},\theta^{''}}$ and $\underline{\gamma}$ be the maximum of all such $\underline{\gamma}_{\theta^{'},\theta^{''}}$. Also find $\underline{n}\ge1$ so that \begin{equation} 1-\frac{1}{\underline{n}}>1-\frac{h}{2|\Theta|^{2}}.\label{eq:frac_old} \end{equation} In the statement of Lemma \ref{lem:nondom_message}, for each $\theta^{'}$ such that $\theta^{'}\in J(s^{'},\pi^{*})$ and $\theta^{'}\succ_{s^{'}}\theta^{''}$ for at least one $\theta^{''}$, find $\epsilon_{\theta^{'}},\delta_{\theta^{'}}(\underline{n}C)$, $\gamma_{\theta^{'}}(\underline{n}C,\delta)$ so that the lemma holds. Write $\epsilon^{*}>0$ as the minimum of all such $\epsilon_{\theta^{'}}$ and let $\delta^{*}(\underline{n}C)$ and $\gamma^{*}(\underline{n}C,\delta)$ represent the maximum of $\delta_{\theta^{'}}$ and $\gamma_{\theta^{'}}$ across such $\theta^{'}$. \textbf{Step 2}: Finding a steady state profile with large $\delta,\gamma$ that approximates $\pi^{*}$. Since $\pi^{*}$ is patiently stable under $g$, there exists a sequence of strategy profiles $\pi^{(j)}\to\pi^{*}$ where $\pi^{(j)}$ is $\delta_{j}$-stable under $g$ with $\delta_{j}\to1$. Each $\pi^{(j)}$ can be written as the limit of steady state strategy profiles. That is, for each $j$ there exists $\gamma_{j,k}\to1$ and a sequence of steady state profiles $\pi^{(j,k)}\in\Pi^{*}(g,\delta_{j},\gamma_{j,k})$ such that $\lim_{k\to\infty}\pi^{(j,k)}=\pi^{(j)}$. The convergence of the array $\pi^{(j,k)}$ to $\pi^{*}$ means we may find $\underline{j}\in\mathbb{N}$ and a function $k(j)$ so that whenever $j\ge\underline{j}$ and $k\ge k(j),$ $\pi^{(j,k)}$ is no more than $\min(\epsilon^{*},\frac{h}{2|\Theta|^{2}})$ away from $\pi^{*}$. Find $j^{\circ}\ge\underline{j}$ large enough so $\delta^{\circ}:=\delta_{j^{\circ}}>\delta^{*}(\underline{n}C)$, and then find a large enough $k^{\circ}>k(j^{\circ})$ so that $\gamma^{\circ}:=\gamma_{j^{\circ},k^{\circ}}>\max(\gamma^{*}(\underline{n}C,\delta^{\circ}),\underline{\gamma})$. 
So we have identified a steady state profile $\pi^{\circ}:=\pi^{(j^{\circ},k^{\circ})}\in\Pi^{*}(g,\delta^{\circ},\gamma^{\circ})$ which approximates $\pi^{*}$ to within $\min(\epsilon^{*},\frac{h}{2|\Theta|^{2}})$. \textbf{Step 3}: Applying properties of $\mathscr{R}_{1}$ and $\mathscr{R}_{2}$. For each pair $\theta^{'},\theta^{''}$ such that $\theta^{'}\succ_{s^{'}}\theta^{''}\text{ and }\theta^{'}\in J(s^{'},\pi^{*})$, we will bound the probability that $\pi_{2}^{\circ}(\cdot|s^{'})$ does not best respond to $P_{\theta^{'}\triangleright\theta^{''}}$ by $\frac{h}{|\Theta|^{2}}$. Since there are at most $|\Theta|\cdot(|\Theta|-1)$ such pairs in the intersection defining $P(s^{'},\pi^{*})$, this would imply that $\pi_{2}^{\circ}(\hat{a}|s^{'})<[|\Theta|\cdot(|\Theta|-1)]\cdot\frac{h}{|\Theta|^{2}}$ since $\hat{a}\notin\text{BR}(P(s^{'},\pi^{*}),s^{'})$. And since $\pi_{2}^{\circ}$ is no more than $\frac{h}{2|\Theta|^{2}}$ away from $\pi_{2}^{*},$ this would show $\pi_{2}^{*}(\hat{a}|s^{'})<h$. By construction $\pi^{\circ}$ is closer than $\epsilon_{\theta^{'}}$ to $\pi^{*}$, and furthermore $\delta^{\circ}\ge\delta_{\theta^{'}}(\underline{n}C)$ and $\gamma^{\circ}\ge\gamma_{\theta^{'}}(\underline{n}C,\delta^{\circ})$. By Lemma \ref{lem:nondom_message}, $\pi_{1}^{\circ}(s^{'}|\theta^{'})\ge\underline{n}C(1-\gamma^{\circ})$. At the same time, $\pi_{1}^{\circ}=\mathscr{R}_{1}[\pi_{2}^{\circ}]$ and $\theta^{'}\succ_{s^{'}}\theta^{''}$, so Lemma \ref{prop:compatible_comonotonic} implies that $\pi_{1}^{\circ}(s^{'}|\theta^{'})\ge\pi_{1}^{\circ}(s^{'}|\theta^{''})$. Turning to the receiver side, $\pi_{2}^{\circ}=\mathscr{R}_{2}[\pi_{1}^{\circ}]$ with $\pi_{1}^{\circ}$ satisfying the conditions of Lemma \ref{prop:receiver_learning} associated with $\epsilon=\frac{h}{2|\Theta|^{2}}$ and $\gamma^{\circ}\ge\underline{\gamma}$. Therefore, we conclude \[ \pi_{2}^{\circ}(\text{BR}(P_{\theta^{'}\triangleright\theta^{''}},s^{'})\ |\ s^{'})\ge1-\frac{1}{\underline{n}}-\frac{h}{2|\Theta|^{2}}. \] But by construction of $\underline{n}$ in (\ref{eq:frac_old}), $1-\frac{1}{\underline{n}}>1-\frac{h}{2|\Theta|^{2}}$. So the LHS is at least $1-\frac{h}{|\Theta|^{2}}$, as desired. \end{proof} \begin{rem} \label{rem:general_learning_model} More generally, consider \emph{any} model for our populations of agents with geometrically distributed lifetimes that generates aggregate response functions $\mathscr{R}_{1}$ and $\mathscr{R}_{2}$. Then the proof of Theorem \ref{thm:PS_is_compatible} applies to the steady states of Proposition \ref{prop:mutual_AR} provided that: \begin{enumerate} \item $\mathscr{R}_{1}$ satisfies the conclusion of Lemma \ref{prop:compatible_comonotonic}. \item $\mathscr{R}_{2}$ satisfies the conclusion of Lemma \ref{prop:receiver_learning}. \item For $(\theta^{'},s^{'})$ pairs such that $\theta^{'}\succ_{s^{'}}\theta^{''}$ for at least one type $\theta^{''}$ and $\theta^{'}\in J(s^{'},\pi^{*})$, Lemma \ref{lem:nondom_message} is valid for $(\theta^{'},s^{'})$. \end{enumerate} \end{rem} We outline two such more general learning models below. \begin{cor} \label{cor:1} Every patiently stable strategy profile under either of the following learning models satisfies the compatibility criterion. \begin{enumerate} \item \textbf{Heterogeneous priors}. There is a finite collection of regular sender priors $\{g_{1,k}\}_{k=1}^{n}$ and a finite collection of regular receiver priors $\{g_{2,k}\}_{k=1}^{n}$. 
Upon birth, an agent is endowed with a random prior, where the distributions over priors are $\mu_{1}$ and $\mu_{2}$ for senders and receivers. An agent's prior is independent of her payoff type, and furthermore no one ever observes another person's prior. \item \textbf{Social learning}. Suppose a $1-\alpha$ fraction of the senders are ``normal learners'' as described in Section \ref{sec:Model}, but the remaining $0<\alpha<1$ fraction are ``social learners.'' At the end of each period, a social learner can observe the extensive-form strategies of her matched receiver and of $c>0$ other matches sampled uniformly at random. Each sender knows whether she is a normal learner or a social learner upon birth, and this is uncorrelated with her payoff type. Receivers cannot distinguish between the two kinds of senders. \end{enumerate} \end{cor} \begin{proof} It suffices to verify the three conditions of Remark \ref{rem:general_learning_model} for these two models. (a) \textbf{Heterogeneous priors}. Write $\mathscr{R}_{1}^{(\mu,\delta,\gamma)}$ and $\mathscr{R}_{2}^{(\mu,\delta,\gamma)}$ to represent the ASR and ARR respectively in this model with heterogeneous priors. It is easy to see that \[ \mathscr{R}_{1}^{(\mu,\delta,\gamma)}[\pi_{2}]=\sum_{k=1}^{n}\mu_{1}(g_{1,k})\cdot\mathscr{R}_{1}^{(g_{1,k},\delta,\gamma)}[\pi_{2}] \] for every $0\le\delta,\gamma<1$, where by $\mathscr{R}_{1}^{(g_{1,k},\delta,\gamma)}$ we mean the ASR in the unmodified model where all senders have prior $g_{1,k}$. Each $\mathscr{R}_{1}^{(g_{1,k},\delta,\gamma)}$ satisfies Lemma \ref{prop:compatible_comonotonic}, meaning that if $\theta^{'}\succ_{s^{'}}\theta^{''}$, then $\mathscr{R}_{1}^{(g_{1,k},\delta,\gamma)}[\pi_{2}](s^{'}|\theta^{'})\ge\mathscr{R}_{1}^{(g_{1,k},\delta,\gamma)}[\pi_{2}](s^{'}|\theta^{''})$. So Lemma \ref{prop:compatible_comonotonic} continues to hold for $\mathscr{R}_{1}^{(\mu,\delta,\gamma)}$, which is a convex combination of these other ASRs. Analogously, we have $\mathscr{R}_{2}^{(\mu,\delta,\gamma)}=\sum_{k=1}^{n}\mu_{2}(g_{2,k})\cdot\mathscr{R}_{2}^{(g_{2,k},\delta,\gamma)}$. Each $\mathscr{R}_{2}^{(g_{2,k},\delta,\gamma)}$ satisfies Lemma \ref{prop:receiver_learning}, that is to say for each $\theta^{'},\theta^{''}$, $s^{'}$ and $\epsilon$ there exist $C_{k}$ and $\underline{\gamma}_{k}$ such that the lemma holds. So Lemma \ref{prop:receiver_learning} must also hold for the convex combination $\mathscr{R}_{2}^{(\mu,\delta,\gamma)},$ taking $C:=\max_{k}C_{k}$ and $\underline{\gamma}:=\max_{k}\underline{\gamma}_{k}$. Finally, in the proof of Lemma \ref{lem:nondom_message} we may separately analyze the experimentation rates of senders born with different priors. Fix a strategy profile $\pi^{*}$ where $\theta^{'}\in J(s^{'},\pi^{*})$ for some type $\theta^{'}$ and signal $s^{'}$. The conclusion is that for each $k$ there exist $\epsilon_{k}$ and functions $\delta_{k},$ $\gamma_{k}$ so that whenever $\delta\ge\delta_{k}(N)$, $\gamma\ge\gamma_{k}(N,\delta)$, and $\pi$ is a steady state of the heterogeneous priors model no further away than $\epsilon_{k}$ from $\pi^{*}$ in $\ell_{1}$ norm, the type $\theta^{'}$ senders who were born with the prior $g_{1,k}$ will place weight at least $(1-\gamma)N$ on playing $s^{'}$ each period. By taking $\epsilon:=\min_{k}\epsilon_{k}$, $\delta(\cdot):=\max_{k}\delta_{k}(\cdot),$ and $\gamma(\cdot,\cdot):=\max_{k}\gamma_{k}(\cdot,\cdot)$, we conclude that the entire type $\theta^{'}$ population must place weight at least $(1-\gamma)N$ on $s^{'}$ each period. (b) \textbf{Social learning}. 
Write $\mathscr{R}_{1}^{*}$ for the ASR in this modified model and write $\mathscr{R}_{1}^{\bullet}$ for the ASR in a model where all senders are social learners. Social learners play myopic best responses to their current beliefs each period, since they receive the same information regardless of their signal choice. But from the definition of $\theta^{'}\succ_{s^{'}}\theta^{''}$, whenever $s^{'}$ is a myopic weak best response for $\theta^{''}$, it is also a myopic strict best response for $\theta^{'}$. Fixing the receiver's aggregate play at $\pi_{2}$, both types of social learners face the same distribution over their beliefs. This shows $\mathscr{R}_{1}^{\bullet}[\pi_{2}](s^{'}|\theta^{'})\ge\mathscr{R}_{1}^{\bullet}[\pi_{2}](s^{'}|\theta^{''})$ whenever $\theta^{'}\succ_{s^{'}}\theta^{''}$, so $\mathscr{R}_{1}^{\bullet}$ satisfies Lemma \ref{prop:compatible_comonotonic}; and since $\mathscr{R}_{1}^{*}[\pi_{2}]=\alpha\mathscr{R}_{1}^{\bullet}[\pi_{2}]+(1-\alpha)\mathscr{R}_{1}[\pi_{2}]$, where $\mathscr{R}_{1}$ is the ASR of the normal learners, $\mathscr{R}_{1}^{*}$ also satisfies Lemma \ref{prop:compatible_comonotonic}. Since receivers cannot distinguish between the two kinds of senders, we have not modified the receivers' learning problem. So $\mathscr{R}_{2}$ continues to satisfy Lemma \ref{prop:receiver_learning}. Moreover, the experimentation behavior of the $1-\alpha$ fraction of ``normal learners'' satisfies the conclusion of Lemma \ref{lem:nondom_message}. More precisely, there exist $\epsilon$ and functions $\hat{\delta},$ $\hat{\gamma}$ so that whenever $\delta\ge\hat{\delta}(N)$, $\gamma\ge\hat{\gamma}(N,\delta)$, and $\pi$ is a steady state of the social learning model no further away than $\epsilon$ from $\pi^{*}$ in $\ell_{1}$ norm, at least a $(1-\gamma)N$ fraction of the normal learner senders will be playing $s^{'}$ each period. But if we set $\delta(N):=\hat{\delta}(N/(1-\alpha))$ and $\gamma(N,\delta):=\hat{\gamma}(N/(1-\alpha),\delta)$, then whenever $\delta\ge\delta(N)$, $\gamma\ge\gamma(N,\delta)$ and the other relevant conditions are satisfied, the overall steady state play of the type $\theta^{'}$ population will place weight at least $(1-\gamma)\cdot(1-\alpha)\cdot(N/(1-\alpha))=(1-\gamma)\cdot N$ on $s^{'}$. \end{proof} \begin{example} The beer-quiche game of Example \ref{exa:beer-quiche} has two components of Nash equilibria: ``beer-pooling equilibria'' where both types play B with probability 1, and ``quiche-pooling equilibria'' where both types play Q with probability 1. The latter component requires the receiver to play F with positive probability after signal B. No quiche-pooling equilibrium satisfies the compatibility criterion. This is because when $\pi^{*}$ is a quiche-pooling equilibrium, type $\theta_{\text{strong}}$'s equilibrium payoff is 2, so $\theta_{\text{strong}}\in J(B,\pi^{*})$ since $\theta_{\text{strong}}$'s highest possible payoff under $B$ is 3. We have also shown in Example \ref{exa:beer-quiche} that $\theta_{\text{strong}}\succ_{B}\theta_{\text{weak}}$. Thus, \[ P(B,\pi^{*})=\left\{ p\in\Delta(\Theta):\frac{p(\theta_{\text{weak}})}{p(\theta_{\text{strong}})}\le\frac{\lambda(\theta_{\text{weak}})}{\lambda(\theta_{\text{strong}})}=1/9\right\} . \] F is not a best response after B to any such belief, so equilibria in which F occurs with positive probability after B do not satisfy the compatibility criterion. 
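This claim can be checked mechanically. A minimal sketch follows; the receiver payoffs in it (1 for fighting $\theta_{\text{weak}}$ or for not fighting $\theta_{\text{strong}}$, 0 otherwise) are the textbook beer-quiche values, an assumption on our part since the actual payoffs live in the figure of Example \ref{exa:beer-quiche}, though they are consistent with the conclusions quoted here:
\begin{verbatim}
# Can F be a best response after B to an admissible belief, i.e. one
# with odds p(weak)/p(strong) <= 1/9 (so p(weak) <= 0.1)?
u2 = {"F": {"weak": 1, "strong": 0}, "NF": {"weak": 0, "strong": 1}}

def best_responses(p_weak):
    payoff = {a: p_weak * u2[a]["weak"]
                 + (1 - p_weak) * u2[a]["strong"] for a in u2}
    best = max(payoff.values())
    return {a for a in u2 if payoff[a] == best}

print(any("F" in best_responses(k / 1000) for k in range(0, 101)))
# False: F requires p(weak) >= 1/2, far above the admissible 0.1.
\end{verbatim}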
Since the compatibility criterion is a necessary condition for patient stability by Theorem \ref{thm:PS_is_compatible}, no quiche-pooling equilibrium is patiently stable. Since the set of patiently stable outcomes is a non-empty subset of the set of Nash equilibria by Proposition \ref{thm:nash}, pooling on beer is the unique patiently stable outcome. By Remark \ref{rem:general_learning_model}, quiche-pooling equilibria are still not patiently stable in more general learning models involving either heterogeneous priors or social learners.\hfill{} $\blacklozenge$ \end{example} \subsection{Patient stability and equilibrium dominance } In generic games, equilibria where the receiver plays a pure strategy must satisfy a stronger condition than the compatibility criterion to be patiently stable. \begin{defn} Let \[ \widetilde{J}(s,\pi^{*})\coloneqq\left\{ \theta\in\Theta:\underset{a\in A}{\max}\ u_{1}(\theta,s,a)\geq u_{1}(\theta;\pi^{*})\right\} . \] If $\widetilde{J}(s^{'},\pi^{*})$ is non-empty, define the \emph{strongly admissible beliefs at signal $s^{'}$} \emph{under profile} $\pi^{*}$ to be \[ \tilde{P}(s^{'},\pi^{*})\coloneqq\Delta(\widetilde{J}(s^{'},\pi^{*}))\bigcap\left\{ P_{\theta^{'}\triangleright\theta^{''}}:\theta^{'}\succ_{s^{'}}\theta^{''}\right\} \] where $P_{\theta^{'}\triangleright\theta^{''}}$ is defined in Equation (\ref{eq:odds_ratio_P}). Otherwise, define $\tilde{P}(s^{'},\pi^{*}):=\Delta(\Theta)$. \end{defn} Here, $\widetilde{J}(s,\pi^{*})$ is the set of types for which \emph{some} response to signal $s$ is at least as good as their payoff under $\pi^{*}.$ Note that $\widetilde{P},$ unlike $P,$ assigns probability 0 to equilibrium dominated types, which is the belief restriction of the Intuitive Criterion. \begin{defn} A Nash equilibrium $\pi^{*}$ is \emph{on-path strict for the receiver} if for every on-path signal $s^{*},$ $\pi_{2}(a^{*}|s^{*})=1$ for some $a^{*}\in A$ and $u_{2}(s^{*},a^{*},\pi_{1})>\max_{a\ne a^{*}}u_{2}(s^{*},a,\pi_{1})$. \end{defn} Of course, the receiver cannot have strict ex-ante preferences over play at unreached information sets; this condition is called ``on-path strict'' because we do not place restrictions on the receiver's incentives after off-path signals. In generic signalling games, all pure-strategy equilibria are on-path strict for the receiver, but the same is not true for mixed-strategy equilibria. \begin{defn} A strategy profile $\pi^{*}$ satisfies the \emph{strong compatibility criterion} if at every signal $s^{'}$ we have \[ \pi_{2}^{*}(\cdot|s^{'})\in\Delta(\text{BR}(\widetilde{P}(s^{'},\pi^{*}),s^{'})). \] \end{defn} It is immediate that the strong compatibility criterion implies the compatibility criterion, since it places more stringent restrictions on the receiver's behavior. It is also immediate that the strong compatibility criterion implies the Intuitive Criterion. \begin{thm} \label{thm:eqm_dominated_types} Suppose $\pi^{*}$ is on-path strict for the receiver and patiently stable. Then it satisfies the strong compatibility criterion. \end{thm} The proof of this theorem appears in Appendix \ref{subsec:eqm_dominated}. Here we provide an outline of the arguments. We first show there is a sequence of steady state strategy profiles $\pi^{(k)}\in\Pi^{*}(g,\delta_{k},\gamma_{k})$ with $\gamma_{k}\to1$ and $\pi^{(k)}\to\pi^{*}$, where the rate of on-path convergence of $\pi_{2}^{(k)}$ to $\pi_{2}^{*}$ is of order $(1-\gamma_{k})$. 
That is, there exists some $N^{\text{wrong}}\in\mathbb{N}$ so that $\pi_{2}^{(k)}(\cdot|s^{*})$ assigns probability less than $(1-\gamma_{k})\cdot N^{\text{wrong}}$ to actions other than the equilibrium response to $s^{*}$, for each $k$ and each on-path signal $s^{*}$. Next, we consider a type $\theta^{\text{D}}$ for whom the off-path signal $s^{'}$ is equilibrium dominated. We show the probability that a very patient $\theta^{\text{D}}$ ever switches away from $s^{*}$ after trying it for the first time is bounded by a multiple of the weight that $\pi_{2}(\cdot|s^{*})$ assigns to non-equilibrium responses to $s^{*}$. Together with the fact that $\pi_{2}^{(k)}(\cdot|s^{*})$ converges to $\pi_{2}^{*}(\cdot|s^{*})$ at the rate of $(1-\gamma_{k})$, this lets us find some $N\in\mathbb{N}$ so that $\pi_{1}^{(k)}(s^{'}|\theta^{\text{D}})<N\cdot(1-\gamma_{k})$ for every $k$. On the other hand, for each $\theta^{'}\in\tilde{J}(s^{'},\pi^{*}),$ Lemma \ref{lem:nondom_message} shows for any $N^{'}\in\mathbb{N}$, for large enough $k$ we will have $\pi_{1}^{(k)}(s^{'}|\theta^{'})>N^{'}\cdot(1-\gamma_{k})$. So by choosing $N^{'}$ sufficiently large relative to $N$, we can show that $\lim_{k\to\infty}\frac{\pi_{1}^{(k)}(s^{'}|\theta^{'})}{\pi_{1}^{(k)}(s^{'}|\theta^{\text{D}})}=\infty$. Finally, we apply Theorem 2 of \citet*{fudenberg_he_imhof_2016} to deduce that a typical receiver has enough data to conclude that someone who sends $s^{'}$ is arbitrarily more likely to be $\theta^{'}$ than $\theta^{\text{D}}$, thus eliminating completely any belief in equilibrium dominated types after $s^{'}$. \begin{rem} As noted by \citet{fudenbergKreps1988} and \citet*{sobel1990fixed}, it seems ``intuitive'' that learning and rational experimentation should lead receivers to assign probability 0 to types that are equilibrium dominated, so it might seem surprising that this theorem needs the additional assumption that the equilibrium is on-path strict for the receiver. However, in our model senders start out initially uncertain about the receivers' play, and so even types for whom a signal is equilibrium dominated might initially experiment with it. Showing that these experiments do not lead to ``perverse'' responses by the receivers requires some arguments about the \emph{relative} probabilities with which equilibrium-dominated types and non-equilibrium-dominated types play off-path signals. When the equilibrium involves on-path receiver randomization, a non-trivial fraction of receivers could play an action that a type finds strictly worse than her worst payoff under an off-path signal. In this case, we do not see how to show that the probability she ever switches away from her equilibrium signal tends to 0 with patience, since the event of seeing a large number of these unfavorable responses in a row has probability bounded away from 0 even when the receiver population plays exactly their equilibrium strategy. However, we do not have a counterexample to show that the conclusion of the theorem fails without on-path strictness for the receiver. \end{rem} \begin{example} \label{exa:modifed_beer_quiche}In the following modified beer-quiche game, we still have $\lambda(\theta_{\text{strong}})=0.9$, but the payoffs of fighting a type $\theta_{\text{weak}}$ who drinks beer have been substantially increased: \begin{center} \includegraphics[scale=0.4]{BQ2-eps-converted-to.pdf} \noindent \end{center} Consider the Nash equilibrium $\pi^{*}$ where both types play Q, supported by the receiver playing F after B. 
Since F is a best response to the prior $\lambda$ after B, it is not ruled out by the compatibility criterion. This pooling equilibrium is on-path strict for the receiver, because the receiver has a strict preference for NF at the only on-path signal, Q. Moreover, it does not satisfy the strong compatibility criterion, because $\widetilde{J}(\text{B},\pi^{*})=\{\theta_{\text{strong}}\}$ implies the only strongly admissible belief after $B$ assigns probability 1 to the sender being $\theta_{\text{strong}}$. So NF is the only off-path response after B that satisfies the strong compatibility criterion. Thus Theorem \ref{thm:eqm_dominated_types} implies that this equilibrium is not patiently stable.\hfill{} $\blacklozenge$ \end{example} \section{Discussion } Our learning model supposes that the agents have geometrically distributed lifetimes, which is one of the reasons that the senders' optimization problems can be solved using the Gittins index. If agents were to have fixed finite lifetimes, as in \citet{fudenberg_steady_1993,fudenberg_superstition_2006}, their optimization problem would not be stationary. For this reason, the finite-horizon analog of the Gittins index is only approximately optimal for the finite-horizon multi-armed bandit problem \citep{nino2011computing}. Applying the geometric lifetime framework to steady-state learning models for other classes of extensive-form games could prove fruitful, especially for games where we need to compare the behavior of various players or player types. Our results provide an upper bound on the set of patiently stable strategy profiles in a signalling game. In \citet*{FudenbergHe2017TCE}, we will provide a lower bound for the same set, as well as a sharper upper bound under additional restrictions on the priors. But together these results will not give an exact characterization of patiently stable outcomes. Nevertheless, our results do show how the theory of learning in games provides a foundation for refining the set of Nash equilibria in signalling games. In future work, we hope to investigate a learning model featuring temporary sender types. Instead of the sender's type being assigned at birth and fixed for life, at the start of each period each sender takes an i.i.d. draw from $\lambda$ to discover her type for that period. When the players are impatient, this yields different steady states than the fixed-type model here, as noted by \citet*{dekel_learning_2004}. This model will require different tools to analyze, since the sender's problem now becomes a restless bandit. Theorem \ref{prop:index} may also find applications in studies of other sorts of dynamic decisions. Consider any multi-armed bandit problem where each arm $m$ is associated with an unknown distribution over prizes $Z_{m}$. Given a discount factor, a prior belief over prize distributions, and a utility function over prizes $u:\cup_{m}Z_{m}\to\mathbb{R}$, we can characterize the agent's optimal dynamic behavior using the Gittins index. Theorem \ref{prop:index} essentially provides a comparison between the dynamic behavior of two agents based on their static preferences $u$ over the prizes. As an immediate application, consider a principal-agent setting where the principal knows the agent's utility $u$, but not the agent's beliefs over the prize distributions of different arms or the agent's discount factor. Suppose the principal observes the agent choosing arm 1 in the first period. 
The principal can impose taxes and subsidies on the different prizes and arms, changing the agent's per-period utility to $\tilde{u}:\cup_{m}Z_{m}\to\mathbb{R}$. For what taxes and subsidies would the agent still have chosen arm 1 in the first period, robust across all specifications of her initial beliefs and discount factor? According to Theorem \ref{prop:index}, the answer is roughly those taxes and subsidies such that arm 1 is more type-compatible with $\tilde{u}$ than $u$.\footnote{This is precisely the answer when the compatibility relation is irreflexive.} \bibliographystyle{ecta}
\section{Introduction} The subject of this paper is a discussion of the notion of classicality as emerging in quantum mechanical systems and the possibility of giving a quantification of the non-classical behaviour using an information theoretic measure. The physical context on which we mainly focus is that of quantum fields in early universe cosmology. \par Recent years have seen an increased activity in the study of emergent classicality, which has led to the formation of new concepts and a significant increase in the understanding of the physical mechanisms underlying the classicalisation process. Key among the former are the notion of decoherence (either environmentally induced or through the intrinsic dynamics in closed systems) and its interplay with noise, setting limits to the degree of predictability enjoyed by any quantum mechanical system. As far as the latter is concerned, a large number of illustrative, exactly solvable models have been widely studied, mainly in the context of non-relativistic quantum mechanics. \par One of the driving forces of this activity has been the need to understand the quantum to classical transition in a cosmological setting (quantum and early universe). In the context of the latter, it is well known that a basic premise of the inflationary model is the eventual classicalisation of the quantum fluctuations as the seeds of later structure formation. Nevertheless, in spite of the conceptual importance, it is fair to say that there is not yet a clear consensus on how the process of classicalisation is effected. The reason for this is partly that the well tested concepts have to be applied to a field theoretic setting with an infinite number of degrees of freedom (hence, besides the technical difficulties involved, a postulated split between system and environment is not intuitively transparent) and partly that the relevant physics is somewhat remote from the better understood realm of the low energy world. By this we mean that it is not easy to precisely identify what is meant by classical behaviour and which physical quantities ought to exhibit it (is it the mean field \cite{CaHu2}, the field modes \cite{Star}, a coarse-grained version of the $n$-point functions \cite{CaHu2}?). \par We must also remark on the absence from the discussion of a clear-cut and quantitatively precise criterion for classicality. Of course the formulation of such a criterion ought to depend upon the degrees of freedom one is seeking to study. Very often the emergent characteristics of classicality are taken as definitive of it, something that might eventually lead to confusion, as for instance when taking large fluctuations as characteristic of classical behaviour. The only general and unambiguous criterion (provided one correctly identifies the relevant variables) is provided by the consistent histories approach to quantum mechanics \cite{Gri, Omn3, Omn, Har, GeHa}. Unfortunately the technical demands raised by this approach are rather high, so that it has been possible to treat in detail only a number of relatively simple systems. \par The identification of a classicality criterion and the search for a measure to quantify it form the backbone of this paper. 
We argue that classicality ought to be thought of generically as a {\it phase space} manifestation \footnote{Note that this does not preclude classicality emerging for much more coarse-grained variables, in particular at the level of hydrodynamics.} and in that light the most suitable object for this task is a version of Shannon information: the Shannon-Wehrl (SW) entropy \cite{Weh, Wehr}. This has been considered before as a measure of quantum and environmentally induced fluctuations \cite{Hal, AnHa}. \par We expand on this previous work by tying its properties to a precise formulation of a criterion of phase space classicality. The emergent criterion is influenced by the work of Omn\`es within the consistent histories program \cite{Omn}. It essentially states that a state is to be thought of as classical if it is concentrated in phase space and this property is preserved by the dynamical evolution. But here we append an important distinction: classicality is destroyed not only in view of the increase of fluctuations but also because of the phase space mixing induced by the quantum evolution. With few exceptions \cite{Omn2, Zur} this has not been properly focused upon in the existing bibliography, and we proceed to examine it in detail. In particular we argue that large squeezing (typical for field modes in an expanding universe) is a characteristic of non-classical behaviour, something that is implicitly well known in the field of quantum optics. \par Our criterion is then argued to entail that the SW entropy indeed quantifies the deviation from classicality: it takes into account both the phase space spreading of the state as well as the phase space mixing. Hence the criterion can be translated into the requirement that the SW entropy takes values of the order of its lower bound of unity. We should stress the appealing fact that a single quantity is sufficient to capture the classicality-relevant behaviour even in systems with a large number of degrees of freedom. \par The plan of the paper is then as follows. In the next section some preliminary definitions are given for information theory. We mainly focus on properties that are useful for the development of our later argumentation. Section 3 is the main one. We give the definitions of the SW entropy, present some of its properties, state the classicality criterion and provide the connection between this and the SW entropy. A number of examples in non-relativistic quantum mechanics are studied so that particular features can be isolated and commented upon. In the next section the discussion is upgraded to the field theoretic context. Discussing the corresponding generalisations, we finally give a discussion of various proposals for field classicalisation as well as whether the SW entropy could be identified with the phenomenological (thermodynamic) entropy appearing in cosmological discussions. \section{Shannon information in quantum mechanics} \subsection{The notion of information} Information is largely not an absolute concept. Intuitively it corresponds to the degree of precision of the knowledge we can have about a particular system. As such, it has always to be defined with respect to the questions we want to ask. When one is dealing with systems exhibiting a degree of randomness, our knowledge about them is hidden in the assignment of probabilities to individual events. 
\par When one is dealing with alternatives that can be meaningfully assigned probabilities (either classical stochastic processes or quantum mechanics, but notably not quantum mechanical histories), one has an intuitive feeling of what properties a good measure of information should have: \\ 1. It should be small for peaked probability distributions and large for spread ones (reflecting the fact that there is less to be discovered by a measurement or a precise determination in the former case). \\ 2. It should increase under coarse graining, i.e. when settling for a less detailed description of our system. \\ These properties are nicely captured by Shannon's definition of information: Given a sample space $\Omega$ with $N$ elements and an assignment of probabilities $p_i$ for $i \in \Omega$, the information is naturally defined as: \begin{equation} I_{\Omega}[p] = - \sum_i p_i \log p_i \end{equation} This is clearly a non-negative quantity, obtaining its maximum of $\log N$ for the total ignorance probability distribution $p_i = 1/N$ and its minimum zero for a precise determination of an alternative. This incorporates nicely property 1), while property 2) is guaranteed by the concavity of this function. Hence, any coarse-graining $\Omega \rightarrow \Omega'$, with its corresponding restriction map for the probabilities $p \rightarrow p'$, will entail \begin{equation} I_{\Omega}[p] \leq I_{\Omega'}[p'] \end{equation} It is not our purpose to give an exhaustive list of the properties of the Shannon information here, since they are fully covered in the relevant bibliography \cite{Cov}. We just restrict ourselves to two important results. \par In the case of a continuous sampling space $\Omega$ with a probability distribution $p(x)$ with $x \in \Omega$, the Shannon information is given by \begin{equation} I_{\Omega}[p] = - \int dx p(x) \log p(x) \end{equation} It is generically not positive and it may not even be bounded from below. In the case that $\Omega = R^n$ and for distributions with covariance matrix $K$, it is bounded from above, \begin{equation} I_{\Omega}[p] \leq \frac{1}{2} \log \left( (2 \pi e)^n \det K \right) \end{equation} the bound being achieved by the corresponding Gaussian probability distributions. \par Finally, we should note that one can define the relative information between two probability distributions $p_1$ and $p_2$ (henceforward we drop the subscript referring to the sampling space unless explicitly required) as \begin{equation} I[p_2|p_1] = \int dx p_1(x) (\log p_1(x) - \log p_2(x)) \end{equation} This quantity is always non-negative and jointly convex with respect to both probability distributions. It is to be interpreted as the ``extra'' amount of information contained in $p_2$ with reference to $p_1$. \subsection{The quantum mechanical context} Quantum mechanics is an inherently probabilistic theory. Given a quantum state $\rho$ one can construct probability measures for any observable by virtue of the spectral theorem. \par Shannon information can first be naturally defined with respect to any orthonormal basis on Hilbert space (hence with a maximal set of commuting observables). If we name the basis $|n \rangle$, then the probabilities $\langle n | \rho |n \rangle$ are constructed and the Shannon information $I_{\{n\}}[\rho]$ can be defined as in equation (2.1). Clearly the lower bound on $I$ is here again $0$ while the maximum bound is $\log N$ where $N$ is the dimension of the Hilbert space. 
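As a simple numerical illustration of these definitions (a minimal sketch; the qubit state and the two bases are our own choice of example), one can compute $I_{\{n\}}[\rho]$ for a pure state and recover the two extreme values:
\begin{verbatim}
import numpy as np

def shannon_info(rho, basis):
    # I = -sum_n p_n log p_n with p_n = <n|rho|n>, as in (2.1)
    probs = np.real(np.array([v.conj() @ rho @ v for v in basis]))
    probs = probs[probs > 1e-12]   # use the convention 0 log 0 = 0
    return float(-(probs * np.log(probs)).sum())

psi = np.array([1.0, 0.0])                         # the state |0>
rho = np.outer(psi, psi.conj())
z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2),
           np.array([1.0, -1.0]) / np.sqrt(2)]

print(shannon_info(rho, z_basis))  # 0: a sharp alternative
print(shannon_info(rho, x_basis))  # log 2: total ignorance, the maximum
\end{verbatim}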
\par Slightly more generally, one can define Shannon information with respect to any self-adjoint operator $A$ with discrete spectrum. Since then $A = \sum_n a_n P_n$, in terms of the projectors $P_n$ onto its eigenspaces, we can define again \begin{equation} I_A[\rho] = - \sum_n tr (P_n \rho) \log tr(P_n \rho) \end{equation} In such a case the lower bound is not zero, unless $A$ has non-degenerate spectrum. This comes from the fact that in the degenerate case the probability distribution is a coarse graining with respect to the one defined by a maximal set of observables to which $A$ belongs. \par There is an important relationship between Shannon information and von Neumann (vN) entropy $S[\rho] = - tr ( \rho \log \rho)$ in the case of discrete spectrum: \begin{equation} I_{\{n\}}[\rho] \geq S[\rho] \end{equation} The equality holds if $\rho$ is diagonal in the $|n \rangle$ basis. Hence the values of the quantity $I - S$ provide a measure of how close a density matrix is to being diagonal in the particular basis. This can be an important tool in the context of environment superselection rules and the identification of the pointer basis. \par The case of continuous spectrum is rather more interesting. The projection valued measure $dE(x)$ associated to a self-adjoint operator $X$ defines a distribution function $p(x) = \frac{d}{dx} tr ( \rho E(x))$, with respect to which the Shannon information is defined. For the case of the position operator $x$ on $L^2(R)$ we get an upper bound for fixed uncertainty $\Delta x$, \begin{equation} I_x[\rho] \leq \frac{1}{2}\left(1 + \log \left(2 \pi (\Delta x)^2 \right)\right) \end{equation} saturated by the Gaussian states. A similar result holds for the momentum distribution $I_p[\rho]$; these are complemented by the entropic uncertainty relation \begin{equation} I_x[\rho] + I_p[\rho] \geq 1 + \log \pi \hbar \end{equation} \section{Shannon-Wehrl entropy} When one needs to discuss the emergence of classical behaviour from a quantum system, one needs to quantify the notion of fluctuations around classical predictability. In the one dimensional case the uncertainty $\Delta x \Delta p$ serves this purpose well, but in systems with many degrees of freedom uncertainties are not by themselves sufficient to capture the classicalisation of the system's state. Correlations are involved (in the strong form of entanglement) that can disqualify even a state localised in phase space from being considered as classical. The same situation is of course more important in field theory, where one is working with an infinite number of degrees of freedom. \par It is therefore important that simple quantities can be used to codify the classicality of a state. A particular variant of Shannon information, the so-called Shannon-Wehrl (SW) entropy, seems well suited to provide such a quantification. The purpose of this section is to explain in which sense the study of this object yields information about the classical behaviour of quantum states. \par Before proceeding we should be explicit about what we refer to here as classicality. There are two necessary requirements a state must satisfy in order to be characterised as classical (or quasiclassical): \\ 1. Suppression of interferences. \\ 2. Seen as a wave packet, it has to evolve with a good degree of accuracy according to the classical equations of motion. \\ The important point one has to stress here is that we use the word classicality to refer to the Hamiltonian classical limit. 
Indeed (for instance in many body systems) classicality might refer to collective (hydrodynamic or thermodynamic) variables characterising the system. Our focus is therefore on the phase space distributions associated with a quantum state. So suppression of interferences is implied with respect to some phase space ``basis'', an issue which is again very relevant when discussing classical equations of motion. Of course, given particular assumptions our discussion can involve classicality of collective variables, as for instance the centre of mass of a many-particle system. For such issues, we refer the reader to \cite{Omn} for details. \par Therefore, classicality is defined with respect to {\it phase space properties}, rather than configuration space or momentum space ones. While phase space classicality straightforwardly implies configuration or momentum space classicality, the converse is not necessarily true. A state localised solely in position (and with a large momentum spread) cannot be considered as classical. The fluctuations around the classical path are then large enough to destroy any sense of predictability. Moreover, such a localisation is not robust in the presence of even small interactions. \par Finally, we should remark that, since the SW entropy is defined in terms of coherent states, we have found it expedient to employ intermittently the Schr\"odinger and the Bargmann representations, according to calculational convenience. \subsection{Definition and properties} The SW entropy is defined as \begin{equation} I[\rho] = - \int Dw Dw^* p(w,w^*) \log p(w,w^*) \end{equation} in terms of the probability density \begin{equation} p(w,w^*) = \langle w|\rho| w \rangle \end{equation} where $|w \rangle$ is a (normalised) coherent state. Given the fact that $w$ is a complex linear combination of position and momentum (in the standard case of the one dimensional harmonic oscillator $ w = (\omega/2 \hbar)^{1/2} q + i (1/2 \hbar \omega)^{1/2} p$), $p(w,w^*)$ can be viewed as a positive, normalised (due to the completeness relation of coherent states) distribution on phase space. This is variously called the Q-symbol, or the Husimi distribution. It can be shown to correspond to a Gaussian smearing of the Wigner function (thus rendering it positive). \par There is an ambiguity in the choice of the coherent states, essentially that they can be defined with respect to an arbitrary state vector in the Hilbert space. Its resolution, by demanding that our information measure is sharpest, will be dealt with shortly. We just comment here that standardly the coherent states can be taken as defined with respect to the vacuum of a harmonic oscillator or, in the case of many dimensions, of an isotropic harmonic oscillator. Then without loss of generality, we can represent $|w \rangle$ as \begin{equation} \langle x|w \rangle = \langle x| {\bf q p} \rangle = \left(2 \pi \hbar \sigma^2 \right)^{-1/4} \exp \left( -\frac{({\bf x}- {\bf q})^2}{4 \hbar \sigma^2} + \frac{i}{\hbar} {\bf p} \cdot {\bf x}\right) \end{equation} The SW entropy is the closest quantum object to the notion of Gibbs entropy (indeed Wehrl called it the classical entropy), in the sense that the coherent states define a cut-off phase space volume, with respect to which a finite and unambiguous notion of entropy can be defined. 
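As a concrete illustration of the definition, here is a minimal numerical sketch (the grid, the units $\hbar=1$ with unit-variance coherent states, and the $dq\,dp/2\pi$ realisation of the measure $Dw Dw^*$ are conventions of our own choosing): for a centred Gaussian pure state of width $\lambda$ it reproduces $I = 1 + \log\frac{\lambda+\lambda^{-1}}{2}$, so a coherent state ($\lambda=1$) sits exactly at the lower bound of unity quoted next.
\begin{verbatim}
import numpy as np

def sw_entropy(lam):
    # SW entropy of psi(x) ~ exp(-x^2/(2 lam^2)), computed from the
    # Husimi density Q(q,p) = |<q,p|psi>|^2 with measure dq dp/(2 pi).
    x = np.linspace(-10, 10, 1001)
    dx = x[1] - x[0]
    psi = (np.pi * lam**2) ** -0.25 * np.exp(-x**2 / (2 * lam**2))
    grid = np.linspace(-6, 6, 121)
    dmu = (grid[1] - grid[0]) ** 2 / (2 * np.pi)
    I = 0.0
    for q in grid:
        phi = np.pi ** -0.25 * np.exp(-(x - q) ** 2 / 2)  # coherent
        for p in grid:
            amp = np.sum(phi * np.exp(-1j * p * x) * psi) * dx
            Q = abs(amp) ** 2          # Husimi density, 0 <= Q <= 1
            if Q > 1e-300:
                I -= Q * np.log(Q) * dmu
    return I

for lam in (1.0, 2.0):
    print(lam, sw_entropy(lam), 1 + np.log((lam + 1 / lam) / 2))
# lam = 1 (a coherent state) gives I = 1; squeezing raises I.
\end{verbatim}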
Its lower bound is determined by two inequalities, \begin{eqnarray} I[\rho] \geq S[\rho] \\ I[\rho] \geq 1 \end{eqnarray} The latter is saturated by Gaussian coherent states (that it is only these states that achieve the minimum is a non-trivial theorem due to Lieb \cite{Lie}), while the former is saturated by thermal states of the harmonic oscillator in the high temperature regime. \par We should also remark that, by definition, the SW entropy of a state remains invariant when acting on the state with the elements of the Weyl group (translations in position and momentum). \par We should finally remark on an important property of the SW relative entropy. This is defined as \begin{equation} I[\rho_2|\rho_1] = \int Dw Dw^* p_{\rho_1}(w,w^*) \left( \log p_{\rho_1}(w,w^*) - \log p_{\rho_2}(w,w^*) \right) \end{equation} We have that \begin{equation} I[\rho_2|\rho_1] - I[\rho_2] + I [\rho_1] = \int Dw Dw^* \left( p_{\rho_2}(w,w^*) - p_{\rho_1}(w,w^*) \right) \log p_{\rho_2}(w,w^*) \leq 0 \end{equation} since by construction $p_{\rho}(w,w^*) \leq 1$. Hence \begin{equation} I[\rho_2|\rho_1] \leq I[\rho_2] - I [\rho_1] \end{equation} We are going to see later that this inequality is saturated when $\rho_1$ is a coherent state and $\rho_2$ a squeezed state with the same center. This property is not true for general probability distributions; in our case it holds by virtue of the particular definition of $p_{\rho}$. \subsection{The classicality criterion} The point we need to address now is in which respect the SW entropy is a measure of phase space classicality, or, put differently, what is implied by the deviation of $I$ from its lower bound $I = 1$. This goes together with the resolution of the ambiguity regarding the choice of coherent states in equation (3.2). \par The first point we need to stress is that when one tries to give a phase space picture of quantum mechanical evolution and discuss classicality, the need inevitably arises to introduce a measure of distance on phase space. Indeed, in the simplest case of a single particle, to determine classicality (viewed as localisation) one compares the area in which the corresponding probability distribution is supported with $\hbar$; this criterion is encapsulated in the Heisenberg uncertainties: the phase space sampling has to be in a phase space shell of area much larger than $\hbar$, or, correspondingly, the state is viewed as classical if the uncertainty is of the order of magnitude of $\hbar$. \par In essence one needs to introduce a metric on the classical phase space. This is exactly what a choice of a family of coherent states does. For, a coherent state is defined as $|pq \rangle = U(p,q) |\xi \rangle$, in terms of any vector of the Hilbert space. As such it defines a mapping $i_{\xi}$ from the phase space to the projective Hilbert space ${\cal RH}$. The latter is a K\"ahler manifold, thus having compatible metric and symplectic structures $g$ and $\Omega$. The pullbacks of these with respect to $i_{\xi}$ form the metric and symplectic structure respectively on the phase space. Hence, any choice of coherent state family defines a distinct metric on phase space with respect to which classicality is to be determined. The question then becomes which choice of metric is suitable for our purposes. \par The answer is that this is largely irrelevant, provided some mild conditions are satisfied.
First of all, we should note that there is an optimisation algorithm for coherent states of any group, so that the uncertainties (or the determinant of the covariance matrix) of the relevant operators are minimal. In the standard case, this corresponds to defining coherent states with respect to the family of Gaussian ground states of some harmonic oscillator potential. But in fact, provided we take a sufficiently localised vector for $ |\xi \rangle$, this is not much of a restriction \footnote{ It is interesting to note that at least in one approach to quantisation (Klauder's coherent state quantisation \cite{Kla}) a metric on the phase space is a primitive ingredient of the quantisation algorithm (so that the phase space can support a Wiener measure). This could mean that there is a preferred choice of an equivalence class of metrics, giving rise to unitarily equivalent quantum theories. In the case of $L^2(R^n)$ these are the homogeneous metrics of zero curvature.}. \par The reason is mainly that one of the important classicality criteria is stability under time evolution. That is, a state is to be considered classical if the determining criterion remains valid during its time evolution. This means that, provided we have made a reasonable choice for our coherent state family, the object one should look at is the relative information $I[\rho(0)|\rho(t)]$, where $\rho(t)$ is the evolved density matrix. This is the object that should remain small if the state $\rho$ is to be assessed as classical (provided of course that the peaks of the phase space distribution approximately satisfy some deterministic equations of motion). Hence, the important criterion is eventually dynamical: we should choose a family of coherent states that is rather stable with respect to time evolution. For harmonic oscillator potentials this entails a particular choice for $| \xi \rangle$. But more generally, given the fact that the Gaussian approximation is good for a large class of potentials, it would be reasonable to consider the Gaussian coherent states for a larger class of systems. Alternatively, for highly non-linear potentials a good choice might be to take for $|\xi\rangle$ the lowest lying eigenstate of the Hamiltonian, even if this is not a Gaussian. This becomes rather a necessity when one is dealing with interacting field theory, as we shall explain in the next chapter, and in general it seems wise when the Hamiltonian is invariant under a group of symmetries, for these will be reflected in the choice of the metric. For similar reasons, it seems more suitable to consider metrics isotropic with respect to ${\bf q}$ for many dimensional systems. \par The question of course remains, what exactly is measured by the SW entropy. The answer we will give here is simple: the SW entropy is a measure of how much the ``shape'' of the phase space distribution associated to a state $\rho$ deviates from the one corresponding to coherent states. What we mean by shape can be intuitively viewed in the Wigner function case. The $1 - \sigma$ contour of the Wigner function corresponding to a coherent state is a circle (the characterisation as a circle follows from the choice of metric associated with this family of coherent states) with area $\hbar/2$. The SW entropy of a state $\rho$ is a quantification of the difference between this circle and the $1- \sigma$ contour associated with $\rho$. In particular, two characteristics are quantified: \\ 1. The area enclosed in the contour. \\ 2. The ``squeezing'' of the contour, i.e.
the ratio of its length to its area (hence how much structure a state develops at the scale of $\hbar$). \par In what follows, we shall try to explain both our interpretation of the SW entropy and its relevance for the characterisation of the classicality of a state. Later we shall give particular examples of our interpretation for the case of squeezed states. \subsubsection{Phase space quasiprojectors} One first needs to give a precise criterion for the notion of classicality of a state, and then examine how the use of the SW entropy allows us to express this criterion in a quantified form. \par The approach we shall follow is very much based on the ideas of Omn\'es \cite{Omn}, himself arguing within the context of the consistent histories approach to quantum mechanics. We believe his line of reasoning allows for a sharp and precise characterisation of classicality. \par In quantum mechanics one says that a state is localised with respect to some observable $A$ if it is an eigenstate of one of $A$'s spectral projections. Actually, being an approximate eigenstate is a sufficient characterisation. That is, we can say that $\psi$ is localised in the range $[a,b]$ of the spectrum of $A$ if $|| E([a,b]) \psi - \psi || < \epsilon |b-a|$ for some $\epsilon \ll 1$. (Note that a metric on the spectrum is implicitly assumed.) \par For phase space localisation one does not have projectors onto phase space ranges, but one can use rather unsharp phase space projectors (these are termed quasiprojectors by Omn\'es). These are essentially positive operator valued measures (POV) on the phase space \cite{Dav}, such that their marginal measures with respect to position and momentum space are respectively approximate position and momentum POV's. \par To define such a family of quasiprojectors one first needs to introduce a metric $g$ on phase space and its corresponding distance function $d$. One can define the quasiprojector corresponding to a phase space cell $C$ through its Weyl symbol \begin{equation} f_C({\bf q}, {\bf p}) = \left(\frac{\hbar}{2 \pi}\right)^{n} \int d{\bf u} d{\bf v}\, e^{ i{\bf q} \cdot {\bf u} + i {\bf p} \cdot {\bf v}} Tr (e^{-i{\bf u} \cdot \hat{{\bf P}}- i {\bf v} \cdot \hat{{\bf Q}}} \hat{P}_C) \end{equation} where $2n$ is the dimension of the phase space. The Weyl symbol ought to correspond to a smeared characteristic function. One can define such a function by considering, for instance, \begin{equation} f_C({\bf q}, {\bf p} ) = \int_C \frac{d {\bf q'} d {\bf p'}}{(2 \pi \hbar)^n} \exp \left( - d^2[({\bf q},{\bf p}); ({\bf q'},{\bf p'})] \right) \end{equation} To each such projector one can associate a number $\epsilon$ which is roughly the ratio of volumes $[M]/[C]$. Here $[\,\cdot\,]$ stands for the volume of a phase space cell, and $M$ is the margin of the phase space cell $C$, defined as the region where the smeared characteristic function of $C$ is appreciably different from 1 (well inside the cell) and 0 (outside the cell). If also $\epsilon > e^{-l^2}$, where $l$ is the maximum curvature radius of the boundary of $C$, then $P_C$ is close to a true projector, since the following properties are satisfied: \\ 1. $|P_C - P_C^2|_{tr} < c \epsilon |P_C|_{tr}$\\ 2. if $C$ and $C'$ do not intersect, $|P_C P_{C'}|_{tr} < c' \epsilon \max (|P_C|_{tr},|P_{C'}|_{tr})$ \\ with $c$ and $c'$ constants of order unity. Such phase space cells (regular in Omn\'es terminology) optimally have a value of $\epsilon$ of the order of $(\hbar/[C])^{n/2}$.
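\par To make the construction concrete, the following sketch (assuming Python; the flat metric $d^2 = [({\bf q}-{\bf q'})^2 + ({\bf p}-{\bf p'})^2]/2\hbar$, the cell size and the thresholds defining the margin are our own illustrative choices) evaluates the smeared characteristic function (3.10) for a square cell in one dimension and estimates the margin fraction $\epsilon = [M]/[C]$, which indeed shrinks like $\sqrt{\hbar}/L$ for a cell of side $2L$:
\begin{verbatim}
# Smeared characteristic function of a cell C = [-L,L] x [-L,L] with a
# flat phase space metric, and the fraction eps of C occupied by the
# margin M (points where f_C is far from both 0 and 1).  Units: hbar = 1.
import math

hbar, L = 1.0, 50.0

def f_C(q, p):
    g = lambda u: 0.5 * (math.erf((L - u) / math.sqrt(2 * hbar))
                         + math.erf((L + u) / math.sqrt(2 * hbar)))
    return g(q) * g(p)

n, margin = 500, 0
for i in range(n):
    for j in range(n):
        q = -L + 2 * L * (i + 0.5) / n
        p = -L + 2 * L * (j + 0.5) / n
        if 0.1 < f_C(q, p) < 0.9:
            margin += 1

eps = margin / n ** 2
print(eps, math.sqrt(hbar) / L)   # eps ~ sqrt(hbar)/L up to an O(1) factor
\end{verbatim}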
For a given family of quasiprojectors (meaning in particular a choice of metric on the phase space), we can view the optimal choice of $\epsilon$ for each cell as a function from the measurable phase space cells to the positive real numbers. We shall call it the {\it classicality function} associated with this choice of metric. \subsubsection{The classicality criterion} Given a family of quasiprojectors, one can say that a state $|\psi \rangle$ is localised within a phase space cell $C$ if it is an $\epsilon$-approximate eigenstate of $P_C$, i.e. if \begin{equation} ||P_C |\psi \rangle - |\psi \rangle || \leq \epsilon \end{equation} But we should remark that localisation in phase space does not imply classicality. As we have seen, localisation in phase space is relative to a choice of a phase space metric. Hence it is not a stringent criterion of classicality, let alone that one still needs to ensure that the state remains localised during its classical evolution. This is an essential requirement, which largely removes the redundancy due to the freedom of choosing a phase space metric. \par Hence a classicality criterion ought to read as follows: \\ \\ A pure state $\psi$ is considered to exhibit classical behaviour in some time interval $I$ if, with respect to some choice of a family of quasiprojectors, it is $\epsilon$-localised in phase space cells $C_t$, such that \\ 1. $C_t$ is correlated with $C_{t'}$ by the classical equations of motion, \\ 2. $[C_t] \ll [C_I]$, where $C_I$ is the smallest $\epsilon$-regular phase space cell that contains the union of all $C_t$'s. \\ \\ The second condition is added here so that time evolution is not trivial, i.e. there is indeed some meaning to a coarse grained description of the classical equations of motion. Another important point stressed here is that classicality is contingent upon a particular time interval, outside of which phenomena of wave packet spreading might invalidate the localisation condition. This is apparent even in the case of free particle evolution, as we shall examine later. \par The above criterion was for pure states. In the case of mixed states (and relaxing the condition of unitary evolution), it should be generalised to include density matrices. This is straightforwardly done by substituting for the approximate eigenstate condition (3.11) the requirement \begin{equation} || P_C \rho P_C - \rho ||_{tr} \leq \epsilon \end{equation} the rest following as before. \par The above definitions contain nothing more than the intuitive idea that a state is classical if its Wigner function exhibits a number of sufficiently concentrated peaks, each of which follows with some degree of approximation the classical equations of motion. Such a criterion has been widely used in the literature. The point we insist on is rather the importance of the introduction of the metric on the phase space, as the object determining localisation. Intuitive arguments based on the uncertainty principle might usually be sufficient for the determination of classicality, but what one may overlook in such considerations (particularly when one is dealing with many dimensional or field systems) is the loss of predictability (i.e. the large growth of fluctuations) due to extreme squeezing in some directions of the phase space distribution. Such a phenomenon will generically cause the state not to be an approximate eigenfunction of the relevant phase space projector, thus invalidating our criterion of classicality.
This is particularly true in recent discussions on the classicalisation of cosmological quantum fields \cite{Star}. \par It is exactly at this point that the SW entropy proves to be a meaningful calculational tool. For, as we argued, it does not only measure the spread of the Wigner function, but also its shape. In other words, the SW entropy measures how well the mean values of Ehrenfest's theorem are approximated by the classical equations of motion. The time parameter associated with the increase of the SW entropy (whenever this increases) is essentially the parameter determining the breakdown of the classical approximation. \par Finally, we should remark that a choice of coherent states naturally defines a family of quasiprojectors by \begin{equation} P_C = \int_C \frac{d {\bf p} d {\bf q}}{(2 \pi \hbar)^n} | {\bf qp} \rangle \langle {\bf qp}| \end{equation} These have actually been used by Omn\'es in the context of consistent histories to prove a semiclassical theorem. In view of our previous discussion, we can remark that computing the relative information between an initial coherent state and the evolved state at time $t$ provides a good measure of how much a particular Hamiltonian preserves or degrades classical predictability. \subsubsection{Estimating the SW entropy} Before examining some concrete examples, we should first examine how the phase space spread of a state $\psi$ is encoded in the SW entropy. \par Let us consider first the case of $\psi$ being an approximate eigenstate of a phase space projector $P_C$ with classicality parameter $\epsilon$. The probability distribution associated to $P_C$, namely $\langle z|P_C|z \rangle / Tr P_C$, is within an approximation of order $\epsilon$ the characteristic function of $C$ divided by the trace. But this is also a smearing of the distribution function corresponding to its eigenstate $\psi$. Hence, due to the concavity of the entropy, we have \begin{equation} I[\psi] \leq \log ( Tr P_C) + O(\epsilon) = \log \frac{[C]}{(2 \pi \hbar)^n} + O(\epsilon) \end{equation} which is essentially the logarithm of the number of ``classical states'' within the phase space volume $C$. Reasoning inversely, if for a state $\psi$ (due for instance to time evolution) the SW entropy becomes much larger than $\log \frac{[C]}{(2 \pi \hbar)^n}$, the corresponding classicality parameter for its time evolution grows essentially as fast as $I[\psi]$, hence becoming of the order of, or larger than, unity.
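\par A rough numerical illustration of the estimate (3.14) (a sketch assuming Python with numpy; the cell size and the grids are our own choices, in units where $\hbar = 1$ and phase space is coordinatised by the complex variable $w$, so that $[C]/2\pi\hbar$ becomes the cell area divided by $\pi$): a density matrix spread approximately uniformly over a cell $C$, built as a uniform mixture of coherent states centered in $C$, has SW entropy close to $\log \left( [C]/2\pi\hbar \right)$.
\begin{verbatim}
# Mixture of coherent states filling a square cell in the w-plane; its
# Husimi density is p(w) = (1/N) sum_i exp(-|w - w_i|^2), and the SW
# entropy should be close to log(Area/pi).
import numpy as np

half = 8.0                                   # cell C = [-8,8]^2
centers = [x + 1j * y
           for x in np.linspace(-half, half, 20)
           for y in np.linspace(-half, half, 20)]

xs = np.linspace(-14.0, 14.0, 401)
X, Y = np.meshgrid(xs, xs)
W = X + 1j * Y
dA = (xs[1] - xs[0]) ** 2

p = np.zeros_like(X)
for wi in centers:                           # Husimi of the mixture
    p += np.exp(-np.abs(W - wi) ** 2)
p /= len(centers)

mask = p > 1e-300
I = -np.sum(p[mask] * np.log(p[mask])) * dA / np.pi
print(I, np.log((2 * half) ** 2 / np.pi))    # comparable, up to edge effects
\end{verbatim}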
\subsubsection{Linear canonical transformations} The evaluation of the SW and relative SW entropies for states that are obtained by implementing a linear canonical transformation on a coherent state is quite important for a number of reasons. First, it gives an intuitive example of the way the entropy is connected with the deformation of the shape of the $1 - \sigma$ contour. Second, this type of transformation appears naturally in the time evolution of physically interesting systems: Hamiltonian evolution in the Gaussian approximation and, in particular, quantum fields in non-static spacetimes. Our results in this section will therefore be relevant for the description of such cases. \par Recall that, given a family of (Gaussian) coherent states $|w \rangle$ on some Hilbert space $L^2(R^n)$, the annihilation operators are naturally defined by \begin{equation} \hat{a}(\xi) |w \rangle = \xi^*_i w^i |w \rangle \end{equation} with $\xi, w \in C^n$, and can be written as $\hat{a}(\xi) = \hat{a}^i \xi^*_i$. \par A linear canonical transformation is implemented by a unitary operator $S = e^{iA}$, where $A$ is a self-adjoint operator quadratic in $\hat{a}$ and $\hat{a}^{\dagger}$, \begin{equation} S \hat{a}(\xi) S^{-1} = \hat{a}(A^{\dagger} \xi) + \hat{a}^{\dagger}(B^{\dagger} \xi) \end{equation} where $A$ and $B$ are $n \times n$ complex matrices to be viewed as linear operators on the underlying real vector space $R^{2n}$. They are the parameters of the squeezing transformation, and preservation of the canonical commutation relations enforces the Bogolubov identities \begin{eqnarray} A^{\dagger}A - B^{\dagger}B = 1 \\ A A^{\dagger} - B B^{\dagger} = 1 \\ A^{\dagger} \bar{B} = B^{\dagger} \bar{A} \end{eqnarray} where we use the bar to denote complex conjugation of a matrix. It is well known that the set of these transformations forms a representation on our Hilbert space of the symplectic group $Sp(2n,R)$. Transformations with $B = 0$ are sometimes denoted as rotations (forming a $U(n)$ subgroup) and ones generated by operators $A$ not containing terms mixing $a$ and $a^{\dagger}$ as squeezing transformations. It is straightforward to check that the matrix elements of $S$ in a coherent state basis are given by \begin{eqnarray} \langle z|S|w \rangle = \left( \det(1-\bar{K}K) \right)^{1/4} \nonumber \\ \exp \left(- |z|^2/2 - |w|^2/2 + \frac{1}{2} z^* K z^* + \frac{1}{2} w \bar{K} w + z^* A^{-1} w \right) \end{eqnarray} Here $K$ stands for the matrix \begin{equation} K = A^{ -1} \bar{B} \end{equation} \par The transformed vacuum $|0;A,B \rangle$ is defined by the action of $S$ on $|0 \rangle$, and a transformed state $|w;A,B \rangle$ by the action of the operator $U(w)$ of the Weyl group on $|0;A,B \rangle$. Since the SW entropy is invariant under phase space translations, one can use the transformed vacuum for its calculation. \par The corresponding probability distribution is \begin{eqnarray} p_{0;A,B}(w,w^*) = \left( \det(1 - \bar{K}K) \right)^{1/2} \nonumber \\ \exp \left( - |w|^2 + \frac{1}{2} w^* K w^* + \frac{1}{2} w \bar{K}w \right) \end{eqnarray} From this one gets the following expression for the SW entropy of a transformed state (for brevity we just use $A$ and $B$ as arguments): \begin{equation} I[A,B ] = 1 + \log [\det(1 - \bar{K}K)]^{-1/2} = 1 - \frac{1}{2} Tr \log (1 - \bar{K}K) \end{equation} Let us examine some special illustrative cases. \\ \\ {\bf Pure rotation:} In such a case $K = 0$ and hence $I[A,0] = 1$ (the transformed state is a coherent state). \\ \\ {\bf One dimensional case:} The general squeezing transformation is of the form \begin{equation} S \hat{a} S^{-1} = \cosh r\, \hat{a} - e^{i \phi} \sinh r\, \hat{a}^{\dagger} \end{equation} in terms of the positive real parameter $r$ and the phase $\phi$. In this case $K = - e^{i \phi} \tanh r$ and the SW entropy for the squeezed states reads \begin{equation} I[r,\phi] = 1 + \log \cosh r \end{equation} This illustrates our earlier arguments, since $r$ determines the eccentricity of the ellipse corresponding to the $1- \sigma$ contour of the squeezed state Wigner function. For large $r$ the ellipse becomes extremely elongated in a direction determined by $\phi$, and the SW entropy grows linearly with $r$.
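\par The one dimensional result can be checked directly (a minimal sketch assuming Python with numpy; the grids and parameter values are our own choices): numerically integrating $-p\log p$ for the Gaussian distribution (3.22) with $K = -e^{i\phi}\tanh r$ reproduces both $1 + \log\cosh r$ and the trace formula (3.23).
\begin{verbatim}
# SW entropy of a one-dimensional squeezed vacuum: numerical integration
# of -p log p with measure d^2w / pi, against the closed forms
# 1 + log cosh r  and  1 - (1/2) log(1 - |K|^2),  K = -exp(i phi) tanh r.
import numpy as np

r, phi = 1.2, 0.7
K = -np.exp(1j * phi) * np.tanh(r)

xs = np.linspace(-15.0, 15.0, 1501)
X, Y = np.meshgrid(xs, xs)
W = X + 1j * Y
dA = (xs[1] - xs[0]) ** 2

# p(w) = det(1 - conj(K) K)^{1/2} exp(-|w|^2 + Re(conj(K) w^2)) in 1d
p = np.sqrt(1 - abs(K) ** 2) * np.exp(-np.abs(W) ** 2
                                      + np.real(np.conj(K) * W ** 2))
mask = p > 1e-300
I_num = -np.sum(p[mask] * np.log(p[mask])) * dA / np.pi
print(I_num, 1 + np.log(np.cosh(r)), 1 - 0.5 * np.log(1 - abs(K) ** 2))
\end{verbatim}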
\\ \\ {\bf Two-mode squeezing:} There is a 6-parameter family of squeezing transformations in two dimensions. A widely studied case is the Caves-Schumaker squeezing, familiar from quantum optics. This is generated by the unitary operator \begin{equation} S = \exp \left( r e^{i \phi} \hat{a}^{\dagger}_1 \hat{a}^{\dagger}_2 - r e^{-i \phi} \hat{a}_1 \hat{a}_2 \right) \end{equation} and corresponds to the matrices \begin{eqnarray} A= \left(\begin{array}{cc} \cosh r & 0 \\ 0 & \cosh r \end{array} \right) \qquad B = \left( \begin{array}{cc} 0 & - \sinh r e^{i \phi} \\ - \sinh r e^{i \phi}& 0 \end{array} \right) \end{eqnarray} which yield the value \begin{equation} I[r, \phi] = 1 + 2 \log \cosh r \end{equation} Note that here the parameter $r$ has a different physical interpretation. If our system represents two (non-identical) one dimensional particles, then the parameter $r$ is a measure of the entanglement of the total state. This is a non-classical feature; if our classical limit is to correspond to two classical particles, the entanglement between them must be minimal. Hence the SW entropy can also quantify this deviation from classicality (provided of course that the coherent state family with respect to which it is defined is constructed from a factorised vacuum state). \subsubsection{Relative entropy} It is interesting also to compute the relative SW entropy between a coherent and a transformed state. Without loss of generality one can consider the coherent state to be the vacuum. \par The probability distribution associated to the transformed state $|z;A,B \rangle $ is \begin{eqnarray} p_{z;A,B}(w,w^*) = \left( \det(1 - \bar{K}K) \right)^{1/2} \nonumber \\ \exp \left( - |w-z|^2 + \frac{1}{2} (w^* - z^*) K (w^* - z^*) + \frac{1}{2} (w - z) \bar{K}(w - z) \right) \end{eqnarray} hence the relative entropy with respect to the vacuum is \begin{equation} I[0|z;A,B] = \log [\det(1 - \bar{K}K)]^{-1/2} + |z|^2 + \frac{1}{2} (z^* K z^* + z \bar{K}z) \end{equation} The relative entropy is thus the sum of a term coming purely from the squeezing plus a term containing the contribution of the Weyl translation. Note that in the case of $z = 0$ (pure linear transformation) the inequality (3.8) is saturated. It is a reasonable conjecture that this is true only for this particular class of states, i.e. for Gaussians with the same center. \subsection{Squeezing induced by quantum evolution} \subsubsection{The Gaussian approximation} We now come back to our main point. We shall consider the evolution of the SW entropy for closed quantum systems whose evolution is governed by a Hamiltonian $H = {\bf p}^2/2m + V({\bf q})$, in the Gaussian approximation. The latter consists essentially in approximating the evolution of Gaussian states by the action of a linear canonical transformation. This is of course exact for systems evolving under a quadratic Hamiltonian, and a good approximation for systems evolving in a macroscopically varying potential (at least within a particular time interval, while the spread of the wave function has not become extremely large). We shall see that the evaluation of the SW entropy gives a self-consistency check for the validity of the Gaussian approximation. \par In this section it is more convenient to switch back to the Schr\"odinger representation for our Hilbert space vectors. Choosing our coherent state basis by the relation \begin{equation} w^i = (2 \hbar \sigma^2)^{-1/2} q^i + i (\sigma^2/2 \hbar)^{1/2} p^i \end{equation} i.e.
choosing an isotropic and factorised Gaussian defining state, we get the following expression for a transformed vector $\psi$: \begin{eqnarray} \psi({\bf x}) = \left( \det (2 \pi \hbar M^* M) \right)^{-1/4} \nonumber \\ \exp \left( - \frac{1}{4 \hbar} (x-q)^i (LM^{-1})_{ij} (x-q)^j + \frac{i}{\hbar} p_i x^i \right) \end{eqnarray} where the matrices $M$ and $L$ are such that the position and momentum covariance matrices are proportional to $MM^*$ and $L^*L$ respectively. Since the expression is invariant under a $U(n)$ matrix acting on the right on both $L$ and $M$, we have the freedom to define them in terms of the matrix $K$ of equation (3.21) through the following relations \begin{eqnarray} Q = LM^{-1} = (2 \sigma^2)^{-1} (1+K)(1-K)^{-1} \\ M = (Re Q)^{-1/2} \\ L = QM \end{eqnarray} or, what is more important, the inverse relationship \begin{equation} K = (1 + 2 \sigma^2 Q)^{-1} (2 \sigma^2 Q - 1) \end{equation} \par Now, for the Gaussian approximation we shall utilise a result of Hagedorn \cite{Hag}. For a large class of physically relevant potentials (bounded from below, growing slower than a Gaussian) and a time interval $[0,T]$, the Gaussian (3.32) evolves to another Gaussian of the same type, with the center determined by the classical equations of motion, a phase given by the corresponding classical action, and the matrices $M(t)$, $L(t)$ evolving according to the equations \begin{eqnarray} \frac{d}{dt}M(t) = \frac{i}{2m} L(t) \\ \frac{d}{dt}L(t) = 2 i V^{(2)}(q(t)) M(t) \end{eqnarray} In fact they can be shown to satisfy \begin{eqnarray} M(t) = \frac{\partial q(t)}{\partial q(0)} M(0) + \frac{i}{2} \frac{\partial q(t)}{\partial p(0)} L(0) \\ L(t) = \frac{\partial p(t)}{\partial p(0)} L(0) - 2 i \frac{\partial p(t)}{\partial q(0)} M(0) \end{eqnarray} \par To study the SW entropy production, we will consider the evolution of an initial coherent state (hence $K(0) = 0$), so that $M(0) = (2 \sigma^2)^{1/2}\, 1$ and $L(0) = (2 \sigma^2)^{-1/2}\, 1$. \subsubsection{One dimensional case} In the case of a free particle the complex numbers $M$ and $L$ read \begin{eqnarray} M(t) = (2 \sigma^2)^{1/2} + \frac{i}{2m} (2 \sigma^2)^{-1/2} t \\ L(t) = (2 \sigma^2) ^{-1/2} \end{eqnarray} Using the equations (3.39), (3.40) we find that the SW entropy at large times ($ t \gg \sigma^2 m $) behaves like \begin{equation} I \simeq 1 + \log \frac{t}{8 \sigma^2 m} \end{equation} For a free particle, time evolution thus produces strong squeezing, towards localising the particle in momentum (actually, for free evolution the momentum basis is some sort of pointer basis, since superpositions of two states with different momenta are asymptotically suppressed, though not exponentially as in the presence of an environment). Hence classical predictability eventually breaks down for the free particle, though rather slowly. In view of our previous discussion, the classicality parameter $\epsilon$ increases logarithmically with time. More precisely, taking into account equation (3.14), a state that can initially be considered as localised in a volume $V$ of phase space will stop being localised (an approximate eigenstate of the corresponding quasiprojector) after a time \begin{equation} t \simeq \frac{4 \sigma^2 mV}{ \pi \hbar} \end{equation} Note the persistence of predictability for higher mass particles.
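\par This growth law is easily verified (a sketch assuming Python with numpy; the parameter values are arbitrary) by combining the free particle solutions for $M(t)$ and $L(t)$ with the relations (3.33) and (3.36):
\begin{verbatim}
# Free particle: I(t) = 1 - (1/2) log(1 - |K|^2) from Q = L/M and
# K = (2 sigma^2 Q - 1)/(2 sigma^2 Q + 1), against the asymptote
# 1 + log(t / 8 sigma^2 m).
import numpy as np

m, sigma2 = 1.0, 0.5
for t in [1.0, 10.0, 100.0, 1000.0]:
    M = np.sqrt(2 * sigma2) + 1j * t / (2 * m) / np.sqrt(2 * sigma2)
    L = 1 / np.sqrt(2 * sigma2)
    Q = L / M
    K = (2 * sigma2 * Q - 1) / (2 * sigma2 * Q + 1)
    I = 1 - 0.5 * np.log(1 - abs(K) ** 2)
    print(t, I, 1 + np.log(t / (8 * sigma2 * m)))  # columns merge at large t
\end{verbatim}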
\par For a harmonic oscillator Hamiltonian, the choice of the standard coherent states with $ \sigma^2 = (4m \omega)^{-1}$ gives constant SW entropy, corresponding to the maximum possible predictability. It is nonetheless instructive to see what would happen had we not been wise enough to make this choice. The relevant quantities now read \begin{eqnarray} M(t) = (2 \sigma^2)^{1/2} \cos \omega t + \frac{i}{2m \omega} (2 \sigma^2)^{-1/2} \sin \omega t \\ L(t) = (2 \sigma^2)^{-1/2} \cos \omega t + 2 i m \omega (2 \sigma^2)^{1/2} \sin \omega t \end{eqnarray} It is easy now to verify that the SW entropy remains bounded for all times, taking values around $ 1 + |\log (4 m \omega \sigma^2)|$. Hence, the harmonic oscillator potential generally preserves classicality for a large class of phase space localised states, the SW entropy remaining bounded, provided a proper choice of the quasiprojectors monitoring the classical evolution is made. This also verifies that our classicality estimation based on the SW entropy is sufficiently stable with respect to the choice of the coherent state family. \par Of interest is also the case of the inverse harmonic oscillator potential. In the context of inflationary cosmology, it is sometimes stated that a number of modes that evolve for a time as inverse harmonic oscillators undergo amplification of their fluctuations and hence become ``classicalised''. As mentioned in the introduction, we believe that in such claims there is a confusion between the notion of large fluctuations and classicality. Amplified quantum mechanical fluctuations are {\it not} classical fluctuations. They only imply lack of predictability, which can be a purely quantum mechanical phenomenon \footnote{To see this it is sufficient to compute the time evolution of a superposition of two spatially localised states under this potential. There is no way one could interpret their amplified fluctuations as classical, however large they might become.}. For a potential $V(q) = - \frac{1}{2} m k^2 q^2$, it is easy to verify that for coherent states defined by $ \sigma^2 = (4m k)^{-1}$ the evolution is a squeezing as in equation (3.24), with parameters $r = kt $ and $\phi = \pi/2$. Hence the SW entropy evolves as \begin{equation} I(t) = 1 + \log \cosh kt \end{equation} and asymptotically grows linearly with $kt$. \par In the case of general potentials in one dimension, one can make some qualitative predictions. If $V(q)$ is bounded from above by $V_m$, then for particles with $E \gg V_m$ the results of the free particle case ought to be relevant: degradation of predictability growing logarithmically with time. For $U$-shaped potentials and low energies (hence mimicking a harmonic oscillator) predictability ought to remain good. Rugged potentials that vary within microscopic scales rapidly destroy predictability (tunnelling effects plus caustics typically weaken the effectiveness of the Gaussian approximation). \subsubsection{Higher dimensional systems} \par When considering systems with many degrees of freedom, equations (3.39), (3.40) can be the basis of a number of qualitative estimations. For instance, if the classical equations of motion admit runaway solutions (positive Lyapunov exponents), the matrices $M$ and $L$ are going to have entries increasing exponentially with time, and typically a behaviour of the type of the inverse harmonic oscillator is to be expected. \par Hence, the SW entropy is going to increase linearly at long times, eventually bringing again a breakdown of classicality.
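\par The inverse oscillator result, the simplest runaway case, can be reproduced by directly integrating equations (3.37)--(3.38) (a sketch assuming Python with numpy; the fourth-order Runge-Kutta step and the parameter values are our own choices):
\begin{verbatim}
# Integrate dM/dt = (i/2m) L, dL/dt = 2 i V''(q) M with V'' = -m k^2,
# then compare I(T) with 1 + log cosh(kT).
import numpy as np

m, k = 1.0, 1.0
sigma2 = 1.0 / (4 * m * k)
M = np.sqrt(2 * sigma2) + 0j
L = 1 / np.sqrt(2 * sigma2) + 0j
dt, T = 1.0e-4, 3.0

def deriv(M, L):
    return 1j * L / (2 * m), -2j * m * k ** 2 * M

for _ in range(int(T / dt)):          # fourth-order Runge-Kutta
    k1M, k1L = deriv(M, L)
    k2M, k2L = deriv(M + 0.5 * dt * k1M, L + 0.5 * dt * k1L)
    k3M, k3L = deriv(M + 0.5 * dt * k2M, L + 0.5 * dt * k2L)
    k4M, k4L = deriv(M + dt * k3M, L + dt * k3L)
    M += dt * (k1M + 2 * k2M + 2 * k3M + k4M) / 6
    L += dt * (k1L + 2 * k2L + 2 * k3L + k4L) / 6

Q = L / M
K = (2 * sigma2 * Q - 1) / (2 * sigma2 * Q + 1)
I = 1 - 0.5 * np.log(1 - abs(K) ** 2)
print(I, 1 + np.log(np.cosh(k * T)))  # agree to integration accuracy
\end{verbatim}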
Hence we arrive at a conclusion, noted by many authors, that quantum mechanical systems whose classical analogue exhibits chaotic behaviour (meaning exponential dependence on the initial conditions) typically do not have a good classical limit \cite{Omn}. It is interesting to note that the SW entropy plays the role of a measure of mixing. By this we mean the thin spreading of an initial phase space distribution into a given partition of phase space, so that its components eventually occupy a larger and larger number of partition cells. This suggests that the SW entropy plays a role similar to the classical Kolmogorov-Sinai entropy of classical dynamical systems (of course without sharing its invariance properties) and to related measures of mixing. \par In view of our inequality (3.8), the difference between the SW entropies at the initial time and at time $t$ can be viewed as an estimate of the upper bound to the relative entropy between the classically evolved state (Weyl-transformed according to the classical equations of motion) and the full quantum evolution. \par A further remark is in order here. Classicality, and in particular predictability, is a ``non-perturbative'' issue. Even in the Gaussian approximation, knowledge of the full solutions to the classical equations of motion is necessary in order to establish whether or not there exists a gradual deterioration of the amount of predictability. It is well known that, generically, perturbative solutions to the classical equations of motion are valid only for a short interval of time. The same argument is more pointedly true in the case of quantum open systems, when one wants to study environmentally induced decoherence and classicality \cite{Har}. \subsection{ Open systems} So far our discussion has concentrated on closed systems. When our quantum system is coupled to an environment, the evolution is not unitary and a class of interesting phenomena related to predictability appears. Most prominent amongst them is the emergence of superselection rules, namely that some classes of environments produce a rapid diagonalisation of the density matrix in some phase space basis. \par So, the examination of classicality in the presence of an environment requires, besides the study of predictability preservation, a quantification of how close the density matrix is to being diagonal in a phase space basis (and of what is meant by such a basis). In this context, the SW entropy has been used before: a lower bound has been computed for a particular class of open systems \cite{HaAn} (see also \cite{An,HaDo} for a consistent histories analysis of the classical behaviour in such systems and \cite{HaAn,HuZh} for other measures of predictability). \par Usually the discussion is carried out within the formalism of the Caldeira-Leggett model, where a one-dimensional particle evolves under a potential $V(x)$ and in contact with a thermal bath of harmonic oscillators. In the high temperature regime, the corresponding master equation is Markovian and reads \begin{eqnarray} \frac{\partial \rho}{\partial t} &=& \frac{1}{i \hbar} [ \frac{p^2}{2 M} + V(x) , \rho ] \nonumber \\ &-&\frac{i \gamma}{\hbar} [x,\{ \rho,p\}] - \frac{D}{\hbar^2} [x,[x,\rho]] \end{eqnarray} where $D = 2M \gamma k T$, with $k$ the Boltzmann constant and $\gamma$ a dissipation constant depending on the details of the coupling.
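\par In the Gaussian regime this master equation closes on the first and second moments, which makes the entropy evolution easy to follow numerically. The sketch below (assuming Python with numpy; the parameter values, the free particle choice $V = 0$, and the expression of the Husimi covariance as the Wigner covariance broadened by the coherent state widths are our own choices, in the conventions of section 3.2) propagates the covariances and evaluates the SW entropy of the resulting Gaussian state:
\begin{verbatim}
# Second-moment evolution for V = 0: d<x^2>/dt = 2c/M_m,
# dc/dt = <p^2>/M_m - 2 gamma c, d<p^2>/dt = -4 gamma <p^2> + 2D,
# with c = <xp + px>/2 and D = 2 M_m gamma kT.  The Husimi covariance
# adds the coherent state widths (hbar sigma^2, hbar/(4 sigma^2)) to the
# Wigner covariance; then I = 1 + (1/2) log(det Sigma_H / hbar^2).
import numpy as np

hbar, M_m, gamma, kT, sigma2 = 1.0, 1.0, 0.1, 10.0, 0.5
D = 2 * M_m * gamma * kT

xx, c, pp = hbar * sigma2, 0.0, hbar / (4 * sigma2)  # initial coherent state
dt, T = 1.0e-4, 50.0
for _ in range(int(T / dt)):
    xx, c, pp = (xx + dt * 2 * c / M_m,
                 c + dt * (pp / M_m - 2 * gamma * c),
                 pp + dt * (-4 * gamma * pp + 2 * D))

Sigma_H = np.array([[xx + hbar * sigma2, c],
                    [c, pp + hbar / (4 * sigma2)]])
I = 1 + 0.5 * np.log(np.linalg.det(Sigma_H) / hbar ** 2)
print(pp, I)   # <p^2> relaxes to M_m kT; I grows through diffusion in x
\end{verbatim}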
\par The analysis of the behaviour of this model has been quite thorough, so instead of giving a full treatment we shall restrict ourselves to some remarks that are particularly relevant to our approach and have not been made in the aforementioned references. \par One question of relevance is whether the characterisation of predictability for various potentials given in section 3.3 changes with the introduction of a thermal environment. Now, for quadratic potentials any coherent state evolves into a Gaussian, the center of which is given by the classical equations of motion (actually there is a stronger statement involving the Gaussian approximation for general potentials \cite{An}, but we shall not need this here). Hence the relevant object is the density matrix \begin{equation} \rho(x,y) = \left( \frac{\alpha (1+s)}{\pi \hbar} \right)^{1/2} \exp \left( - \frac{\alpha}{2 \hbar} (x^2 + y^2) - \frac{\alpha s}{\hbar} xy + i \frac{\alpha r}{2 \hbar} (x^2 - y^2) \right) \end{equation} where $ 0 \leq s < 1$. Up to a Weyl transformation, this is the most general Gaussian density matrix. It is straightforward to compute the corresponding SW entropy, \begin{equation} I = 1 + \frac{1}{2} \log \left( \frac{ \frac{1}{4 \sigma^2} + \sigma^2 \alpha^2(1 - s^2 + r^2) + \alpha}{2 \alpha (1+s)} \right) \end{equation} Note that the parameter $s$ must lie between $0$ and $1$ in order that the function (3.49) corresponds to a true (positive) density matrix. Now, let us consider the case of a harmonic oscillator. It is known that for $t \gg \gamma^{-1}$ any initial state approaches exponentially the thermal state, for which (taking again the natural choice $ \sigma^2 = (4m \omega)^{-1}$) \begin{equation} I \simeq 1 + \log \frac{kT}{ \hbar \omega } \end{equation} which is the classical Gibbs entropy. This implies that as long as our phase space sampling volume $V$ is much larger than $kT/\omega$ (the size of the thermal fluctuations), one can meaningfully talk about the particle moving according to classical dissipative equations of motion, the fluctuations around predictability eventually becoming fully thermal (see \cite{An} for more details). \par In the free particle case, the interest lies in whether the modification due to the environment is sufficient to cause a reduction in the asymptotic rate of increase of the SW entropy. Using the extensive calculations in \cite{HuZh} we find that asymptotically \begin{equation} I \simeq 1 + \log \frac{kTt}{\hbar \gamma} \end{equation} Hence the free particle exhibits again a logarithmic increase of its entropy in time, essentially destroying the degree of predictability in the same manner as in the no-environment case. The important point, though, is that the corresponding fluctuations are to be interpreted as thermal (hence classical) rather than fully quantum as in the former case. \par Now, in the case of an open system the SW entropy as a measure of fluctuations cannot distinguish between the fluctuations induced by the environment and the ones intrinsic to the system itself. What we would like is a quantification of the degree to which the distribution function of a quantum open system behaves as a classical one. The key to an answer lies in equation (3.4): the SW entropy is always larger than the von Neumann entropy. The latter encompasses the degree of mixing of the quantum state, hence their difference ought to be a measure of the purely quantum mechanical unpredictability.
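\par A quick consistency check of this statement (a sketch assuming Python with numpy; the closed forms for the thermal oscillator state, the Wehrl entropy $I = 1 + \log(1+\bar{n})$ and the von Neumann entropy $S = (1+\bar{n})\log(1+\bar{n}) - \bar{n}\log\bar{n}$ with mean occupation number $\bar{n}$, are standard results) shows that $I - S$ indeed tends to zero in the high temperature regime:
\begin{verbatim}
# I - S for the harmonic oscillator thermal state, as a function of the
# mean occupation number n ~ kT / hbar omega at high temperature.
import numpy as np

for n in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    I = 1 + np.log(1 + n)                        # Wehrl (SW) entropy
    S = (1 + n) * np.log(1 + n) - n * np.log(n)  # von Neumann entropy
    print(n, I - S)                              # ~ 1/(2n) -> 0 as n grows
\end{verbatim}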
For the Gaussian density matrix (3.49) the von Neumann entropy reads \begin{equation} S = - \log(1-s) - \frac{s}{1-s} \log s \end{equation} Indeed, one can check that both for the free particle and for the harmonic oscillator in the Caldeira-Leggett environment, at long times \begin{equation} I - S \simeq 0 \end{equation} hence the fluctuations around predictability of the particle are asymptotically classical ones. \subsubsection{A criterion for the pointer basis} It is often stated in the literature that coherent states are essentially the pointer basis in which a density matrix becomes diagonal due to the interaction with the environment, these being the natural choice of phase space localised states. But some caution should be exercised on that point. A large class of density matrices can be diagonalised in a coherent state basis, the latter being overcomplete. The requirement is essentially the existence of the P-symbol $f({\bf q},{\bf p})$, given by \begin{equation} \hat{\rho} = \int \frac{d {\bf p} d{\bf q}}{(2 \pi \hbar)^n} f({\bf q},{\bf p}) |{\bf pq} \rangle \langle {\bf pq}| \end{equation} Now, recall the property (2.7) of quantum mechanical information: if a density matrix is diagonal in a given basis, the corresponding information is equal to the von Neumann entropy. This can provide a criterion for determining the pointer basis. Indeed, consider the SW entropy defined with respect to a particular coherent state family, labeled by the defining vector $\xi$. One's task should then be to determine the family by requiring the minimisation of $I_{\xi} - S$. \par One can use a result due to Wehrl to improve the characterisation. If the P-symbol of a density matrix exists and is positive, then there exists a lower bound to the von Neumann entropy, \begin{equation} S \geq - \int \frac{d {\bf p} d{\bf q}}{(2 \pi \hbar)^n} f({\bf q},{\bf p}) \log f({\bf q},{\bf p}) = I_P \end{equation} Hence the quantity $I - I_P$, whenever $I_P$ exists, is an upper bound to the quantity $I - S$ characterising the pointer basis. Now, if a P-symbol is positive, then its distance in norm (determined by the coherent state metric) from the Q-symbol is of the order of $\hbar$. Hence, a sufficient criterion for the determination of the pointer basis is that the P-symbol corresponding to that basis becomes positive rapidly for all choices of initial states. Such a basis has been constructed in \cite{HaZo}, using ideas from the quantum state diffusion picture of quantum open systems, and it consists of Gaussian states with a small value of the squeezing parameter. \par We should remark at this point that information theoretic criteria seem to be strong enough to discuss the issue of the pointer basis without referring to the notion of the reduced density matrix. For instance, in a combined system living in a Hilbert space $H_1 \otimes H_2$, we could verify that the system $1$ gets asymptotically diagonalised in the basis of the operator $\hat{A}$ when the quantity $I_{A \otimes 1} - S$ is close to zero. This could also be generalised to the case where there is no natural splitting between system and environment, so that the reduced density matrix is not naturally defined. One then considers the information associated to some self-adjoint operator $\hat{A}$, typically with degenerate spectrum so that a degree of coarse graining is incorporated, and compares it to the von Neumann entropy.
If technically feasible, this would provide an alternative way of checking, for instance, the classicalisation of hydrodynamic variables or of variables corresponding to a Boltzmann-type coarse graining \cite{CaHu2}. \par Indeed, this construction might easily be seen in the context of phase space classicality. One can construct a lattice on phase space, consisting of, say, cubic cells $C_i$ with volume much larger than $\hbar$, and then consider the operator $\hat{A} = \sum_i \lambda_i \hat{P}_{C_i}$, where the $\lambda_i$ are real numbers and the $P_{C_i}$ the relevant quasiprojectors. Using the properties of quasiprojectors it is an easy task to verify that the corresponding information $I_A$ is generically of the order of $\epsilon$ for phase space localised states, $\epsilon$ being the classicality function of the quasiprojectors. \par For similar ideas, using the von Neumann entropy to identify the most stable states in evolution under an environment, the reader is referred to \cite{Zur}. \section{Field theory} In this section we shall try to examine whether our results can be generalised to the field theory case. A quantum field, being a system with an infinite number of degrees of freedom (an infinite dimensional phase space), is expected to have a much more complicated behaviour. In the case of interacting fields the study of classical behaviour is much more complicated, since, as we discussed earlier, the notion of predictability is a non-perturbative issue. \par The SW entropy is expected to play again an important role in the identification of classical predictability. But we should note that a quantum field is itself a thermodynamical system (due to its infinite number of degrees of freedom); hence it would be important to see whether the SW entropy is connected to its proper thermodynamical entropy. It would indeed be an appealing picture if we could (even in the simple free field case) transfer the notion of entropy due to mixing in phase space to the field theory case as well. \subsection{The notion of classicality in field theory} A possible divergence between quantum field theory (QFT) and quantum particle mechanics, as far as the issue of classicality is concerned, lies mainly in the facts that \\ 1. QFT describes a system with an infinite number of degrees of freedom, \\ 2. QFT is relativistically invariant. \\ \\ The question then arises whether these differences are sufficient to necessitate a different approach towards the issue of classicality. Again, we are going to concentrate on the notion of Hamiltonian classicality, i.e. whether and in what regime QFT behaves as a classical field theory. The fact that we have a system with an infinite number of degrees of freedom necessitates the consideration of other types of quasiclassical domains, associated with the field's thermodynamic or hydrodynamic behaviour. We shall return to this issue later, but for now we shall concentrate on the possible emergence of a classical field theory. \par The condition for classicality we developed in section 3 is at first sight sufficiently general to encompass the case of QFT as well. It makes no reference to whether the phase space is finite or infinite dimensional. But the issue of integration in an infinite dimensional space is quite complicated, and there is no apparent way to construct a classicality parameter, associated to each quasiprojector, that would have an intuitive geometric meaning.
Indeed, in an attempt to generalise Omn\'es' theorem to the case of free fields, Blencowe \cite{Blen} restricted the consideration to finite dimensional phase space cells. Such a restriction implies the consideration of essentially a finite number of modes. While this might give sufficient physical information for free fields (studying the modes is standard practice, for instance, in the cosmological setting), it clearly cannot be transferred to the non-linear case. \par On the other hand, the function of the SW entropy as a measure of predictability seems to be unaffected by the transition to the infinite dimensional case. Indeed, one can define coherent states for the fields (we shall give the basic conventions later) even in the interacting case, and there is a well defined notion of integration over the infinite dimensional phase space. \par The Hilbert space of a quantum field carries a unitary representation of the canonical commutation relations \begin{equation} [\hat{\Phi}(x),\hat{\Pi}(x')] = i \delta (x,x') \end{equation} where $x$ and $x'$ are points on a Cauchy surface $\Sigma$. We can define the field coherent states as \begin{equation} |w \rangle = | \phi, \pi \rangle = e^{ i \Phi(\pi) + i \Pi(\phi)} |0 \rangle \end{equation} The relation between the complex function $w(x)$ (an element of $L^2(\Sigma)$) and the phase space coordinates $\phi$ and $\pi$ depends on the choice of the representation. Now, if the Hamiltonian is quadratic, the vacuum state is a Gaussian (in either the Schr\"odinger or the Bargmann representation) and so are our coherent states. \par Given then a density matrix $\rho$ one can define the probability distribution \begin{equation} p(w) = \langle w| \rho |w \rangle / \langle w|w \rangle \end{equation} and from this define the SW entropy as in equation (3.1), where now the integral measure $Dw Dw^*$ is the well defined Gaussian integral on the field phase space. \par Note that for free fields the Gaussian nature of the coherent states reproduces again the lower bound (3.5) for the SW entropy, but in the case of interacting fields this is no longer true (the vacuum is not a Gaussian). Also, in the interacting field case it makes no sense to consider Gaussian coherent states, for they generically do not exist in the field's Hilbert space. This marks a significant difference from the particle quantum mechanics case, where one could always consider and study the SW entropy minimising Gaussian coherent states. \par For technical reasons we shall therefore be forced to concentrate only on the free field case. In the case of Minkowski spacetime, time evolution with the free Hamiltonian is rather trivial: the coherent states are preserved, and the analysis proceeds as in the simple harmonic oscillator case. There is no SW entropy production, and a classical state will remain classical even as time increases. The same can be shown to be true in the presence of an external source coupled linearly to the field. \par More interesting is the case of a field in a curved dynamical spacetime. These cases are relevant in the cosmological context, and to their examination we shall return shortly. \subsection{Field theory in cosmological spacetimes} The evolution of the vacuum state for a field in a time dependent cosmological spacetime essentially corresponds to a linear canonical transformation acting on the field.
Hence equation (3.23) for the SW entropy is applicable here, provided that the trace exists, since now $A$ and $B$ are operators on an infinite dimensional Hilbert space. Actually this condition is equivalent to $Tr \bar{K}K < \infty$ and, since $A$ is bounded, equivalent to $Tr B^{\dagger}B < \infty$. This is of course the necessary condition for the Bogolubov transformation to be unitarily implementable, or for the total number of created particles to be finite. \par In cosmological situations (or at least in the models usually employed) this is not the case, but still one can define a kind of entropy density by restricting the spatial integration involved in the trace to a finite region, dividing by its volume, and in the end taking the latter to infinity. \par It is usually the case that the Bogolubov transformations couple only a finite number of modes, in which case it is meaningful to define an entropy per particle by concentrating on the relevant finite dimensional subspaces. \par A case which has been explicitly discussed is the one where the Bogolubov transformations break into two dimensional blocks involving the modes labeled by ${\bf k}$ and $- {\bf k}$, the transformation in each block being given by a two dimensional squeezing transformation. Transformations of the type (3.26) appear, for instance, in the pair creation of gravitons (or scalar field quanta) from the vacuum. \par An important point in this case is that the squeezing parameter $r_{{\bf k}}$ is related to the number of created particles in mode ${\bf k}$, $n_{{\bf k}}$, by \begin{equation} n_{{\bf k}} = \sinh^2 r_{{\bf k}} \end{equation} hence the SW entropy per mode can be written \begin{equation} I_{{\bf k}} = 1 + \log (1 +n_{{\bf k}}) \end{equation} and the entropy increases with the number of particles created. In general, knowledge of the Bogolubov coefficients in any cosmological model enables us to straightforwardly compute the SW entropy. Such calculations have been done in a number of cases \cite{Gasp,Ro,Bran}, and this is not the point we intend to pursue here. We are rather more interested in some interpretational issues. \paragraph{Field classicalisation:} As we have argued in the previous chapter, the SW entropy (or rather the relative SW entropy) is a measure of the deviation of the system from classical deterministic behaviour, while the quantity $I - S$ is a measure of the deviation from classical stochastic behaviour. Given the fact that in most relevant models the squeezing parameters increase with time (for a conformally coupled massless scalar field in de Sitter spacetime $r_{{\bf k}} = H t$), we conclude that, rather than producing classicality, time evolution in a time dependent background enhances non-classical behaviour. We have given detailed argumentation in the previous section, but we should also examine a number of possible counterarguments. \par We have already examined the argument that extreme squeezing in one direction is essentially equivalent to diagonalisation of the state in some pointer basis. We argued against this by pointing out that classicality and determinism are essentially phase space issues. Still, one can argue \cite{KiPo2} that in an operational sense the highly squeezed states correspond to classical states, in the sense that the observationally relevant quantities are field amplitudes rather than field momenta. Setting aside the measurement-theoretic truth of this assertion, we should point out that this notion of classicality is not robust to even small external perturbations.
This can be seen even in non-relativistic quantum mechanics from inspection of equations (3.39) and (3.40): any interaction term intrinsically couples position and momentum uncertainties and is prone to increase the small position uncertainty of a squeezed state. In addition, this notion of classicality is not robust in the presence of a decohering environment. Even when the system couples to the environment via its configuration space variables, the pointer basis is not the position basis but one of a coherent state type. This has been demonstrated in \cite{Zur, HaZo}. \par Another argument usually put forward is that in the limit of large squeezing the number of created particles becomes very large, and hence can be taken in some sense to correspond to classical behaviour. The problem with this is that {\it a priori} classicality is insensitive to the number of particles (one can easily construct {\it Schr\"odinger cat states} even for a many particle system, and there is no guarantee from first principles, unless some explicit mechanism is described, that such interferences are suppressed). What is more, as is known from quantum optics, the distribution function of photons in squeezed states is highly non-classical (non-Poissonian) \cite{OOO}. \par Given the fact that field classicalisation is important in any discussion of inflation, one should start examining alternatives. Coupling of the fields to an environment might seem to provide a solution to the problem, turning the quantum fluctuations into thermal ones, and indeed this seems quite probable. But one still has to show that classicality does appear in such systems. According to our argumentation, the calculation of $I -S$ is a good guide for obtaining a classical stochastic process. But still there are some problems. First of all, there is the difficulty of separating between system and environment (in a non-linear theory this splitting seems to be quite arbitrary \cite{CaHu2, An2}). Second, we should not forget that even the environment undergoes squeezing due to the time dependence of the scale factor, and there is no guarantee (at least not from well studied examples like the Caldeira-Leggett model) that such a feature might not render classicalisation problematic. An investigation of this issue will be taken up elsewhere. \par Another possibility, the classicalisation of much more coarse-grained hydrodynamic quantities (rather than the phase space quantities discussed here), is tentatively discussed in \cite{An3}. \paragraph{Phenomenological entropy:} The other important question is whether the SW entropy of the fields can be taken to represent the phenomenological entropy of matter as defined in the late universe. This has been argued in reference \cite{Gasp}. This would indeed be an appealing feature, since the SW entropy can be conveniently interpreted as a measure of the phase space mixing induced on the field by the classical evolution. \par Which entropy actually corresponds to the phenomenological thermodynamic entropy is often a difficult question to answer. In standard equilibrium thermodynamics the von Neumann entropy of a thermal density matrix is to be identified with the thermodynamic entropy, by the consideration of a quantum mechanical version of the macrocanonical distribution, implicitly acknowledging the openness of the thermodynamic system as coupled to a heat bath. \par In the case of cosmology our system is essentially closed and far from equilibrium.
It seems therefore that the entropy ought to be identified with some coarse grained version of the von Neumann entropy. The SW entropy is such a candidate, involving minimal smearing over phase space and being very close to the Gibbs entropy, but it is not the only one. Any thermodynamic description necessitates the identification of a finite number of macroscopic degrees of freedom describing the system. Should we wish for such a description of a quantum field, we would definitely have to perform further coarse graining, for instance focusing on a set of hydrodynamic variables characterising it, tracing out the effect of higher order correlation functions, smearing over spatial or spacetime regions, etc. \par The point we want to make is that a thermodynamical description has to be given in terms of essentially classically behaving quantities. This is not the case for the minimally coarse grained phase space description implied by the SW entropy. It seems therefore that extra coarse graining would be necessary in order to obtain a quantity that could naturally be considered as the thermodynamical entropy. For these reasons we are rather reluctant to consider the SW entropy as a measure of the actual thermodynamical entropy of the quantum field, and we restrict ourselves to its interpretation as a measure of the deviation from classicality and of the phase space mixing due to time evolution. \section{Conclusions} To conclude, we would like to put our results in a different perspective, which might turn out to provide an alternative way to discuss the issue of classicality. \par One can use coherent states to define unequal time n-point functions on phase space for any quantum system (see for instance \cite{Sri} and references therein). Such objects, provided they satisfy the Kolmogorov conditions, can be used to define a measure on phase-space paths and hence a stochastic process. As expected from the Bell-Wigner theorem, this is not true in the case of quantum mechanics. But then the question arises: when is the quantum process close to a classical one, and how do we quantify the notion of closeness? \par The quantity $I - S$ we examined in this paper is able to play this role. A value of the order of unity for this quantity is a sign that the quantum mechanical evolution can be approximated by a classical stochastic process. Of course, classical determinism cannot be inferred from the inspection of this quantity: the evolution of a superposition of two phase space localised states in the presence of a decohering environment is such an example; the system behaves classically, but stochastically rather than deterministically. \par We can easily see that our criterion for a classical state corresponds to this way of addressing classicality. Indeed, given an initial density matrix and the evolution equation, the ``quantum stochastic process'' describing the system in phase space is uniquely constructed. As argued, the quantity $I - S$ can provide a good quantifying criterion for phase space classicality, giving a single quantification even for systems with a large number of degrees of freedom. But of course a more complete and satisfactory description would be given by translating our stated classicality criterion into the stochastic process language. This issue is currently our main line of investigation. \section{Acknowledgements} I would like to thank J.J. Halliwell, A. Zoupas and A. Roura for discussions and comments. The research was supported by the Generalitat de Catalunya through grant 96SGR-48.
\section{Classification}\seclabel{classification} Identifying political disaffection is a complex task even for human beings, so, in order to create a system for the detection of this attitude in tweets, we have to define it in a formal way. The goal is to measure disaffection in ways conceptually similar to what is measured by public opinion surveys. To that end, in order to be labelled as an expression of political disaffection a tweet has to match the following three criteria: \begin{itemize} \item \textbf{Political}: the tweet should regard politics. \item \textbf{Negative}: the sentiment of the tweet should be negative. \item \textbf{Generic}: the message has to regard politicians or parties in general. Tweets regarding only a specific political party or politician are not considered. \end{itemize} \begin{figure} \centering{}\includegraphics[width=0.75\columnwidth]{schema_class_squared} \caption{Classification chain employed.} \label{fig:schema_class} \end{figure} Since we cannot train a single classifier able to consider all these criteria at the same time, we created a ``chain'' of classifiers as described in Figure \ref{fig:schema_class}. There are three fundamental steps, where the relevant tweets after each step become the input for the next one. After every step the number of relevant tweets is less than or equal to the number of relevant tweets after the previous step. The relevant tweets after the third step are definitively classified as relevant and all the other tweets are classified as not relevant. Roughly speaking, we give \muacro{TCorpus} as input to the chain and obtain a set of tweets denoting political disaffection (\muacro{TRelevant}) as output.\\ For the first step, we trained a classification algorithm using \muacro{TData} and \muacro{NData}; the resulting classifier is able to distinguish between ``political'' and ``non-political'' tweets. In the second step, the algorithm is trained with \muacro{TData} and the resulting classifier distinguishes between tweets with negative sentiment and non-negative ones. The third and last step is performed by an ad-hoc classifier created with a rule-based approach to identify generic speech. Please note that the \muacro{TData} collection is fixed, but the features are extracted in different ways depending on the classification step taken into account. In the next sections we summarize the feature extraction methodologies and, subsequently, the tested classification approaches. \subsection{Feature extraction} The efficacy of textual classification crucially depends on how the textual data are transformed into numerical features. Nevertheless, identifying the best function for feature extraction and the best tokenization method is a hard problem, and the results are usually task dependent. For these reasons, we separately manage the two supervised classification tasks: political topics and negative sentiment. Note that for the political topic task we employ both the tweet data (\muacro{TData}) and the newspaper titles (\muacro{NData}). We compared different techniques for feature extraction in order to find the most suitable one for our problems: 5-grams of characters, space-separated words\footnote{In the space-separated words' approach, we also recognize as single words emoticons and single punctuation marks such as ``!''; URLs are transformed into a unique token: $\langle$link$\rangle$.}, $\left\{ 1,2,3\right\}$-grams of words, and string kernels~\citep{Lodhi2002}.
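For concreteness, a minimal scikit-learn sketch of how the first three of these tokenizations can be generated (illustrative only; the sample tweets are invented and the real corpus is \muacro{TData}):
\begin{verbatim}
from sklearn.feature_extraction.text import CountVectorizer

# Invented sample tweets standing in for TData.
docs = ["i politici sono tutti uguali!",
        "ottima riforma, finalmente una buona notizia",
        "non voto piu, basta partiti"]

extractors = {
    "char 5-grams":   CountVectorizer(analyzer="char", ngram_range=(5, 5)),
    "words":          CountVectorizer(analyzer="word"),
    "word 1-3 grams": CountVectorizer(analyzer="word", ngram_range=(1, 3)),
}
for name, vectorizer in extractors.items():
    X = vectorizer.fit_transform(docs)  # documents x features count matrix
    print(name, "->", X.shape[1], "features")
\end{verbatim}
Swapping \texttt{CountVectorizer} for \texttt{TfidfVectorizer}, or passing \texttt{binary=True}, yields the alternative counting schemes discussed next.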
Concerning the counting function, we compute: term frequency~\citep{Manning2008}, boolean term presence~\citep{Mejova2011}, and term frequency-inverse document frequency (\muacro{tf-idf}, \cite{Manning2008}). The test was performed for the ``Politics'' topic classification using a 4-fold cross-validation with an online linear classifier. Our results show that 5-grams of characters, independently of the counting scheme, constitute our best option. On the other hand, taking into account the negative sentiment problem and replicating the experiments with the same methodology as in the previous case, we note that the space-separated word tokenization achieves the overall best results, employing term frequency as the counting scheme. Moreover, an important improvement is obtained by performing a stemming process and collapsing synonyms into a single feature; to perform this task we employ a freely available Italian synonyms dictionary\footnote{http://webs.racocatala.cat/llengua/it/sinonimi.htm}. An important step to ensure a robust classification is to remove the sentiment target from each tweet. To perform this task, we employed DBpedia. This database, extracted from Wikipedia, allows one to perform queries and provides a simple way to capture the semantics behind words. Then, we collect a list of possible targets for the sentiment task by obtaining, via queries, a full and up-to-date set of Italian political parties, politicians, and political offices. Combining these data with the recognition of strings starting with ``@'' (Twitter user-names), we are able to remove the sentiment target from the tweets. \subsection{Classification algorithms} We need classifiers able to scale to huge corpora and, possibly, to be updated over time. We therefore especially focused our attention on online classifiers, since they require only one sweep over the data, making the classification process very fast while maintaining good accuracy. We ran all the experiments on an ordinary workstation: Intel(R) Core(TM) i$7$-$2600$K CPU at $3.40$GHz with $16$Gb of RAM. We tested four different online algorithms for classification\footnote{Most of these algorithms are well known and have a MATLAB implementation available in DOGMA \cite{dogma}.} and one batch classification algorithm: \begin{itemize} \item \textbf{\muacro{ALMA}} \citep{alma}: a fast classifier which tries to approximate the maximal margin hyperplane between the two classes. We set the parameter $p$ equal to $2$ and we also tested different values of $\alpha$. \item \textbf{\muacro{OIPCAC}} \citep{Rozza2012}: a classification method employing a modified approach to estimate the Fisher Subspace, which allows one to manage classification tasks where the space dimensionality is bigger than, or comparable to, the cardinality of the training set, and to deal with unbalanced classes. \item \textbf{\muacro{PASSIVE AGGRESSIVE}} (\muacro{PA}, \cite{ppaa}): a Perceptron-like method. In our experiments we test only the binary classifier with different settings. \item \textbf{\muacro{PEGASOS}} \citep{pegasos}: a well-known online Support Vector Machine (\muacro{SVM}) solver. \item \textbf{\muacro{RANDOM FOREST}} (\muacro{RF}, \cite{rf}): an algorithm based on an ensemble of classification trees. Since the algorithm is widely used in machine learning challenges with good results, we will use it as a yardstick in our comparison.
\end{itemize} We compared these algorithms on the two aforementioned tasks: \begin{itemize} \item \textbf{Political}: a binary classification of tweets into ``related to politics'' and ``not related to politics''. \item \textbf{Negative}: a binary classification of tweets into ``tweets with negative sentiment'' and ``without negative sentiment'' (that is, objective or with positive sentiment). \end{itemize} Note that we test the online learning algorithms in a batch setting. For this particular case, we use only their one-sweep behaviour in order to speed up the classification process.\\ After an extensive tuning of the parameters, in~\tabref{10fold-results} and in~\tabref{10-fold-neg} we report for each predictor its best performance in 10-fold cross validation on the Political classification task and on the Negative sentiment identification task respectively. \begin{table*}[th] \begin{centering} \begin{tabular}{|c|c|c|c|} \hline Interval & $\rho$ & $95\%$ C.I. & P-Value for $\rho>0$\tabularnewline \hline \hline \textbf{$\Delta_1^{14}$} & \textbf{0.7860} & \textbf{0.476-0.922} & \textbf{0.031\%}\tabularnewline \hline $\Delta_7^{14}$ & 0.7749 & 0.454-0.917 & 0.042\%\tabularnewline \hline $\Delta_1^{7}$ & 0.6880 & 0.310-0.878 & 0.226\%\tabularnewline \hline \end{tabular} \par\end{centering} \caption{Pearson correlation index achieved between Twitter political disaffection and \muacro{INEFFICACY} time-series.}\tablabel{correlationgood} \end{table*} \begin{table*}[th] \centering{}\begin{tabular}{|c|c|c|c|} \hline Interval & $\rho$ & $95\%$ C.I. & P-Value for $\rho>0$\tabularnewline \hline \hline \textbf{$\Delta_1^{14}$} & \textbf{0.5920} & \textbf{0.248-0.803} & \textbf{0.231\%}\tabularnewline \hline $\Delta_7^{14}$ & 0.5579 & 0.190-0.788 & 0.567\%\tabularnewline \hline $\Delta_1^{7}$ & 0.4433 & 0.049-0.718 & 3.00\%\tabularnewline \hline \end{tabular} \caption{Pearson correlation index achieved between Twitter political disaffection and \muacro{NO\_VOTE} time-series.}\tablabel{correlationbad} \end{table*} \begin{table} \begin{centering} {\scriptsize \begin{tabular} {|c|c|c|c|} \hline Classifier & Accuracy & F-Measure & Global time (s) \tabularnewline \hline \hline \muacro{ALMA} & 0.883 $\pm$ 0.014 & 0.886 $\pm$ 0.011 & 13.5 $\pm$ 1 \tabularnewline \hline \textbf{\emph{\muacro{PA}}} & \textbf{\emph{0.889 $\pm$ 0.012}} & \textbf{\emph{0.890 $\pm$ 0.012}} & \textbf{\emph{10.62 $\pm$ 0.1}} \tabularnewline \hline \muacro{PEGASOS} & 0.882 $\pm$ 0.010 & 0.883 $\pm$ 0.010 & 1103 $\pm$ 10 \tabularnewline \hline \textbf{\muacro{OIPCAC}} & \textbf{0.889 $\pm$ 0.001} & \textbf{0.891 $\pm$ 0.010} & \textbf{5911 $\pm$ 52} \tabularnewline \hline \end{tabular} } \caption{10-fold results for political topic detection (in bold face the best results considering F-measure). In italic the classifier selected for the classification process. With ``time'', we mean the time employed for training and classification, in seconds.
We were not able to conclude all the runs with \muacro{RF} due to its time cost.}\tablabel{10fold-results} \end{centering} \end{table} \begin{table} \begin{centering} {\scriptsize \begin{tabular}{|c|c|c|c|} \hline Classifier & Accuracy & F-Measure & Global time (s) \tabularnewline \hline \hline \textbf{\emph{\muacro{ALMA}}} & \textbf{\emph{0.703 $\pm$ 0.029}} & \textbf{\emph{0.745 $\pm$ 0.034}} & \textbf{\emph{0.82 $\pm$ 0.28}} \tabularnewline \hline \muacro{PA} & 0.665 $\pm$ 0.064 & 0.705 $\pm$ 0.124 & 0.91 $\pm$ $10^{-3}$ \tabularnewline \hline \muacro{PEGASOS} & 0.691 $\pm$ 0.033 & 0.732 $\pm$ 0.045 & 76 $\pm$ 0.1 \tabularnewline \hline \muacro{OIPCAC} & 0.714 $\pm$ 0.026 & 0.751 $\pm$ 0.024 & 121 $\pm$ 25 \tabularnewline \hline \textbf{\muacro{RF}} & \textbf{0.724 $\pm$ 0.026} & \textbf{0.776 $\pm$ 0.027} & \textbf{2173 $\pm$ 48} \tabularnewline \hline \end{tabular} } \caption{10-fold results for negative sentiment identification (in bold face the best results considering F-measure). In italic the classifier selected for the classification process. With ``time'', we mean the time employed for training and classification, in seconds. }\tablabel{10-fold-neg} \par\end{centering}{\tiny \par} \end{table} In order to obtain a good trade-off between accuracy and running time, we adopted a combination of \muacro{PA} and \muacro{ALMA}. In particular, we used \muacro{PA} for the political/non-political classification and \muacro{ALMA} for the sentiment classification part. It is also possible to obtain comparable performance with different combinations of classifiers. It is important to underline that, since the ``Generic'' task (see~\secref{classification}) is trivial, we opt to solve it with an approach based on a list of a few words selected by domain experts. \\ \section{Conclusions and Future Works}\seclabel{Conclusion} In this work we analyse the well-known political attitude of political disaffection by using Twitter data through machine learning techniques; note that, to our knowledge, no studies have been proposed that analyse this political phenomenon employing Twitter. In order to validate the quality of the time-series extracted from Twitter data, we highlight the strong relation of these data with political disaffection as measured by public opinion surveys (through low intentions to vote and low political efficacy). Furthermore, we show that important political news items in Italian newspapers are often correlated with the highest peaks of the produced time-series. Note that, although some works raise doubts about the possibility of performing electoral predictions using Twitter data~\citep{Metaxas2011,Avello2012}, the different task of measuring political disaffection, as suggested by the strong correlations between the time-series generated from the public opinion surveys and the Twitter data automatically extracted by our approach, could offer an interesting topic for further research. Moreover, it is important to notice that the timeliness with which Twitter reacts to political events, compared with traditional public opinion surveys, suggests that our method could be employed to produce a daily estimate of changes in citizens' political disaffection. In future works, we would like to extend our machinery in order to achieve better results.
In particular, we would like to integrate some interesting features from the technical point of view: the classification accuracy can be improved with a proper selection of the tweets to be labelled by experts in an active-learning fashion (e.g., \cite{bbq}); furthermore, we would like to include information about the graph topology in order to have a better understanding of the social component of these phenomena and the possibility to employ graph-based classifiers (e.g., \cite{shazoo}, \cite{gpa}). \section{Dataset}\seclabel{dataset} In this section we describe how we have generated the dataset employed to train our supervised methods. Furthermore, the procedure used to extract the tweets employed to identify political disaffection is described. Finally, the information extracted from different Italian newspapers and the employed public opinion surveys are described. \subsection{Training Data} In order to train the classifiers that compose our methodology to extract political disaffection, we built the training set employing the Twitter API v$1$ through a $2$-step procedure involving a semi-automatic search method and a labelling phase guided by experts. The collection phase began at the beginning of April 2012, ended at the beginning of June 2012, and collected about $120,000$ tweets and retweets. The collected tweets resulted from a geo-localized trending topic\footnote{Trending topics are the most popular and talked-about words and phrases on Twitter for a specific time period.} search and a targeted search on political themes. In particular, at the end of each day we requested the top $10$ trending topics for Italy. As most trending topics regard non-political arguments (e.g., celebrities, sports or viral hashtags), we selected the political ones and a subset of the non-political ones. Furthermore, in order to have a more meaningful number of political tweets, we searched for tweets related to politicians, political news from online newspapers and talk-shows. As query keywords we chose the names of the most famous Italian politicians, the topics of the top news items in the political section of online newspapers, and the official hashtags of TV-talks. After the deletion of retweets, the resulting dataset is made up of a large corpus of $40,000$ records consisting of the tweet text, its date and the keyword used in the search. \par Once the dataset was collected, we started the labelling phase employing the expertise of a pool of $40$ sociology and political science students. Each student was assigned a set of $3000$ tweets to be classified by means of a web application. Two different labels have been associated with each tweet: the first is political/not political (the students need to identify whether the tweet topic is political), and the second is positive-or-neutral versus negative sentiment for the political tweets. As the meaning of the labels is quite fuzzy, before the labelling process we held a kick-off meeting to ensure agreement on the label definitions and, consequently, the acquisition of homogeneous and reliable data. \par The tweets' assignment has been made so that each tweet text has been labelled by three different students. This way we can increase the accuracy and the meaningfulness of the labelling process by selecting tweets for which all the evaluators agree on their political nature, and we employ a majority voting approach for labelling the sentiment of the tweets.
Precisely, the sentiment label is set according to a decision rule that selects the label which has a majority, that is, more than half the votes. Note that, taking into account Krippendorff's alpha coefficient\footnote{Krippendorff's alpha coefficient~\citep{Krippendorff2003} is a statistical measure of the agreement achieved when coding a set of units of analysis in terms of the values of a variable.}, we obtain for the sentiment label $\alpha = 0.79$. The final dataset (\muacro{TData}) is then composed of $31,000$ labelled tweets. To the best of our knowledge, it represents one of the biggest datasets of tweets classified by experts. \subsection{Newspaper Data} The adoption of \muacro{TData} in the training could present some drawbacks due to the limited period it spans. For example, some important features for a classifier could be characteristic of the retrieval period and could lose their relevance over a wider period. These drawbacks result in a limited generalization ability of the model employed to classify the political tweets. To improve its generality, we built an additional dataset (\muacro{NData}) containing all the article titles of different Italian newspapers (\textit{Repubblica}, \textit{il Manifesto}, and \textit{Libero}), chosen so that they span the whole spectrum of political points of view from the Right wing to the Left. More precisely, we selected, from the \muacro{RSS} feed history, all the articles from January the $1^{st}$ $2012$ to October the $10^{th}$ $2012$, extracting the news title, and we employed the categorization proposed by the newspaper to associate a label with the title. Precisely, if a news item belongs to the political category proposed by the newspaper we set the label to $1$, otherwise to $-1$. The resulting \muacro{NData} is composed of $17,388$ labelled newspaper titles, $10,670$ of which are political $(61\%)$. \subsection{Data employed for our Analysis} \muacro{TData} and \muacro{NData} are preliminary data used to train a supervised methodology which can detect the tweets useful for our goal (to identify political disaffection). To obtain general results on political disaffection we perform our analysis on a large sample of the Italian Twitter community. To achieve this goal, we randomly extract $50,000$ Italian users who have posted at least one Italian tweet\footnote{To identify whether a tweet is written in Italian we employ the GuessLanguage library (https://code.google.com/p/guess-language/).} in a fixed temporal range (October $10^{th}$ to October $30^{th}$). Moreover, to extend this set to include also the less active users, we perform a one-level snowball sampling~\citep{Atkinson2001}; precisely, we select for each user all its Italian followers, thus producing a set of $261,313$ users. Note that we have not expanded this process recursively, in order to reduce the intrinsic bias produced by the snowball sampling process. Moreover, we take into account only the user profiles that were created before April the $4^{th}$ (obtaining $167,557$ users), so as to prevent the problem of the continuous growth of the Italian Twitter community, which could affect the quality of our political disaffection investigation. Finally, from each selected user we extract, for the period of interest (April the $4^{th}$ $2012$ to October the $10^{th}$ $2012$), all their tweets, thus producing our final set composed of $35,882,423$ tweets (\muacro{TCorpus}).
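A schematic version of this user-selection step is sketched below; the \texttt{get\_italian\_followers} helper and the \texttt{created\_at} attribute are hypothetical stand-ins for the Twitter API calls and the GuessLanguage-based filtering described above:
\begin{verbatim}
from datetime import date

def select_users(seed_users, get_italian_followers,
                 cutoff=date(2012, 4, 4)):
    """One-level snowball sampling: add the Italian followers of
    each seed user (no recursion, to limit the sampling bias),
    then keep only accounts created before the cut-off date."""
    users = set(seed_users)
    for user in seed_users:
        # get_italian_followers is a hypothetical API wrapper
        users.update(get_italian_followers(user))
    # created_at is a hypothetical per-user attribute
    return {u for u in users if u.created_at < cutoff}
\end{verbatim}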
\subsection{Public Opinion Surveys} The public opinion surveys have been collected by \muacro{IPSOS} from April the $11^{th}$ $2012$ to October the $10^{th}$ $2012$. The sampling procedure consisted of a survey conducted through \muacro{CATI} (computer-assisted telephone interview) on a representative sample of the Italian electorate. More precisely, almost every week, respondents are contacted with a quota sampling on fixed parameters (age, gender, education) using the technique of random digit dialling. We build two indicators for political disaffection. The first one (\muacro{NO\_VOTE}) is broader and measures the intention not to vote at the next elections, a behavioural consequence of disaffection. It includes the percentage of survey respondents that declare a very low intention to vote at the next elections\footnote{The total sample consists of $24,971$ respondents ($\sim 1040$ respondents per poll).} (see~\tabref{ipsos}). In detail, we consider the people that answered $1$ ($1$ low - $10$ high) to the question ``How likely is it that you will vote at the next election?''. The second indicator (\muacro{INEFFICACY}) is more specific. It measures the attitude of political inefficacy, that is to say the disbelief in the accountability of the political system and of all political parties (see~\tabref{ipsos}). We operationalize it using the propensity to vote (\muacro{PTV}) of respondents for a specific party\footnote{The total sample consists of $38,537$ respondents ($\sim 2267$ respondents per poll).} ($1$ low - $10$ high). Therefore, we include in this indicator the respondents that have low (equal to $1$) \muacro{PTV} for all political parties included in the survey or, alternatively, those who have low (equal to $1$) \muacro{PTV}s for some political parties and missing answers for the others. The \muacro{PTV} has been coded for all major Italian parties\footnote{ \muacro{PD} (Partito Democratico), \muacro{PDL} (Popolo delle Libert\`a), Lega Nord, \muacro{IdV} (Italia dei Valori), \muacro{UDC} (Unione di Centro), \muacro{FLI} (Futuro e Libert\`a), \muacro{SeL} (Sinistra Ecologia Libert\`a), and \muacro{M5S} (Movimento 5 Stelle) }.
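As an illustration of how the two indicators can be computed from raw survey microdata, here is a minimal pandas sketch; the file name and the column names (\texttt{poll\_date}, \texttt{vote\_likelihood}, \texttt{ptv\_*}) are hypothetical, since the actual \muacro{IPSOS} data layout is not specified here:
\begin{verbatim}
import pandas as pd

# One row per respondent; 'vote_likelihood' is the 1 (low) - 10 (high)
# answer to the voting question; 'ptv_*' columns hold the 1-10
# propensity-to-vote scores (NaN = missing answer).
df = pd.read_csv("ipsos_microdata.csv")
ptv_cols = [c for c in df.columns if c.startswith("ptv_")]

# NO_VOTE: share of respondents with the lowest intention to vote.
no_vote = (df["vote_likelihood"] == 1).groupby(df["poll_date"]).mean() * 100

# INEFFICACY: PTV equal to 1 for every answered party, all other PTVs
# missing, and at least one PTV actually given.
ptv = df[ptv_cols]
ineff = ptv.notna().any(axis=1) & ((ptv == 1) | ptv.isna()).all(axis=1)
inefficacy = ineff.groupby(df["poll_date"]).mean() * 100
\end{verbatim}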
\begin{table} \begin{center} {\footnotesize \centering \begin{tabular}{|c|c|c|} \hline $t_i$ & \muacro{INEFFICACY} & \muacro{NO\_VOTE} \tabularnewline \hline \hline 2012-04-11 & 14.86\% & 13.23\% \tabularnewline \hline 2012-04-18 & 13.77\% & 14.53\% \tabularnewline \hline 2012-05-02 & 22.20\% & 19.05\% \tabularnewline \hline 2012-05-09 & - & 13.37\% \tabularnewline \hline 2012-05-16 & 12.93\% & 13.34\% \tabularnewline \hline 2012-05-23 & 16.31\% & 12.47\% \tabularnewline \hline 2012-06-05 & 12.07\% & 13.31\% \tabularnewline \hline 2012-06-06 & 11.03\% & 13.76\% \tabularnewline \hline 2012-06-13 & - & 10.99\% \tabularnewline \hline 2012-06-20 & 10.77\% & 13.08\% \tabularnewline \hline 2012-06-26 & 6.91\% & 9.29\% \tabularnewline \hline 2012-06-27 & 6.84\% & 13.09\% \tabularnewline \hline 2012-07-04 & - & 11.88\% \tabularnewline \hline 2012-07-11 & 7.87\% & 10.04\% \tabularnewline \hline 2012-07-17 & 9.51\% & 13.64\% \tabularnewline \hline 2012-07-18 & 6.00\% & 10.03\% \tabularnewline \hline 2012-07-25 & - & 13.26\% \tabularnewline \hline 2012-09-04 & - & 13.53\% \tabularnewline \hline 2012-09-12 & 8.46\% & 11.22\% \tabularnewline \hline 2012-09-19 & - & 12.75\% \tabularnewline \hline 2012-09-25 & 9.44\% & 12.04\% \tabularnewline \hline 2012-09-26 & 10.46\% & 12.74\% \tabularnewline \hline 2012-10-03 & - & 12.87\% \tabularnewline \hline 2012-10-10 & 11.76\% & 14.38\% \tabularnewline \hline \end{tabular} } \caption{Public Opinion Surveys for \muacro{INEFFICACY} and \muacro{NO\_VOTE} indicators.} \tablabel{ipsos} \end{center} \end{table} \section{Introduction}\seclabel{Introduction} Twitter is one of the biggest micro-blogging services in the world. Micro-blogging refers to the publication of short text messages, used to share all kinds of information; on Twitter, these messages are called ``tweets'' (their maximum length is $140$ characters), and many millions of them are posted every day. Twitter is an interesting data source to explore public sentiment trends (\cite{Bollen2011,Indignados}): its content is easily available, and it has a very flexible nature due to the fact that it is currently used for open conversations, public opinions, and news commentaries. Another crucial characteristic of Twitter content is its timeliness; this peculiarity guarantees that tweets are related to a much closer temporal window with respect to other user-generated texts, such as blogs. Modelling trends from Twitter data has become a popular research task. Among such studies, those drawing attention to political topics are some of the most attractive, and in recent years a great deal of research has focused on them. In this study we want to concentrate on an important concept in political science: \textit{political disaffection}, or ``the subjective feeling of powerlessness, cynicism, and lack of confidence in the political process, politicians, and democratic institutions, but with no questioning of the political regime''~\citep{DiPalma1970}. In political science, levels of political disaffection are understood to relate to levels of political participation and, consequently, to have important implications for the legitimacy of democratic political systems. This makes the study of political disaffection one of the key topics of contemporary studies of political behavior. Despite the popularity of the term in the relevant literature, political disaffection is a complex and multi-dimensional concept.
The concept of political disaffection does not necessarily imply low levels of satisfaction with the current government, nor adversity to the system of democratic politics. Instead, the concept of political disaffection has two components: first, low trust in politics or politicians in general; second, what is known as political inefficacy, ``a belief that politics is complex, difficult to understand, self-referential and distant from citizens''~\citep{Campbell1954}. To our knowledge, political disaffection has never been studied using Twitter data. In this work we propose an automatic approach to measuring political disaffection using Twitter data, with the aim of studying the relations between our measurement of political disaffection and political disaffection as measured by public opinion surveys. In accordance with the literature, we define political disaffection as a general discontent with the political system, and not as a partisan position against a particular party, individual or policy. We operationalize this definition in three steps. First, we use a supervised methodology to extract a subset of political tweets from the universe of tweets in Italian. Second, we perform sentiment analysis to identify the political tweets with negative sentiment. Third, we automatically select the tweets that refer to politics or politicians in general, rather than to specific political events or personalities. In order to validate our operational measurement, the selected tweets that represent political disaffection are used to create time-series that are related to indicators of political disaffection in public opinion surveys. Furthermore, we also show that important political news items from Italian newspapers are often correlated with peaks in the produced time-series. This paper is organized as follows: in~\secref{Related} the related works are summarized; in~\secref{dataset} we describe the procedures used to generate the datasets employed to train our supervised methods and the approach used to extract the tweets employed in our analysis, and we summarize the public opinion surveys used to validate the quality of our approach; in~\secref{classification} we describe the employed feature extraction methodologies and the classification approach that we used; in~\secref{results} we present the achieved results; and \secref{Conclusion} highlights the conclusions of our work. \section{Related Works}\seclabel{Related} In the literature a great deal of research has focused on the analysis of different phenomena using the data of micro-blogging services. Among them, we recall the work proposed in~\citep{Popescu2011}, where the authors explore the correlation between types of user engagement and events about celebrities using Twitter data. Furthermore, the authors of~\citep{Bollen2011b} propose an approach able to predict the stock market by employing micro-blogging data. The most closely related works are those concerning the analysis of political topics employing Twitter data. In~\citep{Bollen2011}, the authors propose a method able to extract from tweet data different time-series corresponding to the evolution of $6$ emotional attributes (tension, depression, anger, vigour, fatigue, and confusion) called Profile of Mood States (\muacro{POMS}). The authors applied \muacro{POMS} to suggest that socio-economic agitations caused significant fluctuations of the mood levels. One of the earliest papers discussing the feasibility of using Twitter data as a substitute for traditional public opinion surveys has been proposed in~\citep{Oconnor2010}.
The authors employ Opinion-Finder\footnote{Opinion-Finder is a system that performs subjectivity analysis, automatically identifying when opinions, sentiments, speculations and other private states are present in text.} to determine both a positive and a negative score for each tweet in their dataset. Then, the raw numbers of positive and negative tweets regarding a given topic are used to compute a sentiment score. Using this method, sentiment time-series are prepared for a number of topics such as: presidential approval, consumer confidence, and the US $2008$ Presidential elections. According to the authors, both consumer confidence and presidential approval public opinion surveys show correlation with the Twitter sentiment data computed with their approach. However, no correlation has been found between electoral public opinion surveys and Twitter sentiment data. In~\citep{Tumasjan2010} an analysis of the tweets related to different parties running for the German $2009$ Federal election is presented. Moreover, the authors show that the count of tweets mentioning a party or a candidate accurately reflected the election results, suggesting a possible approach to electoral prediction. Furthermore, in~\citep{Livne2011} a novel method that tries to predict elections has been presented. This approach relies both on Twitter data and on additional information, such as the party a candidate belongs to or incumbency. In~\citep{Bermingham2011} a methodology is proposed that extends the previous approaches by incorporating sentiment analysis into the prediction of political elections. The authors test their method on the $2011$ Irish General Election, finding that it is not competitive when compared with traditional public opinion surveys. Similar approaches are proposed in~\citep{Tjong2012,Skoric2012}. Nevertheless, some works raise doubts about the possibility of performing electoral predictions using Twitter data~\citep{Metaxas2011,Avello2012}. More precisely, in~\citep{Metaxas2011} the authors analyze the results from a number of different elections and conclude that Twitter data is only slightly better than chance when predicting elections. \bigskip In our work we analyse the well-known political attitude of political disaffection by employing Twitter data through machine learning techniques. In order to validate the quality of the information extracted from the Twitter data, we highlight the relations of these data with political disaffection as measured in public opinion surveys. Here, the attitude of political inefficacy is used as a proxy for this broad concept. This attitude expresses the (subjective) sense of powerlessness of citizens in politics. Its symmetrical concept, political efficacy, is instead the individual's self-image as an influential participant in politics, ``the feeling that individual political action does have, or can have, an impact upon the political process...the feeling that political and social change is possible, and that the individual citizen can play a part in bringing about this change''~\citep{Campbell1954}. Political efficacy is considered crucial for participation since efficacious citizens have a greater propensity to engage in political action, and high levels of political efficacy within a population are important in creating support for the political system.
If more and more citizens believe that they lack the ability to influence political decision-making and the opportunity to participate in politics, whilst simultaneously not believing in the accountability of how the political system works, they will become frustrated and discontented, reducing their likelihood of acting politically or of voting. Together with political inefficacy we also use low intentions to vote as a proxy of political disaffection, since we believe it is a behavioural consequence of citizens' sense of powerlessness (see~\cite{Torcal2006}). \section{Results}\seclabel{results} \begin{figure*} \centering\includegraphics[width=0.7\paperwidth]{plot_sentiment_smoothed} \caption{Twitter political disaffection time-series employing $\Delta_1^{7}$ compared with the \muacro{INEFFICACY} indicator.}\figlabel{pollsvssentiment} \end{figure*} \begin{figure*} \centering\includegraphics[width=0.7\paperwidth]{plot_sentiment_peaks} \caption{Political disaffection tweets day by day, with the selected peaks (highlighted by circles).}\figlabel{peak} \end{figure*} In this section we describe the time-series obtained employing the information extracted with the approach described in~\secref{classification} and the relations between them and the public opinion surveys. Moreover, we summarize our methodology to identify the political news items that produce the highest peaks of the generated time-series (breaking news). To perform a correlation analysis with the \muacro{INEFFICACY} indicator taken from the surveys, we employ the approach described in~\secref{classification} to generate the set of tweets denoting political disaffection (\muacro{TRelevant}). Subsequently, taking into account each survey sampling date $t_i$ (see~\tabref{ipsos}), we generate three time-series computing the ratio between the number of political disaffection tweets and the number of political ones over three time intervals: \begin{enumerate} \item from the date of the survey to $14$ days before ($\Delta_1^{14}$); \item from the date of the survey to $7$ days before ($\Delta_1^{7}$); \item from $7$ days before the date of the survey to $14$ days before ($\Delta_7^{14}$). \end{enumerate} Note that the same approach has been employed for the \muacro{NO\_VOTE} indicator (also taken from the surveys). In~\tabref{correlationgood} we show the Pearson correlation index computed between the political disaffection tweet-series and the \muacro{INEFFICACY} time-series. The best result ($0.79$) represents a strong correlation between \muacro{INEFFICACY} and the information extracted by our methodology. Furthermore, it is important to highlight that the best time interval is $\Delta_1^{14}$. This result suggests that Twitter is able to capture the change in citizens' political disaffection more promptly than public opinion surveys can (as can be seen in~\figref{pollsvssentiment}). \begin{table} {\centering \tiny \begin{tabular}{|c|p{5cm}|} \hline 2012-04-13 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ La Lega prova a rifarsi un'immagine. Rinuncia agli ultimi rimborsi elettorali.\\ Lega tries to clear its name. It opts last electoral refunds out. } \tabularnewline \hline 2012-04-17 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Lavoro, Monti pensa alla fiducia ``I partiti approveranno la riforma''.\\ Labor, Monti thinks of a vote of confidence ``Parties will enact reform''.
} \tabularnewline \hline 2012-04-18 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Monti: niente crescita fino al 2013, disagio lavoro per meta famiglie.\\ Monti: no economic growth until 2013, disadvantage for half of families. } \tabularnewline \hline 2012-05-09 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Bersani: ``Pd pi\'u forte, Monti ci ascolti'' Grillo: ``Partiti morti''. Crollo del Pdl.\\ Bersani: ``PD stronger, Monti listen to us'' Grillo:``Parties are dead''. PDL falls. } \tabularnewline \hline 2012-05-20 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Grillo su Brindisi: strage di Stato, fa comodo a loro.\\ Grillo about Brindisi: state massacre, it's convenient for them. } \tabularnewline \hline 2012-05-24 & \parbox{5cm}{ \vspace{1mm}$\times$ Grillo attacca: ``Noi soldi non li vogliamo rinunceremo a rimborsi prossime politiche''.\\ Grillo bashes: ``We don't need money, we opt last electoral refunds out''. } \tabularnewline \hline 2012-05-30 & \parbox{5cm}{ \vspace{1mm}$\times$ Riforma Csm, il gelo di Monti cos\'i \'e fallito il piano di Catrical\'a. \\ CSM reform, Monti's chill, Catrical\'a's project is doomed. } \tabularnewline \hline 2012-05-31 & \parbox{5cm}{ \vspace{1mm}$\times$ Spread, Monti resta preoccupato ``Rischio contagio malgrado gli sforzi''.\\ Spread, Monti worried ``Risk contagion despite the efforts''. } \tabularnewline \hline 2012-07-14 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Cicchitto: ``Primarie sono inutili Berlusconi candidato premier''. \\ Cicchitto: ``Primary election is useless, Berlusconi is the premier candidate''. } \tabularnewline \hline 2012-07-19 & \parbox{5cm}{ \vspace{1mm}$\times$ Monti ora teme il crac della Sicilia.\\ Now Monti is afraid of Sicily default. } \tabularnewline \hline 2012-08-09 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Monti al Wsj: ``Con Berlusconi spread a 1200'' L'ira del Pdl. E votano contro il governo. \\ Monti to WSJ: ``Spread at 1200 with Berlusconi'' PDL anger and vote against government. } \tabularnewline \hline 2012-08-28 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Grillo a Bersani: ``Io fascista? Tu sei un fallito d'accordo con la P2''. \\ Grillo to Bersani: ``Am I Fascist? You're failed at one with P2''. } \tabularnewline \hline 2012-09-23 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Cos\'i si rubava alla Regione Lazio ecco le rivelazioni di Fiorito ai pm.\\ Fiorito's admission to public prosecutor, how they stole at Lazio government. } \tabularnewline \hline 2012-09-26 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Caso Sallusti, salta l'accordo con il giudice il direttore domani rischia il carcere. \\ Sallusti's instance, legal agreement breaks, the lead director risks the jail. } \tabularnewline \hline 2012-09-27 & \parbox{5cm}{ \vspace{1mm}$\checkmark$ Sallusti: ``In Italia mancano le palle'' Paolo Berlusconi respinge le dimissioni. \\ Sallusti: ``In Italy many wimps'' Paolo Berlusconi rejects Sallusti's resignation. } \tabularnewline \hline \end{tabular} \caption{Each row represents the news with the highest cosine similarity identified by our approach. The symbol $\checkmark$ represents the identified news that also appears in the political Twitter trending topic of the day taken into account. 
As additional contextual information: Lega is an Italian party; Bersani, Cicchitto and Berlusconi are politicians; PD is the Italian Democratic Party; Monti is the Italian premier; Sallusti and Paolo Berlusconi (Silvio Berlusconi's brother) are respectively the lead director and the editor of a newspaper; Grillo is a comedian/politician; Fiorito is a regional councilman involved in a bribery inquiry. CSM is the magistrates' internal board of supervisors.}\tablabel{news} } \end{table} In~\tabref{correlationbad} we show the Pearson correlation index computed between the political disaffection tweet-series and the \muacro{NO\_VOTE} one. As can be noticed, this result ($0.59$) represents a moderate correlation, but it is important because it shows that there is some connection between the modelled political disaffection and the intention not to vote at the next election. Taking into consideration the specific affordances of Twitter as a medium (e.g., the immediate diffusion of news and information), these results indicate that the data we have extracted from Twitter and the data derived from surveys are two reflections of a common underlying development that exhibit different temporal characteristics. This leads us to suggest that Twitter data could be taken as a valid alternative measurement of political disaffection. \subsection{Breaking News Identification} Having verified the correlation between \muacro{INEFFICACY} and the Twitter political disaffection, we have employed the \muacro{TRelevant} data to empirically determine some of the possible causes of the variation in disbelief in politics and politicians, hypothesising that citizens' political inefficacy is affected by controversial political news reported daily in the media. To achieve this goal we have identified the peaks of the time-series generated as the daily ratio between the number of political disaffection tweets and the number of political ones, and we have associated with each peak a news item belonging to \muacro{NData}. More precisely, to identify the peaks we have employed an approach similar to that proposed in~\citep{GruhlLGT04}, taking as peaks the points of the time-series greater than $\mu+2\sigma$, where $\mu$ is the mean of the points of the time-series and $\sigma$ is the standard deviation; however, to improve the quality of our results we have considered for each point a neighbourhood\footnote{We use a temporal window of 10 days, 5 before and 5 after the point of the time-series taken into account.} (instead of all the points) to estimate the local $\mu$s and $\sigma$s. The qualitative results are shown in~\figref{peak}. To associate a news item with each peak, we first created an inverse document frequency (\muacro{idf}) map by employing the words extracted from the corpus of the news included in \muacro{NData} and $1,000,000$ tweets (\muacro{PTCorpus}) randomly selected from the political subset of \muacro{TCorpus} (we employ the same classifier used for the political task, as described in~\secref{classification}, to identify the political tweets). Note that these weights reduce the relevance of terms that are recurrent across the whole corpus.
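A minimal sketch of this local peak-detection rule (a windowed $\mu+2\sigma$ threshold; the example series is invented):
\begin{verbatim}
import numpy as np

def detect_peaks(series, half_window=5, n_sigma=2.0):
    """Flag points exceeding mu + n_sigma*sigma, with mu and sigma
    estimated from a window of half_window days on each side."""
    series = np.asarray(series, dtype=float)
    peaks = []
    for i in range(len(series)):
        lo = max(0, i - half_window)
        hi = min(len(series), i + half_window + 1)
        window = np.delete(series[lo:hi], i - lo)  # neighbours only
        if series[i] > window.mean() + n_sigma * window.std():
            peaks.append(i)
    return peaks

# Invented daily ratio of disaffection tweets to political tweets.
ratio = np.random.default_rng(0).normal(0.20, 0.02, 180)
ratio[[40, 95]] += 0.15            # two artificial bursts
print(detect_peaks(ratio))         # output includes 40 and 95
\end{verbatim}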
For each previously identified peak we create \muacro{tf-idf} vectors for the tokenized news and tweets by employing the \muacro{idf} map, thus obtaining, for each day taken into account, two vector sets: \begin{itemize} \item $\VSP{N}$, the vectors' set of the news; \item $\VSP{T}$, the vectors' set of the tweets belonging to \muacro{PTCorpus}. \end{itemize} Subsequently, we employed the cosine similarity between vectors to select the news item most correlated with the peak taken into account, as follows: $$\operatornamewithlimits{arg\,max}_{\vect{n}\in \VSP{N}} \sum_{\vect{t}\in \VSP{T}}\frac{\vect{n}\cdot \vect{t}}{\left\Vert \vect{n}\right\Vert \left\Vert \vect{t}\right\Vert }$$ The results achieved are summarized in~\tabref{news}. Finally, we have qualitatively compared the news items identified with this approach with the Twitter trending topics of the day of each peak, and we have noticed that most of the news items indeed correspond to one of the political trends of the day. However, for a few peaks, \muacro{NData} does not contain any news item correlated with the majority of the tweets of that day. Looking at the Twitter trending topics, we can say that this happens whenever the political discussions on Twitter do not concern facts reported in newspapers, as in the case of discussions that grew spontaneously within the Twitter community. A meaningful example concerns the trending topic \#no2giugno: this movement asked for the suspension of the military parade of June the $2^{nd}$, seen as a waste of resources, so that the money could be used to reconstruct Emilia (an Italian region) after the earthquakes of $2012$. This discussion generated two peaks (May the $30^{th}$ and the $31^{st}$) that had no correlation with traditional media news.
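A sketch of this matching step (illustrative only; in practice the rows of the two matrices are the \muacro{tf-idf} vectors built from \muacro{NData} and \muacro{PTCorpus} for the peak day):
\begin{verbatim}
import numpy as np

def best_matching_news(news_vecs, tweet_vecs):
    """Index of the news vector maximising the summed cosine
    similarity against all tweet vectors of the peak day."""
    N = news_vecs / np.linalg.norm(news_vecs, axis=1, keepdims=True)
    T = tweet_vecs / np.linalg.norm(tweet_vecs, axis=1, keepdims=True)
    scores = (N @ T.T).sum(axis=1)   # summed cosines per news item
    return int(np.argmax(scores))
\end{verbatim}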
\section{Introduction} The quantity and composition of dust in metal absorption line and damped Ly-$\alpha$ (DLA) systems have been a topic of debate for over fifteen years. Quantifying the amount of dust in these systems is important not only for understanding the chemical evolution of galaxies over large lookback times, but also has implications for the biases involved in DLA selection in optical quasar surveys. Due to the difficulty in defining suitable samples, there are relatively few reports of evidence for dust in DLAs via the reddening effect on background quasar spectra. Pei, Fall \& Bechtold (1991)\nocite{1991ApJ...378....6P}, following up from the first study by Fall, Pei \& McMahon (1989)\nocite{1989ApJ...341L...5F}, found the spectral energy distributions (SEDs) of 20 quasars with DLAs to be significantly redder than those of 46 quasars without DLAs. Recently, however, \citet{2004MNRAS.354L..31M} found no evidence for dust reddening in a much larger, homogeneous sample of DLAs from the Sloan Digital Sky Survey (SDSS), finding a limit, E(B-V)$_{\rm{SMC}}<$0.02, that is inconsistent with the results of Pei et al. Whilst the optical selection of the SDSS sample could potentially introduce some bias into this result, Ellison, Hall \& Lira (2005)\nocite{corals_ebv} find E(B-V)$_{\rm{SMC}}<$0.05 (3$\sigma$) using a smaller sample of radio-selected quasars with DLAs. The alternative view, in which DLAs possess detectable quantities of dust, is supported by \citet{1998ApJ...494..175C}, who found the paths of 4 out of 5 ``red quasars'' to contain strong HI 21cm absorption, suggesting the excessive redness of the quasars to be due to the intervening absorption systems. The importance of selection effects in quasar samples when searching for DLAs or metal absorption line systems is clear: a small amount of dust present in an intervening galaxy could cause enough extinction for the quasar to fall below the detection limit of optical magnitude-limited surveys. Using a complete sample of radio-selected quasars containing 22 DLAs with $z_{\rm abs}>1.8$ in 66 $z_{\rm em}>2.2$ quasars, \citet[][the CORALS survey]{2001A&A...379..393E} find that optical surveys could have underestimated the number of DLAs by at most a factor of two. By making use of the criteria for identifying potential DLAs from the strengths of metal absorption lines \citep{2000ApJS..130....1R}, \citet{2004ApJ...615..118E} extend this result to absorbers at lower redshift. Again, their results permit at most a factor of 2.5 underestimate of the incidence of DLAs in optical, as opposed to radio-selected, quasar samples. Whilst the intrinsic range in the SEDs of the underlying quasar spectra means that estimates of reddening from differences in the average SEDs of quasar samples rely on statistical arguments, a clear indication of the presence of dust comes from the identification of spectroscopic features caused by the dust grains. The strongest such feature in the Galaxy is at $\sim$2175\AA, but the feature is weak or absent in the Large and Small Magellanic Clouds (LMC and SMC). The 2175\AA \ feature has been detected in the spectrum of a BL Lac object at the absorption redshift of a known intervening DLA by \citet{2004ApJ...614..658J}. Possible detections in SDSS quasar spectra with strong intervening metal absorption systems have been presented by \citet{2004ApJ...609..589W}.
A further diagnostic of the presence of dust in DLAs and metal absorption line systems comes from the relative abundances of elements, such as Cr relative to Zn, which are depleted by differing amounts onto dust grains \citep{1996ARA&A..34..279S}. Results indicate that dust depletion is far less severe than in the Galactic interstellar medium today \citep{1994ApJ...426...79P,1997ApJ...478..536P}, which, combined with results from the radio-selected CORALS survey \citep{CJA05}, has further strengthened the argument that DLAs are relatively dust free compared with modern galaxies. In this paper we analyse a sample of \nca Ca~{\sc ii}~(H\&K) absorption line systems found in the spectra of SDSS Data Release 3 (DR3) quasars \citep{astro-ph/0503679}. Calcium is found to be generally underabundant in galaxies and is expected to be heavily depleted onto dust grains \citep{1996ARA&A..34..279S}. Without doubt, the detection of Ca~{\sc ii}~ indicates a large column density of HI, and a very high fraction of our sample is expected to be DLAs. This paper therefore presents one of the largest homogeneous samples of DLAs at intermediate redshifts and provides a detection method with an expected success rate substantially higher than methods based on the properties of Mg~{\sc ii}~ and Fe~{\sc ii}~ absorption. In Section~\ref{sec:sample} we present our sample of Ca~{\sc ii}~ absorbers. In Section \ref{sec:remqso} we describe our method for creating appropriate ``composite'' quasar spectra that allow the statistical removal of the underlying quasar SEDs to isolate any differences due to the presence of the Ca~{\sc ii}~ absorbers. Results are given in Section \ref{sec:results}. Further details of the method will be presented in an accompanying paper (Wild, Hewett \& Pettini 2005, in preparation) together with an analysis of Zn and Cr abundances in the composite spectrum. \vspace*{-0.5cm} \section{A sample of Ca~{\sc ii}~ absorbers} \label{sec:sample} An investigation into the properties of Ca~{\sc ii}~(H\&K: $\lambda\lambda 3934.8, 3969.6$) absorbers at redshifts $z$\,$\sim$\,1 is made possible by the extended wavelength coverage of the SDSS spectra (3800-9200\AA) combined with the improved sky subtraction of \citet{skysub}. The lower redshift limit to the absorber sample was set by the appearance of the 2175\AA \ feature at $\lambda>$4000\AA \ in the SDSS spectra, at $z=0.84$, while the upper redshift limit, $z=1.3$, was set by Ca~{\sc ii}~(H\&K) moving beyond the red limit of the spectra. We further restrict the sample of Ca~{\sc ii}~ absorbers by requiring that the 2175 \AA \ feature falls redward of the Ly-$\alpha$ forest in the individual quasar spectra. In order to detect weak Ca~{\sc ii}~(H\&K) absorption features in medium resolution spectra, the sample of quasars whose spectra are searched is restricted to those obeying the following criteria: \begin{itemize} \item entry in the DR3 quasar catalogue \citep{astro-ph/0503679} \item Galactic extinction corrected $i$-band PSF magnitude $<19.0$ \item spectroscopic signal-to-noise ratio (S/N) in the $i$-band $>10$ \item no broad absorption line (BAL) features. \end{itemize} The technique to identify BAL quasars will be presented in detail in the accompanying paper. Briefly, the scheme involves the calculation of the root-mean-square (rms) deviations around the C~{\sc iv}~, Mg~{\sc ii}~ and Fe~{\sc ii}~ lines from a continuum defined using Principal Component Analysis (PCA).
Thirteen per cent of the quasar spectra, which satisfy the first three criteria above, are flagged as potential BALs. The final sample consists of 11\,427 quasars and the final conclusions of the paper are insensitive to the precise scheme used to define the subsample of quasars to be searched. The faint magnitude limit is similar to that used for the selection of ``low-redshift'' quasar candidates in the SDSS ($i$-band magnitude of 19.1). A matched-filter search \citep{1985MNRAS.213..971H} of the quasar spectra for Ca~{\sc ii}~ doublets (in the ratio of 2:1 and 1:1, matching the properties of unsaturated and saturated lines) above a 5$\sigma$ significance threshold was performed. Candidate systems also had to possess absorption line detections corresponding to Mg~{\sc ii}~ ($\lambda\lambda 2796,2804$). Visual inspection of the candidate list eliminated a small number of spurious detections. Table \ref{tab:1} lists the properties of the remaining \nca Ca~{\sc ii}~ absorber candidates. \begin{table*} \centering \vspace*{-0.4cm} \caption{\label{tab:1} \small Name and spectroscopic identification of each quasar in our sample, together with rest-frame EWs of Ca~{\sc ii}~(H\&K), Mg~{\sc ii}~($\lambda\lambda 2796,2804$), Mg~{\sc i}~($\lambda 2853$) and Fe~{\sc ii}~($\lambda 2600$). The final column gives the derived reddening of each quasar (bar one, see text). However, the scatter in the reddening is large due to the intrinsic variation in the quasar SEDs from object to object, causing some ``negative reddening''s to occur. The PSF magnitudes are corrected for Galactic extinction.} \vspace*{-0.1cm} \begin{minipage}{16cm} \begin{tabular}{cccccccccc} \hline\hline SDSS ID & mjd,plate,fibre & mag$_{\rm i}$ & $z_{\rm em}$ & $z_{\rm abs}$ & EW Ca~{\sc ii}~ & EW Mg~{\sc ii}~ & EW Mg~{\sc i}~ & EW Fe~{\sc ii}~ & E(B-V)$_{\rm{LMC}}$ \\ \hline J002133.36+004301.2 & 51900,0390,537 & 17.42 & 1.245 & 0.942 & 0.34, 0.22 & 1.80, 1.66 & 0.55 & 0.99 & -0.022 \\ J010332.40+133234.8 & 51821,0421,049 & 18.24 & 1.660 & 1.049 & 1.07, 0.80 & 3.06, 2.66 & 1.44 & 2.25 & 0.072 \\ J014717.76+125808.4 & 51820,0429,215 & 17.71 & 1.503 & 1.040 & 0.50, 0.17 & 4.28, 4.26 & 1.15 & 3.05 & 0.041 \\ J074804.08+434138.4 & 51885,0434,340 & 18.44 & 1.836 & 0.898 & 0.53, 0.27 & 1.71, 1.19 & 0.30 & 0.69 & 0.025 \\ J080736.00+304745.6 & 52319,0860,601 & 18.53 & 1.255 & 0.969 & 0.56, 0.79 & 2.83, 2.70 & 1.24 & 2.23 & 0.001 \\ J081054.00+352226.4 & 52378,0892,106 & 18.30 & 1.304 & 0.877 & 0.56, 0.31 & 2.17, 2.11 & 0.99 & 1.78 & 0.032 \\ J081930.24+480827.6 & 51885,0440,007 & 17.64 & 1.994 & 0.903 & 0.75, 0.36 & 1.69, 1.57 & 1.03 & 1.36 & *** \\ J083157.84+363552.8 & 52312,0827,001 & 17.91 & 1.160 & 1.127 & 0.73, 0.41 & 2.52, 2.49 & 0.78 & 1.53 & -0.024 \\ J085221.36+563957.6 & 51900,0448,485 & 18.58 & 1.449 & 0.844 & 0.65, 0.49 & 3.32, 3.03 & 1.24 & 2.45 & 0.010 \\ J085556.64+383231.2 & 52669,1198,100 & 17.57 & 2.065 & 0.852 & 0.45, 0.16 & 2.65, 2.50 & 0.71 & 2.07 & 0.042 \\ J093738.16+562837.2 & 51991,0556,456 & 18.49 & 1.798 & 0.980 & 1.23, 0.62\footnote{Multiple absorption line system. All lines are fit with double Gaussians and the quoted EW is the total of the two systems. 
When fitting Ca~{\sc ii}~, the velocity-separation of the systems is taken to be that determined from the associated Fe~{\sc ii}~($\lambda 2600$) lines.} & 4.90, 4.34 & 2.35 & 3.21 & 0.300 \\ J095352.80+080104.8 & 52734,1235,465 & 17.40 & 1.720 & 1.024 & 0.48, 0.35 & 0.91, 0.80 & 0.46 & 0.64 & 0.024 \\ J100000.96+514416.8 & 52400,0903,258 & 18.70 & 1.235 & 0.907 & 0.81, 0.58 & 4.47, 3.90 & 1.41 & 2.47 & -0.012 \\ J100145.12+594008.4 & 52282,0770,087 & 17.82 & 1.186 & 0.900 & 0.47, 0.33 & 0.64, 0.58 & 0.50 & 0.41 & 0.057 \\ J103024.24+561832.4 & 52411,0947,179 & 17.81 & 1.288 & 1.001 & 0.68, 0.33 & 1.91, 1.84 & 0.95 & 1.58 & 0.030 \\ J112053.76+623104.8 & 52295,0775,455 & 17.39 & 1.130 & 1.073 & 0.57, 0.44 & 2.02, 1.94 & 0.92 & 1.50 & 0.070 \\ J112932.64+020422.8 & 51992,0512,113 & 17.31 & 1.193 & 0.966 & 0.56, 0.53 & 2.08, 2.03 & 0.71 & 1.64 & -0.004 \\ J113357.60+510845.6 & 52367,0880,288 & 18.28 & 1.576 & 1.030 & 1.25, 0.67 & 2.66, 2.72 & 0.82 & 1.97 & 0.117 \\ J115244.16+571203.6 & 52765,1311,631 & 17.92 & 1.603 & 0.848 & 0.54, 0.35 & 3.35, 3.19 & 1.15 & 2.33 & 0.212 \\ J122144.64-001142.0 & 52000,0288,078 & 18.52 & 1.750 & 0.929 & 0.58, 0.23 & 0.97, 0.82 & 0.72 & 0.74 & -0.012 \\ J124659.76+030307.2 & 52024,0522,531 & 18.81 & 1.178 & 0.939 & 1.07, 0.60 & 2.91, 2.78 & 1.30 & 2.15 & 0.184 \\ J131058.08+010824.0 & 51985,0295,325 & 17.80 & 1.389 & 0.862 & 0.74, 0.49 & 2.18, 2.27 & 1.33 & 1.46 & 0.209 \\ J144104.80+044348.0 & 52026,0587,329 & 18.42 & 1.112 & 1.040 & 0.97, 0.64 & 2.27, 2.38 & 1.05 & 1.93 & 0.149 \\ J145633.12+544832.4 & 52353,0792,242 & 17.94 & 1.518 & 0.879 & 0.47, 0.45 & 4.02, 3.72 & 1.71 & 3.11 & 0.060 \\ J151247.52+573842.0 & 52079,0612,438 & 18.69 & 2.135 & 1.045 & 0.98, 0.59 & 2.04, 2.28 & 0.75 & 1.57 & 0.139 \\ J153730.96+335837.2 & 52823,1355,633 & 17.44 & 1.024 & 0.913 & 0.38, 0.50 & 1.82, 1.78 & 0.70 & 1.16 & 0.004 \\ J160932.88+462613.2 & 52354,0813,070 & 18.67 & 2.361 & 0.966 & 0.65, 0.36 & 1.07, 0.92 & 0.65 & 0.60 & -0.034 \\ J172739.12+530227.6 & 51821,0359,042 & 17.97 & 1.442 & 0.945 & 0.62, 0.53 & 2.71, 2.57 & 0.98 & 2.17 & -0.027 \\ J173600.00+573104.8 & 51818,0358,529 & 18.18 & 1.824 & 0.872 & 0.81, 0.60 & 2.01, 1.80 & 0.87 & 1.55 & 0.066 \\ J224511.28+130903.6 & 52520,0739,030 & 18.56 & 1.546 & 0.861 & 2.03, 0.93\footnote{Multiple absorption line system detected in Mg~{\sc ii}~ doublet, for which the total EW of both systems is quoted.} & 3.99, 3.67 & 1.77 & 3.39 & -0.004 \\ J233917.76-002942.0 & 51877,0385,229 & 18.26 & 1.344 & 0.967 & 0.74, 0.64 & 2.71, 2.44 & 0.85 & 1.80 & 0.108 \\ \end{tabular} \vspace*{-0.4cm} \end{minipage} \end{table*} \vspace*{-0.5cm} \section{Removing the underlying quasar spectra} \label{sec:remqso} Traditionally the study of reddening in quasar samples has been a difficult one due to the intrinsic variation in the shape of quasar SEDs. Using the large sample of quasars in the SDSS survey we can however obtain a good estimate of the average quasar spectrum. We do this by creating composite spectra in redshift bins of $\Delta z = 0.1$ staggered in redshift by $z=0.05$. Each spectrum is corrected for Galactic reddening using the quoted extinction in the SDSS photometric catalogue and the Galactic extinction curve from \citet{1989ApJ...345..245C} and \citet{1994ApJ...422..158O}. The spectra are then moved to the quasar rest frame and normalised by dividing by the median flux in a common wavelength range (avoiding the main quasar emission lines) before all spectra are combined using an arithmetic mean. 
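A minimal numpy sketch of this stacking procedure (schematic only: the \texttt{deredden} helper, the wavelength grid and the normalisation window are placeholders for the actual pipeline details):
\begin{verbatim}
import numpy as np

def make_composite(spectra, z_em, ebv_gal, deredden,
                   norm_window=(4000.0, 4500.0)):
    """Stack observed-frame quasar spectra into a rest-frame
    composite: correct for Galactic extinction, shift to the rest
    frame, normalise by the median flux in a common window, and
    combine with an arithmetic mean."""
    rest_grid = np.arange(1500.0, 6000.0, 1.0)
    stack = []
    for (wave, flux), z, ebv in zip(spectra, z_em, ebv_gal):
        flux = deredden(wave, flux, ebv)   # hypothetical helper
        rest = wave / (1.0 + z)
        f = np.interp(rest_grid, rest, flux, left=np.nan, right=np.nan)
        w = (rest_grid > norm_window[0]) & (rest_grid < norm_window[1])
        stack.append(f / np.nanmedian(f[w]))
    return rest_grid, np.nanmean(np.array(stack), axis=0)
\end{verbatim}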
The statistical power of the SDSS is clear in the sheer number of quasar spectra available for such an analysis. Up to a redshift of $z_{\rm em}=1.9$ more than 500 quasars contribute to each composite and only for one quasar in our Ca~{\sc ii}~ sample does the appropriate composite depend on fewer than 100 quasars. By construction, the composite spectra take account of any systematic variation in the quasar SEDs as a function of redshift. It is also important to take account of any possible variation with magnitude. A number of systematic magnitude- or luminosity-dependent effects are evident in the SDSS quasar spectra \citep{2004AJ....128.2603Y} and applying a magnitude-dependent correction also has the advantage of accounting for any potential small systematic variations of the SDSS spectrophotometry with object flux. For each quasar with detected Ca~{\sc ii}~ absorption we calculate a correction in slope based on its $i$-band PSF magnitude. The correction is achieved by selecting from those quasars in the correct redshift range the 80 closest in magnitude. A second composite is created from these 80 quasars; dividing this composite by the redshift-binned composite (the ``control spectrum'') results in a residual spectrum representing the magnitude dependence, to which we fit a straight line. On dividing each of our quasar spectra with Ca~{\sc ii}~ absorption by their relevant control spectrum, we then use this fitted line to account for potential magnitude dependence. Finally, we combine all \nca spectra, divided by their control spectra and corrected for any magnitude dependence, into a single composite we term the ``residual spectrum''. \vspace*{-0.5cm} \section{Properties of the sample}\label{sec:results} In the following subsections we present the measured E(B-V) for the residual spectrum along with the reddening in each object; a Monte Carlo analysis of the significance of the result; the equivalent widths (EW) of important metal lines in the systems and an estimate of the number of DLAs in our sample. \subsection{The average reddening} Fig. \ref{fig:comp} shows the residual spectrum. Each overplotted dust extinction curve is fitted over a wavelength range of 1900:4500\AA \, excluding the regions containing absorption lines. The LMC and SMC curves are evaluated from the tabulated results of \citet{1992ApJ...395..130P} with an ${\rm R_V}$ of 3.1 and the Galactic curve is evaluated as in Section \ref{sec:remqso}. The similarity of the spectrum to the overplotted dust extinction curves is striking. However, the Galactic 2175\AA \ dust feature appears to be weak or absent. The composite spectrum is consistent with LMC- or SMC-type dust and the best fit curves have values of E(B-V)=0.056 and 0.057. A jackknife error of 0.003 is calculated by removing each absorber in turn from the composite. \begin{figure*} \begin{minipage}{\textwidth} \includegraphics[scale=0.91]{ca_dust2104.eps} \end{minipage} \caption{The residual spectrum: a composite of \nca quasars with Ca~{\sc ii}~(H\&K) absorption created after division of each spectrum by a high S/N quasar control spectrum. Overplotted are dust extinction laws for the Galaxy, LMC and SMC. For clarity these are also shown offset from the spectrum. The normalisation is calculated from that used to fit the LMC dust curve.} \label{fig:comp} \vspace*{-0.2cm} \end{figure*} \vspace*{-0.2cm} \subsection{EW Ca~{\sc ii}~(H\&K) vs.
reddening} For each absorber we define a continuum around the Ca~{\sc ii}~ lines using the IRAF routine {\small CONTINUUM} to fit a cubic spline to the rest-frame wavelength regions 3800:3930, 3940:3965, 3975:4100. Each spectrum is fit interactively by altering primarily the order of the spline until the best by-eye fit is achieved. The two lines are fit jointly with Gaussian profiles, fixing the wavelength difference to the known value and constraining the width of the two lines to be equal. Two objects (indicated in the table) show evidence for multiple absorption systems. For J093738.16+562837.2 we were able to fit a double Gaussian to all absorption lines, however, for J224511.28+130903.6 this was only possible for Mg~{\sc ii}~. In all but one case, in which the quasar is a poor match to the control spectrum, we can fit a dust curve to each residual spectrum individually. Although the results are strongly affected by the variations in the underlying quasar SEDs, the E(B-V) of the best-fit LMC extinction curve is given in the final column of Table 1. Whilst the scatter is large, it is clear that those spectra with large Ca~{\sc ii}~ EWs in general have a large E(B-V) and are dominating the reddening signal seen in the residual spectrum (Fig. \ref{fig:comp}). Fig. \ref{fig:compsplit} shows the two residual spectra created from the sample split by the EW of Ca~{\sc ii}~(K) at 0.7\AA. Fitting LMC dust laws to each, we measure E(B-V)=0.099 and 0.025 for the large and small EW composites respectively. The measured E(B-V) values are stable to the precise EW chosen to split the sample and the conclusions drawn in the final section of the paper are affected little by moving the boundary. \begin{figure*} \begin{minipage}{\textwidth} \includegraphics[scale=0.91]{ca_dustsplit2104.eps} \end{minipage} \caption{Residual spectra created by splitting our sample into two: EW(Ca K)$>$0.7\AA \ in black and EW(Ca K)$<$0.7\AA \ in grey. The large (small) EW composite contains 13 (18) objects. Overplotted and offset are the best fitting LMC extinction curves.} \label{fig:compsplit} \vspace*{-0.2cm} \end{figure*} \vspace*{-0.2cm} \subsection{Monte Carlo simulations of random samples} To confirm that our result is not simply due to the variation in quasar SEDs we create 10~000 Monte Carlo simulations of \nca randomly chosen quasars, randomly assign them a $z_{\rm abs}<z_{\rm em}$ in the same redshift range as the Ca~{\sc ii}~ absorbers, and treat each sample identically to the actual sample of Ca~{\sc ii}~ absorbers. Fitting an LMC dust curve to the final residual spectra, we find a maximum E(B-V) of $<0.04$, i.e. we never obtain an E(B-V) as large as that found in the Ca II absorber sample and thus obtain a detection significance of $>99.99\%$. The distribution of simulation E(B-V) has a variance of $\sim$0.01 and a small tail to redder values, possibly due to extinction from the host galaxy. \vspace*{-0.2cm} \subsection{Dust obscuration, the proportion of DLAs and HI column density} Extinction of the quasar due to dust in the intervening absorbers will cause objects to be missed from the magnitude limited survey. For each absorber we calculate the extinction to the background quasar at 7500\AA \ in the observed frame (the effective wavelength of the $i$-band that defines the magnitude selection) based on their estimated E(B-V). The average extinction for all objects is $\langle A_{7500}\rangle$ = 0.39, with values of 0.67 and 0.18 respectively for the large- and small-EW sub-samples. 
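The extinction estimate itself is a one-line operation once a dust law is adopted; a sketch (\texttt{ext\_curve} is a placeholder for the adopted LMC law, $A_\lambda/{\rm E(B-V)}$, with the dust placed at the absorber redshift):

\begin{verbatim}
def a_7500(ebv, z_abs, ext_curve):
    """Extinction (mag) at 7500 A in the observed frame for dust at
    z_abs; the relevant rest-frame wavelength is 7500 / (1 + z_abs)."""
    lam_rest = 7500.0 / (1.0 + z_abs)
    return ebv * ext_curve(lam_rest)

# e.g. <A_7500> = np.mean([a_7500(e, z, lmc_curve)
#                          for e, z in zip(ebv_list, z_abs_list)])
\end{verbatim}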
Assuming our ability to detect Ca~{\sc ii}~ is unaffected by the magnitude of the background quasars, we can obtain a lower limit on the fraction of Ca~{\sc ii}~ systems we would have missed due to extinction. A correction factor is calculated as the ratio of the total pathlength searched in quasars within the magnitude limit ($m_{lim}=19$) to the total pathlength searched in quasars between $m_{lim}$ and $m_{lim}-\langle A_{7500}\rangle$. The result is 2.3 for the large-EW sub-sample and the number of absorbers should therefore be corrected from 13 to 30. Equivalently, for the small-EW subsample a factor of 1.1 corrects the number from 18 to 20. Overall a total corrected number of $\sim50$ Ca~{\sc ii}~ systems is therefore expected along the 11\,427 sightlines searched, with an average E(B-V) = 0.07, i.e.\ at least 38\% of objects are missed. This is consistent with the upper limit expected for DLAs from the radio-selected CORALS survey at high redshift (50\%). However, our results represent lower limits on the fraction of systems missed: any objects with a higher dust content will not be seen in the SDSS quasar sample. To assess the implications of our result, it is important to understand the relation between the Ca~{\sc ii}~ absorbers and DLAs. Firstly, converting the minimum EW(Ca K) detected in our sample ($\sim$0.2\AA) into a column density, N(Ca II) \citep{1948ApJ...108..242S}, we find hydrogen column densities of $6 \times 10^{20}$ to $ 2 \times 10^{21}{\rm cm}^{-2}$ are required in our Galaxy to achieve similar values of N(Ca II) \citep{1993A&AS..100..107S}. The conversion does not account for the probable lower metallicity of the DLAs and suggests column densities well in excess of the accepted limit for DLAs. Secondly, using a sample of 197 absorbers with measured N(HI), Rao, Turnshek \& Nestor (2005, in preparation) find that, where line EWs are available, all DLAs have EW(Mg~{\sc ii}~$\lambda 2796$)/EW(Fe~{\sc ii}~$\lambda 2600$)$<$2 and EW(Mg~{\sc i}~$\lambda 2853$)$>$0.2\AA \ and these DLAs represent 43\% of the Mg~{\sc ii}~ systems fulfilling these criteria. Also, 8 of 10 systems with EW(2853) $>0.8$\AA \ are confirmed to be DLAs. Table \ref{tab:1} lists the EWs of metal lines for each of our absorbers, measured in a similar way to the Ca~{\sc ii}~ EWs. 30 of our \nca objects fall within the (Mg~{\sc ii}~,Fe~{\sc ii}~,Mg~{\sc i}~) criterion and 20 of the \nca within the Mg~{\sc i}~ criterion. These fractions suggest that the majority of our objects are DLAs according to accepted definitions. We can also make an estimate of the number density of Ca~{\sc ii}~ absorbers and compare this to values for DLAs. For our sample of 11\,427 quasars and absorbers with $0.84<z_{\rm abs}<1.3$ we search a total $\Delta z \sim 4214$. Correcting for the effects of dust extinction we estimate $\sim 50$ absorbers to lie along this path, giving an $\rm{n(z)} \sim 0.012$. This is of course only a lower limit as it does not account for regions in the spectra of poor S/N ratio prohibiting the identification of Ca~{\sc ii}~. \citet{2000ApJS..130....1R} find $\rm{n_{DLA} (z=1.15)} = 0.1^{+0.10}_{-0.08}$. Performing our own search for Mg~{\sc ii}~ absorbers in the sample of quasars defined in Section \ref{sec:sample}, using the Mg~{\sc ii}~/Fe~{\sc ii}~/Mg~{\sc i}~ EW criteria above, we find $\rm{n(z)} = 0.089\pm0.005$. We conclude that our results are consistent with strong Ca~{\sc ii}~ absorbers representing $\sim$10\% of all DLAs.
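In code form, the bookkeeping of the obscuration correction above is simple (a sketch; the factors, counts and pathlength are those quoted in the text):

\begin{verbatim}
import numpy as np

def correction_factor(mags, dz, m_lim, a_mean):
    """Ratio of the pathlength searched below the magnitude limit to the
    pathlength in the strip [m_lim - <A_7500>, m_lim], out of which
    dust-dimmed quasars would have dropped."""
    mags, dz = np.asarray(mags), np.asarray(dz)
    total = dz[mags < m_lim].sum()
    strip = dz[(mags > m_lim - a_mean) & (mags < m_lim)].sum()
    return total / strip

n_corr = 13 * 2.3 + 18 * 1.1   # ~30 + ~20 = ~50 absorbers after correction
missed = 1.0 - 31 / n_corr     # ~0.38, i.e. at least 38% of objects missed
n_z = n_corr / 4214.0          # ~0.012 absorbers per unit redshift
\end{verbatim}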
Finally, by assuming an average value of N(HI) for our absorbers ($16.6 \times 10^{20}{\rm cm}^{-2}$, calculated from Table 3 of RT00 for $z_{em} > 0.83$) and taking the obscuration-corrected average E(B-V) of 0.07, we can estimate their gas-to-dust ratio to be $2.4 \times 10^{22}{\rm cm}^{-2}{\rm mag}^{-1}$, which can be compared to that of the Galaxy, $4.93 \times 10^{21}{\rm cm}^{-2}{\rm mag}^{-1}$ \citep{1994ApJ...427..274D}, and the LMC, $2.00 \times 10^{22}{\rm cm}^{-2}{\rm mag}^{-1}$ \citep{1982A&A...107..247K}. If our absorbers were to contain column densities of neutral hydrogen in excess of average DLAs, the value would increase. Multiplying our result by 10 (i.e. assuming the remaining 90\% of DLAs not in our sample contain negligible dust) we can estimate a global gas-to-dust ratio for DLAs of $2.4 \times 10^{23}{\rm cm}^{-2}{\rm mag}^{-1}$, in agreement with the measured lower metallicities of DLAs compared to the Galaxy. \vspace*{-0.5cm} \section{Discussion and Conclusions} Recent results have pointed to DLAs as a sample of chemically young galaxies with mixed morphologies and evidence for only small quantities of dust. We have shown that significant reddening due to dust is present in a subsample of intermediate redshift DLAs identified through Ca~{\sc ii}~(H\&K) absorption. The extinction associated with this detected reddening results in a minimum of $\sim$40\% of absorbers being missed from the SDSS magnitude limited sample, a fraction that is just consistent with the conclusions of the CORALS radio-selected sample (which relates to the entire DLA population). The detection of reddening due to dust helps to fill in the gaps in our understanding of the relation between DLAs and other high redshift galaxies: our measured E(B-V) values, representing the upper limit found in DLAs, fit into the lower end of the range covered by Lyman Break Galaxies at z$\sim$3 \citep[e.g.][]{2001ApJ...562...95S} and gravitational lens galaxies with $0<z<1$ \citep{1999ApJ...523..617F}. Follow up observations will add substantially to our knowledge of the morphology and chemical properties of z$\sim$1 DLAs. Finally, while criteria based on the properties of Mg~{\sc ii}~, Fe~{\sc ii}~ and Mg~{\sc i}~ lines can identify DLAs with a success rate of $\sim$43\%, selection based on the detection of Ca~{\sc ii}~(H\&K) is likely to increase the success-rate to $\sim$100\%, albeit at the expense of sensitivity to only $\sim$10\% of the DLA population. The two selection techniques are complementary and further comparison of samples defined using both techniques should lead to a greater understanding of the nature of DLAs. \vspace*{-0.5cm} \section*{acknowledgements} We would like to thank Max Pettini, Michael Murphy, Chris Akerman and Tae-Sun Kim for valuable discussions and the referee, Joe Liske, for the prompt and thorough response. We also thank Sandhya Rao for making available to us her most recent results on Mg~{\sc ii}~ systems in DLAs, and Chris Akerman and Sara Ellison for their recent results from the CORALS survey. VW acknowledges the award of a PPARC research studentship. This work made extensive use of the Craig Markwardt IDL library. {\small Funding for the Sloan Digital Sky Survey (SDSS) has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington.} \bibliographystyle{mn2e} \vspace*{-0.5cm}
\section{Introduction} Following the work of Barg and Forney \cite{BF02}, Nazari \cite{Nazari11} and Nazari {\it et al.} \cite{NAP14}, in a recent work \cite{trc}, the error exponent of the typical random block code for a general discrete memoryless channel (DMC) was studied. The error exponent of the typical random code (TRC) was defined as the long--block limit of the negative normalized {\it expectation of the logarithm} of the error probability, as opposed to the classical random coding exponent, defined as the negative normalized {\it logarithm of the expectation} of the error probability. The investigation of error exponents for TRCs was motivated in \cite[Introduction]{trc} by a few points: (i) Owing to Jensen's inequality, it cannot be smaller than the random coding error exponent, and so, it is a more optimistic performance measure than the ordinary random coding exponent, especially at low rates. (ii) Given that a certain measure concentration property holds, it is more relevant as a performance metric, since the code is normally assumed to be randomly selected just once, and then used repeatedly. (iii) It captures correctly the behavior of random--like codes \cite{Battail95}, which are well known to be very good codes. In \cite{trc}, an exact single--letter expression was derived for the error exponent function of the TRC assuming a general DMC and an ensemble of fixed composition codes. Among other things, it was shown in \cite{trc} (similarly as in \cite{BF02} and \cite{Nazari11}) that the TRC error exponent is: (i) the same as the expurgated exponent at zero rate, (ii) below the expurgated exponent, but above the random coding exponent for low positive rates, and (iii) the same as the random coding exponent beyond a certain rate. In view of the practical importance and the rich literature on trellis codes, and convolutional codes in particular (see, e.g., \cite{Costello74}, \cite{Forney74}, \cite{Johannesson77}, \cite{JR99}, \cite{Massey76}, \cite{SF00}, \cite{VO69}, \cite{VO79}, \cite{ZSSHJ99}, just to name a few, as well as many references therein), the purpose of this paper is to study the behavior and the performance of typical random trellis codes. More specifically, we aim at an investigation parallel to that of \cite{trc}, in the realm of ensembles of time--varying trellis codes. The main motivation is to compare the error exponent of the typical random trellis code to that of the typical block code on the basis of similar decoding complexity, in the spirit of the similar comparison in \cite[Chap.\ 5]{VO79}, which was carried out for the ordinary random coding exponents of the two classes of codes. Technically speaking, our main result is that the error exponent of the typical random, time--varying trellis code is lower bounded by a certain expression that is related to the expurgated exponent, and its value lies between those of the convolutional random coding error exponent and the convolutional--coding expurgated exponent functions \cite{VO69}, \cite[Sect.\ 5]{VO79}. For the subclass of linear trellis codes, namely, time--varying convolutional codes, the result is improved: the typical time--varying convolutional code achieves the convolutional--coding expurgated exponent, provided that the channel is binary--input, output--symmetric (see also \cite{VO69}).
In other words, in the limit of large constraint length, a randomly selected time--varying convolutional code achieves the convolutional expurgated exponent with an overwhelmingly high probability. This is parallel to a similar behavior in the context of ordinary random block codes (without structure), where the error exponent of the typical random code is inferior to the corresponding expurgated exponent, and superior to the random coding error exponent (at low rates), but when it comes to linear random codes, the typical--code error exponent coincides with the expurgated exponent. These results both sharpen and generalize some earlier statements on the fraction of time--varying (or periodically time--varying) convolutional codes with certain properties (see, for example, \cite[Lemma 3.33, Lemma 4.15]{JR99}), and in particular, the fact that (at least) half of the convolutional codes achieve the convolutional coding exponent \cite[Theorem]{VO69}. Beyond this, our contributions are in several aspects. \begin{enumerate} \item Our analysis provides a fairly clear insight on the behavior of the typical codes, i.e., their free distances and their distance enumerators. \item Thanks to the use of the method of types, we are able to characterize the dominant error events, that is, typical lengths of error bursts and joint types of incorrect trellis paths together with the correct path, which are even more informative than distances. \item Our analysis is considerably general: we address general trellis codes (not merely convolutional codes) with a general random coding distribution (not necessarily the uniform distribution) and a general discrete memoryless channel (DMC), not merely binary--input, output--symmetric channels. \item We further extend the results in two directions simultaneously, allowing both channels with input memory and mismatch. \end{enumerate} The outline of the remaining part of this paper is the following. In Section 2, we establish notation conventions, define the problem setting, provide some background, and spell out the objectives of the paper more formally. In Section 3, we state the main result, and in Section 4 we prove it. Section 5 is devoted to some discussion, and finally, in Section 6, we extend the main result to channels with memory and mismatch. \section{Notation, Problem Setting, Background and Objectives} \label{npbo} \subsection{Notation} \label{not} Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by capital letters and the corresponding lower case letters, both in the bold face font. Their alphabets will be superscripted by their dimensions. For example, the random vector $\bX=(X_1,\ldots,X_r)$, ($r$ -- positive integer) may take a specific vector value $\bx=(x_1,\ldots,x_r)$ in ${\cal X}^r$, the $r$--th order Cartesian power of ${\cal X}$, which is the alphabet of each component of this vector. The probability of an event ${\cal E}$ will be denoted by $\mbox{Pr}\{{\cal E}\}$, and the expectation operator will be denoted by $\bE\{\cdot\}$. For two positive sequences $\{a_k\}$ and $\{b_k\}$, the notation $a_k\exe b_k$ will stand for equality in the exponential scale, that is, $\lim_{k\to\infty}\frac{1}{k}\log \frac{a_k}{b_k}=0$.
Similarly, $a_k\lexe b_k$ means that $\limsup_{k\to\infty}\frac{1}{k}\log \frac{a_k}{b_k}\le 0$, and so on. The indicator function of an event ${\cal E}$ will be denoted by ${\cal I}\{{\cal E}\}$. The empirical distribution of a string of symbols in a finite alphabet ${\cal X}$, denoted by $\hat{P}_X$, is the vector of relative frequencies $\hat{P}_X(x)$ of each symbol $x\in{\cal X}$ along the string. Here $X$ denotes an auxiliary random variable (RV) distributed according to this distribution. Information measures associated with empirical distributions will be denoted with `hats'. For example, the entropy associated with the empirical distribution $\hat{P}_X$, namely, the empirical entropy, will be denoted by $\hat{H}(X)$. Similar conventions will apply to the joint empirical distribution, the joint type class, the conditional empirical distributions and the conditional type classes associated with pairs (and multiples) of sequences of length $r$. Accordingly, $\hP_{XX^\prime}$ will be the joint empirical distribution associated with a pair of strings of the same length, $\hH(X,X^\prime)$ will designate the empirical joint entropy, and $\hH(X|X^\prime)$ will be the empirical conditional entropy. \subsection{Problem Setting} \label{ps} Consider the system configuration depicted in Fig.\ \ref{blockdiagram}. Let the information source, $U_1, U_2,\ldots$, be the binary symmetric source (BSS), i.e., an infinite sequence of binary random variables taking on values in ${\cal U}=\{0,1\}$, independently of each other, and with equal probabilities for `0' and `1'. We shall group the bits of this information source in blocks of length $m$, and denote $\bU_t=(U_{m(t-1)+1},U_{m(t-1)+2},\ldots,U_{mt})$, $\bU_t\in{\cal U}^m$, $t=1,2,\ldots$. A {\it time--varying trellis code} of rate $R=m/n$ and with memory size $k$, is a sequence of functions $f_1, f_2,\ldots$, $f_t:{\cal U}^{mk}\to{\cal X}^n$, $t=1,2,\ldots$, where ${\cal X}$ is the finite channel input alphabet of size $J$. When fed with an input information sequence, $\bu_1,\bu_2,\ldots$, which is a realization of $\bU_1,\bU_2,\ldots$, the time--varying trellis code outputs a code sequence, $\bx_1,\bx_2,\ldots$, according to \begin{equation} \bx_t=f_t(\bu_t,\bu_{t-1},\ldots,\bu_{t-k+1}),~~~~~~~t=1,2,\ldots \end{equation} The product $mk$ designates the {\it constraint length} of the trellis code, and it will henceforth be denoted by $K$. As is well known, a trellis code is a special case of a finite--state encoder whose total number of states is $2^K$. On the other hand, a convolutional code is a special case of a trellis code where $\{f_t\}$ are linear functions over the relevant field.
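For concreteness, a minimal sketch of such an encoder (illustrative Python, not part of the analysis), with each $f_t$ stored as a lookup table indexed by the $K=mk$ most recent input bits, and with the zero-padding termination discussed below:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def draw_trellis_code(m, n, k, T, alphabet, Q):
    """One random time-varying trellis code: T lookup tables, each mapping
    the K = m*k most recent input bits to n symbols drawn i.i.d. under Q
    (this is the random ensemble defined formally below)."""
    return [rng.choice(alphabet, size=(2 ** (m * k), n), p=Q)
            for _ in range(T)]

def trellis_encode(bits, f, m, k):
    """x_t = f_t(u_t, ..., u_{t-k+1}); the block is terminated with
    m*(k-1) zero input bits to reset the encoder state."""
    bits = np.concatenate([np.asarray(bits, dtype=int),
                           np.zeros(m * (k - 1), dtype=int)])
    window = np.zeros(m * k, dtype=int)   # (u_t, ..., u_{t-k+1})
    out = []
    for t in range(len(bits) // m):
        u_t = bits[t * m:(t + 1) * m]
        window = np.concatenate([u_t, window[:-m]])
        idx = int("".join(map(str, window)), 2)
        out.append(f[t % len(f)][idx])    # reuse tables cyclically if needed
    return np.concatenate(out)
\end{verbatim}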
A discrete memoryless channel (DMC) $W$ is defined by a set of single--letter conditional probabilities (or probability density functions), $\{W(y|x),~x\in{\cal X},~y\in{\cal Y}\}$, where ${\cal X}$ is as before and ${\cal Y}$ is the channel output alphabet, which may be discrete or continuous.\footnote{Throughout the sequel, we will treat ${\cal Y}$ as a discrete alphabet, with the understanding that in the continuous case, all summations over ${\cal Y}$ should be replaced by integrals.} When the channel is fed by a sequence, $x_1,x_2,\ldots$, $x_t\in{\cal X}$, $t=1,2,\ldots$ (a realization of a random process, $X_1,X_2,\ldots$), it responds by generating a corresponding output sequence, $y_1,y_2,\ldots$, $y_t\in{\cal Y}$, $t=1,2,\ldots$ (a realization of a random process, $Y_1,Y_2,\ldots$), according to \begin{equation} \label{channel} \mbox{Pr}\{Y_1=y_1,Y_2=y_2,\ldots,Y_r=y_r|X_1=x_1,X_2=x_2,\ldots,X_r=x_r\}= \prod_{t=1}^rW(y_t|x_t). \end{equation} As customary, we assume that the trellis code is decoded in long blocks using the maximum--likelihood (ML) decoder, which is implementable by the Viterbi algorithm, and by terminating each block with $m(k-1)$ zero input bits in order to reset the state of the encoder. As mentioned earlier, we also extend the results to channels with input memory (inter-symbol interference) along with mismatched decoding metrics, which are still implementable by the Viterbi algorithm. We consider the ensemble of time--varying trellis codes where for every $t=1,2,\ldots$ and every possible value of $(\bu_t,\bu_{t-1},\ldots,\bu_{t-k+1})\in{\cal U}^K$, the value of $f_t(\bu_t,\bu_{t-1},\ldots,\bu_{t-k+1})\in{\cal X}^n$ is selected independently at random under the i.i.d.\ distribution $Q^n$, namely, each one of the $n$ components of $f_t(\bu_t,\bu_{t-1},\ldots,\bu_{t-k+1})\in{\cal X}^n$ is randomly drawn independently under a fixed distribution $Q$ over ${\cal X}$. For the case of time--varying convolutional codes, the symbols $\{x_t\}$ are assumed binary ($J=2$), and $\{f_t\}$ are assumed linear functions over $\mbox{GF}(2)$, namely, \begin{equation} f_t(\bu_t,\ldots,\bu_{t-k+1})=\bx_{0,t}\oplus\sum_{j=0}^{k-1}\bu_{t-j}G_j(t), \end{equation} where $\{\bu_{t-j}\}$ are considered row--vectors of dimension $m$, $\{\bx_{0,t}\}$ are binary vectors of dimension $n$, $\{G_j(t)\}$ are binary $m\times n$ matrices, the operations $\oplus$ and $\sum$ both designate summations modulo 2, and the channel is assumed binary--input, output--symmetric. The entries of $\{\bx_{0,t}\}$ and $\{G_j(t)\}$ are randomly and independently selected with equal probabilities of $0$ and $1$. \begin{figure}[ht] \vspace*{1cm} \hspace*{2cm}\input{trelliscode.pstex_t} \caption{\small Block diagram of a communication system based on a time--varying trellis code.} \label{blockdiagram} \end{figure} \subsection{Background} \label{bg} The traditional ensemble performance metric is the exponential decay rate (as a function of $K$) of the expectation of the first--error event probability, or the per--node error probability \cite[p.\ 243]{VO79}, as well as the related bit error probability, \begin{equation} {\cal E}_{\mbox{\tiny rtc}}(R,Q)=\liminf_{K\to\infty} \left\{-\frac{\log \bE P_{\mbox{\tiny e}}}{K}\right\}, \end{equation} where the subscript ``rtc'' stands for ``random trellis code'' and accordingly, the expectation is w.r.t.\ the randomness of the time--varying trellis code, see, e.g., \cite[Chap.\ 5]{VO79}.
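Drawing a code from the convolutional sub-ensemble thus amounts to drawing the offsets $\{\bx_{0,t}\}$ and matrices $\{G_j(t)\}$; a sketch over $\mbox{GF}(2)$ (illustrative only):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def draw_convolutional_code(m, n, k, T):
    """All entries of x_{0,t} and G_0(t), ..., G_{k-1}(t) are drawn
    independently and equiprobably from {0, 1}."""
    x0 = rng.integers(0, 2, size=(T, n))       # offsets x_{0,t}
    G = rng.integers(0, 2, size=(T, k, m, n))  # m x n binary matrices G_j(t)
    return x0, G

def conv_encode(u_blocks, x0, G):
    """x_t = x_{0,t} (+) sum_j u_{t-j} G_j(t), all operations modulo 2."""
    T, k, m, n = G.shape
    x = []
    for t in range(len(u_blocks)):
        acc = x0[t % T].copy()
        for j in range(min(k, t + 1)):
            acc ^= (u_blocks[t - j] @ G[t % T, j]) % 2   # mod-2 sum
        x.append(acc)
    return np.concatenate(x)
\end{verbatim}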
As shown in \cite[Sect.\ 5.1]{VO79}, the result for random time--varying convolutional codes, which easily extends to random time--varying trellis codes, is that this error exponent is essentially\footnote{The actual exponent is slightly smaller than that, but by an amount $\epsilon$ that can be made arbitrarily small. Here and in the sequel, we will ignore this very small loss.} given by \begin{equation} {\cal E}_{\mbox{\tiny rtc}}(R,Q)\ge E_{\mbox{\tiny rtc}}(R,Q)\dfn \left\{\begin{array}{ll} R_0(Q)/R & R < R_0(Q)\\ E_0(\rho_{\mbox{\tiny rtc}}(R),Q)/R & R > R_0(Q)\end{array}\right. \end{equation} where $\rho_{\mbox{\tiny rtc}}(R)$ is the solution $\rho$ to the equation $R = E_0(\rho,Q)/\rho$, $E_0(\rho,Q)$ being the Gallager function, \begin{equation} E_0(\rho,Q)=-\log\left(\sum_y\left[\sum_x Q(x)W(y|x)^{1/(1+\rho)}\right]^{1+\rho}\right), \end{equation} and $R_0(Q)=E_0(1,Q)$. The best result is obtained, of course, upon maximizing over $Q$, in which case, for $R > R_0=\max_Q R_0(Q)$, the resulting error exponent is the best achievable error exponent, as it meets the converse bound of \cite[Theorem 5.4.1]{VO79}.\footnote{Although the converse bound in \cite[Sect.\ 5.4]{VO79} is proved with convolutional codes in mind, the linearity of convolutional codes is not really used there, and so the very same proof applies also to non--linear trellis codes.} It follows then that there is room for improvement only for rates below $R_0$. Indeed, an improvement in this range is accomplished, for binary--input, output--symmetric channels \cite[p.\ 86]{VO79}, by an expurgated bound, derived in \cite{VO69}, \cite[Sect.\ 5.3]{VO79}, and given by \begin{equation} \label{cex} E_{\mbox{\tiny cex}}(R,Q)=\frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny cex}}(R),Q)}{R}, \end{equation} where $\rho_{\mbox{\tiny cex}}(R)$ is the solution $\rho\ge 1$ to the equation $R = E_{\mbox{\tiny x}}(\rho,Q)/\rho$, with $E_{\mbox{\tiny x}}(\rho,Q)$ being defined as \begin{equation} E_{\mbox{\tiny x}}(\rho,Q)=-\rho\log\left[\sum_{x,x^\prime}Q(x)Q(x^\prime) \left(\sum_y\sqrt{W(y|x)W(y|x^\prime)}\right)^{1/\rho}\right]. \end{equation} More precisely, in \cite{VO69} the main theorem asserts that for {\it at least half} of the rate--$1/n$ time--varying convolutional codes, the probability of error does not exceed \begin{equation} \label{vo69} \left(\frac{2L}{1-2^{-\epsilon/\rho R}}\right)^\rho\cdot\exp\{-KE_{\mbox{\tiny cex}}(\rho,Q)\}, \end{equation} where $Q$ is the binary symmetric source (which, in our notation, means the uniform distribution over the binary alphabet ${\cal X}$), $L$ is the block length, $\epsilon = E_{\mbox{\tiny x}}(\rho,Q)-\rho R > 0$ is an arbitrarily small positive real, and $\rho\ge 1$ is any number that satisfies $R= [E_{\mbox{\tiny x}}(\rho,Q)-\epsilon]/\rho < R_0$. It is clear from the proof of this theorem that choosing to refer to exactly half of the codes is quite arbitrary, and a similar bound, with the same exponential rate (assuming that $L$ is sub--exponential in $K$), would apply to any, arbitrarily large, fraction of the codes, at the expense of increasing the pre--exponential factor of (\ref{vo69}) accordingly. For example, if the factor $2L$ in the numerator of the pre--exponent of (\ref{vo69}) is replaced by $100L$, then the bound would apply to at least 99\% of the time--varying convolutional codes with block length $L$, and so on.
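For concreteness, $E_0(\rho,Q)$, $R_0(Q)$ and $E_{\mbox{\tiny x}}(\rho,Q)$ are easy to evaluate numerically for any DMC specified as a matrix of transition probabilities; a sketch (natural logarithms, so rates come out in nats):

\begin{verbatim}
import numpy as np

def E0(rho, Q, W):
    """Gallager function: -log sum_y [sum_x Q(x) W(y|x)^{1/(1+rho)}]^{1+rho};
    W[x, y] holds W(y|x)."""
    inner = (Q[:, None] * W ** (1.0 / (1.0 + rho))).sum(axis=0)
    return -np.log((inner ** (1.0 + rho)).sum())

def Ex(rho, Q, W):
    """Expurgated function, with the Bhattacharyya kernel
    B(x, x') = sum_y sqrt(W(y|x) W(y|x'))."""
    B = np.sqrt(W) @ np.sqrt(W).T
    return -rho * np.log((np.outer(Q, Q) * B ** (1.0 / rho)).sum())

p = 0.1
W = np.array([[1 - p, p], [p, 1 - p]])   # BSC(p); rows indexed by x
Q = np.array([0.5, 0.5])
R0 = E0(1.0, Q, W)                       # R_0(Q) = E_0(1, Q)
\end{verbatim}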
The above discussion indicates that the ensemble of convolutional codes obeys a {\it measure concentration property} concerning their error exponent.\footnote{As mentioned in the Introduction, several assertions in the same spirit can be found also in \cite{JR99}, see for example, Lemmas 3.33 and 4.15 therein.} \subsection{Objectives} \label{obj} The purpose of this work is to study the above mentioned measure concentration property in a systematic manner and to broaden the scope in several directions at the same time, as will be specified shortly. In this context, similarly as in \cite{trc}, we refer to the error exponent of the typical random trellis code, and as discussed in \cite[Introduction]{trc}, if the ensemble of codes possesses the relevant measure concentration property associated with exponential error bounds, then the error exponent of the {\it typical random trellis code} is captured by the quantity \begin{equation} {\cal E}_{\mbox{\tiny trtc}}(R,Q)\dfn\liminf_{K\to\infty} \left\{-\frac{\bE\log P_{\mbox{\tiny e}}}{K}\right\}, \end{equation} which is similar to the above definition of ${\cal E}_{\mbox{\tiny rtc}}(R,Q)$, except that the expectation operator and the logarithmic function are commuted. It will be understood that the limit of $K=mk\to\infty$ will be taken under the regime where $m$ and $n$ (and hence also $R=m/n$) are held fixed whereas $k\to\infty$. A similar definition will apply to the smaller ensemble of time--varying convolutional codes and it will be denoted by ${\cal E}_{\mbox{\tiny trcc}}(R,Q)$, where the subscript stands for {\it typical random convolutional code}. \section{Main Result} Our main theorem has two parts, where the second part actually follows directly from \cite{VO69} (as discussed in Subsection \ref{bg}) and is included here for completeness. \begin{theorem} \label{thm} Consider the problem setting defined in Subsection \ref{ps}. Then, for $R < R_0(Q)$, \begin{enumerate} \item[(a)] \begin{equation} {\cal E}_{\mbox{\tiny trtc}}(R,Q)\ge E_{\mbox{\tiny trtc}}(R,Q)\dfn \frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)}{R}, \end{equation} where $\rho_{\mbox{\tiny trtc}}(R)$ is the solution, $\rho\ge 1$, to the equation \begin{equation} R=\frac{E_{\mbox{\tiny x}}(\rho,Q)}{2\rho-1}. \end{equation} \item[(b)] For the ensemble of time--varying convolutional codes and the binary--input, output--symmetric channel (with $Q(0)=Q(1)=\frac{1}{2}$), \begin{equation} {\cal E}_{\mbox{\tiny trcc}}(R,Q)\ge E_{\mbox{\tiny cex}}(R,Q). \end{equation} \end{enumerate} \end{theorem} We emphasize that here the setup is considerably extended relative to that of \cite{VO69}, especially in part (a). This extension takes place in several dimensions at the same time: \begin{enumerate} \item Allowing general rational coding rates, $R=m/n$, rather than $R=1/n$. \item Using ensembles with a general random coding distribution $Q$, instead of just the uniform distribution. In this case, assertions about fractions of codes with certain properties are replaced by parallel assertions concerning (high) probabilities of possessing these properties. \item Assuming a general DMC, not necessarily a binary--input, output--symmetric channel. \item As was mentioned already, we are referring to general trellis codes, as an extension of convolutional codes, which are linear. \item A further extension is for mismatched decoding for a channel with input memory.
\end{enumerate} Furthermore, our analysis, which is strongly based on the method of types, will provide some insights on the character of two ingredients of interest: \begin{enumerate} \item Structure and distance enumeration (or more generally, type class enumeration) of the typical random trellis code that achieves the convolutional coding expurgated exponent. \item Error events that dominate the error probability: joint types of decoded trellis paths and the correct paths, along with the lengths of the typical error bursts. \end{enumerate} These points, among others, will be discussed in more detail in Section \ref{dis}. \section{Proof of Theorem \ref{thm}} \label{proof} Here we prove part (a) only, because part (b) can be obtained in a very similar manner by a small modification in a few places. Also, as discussed in Subsection \ref{bg}, part (b) was actually proved already in \cite{VO69} (at least for rate--$1/n$ codes, but the extension to $m/n$--codes is not difficult). Clearly, in order to derive a bound on ${\cal E}_{\mbox{\tiny trtc}}(R,Q)$, we have to assess $\bE\log P_{\mbox{\tiny e}}({\cal C}_k)$, where ${\cal C}_k$ designates a randomly selected trellis code with memory $k$ (and constraint length $K=mk$) in the ensemble described in Subsection \ref{ps}. Our first observation is the following: suppose we can define, for every $k\ge 1$, a subset ${\cal T}_k$ of codes $\{{\cal C}_k\}$ whose probability, $1-\epsilon_k\dfn \mbox{Pr}\{{\cal T}_k\}$, tends to unity as $k\to\infty$. Then, \begin{eqnarray} \label{condexp} \bE\log P_{\mbox{\tiny e}}({\cal C}_k)&=&\mbox{Pr}\{{\cal T}_k \}\cdot\bE\{\log P_{\mbox{\tiny e}}({\cal C}_k)|{\cal C}_k\in {\cal T}_k\}+\mbox{Pr}\{{\cal T}_k^{\mbox{\tiny c}}\}\cdot\bE\{\log P_{\mbox{\tiny e}}({\cal C}_k)|{\cal C}_k\in {\cal T}_k^{\mbox{\tiny c}}\}\nonumber\\ &\le&(1-\epsilon_k)\cdot\bE\{\log P_{\mbox{\tiny e}}({\cal C}_k)|{\cal C}_k\in{\cal T}_k \}+\epsilon_k\cdot\log 1\nonumber\\ &=&(1-\epsilon_k)\cdot\bE\{\log P_{\mbox{\tiny e}}({\cal C}_k)|{\cal C}_k\in{\cal T}_k\}\nonumber\\ &\le&(1-\epsilon_k)\cdot\log\left[\max_{{\cal C}_k\in{\cal T}_k} P_{\mbox{\tiny e}}({\cal C}_k)\right]. \end{eqnarray} Thus, if we can define a subset of codes ${\cal T}_k$ which, on the one hand, has very high probability, and on the other hand, admits a uniform upper bound on $P_{\mbox{\tiny e}}({\cal C}_k)$ for every ${\cal C}_k\in{\cal T}_k$, this would yield a lower bound on the error exponent of the typical random trellis code. We will use this simple observation shortly after we define the subset ${\cal T}_k$. As mentioned earlier, we are assuming that each transmitted block is terminated by $k-1$ all--zero input vectors (each of dimension $m$) in order to reset the state of the shift register of the trellis encoder. Similarly as in linear convolutional codes, here too, every incorrect path $\{\bv_t\}$, diverging from the correct path, $\{\bu_t\}$, at a given node $j$ and re-merging with the correct path exactly after $k+\ell$ branches, must have the form $$\bv_j,\bv_{j+1},\ldots,\bv_{j+\ell},\bu_{j+\ell+1}, \bu_{j+\ell+2},\ldots,\bu_{j+\ell+k-1},$$ where $\bv_j$ and $\bv_{j+\ell}$ can be any one of the $2^m-1$ incorrect input $m$--vectors at nodes $j$ and $j+\ell$, respectively. Between $j$ and $j+\ell$ there should be no sub-strings of $k-1$ consecutive correct inputs. Thus, overall there are no more than $(2^m-1)2^{m\ell}$ such incorrect paths \cite[p.\ 311]{VO79}.
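This counting argument is easy to check by brute force for small parameters; a sketch (purely for counting purposes, the correct input blocks are represented by zeros):

\begin{verbatim}
from itertools import product

def longest_zero_run(blocks):
    run = best = 0
    for b in blocks:
        run = run + 1 if b == 0 else 0
        best = max(best, run)
    return best

def count_detours(m, k, ell):
    """Exact number of detours with unmerged span k + ell: the endpoint
    blocks must be incorrect (nonzero), and no k-1 consecutive correct
    (zero) blocks may occur strictly in between."""
    count = 0
    for path in product(range(2 ** m), repeat=ell + 1):
        if path[0] == 0 or path[-1] == 0:
            continue
        if longest_zero_run(path[1:-1]) < k - 1:
            count += 1
    return count

# count_detours(m, k, ell) <= (2**m - 1) * 2**(m * ell), the bound above
\end{verbatim}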
Following a similar\footnote{Note that here, unlike in \cite{VO79}, in part (a) of Theorem \ref{thm}, we are considering general trellis codes, not convolutional codes, which are linear. Therefore, we cannot assume, without loss of generality, that the all--zero message was sent, but rather average over all input messages. In part (b), on the other hand, this averaging is not needed. This difference causes certain modifications in the analysis, which yield eventually $E_{\mbox{\tiny cex}}(R,Q)$.} line of thought as in the derivations of \cite{VO79}, for a given trellis code ${\cal C}_k$, the probability of an error event beginning at any given node is upper bounded by \begin{equation} P_{\mbox{\tiny e}}({\cal C}_k)\le \sum_{\ell\ge 1}\frac{1}{2^{m\ell}}\sum_{\bx\in{\cal X}^{n(k+\ell)}}\sum_{\bx^\prime\in{\cal X}^{n(k+\ell)}} \mbox{Pr}\left\{W(\by|\bx^\prime)\ge W(\by|\bx)\right\}, \end{equation} where $\bx$ designates the codeword associated with the correct path and $\bx^\prime$ stands for any incorrect path diverging from the correct path at node $j$ and re-merging at $j+k+\ell$. Since $\bx$ and $\bx^\prime$ may disagree at no more than $n(k+\ell)$ channel uses, the summand is actually the pairwise error probability associated with two vectors of length $n(k+\ell)$, and it depends only on the joint empirical distribution of these two $n(k+\ell)$--vectors, which we denote by $\hP_{XX^\prime}$. In particular, by the Chernoff bound, it is readily seen that for a given pair $(\bx,\bx^\prime)$, \begin{eqnarray} \mbox{Pr}\left\{W(\by|\bx^\prime)\ge W(\by|\bx)\right\}&\le&\exp_2\left\{-n(k+\ell)\max_{0\le s\le 1} \sum_{x,x^\prime}\hP_{XX^\prime}(x,x^\prime)d_s(x,x^\prime)\right\}\nonumber\\ &\dfn&\exp_2\left\{-n(k+\ell)\max_{0\le s\le 1}\Delta_s(\hP_{XX^\prime})\right\}\nonumber\\ &\dfn&\exp_2\left\{-n(k+\ell)\Delta(\hP_{XX^\prime})\right\}, \end{eqnarray} where \begin{equation} d_s(x,x^\prime)=-\log_2\left[\sum_y W^{1-s}(y|x)W^s(y|x^\prime)\right] \end{equation} is the Chernoff distance between $x$ and $x^\prime$. It follows then that \begin{equation} \label{first} P_{\mbox{\tiny e}}({\cal C}_k)\le \sum_{\ell\ge 1}2^{-m\ell}\sum_{\{\hP_{XX^\prime}\}}N_\ell(\hP_{XX^\prime})\cdot \exp_2\left\{-n(k+\ell)\Delta(\hP_{XX^\prime})\right\}, \end{equation} where $N_\ell(\hP_{XX^\prime})$ is the number of pairs $\{(\bx,\bx^\prime)\}\in{\cal X}^{2n(k+\ell)}$ having joint empirical distribution that is given by $\hP_{XX^\prime}$. Here, the inner summation over $\{\hP_{XX^\prime}\}$ is defined over the set ${\cal P}^{n(k+\ell)}$ of all possible empirical distributions of pairs of vectors in ${\cal X}^{n(k+\ell)}$. For a given joint empirical distribution $\hP_{XX^\prime}$, we denote \begin{equation} D(\hP_{XX^\prime}\|Q\times Q)=\sum_{x,x^\prime\in{\cal X}}\hP_{XX^\prime}(x,x^\prime)\log_2\frac{\hP_{XX^\prime}(x,x^\prime)} {Q(x)Q(x^\prime)}. \end{equation} We note that \begin{eqnarray} \bE\{N_\ell(\hP_{XX^\prime})\}&\le& (2^m-1)2^{2m\ell}\cdot\mbox{Pr}\{(\bx,\bx^\prime)~\mbox{have joint type}~\hP_{XX^\prime}\}\nonumber\\ &\le&(2^m-1)2^{2m\ell}\cdot\exp_2\{-n(k+\ell)D(\hP_{XX^\prime}\|Q\times Q)\}\nonumber\\ &=&(2^m-1)\cdot\exp_2\left\{m[2\ell-(k+\ell)D(\hP_{XX^\prime}\|Q\times Q)/R]\right\}.
\end{eqnarray} We now define ${\cal T}_k$ as the subset of codes, henceforth referred to as the {\it typical trellis codes}, with the following property for a given arbitrarily small $\epsilon > 0$: for every $\ell\ge 1$ and every empirical joint distribution $\hP_{XX^\prime}$ derived from $n(k+\ell)$--vectors: \begin{itemize} \item $N_\ell(\hP_{XX^\prime})=0$ whenever $\bE\{N_\ell(\hP_{XX^\prime})\} < (2^m-1)\cdot 2^{-n(k+\ell)\epsilon}$, and \item $N_\ell(\hP_{XX^\prime})\le 2^{n(k+\ell)\epsilon}\cdot \bE\{N_\ell(\hP_{XX^\prime})\}$ whenever $\bE\{N_\ell(\hP_{XX^\prime})\} \ge (2^m-1)\cdot 2^{-n(k+\ell)\epsilon}$. \end{itemize} Obviously, by the Markov inequality, for every $\ell$ and $\hP_{XX^\prime}$ in the first category, we have \begin{equation} \mbox{Pr}\{N_\ell(\hP_{XX^\prime})\ge 1\}\le \bE\{N_\ell(\hP_{XX^\prime})\} < (2^m-1)\cdot 2^{-n(k+\ell)\epsilon}, \end{equation} and similarly, for $\ell$ and $\hP_{XX^\prime}$ in the second category, we have \begin{equation} \mbox{Pr}\left\{N_\ell(\hP_{XX^\prime})> 2^{n(k+\ell)\epsilon}\cdot \bE\{N_\ell(\hP_{XX^\prime})\}\right\}\le 2^{-n(k+\ell)\epsilon} \le (2^m-1)\cdot 2^{-n(k+\ell)\epsilon}. \end{equation} It follows by the union bound that \begin{eqnarray} \label{PGkc} \mbox{Pr}\{{\cal T}_k^{\mbox{\tiny c}}\}&\le&(2^m-1)\sum_{\ell\ge 1}\sum_{\{\hP_{XX^\prime}\}} 2^{-n(k+\ell)\epsilon}\nonumber\\ &\le&(2^m-1)\sum_{\ell\ge 1}[n(k+\ell)+1]^{J^2}\cdot 2^{-n(k+\ell)\epsilon}\nonumber\\ &=&(2^m-1)\cdot \sum_{\ell\ge k+1}(n\ell+1)^{J^2}\cdot 2^{-n\ell\epsilon}\nonumber\\ &=&(2^m-1)\cdot \sum_{\ell\ge k+1} \exp_2\left\{-n\ell\left[\epsilon-\frac{J^2\log(n\ell+1)}{n\ell}\right]\right\}. \end{eqnarray} The sequence $\{\frac{\log(n\ell+1)}{n\ell}\}$ is monotonically decreasing and so, since $\ell\ge k+1$, we have, for large enough $k$, $$\frac{J^2\log(n\ell+1)}{n\ell}\le \frac{J^2\log[n(k+1)+1]}{n(k+1)}\le \frac{\epsilon}{2},$$ and then the last line of (\ref{PGkc}) cannot exceed the sum of the geometric series, $(2^m-1)\cdot 2^{-n(k+1)\epsilon/2}/(1-2^{-n\epsilon/2})$, which tends to zero as $k\to\infty$. Thus, $\mbox{Pr}\{{\cal T}_k\}$ tends to unity as $k\to\infty$. Denoting \begin{eqnarray} {\cal S}_\ell^\prime&=&\{\hP_{XX^\prime}\in{\cal P}^{n(k+\ell)}:~\bE\{N_\ell(\hP_{XX^\prime})\} \ge (2^m-1)\cdot 2^{-n(k+\ell)\epsilon}\}\nonumber\\ &\subseteq&\left\{\hP_{XX^\prime}\in{\cal P}^{n(k+\ell)}:~2\ell \ge \frac{k+\ell}{R}[D(\hP_{XX^\prime}\|Q\times Q)-\epsilon]\right\}\nonumber\\ &\dfn&{\cal S}_\ell, \end{eqnarray} it now follows that for every typical trellis code, ${\cal C}_k\in{\cal T}_k$, \begin{eqnarray} P_{\mbox{\tiny e}}({\cal C}_k)&\le&\sum_{\ell\ge 1}2^{-m\ell}\sum_{\{\hP_{XX^\prime}\in{\cal S}_\ell^\prime\}} N_{\ell}(\hP_{XX^\prime})\cdot\exp_2\{-n(k+\ell)\Delta(\hP_{XX^\prime})\}\nonumber\\ &\le&(2^m-1)\sum_{\ell\ge 1}\sum_{\{\hP_{XX^\prime}\in{\cal S}_\ell\}} \exp_2\{m(\ell-(k+\ell)[D(\hP_{XX^\prime}\|Q\times Q)+\nonumber\\ & &\Delta(\hP_{XX^\prime})-\epsilon]/R)\}.
\end{eqnarray} In order to address this summation over ${\cal S}_\ell$, let us partition it as the disjoint union of the subsets \begin{equation} {\cal S}_{\ell,i}={\cal S}_\ell\cap\{\hP_{XX^\prime}\in{\cal P}^{n(k+\ell)}:~R_{i-1}\le D(\hP_{XX^\prime}\|Q\times Q)< R_i\},~~~~R_i=i\epsilon,~~i=1,2,\ldots,\lceil 2R/\epsilon\rceil \end{equation} and observe that for a given $i$, ${\cal S}_{\ell,i}$ is non--empty only when $2\ell \ge (k+\ell)(R_{i-1}-\epsilon)/R$, or equivalently, $$\ell\ge \frac{k(R_{i-1}-\epsilon)}{2R-R_{i-1}+\epsilon}\dfn k\theta(R_{i-1}).$$ Then, \begin{eqnarray} \label{bound} P_{\mbox{\tiny e}}({\cal C}_k)&\le&\sum_{i=1}^{\lceil 2R/\epsilon\rceil} \sum_{\ell\ge 1}\sum_{\{\hP_{XX^\prime}\in{\cal S}_{\ell,i}\}} \exp_2\{m[\ell-(k+\ell)[D(\hP_{XX^\prime}\|Q\times Q)+\Delta(\hP_{XX^\prime})-\epsilon]/R]\}\nonumber\\ &\le&\sum_{i=1}^{\lceil 2R/\epsilon\rceil}\sum_{\ell\ge k\theta(R_{i-1})} [n(k+\ell)+1]^{J^2}\max_{\{\hP_{XX^\prime}:~D(\hP_{XX^\prime}\|Q\times Q)\le R_i\}}\nonumber\\ & &\exp_2\{m[\ell-(k+\ell)[R_{i-1}+\Delta(\hP_{XX^\prime})-\epsilon]/R]\}\nonumber\\ &=&\sum_{i=1}^{\lceil 2R/\epsilon\rceil}\sum_{\ell\ge k\theta(R_{i-1})} [n(k+\ell)+1]^{J^2} \exp_2\{m[\ell-(k+\ell)[R_{i-1}+Z(R_i)-\epsilon]/R]\}\nonumber\\ &=&\sum_{i=1}^{\lceil 2R/\epsilon\rceil}\exp_2\{-K[R_{i-1}+Z(R_i)-\epsilon]/R\}\times\nonumber\\ & &\sum_{\ell\ge k\theta(R_{i-1})}[n(k+\ell)+1]^{J^2} \exp_2\{-m\ell[R_{i-1}+Z(R_i)-R-\epsilon]/R\}, \end{eqnarray} where we have defined \begin{equation} Z(R_i)=\min\{\Delta(\hP_{XX^\prime}):~D(\hP_{XX^\prime}\|Q\times Q)\le R_i\}. \end{equation} Now observe that \begin{eqnarray} R_{i-1}+Z(R_i)&=&R_i+Z(R_i)-\epsilon\nonumber\\ &=&R_i+\min\{\Delta(\hP_{XX^\prime}):~D(\hP_{XX^\prime}\|Q\times Q)\le R_i\}-\epsilon\nonumber\\ &\ge&\min_{\{\hP_{XX^\prime}:~D(\hP_{XX^\prime}\|Q\times Q)\le R_i\}}[ D(\hP_{XX^\prime}\|Q\times Q)+\Delta(\hP_{XX^\prime})]-\epsilon\nonumber\\ &\ge&\min_{\hP_{XX^\prime}}[ D(\hP_{XX^\prime}\|Q\times Q)+\Delta(\hP_{XX^\prime})]-\epsilon\nonumber\\ &=&\min_{\hat{P}_{XX^\prime}}\max_{0\le s\le 1}\left[D(\hat{P}_{XX^\prime}\|Q\times Q)+ \sum_{x,x^\prime}\hat{P}_{XX^\prime}(x,x^\prime)d_s(x,x^\prime)\right]-\epsilon\nonumber\\ &=&\max_{0\le s\le 1}\min_{\hat{P}_{XX^\prime}} \left[D(\hat{P}_{XX^\prime}\|Q\times Q)+ \sum_{x,x^\prime}\hat{P}_{XX^\prime}(x,x^\prime)d_s(x,x^\prime)\right]-\epsilon\nonumber\\ &=&\max_{0\le s\le 1}\left\{-\log\left[\sum_{x,x'}Q(x)Q(x')2^{-d_s(x,x')}\right]\right\}-\epsilon\nonumber\\ &=&-\min_{0\le s\le 1}\log\left[\sum_{x,x',y}Q(x)Q(x')W^s(y|x)W^{1-s}(y|x')\right]-\epsilon\nonumber\\ &=&-\log\left[\sum_{x,x',y}Q(x)Q(x')\sqrt{W(y|x)W(y|x')}\right]-\epsilon\nonumber\\ &=&-\log\left(\sum_{y}\left[\sum_xQ(x)\sqrt{W(y|x)}\right]^2\right)-\epsilon\nonumber\\ &=&R_0(Q)-\epsilon, \end{eqnarray} where the commutation of the minimization and the maximization is allowed by convexity--concavity of the objective, and the final minimization over $s$ is achieved by $s=1/2$ due to the convexity and the symmetry of the function $\sum_{x,x',y}Q(x)Q(x')W^s(y|x)W^{1-s}(y|x')$ around $s=1/2$.
Thus, the series in the last line of (\ref{bound}) is convergent as long as $R < R_0(Q)-2\epsilon$, and its exponential order as a function of $K$ (ignoring $\epsilon$--terms) is given by \begin{eqnarray} & &\frac{1}{R}\min_i\left\{R_i+Z(R_i)+\theta(R_i)[R_i+Z(R_i)-R]\right\}\nonumber\\ &=&\frac{1}{R}\min_i\left\{R_i+Z(R_i)+\frac{R_i}{2R-R_i}\cdot[R_i+Z(R_i)-R]\right\}\nonumber\\ &=&\min_i\frac{R_i+2Z(R_i)}{2R-R_i}\nonumber\\ &\ge&\inf_{\hat{R}< 2R}\frac{2Z(\hat{R})+\hat{R}}{2R-\hat{R}}\nonumber\\ &=&\inf_{\hat{R}/2 < R}\frac{Z(\hat{R})+\hat{R}/2}{R-\hat{R}/2}\nonumber\\ &=&\inf_{\hat{R} < R}\frac{Z(2\hat{R})+\hat{R}}{R-\hat{R}}\nonumber\\ &=&\inf_{\hat{R}< R}\inf_{\{P_{XX^\prime}:~ D(P_{XX^\prime}\|Q\times Q)\le 2\hat{R}\}}\max_{0\le s\le 1}\frac{\Delta_s(P_{XX^\prime})+\hat{R}}{R-\hat{R}}. \end{eqnarray} Thus, we have shown that the typical random trellis code error exponent is lower bounded by \begin{equation} \label{csiszarstyle} {\cal E}_{\mbox{\tiny trtc}}(R,Q)\ge \inf_{\hat{R}< R}\inf_{\{P_{XX^\prime}:~ D(P_{XX^\prime}\|Q\times Q)\le 2\hat{R}\}}\max_{0\le s\le 1}\frac{\Delta_s(P_{XX^\prime})+\hat{R}}{R-\hat{R}}. \end{equation} We next show that this expression is equivalent to the one asserted in part (a) of Theorem \ref{thm}. First, observe that since $\Delta_s(P_{XX^\prime})$ is a linear functional of $P_{XX^\prime}$, $\Delta(P_{XX^\prime})= \max_{0\le s\le 1}\Delta_s(P_{XX^\prime})$ is convex in $P_{XX^\prime}$. We argue that the minimizer, $P_{XX^\prime}^*$, of $\Delta(P_{XX^\prime})$ within the set $\{P_{XX^\prime}:~ D(P_{XX^\prime}\|Q\times Q)\le 2\hat{R}\}$ must be a symmetric distribution, namely, $P_{XX^\prime}^*(x,x^\prime)=P_{XX^\prime}^*(x^\prime,x)$ for all $x,x^\prime\in{\cal X}$. To see why this is true, given any $P_{XX^\prime}$ that satisfies the divergence constraint, define its transpose, $\tilde{P}_{XX^\prime}$, by $\tilde{P}_{XX^\prime}(x,x^\prime)=P_{XX^\prime}(x^\prime,x)$ for all $x,x^\prime\in{\cal X}$. Obviously, $\Delta(\tilde{P}_{XX^\prime})=\Delta(P_{XX^\prime})$ because if $s^*$ achieves $\Delta(P_{XX^\prime})$, then $1-s^*$ achieves $\Delta(\tilde{P}_{XX^\prime})$ and the value of the maximum is the same (just by swapping $x$ and $x^\prime$). Next, define $\bar{P}_{XX^\prime}=\frac{1}{2}P_{XX^\prime}+\frac{1}{2}\tilde{P}_{XX^\prime}$. Then, \begin{equation} \Delta\left(\bar{P}_{XX^\prime}\right)=\Delta\left(\frac{1}{2}P_{XX^\prime}+ \frac{1}{2}\tilde{P}_{XX^\prime}\right)\le \frac{1}{2}\Delta(P_{XX^\prime})+ \frac{1}{2}\Delta(\tilde{P}_{XX^\prime})=\Delta(P_{XX^\prime}), \end{equation} and at the same time, \begin{equation} D(\bar{P}_{XX^\prime}\|Q\times Q)\le\frac{1}{2}D(P_{XX^\prime}\|Q\times Q)+ \frac{1}{2}D(\tilde{P}_{XX^\prime}\|Q\times Q)=D(P_{XX^\prime}\|Q\times Q)\le 2\hat{R}, \end{equation} so the divergence constraint is satisfied. It follows then that the symmetric distribution $\bar{P}_{XX^\prime}$ is never worse than $P_{XX^\prime}$ in terms of minimizing $\Delta(\cdot)$ under the divergence constraint. Thus, it is sufficient to seek the minimizing $P_{XX^\prime}$ among the symmetric distributions. However, given that $P_{XX^\prime}$ is symmetric, the maximizing $s$ is $s^*=1/2$, because then $\Delta_{1-s}(P_{XX^\prime})=\Delta_s(P_{XX^\prime})$.
Thus, the r.h.s.\ of eq.\ (\ref{csiszarstyle}) is equivalent to $$\inf_{\hat{R}< R}\inf_{\{P_{XX^\prime}:~ D(P_{XX^\prime}\|Q\times Q)\le 2\hat{R}\}} \frac{\Delta_{1/2}(P_{XX^\prime})+\hat{R}}{R-\hat{R}}.$$ Now, \begin{eqnarray} & &\inf\{\Delta_{1/2}(P_{XX^\prime}): D(P_{XX^\prime}\|Q\times Q) \le 2\hat{R}\}\nonumber\\ &=&\inf_{P_{XX^\prime}}\sup_{\rho\ge 0} \left[\sum_{x,x^\prime}P_{XX^\prime}(x,x^\prime) d_{1/2}(x,x^\prime)+ \rho\left(\sum_{x,x^\prime}P_{XX^\prime}(x,x^\prime) \log\frac{P_{XX^\prime}(x,x^\prime)}{Q(x)Q(x^\prime)}- 2\hat{R}\right)\right]\nonumber\\ &=&\sup_{\rho\ge 0}\inf_{P_{XX^\prime}}\left[ \rho\cdot\sum_{x,x^\prime}P_{XX^\prime}(x,x^\prime)\log \frac{P_{XX^\prime}(x,x^\prime)} {Q(x)Q(x^\prime)2^{-d_{1/2}(x,x^\prime)/\rho}}-2\rho\hat{R}\right]\nonumber\\ &=&\sup_{\rho\ge 0}\left\{-\rho\log\left[\sum_{x,x^\prime}Q(x)Q(x^\prime) 2^{-d_{1/2}(x,x^\prime)/\rho}\right]-2\rho\hat{R}\right\}\nonumber\\ &=&\sup_{\rho\ge 0}\left\{-\rho\log\left[\sum_{x,x^\prime}Q(x)Q(x^\prime) \left(\sum_y\sqrt{W(y|x)W(y|x^\prime)}\right)^{1/\rho}\right]- 2\rho\hat{R}\right\}\nonumber\\ &=&\sup_{\rho\ge 0}[E_{\mbox{\tiny x}}(\rho,Q)-2\rho\hat{R}], \end{eqnarray} and so, \begin{eqnarray} \label{almostdone} {\cal E}_{\mbox{\tiny trtc}}(R,Q)&\ge&\inf_{\hat{R} < R}\sup_{\rho\ge 0}\frac{E_{\mbox{\tiny x}}(\rho,Q)-(2\rho-1)\hat{R}}{R-\hat{R}}\nonumber\\ &\ge&\inf_{\hat{R} < R} \frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)-(2\rho_{\mbox{\tiny trtc}}(R)-1)\hat{R}}{R-\hat{R}}\nonumber\\ &=&\inf_{\hat{R} < R}\frac{(2\rho_{\mbox{\tiny trtc}}(R)-1)R-(2\rho_{\mbox{\tiny trtc}}(R)-1) \hat{R}}{R-\hat{R}}\nonumber\\ &=&2\rho_{\mbox{\tiny trtc}}(R)-1\nonumber\\ &=&\frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)}{R}\nonumber\\ &=&E_{\mbox{\tiny trtc}}(R,Q). \end{eqnarray} Formally, this proves part (a) of Theorem \ref{thm}, but as a final remark, to complete the picture, we also argue that there is no loss of tightness in the passage from the right--hand side of the first line of eq.\ (\ref{almostdone}) to $E_{\mbox{\tiny trtc}}(R,Q)$. This follows from the following matching upper bound on the first line of (\ref{almostdone}). Let $\tilde{R}$ be such that the maximizer of $E_{\mbox{\tiny x}}(\rho,Q)-(2\rho-1)\tilde{R}$ is $\rho_{\mbox{\tiny trtc}}(R)$. This is feasible due to the concavity of $E_{\mbox{\tiny x}}(\rho,Q)$ in $\rho$ \cite[Theorem 3.3.2]{VO79}, $$\tilde{R}=\frac{1}{2}\cdot\frac{\partial E_{\mbox{\tiny x}}(\rho,Q)}{\partial\rho}\bigg|_{\rho=\rho_{\mbox{\tiny trtc}}(R)}\le \frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)}{2\rho_{\mbox{\tiny trtc}}(R)}\le \frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)}{2\rho_{\mbox{\tiny trtc}}(R)-1}=R.$$ Thus, \begin{eqnarray} \inf_{\hat{R} < R}\sup_{\rho\ge 0} \frac{E_{\mbox{\tiny x}}(\rho,Q)-(2\rho-1)\hat{R}}{R-\hat{R}} &\le&\sup_{\rho\ge 0} \frac{E_{\mbox{\tiny x}}(\rho,Q)-(2\rho-1)\tilde{R}}{R-\tilde{R}}\nonumber\\ &=&\frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)-(2\rho_{\mbox{\tiny trtc}}(R)-1)\tilde{R}}{R-\tilde{R}}\nonumber\\ &=&\frac{(2\rho_{\mbox{\tiny trtc}}(R)-1)R-(2\rho_{\mbox{\tiny trtc}}(R)-1) \tilde{R}}{R-\tilde{R}}\nonumber\\ &=&2\rho_{\mbox{\tiny trtc}}(R)-1\nonumber\\ &=&\frac{E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)}{R}\nonumber\\ &=&E_{\mbox{\tiny trtc}}(R,Q).
\end{eqnarray} \section{Discussion} \label{dis} Several comments are in order concerning Theorem \ref{thm} and its proof.\\ \noindent {\bf Relations among the exponents.} It is easy to see that $E_{\mbox{\tiny trtc}}(0,Q)$ is equal to the zero--rate expurgated exponent, $E_{\mbox{\tiny ex}}(0,Q)=E_{\mbox{\tiny cex}}(0,Q)=\lim_{\rho\to\infty}E_{\mbox{\tiny x}}(\rho,Q)$, and that for all $R < R_0(Q)$, $$E_{\mbox{\tiny rtc}}(R,Q)=\frac{R_0(Q)}{R}\le E_{\mbox{\tiny trtc}}(R,Q) \le E_{\mbox{\tiny cex}}(R,Q).$$ In other words, the typical random trellis code exponent is between the convolutional coding random coding exponent and the convolutional coding expurgated exponent. This is parallel to the ordering among the corresponding block code exponents \cite{trc}. These relations are displayed graphically in Fig.\ \ref{graphical}, where the concave curve of $E_{\mbox{\tiny x}}(\rho,Q)$ is plotted as a function of $\rho$, along with the straight lines, $\rho R$ and $(2\rho-1)R$. For $\rho=1$, we have $E_{\mbox{\tiny x}}(1,Q)=E_0(1,Q)=R_0(Q)$. The straight lines $\rho R$ and $(2\rho-1)R$ intersect at the point $(1,R)$, which is below the point $(1,R_0(Q))$ on the curve (as $R$ is assumed smaller than $R_0(Q)$). The straight lines $\rho R$ and $(2\rho-1)R$ meet the curve $E_{\mbox{\tiny x}}(\rho,Q)$ at the points $(\rho_{\mbox{\tiny cex}}(R),R\cdot E_{\mbox{\tiny cex}}(R,Q))$ and $(\rho_{\mbox{\tiny trtc}}(R),R\cdot E_{\mbox{\tiny trtc}}(R,Q))$, respectively. As can be seen, $R\cdot E_{\mbox{\tiny cex}}(R,Q)\ge R\cdot E_{\mbox{\tiny trtc}}(R,Q)\ge R_0(Q)$.\\ \begin{figure}[ht] \vspace*{1cm} \hspace*{2cm}\input{graphical.pstex_t} \caption{\small Graphical representation of $E_{\mbox{\tiny trtc}}(R,Q)$ and $E_{\mbox{\tiny cex}}(R,Q)$.} \label{graphical} \end{figure} \noindent {\bf Properties of the typical random trellis codes.} For typical randomly selected trellis codes, we are able to characterize the features that make them achieve $E_{\mbox{\tiny trtc}}(R,Q)$. This is, in fact, spelled out explicitly in the definition of the subset of typical codes, ${\cal T}_k$. We know that for these codes, joint types that correspond to empirical distributions that are too far from $Q\times Q$ (e.g., those that exhibit too strong an empirical dependence between the incorrect path and the correct one) are not populated. For the other types, we know the distance spectrum, or more precisely, the population profile of the various joint types.\\ \noindent {\bf Dominant error events.} In the process of proving Theorem \ref{thm} in Section \ref{proof}, we have seen also alternative forms of the error exponent expression, like the Csisz\'ar--style expression (\ref{csiszarstyle}). While this expression may not be easier to calculate numerically (due to the nested optimizations involved), it is nevertheless useful for gaining some insight.
We learn the following from the first part of the derivation: the error probability is dominated by a sub--exponential number of incorrect paths whose joint empirical distribution with the correct path is given by \begin{equation} \label{pxxp} P_{XX^\prime}^*(x,x^\prime)=\frac{Q(x)Q(x^\prime)2^{-d_{1/2}(x,x^\prime)/\rho}} {\sum_{\hat{x},\tilde{x}}Q(\hat{x})Q(\tilde{x}) 2^{-d_{1/2}(\hat{x},\tilde{x})/\rho}} \end{equation} and whose total unmerged length, $k+\ell$ (a.k.a.\ the critical length), spans $$k+k\theta(D(P_{XX'}^*\|Q\times Q))=kR/[2R-D(P_{XX'}^*\|Q\times Q)]$$ branches.\footnote{Interestingly, this is different from the total critical length that dominates the ordinary average error probability, which, for $R < R_0$, is $k$ branches long \cite[Theorem 5.5.1]{VO79}.} The error exponent expression (\ref{csiszarstyle}) is therefore essentially the same as that of a zero--rate\footnote{The zero rate is because of the sub--exponential number of dominant incorrect paths.} block code of block length $K/[2R-D(P_{XX'}^*\|Q\times Q)]$, where the competing trellis paths are at normalized Bhattacharyya distance $\Delta_{1/2}(P_{XX'}^*)$ from the correct path, hence the product, $\Delta_{1/2}(P_{XX'}^*)/[2R-D(P_{XX'}^*\|Q\times Q)]$. For time--varying convolutional codes over the binary--input, output--symmetric channel, better performance is obtained (as discussed above); namely, one obtains \cite[Corollary 5.3.1]{VO79}, $$E_{\mbox{\tiny cex}}(R,Q)=\frac{\log Z}{\log(2^{1-R}-1)},$$ with $Z=\sum_y\sqrt{W(y|0)W(y|1)}$, which has the simple interpretation of the Costello lower bound on the free distance \cite{Costello74} multiplied by the corresponding Bhattacharyya bound (see also \cite[p.\ 1652]{ZSSHJ99}). In other words, the typical time--varying convolutional code achieves the Costello bound. Note that the parameter $\rho$ in (\ref{pxxp}) controls the similarity (and hence the dominant distance) between $P_{XX^\prime}^*$ and the product distribution $Q\times Q$. When $\rho$ is very large (at low rates), the dominant distance is large, and when $\rho$ is very small (at high rates), the distance is very small.\\ \noindent {\bf A numerical example.} In \cite[Chap. 5]{VO79}, there is a comparison of the performance--complexity trade-off between unstructured block codes and convolutional codes, where the performance is measured according to the traditional random coding error exponents. As explained therein, the idea is that for block codes of length $N$ and rate $R$, the complexity is $G=2^{NR}$ and the error probability is exponentially $2^{-NE_{\mbox{\tiny block}}(R)}=G^{-E_{\mbox{\tiny block}}(R)/R}$. For convolutional codes, decoded by the Viterbi algorithm, the complexity is about $G=2^K$ and the error probability decays like $2^{-KE_{\mbox{\tiny conv}}(R)}=G^{-E_{\mbox{\tiny conv}}(R)}$, and so, it makes sense to compare $E_{\mbox{\tiny block}}(R)/R$ with $E_{\mbox{\tiny conv}}(R)$, or more conveniently, to compare $E_{\mbox{\tiny block}}(R)$ with $R\cdot E_{\mbox{\tiny conv}}(R)$. It is interesting to conduct a similar comparison when the performance of both classes of codes is measured according to the error exponents of the typical random codes. In Fig.\ \ref{graph3}, this is done for the binary symmetric channel with crossover parameter $p=0.1$ and the uniform random coding distribution. For reference, the ordinary random coding exponent of convolutional codes, $R\cdot E_{\mbox{\tiny rtc}}(R,Q)\equiv R_0(Q)$, is also plotted in the displayed range of rates.
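The numbers quoted in the caption of Fig.\ \ref{graph3} can be reproduced with the routines sketched earlier: one solves the defining equations of $\rho_{\mbox{\tiny cex}}(R)$ and $\rho_{\mbox{\tiny trtc}}(R)$, e.g., by bisection (a sketch, reusing \texttt{E0}, \texttt{Ex} and the BSC arrays \texttt{W}, \texttt{Q} from the earlier snippet):

\begin{verbatim}
import numpy as np

def solve_rho(R, Q, W, denom, rho_max=400.0, tol=1e-10):
    """Bisection for the rho >= 1 solving R = Ex(rho,Q,W)/denom(rho);
    denom(rho) = rho gives rho_cex(R), denom(rho) = 2*rho - 1 gives
    rho_trtc(R).  Ex(rho)/denom(rho) decreases from R_0(Q) at rho = 1."""
    lo, hi = 1.0, rho_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Ex(mid, Q, W) / denom(mid) > R:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R0 = E0(1.0, Q, W)                     # 0.2231 nats for BSC(0.1)
rates = np.linspace(0.005, 0.13, 40)   # below R_crit = 0.1308
curve = [Ex(solve_rho(r, Q, W, lambda rho: rho), Q, W) for r in rates]
# 'curve' is R * E_trcc(R) = Ex(rho_cex(R), Q); the red dashed
# reference line of the figure is simply the constant R0.
\end{verbatim}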
As can be seen, the typical code exponent of the ensemble of time--varying convolutional codes is much larger than that of block codes for the same decoding complexity. \begin{figure}[h!t!b!] \centering \includegraphics[width=8.5cm, height=8.5cm]{ectrc2.eps} \caption{The functions $E_{\mbox{\tiny trc}}(R,Q)$ \cite{trc} of general (unstructured) random block codes (green dashed curve $-\cdot-$), $R\cdot E_{\mbox{\tiny rtc}}(R)\equiv R_0$ of the random convolutional coding exponent (red dashed curve $---$) and $R\cdot E_{\mbox{\tiny trtc}}(R)=E_{\mbox{\tiny x}}(\rho_{\mbox{\tiny trtc}}(R),Q)$ of the typical random trellis code (blue solid curve), all in the range $[0,R_{\mbox{\tiny crit}}]$ for the binary symmetric channel with crossover parameter $p=0.1$, where $R_0=0.2231$ and $R_{\mbox{\tiny crit}}=0.1308$. All rates are in units of nats/channel--use.} \label{graph3} \end{figure} \section{Channels with Memory and Mismatch} In this section, we extend our main results in two directions simultaneously. The first direction is that instead of assuming memoryless channels, we now allow channels that memorize a finite number of the most recent past inputs, with the clear motivation of channels with intersymbol interference (see also \cite[Sect.\ 5.8]{VO79}). For the sake of simplicity, we consider the case where the memory contains the one most recent past input only; in other words, the channel model (\ref{channel}) is replaced by \begin{equation} \label{isichannel} \mbox{Pr}\{Y_1=y_1,Y_2=y_2,\ldots,Y_r=y_r|X_0=x_0,X_1=x_1,\ldots,X_r=x_r\}= \prod_{t=1}^rW(y_t|x_t,x_{t-1}). \end{equation} The extension to any fixed number $p$ of the most recent past inputs is conceptually straightforward by redefining the channel input at time $t$ as $\bar{x}_t=(x_t,\ldots,x_{t-p+1})$ and taking into account that in the sequence $\{\bar{x}_t\}$ not all $(J^p)^2$ state transitions $\bar{x}_t\to\bar{x}_{t+1}$ are allowed, but only those in which the two states are consistent with each other. Using this transformation, we are back to the model (\ref{isichannel}), except that $\{x_t\}$ are replaced by $\{\bar{x}_t\}$. The other direction of extension is that we allow mismatch. The decoding metric is assumed to be $\prod_t \tW(y_t|x_t,x_{t-1})$ for some channel $\tW$ that may differ from $W$. To avoid further complications, the ensemble of time--varying trellis codes continues to be defined exactly as in Section \ref{npbo} (without any attempt at introducing memory). These model assumptions are motivated by the facts that: (i) they are practically relevant, and (ii) the Viterbi algorithm is still implementable, although the number of states is now larger than before. In the remaining part of this section, we will not repeat all the derivations of Section \ref{proof}, but only highlight the differences and state the results.
The first basic difference, relative to the derivation in Section \ref{proof}, is associated with the pairwise error probability: given the correct trellis path $\bx$ and a competing path $\bx'$, both of length $n(k+\ell)$ channel uses, the pairwise average error probability is upper bounded using the Chernoff bound as follows: \begin{eqnarray} \bar{P}_{\mbox{\tiny e}}(\bx\to\bx')&\le&\sum_{\bx,\bx'}Q(\bx)Q(\bx') \cdot\min_{s\ge 0}\sum_{\by}W(\by|\bx)\cdot \left[\frac{\tW(\by|\bx')}{\tW(\by|\bx)}\right]^s\nonumber\\ &=&\sum_{\bx,\bx'}Q(\bx)Q(\bx') \cdot\min_{s\ge 0}\sum_{\by}\prod_{t=1}^{n(k+\ell)}W(y_t|x_t,x_{t-1}) \tW^{-s}(y_t|x_t,x_{t-1})\tW^s(y_t|x_t',x_{t-1}')\nonumber\\ &=&\sum_{\bx,\bx'}Q(\bx)Q(\bx') \cdot\min_{s\ge 0}\prod_{t=1}^{n(k+\ell)}\sum_{y_t}W(y_t|x_t,x_{t-1})\tW^{-s}(y_t|x_t,x_{t-1}) \tW^s(y_t|x_t',x_{t-1}')\nonumber\\ &=&\sum_{\bx,\bx'}Q(\bx)Q(\bx') \cdot\min_{s\ge 0}\exp_2\left\{-\sum_{t=1}^{n(k+\ell)}d_s(x_t,x_{t-1};x_t',x_{t-1}')\right\}\nonumber\\ &=&\sum_{\bx,\bx'}Q(\bx)Q(\bx') \cdot\exp_2\left\{-\max_{s\ge 0}\sum_{t=1}^{n(k+\ell)}d_s(x_t,x_{t-1};x_t',x_{t-1}')\right\} \end{eqnarray} where we have defined \begin{equation} d_s(x,x_-;x',x_-')=-\log\left[\sum_y W(y|x,x_-)\tW^{-s}(y|x,x_-)\tW^s(y|x',x_-')\right], ~~~~x,x_-,x',x_-'\in{\cal X}. \end{equation} Note that here, it is no longer necessarily true that the optimal choice of $s$ is $s=1/2$, as the symmetry properties that were valid in the memoryless matched case of Section \ref{proof} do not continue to hold here, in general. To make the derivation more tractable, in the sequel, we interchange the optimization over $s$ with the summation over $\{\bx,\bx^\prime\}$, at the possible risk of losing exponential tightness.\footnote{ Of course, one may always select $s=1/2$, as in Section \ref{proof}, and then Theorem \ref{thm} will still be obtained as a special case.} The expression $\sum_{t=1}^{n(k+\ell)}d_s(x_t,x_{t-1};x_t',x_{t-1}')$ depends on $(\bx,\bx')$ only via their joint ``Markov type'', defined by the joint empirical distribution, \begin{equation} \hat{P}_{XX'X_-X_-'}(x,x',x_-,x_-')=\frac{1}{n(k+\ell)} \sum_{t=1}^{n(k+\ell)}{\cal I}\{x_t=x,x_t'=x',x_{t-1}=x_-,x_{t-1}'=x_-'\}, \end{equation} ignoring edge effects. Let us denote \begin{equation} \Delta_s(\hat{P}_{XX'X_-X_-'})=\sum_{x,x',x_-,x_-'}\hat{P}_{XX'X_-X_-'}(x,x',x_-,x_-') d_s(x,x_-;x',x_-').
\end{equation} Using the extension of the method of types to Markov types (see, e.g., \cite[Sect.\ VII.A]{Csiszar98}, \cite{DLS81}, \cite[Sect.\ 3.1]{DZ93}, \cite{Natarajan85}), we find that \begin{eqnarray} \bar{P}_{\mbox{\tiny e}}(\bx\to\bx')&\lexe&\min_{s\ge 0} \max_{\hat{P}_{XX'X_-X_-'}} \exp\bigg\{n(k+\ell)\bigg[\hat{H}(X,X'|X_-,X_-')-\hat{H}(X,X')-\nonumber\\ & &D(\hat{P}_{XX'}\|Q\times Q)-\Delta_s(\hat{P}_{XX'X_-X_-'})\bigg]\bigg\}\nonumber\\ &=&\exp\bigg\{-n(k+\ell)\max_{s\ge 0}\min_{\hat{P}_{XX'X_-X_-'}}\bigg[ D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})+\nonumber\\ & &\Delta_s(\hat{P}_{XX'X_-X_-'})\bigg]\bigg\}, \end{eqnarray} where $\hat{H}(X,X'|X_-,X_-')$ is the empirical conditional entropy of $(X,X')$ given $(X_-,X_-')$, derived from $\hat{P}_{XX'X_-X_-'}$, \begin{equation} D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})\dfn\sum_{x,x_-,x',x_-'} \hat{P}_{XX'X_-X_-'}(x,x',x_-,x_-')\log \frac{\hat{P}_{XX'|X_-X_-'}(x,x'|x_-,x_-')}{Q(x)Q(x')},\nonumber\\ \end{equation} $\hat{P}_{XX'|X_-X_-'}$ being the conditional distribution induced by $\hat{P}_{XX'X_-X_-'}$, and the minimization over $\{\hat{P}_{XX'X_-X_-'}\}$ is confined to joint distributions where the marginals of $(X,X')$ and $(X_-,X_-')$ are the same. Repeating the same steps as in Section \ref{proof}, and assuming that \begin{equation} \label{RoQ} R < R_0(Q)=\max_{s\ge 0}\min_{P_{XX'X_-X_-'}}[D(P_{XX'|X_-X_-'}\|Q\times Q|P_{X_-X_-'})+\Delta_s(P_{XX'X_-X_-'})], \end{equation} the resulting error exponent of the typical random trellis code is lower bounded by \begin{equation} \max_{s\ge 0}\min_{\hat{R} < R} \min_{\{\hat{P}_{XX'X_-X_-'}:~D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})\le 2\hat{R}\}} \frac{\Delta_s(\hat{P}_{XX'X_-X_-'})+\hat{R}}{ R-\hat{R}}. \end{equation} As for the inner--most minimization, let us define the function \begin{equation} F_s(d)=\min\{D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'}):~\Delta_s(\hat{P}_{XX'X_-X_-'})\le d\} \end{equation} and its functional inverse \begin{equation} F_s^{-1}(2\hat{R})=\min\{\Delta_s(\hat{P}_{XX'X_-X_-'}):~D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})\le 2\hat{R}\}.
\end{equation} From large deviations theory \cite[Sect.\ 3.1]{DZ93}, we know that an alternative expression for $F_s(d)$ is given by \begin{equation} F_s(d)=\sup_{r\ge 0}[G_s(r)-rd], \end{equation} where $G_s(r)=-\log\lambda_s(r)$, $\lambda_s(r)$ being the Perron--Frobenius eigenvalue of the $J^2\times J^2$ matrix $$A_s(r)=\{Q(x)Q(x^\prime)2^{-rd_s(x,x_-;x',x_-')}\}$$ whose rows and columns are indexed by the pairs $(x,x')$ and $(x_-,x_-')$, respectively.\footnote{ This equivalence between the two forms of $F_s(d)$ follows from the fact that they are both expressions of the large deviations rate function \cite[Sect.\ 3.1]{DZ93} of the probability of the event $\{\sum_{t=1}^Nd_s(X_t,X_{t-1};X_t^\prime,X_{t-1}^\prime)\le Nd\}$, where $\{X_t\}$ and $\{X_t^\prime\}$ are independent i.i.d.\ processes, both governed by $Q$.} Thus, given $d$, $$\Delta_s(\hat{P}_{XX'X_-X_-'})\le d~\mbox{implies}~D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})\ge F_s(d).$$ Equivalently, given that $2\hat{R}=F_s(d)$, $$D(\hat{P}_{XX'|X_-X_-'}\|Q\times Q|\hat{P}_{X_-X_-'})\le 2\hat{R}~\mbox{implies}~ \Delta_s(\hat{P}_{XX'X_-X_-'})\ge F_s^{-1}(2\hat{R}).$$ But \begin{equation} F_s^{-1}(2\hat{R})=\sup_{r\ge 0}\frac{G_s(r)-2\hat{R}}{r}=\sup_{\rho\ge 0} [\rho G_s(1/\rho)-2\rho\hat{R}], \end{equation} and so, similarly as in Section \ref{proof}, the error exponent of the typical random trellis code is lower bounded by \begin{equation} \sup_{s\ge 0}\inf_{\hat{R} < R} \sup_{\rho\ge 0}\frac{\rho G_s(1/\rho)-(2\rho-1)\hat{R}}{R-\hat{R}} =\sup_{s\ge 0}\frac{\rho_{R,s}G_s(1/\rho_{R,s})}{R}, \end{equation} where $\rho_{R,s}$ is the solution to the equation $(2\rho-1)R=\rho G_s(1/\rho)$. Note that $\rho G_s(1/\rho)$ is an extension of $E_{\mbox{\tiny x}}(\rho,Q)$ to a channel with both memory and mismatch. Using similar considerations, it is easy to see that $R_0(Q)$ of eq.\ (\ref{RoQ}) is equal to $\sup_{s\ge 0}G_s(1)$. Referring to the comment on the extension to channels with memory of the $p$ most recent past channel inputs (see the introductory paragraph of this section), the only difference is that in such a case, the matrix $A_s(r)$ has larger dimensions, $J^{2p}\times J^{2p}$, but it is rather sparse: all entries vanish except those where both pairs $(x,x_-)$ and $(x^\prime,x_-^\prime)$ are consistent.
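To make these expressions concrete, $G_s(r)$ and $\rho_{R,s}$ are readily computed numerically. The sketch below is an illustration only, under assumed parameters: a binary--input channel with a hypothetical one--step interference model and matched decoding ($\tW=W$, so that $d_s$ reduces to $-\log\sum_y W^{1-s}(y|x,x_-)W^s(y|x',x_-')$); it evaluates the Perron--Frobenius eigenvalue of $A_s(r)$ and solves $(2\rho-1)R=\rho G_s(1/\rho)$ by bisection.
\begin{verbatim}
import numpy as np

Q = [0.5, 0.5]                     # uniform random coding distribution, J = 2

def W(y, x, xm, p=0.1):
    # Toy ISI channel: the previous input raises the crossover probability.
    eff = p + 0.08*xm              # hypothetical interference model
    return 1 - eff if y == x else eff

def d_s(x, xm, xp, xpm, s=0.5):
    # Matched case: d_s = -log2 sum_y W(y|x,x_-)^(1-s) W(y|x',x_-')^s
    return -np.log2(sum(W(y, x, xm)**(1 - s)*W(y, xp, xpm)**s
                        for y in (0, 1)))

def G(r, s=0.5):
    # A_s(r): rows indexed by (x, x'), columns by (x_-, x_-'); J^2 = 4.
    pairs = [(x, xp) for x in (0, 1) for xp in (0, 1)]
    A = np.array([[Q[x]*Q[xp]*2.0**(-r*d_s(x, xm, xp, xpm))
                   for (xm, xpm) in pairs] for (x, xp) in pairs])
    return -np.log2(max(abs(np.linalg.eigvals(A))))   # Perron-Frobenius root

def rho_R(R, s=0.5):
    # Bisection for (2 rho - 1) R = rho G_s(1/rho); requires R < G_s(1).
    f = lambda rho: (2*rho - 1)*R - rho*G(1.0/rho, s)
    lo, hi = 1.0, 200.0
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

R = 0.05
rho = rho_R(R)
print("R_0(Q) = G_s(1) =", G(1.0))
print("rho_{R,s} =", rho, " exponent =", rho*G(1.0/rho)/R)
\end{verbatim}
As a sanity check, setting the interference coefficient to zero makes the rows of $A_s(r)$ constant, in which case the Perron--Frobenius eigenvalue reduces to $\sum_{x,x'}Q(x)Q(x')2^{-rd_s(x,x')}$, and $\rho G_s(1/\rho)$ collapses to $E_{\mbox{\tiny x}}(\rho,Q)$ of the memoryless BSC, in line with the remark above. \clearpage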
\section{Introduction} The relationship between the local structure of a many-particle system and interparticle correlations is fundamental to condensed-matter theory. This intimate connection provides a useful image of the regularity \cite{FN1a} of all phases of matter, allowing researchers to track the local structure over increasing length scales approaching the global system. In practice, one measures pair correlations between distinct points in the form of the structure factor $S(k)$, which is proportional to the scattering intensity from x-ray or small-angle neutron scattering \cite{Ch87}. It is intuitive from such measurements that a hierarchy of structural order can be established, ranging from crystalline structures such as Bravais lattices \cite{FNBV} to highly disordered systems, the prototypical example of which is the ideal gas \cite{ToSt03, ToTrDe00, ZaTo09, StNeRo83}. Unfortunately, quantitative descriptors consistent with this stratification of order are difficult to identify, and this area of research is currently open. One recently introduced order metric \cite{ToSt10RMP} involves the notion of hyperuniformity of point patterns, whereby infinite-wavelength local density fluctuations vanish \cite{ToSt03, ZaTo09}. This order metric explicitly indicates the degree to which density fluctuations are suppressed on large length scales. The local structure of a hyperuniform many-particle configuration (i.e., on the order of a few nearest-neighbor distances between particles) is by definition indicative of the global arrangement of particles \cite{ToSt03}. Also known as superhomogeneity \cite{PiGaLa02}, this phenomenon is fundamental to the description of all Bravais lattices, lattices with a multiparticle basis, quasicrystals, and certain disordered systems possessing pair correlation functions decaying to unity exponentially fast \cite{ZaTo09}. We emphasize that while hyperuniformity in periodic configurations is a trivial consequence of their intrinsic long-range order, the fact that disordered many-particle systems can also display this property is nonintuitive. This behavior is especially surprising since the appearance of hyperuniformity marks the onset of an ``inverted'' critical point in which the structure factor vanishes in the limit of small wavenumbers while the direct correlation function, defined through the Ornstein-Zernike formalism, becomes long-ranged \cite{ToSt03}. Hyperuniform systems have played a fundamental role in our understanding and design of materials, including those with large, complete photonic band gaps \cite{FlToSt09}, ``stealth'' materials invisible to certain frequencies of radiation \cite{BaStTo08}, and prototypical glassy structures consisting of maximally random strictly jammed (MRJ) monodisperse hard spheres \cite{DoStTo05, ZaJiTo10}. Other examples of disordered hyperuniform systems include noninteracting spin-polarized fermions \cite{ToScZa08, ScZaTo09}, the ground state of liquid helium \cite{ReCh67}, the density fluctuations of the early Universe \cite{Pe98}, one-component plasmas \cite{ToSt03}, and so-called $g_2$-invariant processes \cite{ToSt03}, in which the form of the pair correlation function is held fixed over a certain density interval. Note that for \emph{equilibrium} many-particle configurations at positive temperature, hyperuniformity implies that the isothermal compressibility vanishes; this relationship does not hold, however, for nonequilibrium systems.
Hyperuniform particle distributions possess structure factors with a small-wavenumber scaling $S(k) \sim k^{\alpha}$ for $\alpha > 0$, including the special case $\alpha = +\infty$ for periodic crystals. This behavior implies that the variance $\sigma^2_N(R)$ in the number of particles within a local observation window (here a $d$-dimensional sphere of radius $R$) increases asymptotically as \cite{ZaTo09} \begin{equation}\label{NVscaling} \sigma^2_N(R) \sim \begin{cases} R^{d-1}\ln R, & \alpha = 1\\ R^{d-\alpha}, & \alpha < 1\\ R^{d-1}, & \alpha > 1 \end{cases}\qquad (R\rightarrow +\infty). \end{equation} However, all known hyperuniform configurations to date have a scaling parameter $\alpha \geq 1$ \cite{UcToSt06, GaJoTo08}, meaning that the second asymptotic regime of the number variance in \eqref{NVscaling} has never been observed in either theoretical or experimental studies. Indeed, the aforementioned MRJ packings, which are \emph{maximally} disordered among all jammed sphere packings with diverging elastic moduli, possess a small-wavenumber scaling $\alpha = 1$, and this observation has provoked the question of whether this value corresponds to a \emph{minimal} scaling among all hyperuniform point patterns. Zachary, Jiao, and Torquato have provided strong arguments that this claim is indeed true for strictly jammed hard-particle packings \cite{ZaJiTo10}, but it is unclear whether general point patterns must also possess exponents $\alpha \geq 1$. Here we provide for the first time constructions of ``anomalous'' disordered hyperuniform many-particle ground states for which $\alpha < 1$, demonstrating the diversity of possible structures within this class of systems. Our approach involves placing explicit constraints on the so-called collective coordinates associated with a point distribution, which are defined by a Fourier transform of the local density variable (discussed in Section II below) \cite{FaPeStSt91, UcStTo04,UcToSt06}. Controls on collective coordinates have been previously used in the development of novel stealth materials \cite{BaStTo08} and in the identification of unusual disordered classical ground states for certain classes of pair potentials \cite{BaStTo09}. This problem can be viewed as the determination of the ground state of a many-particle system with up to four-body interactions \cite{UcToSt06}; duality relations that relate the energy per particle of a many-body potential in real space to the corresponding energy of the dual (Fourier-transformed) potential can be used to examine analytically the ground state structures and energies \cite{ToSt08}. Importantly, since collective coordinates directly probe the configuration space associated with the two-particle information of the structure factor, they are ideally suited to the construction of hyperuniform point patterns. Formally, we numerically construct a configuration of particles whose spatial distribution is consistent with a targeted form of the structure factor at small wavenumbers. By constraining a certain number of degrees of freedom in the system, we ``fix'' the positions of a known fraction of the total number of particles based on the locations of the remaining particles and the implicit constraints imposed by the targeted form of $S(k)$.
By varying the fraction of constrained degrees of freedom within the system, we are able to explore directly the relationship between hyperuniformity and internal structural constraints of a many-particle configuration, allowing us to interpolate between the ``disordered'' and ``ordered'' regimes of hyperuniformity. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig1A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig1B} \caption{(Color online) Numerically-generated configurations of particles in two dimensions with a circular local observation window of radius $R$. Both configurations exhibit strong local clustering of points and possess a highly irregular local structure; however, the configuration on the left is hyperuniform while the one on the right is not. The hyperuniform point pattern was generated using the methodology outlined in Section III of the text. }\label{PPfig} \end{figure} In order to elucidate the connection between the local coordination structure and pair correlations for our anomalous hyperuniform ground states, we have investigated the distribution of the available \emph{void space} external to the particles. Prior work on MRJ packings of binary hard disks has shown that the appearance of hyperuniformity in a many-particle system is related to the underlying distribution of the local voids between particles \cite{ZaJiTo10}; in this sense, the void space is more fundamental to the local structure than the particles themselves. Strong arguments have also been put forth to support the claim that exponent values $\alpha$ less than unity in the small-wavenumber region of the structure factor indicate the presence of larger interparticle voids with higher frequency, thereby deregularizing the microstructure while maintaining hyperuniformity \cite{ZaJiTo10}. This behavior is notable since it is not obvious that hyperuniformity can be consistent with a highly clustered microstructure; see Fig. \ref{PPfig}. Here we provide further evidence to link rigorously the void space and the local coordination structure of a point pattern, and we highlight the differences in the void space distribution for ``regular'' and ``anomalous'' hyperuniform systems. Since we can directly control the fraction of constrained degrees of freedom via collective coordinates, our results have implications for understanding how the void space distribution is affected by increased constraints on the many-particle configuration. Indeed, our work directly supports the fundamental role of the void space in the microstructure and reinforces the relationship between constraints on the local structure and the aforementioned observed minimal scaling $\alpha = 1$ found in $S(k)$ for MRJ hard sphere packings. Our major results are summarized as follows: \begin{itemize} \item[(i)] Disordered hyperuniform many-particle ground states can, counterintuitively, exhibit a substantial degree of clustering in the absence of a large number of constraints on the particle distribution (Sections IV and V). \item[(ii)] The order-disorder phase transition that occurs upon increasing the fraction of constrained degrees of freedom is related to the \emph{realizability} of the constrained contribution to the pair correlation function $g_2(r)$ (defined below) (Section IV).
\item[(iii)] Hyperuniform particle distributions with anomalous asymptotic local density fluctuations (i.e., slower than the volume but faster than the surface area of an observation window) can be constructed, and these fluctuations are intimately related to the distribution of \emph{void sizes} external to the particles (Section V). \item[(iv)] With few constrained degrees of freedom (e.g., a perturbation from an ideal gas), the entropy (configurational degeneracy) decreases linearly with the number of constraints imposed on the particle distribution (Section V). \end{itemize} Section II provides a brief overview of the important ideas related to point processes, collective coordinates, and hyperuniformity. We apply these concepts in Section III to discuss how control over collective coordinates can be used to numerically generate configurations of hyperuniform point patterns, including those with anomalous asymptotic local density fluctuations, to a high numerical precision. Section IV explores how increasing the fraction of constrained degrees of freedom within hyperuniform systems affects the observed pair correlations and, therefore, the local coordination structure. In Section V we provide explicit calculations for the void statistics of our hyperuniform point patterns under weak and strong constraints, and we draw explicit connections among the regularity of the local structure, the exponent characterizing the small-wavenumber region of the structure factor, and the distribution of the local voids. Concluding remarks are given in Section VI. \section{Stochastic point patterns, collective coordinates, and hyperuniformity} We consider many-particle configurations to be realizations of stochastic point processes in some subset of Euclidean space $\mathbb{R}^d$. A (finite) \emph{stochastic point pattern} is formally defined as a distribution of $N$ points $\{\mathbf{r}^N\}$ in some compact space $\mathcal{V}$ of volume (Lebesgue measure) $V$. We consider the case where the distribution is statistically homogeneous with periodic boundary conditions on $\mathcal{V}$; the thermodynamic limit $N, V\rightarrow +\infty$ with $\rho = N/V =$ constant can be taken appropriately to extend the point pattern to Euclidean space $\mathbb{R}^d$. The statistics of the process are determined by an $N$-particle probability density function $P_N(\mathbf{r}^N)$, which need not be a Gibbs measure. Equivalently, one can specify the countable set of \emph{generic $n$-particle probability density functions} $\rho_n(\mathbf{r}^n)$, defined by \begin{equation} \rho_n(\mathbf{r}^n) = \frac{N!}{(N-n)!} \int P_N(\mathbf{r}^n, \mathbf{r}^{N-n}) d\mathbf{r}^{N-n}. \end{equation} The function $\rho_n$ is therefore the probability density associated with finding a subset of \emph{any} $n$ particles within volume elements $d\mathbf{r}^n$. Note that for statistically homogeneous point patterns $\rho_1 = \rho$. Related to the generic $n$-particle probability density function is the $n$-particle correlation function $g_n(\mathbf{r}^n)$, defined by \begin{equation} \rho^n g_n(\mathbf{r}^n) = \rho_n(\mathbf{r}^n). \end{equation} Of particular importance is the \emph{pair correlation function} $g_2(\mathbf{r})$, which can be made integrable by subtracting its long-range value of unity to give the \emph{total correlation function} $h(\mathbf{r}) = g_2(\mathbf{r}) - 1$.
A Fourier representation of $g_2(\mathbf{r})$ is given by the \emph{structure factor} $S(\mathbf{k})$, defined by \begin{equation} S(\mathbf{k}) = 1+\rho \hat{h}(\mathbf{k}), \end{equation} where we utilize the following convention for the Fourier transform: \begin{equation} \hat{f}(\mathbf{k}) = \int_{\mathbb{R}^d} \exp(-i\mathbf{k}\cdot\mathbf{r}) f(\mathbf{r}) d\mathbf{r}. \end{equation} Corresponding to any \emph{single} configuration of points $\{\mathbf{r}^N\}$ is a local density variable \begin{equation} \rho(\mathbf{r}) = \sum_{j=1}^N \delta(\mathbf{r}-\mathbf{r}_j), \end{equation} where $\delta$ denotes the Dirac delta function. The ensemble average of this local density with respect to the statistics of the point process is \begin{equation} \langle \rho(\mathbf{r})\rangle = \rho, \end{equation} and the autocorrelation function is given by \begin{equation}\label{seven} \langle \rho(\mathbf{r}_1) \rho(\mathbf{r}_2)\rangle = \rho \delta(\mathbf{r}) + \rho^2 g_2(\mathbf{r}) \end{equation} with $\mathbf{r} = \mathbf{r}_1 - \mathbf{r}_2$. Note from \eqref{seven} that the autocorrelation function contains two contributions: a delta function corresponding to the self-correlation of a point in the process and the pair correlation function between two distinct particles. The self-correlation contribution is \emph{independent} of the distribution of particles in the system and arises for all correlated and uncorrelated point patterns. For statistically homogeneous point patterns subject to periodic boundary conditions, it is convenient to assume ergodicity and equate ensemble averages with volume averages over the unit cell. This assumption is expected to be valid in the thermodynamic limit. One can show that the volume-averaged local density and autocorrelation function are \begin{align} \overline{\rho(\mathbf{r})} &= \rho\label{eight}\\ \overline{\rho(\mathbf{x}+\mathbf{r})\rho(\mathbf{x})} &= \rho \delta(\mathbf{r}) + \frac{1}{V} \sum_{j\neq \ell = 1}^N \delta(\mathbf{r}-\mathbf{r}_{j\ell})\label{nine}, \end{align} where $\mathbf{r}_{j\ell} = \mathbf{r}_j - \mathbf{r}_{\ell}$. Equation \eqref{nine} suggests the following alternative definition of the pair correlation function: \begin{equation} \rho^2 g_2(\mathbf{r}) = \left\langle \frac{1}{V} \sum_{j\neq\ell = 1}^N \delta(\mathbf{r} - \mathbf{r}_{j\ell})\right\rangle. \end{equation} Since the Dirac delta functions in \eqref{nine} are by definition localized, this result has little practical utility when handling finite particle distributions. However, one can take advantage of the periodicity of the unit cell to expand the local density in a Fourier series according to \begin{equation}\label{eleven} \rho(\mathbf{r}) = \frac{1}{V} \sum_{j=1}^N \sum_{\mathbf{k}} \exp\left[i \mathbf{k} \cdot(\mathbf{r}-\mathbf{r}_j)\right], \end{equation} which is equivalent to a discrete (inverse) Fourier transform. The wavevectors $\mathbf{k}$ in \eqref{eleven} are determined by the geometry of the unit cell; if the unit cell is formed with basis vectors $\{\mathbf{e}_i\}$, then the wavevectors satisfy \begin{equation} \mathbf{k}\cdot \mathbf{e}_i = 2\pi m_i \end{equation} for each $i$ and some $m_i \in \mathbb{Z}$. For simplicity, we will henceforth consider a $d$-dimensional cubic cell $[0, L]^d \subset \mathbb{R}^d$, which implies $\mathbf{k} = 2\pi \mathbf{n}/L$ for some $\mathbf{n} \in \mathbb{Z}^d$.
Rewriting \eqref{eleven} in the form \begin{equation} \rho(\mathbf{r}) = \frac{1}{V} \sum_{\mathbf{k}} \exp(i \mathbf{k}\cdot \mathbf{r}) \hat{\rho}(\mathbf{k}), \end{equation} where \begin{equation}\label{fourteen} \hat{\rho}(\mathbf{k}) = \sum_{j=1}^N \exp(-i\mathbf{k}\cdot\mathbf{r}_j), \end{equation} we observe that the local density is the discrete (inverse) Fourier transform of $\hat{\rho}$, which we call a \emph{collective density variable}. The identity \eqref{eight} can also be obtained using the Fourier representation \eqref{eleven}, meaning that only the mode $\mathbf{k} = \mathbf{0}$ contributes to the local density on average. However, the autocorrelation function is now of the form \begin{align} \overline{\rho(\mathbf{x}+\mathbf{r}) \rho(\mathbf{x})} &= \frac{\rho}{V} \sum_{\mathbf{k}} \exp(i \mathbf{k}\cdot\mathbf{r}) + \frac{1}{V^2} \sum_{j\neq\ell} \sum_{\mathbf{k}} \exp\left[i \mathbf{k}\cdot (\mathbf{r}-\mathbf{r}_{j\ell})\right]\\ &= \rho \delta(\mathbf{r}) + \frac{\rho^2}{N^2} \sum_{\mathbf{k}} \exp(i \mathbf{k} \cdot\mathbf{r})\left[\lvert \hat{\rho}(\mathbf{k})\rvert^2 - N\right], \end{align} which, by comparing with \eqref{seven}, implies \cite{FN1} \begin{equation}\label{seventeen} g_2(\mathbf{r}) = \frac{1}{N^2} \sum_{\mathbf{k}} \exp(i \mathbf{k}\cdot \mathbf{r}) \left[\lvert\hat{\rho}(\mathbf{k})\rvert^2 - N\right]. \end{equation} The result \eqref{seventeen} allows one to directly compute the pair correlation function from the collective density variables $\hat{\rho}$; note that the $\mathbf{k} = \mathbf{0}$ mode must be included in this calculation to ensure the correct long-range behavior $g_2(\mathbf{r}) \rightarrow 1$ as $\lVert\mathbf{r}\rVert \rightarrow +\infty$. In practice, one must truncate the wavevector summation in \eqref{seventeen}, leading to oscillatory approximations to $g_2$ within some threshold determined by the cut-off magnitude of the wavevectors. \emph{Hyperuniform} point patterns constitute a subclass of point processes lacking infinite-wavelength local density fluctuations \cite{ToSt03}. Specifically, it has been shown that the variance $\sigma^2_N(R)$ in the number of points within a local spherical observation window $\mathcal{W}(R)$ of radius $R$ and volume $v(R)$ scales asymptotically as \cite{ToSt03} \begin{equation}\label{hypscale} \sigma^2_N(R) = \langle N(R)\rangle \left[ A_N(R) + B_N(R)/R + \text{lower-order terms}\right], \end{equation} where $\langle N(R)\rangle = \rho v(R)$ is the average number of points in the observation window. The coefficients $A_N(R)$ and $B_N(R)$ in \eqref{hypscale} are determined solely by the two-particle information of the point pattern: \begin{align} A_N(R) &= 1+ \rho \int_{\mathcal{W}(R)} h(\mathbf{r}) d\mathbf{r} \qquad (R\rightarrow +\infty)\\ B_N(R) &= -\frac{\rho \Gamma(1+d/2)}{\Gamma[(d+1)/2]\Gamma(1/2)} \int_{\mathcal{W}(R)} h(\mathbf{r}) r d\mathbf{r} \qquad (R\rightarrow +\infty). \end{align} So long as $h(r) \rightarrow 0$ faster than $r^{-d}$, the leading-order coefficient $A_N(R)$ converges asymptotically as $A_N(R) = A_N \equiv \lim_{\lVert\mathbf{k}\rVert\rightarrow 0}S(\mathbf{k})$ \cite{FN2}. By definition, a hyperuniform point pattern possesses a number variance growing slower than the volume $v(R)$ of the observation window (equivalently, the mean number of points $\langle N(R)\rangle$), implying that $A_N = 0$ and infinite-wavelength density fluctuations vanish. 
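The vanishing of $A_N$ is simple to observe numerically. The following sketch is our illustration only: it estimates $\sigma^2_N(R)$ in $d=1$ by direct window sampling for a Poisson pattern (not hyperuniform) and for an independently perturbed integer lattice, a standard example of a hyperuniform pattern.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

N, L = 10000, 10000.0                  # unit density, periodic box [0, L)
poisson = rng.uniform(0, L, N)
lattice = (np.arange(N) + rng.uniform(-0.3, 0.3, N)) % L  # hyperuniform

def number_variance(pts, R, windows=20000):
    # Count points in sliding windows [c - R, c + R) on the periodic line.
    pts = np.sort(pts)
    c = rng.uniform(0, L, windows)
    lo = np.searchsorted(pts, (c - R) % L)
    hi = np.searchsorted(pts, (c + R) % L)
    counts = np.where(hi >= lo, hi - lo, (N - lo) + hi)   # periodic wrap
    return counts.var()

for R in [5.0, 20.0, 80.0]:
    print("R = %4.0f: Poisson %8.1f   perturbed lattice %6.2f"
          % (R, number_variance(poisson, R), number_variance(lattice, R)))
\end{verbatim}
The Poisson variance grows like the window volume $2\rho R$, whereas the perturbed-lattice variance remains bounded, consistent with $A_N = 0$.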
The most common examples of hyperuniform point patterns, including all Bravais lattices, periodic non-Bravais lattices, quasicrystals possessing Bragg peaks, and certain disordered point patterns with pair correlation functions decaying to unity exponentially fast, possess constant number variance coefficients $B_N(R) = B_N$ \cite{ToSt03}. This behavior implies that the isotropic structure factor $S(k)$ possesses a small-wavenumber scaling $Dk^{\alpha}$ with $\alpha \geq 2$, including the special case $\alpha = +\infty$ for periodic structures. However, it is also possible to find hyperuniform point patterns for which $0 < \alpha < 2$, in which case $C_1 \leq B_N(R) \leq C_2 R$ as $R\rightarrow +\infty$ for some constants $C_1$ and $C_2$. The most well-known examples of these types of ``anomalous'' local density fluctuations occur when $S(k) \sim k$ as $k\rightarrow 0$, in which case $B_N(R) = A_1 \ln(R) + A_2$ with $A_1$ and $A_2$ constant. This situation has been well-characterized in three-dimensional maximally random jammed packings of hard spheres \cite{DoStTo05}, the ground states of liquid helium \cite{ReCh67}, and noninteracting spin-polarized fermion ground states \cite{ToScZa08}. However, examples where $\alpha < 1$ have heretofore not appeared in the literature. \section{Collective coordinate construction of hyperuniform point patterns} One goal of this work is to construct examples of hyperuniform point patterns possessing the aforementioned ``anomalous'' asymptotic local density fluctuations, meaning that the number variance grows slower than the volume of an observation window but faster than the surface area. Collective density variables provide an attractive means to control the small-wavenumber region of the structure factor $S(\mathbf{k})$, thereby allowing us to construct a hyperuniform point pattern with targeted local density fluctuations. Specifically, we define an objective function $\Phi$ according to \begin{equation}\label{eighteen} \Phi(\mathbf{r}^N) = \sum_{\mathbf{k} \in \mathcal{Q}} \left[S(\mathbf{k}; \mathbf{r}^N)-S_0(\mathbf{k})\right]^2, \end{equation} where $S_0(\mathbf{k})$ is the targeted form of the structure factor and $\mathcal{Q}$ denotes some finite subset of wavevectors $\mathbf{k}$. The structure factor is determined using collective density variables; specifically, \begin{equation}\label{nineteen} S(\mathbf{k}; \mathbf{r}^N) = \frac{\lvert\hat{\rho}(\mathbf{k})\rvert^2}{N} \qquad (\mathbf{k}\neq \mathbf{0}), \end{equation} where $\hat{\rho}(\mathbf{k})$, implicitly a function of the particle positions $\mathbf{r}^N$, is defined by \eqref{fourteen}. The zero-wavevector is excluded from \eqref{nineteen} since it provides an $\mathcal{O}(N)$ contribution to the structure factor, corresponding to a delta function in the thermodynamic limit from the long-range behavior of $g_2$.
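Evaluating the objective function \eqref{eighteen} through \eqref{nineteen} for a trial configuration is straightforward. The following minimal sketch is ours and is meant only as an illustration in $d=1$; a generic quasi-Newton routine stands in for the MINOP minimizer employed below.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

N, rho, alpha, D, chi = 200, 1.0, 0.5, 1.0, 0.1
L = N/rho
M = int(chi*N)                            # number of constrained k > 0
k = 2*np.pi*np.arange(1, M + 1)/L         # constrained wavenumbers
S0 = D*k**alpha                           # target S_0(k) = D k^alpha

def S_of_k(r):
    # S(k; r^N) = |rho_hat(k)|^2 / N with rho_hat(k) = sum_j exp(-i k r_j)
    rho_hat = np.exp(-1j*np.outer(k, r)).sum(axis=1)
    return np.abs(rho_hat)**2/N

def Phi(r):                               # objective function, eq. (18)
    return np.sum((S_of_k(r) - S0)**2)

r0 = np.random.default_rng(1).uniform(0, L, N)
res = minimize(Phi, r0, method="L-BFGS-B")
print("Phi at the final configuration:", res.fun)   # ideally ~ 0
\end{verbatim}
Driving $\Phi$ to numerical zero produces a configuration whose structure factor matches $S_0(\mathbf{k})$ on every constrained wavevector; the production calculations reported here rely on MINOP, which reaches the tolerance quoted below.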
By expanding \eqref{eighteen}, one can show that our minimization problem corresponds to finding the classical ground state of a many-particle system with up to four-body interactions \cite{UcToSt06} \begin{equation} \Phi(\mathbf{r}^N) = \sum_{i\neq j\neq \ell \neq m} v_4(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_\ell, \mathbf{r}_m) + \sum_{i\neq j \neq \ell} v_3(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_\ell) + \sum_{i\neq j} v_2(\mathbf{r}_i, \mathbf{r}_j) + v_0, \end{equation} where \begin{align} v_4(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_\ell, \mathbf{r}_m) &= \frac{1}{N^2} \sum_{\mathbf{k}\in \mathcal{Q}} \cos(\mathbf{k}\cdot\mathbf{r}_{ij}) \cos(\mathbf{k}\cdot\mathbf{r}_{\ell m})\\ v_3(\mathbf{r}_i, \mathbf{r}_j, \mathbf{r}_\ell) &= \frac{4}{N^2} \sum_{\mathbf{k} \in \mathcal{Q}} \cos(\mathbf{k}\cdot\mathbf{r}_{ij})\cos(\mathbf{k}\cdot\mathbf{r}_{i\ell})\\ v_2(\mathbf{r}_i, \mathbf{r}_j) &= \frac{2}{N}\sum_{\mathbf{k}\in \mathcal{Q}} \cos(\mathbf{k}\cdot\mathbf{r}_{ij})[1-S_0(\mathbf{k})]\\ v_0 &= \sum_{\mathbf{k}\in\mathcal{Q}} \left[S_0(\mathbf{k}) - 1\right]^2. \end{align} The set $\mathcal{Q}$ in \eqref{eighteen} is chosen to contain all wavevectors, excluding the zero mode, with norm less than some upper bound $K$. This construction allows us to target specifically the small-wavenumber region of the structure factor, which controls the asymptotic local density fluctuations. The target function $S_0$ is chosen with the form \begin{equation}\label{twenty} S_0(\mathbf{k}) = D\lVert\mathbf{k}\rVert^{\alpha} \qquad \text{for all } \mathbf{k}\in \mathcal{Q}. \end{equation} In order for the target function to correspond to a realizable point pattern, it is necessary that $D \geq 0 $ to enforce positivity of the structure factor. The parameter $\alpha$ determines the asymptotic behaviors of the pair correlation function and the number variance [c.f. \eqref{NVscaling}]. Previous work \cite{UcToSt06} has considered the cases $\alpha = 1, 2, 4, 6, 8,$ and $10$ in dimensions $d = 2$ and $3$. It has recently been conjectured that $\alpha = 1$ corresponds to the \emph{minimal} exponent consistent with the constraints of saturation and strict jamming in sphere packings \cite{ZaJiTo10}; however, systems for which $\alpha < 1$ have not been reported in the literature, and their statistical properties are unknown. The objective function \eqref{eighteen} is minimized to within $10^{-17}$ of its global minimum using the MINOP algorithm \cite{DeMe79, Ka99}, which has several computational advantages for this type of investigation as previously reported in the literature \cite{UcToSt06}. MINOP applies a dogleg strategy that uses a gradient direction when one is far from the minimum, a quasi-Newton direction when one is close, and a linear combination of the two when one is at intermediate distances from the minimum. It is important for this study to verify that the constructed point patterns are indeed hyperuniform with the correct targeted asymptotic local density fluctuations. This criterion requires high resolution of the small-wavenumber region of the structure factor. Specifically, the smallest observable wavenumber magnitude in the collective coordinates representation (in a $d$-dimensional cubic unit cell) is $k_{\text{min}} = 2\pi/L = 2\pi \rho^{1/d}/N^{1/d}$, where $L$ is the box length, $N$ is the number of particles, and $\rho$ is the number density. 
To ensure hyperuniformity, we therefore require that $\lim_{N\rightarrow +\infty} S(k_{\text{min}}) = 0$, where the limit is taken at constant density. Since any simulation necessarily requires choosing $N$ finite, it is essential to select a value of $N$ sufficiently large to enforce both hyperuniformity and the desired form of the structure factor near the origin. Unfortunately, the $\mathcal{O}(N^{-1/d})$ scaling of $k_\text{min}$ makes obtaining such resolution increasingly difficult in higher dimensions. Our interest is in verifying the existence of anomalous hyperuniform point patterns and understanding their statistical properties, and we therefore limit our studies to one dimension, where the scaling is most favorable, with $N = 2000$ particles. It should be appreciated, however, that hyperuniform point patterns with logarithmically-growing asymptotic density fluctuations are known in arbitrarily high dimensions \cite{ToScZa08}. Importantly, since our minimization procedure is equivalent to finding the classical ground state of a long-range interaction with up to four-body potentials and can be used in principle to construct hyperuniform point patterns in any dimension, nontrivial phase behaviors can still be observed \cite{LiMa66}, and we are therefore able to extend our conclusions to higher-dimensional structures. \section{Collective coordinates and realizability of point patterns} For a general $d$-dimensional point pattern of $N$ particles, there are $dN$ translational degrees of freedom in the absence of constraints on the system. One must therefore choose a set of wavevectors $\mathcal{Q}$ for the objective function \eqref{eighteen} containing only a fraction $\chi$ of these degrees of freedom. In one dimension there are $2M(K) = \text{floor}(KL/\pi)$ wavevectors, excluding the zero mode, with magnitude less than or equal to $K$. Inversion-invariance of the modulus of the collective density variable implies that $M(K)$ of these wavevectors can be independently constrained; we therefore define a new parameter \begin{equation}\label{chidef} \chi = \frac{M(K)}{dN}, \end{equation} which represents the fraction of independently constrained degrees of freedom from the objective function $\Phi$. For the case where the targeted structure factor $S_0(\mathbf{k}) = 0$ for all $\mathbf{k} \in \mathcal{Q}$, it has been previously shown \cite{FaPeStSt91} that increasing the parameter $\chi$ induces a greater degree of order on the particle distribution. Specifically, in one dimension the corresponding point patterns are disordered for $0 < \chi < 1/3$ and crystalline for $\chi > 1/2$ \cite{FN3}; intermediate values of $\chi$ interpolate between these two regimes \cite{FNhighd}. However, it is known that target functions of the form \eqref{twenty} interfere with this order-disorder phase transition; here we provide analytic results suggesting that this transition is shifted to higher values of $\chi$ for all finite $\alpha$ \cite{FN4}. For a one-dimensional point pattern, the wavevectors are of the form $k = 2\pi m/L$ for $m \in \mathbb{Z}$, and one can write the collective density variable as: \begin{equation} \hat{\rho}(m) = \sum_{j = 1}^N \exp(-i 2\pi m r_j/L). \end{equation} Additionally, the total correlation function is of the form [cf. 
\eqref{seventeen}] \begin{equation}\label{twentythree} h(r) = \frac{2}{N^2}\sum_{m=1}^{+\infty} \cos(2\pi m r/L)\left[\lvert\hat{\rho}(2\pi m/L)\rvert^2 - N\right], \end{equation} which for the targeted point pattern can be decomposed as: \begin{align} h(r) &= \frac{2}{N} \sum_{m=1}^M \cos(2\pi m r/L)\left[D (2\pi m/L)^{\alpha} - 1\right] + \frac{2}{N^2} \sum_{m = M+1}^{+\infty} \cos(2\pi m r/L) \left[\lvert\hat{\rho}(2\pi m/L)\rvert^2 - N\right]\\ &= h_0(r; M) + h_1(r; M), \end{align} where $h_0(r; M)$ is the contribution to the total correlation function due to \emph{constrained} wavevectors and $h_1(r; M)$ is the \emph{unconstrained} contribution. The function $h_0$ can be simplified as \begin{align} h_0(r; M) &= \left(\frac{2^{\alpha+1} \pi^{\alpha}D}{N L^{\alpha}}\right) \sum_{m=1}^M \cos(2\pi m r/L) m^{\alpha} - \frac{2}{N} \sum_{m=1}^M \cos(2\pi m r/L)\\ &= C(\alpha, D) \sum_{m=1}^M \cos(2\pi m r/L) m^{\alpha} - (2/N) \cos[(M+1)\pi r/L] \csc(\pi r/L) \sin(M\pi r/L) \end{align} where \begin{equation} C(\alpha, D) = \frac{2^{\alpha+1} \pi^{\alpha} D}{N L^{\alpha}} \end{equation} is a parameter-dependent constant. The global minimum of $h_0(r; M)$ occurs at $r = 0$, corresponding to \begin{align} h_0(0; M) &= C(\alpha, D) \sum_{m=1}^M m^{\alpha} - (2M/N)\\ &= C(\alpha, D) H^{(-\alpha)}(M) - (2M/N)\label{thirty}, \end{align} where \begin{equation} H^{(\alpha)}(n) = \sum_{m=1}^n m^{-\alpha} \end{equation} is the \emph{harmonic number} of order $\alpha$. The negative contribution to $h_0(0; M)$ in \eqref{thirty} suggests that there may be an upper threshold $M^*$ beyond which $1 + h_0(0; M) < 0$, i.e., beyond which the constrained contribution to the pair correlation function becomes negative at the origin. For any values of $M$ in this region, the constrained contribution $h_0$ to the total correlation function of the point pattern is no longer in itself \emph{realizable} as a point process. The realizability problem in classical statistical mechanics \cite{ToSt02} and the associated $N$-representability problem in quantum statistics \cite{Co63} are notoriously difficult and unsolved problems in physics that ask under what sufficient and necessary conditions a reduced two-particle correlation function can be expressed as the integral over a full $N$-particle probability density. In the classical case, one can consider specifying a pair correlation function $g_2$ and attempting to construct a corresponding point process. Known necessary realizability conditions on $g_2$ include \begin{align} g_2(\mathbf{r}) &\geq 0 \text{ for all } \mathbf{r}\\ S(\mathbf{k}) &\geq 0 \text{ for all } \mathbf{k} \end{align} along with the somewhat weaker Yamada condition \begin{equation} \sigma^2_N(R) \geq \theta (1-\theta) \end{equation} on the fractional part $\theta$ of the average number of particles in an observation window \cite{Ya61}. The Yamada condition appears easy to satisfy in all but relatively low dimensions \cite{ToSt02}. The determination of other realizability conditions on $g_2$ is an open problem \cite{KuLeSp07}. \begin{figure}[!tp] \centering \includegraphics[width=0.45\textwidth]{Fig2A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig2B} \caption{(Color online) Left panel: Pair correlation function $g_2$ for numerically-constructed hyperuniform point patterns with small-wavenumber scalings $Dk^{\alpha}$ and $\chi = 0.1$.
Right panel: Constrained contributions to the pair correlation functions.}\label{chi01g2} \end{figure} \begin{figure}[!tp] \centering \includegraphics[width=0.45\textwidth]{Fig3A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig3B} \caption{(Color online) Left panel: Pair correlation function $g_2$ for numerically-constructed hyperuniform point patterns with small-wavenumber scalings $Dk^{\alpha}$ and $\chi = 0.35$. Right panel: Constrained contributions to the pair correlation functions.}\label{chi035g2} \end{figure} Figures \ref{chi01g2} and \ref{chi035g2} compare the pair correlation functions and the constrained contributions $h_0(r) + 1$ for numerically-constructed point patterns (using the methodology of Section III) with small-wavenumber exponents $\alpha = 0.5, 1.0,$ and $2.0$ and $\chi = 0.1$ and $0.35$. For $\chi = 0.1$, corresponding to a small fraction of constrained degrees of freedom, the constrained contribution $h_0(r) + 1$ places only moderate constraints on the local structure of the system, primarily controlling oscillations in $g_2$ beyond approximately five nearest-neighbor distances. Interestingly, the small-$r$ behaviors of $g_2(r)$ and $h_0(r)+1$ are strikingly different. Although the \emph{constrained} contribution to the pair correlation function generates an effective repulsion between particle pairs, the full pair correlation function indicates a tendency for particles to cluster at short pair separations. It follows that the unconstrained contribution to the pair correlation function plays a substantial role in determining the local structure for this system. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{Fig4} \caption{Typical threshold values $\chi^*(\alpha)$ beyond which the \emph{constrained} contribution $h_0(r)$ to the total correlation function is no longer \emph{realizable} as a point process. This curve corresponds to choosing $S(K) = 0.5$, where $K$ is the largest constrained wavevector magnitude. Note that as $\alpha \rightarrow +\infty$ we recover the crystallization threshold $\chi^* = 0.5$ reported in Ref. \cite{FaPeStSt91}.}\label{achi} \end{figure} However, the situation is quite different upon increasing the constrained degrees of freedom to $\chi = 0.35$. Figure \ref{chi035g2} shows that the constrained contribution to $g_2$ almost exactly mirrors the full pair correlation function, implying that sufficiently constraining the collective density variables places a \emph{strong} constraint on the local structure of the point pattern. It follows that the value $M^*$ beyond which $1 + h_0(0; M) < 0$ is an indicative precursor to the \emph{loss of realizability} of the targeted structure factor. We have mapped the threshold value $M^*$ (equivalently, $\chi^*$) in Fig. \ref{achi}. We emphasize that this loss of realizability is associated with negativity of the real-space pair correlation function; the structure factor itself is still positive over its entire domain. Interestingly, as the exponent $\alpha$ controlling the small-wavenumber region of the structure factor increases, we recover the value $\chi = 0.5$ corresponding to crystallization in the case where $S_0(\mathbf{k}) = 0$ for all $\mathbf{k} \in \mathcal{Q}$. This observation suggests that the threshold values of $\chi$ beyond which $1 + h_0(0; M) < 0$ generalize this phase transition. In Section V, we provide additional arguments to support this claim.
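The threshold curve of Fig. \ref{achi} can be traced directly from \eqref{thirty}. In the following sketch (ours; $D$ is fixed by the normalization $S(K)=0.5$ quoted in the caption), the realizability condition $1 + h_0(0; M) \geq 0$ is scanned over $M$:
\begin{verbatim}
import numpy as np

N, rho = 2000, 1.0
L = N/rho

def h0_at_origin(M, alpha):
    # Eq. (30) rewritten in terms of S_0(k_m) = D (2 pi m/L)^alpha with
    # D = 0.5/K^alpha and K = 2 pi M/L, i.e., S_0(k_m) = 0.5 (m/M)^alpha.
    m = np.arange(1, M + 1)
    S0 = 0.5*(m/float(M))**alpha
    return (2.0/N)*np.sum(S0 - 1.0)

for alpha in [0.5, 1.0, 2.0, 8.0, 64.0]:
    Mstar = next(M for M in range(1, N + 1)
                 if 1.0 + h0_at_origin(M, alpha) < 0)
    print("alpha = %5.1f: chi* = M*/N ~ %.3f" % (alpha, Mstar/N))
\end{verbatim}
For increasing $\alpha$ the computed $\chi^*$ decreases toward $0.5$, reproducing the crystallization threshold noted in the caption of Fig. \ref{achi}.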
\section{Void statistics and coordination structure} \subsection{Exclusion probability functions} The $n$-particle correlation functions contain information concerning the relative locations of points within a point process, and, in principle, specifying the countably infinite set (in the thermodynamic limit) of such functions is sufficient to completely determine the point pattern. However, any finite collection of correlation functions contains only partial details of the spatial arrangements of the points, implying that there are degenerate structures with these same statistics \cite{JiStTo10}. In particular, the $n$-particle correlation functions do not in themselves provide direct information about the space \emph{exterior} to the points, or the so-called \emph{void space}. It has been shown for point patterns \cite{ZaJiTo10, ToLuRu90} (and random media \cite{Tobook}) that the distribution of the void space is indeed a more fundamental descriptor of the point process than the arrangements of the points themselves. Here we are interested in characterizing the relationship between asymptotic local number density fluctuations and the void space statistics; in particular, we would like to examine the constraints that the exponent $\alpha$ in the small-wavenumber region of the structure factor places on the distribution of the void space. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{Fig5} \caption{Events contributing to the void exclusion probability $E_V(r)$ (upper left) and the particle exclusion probability $E_P(r)$ (lower). The points correspond to a realization of a disordered point process.}\label{E} \end{figure} One can define two types of ``exclusion'' functions, both of which measure the availability of empty space surrounding points of a stochastic process. The \emph{void exclusion probability function} $E_V(r)$ is the probability of finding a $d$-dimensional spherical cavity of radius $r$ centered at an arbitrary position in $\mathbb{R}^d$. The void exclusion probability has recently been shown to play a fundamental role in the covering and quantizer problems from discrete geometry and number theory \cite{To10}. Closely related to this descriptor is the \emph{particle exclusion probability function} $E_P(r)$, which is the probability of finding a $d$-dimensional sphere of radius $r$ centered on a point of the point process but containing no other points. Figure \ref{E} highlights the differences between these functions. The exclusion probability functions are complementary cumulative distributions of the void and particle nearest-neighbor functions $H_V(r)$ and $H_P(r)$, respectively \cite{ToLuRu90, Tobook}. The void nearest-neighbor function is the probability density of finding the nearest point of a point process, with respect to an arbitrary location in $\mathbb{R}^d$, at a radial distance between $r$ and $r + dr$. The particle nearest-neighbor function is defined similarly but with respect to nearest neighbors between two points of a point process. One therefore has the following simple relationships between these sets of functions: \begin{align} H_V(r) &= -\frac{\partial E_V(r)}{\partial r}\\ H_P(r) &= -\frac{\partial E_P(r)}{\partial r}. \end{align} One can relate the void and particle exclusion probabilities via a simple probabilistic construction \cite{ScZaTo09}.
Specifically, we consider a generalized exclusion probability $E_V(r; \epsilon)$, which is the probability of finding a $d$-dimensional annulus of outer radius $r$ and inner radius $\epsilon$ empty of points; by definition, $E_V(r; 0) = E_V(r)$. Taking the derivative of this function with respect to the inner radius $\epsilon$ gives a function proportional to the probability of finding a point within a small radial region inside the annulus and the annulus itself devoid of points. It follows that $E_P(r)$, the conditional probability of finding a spherical cavity centered on a point, is \begin{equation} E_P(r) = \lim_{\epsilon\rightarrow 0^+} \frac{1}{\rho s(\epsilon)} \frac{\partial E_V(r; \epsilon)}{\partial \epsilon}, \end{equation} where $s(\epsilon)$ is the surface area of a $d$-dimensional sphere of radius $\epsilon$. This construction is known in the theory of point processes \cite{DaVe08} and has also been used in the literature to identify the void statistics of certain point patterns related to problems in number theory, random matrix theory, and quantum mechanics \cite{ToScZa08}. One can then define the \emph{exclusion correlation function} $\eta(r)$ according to \begin{equation} \eta(r) \equiv \frac{E_P(r)}{E_V(r)}. \end{equation} This function provides a measure of the correlations between \emph{neighboring} points in a stochastic point pattern and is identically unity for a Poisson point process. It is interesting to note that for a system of \emph{equilibrium} hard spheres of diameter $D$, the exclusion correlation function is given by \cite{ToLuRu90} \begin{equation} \eta(r) = \begin{cases} [E_V(r)]^{-1}, & r\leq D\\ [E_V(D)]^{-1}, & r\geq D, \end{cases} \end{equation} which depends only on knowledge of $E_V(r)$ and is monotonically nondecreasing for all $r$ with $\eta(0) = 1$. Further insight into the probabilistic meanings of $E_P$, $E_V$, and $\eta$ can be gained by introducing the notion of the particle space, defined to be the subset (of Lebesgue measure zero) of $\mathbb{R}^d$ occupied by the points of the point process. The particle exclusion probability function $E_P(r)$ is then the fraction of the particle space that can be decorated by a $d$-dimensional sphere of radius $r$ containing no other points of the process. To define the void exclusion probability $E_V(r)$, one decorates all of the points in the process by spheres of radius $r$ and then determines the fraction of \emph{all} space not occupied by the spheres; this value corresponds to the portion of space available to insert a cavity of radius $r$ \cite{To10}. The exclusion correlation function $\eta(r)$ then provides a measure of the relative available space for a cavity of radius $r$ in the particle space compared to the external void space. Torquato and coworkers \cite{ToLuRu90} have provided the following series representations for the exclusion probability functions: \begin{align} E_V(r) &= 1+\sum_{k=1}^{+\infty} \frac{(-\rho)^k}{\Gamma(k+1)} \int g_k(\mathbf{r}^k) \prod_{j=1}^k m(\lVert \mathbf{x}-\mathbf{r}_j\rVert; r) d\mathbf{r}_j\\ E_P(r) &= 1+\sum_{k=1}^{+\infty} \frac{(-\rho)^k}{\Gamma(k+1)} \int g_{k+1}(\mathbf{r}^{k+1}) \prod_{j=2}^{k+1} m(\lVert\mathbf{r}_1-\mathbf{r}_j\rVert; r) d\mathbf{r}_j, \end{align} where $m(r; R) = \Theta(R-r)$. Since these functions are special cases of a more general canonical $n$-particle correlation function \cite{To86}, one can establish rigorous upper and lower bounds by truncating these series at finite order.
Specifically, by writing \begin{equation} E_{V/P}(r) = \sum_{k=0}^{+\infty} E_{V/P}^{(k)}(r), \end{equation} where $E_{V/P}^{(0)} \equiv 1$, we have the following hierarchy of bounds: \begin{align} E_{V/P}(r) &\leq \sum_{k=0}^{\ell}E_{V/P}^{(k)}(r) \qquad (\ell \text{ even})\\ E_{V/P}(r) &\geq \sum_{k=0}^{\ell} E_{V/P}^{(k)}(r) \qquad (\ell \text{ odd})\label{lbound}, \end{align} which become sharper with increasing $\ell$. \subsection{Local statistics of anomalous hyperuniform point patterns} \begin{figure}[!t] \centering \includegraphics[height=2.3in]{Fig6A}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{Fig6B} \caption{(Color online) Left panel: Structure factor with small-$k$ behavior $Dk^{\alpha}$ (inset) for numerically-constructed hyperuniform point patterns with $\chi = 0.1$. Right panel: Structure factor with small-$k$ behavior for $\chi = 0.35$.}\label{Sk} \end{figure} We have been able to successfully construct point configurations exhibiting anomalous asymptotic local number density fluctuations. Figure \ref{Sk} provides images of the structure factors for our configurations at $\chi = 0.1$ and $\chi=0.35$. As we show, for all wavevectors within the constrained portion of the spectrum, the structure factor matches its target value within an exceedingly small numerical tolerance (on the order of $10^{-17}$). In addition to the systems shown, we have also been able to reliably construct configurations with small-$k$ exponents $\alpha \geq 0.25$. In order to keep the exposition clear, we have only presented results for $\alpha = 0.5$ with the disclaimer that our conclusions will apply to other point patterns with anomalous local number density fluctuations. It is interesting to note the substantial differences in the structure factors of the systems for $\chi = 0.1$ and $\chi = 0.35$, particularly for unconstrained wavevectors. For $\chi = 0.1$, the structure factor exhibits an unusually slow decay to its asymptotic value of unity; we have fitted the large-$k$ region of the structure factor with an asymptotic form $1+\beta/k^{\gamma}$ and find a power-law decay with $\gamma = 1$. This behavior is due to the local clustering of particles as expected from the small-$r$ region of the pair correlation function in Fig. \ref{chi01g2}. This effect can be directly observed in Figs. \ref{config1} and \ref{config2}, which provide illustrative portions of our numerically-constructed hyperuniform point patterns at $\chi = 0.1$ and $\chi = 0.35$. As has been previously reported in the literature \cite{UcToSt06, BaStTo08}, increasing the fraction of constrained degrees of freedom in the many-particle system has the effect of imposing greater local order in the form of an effective short-range repulsive interaction. By increasing $\chi$ from $0.1$ to $0.35$, the relative influence of the constrained wavevectors on the pair correlation function increases, suppressing the formation of local clusters. However, we also observe that as the exponent $\alpha$ controlling the small-wavenumber region of the structure factor decreases (equivalently, as anomalous local number density fluctuations appear), this effective repulsion between particles becomes noticeably weaker, manifested in the pair correlation function by larger values of $g_2(0)$. This behavior suggests that anomalous hyperuniform point patterns possess greater variability in their local structures, particularly with regard to the shapes and sizes of voids between particles.
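The exclusion probabilities reported below (Fig. \ref{EvEpfig}) admit simple direct estimators in one dimension. The following sketch is our illustration, applied here to a Poisson configuration, for which $E_V(r)=E_P(r)=e^{-2\rho r}$ serves as an exact check; the same estimators can be applied to any periodic configuration array.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

def exclusion_probabilities(pts, L, radii, trials=100000):
    # E_V(r): fraction of random positions whose nearest point exceeds r;
    # E_P(r): fraction of particles whose nearest neighbor exceeds r.
    pts = np.sort(pts)
    x = rng.uniform(0, L, trials)
    idx = np.searchsorted(pts, x) % len(pts)
    dV = np.minimum((pts[idx] - x) % L, (x - pts[idx - 1]) % L)
    gaps = (np.roll(pts, -1) - pts) % L          # right-neighbor gaps
    dP = np.minimum(gaps, np.roll(gaps, 1))      # nearest-neighbor distance
    EV = np.array([(dV > r).mean() for r in radii])
    EP = np.array([(dP > r).mean() for r in radii])
    return EV, EP

L, N = 2000.0, 2000                              # unit density
radii = np.array([0.1, 0.25, 0.5, 1.0])
EV, EP = exclusion_probabilities(rng.uniform(0, L, N), L, radii)
print("Poisson reference:", np.exp(-2*radii).round(3))
print("E_V estimate:     ", EV.round(3))
print("E_P estimate:     ", EP.round(3))
\end{verbatim}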
\begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig7A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig7B}\newline \includegraphics[width=0.45\textwidth]{Fig7C} \caption{(Color online) Portions of numerically-constructed hyperuniform point patterns with $\chi = 0.1$ and small-$k$ scaling exponents (upper left) $\alpha = 0.5$, (upper right) $\alpha = 1.0$, and (lower) $\alpha = 2.0$.}\label{config1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig8A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig8B}\newline \includegraphics[width=0.45\textwidth]{Fig8C} \caption{(Color online) Portions of numerically-constructed hyperuniform point patterns with $\chi = 0.35$ and small-$k$ scaling exponents (upper left) $\alpha = 0.5$, (upper right) $\alpha = 1.0$, and (lower) $\alpha = 2.0$.}\label{config2} \end{figure} We have verified the expected asymptotic behaviors of the number variance for our numerically-constructed point patterns as shown in Fig. \ref{NV}. The asymptotic scalings of these fluctuations exactly correspond to their theoretical predictions. In particular, we have provided the first example of a hyperuniform point pattern for which the asymptotic number variance grows more slowly than the volume of an observation window but faster than a logarithmic scaling. Interestingly, the local clustering of points at $\chi = 0.1$ generates strong oscillations in the number variance that persist for several nearest-neighbor distances. In contrast, these local oscillations essentially vanish after two nearest-neighbor distances at $\chi = 0.35$, reflecting the strong constraints placed on the local structure by the small-wavenumber region of the structure factor. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig9A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig9B} \caption{(Color online) Left panel: Number variances $\sigma^2_N(R)$ for numerically-constructed hyperuniform point patterns with $\chi = 0.1$. Right panel: Corresponding number variances for $\chi = 0.35$.}\label{NV} \end{figure} Our calculations for the void and particle exclusion probabilities of these systems, shown in Figure \ref{EvEpfig}, demonstrate previously-unobserved statistical properties for hyperuniform point patterns. For a Poisson point pattern, one has the result that \begin{equation} E_V(r) = E_P(r) = \exp[-\rho v(r)], \end{equation} implying that the exclusion correlation function $\eta(r) = 1$ for all $r$. This result follows from the absence of interparticle correlations for the process and the underlying Poisson probability distribution for the number of particles within an arbitrary compact set. Gabrielli and Torquato \cite{GaTo04} have provided strong arguments to suggest that for any hyperuniform point pattern, the void exclusion probability should asymptotically decay faster than for a Poisson point process. This behavior implies that arbitrarily large cavities within the system, while not prohibited by the constraint of hyperuniformity, are expected to be exceedingly rare events owing to the underlying regularity of the global structure of the pattern.
It is therefore not unreasonable to expect the functional form of $E_V(r)$ for the Poisson point process to provide an upper bound on the exclusion probability of any hyperuniform point pattern, and this observation is indeed rigorously true for point patterns generated from fermionic particle distributions (so-called determinantal point processes) \cite{ToScZa08}. More generally, the Poisson result will place an upper bound on $E_V$ for any point pattern with $n$-particle correlation functions $g_n \leq 1$ for all $n$. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{Fig10A}\hspace{0.05\textwidth} \includegraphics[width=0.45\textwidth]{Fig10B} \caption{(Color online) Left panel: Void- and particle-exclusion probabilities for the numerically-constructed hyperuniform point patterns ($\chi = 0.1$) along with the reference curve for the Poisson point process. Right panel: Corresponding functions for $\chi = 0.35$.}\label{EvEpfig} \end{figure} For $\chi = 0.1$, we observe the unusual property that $E_V(r)$ is greater than the Poisson result for all values of $r$ that can be reliably determined from numerical simulation. It is instead the \emph{particle} exclusion probability function that is bounded from above by the Poisson curve. To understand this discrepancy, we first note that $E_V(r)$ and $E_P(r)$ are rigorously bounded from below by [c.f. \eqref{lbound}] \cite{ToScZa08} \begin{align} E_V(r) &\geq 1-\rho v(r)\label{Evlower}\\ E_P(r) &\geq 1-Z(r)\label{Eplower}, \end{align} where $Z(R)$ is the \emph{cumulative coordination number} \begin{equation} Z(R) = \rho \int_{\mathbb{R}^d} \Theta(R-r) g_2(\mathbf{r}) d\mathbf{r}. \end{equation} These bounds become sharp at low density or small $r$. Therefore, while $E_V(r)$ is related to the geometry of a cavity within the void space, the particle exclusion probability $E_P$ depends explicitly on the local coordination structure of the underlying point process. To elucidate further the relationship between the local coordination structure and the void statistics, we can consider a modification of the number variance problem, whereby one measures fluctuations in the number of points within an observation window \emph{centered on a point of the point process}. Let $N_P^{(i)}(R)$ denote this quantity; it can be represented as \begin{equation} N_P^{(i)}(R) = \sideset{}{^\prime}\sum_{j} \Theta(R-\lVert\mathbf{r}_j-\mathbf{r}_i\rVert), \end{equation} where the prime on the summation means that particle $i$ is excluded. The average value of this random variable is \begin{equation} \langle N_P^{(i)}(R)\rangle = \rho\int_{\mathbb{R}^d} g_2(\mathbf{r}) \Theta(R-\lVert\mathbf{r}\rVert) d\mathbf{r} = Z(R). \end{equation} The cumulative coordination number therefore measures the local number density within \emph{the particle space}. It follows that when $Z(R) \leq \rho v(R)$, $\langle N_P^{(i)}(R)\rangle$, the average number of points in the particle space within an observation window of radius $R$, is less than or equal to $\langle N_V(R)\rangle$, the average number of points of the process within a window in the void space. This behavior then implies that the points are more dispersed within the particle space, and $E_P(R) \geq E_V(R)$; equivalently, $\eta(R) \geq 1$. Note that this analysis is consistent with the lower bounds \eqref{Evlower} and \eqref{Eplower} on the exclusion probability functions.
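The coordination-based criterion just described is easy to evaluate numerically for a given configuration. A minimal sketch (our own 1D illustration; the Poisson reference and parameter values are hypothetical):
\begin{verbatim}
import numpy as np

# Cumulative coordination number Z(R) of a periodic 1D pattern, compared
# against the unconditioned expectation rho*v(R) = 2*rho*R.
def cumulative_coordination(x, L, radii):
    N = len(x)
    d = np.abs((x[:, None] - x[None, :] + L / 2) % L - L / 2)
    np.fill_diagonal(d, np.inf)   # exclude each particle from its own count
    return np.array([(d < R).sum() / N for R in radii])

L, radii = 1000.0, np.linspace(0.1, 5.0, 50)
x = np.random.default_rng(1).uniform(0.0, L, 1000)  # Poisson reference, rho = 1
Z = cumulative_coordination(x, L, radii)
print(np.allclose(Z, 2.0 * radii, rtol=0.2))        # Z(R) ~ 2*rho*R for Poisson
# Z(R) > rho*v(R) signals clustering (eta < 1); Z(R) < rho*v(R) signals an
# effectively repulsive, dispersed particle space (eta > 1).
\end{verbatim}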
Conversely, for the case where $Z(R) \geq \rho v(R)$, we have that $\langle N_P^{(i)}(R)\rangle \geq \langle N_V(R)\rangle$, which suggests that the points are more closely located within the particle space, leaving larger cavities within the void space. We therefore conclude that $E_P(R) \leq E_V(R)$ [$\eta(R) \leq 1$], and the point process should exhibit local clustering among points. These claims are also consistent with our results for the exclusion probability functions in Fig. \ref{EvEpfig}, whereby we observe a transition from $\eta(R) < 1$ at $\chi = 0.1$ to $\eta(R) > 1$ for $\chi = 0.35$. Indeed, the configurations at $\chi = 0.1$ exhibit substantial clustering among points (c.f. Fig. \ref{config1}). Our arguments can be extended by examining the scaling of the configurational degeneracy with the fraction of constrained degrees of freedom $\chi$. Here we measure this degeneracy by calculating the entropy (logarithm of the degeneracy) of the system relative to an ideal gas of $N$ particles in a volume $V$ on the line. For the ideal gas, we coarse-grain the system by dividing the volume $V$ into $M \gg 1$ cells such that no more than one particle occupies each cell with probability one, thereby representing the degeneracy associated with the $dN$ translational degrees of freedom as a combinatorial occupancy problem. The size of a cell determines the length scale, meaning without loss of generality that we need only consider the regime $\rho = N/M \ll 1$. Assuming that the particles are indistinguishable, the number of configurations $\Omega$ available to the system is \begin{equation} \Omega = \frac{M!}{(M-N)! N!}. \end{equation} Since the underlying distribution of particles is uniform within the cells, Boltzmann's formula for the entropy gives (with $k_B = 1$) \begin{equation} S = \ln \Omega = \ln\left[\frac{M!}{(M-N)! N!}\right], \end{equation} which for large $M$ and $N$ becomes \begin{align} S &= M\ln M - M - (M-N)\ln(M-N) + (M-N) - N\ln N + N\\ &= -M\ln(1-N/M) + N\ln(M/N) + N\ln(1-N/M). \end{align} Under the assumption that $N/M \ll 1$, we have the following result for the entropy per particle $\overline{S}_{\text{ideal}} = S_{\text{ideal}}/N$: \begin{equation} \overline{S}_{\text{ideal}} = 1-N/M + \ln(M/N) \approx 1-\ln(\rho). \end{equation} The entropy of the ideal gas is therefore large and positive as expected. For the density regime $\rho \ll 1$ that we have in mind, one can simplify further by taking $\overline{S} \approx -\ln(\rho)$, which diverges to $+\infty$ for small $\rho$. Suppose now that we constrain $K$ degrees of freedom, where $K \ll N$. This construction corresponds to making a small perturbation away from the ideal gas configuration with $N-K$ degrees of freedom still available to the many-particle system. We again divide the volume $V$ into $M$ cells of unit length ($N/M \ll 1$). For sufficiently small values of $\chi$ (i.e., near the ideal gas), we may assume that the length scale of the effective repulsion between particles is negligible compared to the cell size, meaning that the $N-K$ unconstrained particles may be distributed freely among the $M$ cells. Note that the $K$ constrained degrees of freedom are explicitly determined once these particles have been placed.
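Both the ideal-gas expression above and the constrained counting completed below can be checked against exact log-factorials; the following minimal sketch (our own illustration, with hypothetical values of $M$, $N$ and $\chi$) confirms the Stirling estimates:
\begin{verbatim}
import math

# Exact entropy per particle from Omega = M!/((M-N+K)!(N-K)!) via log-gamma,
# versus the Stirling estimates derived in the text (K = 0 is the ideal gas).
def entropy_per_particle(M, N, K=0):
    lg = math.lgamma
    return (lg(M + 1) - lg(M - N + K + 1) - lg(N - K + 1)) / N

M, N = 10**6, 10**3                    # rho = N/M = 1e-3 << 1
S_ideal = entropy_per_particle(M, N)
print(S_ideal, 1.0 - math.log(N / M))  # both ~ 7.9
for chi in (0.05, 0.10, 0.20):
    K = int(chi * N)
    ratio = entropy_per_particle(M, N, K) / S_ideal
    print(chi, ratio, 1.0 - chi)       # ratio tracks the linear scaling 1 - chi
\end{verbatim}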
The number of microstates available to the system is then \begin{equation} \Omega = \frac{M!}{(M-N+K)!(N-K)!}, \end{equation} and the configurational entropy is \begin{equation} S = \ln\Omega = M\ln\left(\frac{M}{M-N+K}\right) + K \ln\left(\frac{N-K}{M-N+K}\right) + N\ln\left(\frac{M-N+K}{N-K}\right). \end{equation} Defining the fraction of constrained degrees of freedom $\chi = K/N$, we may write for the entropy per particle $\overline{S} = S/N$: \begin{equation} \overline{S} = (1-M/N-\chi)\ln(1-N/M+\chi N/M) + (1-\chi)\ln(M/N)-(1-\chi)\ln(1-\chi), \end{equation} where $\chi \ll 1$. For $N/M \ll 1$, this expression simplifies to \begin{equation} \overline{S} = 1-\chi - (1-\chi)\ln(1-\chi) + (1-\chi)\ln(M/N) - (N/M)(1-\chi)^2. \end{equation} The last term in this expression is negligible within the density regime where $\overline{S}_{\text{ideal}} \approx \ln(M/N) = \ln(1/\rho)$. Since $\overline{S}_{\text{ideal}}$ is large and positive compared to the first terms of this result, we have the expected scaling \begin{equation} \frac{\overline{S}}{\overline{S}_{\text{ideal}}} \approx 1-\chi, \end{equation} suggesting a roughly \emph{linear} decrease in the entropy of the system for small values of $\chi$. By increasing the parameter $\alpha$, we also expect to increase the rate at which $\overline{S}\rightarrow 0$ with respect to $\chi$, since higher values of $\alpha$ are associated with larger effective radii around the constrained particles. However, since we require that our configurations are hyperuniform, there are additional implicit constraints on the ``unconstrained'' degrees of freedom. Formally, hyperuniformity requires that the local structure of a point process approach the global structure over sufficiently short length scales, on the order of several nearest-neighbor distances. This behavior is typically associated with a highly regularized distribution of points, such as a Bravais lattice. However, our results for $E_V$ at $\chi = 0.1$ suggest an alternative mechanism by which hyperuniformity can be achieved in a point pattern. Specifically, the fact that $\eta(r) < 1$ is consistent with local clusters of particles that are \emph{globally} regularized by an increased probability of finding sufficiently large voids to separate them. In this case, the appearance of these large voids external to the particle space is apparently essential to enforce hyperuniformity of the point pattern by overcoming the highly inhomogeneous local structure of the clusters. In the context of our analysis above, for small perturbations from the ideal gas, the high configurational degeneracy that remains after constraining only a few degrees of freedom implies that highly disordered configurations are most likely to appear from our numerical constructions. However, with the added implicit constraint of hyperuniformity, the system will sacrifice local structural regularity for clustering of points that are globally separated by sufficiently large voids, resulting in a negatively correlated exclusion correlation function. \begin{figure}[!tp] \centering \includegraphics[width=0.5\textwidth]{Fig11} \caption{Cumulative coordination numbers $Z(R)$ for numerically-constructed hyperuniform point patterns with small-wavenumber exponents $\alpha = 0.5$ and $\alpha = 2.0$. 
The fractions of constrained degrees of freedom are $\chi = 0.1$ and $\chi = 0.35$.}\label{Z01035} \end{figure} Figure \ref{Z01035} highlights this behavior by examining the cumulative coordination numbers $Z(R)$ at $\chi = 0.1$ and $\chi = 0.35$ for our systems. Although at $\chi = 0.1$ the $\alpha = 2$ system exhibits greater clustering at small-$r$ than for $\alpha = 0.5$, at large $r$ this trend reverses, which is consistent with the appearance of larger voids with higher probability and supports our claim that these voids serve to regularize the global structures of the systems. Using the lower bound \eqref{Eplower} on the particle exclusion probability function, we also observe explicitly the effect of the locally clustered structure on the small-$r$ particle exclusion probability. Upon reaching $\chi = 0.35$, we recover the usual behavior associated with hyperuniformity. By constraining a sufficient number of degrees of freedom, the effective interparticle repulsions induced by the collective coordinate constraints control the small-$r$ region of the pair correlation function (and, therefore, the cumulative coordination number), prohibiting local cluster formation. The regularizing factor in this case is therefore the distribution of voids within the particle space, contained in $E_P(r)$. Indeed, the void exclusion probability $E_V(r)$ is highly constrained by this effective repulsion and decays to zero faster than $E_P(r)$, resulting in a positively correlated exclusion correlation function. By increasing the exponent $\alpha$ governing the small-wavenumber region of the structure factor, we observe increased regularity in the local structure, corresponding to a decreased $Z(R)$ and $E_V(r)$ and a particle exclusion probability $E_P(r)$ that decays more rapidly to zero, in perfect accordance with the aforementioned void-space criterion on hyperuniformity put forth by Gabrielli and Torquato \cite{GaTo04}. It is also noteworthy that increased void-space constraints associated with increased $\alpha$ are consistent with the behavior of the structure factor for MRJ hard-sphere packings, about which we have more to say in Section VI. \section{Concluding remarks and discussion} We have provided the first known constructions of disordered hyperuniform many-particle ground states possessing anomalous local density fluctuations. Such systems are defined by a number variance $\sigma^2_N(R)$ that asymptotically scales faster than the surface area of an observation window but slower than the window volume. By controlling the collective density variables associated with the underlying point pattern, we have also been able to probe the relationship between interparticle correlations and constraints on the local coordination structure. Specifically, we have provided detailed statistics to measure the distribution of the \emph{void space} external to the particles, including measurements of the void and particle exclusion probabilities. Under sufficiently weak constraints on the system, our numerically constructed many-particle distributions exhibit substantial clustering, resulting in a highly inhomogeneous local structure. However, on the global scale of the system as measured by asymptotic local density fluctuations, these local clusters are separated by comparatively large interparticle voids, thereby regularizing the microstructure and preserving the constraint of hyperuniformity that we impose. 
Indeed, this effect becomes more pronounced upon passing from the ``anomalous'' regime of hyperuniformity to the more usual case, where $\sigma^2_N(R) \sim R^{d-1}$ asymptotically (i.e., for all periodic point patterns, quasicrystals with Bragg peaks, and disordered systems with pair correlations decaying exponentially fast) \cite{ToSt03, ZaTo09}. Upon increasing the fraction of constrained degrees of freedom within the system, we are able to preclude this clustering effect by reinforcing the effective interparticle repulsion imposed by our targeted structure factor $S(k)$. Furthermore, we have shown that this effective repulsion becomes increasingly more pronounced as the exponent $\alpha$ governing the small-wavenumber scaling of $S(k)$ is increased. It follows from these observations that one can formally define an effective repulsive radius around each point within a hyperuniform point pattern. However, so long as this radius is not large compared to the expected interparticle spacing $\rho^{-1/d}$, i.e., in the absence of microstructural constraints, clustering effects can still dominate the interparticle correlations and the local coordination structure. We have shown that this effect is entropically favorable since slight deviations from the ideal gas are still associated with an exponentially large configurational degeneracy. However, this degeneracy is expected to decrease rapidly upon constraining a sufficient number of degrees of freedom, or, equivalently, increasing the effective repulsive radius. We have shown that this loss of configurational degeneracy is associated with a highly-constrained void space distribution, which can be considered as a signature of predominantly ``repulsive'' hyperuniform point patterns. Our results have particular implications for understanding the appearance and nature of hyperuniformity in MRJ packings of hard spheres. It is interesting to note that equilibrium distributions of hard spheres are known not to be hyperuniform \cite{ToSt03} except at the close-packed density, at which point the system freezes into a crystal with long-range order. MRJ hard-sphere packings are therefore unique in that they are \emph{nonequilibrium} systems that are uniformly mechanically rigid, and it is this rigidity that has been shown to be essential for the onset of hyperuniformity (with logarithmic asymptotic local density fluctuations) \cite{ZaJiTo10}. Importantly, rigidity places severe geometric constraints on the local arrangements of particles \cite{DoToSt05} and has been shown to regularize the void space distribution on the global scale of the microstructure \cite{ZaJiTo10}. We have demonstrated in this work that by decreasing the exponent $\alpha$ of the structure factor within the small-wavenumber region, these void-space constraints are relaxed in accordance with the decreased effective radius surrounding the particles. Since MRJ packings are \emph{maximally} disordered among all strictly jammed packings, it follows that such systems must already possess the maximal number of degrees of freedom consistent with the geometric constraints of strict jamming. Therefore, \emph{any} broadening of the distribution of void sizes is inconsistent with these same constraints, highlighting why exponents $\alpha < 1$ in the small-wavenumber scaling of $S(k)$ have never been observed for such systems. Our work has also raised a number of interesting questions related to the physics of collective coordinate constraints. 
The mathematical properties associated with collective coordinates are surprisingly subtle and have only partially been explored in the literature \cite{FaPeStSt91, UcStTo04}. Constraining a collective density variable, including, for example, either a complete suppression to zero or fixing its magnitude, results in a highly nonlinear equation relating the components of the particle positions $\{\mathbf{r}_j\}$. For higher-dimensional systems, it has been previously observed \cite{BaStTo09} that these nonlinear equations have a tendency to ``couple'' in such a way that one must go beyond $\chi = 0.5$ to crystallize the system, meaning that one cannot simply count constraints and degrees of freedom. It is an open problem to determine analytically the relationship between $\chi$ and the ``true'' number of constrained degrees of freedom; recent work has examined the fraction of normal modes with vanishing frequency as a more appropriate indicator of the latter \cite{BaStTo09}. We have provided some analysis here in one dimension to suggest how the configuration space is constrained with increasing $\chi$ by determining the entropy for small deviations from the ideal gas. Certainly for higher values of $\chi$ our simple linear scaling will break down; characterizing the deviations from this linear behavior, especially in higher dimensions, is an attractive problem warranting further consideration.
\section{Introduction} Atomic parity violation (APV) has implications for exploring physics beyond the Standard Model (SM) of particle physics \cite{Commins,Erler,Marciano1992}. The neutral current weak interactions due to exchange of $Z_0$ bosons between the electrons and the nucleus in atomic systems lead to APV \cite{Bouchiat}. APV studies can offer a fundamental quantity known as the nuclear weak charge $Q_W$, from which model-independent values of the electron-up-quark and electron-down-quark coupling coefficients can be inferred \cite{Bouchiat, Pierre}. Hence, any deviations of these values from the SM can be used to probe new physics or to complement some of the findings of the Large Hadron Collider facility when it is upgraded to the higher TeV energy scale. The nuclear spin-independent (NSI) component of APV has been measured to an accuracy of 0.35\% in the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ transition in $^{133}$Cs \cite{Wood1997}. Advanced experimental techniques have been proposed recently to improve the accuracy of the above measurement, as well as to carry out a similar measurement in the $6s ~ ^2S_{1/2} - 5d ~ ^2D_{3/2}$ transition of $^{133}$Cs \cite{Choi2016, Kastberg2019}. In order to extract the $Q_W$ value from these measurements, it is imperative to calculate the parity violating electric dipole ($E1_{PV}$) amplitudes of the corresponding transitions very precisely (arguably to better than 0.5\%). There is a long history of calculations of the $E1_{PV}$ amplitude for the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ transition in $^{133}$Cs employing different state-of-the-art relativistic atomic many-body theories at different levels of approximation. Among the early calculations, Dzuba et al employed the time-dependent Hartree-Fock (TDHF) method \cite{Dzuba1984,Dzuba1985}, while M{\aa}rtensson applied the combined coupled-perturbed Dirac-Hartree-Fock (CPDF) method and random-phase approximation (RPA), together referred to as the CPDF-RPA method \cite{maartensson1985}, to investigate the roles of core-polarization effects in $E1_{PV}$. Both these methods are technically equivalent, but M{\aa}rtensson also provided results at intermediate levels of approximation (the Dirac-Hartree-Fock (DHF), CPDF and RPA methods), as well as listing contributions from double-core-polarization (DCP) effects explicitly. Later, Blundell et al employed a linearized version of the relativistic coupled-cluster method in the singles and doubles excitation approximation (SD method) to estimate the $E1_{PV}$ amplitude of the above transition \cite{Blundell1992}. However, they adopted a sum-over-states approach in which matrix elements of the electric dipole (E1) operator and the APV interaction Hamiltonian were evaluated for the transitions involving the $np ~ ^2P_{1/2}$ intermediate states (called the ``Main'' contribution) with the principal quantum number $n=6-9$. This method also utilized the calculated E1 matrix elements and magnetic dipole hyperfine structure constants to estimate the uncertainty of $E1_{PV}$. Uncertainties from the energies were removed by considering the experimental energies, while contributions from core orbitals (referred to as the ``Core'' contribution henceforth) and higher $np ~ ^2P_{1/2}$ intermediate states (hereafter called the ``Tail'' contribution) were estimated using lower-order methods. 
Following this work, Dzuba et al improved their TDHF calculation by incorporating correlation contributions through Br\"uckner orbitals (BO) and referred to the approach as the RPA$+$BO method \cite{Dzuba2001}. Higher-order contributions from the Breit, lower-order QED and neutron skin effects were added subsequently through different works to claim a more precise $E1_{PV}$ value for extracting beyond-the-SM (BSM) physics \cite{Dzuba1989, Milstein2002, Brown2009, Derevianko2001, Kozlov2001, Shabaev2005, Dzuba2002}. It is worth mentioning here that all these higher-order effects were estimated through different types of many-body methods and without accounting for correlations among them. Soon after these theoretical results, relativistic coupled-cluster (RCC) theory with the singles and doubles approximation (RCCSD method) was employed to treat both the electromagnetic and weak interactions on an equal footing \cite{Sahoo2006, Sahoothesis, Sahoo2010}. Moreover, it also treated correlations among the Main, Core and Tail contributions to $E1_{PV}$, but its accuracy was a concern due to its {\it ab initio} nature and the relatively small basis size permitted by the computational resources available at that time. A decade ago, Porsev et al made further improvements to the sum-over-states result of Blundell et al by adding contributions from the non-linear terms of the RCCSD method to their SD method as well as valence triple excitations (CCSDvT method) \cite{Porsev2010}. Their claimed accuracy for the $E1_{PV}$ amplitude of the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ transition in $^{133}$Cs was about 0.27\%. However, the Core and Tail contributions were still estimated using a blend of many-body methods without explicitly stating which physical effects were taken into account in their evaluations. We refer to these two contributions together as the X-factor in this work. In an attempt to improve the calculated $E1_{PV}$ value further, Dzuba et al estimated the X-factor contributions using their TDHF approach but omitting the DCP contributions \cite{Dzuba2012}, along similar lines to their earlier works \cite{Dzuba1984,Dzuba1985}. This calculation showed the opposite sign of the Core contribution to that reported by Porsev et al. In 2013, Roberts reported the DCP contribution separately \cite{Roberts2013}, and the result was slightly different from the value of M{\aa}rtensson \cite{maartensson1985}. The opposite sign of the Core contribution of Dzuba et al relative to Porsev et al was criticized in two papers \cite{Safronova2018, Wieman2019}, which prompted further investigation of the different correlation contributions to $E1_{PV}$ from first-principles approaches. In 2021, Sahoo et al improved their calculation of the above $E1_{PV}$ amplitude by implementing the singles, doubles and triples approximation for both the unperturbed and perturbed wave functions (RCCSDT method) and using a much bigger set of basis functions \cite{Sahoo2021}. They also used the same $V^{N-1}$ potential as in the previous cases and presented the Core and Valence (Main and Tail together) contributions explicitly. As per the convention adopted in this approach, the Core contribution agreed with the earlier RCCSD result \cite{Sahoo2006, Sahoothesis, Sahoo2010}, and was close to the reported values of Blundell et al \cite{Blundell1992} and Porsev et al \cite{Porsev2010}. 
In a recent Comment, Roberts and Ginges argued in favour of the opposite sign of the Core contribution relative to other findings by giving intermediate results of their RPA$+$BO method \cite{Roberts2022}. In another work, Tan et al estimated the combined Core and Tail contributions to the $E1_{PV}$ amplitude of the above transition using mixed-parity orbitals through RPA \cite{Tan2022} and supported the value reported in Ref. \cite{Porsev2010}. It is, therefore, abundantly clear that the sign problem of the Core contribution to the above $E1_{PV}$ amplitude needs to be resolved. More importantly, the basis for dividing the net $E1_{PV}$ result into Core, Main, Tail, DCP, etc. contributions in a given approach should be properly defined, and the physical effects missing in one method compared to others need to be well understood when a mixture of methods is used to estimate these contributions piece-wise. Any misinterpretation or misrepresentation of these contributions can have repercussions when they are used to infer BSM physics. The present work is devoted to addressing the aforementioned sign issue with the Core contribution among various works, bringing to notice the shortcomings of the sum-over-states approach, and explaining the reason for the coincidental agreement between the Core contributions of Porsev et al \cite{Porsev2010} and Sahoo et al \cite{Sahoo2021}. We start with various procedures that can be adopted through a general many-body method to evaluate $E1_{PV}$ amplitudes in atomic systems and demonstrate how the definition of the Core contribution can vary from one procedure to another. With the help of lower-order many-body methods, we identify the contributions missing in a typical sum-over-states estimate of $E1_{PV}$. We then analyze results from different methods to learn how and to what extent these missing effects are incorporated in the previous calculations. We also make a similar analysis for the $E1_{PV}$ amplitude of the $6s ~ ^2S_{1/2} - 5d ~ ^2D_{3/2}$ transition in $^{133}$Cs and compare it with the result for the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ transition. There are two important reasons for quoting the result for the $S-D_{3/2}$ transition of Cs here. First, our analysis will be helpful for estimating its $E1_{PV}$ amplitude more precisely, as required for the ongoing experiments \cite{Choi2016, Kastberg2019}. Second, we intend to address a comment \cite{Roberts2022,Sahoo2022} on why the signs of the Core contributions to the $S-D_{3/2}$ transition from different methods agree, in contrast to the $S-S$ transition. 
\section{Theory} The short-range effective Lagrangian corresponding to the vector--axial-vector neutral weak current interaction of an electron with up- and down-quarks in an atomic system can be given by \cite{Cahn, Marciano1,Commins1,Bouchiat} \begin{eqnarray} {\cal L}_{\rm eq} &=& \frac{G_F}{\sqrt{2}} \bar{\psi}_e \gamma^{\mu} \gamma^5 \psi_e \sum_{u,d} \left [ C_{1u} \bar{\psi}_{u} \gamma_{\mu} \psi_{u} + C_{1d} \bar{\psi}_{d} \gamma_{\mu} \psi_{d} \right ] \nonumber \\ &=& \frac{G_F}{\sqrt{2}} \bar{\psi}_e \gamma^{\mu} \gamma^5 \psi_e \sum_n C_{1n} \bar{\psi}_{n} \gamma_{\mu} \psi_{n} , \end{eqnarray} where $G_F=1.16632 \times 10^{-5}$ GeV$^{-2}$ is the Fermi constant, the sums over $u$, $d$ and $n$ run over up-quarks, down-quarks and nucleons, respectively, and $C_{1i}$ with $i=u$, $d$ and $n$ represent the coupling coefficients of the interaction of an electron with up-quarks, down-quarks and nucleons (protons (Pn) and neutrons (Nn)), respectively. Adding these coherently and taking the non-relativistic approximation for the nucleons, the temporal component gives the nuclear-spin-independent APV interaction Hamiltonian as \begin{eqnarray} H_{\rm PV}^{\rm NSI} &=& -\frac{G_F}{2\sqrt{2}} Q_W \gamma^5 \rho(r) , \end{eqnarray} where $\rho(r)$ is the averaged nuclear density and $Q_W=2[ZC_{1Pn}+(A-Z)C_{1Nn}]$ is known as the nuclear weak charge, with $Z$ and $A$ representing the atomic and mass numbers, respectively. It is obvious that $Q_W$ is a model-dependent quantity. Thus, the difference of its actual value from the SM can provide signatures of physics supporting other models. In the SM, $C_{1u}=\frac{1}{2}\left [1- \frac{8}{3}\sin^2 \theta_W^{\rm SM} \right ]$ and $C_{1d}=-\frac{1}{2}\left [1- \frac{4}{3}\sin^2 \theta_W^{\rm SM} \right ]$ \cite{Cahn,Marciano1,Commins1,Bouchiat}. It follows that $C_{1Nn}=2C_{1d}+C_{1u} = -1/2$ and $C_{1Pn}=2C_{1u}+C_{1d}=(1-4 \sin^2 \theta_W^{\rm SM})/2 \approx 0.04$, which are accurate up to 1\%. Hence, it is necessary to evaluate the model-independent $Q_W$ in any atomic system within this accuracy in order to infer physics beyond the SM. \begin{figure}[t!] \centering \includegraphics[height=4.0cm, width=8.0cm]{DF.eps} \caption{Goldstone diagrams representing Core (a and b) and Valence (c and d) contributions to $E1_{PV}$ in the $V^{N-1}$ DHF potential of a one-valence atomic system. Here, double arrows represent the initial ($i$) and final ($f$) valence orbitals, single arrows going down ($a$) denote occupied orbitals and single arrows going up ($p$) denote virtual orbitals. The operators $D$ and $H_W$ are shown as a curly line and a line with a bullet point, respectively.} \label{DF} \end{figure} \section{Evaluation procedures of $E1_{PV}$} \label{sec3} In the presence of APV, the net atomic Hamiltonian is given by \begin{eqnarray} H_{at} &=& H + H_{\rm PV}^{\rm NSI} \nonumber \\ &=& H + G_F H_W , \end{eqnarray} where $H$ contains contributions from electromagnetic interactions and $H_W$ is defined in order to treat $G_F$ as a small parameter and to include contributions from $H_{\rm PV}^{\rm NSI}$ perturbatively through a many-body method. Using the wave functions of $H_{at}$, we determine $E1_{PV}$ of a transition between states $|\Psi_i \rangle $ and $|\Psi_f \rangle$ as \begin{eqnarray}\label{defn} E1_{PV} = \frac{\langle \Psi_f | D | \Psi_i \rangle} {\sqrt{\langle \Psi_f | \Psi_f \rangle \langle \Psi_i | \Psi_i \rangle } } , \end{eqnarray} where $D$ is the E1 operator. 
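As a concrete orientation for the numbers involved, the tree-level weak charge implied by the couplings above can be evaluated in a few lines. This is a minimal sketch only: the value of $\sin^2\theta_W^{\rm SM}$ is an assumed low-energy input and radiative corrections are neglected, so the output is not the precision SM prediction used in APV analyses.
\begin{verbatim}
# Tree-level nuclear weak charge of 133Cs from the SM couplings quoted above.
sin2thw = 0.2312                  # assumed value of sin^2(theta_W)
C1u = 0.5 * (1.0 - (8.0 / 3.0) * sin2thw)
C1d = -0.5 * (1.0 - (4.0 / 3.0) * sin2thw)
C1Pn = 2.0 * C1u + C1d            # proton coupling, (1 - 4 sin^2(theta_W))/2
C1Nn = 2.0 * C1d + C1u            # neutron coupling, exactly -1/2
Z, A = 55, 133                    # 133Cs
QW = 2.0 * (Z * C1Pn + (A - Z) * C1Nn)
print(C1Pn, C1Nn, QW)             # ~0.038, -0.5, QW ~ -73.9
\end{verbatim}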
In the earlier calculations of the $E1_{PV}$ amplitude in $^{133}$Cs, atomic wave functions were determined by using the $V^{N-1}$ potential, where $N$ is the number of electrons of the atom. The choice of this potential is convenient for producing both the ground and excited states of the Cs atom using the Fock-space formalism. We adopt the same formalism here, so that the description of different correlation effects and the comparison of results are mutually consistent across all the considered works, including in the definition of the Core contribution. Following a typical approach in many-body problems, we define a suitable mean-field Hamiltonian $H_0$ to replace the exact Hamiltonian $H$ to obtain a set of approximated solutions \begin{eqnarray} H_0 |\Phi_k \rangle = {\cal E}_k |\Phi_k \rangle, \end{eqnarray} where the subscript $k$ is used to identify different states. Since $[5p^6]$ is the common closed-shell configuration of the states of interest in the present work, we obtain the solution for this closed core first. We consider the DHF method to obtain its mean-field wave function $| \Phi_0 \rangle$. Then, the $|\Phi_v \rangle$ wave functions are obtained as \begin{eqnarray} | \Phi_v \rangle = a_v^{\dagger} | \Phi_0 \rangle . \end{eqnarray} Starting with $|\Phi_v \rangle$, we can express the exact wave function of the state using Bloch's prescription as \cite{Lindgren} \begin{eqnarray} | \Psi_v \rangle = \Omega^v | \Phi_v \rangle , \label{eqp} \end{eqnarray} where $\Omega^v$ is known as the wave operator. In the $V^{N-1}$ potential approximation, we first treat electron correlation effects among electrons from the core orbitals of $| \Phi_0 \rangle$. Then, correlation effects involving the electron from the valence orbital are included. Accordingly, $\Omega^v$ is divided into two parts \begin{eqnarray} \Omega^v = \Omega_0 + \Omega_v , \label{eqw} \end{eqnarray} where $\Omega_0$ represents the wave operator accounting for correlations of electrons only from the core orbitals, while $\Omega_v$ takes care of correlations of electrons from all orbitals including the valence orbital. In a given many-body method, we can solve for the amplitudes of the above wave operators using the equations \begin{eqnarray} [\Omega_0, H_0] P_0 &=& U_{res} \Omega_0 P_0 \label{blw0} \end{eqnarray} and \begin{eqnarray} [\Omega_v, H_0] P_v &=& U_{res} (\Omega_0 + \Omega_v) P_v \nonumber \\ && - \Omega_v P_v U_{res} (\Omega_0 + \Omega_v) P_v, \label{blwv} \end{eqnarray} where $P_n= |\Phi_n \rangle \langle \Phi_n |$ and $Q_n=1-P_n$ with $n\equiv 0,v$, and $U_{res}=H-H_0$ is known as the residual interaction that contributes to the amplitudes of $\Omega_0$ and $\Omega_v$. In fact, the energy of the $|\Psi_v \rangle$ state can be evaluated as the expectation value of the effective Hamiltonian \begin{eqnarray} H_{eff} = P_v U_{res} (\Omega_0 + \Omega_v) P_v \end{eqnarray} with respect to the reference state $|\Phi_v \rangle$; i.e. the energy of the state is given by \begin{eqnarray} E_v=\langle\Phi_v|H_{eff}|\Phi_v\rangle . \end{eqnarray} For the choice of the $V^N$ potential in the generation of single-particle orbitals, the amplitudes of the $\Omega^v$ operator can be obtained using only the following equation \begin{eqnarray} [\Omega^v, H_0] P_v &=& U_{res} \Omega^v P_v . \label{blwv1} \end{eqnarray} This means that the energy $E_v$ of the state does not appear in the wave-function-determining equation in the case of the $V^N$ potential. 
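The role of the extra energy-dependent (folded) term is transparent in a schematic matrix analogue of Eq. (\ref{blwv}). The sketch below (our own toy model in Python, not the production atomic-structure code; all parameters are hypothetical) iterates the Bloch equation for a one-dimensional model space and recovers the exact eigenvalue:
\begin{verbatim}
import numpy as np

# Toy Bloch equation: model space P = span{|0>}, Omega = P + chi, chi = Q chi P.
rng = np.random.default_rng(1)
n = 6
eps = np.linspace(0.0, 5.0, n)
H0 = np.diag(eps)                      # mean-field Hamiltonian
V = 0.1 * rng.standard_normal((n, n))
V = 0.5 * (V + V.T)                    # residual interaction U_res
H = H0 + V

P = np.zeros((n, n)); P[0, 0] = 1.0
Q = np.eye(n) - P

chi = np.zeros((n, n))
for _ in range(500):
    Omega = P + chi
    # [chi, H0] P = Q U_res Omega P - chi (P U_res Omega P): folded term last
    rhs = Q @ V @ Omega @ P - chi @ (P @ V @ Omega @ P)
    new = np.zeros_like(chi)
    for a in range(1, n):
        new[a, 0] = rhs[a, 0] / (eps[0] - eps[a])
    if np.max(np.abs(new - chi)) < 1e-12:
        chi = new
        break
    chi = new

E_bloch = eps[0] + (V @ (P + chi))[0, 0]  # E = <Phi| H_eff |Phi>
w, U = np.linalg.eigh(H)
E_exact = w[np.argmax(np.abs(U[0, :]))]   # eigenvalue dominated by |0>
print(E_bloch, E_exact)                   # agree to numerical precision
\end{verbatim}
Dropping the folded term from this iteration, as in the $V^N$-type equation (\ref{blwv1}), would converge to a different energy in the toy model, which illustrates the sense in which a $V^{N-1}$ calculation without that term is improper.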
In this way, the core-valence interaction effects left out in the construction of the DHF potential in the $V^{N-1}$ case are partly compensated in the wave-operator amplitude determining equation through the extra energy-dependent term. If any method utilizes the $V^{N-1}$ potential without taking into account the above-mentioned extra term, it can be termed an improper theory. Both $| \Psi_i \rangle$ and $| \Psi_f \rangle$ can be determined by solving the equation-of-motion for $H_{at}$ in the above formalism. However, parity is not a good quantum number for $H_{at}$. As a consequence, one symmetry constraint on the atomic states is relaxed, for which the computations of the amplitudes of the $\Omega_0$ and $\Omega_v$ operators would increase manyfold. Compared to $H$, the strength of $H_{\rm PV}^{\rm NSI}$ in $H_{at}$ is smaller by about twelve orders of magnitude. Thus, it is important that contributions from $H$ are accounted for as completely as possible in the determination of the above wave functions with the available computational resources, while only the first-order effect due to $H_{\rm PV}^{\rm NSI}$ needs to be accounted for. In any case, inclusion of higher-order effects from $H_{\rm PV}^{\rm NSI}$ would serve no purpose in our study, as they are much smaller than the accuracy of interest. Thus, we express the atomic wave function $|\Psi_v \rangle$ of a general state with valence orbital $v$ as \begin{eqnarray} |\Psi_v \rangle = |\Psi_v^{(0)} \rangle + G_F |\Psi_v^{(1)} \rangle , \label{eq2} \end{eqnarray} where $|\Psi_v^{(0)} \rangle$ is the zeroth-order wave function containing contributions only from $H$ while $|\Psi_v^{(1)} \rangle$ includes the first-order contribution from $H_W$ with respect to $|\Psi_v^{(0)} \rangle$. Substituting Eq. (\ref{eq2}) in Eq. (\ref{defn}) and keeping terms up to first order in $G_F$, we get \begin{eqnarray} E1_{PV} & \simeq & G_F \left [ \frac{\langle \Psi_f^{(0)} | D | \Psi_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(1)} | D | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } \right ] , \label{eq3} \end{eqnarray} where the normalization factor is ${\cal N}_{if}= \sqrt{N_f N_i}$ with $N_v=\langle \Psi_v^{(0)} | \Psi_v^{(0)} \rangle$. In presenting results in this paper, $G_F$ is absorbed into the $E1_{PV}$ value, so it does not appear explicitly in our calculations. It can be noted that the contribution from the first term is referred to as the initial perturbed state contribution whereas the contribution from the second term is referred to as the final perturbed state contribution in the discussion of results later. In order to treat both the electromagnetic and weak interaction Hamiltonians on an equal footing in a many-body method and to consider correlations among them, the solutions of the unperturbed and first-order perturbed wave functions should satisfy \begin{eqnarray} H |\Psi_v^{(0)}\rangle = E_v^{(0)} |\Psi_v^{(0)}\rangle \label{eq0} \end{eqnarray} and \begin{equation} (H-E_v^{(0)})|\Psi_v^{(1)}\rangle=(E_v^{(1)} -H_W)|\Psi_v^{(0)}\rangle , \label{eq1} \end{equation} respectively, where $E_v^{(1)}=0$ for odd-parity interaction operators. Using the wave operator formalism, we can express \begin{eqnarray} | \Psi_v^{(0)} \rangle &=&\Omega^{v(0)}| \Phi_v \rangle \nonumber\\ &=& (\Omega_0^{(0)} + \Omega_v^{(0)} ) | \Phi_v \rangle \end{eqnarray} and \begin{eqnarray} | \Psi_v^{(1)} \rangle &=&\Omega^{v(1)}| \Phi_v \rangle \nonumber\\ &=& (\Omega_0^{(1)} + \Omega_v^{(1)} ) | \Phi_v \rangle . \label{eq03} \end{eqnarray} Substituting the wave operators, Eq. 
(\ref{eq3}) can be expressed as \begin{eqnarray} E1_{PV} &=& \frac{\langle \Phi_f | (\Omega_0^{(0)} + \Omega_f^{(0)} )^{\dagger} D (\Omega_0^{(1)} + \Omega_i^{(1)} ) | \Phi_i \rangle} {{\cal N}_{if} } \nonumber \\ &+& \frac{\langle \Phi_f | (\Omega_0^{(1)} + \Omega_f^{(1)} )^{\dagger} D (\Omega_0^{(0)} + \Omega_i^{(0)} ) | \Phi_i \rangle} {{\cal N}_{if} } \nonumber \\ &=& \frac{\langle \Phi_0 | a_f (\Omega_0^{(0)} + \Omega_f^{(0)} )^{\dagger} D (\Omega_0^{(1)} + \Omega_i^{(1)} ) a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &+& \frac{\langle \Phi_0 | a_f (\Omega_0^{(1)} + \Omega_f^{(1)} )^{\dagger} D (\Omega_0^{(0)} + \Omega_i^{(0)} ) a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if} } \nonumber \\ &=& \frac{\langle \Phi_0 | a_f [\Omega_0^{(0)\dagger} D \Omega_0^{(1)} + \Omega_0^{(1)\dagger} D \Omega_0^{(0)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &+& \frac{\langle \Phi_0 | a_f [\Omega_f^{(0)\dagger} D \Omega_0^{(1)} + \Omega_f^{(1)\dagger} D \Omega_0^{(0)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &+& \frac{\langle \Phi_0 | a_f [\Omega_0^{(0)\dagger} D \Omega_i^{(1)} + \Omega_0^{(1)\dagger} D \Omega_i^{(0)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &+& \frac{\langle \Phi_0 | a_f [\Omega_f^{(0)\dagger} D \Omega_i^{(1)} + \Omega_f^{(1)\dagger} D \Omega_i^{(0)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} . \ \ \ \ \label{eq5} \end{eqnarray} In the above expression, the contribution from the first term is referred to as the ``Core'' correlation contribution while the rest is termed the ``Valence'' correlation contribution in this paper. Analogously, contributions to $N_v=\langle \Phi_0 | a_v (\Omega_0^{(0)} + \Omega_v^{(0)} )^{\dagger} (\Omega_0^{(0)} + \Omega_v^{(0)} ) a_v^{\dagger} | \Phi_0 \rangle $ can also be divided into two parts. \begin{figure}[t] \centering \includegraphics[width=7.0cm,height=1.4cm]{CPHF-talk-1.eps} \includegraphics[width=7.0cm,height=1.7cm]{CPHF-talk.eps} \includegraphics[width=7.0cm,height=1.4cm]{RPA-talk-1.eps} \includegraphics[width=7.0cm,height=1.7cm]{RPA-talk.eps} \includegraphics[width=7.0cm,height=3.2cm]{Non-RPA-1.eps} \includegraphics[width=7.0cm,height=4.0cm]{MBPT-v.eps} \includegraphics[width=7.0cm,height=4.0cm]{MBPT-v-1.eps} \caption{A few important electron correlation contributing diagrams to $E1_{PV}$ in the RMBPT(3) method. The dotted lines represent the atomic Hamiltonian. Diagrams (i)-(viii) give Core contributions in the RMBPT(3)$^w$ method, but they correspond to Valence contributions in the RMBPT(3)$^d$ method. Diagrams (ix)-(xvi) follow the other way around. } \label{MBPT} \end{figure} \subsection{Sum-over-states approach} In the sum-over-states approach, the first-order wave function of a general state can be expressed as \begin{eqnarray} |\Psi_v^{(1)} \rangle &=& \sum_{I \ne v} | \Psi_I^{(0)} \rangle \frac{ \langle \Psi_I^{(0)} | H_W | \Psi_v^{(0)} \rangle} {{\cal N}_{v} (E_v^{(0)} - E_I^{(0)}) } , \end{eqnarray} where $| \Psi_I^{(0)} \rangle$ are the zeroth-order intermediate states and $E_n^{(0)}$ is the unperturbed energy of the $n^{th}$ level. Thus, Eq. (\ref{eq3}) can be written as \begin{eqnarray} E1_{PV} &=& \sum_{I \ne i} \frac{\langle \Psi_f^{(0)} | D | \Psi_I^{(0)} \rangle \langle \Psi_I^{(0)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} (E_i^{(0)} - E_I^{(0)}) } \nonumber \\ && + \sum_{I \ne f} \frac{\langle \Psi_f^{(0)} | H_W | \Psi_I^{(0)} \rangle \langle \Psi_I^{(0)} | D| \Psi_i^{(0)} \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}) } . 
\label{eq4} \end{eqnarray} In the above expression for $E1_{PV}$, correlations between $H$ and $H_W$ that appear through Eq. (\ref{eq1}) are omitted. Secondly, the definitions of the Core, Main and Tail contributions to $E1_{PV}$ can conflict with the definitions used in various first-principles calculations. It is explicitly demonstrated later how the formula given by Eq. (\ref{eq5}) can be altered to redefine the Core and Valence contributions. To understand the definitions of the Core, Main and Tail contributions used in Ref. \cite{Porsev2010}, we follow the work of Blundell et al \cite{Blundell1992}, where these terms were used for the first time in the context of estimating $E1_{PV}$. The division of the total $E1_{PV}$ value in this calculation into ``Main'', ``Core'' and ``Tail'' contributions was made based on the mere assumption that $|\Psi_i^{(0)}\rangle$, $|\Psi_f^{(0)}\rangle$ and $|\Psi_I^{(0)}\rangle$ are represented by only single Slater determinants, as in the DHF method. Thus, the intermediate states $|\Psi_I^{(0)}\rangle$ are considered to have only the $np ~ ^2P_{1/2}$ configurations. Under this assumption, the Core ($C$), Main ($V$) and Tail ($T$) contributions to the $E1_{PV}$ amplitude of the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ transition in $^{133}$Cs were estimated as \begin{eqnarray} E1_{PV}(C) &=& \sum_{n \le 5 } \frac{\langle 7S_{1/2} | D | nP_{1/2} \rangle \langle nP_{1/2} | H_W |6S_{1/2} \rangle} {(E_{6S_{1/2}} - E_{nP_{1/2}} ) } \nonumber \\ & + & \sum_{n \le 5} \frac{\langle 7S_{1/2} | H_W | nP_{1/2} \rangle \langle nP_{1/2} | D| 6S_{1/2} \rangle} {(E_{7S_{1/2}} - E_{nP_{1/2}} )} \ \ \ \label{eq6} \\ E1_{PV}(V) &=& \sum_{n=6-9} \frac{\langle 7S_{1/2} | D | nP_{1/2} \rangle \langle nP_{1/2} | H_W |6S_{1/2} \rangle} {(E_{6S_{1/2}} - E_{nP_{1/2}} ) } \nonumber \\ & + & \sum_{n=6-9} \frac{\langle 7S_{1/2} | H_W | nP_{1/2} \rangle \langle nP_{1/2} | D| 6S_{1/2} \rangle} {(E_{7S_{1/2}} - E_{nP_{1/2}} )} \ \ \ \label{eq7} \end{eqnarray} and \begin{eqnarray} E1_{PV}(T) &=& \sum_{n \ge 10 } \frac{\langle 7S_{1/2} | D | nP_{1/2} \rangle \langle nP_{1/2} | H_W |6S_{1/2} \rangle} {(E_{6S_{1/2}} - E_{nP_{1/2}} ) } \nonumber \\ & + & \sum_{n \ge 10} \frac{\langle 7S_{1/2} | H_W | nP_{1/2} \rangle \langle nP_{1/2} | D| 6S_{1/2} \rangle} {(E_{7S_{1/2}} - E_{nP_{1/2}} )}, \ \ \ \label{eq8} \end{eqnarray} respectively. However, wave functions of multi-electron atomic systems are determined through a many-body method by expressing them as linear combinations of many Slater determinants, which can differ from each other in one or more rows or columns. As a result, contributions from cross-terms involving other Slater determinants, e.g. excited configurations $5p^5 6s 7s$ of the intermediate states with respect to both the $6s ~ ^2S_{1/2}$ and $7s ~ ^2S_{1/2}$ states, cannot appear through the above breakup. One such contribution is the DCP effect, which arises through the CPDF-RPA (or TDHF) method as described by M{\aa}rtensson \cite{maartensson1985} and Roberts \cite{Roberts2013}. There are other contributions that could come through effects that are neither part of the BO contributions nor of the CPDF-RPA method. However, those effects appear through the first-principles approach of the RCC method employed by Sahoo et al \cite{Sahoo2022}, as shown in a later part of this paper. These contributions are not small and demand an appropriate many-body method to account for them on a par with the $np ~ ^2P_{1/2}$ intermediate states. 
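The completeness issue can be made concrete in a toy matrix model (our own sketch; $H$, $H_W$ and $D$ below are random real symmetric matrices rather than atomic operators, and the Core/Main/Tail index ranges are hypothetical). Solving the first-order equation (\ref{eq1}) directly reproduces the full sum over intermediate states, while a Main-type truncation necessarily misses the remaining pieces:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n = 12
H  = rng.standard_normal((n, n)); H  = 0.5 * (H + H.T)
D  = rng.standard_normal((n, n)); D  = 0.5 * (D + D.T)
HW = rng.standard_normal((n, n)); HW = 0.5 * (HW + HW.T)

E, U = np.linalg.eigh(H)
i, f = 3, 4                  # analogues of the 6s and 7s states
ket = lambda v: U[:, v]      # normalized eigenstates, so N_if = 1

def sum_over_states(subset):
    # Sum-over-states contribution restricted to intermediate states in subset
    total = 0.0
    for I in subset:
        if I != i:
            total += (ket(f) @ D @ ket(I)) * (ket(I) @ HW @ ket(i)) / (E[i] - E[I])
        if I != f:
            total += (ket(f) @ HW @ ket(I)) * (ket(I) @ D @ ket(i)) / (E[f] - E[I])
    return total

def first_order(v):
    # Solve (H - E_v)|psi^(1)> = (E^(1) - H_W)|psi^(0)> in the complement of |v>
    rhs = (ket(v) @ HW @ ket(v)) * ket(v) - HW @ ket(v)
    return np.linalg.pinv(H - E[v] * np.eye(n)) @ rhs

e1pv_fp  = ket(f) @ D @ first_order(i) + first_order(f) @ D @ ket(i)
e1pv_sos = sum_over_states(range(n))
core, main, tail = (sum_over_states(r) for r in
                    (range(0, 3), range(3, 8), range(8, n)))

print(e1pv_fp - e1pv_sos)             # ~0: the two routes agree exactly
print(core + main + tail - e1pv_sos)  # ~0: the partition is exhaustive
print(main - e1pv_sos)                # nonzero: Main alone misses Core + Tail
\end{verbatim}
In a real atomic calculation the analogous statement is that the Main part over $np ~ ^2P_{1/2}$ states cannot by itself exhaust $E1_{PV}$; the remainder must be supplied by a method that treats all intermediate configurations.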
The above argument can be understood better with the following explanations. \begin{figure}[t] \centering \includegraphics[scale=0.45]{DCP.eps} \caption{A few representative DCP diagrams from the RMBPT(3) method. All-order forms of diagrams (i) to (viii) appear in the CPDF-RPA method, but the rest do not. These missing contributions appear in the RCC theory to all orders.} \label{DCP} \end{figure} In the $^{133}$Cs atom, the low-lying excited states have a common core $[5p^6]$ and differ only in the valence orbital. Thus, the DHF wave functions of these states can be expressed as $|\Phi_I \rangle = a_I^{\dagger} |\Phi_0 \rangle$ and the exact wave functions can be defined as \begin{eqnarray} | \Psi_I^{(0)} \rangle &=& \Omega^{I(0)} | \Phi_I \rangle \nonumber \\ &=& (\Omega_0^{(0)} + \Omega_I^{(0)} ) | \Phi_I \rangle , \end{eqnarray} where $\Omega^{I(0)}$ and $\Omega_I^{(0)}$ are the total and valence correlation contributing wave operators, respectively. Using these wave operators, we can express Eq. (\ref{eq4}) as \begin{eqnarray} E1_{PV} &=& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f (\Omega_0^{(0)} + \Omega_f^{(0)})^{\dagger} D (\Omega_0^{(0)} + \Omega_I^{(0)}) a_I^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &\times& \frac{\langle \Phi_0 | a_I (\Omega_0^{(0)} + \Omega_I^{(0)})^{\dagger} H_W (\Omega_0^{(0)} + \Omega_i^{(0)}) a_i^{\dagger}| \Phi_0 \rangle} {(E_i^{(0)} - E_I^{(0)})} \nonumber \\ &+& \sum_{I \ne f} \frac{\langle \Phi_0 | a_f (\Omega_0^{(0)} + \Omega_f^{(0)})^{\dagger} H_W (\Omega_0^{(0)} + \Omega_I^{(0)}) a_I^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if}} \nonumber \\ &\times& \frac{\langle \Phi_0 | a_I (\Omega_0^{(0)} + \Omega_I^{(0)})^{\dagger} D (\Omega_0^{(0)} + \Omega_i^{(0)}) a_i^{\dagger}| \Phi_0 \rangle} {(E_f^{(0)} - E_I^{(0)})} . \label{eq50} \end{eqnarray} Since the wave operators for the initial, final and intermediate states include linear combinations of configurations describing one-hole--one-particle, two-hole--two-particle, etc. excitations, it is obvious that a sum-over-states approach cannot include contributions from the higher-level excited configurations contributing to the intermediate states. \subsection{First-principles approach} It is evident that it is imperative to determine the $E1_{PV}$ amplitudes in atomic systems using first-principles approaches that account for contributions from all possible intermediate configurations. This can be done using either Eq. (\ref{eq3}) or Eq. (\ref{eq5}). It is desirable to solve both Eqs. (\ref{eq0}) and (\ref{eq1}) in the former case, while Bloch's equations for the unperturbed and perturbed wave operators need to be solved for the latter approach. The amplitude-determining Bloch equations for the unperturbed operators $\Omega_0^{(0)}$ and $\Omega_v^{(0)}$ are similar to those given by Eqs. (\ref{blw0}) and (\ref{blwv}), respectively. The Bloch equations for the first-order perturbed wave operators can be given by \begin{eqnarray} [\Omega_0^{(1)}, H_0] P_0 &=& (H_W \Omega_0^{(0)} +U_{res} \Omega_0^{(1)} ) P_0 \label{blw1} \end{eqnarray} and \begin{eqnarray} [\Omega_v^{(1)}, H_0] P_v &=& [ H_W (\Omega_0^{(0)} + \Omega_v^{(0)}) + U_{res} ( \Omega_0^{(1)} +\Omega_v^{(1)} ) ] P_v \nonumber \\ && - \Omega_v^{(1)} E_v^{(0)} . \label{blwv2} \end{eqnarray} As mentioned in Introduction, several all-order methods in the CPDF, RPA, CPDF-RPA/TDHF and RCC theory frameworks are employed to determine the $E1_{PV}$ amplitudes in $^{133}$Cs. 
Here, we attempt to formulate all these methods using wave operators so that in the end it is easier to make one-to-one correspondences among the various contributions arising through these methods. In particular, such an exercise will be useful in explaining why there is a sign difference between the Core contributions to $E1_{PV}$ from the TDHF method of Dzuba et al \cite{Dzuba2012} and the RCC method employed by Sahoo et al \cite{Sahoo2021}. We can rewrite Eq. (\ref{eq3}) as \begin{eqnarray} E1_{PV} = \frac{\langle \Psi_f^{(0)} | H_W | \tilde{\Psi}_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \tilde{\Psi}_f^{(1)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} }. \label{eqn7} \end{eqnarray} This can be equivalently expressed by either \begin{eqnarray} E1_{PV} &=& \frac{\langle \Psi_f^{(0)} | D | \Psi_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(0)} | H_W | \tilde{\Psi}_i^{(1)} \rangle} {{\cal N}_{if} } \label{eq8a} \end{eqnarray} or \begin{eqnarray} E1_{PV} &=& \frac{\langle \tilde{\Psi}_f^{(1)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(1)} | D | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } . \label{eq8b} \end{eqnarray} In the above expressions, we define \begin{eqnarray} |\tilde{\Psi}_i^{(1)} \rangle &=& \sum_{I \ne f } | \Psi_I^{(0)} \rangle \frac{ \langle \Psi_I^{(0)} | D | \Psi_i^{(0)} \rangle} {(E_i^{(0)} - E_I^{(0)} + \omega ) } \label{eq9} \end{eqnarray} and \begin{eqnarray} |\tilde{\Psi}_f^{(1)} \rangle &=& \sum_{I \ne i} | \Psi_I^{(0)} \rangle \frac{ \langle \Psi_I^{(0)} | D | \Psi_f^{(0)} \rangle} {(E_f^{(0)} - E_I^{(0)} - \omega ) } \label{eq10} \end{eqnarray} with $\omega = E_f^{(0)} - E_i^{(0)}$ being the excitation energy between the initial and final states. This implies that Eqs. (\ref{eq3}), (\ref{eqn7}), (\ref{eq8a}) and (\ref{eq8b}) are mathematically equal in an exact many-body method. Thus, any of these expressions can be used in the determination of the $E1_{PV}$ amplitude. We shall demonstrate later that the CPDF, RPA, CPDF-RPA and RCC methods use different formulas among those mentioned above. It is therefore important to understand their relations and the classification of individual contributions within these methods. Since the approximations made to the unperturbed and perturbed wave functions are not at the same level in these methods, one can anticipate that results from these methods can be very different unless electron correlation effects in an atomic system are negligibly small. It is also not clear whether the classifications of Core and Tail contributions in these methods are uniquely defined or not. \begin{figure}[t!] \centering \includegraphics[height=4.0cm,width=8.5cm]{CPHF-1.eps} \caption{Goldstone diagrams contributing to the amplitude determining equation of $\Omega_0^{CPDF}$. Through the iterative scheme, these effects are included to all orders in the CPDF method.} \label{CPHF1} \end{figure} To understand the above points, let us find the Core contributions from Eqs. (\ref{eq3}) and (\ref{eqn7}) by expressing the perturbed wave function due to the $D$ operator in terms of wave operators as \begin{eqnarray} | \tilde{\Psi}_v^{(1)} \rangle = (\tilde{\Omega}_0^{(1)} + \tilde{\Omega}_v^{(1)} ) | \Phi_v \rangle . 
\label{eq12} \end{eqnarray} With this, the Core contributing terms in both the $H_W$- and $D$-perturbing approaches are given by \begin{eqnarray} E1_{PV}(C) &=& \frac{\langle \Phi_0 | a_f [\Omega_0^{(0)\dagger} D \Omega_0^{(1)} + \Omega_0^{(1)\dagger} D \Omega_0^{(0)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} \ \ \ \label{eq110} \end{eqnarray} and \begin{eqnarray} E1_{PV}(C) &=& \frac{\langle \Phi_0 | a_f [\tilde{\Omega}_0^{(1)\dagger} H_W \Omega_0^{(0)} + \Omega_0^{(0)\dagger} H_W \tilde{\Omega}_0^{(1)} ] a_i^{\dagger} | \Phi_0 \rangle} {{\cal N}_{if}} . \ \ \ \label{eq111} \end{eqnarray} After a careful analysis it can be shown that the Core contributions arising through the wave operators $\Omega_0^{(1)}$ and $\tilde{\Omega}_0^{(1)}$ can be different. Similar arguments also hold for the Tail contributions arising through the perturbed valence operators $\Omega_v^{(1)}$ and $\tilde{\Omega}_v^{(1)}$. To get better insight into this argument, we can rewrite the sum-over-states formula given by Eq. (\ref{eq50}) as \begin{eqnarray} E1_{PV} &=& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_0^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_0^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_0^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_0^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_I^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_I^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_I^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_I^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_I^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \end{eqnarray} \begin{eqnarray} &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_I^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_I^{(0)} \Omega_0^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_I^{(0)} 
\Omega_0^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_0^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_0^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_0^{(0)\dagger} D \Omega_0^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_i^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+& \sum_{I \ne i} \frac{\langle \Phi_0 | a_f \Omega_f^{(0)\dagger} D \Omega_0^{(0)} \Omega_I^{(0)\dagger} H_W \Omega_0^{(0)} a_i^{\dagger}| \Phi_0 \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)}-\omega)} \nonumber \\ &+&\{H_W\Leftrightarrow D\}\nonumber \\ \label{eq60} \end{eqnarray} Eqs. (\ref{eq50}) and (\ref{eq60}) are equal, but they are organized differently. These equations are nothing but the expanded forms of Eqs. (\ref{eq5}) and (\ref{eqn7}), respectively. However, the different terms are rearranged to place them under the categories of Core and Valence contributing terms in the respective formulas. Thus, we may now outline the findings from the above discussion as follows \begin{enumerate} \item It is important to note that in the evaluation of $E1_{PV}$ both the $H_W$ and $D$ operators can be treated symmetrically. Thus, in an approximated method where correlation effects through both these operators are not incorporated equivalently, distinctions of ``Core'' and ``Valence'' contributions to $E1_{PV}$ cannot be defined uniquely. \item As a consequence of the above point, estimating both the ``Core'' and ``Valence'' contributions using a blend of many-body methods could lead to a misleading final result. \item Numerical stability of the calculation of $E1_{PV}$ can be verified by evaluating the expressions given by Eqs. (\ref{eq3}), (\ref{eqn7}) and (\ref{eq8a}) simultaneously, though this can be a strenuous procedure. \item Scaling wave functions to estimate a part of the contribution, or using the experimental value of $\omega$ in an approximated method, may not always imply that the result is improved; rather, it could introduce further uncertainty into the calculation. \end{enumerate} \begin{figure}[t] \centering \includegraphics[height=4.0cm,width=8.0cm]{CPHF-2.eps} \caption{Diagrams denoting the amplitude solving equation for $\Omega_v^{CPDF}$. These core-polarization effects are included to all orders in the CPDF method.} \label{CPHF2} \end{figure} The last point mentioned above can be understood as discussed below. Let us use the experimental value of $\omega$ (denoted $\omega^{ex}$) to define the first-order perturbed wave functions due to the $D$ operator \begin{eqnarray} |\tilde{\Psi}_i^{(1)} \rangle &=& \sum_{I \ne f } | \Psi_I^{(0)} \rangle \frac{ \langle \Psi_I^{(0)} | D | \Psi_i^{(0)} \rangle} {(E_i^{(0)} - E_I^{(0)} + \omega^{ex} ) } \label{eq09} \end{eqnarray} and \begin{eqnarray} |\tilde{\Psi}_f^{(1)} \rangle &=& \sum_{I \ne i} | \Psi_I^{(0)} \rangle \frac{ \langle \Psi_I^{(0)} | D | \Psi_f^{(0)} \rangle} {(E_f^{(0)} - E_I^{(0)} - \omega^{ex} ) } \label{eq010} . \end{eqnarray} Substituting these wave functions in Eq. 
(\ref{eq4}), the sum-over-states expression for $E1_{PV}$ can be given by \begin{eqnarray} E1_{PV} &=& \sum_{I \ne i} \frac{\langle \Psi_f^{(0)} | D | \Psi_I^{(0)} \rangle \langle \Psi_I^{(0)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} (E_i^{(0)} - E_I^{(0)} - \delta \omega ) } \nonumber \\ && + \sum_{I \ne f} \frac{\langle \Psi_f^{(0)} | H_W | \Psi_I^{(0)} \rangle \langle \Psi_I^{(0)} | D| \Psi_i^{(0)} \rangle} {{\cal N}_{if} (E_f^{(0)} - E_I^{(0)} + \delta \omega ) } , \label{eq04} \end{eqnarray} where $\delta \omega = \omega^{ex} - \omega$, with $\omega$ being the theoretical value, cannot be zero when $\omega$ is obtained using a particular many-body method. As can be seen, the introduction of the $\omega^{ex}$ value affects the contributions from the initial and final perturbed terms differently, leading to an inconsistency in the evaluation of $E1_{PV}$ relative to the \textit{ab initio} calculation. This is more evident from the following inequalities, which hold in an approximate many-body method \begin{eqnarray} E1_{PV} &=& \frac{\langle \Psi_f^{(0)} | D | \Psi_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(1)} | D | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } \nonumber \\ &\ne & \frac{\langle \Psi_f^{(0)} | H_W | \tilde{\Psi}_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \tilde{\Psi}_f^{(1)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } \nonumber \\ & \ne & \frac{\langle \Psi_f^{(0)} | D | \Psi_i^{(1)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(0)} | H_W | \tilde{\Psi}_i^{(1)} \rangle} {{\cal N}_{if} } \label{eq08a} \nonumber \\ & \ne & \frac{\langle \tilde{\Psi}_f^{(1)} | H_W | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } + \frac{\langle \Psi_f^{(1)} | D | \Psi_i^{(0)} \rangle} {{\cal N}_{if} } . \label{eq08b} \end{eqnarray} The above inequalities result from the introduction of $\delta \omega$ into the first-order perturbed wave functions due to $D$ after substituting $\omega^{ex}$ in place of $\omega$. \section{Many-body methods of $E1_{PV}$}\label{sec4} The main objective of APV studies is to obtain the $E1_{PV}$ amplitude to sub-one-percent accuracy from the atomic many-body theory perspective. In view of this, it is imperative to evaluate both the zeroth-order and first-order atomic wave functions in Eq. (\ref{eq3}) very accurately by employing a powerful relativistic atomic many-body method. Owing to complications in accounting for various contributions, the entire calculation is usually performed in several steps. The majority of the contribution from $H$ arises from electron correlation effects due to Coulomb interactions in the presence of the APV interaction, while corrections from the Breit and QED interactions are added separately. It would be necessary to include all these interactions through a common many-body theory in order to account for correlations among them as well. Since corrections from the Breit and QED interactions to $E1_{PV}$ are very small, their reported estimates from various works are almost consistent with each other \cite{Dzuba1989, Milstein2002, Brown2009, Derevianko2001, Kozlov2001, Shabaev2005, Dzuba2002, Sahoo2022}. Thus, we focus here mainly on calculations that use the Dirac-Coulomb (DC) interaction Hamiltonian in the determination of the unperturbed wave functions.
Again, our aim in the present paper is to demonstrate how to achieve high-accuracy calculations of the $E1_{PV}$ amplitudes in $^{133}$Cs by understanding the role of the Core correlations in these quantities and by identifying contributions that can arise through a particular many-body method but can be missed by another. To explain these points explicitly, we discuss calculations of the $E1_{PV}$ amplitudes in $^{133}$Cs using the following methods \begin{enumerate} \item {\it Relativistic Many-body Perturbation Theory (RMBPT)}: This method is employed in the Rayleigh-Schr\"odinger perturbation theory framework to gauge the significance of various physical effects contributing to $E1_{PV}$ at the lower-order level and to learn how they propagate in the all-order perturbative methods. \item {\it CPDF method}: This method is employed in order to reproduce previous results reported by other groups using the same method, now with our basis functions, which are later used in the RCC calculations. \item {\it RPA}: This method is employed for the same purpose as the above and to show the importance of correlation contributions arising through this method that are missing in the CPDF method. \item {\it CPDF-RPA method}: This method is employed again for the above purposes, as well as to understand why the Core contribution from Ref. \cite{Dzuba2012} has a different sign than in other works. \item {\it RCC method}: This method is employed in both the sum-over-states and first-principles approaches. Differences between the two results are further compared with the values that are included through the CPDF-RPA method. \end{enumerate} \begin{figure}[t!] \centering \includegraphics[height=4.0cm,width=8.0cm]{CPHF-3.eps} \caption{Property diagrams of the CPDF method. These diagrams are similar to those of the DHF method, but the $H_W$ operators of the DHF diagrams are replaced by $\Omega_0^{CPDF}$ and $\Omega_v^{CPDF}$. Expansion of $\Omega_0^{CPDF}$ and $\Omega_v^{CPDF}$ in terms of the amplitude diagrams shown in Figs. \ref{CPHF1} and \ref{CPHF2} reveals how the DHF contributions and the lower-order core-polarization effects of the RMBPT(3) method are incorporated within the CPDF method.} \label{CPHF3} \end{figure} The atomic Hamiltonian $H$ in the DC approximation can be expressed as a sum of one-body and two-body operators \begin{eqnarray}\label{eq:DC} H &=& \sum_i \left [c\mbox{\boldmath${\vec \alpha}$}^D \cdot {\vec p}_i+(\beta-1)c^2+V_{nuc}(r_i)\right] +\sum_{i,j>i}\frac{1}{r_{ij}} \nonumber \\ &=& \sum_i h (r_i) + \sum_{i,j>i} g (r_{ij}), \end{eqnarray} where $c$ is the speed of light, $\mbox{\boldmath${\vec \alpha}$}^D$ and $\beta$ are the Dirac matrices, ${\vec p}$ is the single particle momentum operator, and $\sum_{i,j>i}\frac{1}{r_{ij}}$ represents the Coulomb interaction between the electrons located at the $i^{th}$ and $j^{th}$ positions. The entire Hamiltonian is divided into the one-body part ($\sum_i h (r_i)$) and the two-body part ($\sum_{i,j>i} g(r_{ij})$) for convenience. Using the DHF method, we obtain the single particle orbitals from the modified DHF Hamiltonian $F = \sum_i f (r_i) = \sum_i (h_i + u_i^{(0)})$, with the residual interaction $V_{res}=H-F$ in the DC approximation.
Hence \begin{eqnarray} f | i \rangle = \epsilon_i | i \rangle , \label{eqhfe} \end{eqnarray} with the single particle energies $\epsilon_i$ giving the unperturbed DHF energy ${\cal E}_0 = \sum_b^{N_c} \epsilon_b$, while the DHF potential $U_{DHF} = \sum_i u_0 (r_i)$ is given by \begin{eqnarray} u_0 (r_i) | i \rangle = \sum_b^{N_c} \left [ \langle b | g |b \rangle |i \rangle - \langle b | g |i \rangle |b \rangle \right ] \label{eqhfu} \end{eqnarray} with $b$ summing over all $N_c$ occupied orbitals. In the DHF method, both Eqs. (\ref{eqhfe}) and (\ref{eqhfu}) are solved iteratively to obtain self-consistent solutions. It follows from the above expressions that for the determinant $|\Phi_k \rangle = a_k^{\dagger} |\Phi_0 \rangle$, the DHF energy is given by ${\cal E}_k = {\cal E}_0 + \epsilon_k$. Similarly, the DHF energies of the excited configurations are given by ${\cal E}_{a,b,\cdots }^{p, q, \cdots } = {\cal E}_0 + \epsilon_p + \epsilon_q + \cdots - \epsilon_a - \epsilon_b - \cdots$, where $\{a, b, \cdots \}$ and $\{p, q, \cdots \}$ denote the occupied and unoccupied orbitals, respectively. We discuss below the $E1_{PV}$ expressions using the DHF method and the different many-body methods that are employed to account for the electron correlation effects due to $V_{res}$. \begin{figure}[t] \centering \includegraphics[height=4.0cm,width=8.7cm]{RPA-1.eps} \caption{Graphical representation of $\Omega_0^{RPA}$ and its expansion in terms of the lower-order RMBPT(3) method.} \label{RPA1} \end{figure} \subsection{DHF method} Using wave functions from the DHF method, we can evaluate the $E1_{PV}$ amplitude in the mean-field approach as \begin{eqnarray} E1_{PV} = \langle \Phi_f | D | \Phi_i^{(1)} \rangle + \langle \Phi_f^{(1)} | D | \Phi_i \rangle, \end{eqnarray} where $| \Phi_{n=i,f}^{(1)} \rangle $ is the first-order perturbed wave function with respect to $| \Phi_{n=i,f} \rangle $. We can express these wave functions as \begin{eqnarray} | \Phi_n^{(1)} \rangle = \sum_I |\Phi_I \rangle \frac{\langle \Phi_I | H_W | \Phi_n \rangle} {{\cal E}_n - {\cal E}_I} , \end{eqnarray} where $|\Phi_I \rangle$ are the intermediate states with mean-field energies ${\cal E}_I$. Substituting this expression above yields \begin{eqnarray} E1_{PV} &=& \sum_{I\ne i} \frac{\langle \Phi_f | D | \Phi_I \rangle \langle \Phi_I | H_W | \Phi_i \rangle} { {\cal E}_i - {\cal E}_I } \nonumber \\ && + \sum_{I\ne f} \frac{\langle \Phi_f | H_W | \Phi_I \rangle \langle \Phi_I | D | \Phi_i \rangle} { {\cal E}_f - {\cal E}_I } . \end{eqnarray} Using $H_W = \sum_i h_w (r_i)$ and $D=\sum_i d(r_i)$, and following the Slater-Condon rules, this gives \begin{eqnarray} E1_{PV} &=& \sum_{a } \frac{ \langle f | d | a \rangle \langle a | h_w | i \rangle} {\epsilon_i - \epsilon_a } + \sum_{a } \frac{\langle f | h_w | a \rangle \langle a | d | i \rangle} {\epsilon_f - \epsilon_a } \nonumber \\ &+& \sum_{p \ne i} \frac{ \langle f | d | p \rangle \langle p | h_w | i \rangle} {\epsilon_i - \epsilon_p } + \sum_{p \ne f} \frac{\langle f | h_w | p \rangle \langle p | d | i \rangle} {\epsilon_f - \epsilon_p } , \ \ \ \ \end{eqnarray} where $|k \rangle$ (with $k=a,p$) denotes the $k^{th}$ single particle DHF orbital with energy $\epsilon_k$, the occupied orbitals being denoted by $a$ and the virtual orbitals by $p$.
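To make the structure of this lowest-order expression concrete, a minimal Python sketch is given below. The toy inputs and all names (\texttt{e1pv\_dhf}, the random matrices, the orbital partition) are our own illustrative assumptions, not outputs of an actual DHF calculation. \begin{verbatim}
import numpy as np

# Hypothetical single-particle data: energies eps[k], and matrix
# elements d[k, l] = <k|d|l>, hw[k, l] = <k|h_w|l> in the DHF basis.
def e1pv_dhf(eps, d, hw, occ, vir, i, f):
    """Lowest-order (DHF) E1_PV: Core terms (sums over occupied a)
    plus Valence terms (sums over virtual p), as in the text."""
    core = sum(d[f, a] * hw[a, i] / (eps[i] - eps[a]) for a in occ) \
         + sum(hw[f, a] * d[a, i] / (eps[f] - eps[a]) for a in occ)
    val = sum(d[f, p] * hw[p, i] / (eps[i] - eps[p])
              for p in vir if p != i) \
        + sum(hw[f, p] * d[p, i] / (eps[f] - eps[p])
              for p in vir if p != f)
    return core, val

# Toy example with random, non-physical inputs, only to exercise
# the formula; the Core/Valence split returned here anticipates the
# classification made in the text below.
rng = np.random.default_rng(1)
n = 8
eps = np.sort(rng.normal(size=n))
d = rng.normal(size=(n, n)); d = d + d.T
hw = rng.normal(size=(n, n)); hw = hw - hw.T
core, val = e1pv_dhf(eps, d, hw, occ=range(4), vir=range(4, n),
                     i=4, f=5)
print(core, val)
\end{verbatim}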
Contributions arising from the first two sums of the above DHF expression are referred to as the lowest-order Core contributions, while contributions from the latter two sums are the Valence contributions, which include the lowest-order pieces of both the Main and Tail parts. In terms of wave operators, the DHF expression for $E1_{PV}$ can be given by \begin{eqnarray} E1_{PV} = \langle \Phi_f | D \Omega^{i(0,1)} | \Phi_i \rangle + \langle \Phi_f | \Omega^{f(0,1) \dagger} D | \Phi_i \rangle, \end{eqnarray} where $\Omega^{v(0,1)} = \Omega_0^{(0,1)} + \Omega_v^{(0,1)}$. For single excitations, $\Omega_{0}^{(0,1)} \rightarrow \Omega_{1}^{(0,1)}= \sum_{a,p} \frac{ \langle \Phi_a^p | H_W | \Phi_0 \rangle} { {\cal E}_0 - {\cal E}_a^p } a_p^{\dagger} a_a = \sum_{a,p} \frac{ \langle p | h_w | a \rangle} { \epsilon_a - \epsilon_p} a_p^{\dagger} a_a \equiv \sum_{a,p} \Omega_a^p $ and $\Omega_{v}^{(0,1)} \rightarrow \Omega_{1v}^{(0,1)}= \sum_{p} \frac{ \langle \Phi_v^p | H_W | \Phi_v \rangle} { {\cal E}_v-{\cal E}_v^p} a_p^{\dagger} a_v= \sum_{p} \frac{ \langle p | h_w | v \rangle} { \epsilon_v - \epsilon_p} a_p^{\dagger} a_v \equiv \sum_p \Omega_v^p $ for the intermediate states $|\Phi_a^p\rangle$, with energies ${\cal E}_a^p = {\cal E}_0 + \epsilon_p - \epsilon_a$, and $|\Phi_v^p\rangle$, with energies ${\cal E}_v^p= {\cal E}_0 + \epsilon_p $, defined with respect to $|\Phi_0\rangle$ and $|\Phi_v \rangle$ respectively. It should be noted that for double excitations $\Omega_{0}^{(0,1)} \rightarrow \Omega_{2}^{(0,1)}=0$ and $\Omega_{v}^{(0,1)} \rightarrow \Omega_{2v}^{(0,1)}=0$. Representing the wave operators in terms of Goldstone diagrams, we show the Core and Valence contributions to $E1_{PV}$ in Fig. \ref{DF}. Figs. \ref{DF}(a) and (b) correspond to the Core contributing terms, while Figs. \ref{DF}(c) and (d) correspond to the Valence contributing terms. \begin{figure}[t] \centering \includegraphics[height=4.0cm,width=8.5cm]{RPA-2.eps} \caption{Graphical representation of $\Omega_v^{RPA}$ and its expansion in terms of the lower-order RMBPT(3) method.} \label{RPA2} \end{figure} \subsection{RMBPT method} We employ the RMBPT method in the Rayleigh-Schr\"odinger approach and estimate contributions only up to third order in perturbation (the RMBPT(3) method) by considering two orders of $V_{res}$ and one order of $H_W$; i.e., the net Hamiltonian is expressed as \begin{equation} H_{at} = F + \lambda_1 V_{res} + \lambda_2 H_W , \end{equation} where $\lambda_1$ and $\lambda_2$ are arbitrary parameters introduced to count orders of $V_{res}$ and $H_W$ in the calculation. Here, we can calculate either the matrix element of $D$ after perturbing the wave functions by $H_W$, or the matrix element of $H_W$ after perturbing the wave functions by $D$. We adopt both approaches here for two reasons. First, it helps us identify the lower-order contributions to the CPDF and RPA methods so that their inclusion through the RCC method can be easily understood. Second, it is interesting to see the classification of the Core and Valence contributions in both approaches of the RMBPT method. In both approaches, the unperturbed wave operators in the RMBPT method can be given by \begin{eqnarray} \Omega_n^{(0)} = \sum_k \Omega_n^{(k,0)} . \end{eqnarray} Similarly, the first-order perturbed wave operators can be denoted by \begin{eqnarray} \Omega_n^{(1)} = \sum_k \Omega_n^{(k,1)}, \end{eqnarray} where the subscript $n=0,v$ stands for the core or valence operators and the superscript $k$ denotes the order of $V_{res}$.
Amplitudes of the wave operators in the RMBPT method are determined by \begin{eqnarray} [\Omega_0^{(k,0)}, F ] P_0 &=& Q_0 V_{res} \Omega_0^{(k-1,0)}P_0 \label{bl00} \end{eqnarray} and \begin{eqnarray} [\Omega_v^{(k,0)}, F ] P_v &=& Q_v V_{res} (\Omega_0^{(k-1,0)} + \Omega_v^{(k-1,0)} )P_v \nonumber \\ && - \sum_{m=1 }^{k-1} \Omega_v^{(k-m,0)} E_v^{(m,0)} , \label{bl01} \end{eqnarray} where $E_v^{(m,0)}$ is the $m^{th}$-order contribution to the total energy $E_v^{(0)}= \sum_{m=1 }^{\infty} E_v^{(m,0)}$ and is evaluated using the $m^{th}$-order effective Hamiltonian $H_{eff}^{(m,0)} = P_v V_{res} (\Omega_0^{(m-1,0)} + \Omega_v^{(m-1,0)} )P_v $, a part of the net effective Hamiltonian $H_{eff}^{(0)}= \sum_{m=1}^{\infty} H_{eff}^{(m,0)}$. Amplitudes of the perturbed wave operators due to $H_W$ can be evaluated from \begin{eqnarray} [\Omega_0^{(k,1)},F]P_0 &=& Q_0 H_W \Omega_0^{(k,0)}P_0 + Q_0 V_{res} \Omega_0^{(k-1,1)}P_0 \ \ \ \ \ \label{eq44a} \end{eqnarray} and \begin{eqnarray} [\Omega_v^{(k,1)},F]P_v &=& Q_v V_{res} (\Omega_0^{(k-1,1)} + \Omega_v^{(k-1,1)}) P_v \nonumber \\ && + Q_v H_W (\Omega_0^{(k,0)} + \Omega_v^{(k,0)}) P_v \nonumber \\ && - \sum_{m=1 }^{k-1} \Omega_v^{(k-m,1)} E_v^{(m,0)} . \ \ \ \label{eq44b} \end{eqnarray} In the above expressions, the lowest-order unperturbed wave operator is given by $\Omega_n^{(0,0)}=1$. For the case of $D$ as the perturbation, Eqs. (\ref{eq44a}) and (\ref{eq44b}) can again be used to solve for the amplitudes of the $\tilde{\Omega}_0^{(1)}$, $\tilde{\Omega}_i^{(1)}$ and $\tilde{\Omega}_f^{(1)}$ operators in the RMBPT method by replacing $\Omega_{a/v}^p$ by $ \Omega_{a/v}^{p +} =\frac{ \langle p | d | a/v \rangle} { \epsilon_{a/v} - \epsilon_p - \omega } a_p^{\dagger} a_{a/v}$ and $\Omega_{a/v}^{p \dagger}$ by the complex conjugate of $\Omega_{a/v}^{p -} = \frac{ \langle p | d | a/v \rangle} {\epsilon_{a/v} - \epsilon_p + \omega} a_p^{\dagger} a_{a/v}$. It follows that the $n^{th}$-order $E1_{PV}$ expression is \begin{eqnarray} E1_{PV} &=& \frac{1}{\sum_{l=0}^{n-1} {\cal N}_{if}^l } \Big[ \sum_{k=0}^{n} \langle \Phi_f | ({\Omega_0^{(n-k,0)}} + {\Omega_f^{(n-k,0)}})^{\dagger} D \nonumber \\ && \times (\Omega_0^{(k,1)} + \Omega_i^{(k,1)} ) |\Phi_i \rangle + \sum_{k=0}^{n} \langle \Phi_f | ({\Omega_0^{(k,1)}} \nonumber \\ && + {\Omega_f^{(k,1)}})^{\dagger} D (\Omega_0^{(n-k,0)} + \Omega_i^{(n-k,0)} ) |\Phi_i \rangle \Big] , \label{eqmp1} \end{eqnarray} where $H_W$ is considered as the perturbation, with ${\cal N}_{if}^k = [ (\sum_l \langle \Phi_f| ( \Omega_0^{(k-l,0)} + \Omega_f^{(k-l,0)})^{\dagger} (\Omega_0^{(l,0)} + \Omega_f^{(l,0)} ) |\Phi_f \rangle) (\sum_m \langle \Phi_i| ( \Omega_0^{(k-m,0)} + \Omega_i^{(k-m,0)})^{\dagger} (\Omega_0^{(m,0)} + \Omega_i^{(m,0)} ) |\Phi_i \rangle) ]^{1/2}$. In the case of $D$ as the perturbation, it yields \begin{eqnarray} E1_{PV} &=& \frac{1}{\sum_{l=0}^{n-1} {\cal N}_{if}^l } \Big[ \sum_{k=0}^{n} \langle \Phi_f | ({\Omega_0^{(n-k,0)}} + {\Omega_f^{(n-k,0)}})^{\dagger} H_W \nonumber \\ && \times (\tilde{\Omega}_0^{(k,1)} + \tilde{\Omega}_i^{(k,1)} ) |\Phi_i \rangle + \sum_{k=0}^{n} \langle \Phi_f | ({\tilde{\Omega}_0^{(k,1)}} \nonumber \\ && + {\tilde{\Omega}_f^{(k,1)}})^{\dagger} H_W (\Omega_0^{(n-k,0)} + \Omega_i^{(n-k,0)} ) |\Phi_i \rangle \Big] . \label{eqmp2} \end{eqnarray} In Fig. \ref{MBPT}, we show the important correlation-contributing Goldstone diagrams for $E1_{PV}$ arising through the RMBPT(3) method.
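The order-by-order structure of Eqs. (\ref{bl00}) and (\ref{eq44a}) can be mimicked in a small matrix model, as in the following Python sketch; the determinant-space toy matrices and all names are our own assumptions, and the valence equations (\ref{bl01}) and (\ref{eq44b}) would add the $E_v^{(m,0)}$ folding terms omitted here. \begin{verbatim}
import numpy as np

# Schematic solution of the Bloch recursions in a toy determinant
# space: F is diagonal with energies E, `model` indexes the model
# space (P), the rest is the Q space. Vres and Hw are toy matrices.
def mbpt_wave_operators(E, Vres, Hw, model, kmax):
    n = len(E)
    Qs = [q for q in range(n) if q not in model]
    omega0 = [np.eye(n)]          # Omega^(0,0) = 1
    w = np.zeros((n, n))          # Omega^(0,1): DHF-level operator
    for q in Qs:
        for p in model:
            w[q, p] = Hw[q, p] / (E[p] - E[q])
    omega1 = [w]
    for k in range(1, kmax + 1):
        w0 = np.zeros((n, n))
        for q in Qs:
            for p in model:   # [Omega,F]P = Q Vres Omega^(k-1,0) P
                w0[q, p] = (Vres @ omega0[k-1])[q, p] / (E[p] - E[q])
        omega0.append(w0)
        w1 = np.zeros((n, n))
        for q in Qs:
            for p in model:   # Eq. (eq44a) at order k in Vres
                w1[q, p] = ((Hw @ omega0[k])[q, p]
                            + (Vres @ omega1[k-1])[q, p]) \
                           / (E[p] - E[q])
        omega1.append(w1)
    return omega0, omega1
\end{verbatim}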
It should be noted that the lowest-order diagrams of the RMBPT(3) method are the same as the diagrams corresponding to the DHF method, and they are not shown here. Since Eqs. (\ref{eqmp1}) and (\ref{eqmp2}) are equivalent at a given level of approximation, the Goldstone diagrams are identical in both cases. Thus, the Core and Valence contributions arising through both expressions can be distinguished and quoted separately by adopting the definitions of the respective wave operators. This will help us identify the lower-order Core and Valence correlation contributions to the CPDF, RPA, CPDF-RPA and RCC methods that are discussed next. In order to distinguish the results from the two approaches in the presentation, we use the notations RMBPT(3)$^w$ and RMBPT(3)$^d$ in place of RMBPT(3) for the cases with $H_W$ as the perturbation and with $D$ as the perturbation, respectively. \begin{figure}[t] \centering \includegraphics[height=4.0cm,width=8.0cm]{RPA-3.eps} \caption{Property contributing diagrams of the RPA. These diagrams are similar to the CPDF diagrams, but the core-polarization effects are included through the $D$ operator instead of $H_W$. Expansion of $\Omega_0^{RPA}$ and $\Omega_v^{RPA}$ in terms of Figs. \ref{RPA1} and \ref{RPA2} shows that the RPA property diagrams include the DHF contributions and core-polarization effects that are distinctly different from those of the CPDF method.} \label{RPA3} \end{figure} \subsection{CPDF Method} In the CPDF method, it is possible to extend the $E1_{PV}$ calculation to all orders in a very simple manner by extending the DHF expression, and with much less computational effort compared to the RMBPT(3) method. This can be derived by starting with the DHF expression, given by \begin{eqnarray} E1_{PV} = \langle f|d|i^{(1)} \rangle + \langle f^{(1)} |d|i\rangle, \label{eqhf} \end{eqnarray} where \begin{eqnarray} |k^{(1)} \rangle = \sum_{I \ne k} |I \rangle \frac{\langle I |h_w|k \rangle}{\epsilon_k -\epsilon_I} . \end{eqnarray} In the CPDF method, the first-order perturbed single particle orbital $|k^{(1)} \rangle $ is obtained by including the core-polarization effects due to $V_{res}$ to all orders (and is then denoted by $|k^{PV} \rangle $) through an effective potential, similar to $u_i^{(0)}$, defined in the presence of $h_w$. To arrive at this expression, we consider the net Hamiltonian $H_{int} = H + \lambda_2 H_W$ to define the modified single particle DHF Hamiltonian $f_i^{PV} = f_i + \lambda_2 h_w$ and potential as \begin{eqnarray} f_i^{PV} | \tilde{i} \rangle = \tilde{\epsilon}_i | \tilde{i} \rangle \label{eqcpp} \end{eqnarray} and \begin{eqnarray} \tilde{u}_i = \sum_b^{N_c} \left [ \langle \tilde{b} | g | \tilde{b} \rangle | \tilde{i} \rangle - \langle \tilde{b} | g | \tilde{i} \rangle | \tilde{b} \rangle \right ] , \end{eqnarray} where the tilde symbol denotes the solution for $H_{int}$ in place of $H$. Now expanding $| \tilde{i} \rangle = | i \rangle + \lambda_2 | i^{PV} \rangle + {\cal O} (\lambda_2^2)$ from Eq. (\ref{eqcpp}) and retaining terms that are linear in $\lambda_2$, we get \begin{eqnarray} (f_i - \epsilon_i) |i^{PV} \rangle = -h_w |i \rangle - u_i^{PV} |i \rangle , \label{eqhf1} \end{eqnarray} where \begin{eqnarray} u_i^{PV} | i \rangle = \sum_b^{N_c} \left [ \langle b | g |b \rangle |i^{PV} \rangle - \langle b | g |i^{PV} \rangle |b \rangle \right. \nonumber \\ \left. + \langle b^{PV} | g |b \rangle |i \rangle - \langle b^{PV} | g |i \rangle |b \rangle \right ] .
\label{eqhfu1} \end{eqnarray} \begin{figure}[t] \centering \includegraphics[scale=0.35]{CPDF-RPA-2.eps} \caption{Goldstone diagrams representing the last four terms of Eq. (\ref{eqhfu4}), i.e., the $u^{PV\pm}$ term that gives rise to the DCP contributions in the CPDF-RPA method. Diagrams (ix) to (xvi), along with their exchanges, arise due to the implementation of the orthogonalization condition.} \label{CPHF-RPA} \end{figure} Like Eqs. (\ref{eqhfe}) and (\ref{eqhfu}), both Eqs. (\ref{eqhf1}) and (\ref{eqhfu1}) are solved iteratively to obtain the self-consistent solutions that account for the core-polarization effects to all orders. Using the above APV-modified orbitals, $E1_{PV}$ can be evaluated as \begin{eqnarray} E1_{PV} = \langle f|d|i^{PV} \rangle + \langle f^{PV} |d|i\rangle . \label{eqcphf} \end{eqnarray} To make a one-to-one comparison between contributions arising through the CPDF method and other many-body methods, including the lower-order terms of the RMBPT(3) method, we can present the CPDF expression for $E1_{PV}$ using the wave operators as \begin{eqnarray} E1_{PV} = \langle \Phi_f | D \Omega^{i,CPDF} | \Phi_i \rangle + \langle \Phi_f | \Omega^{f,CPDF \dagger} D | \Phi_i \rangle, \end{eqnarray} where $\Omega^{v,CPDF} = \Omega_0^{CPDF} + \Omega_{v}^{CPDF} = \sum_{k=1}^{\infty} \left [\sum_{a,p} \Omega_{a,p}^{(k,1)} + \sum_p \Omega_{v,p}^{(k,1)} \right ] $, with the subscripts $0$ and $v$ standing for the operators responsible for including the core and valence correlations, and the superscript $k$ denoting the order of $V_{res}$. Amplitudes of these operators are given by \begin{eqnarray} \Omega_{a,p}^{(k,1)} &=& \Omega_a^p + \sum_{b,q} \Big( \frac{ [\langle pb | g | aq \rangle - \langle pb | g| qa \rangle] } {\epsilon_a - \epsilon_p } \Omega_{b,q}^{(k-1,1)} \nonumber \\ && + \Omega_{b,q}^{{(k-1,1)}^{\dagger}} \frac{ [\langle pq | g | ab \rangle - \langle pq | g | ba \rangle] }{\epsilon_a-\epsilon_p} \Big) \label{cphfwaveop1} \end{eqnarray} and \begin{eqnarray} \Omega_{v,p}^{(k,1)} &=& \Omega_v^p + \sum_{b,q} \Big( \frac{ [\langle pb | g | vq \rangle - \langle pb | g | qv \rangle] } {\epsilon_v - \epsilon_p} \Omega_{b,q}^{(k-1,1)} \nonumber \\ && + \Omega_{b,q}^{{(k-1,1)}^{\dagger}} \frac{ [\langle pq | g | vb \rangle - \langle pq | g | bv \rangle] }{\epsilon_v-\epsilon_p} \Big) , \label{cphfwaveop2} \end{eqnarray} where the sums over $a,b$ and $p,q$ run over the occupied and virtual orbitals, respectively. To compute the amplitudes of the above operators, we initially set $\Omega_{a,p}^{(0,1)} \approx \Omega_a^p$ and $\Omega_{v,p}^{(0,1)} \approx \Omega_v^p$ to initiate the iteration procedure from $k=1$. As can be seen, only the effective singly excited configurations contribute through the $\Omega^{v,CPDF} $ operators. Thus, the method completely misses the pair-correlation contributions. \begin{figure}[t] \centering \includegraphics[height=8.0cm,width=8.0cm]{CPDF-RPA-P-1.eps} \caption{Goldstone diagrams contributing to the $E1_{PV}$ amplitude in the CPDF-RPA method. By comparing these diagrams with the diagrams of the RMBPT(3) method, which can be understood by expanding the $\Omega_{0/v}^{CPDF}$, $\Omega_{0/v}^{RPA}$ and $\Omega^{CPDF\pm}$ operators, it follows that the CPDF-RPA method does not include contributions from pair-correlation effects and from some of the diagrams representing DCP effects.} \label{CPHF-RPA1} \end{figure} The Goldstone diagrams that contribute to the amplitudes of $\Omega_0^{CPDF}$ are shown in Fig. \ref{CPHF1}.
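The self-consistent content of Eqs. (\ref{eqhf1}), (\ref{eqhfu1}) and (\ref{cphfwaveop1})--(\ref{cphfwaveop2}) can also be summarized in the following minimal Python sketch; the coefficient matrix \texttt{X}, the toy two-electron array \texttt{g2} and the plain fixed-point iteration are our own illustrative assumptions (real orbitals, no angular reduction, no convergence acceleration). \begin{verbatim}
import numpy as np

# eps[k]: DHF energies; hw[p, k] = <p|h_w|k>;
# g2[p, q, r, s] = <pq|g|rs> (toy two-electron integrals).
# X[p, k] collects |k^PV> = sum_p X[p, k] |p> for k in `targets`,
# which should contain the core orbitals `occ` plus the valence ones.
def cpdf_orbitals(eps, hw, g2, occ, vir, targets, n_iter=50):
    n = len(eps)
    X = np.zeros((n, n))
    for _ in range(n_iter):
        Xnew = np.zeros((n, n))
        for k in targets:
            for p in vir:
                # <p|u^PV|k> built from the current core part of X
                upv = sum((g2[p, b, k, q] - g2[p, b, q, k]) * X[q, b]
                          + X[q, b] * (g2[p, q, k, b] - g2[p, q, b, k])
                          for b in occ for q in vir)
                Xnew[p, k] = (hw[p, k] + upv) / (eps[k] - eps[p])
        X = Xnew
    return X

# E1_PV of Eq. (eqcphf):  <f|d|i^PV> + <f^PV|d|i>
# e1pv = d[f, :] @ X[:, i] + X[:, f] @ d[:, i]
\end{verbatim}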
The Goldstone diagrams contributing to the amplitudes of $\Omega_v^{CPDF}$ are similarly shown in Fig. \ref{CPHF2}. Using these operators, we show the final Goldstone diagrams that contribute to $E1_{PV}$ in Fig. \ref{CPHF3}. By analyzing these diagrams in terms of the Goldstone diagrams shown in Figs. \ref{CPHF1} and \ref{CPHF2}, it is easy to follow how the core-polarization effects are included to all orders through the CPDF method. Again, by comparing those diagrams with the diagrams from the DHF and RMBPT(3) methods, the relations among them can be easily understood. One can also see that the number of Goldstone diagrams appearing in the CPDF method is much smaller than in the RMBPT(3) method. Thus, the computational effort of the CPDF method is much smaller than that of the RMBPT(3) method even though it is an all-order theory. \subsection{RPA} From Eq. (\ref{eqcphf}), it follows that the CPDF method includes correlation effects in the first-order wave functions through the $H_W$ operator only. Thus, it completely misses the correlation effects in the unperturbed state that can arise through the $D$ operator. It can also be noted that the CPDF method is formulated based on Eq. (\ref{eq3}). Therefore, proceeding in a similar manner based on Eq. (\ref{eqn7}) leads to capturing the core-polarization effects through the $D$ operator; the RPA is formulated along exactly this line. \begin{figure}[t] \centering \includegraphics[scale=0.40]{T0.eps} \caption{Goldstone diagrams demonstrating the breakdown of the $T^{(0)}$ operators in terms of lower-order perturbative excitations.} \label{RCC-T0} \end{figure} To derive the RPA expression, we consider the net Hamiltonian $H_{int}^{\pm} = H + \lambda_3 D \mp \omega $ to define the modified single particle DHF Hamiltonian $f_i^{\pm} = f_i + \lambda_3 d \mp \omega$ and potential as \begin{eqnarray} f_i^{\pm} | \tilde{i}^{\pm} \rangle = \tilde{\epsilon}_i^{\pm} | \tilde{i}^{\pm} \rangle \end{eqnarray} and \begin{eqnarray} \tilde{u}_i^{\pm} = \sum_b^{N_c} \left [ \langle \tilde{b}^{\pm} | g | \tilde{b}^{\pm} \rangle | \tilde{i}^{\pm} \rangle - \langle \tilde{b}^{\pm} | g | \tilde{i}^{\pm} \rangle | \tilde{b}^{\pm} \rangle \right ] , \end{eqnarray} respectively, where the superscript $\pm$ denotes the solution for $H_{int}^{\pm}$ in place of $H$. Now expanding $| \tilde{i}^{\pm} \rangle = | i \rangle + \lambda_3 | i^{\pm} \rangle + {\cal O} (\lambda_3^2)$ and retaining terms that are linear in $\lambda_3$, we get \begin{eqnarray} (f_i - \epsilon_i\mp \omega) |i^{\pm} \rangle = - d |i \rangle - u_i^{\pm} |i \rangle , \label{eqhf2} \end{eqnarray} where \begin{eqnarray} u_i^{\pm} | i \rangle = \sum_b^{N_c} \left [ \langle b | g |b \rangle |i^{\pm} \rangle - \langle b | g |i^{\pm} \rangle |b \rangle \right. \nonumber \\ \left. + \langle b^{\mp} | g |b \rangle |i \rangle - \langle b^{\mp} | g |i \rangle |b \rangle \right ] . \label{eqhfu2} \end{eqnarray} Here also, both Eqs. (\ref{eqhf2}) and (\ref{eqhfu2}) are solved iteratively to obtain the self-consistent solutions that account for the core-polarization effects to all orders.
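For the RPA, the analogous sketch differs from the CPDF loop above only in the driver ($d$ in place of $h_w$), the $\pm\omega$ shifts in the denominators, and the coupling of the two branches through the conjugate core terms of Eq. (\ref{eqhfu2}); again, all names and the toy inputs are our own assumptions. \begin{verbatim}
import numpy as np

def rpa_orbitals(eps, d, g2, occ, vir, targets, omega, n_iter=50):
    n = len(eps)
    Xp, Xm = np.zeros((n, n)), np.zeros((n, n))  # |k^+>, |k^->
    for _ in range(n_iter):
        Xp_new, Xm_new = np.zeros((n, n)), np.zeros((n, n))
        for k in targets:
            for p in vir:
                # direct terms stay within a branch; the conjugate
                # core terms of Eq. (eqhfu2) mix in the other branch
                up = sum((g2[p, b, k, q] - g2[p, b, q, k]) * Xp[q, b]
                         + Xm[q, b] * (g2[p, q, k, b] - g2[p, q, b, k])
                         for b in occ for q in vir)
                um = sum((g2[p, b, k, q] - g2[p, b, q, k]) * Xm[q, b]
                         + Xp[q, b] * (g2[p, q, k, b] - g2[p, q, b, k])
                         for b in occ for q in vir)
                Xp_new[p, k] = (d[p, k] + up) / (eps[k] - eps[p] + omega)
                Xm_new[p, k] = (d[p, k] + um) / (eps[k] - eps[p] - omega)
        Xp, Xm = Xp_new, Xm_new
    return Xp, Xm
\end{verbatim}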
Using the above $D$-operator-modified orbitals, $E1_{PV}$ can be evaluated as \begin{eqnarray} E1_{PV} = \langle \Phi_f | H_W \Omega^{i,+} | \Phi_i \rangle + \langle \Phi_f | \Omega^{f,- \dagger} H_W | \Phi_i \rangle, \end{eqnarray} where $\Omega^{v,\pm} = \Omega_0^{\pm} + \Omega_v^{\pm} = \sum_{k=1}^{\infty} \left [\sum_{a,p} \Omega_{a,p}^{\pm(k,1)} + \sum_p \Omega_{v,p}^{\pm(k,1)} \right ] $, with the subscripts $0$ and $v$ standing for the operators responsible for including the Core and Valence correlations, and the superscript $k$ denoting the order of $V_{res}$. Amplitudes of these operators are given by \begin{eqnarray} \Omega_{a,p}^{\pm (k,1)} &=& \Omega_a^{p \pm} + \sum_{b,q} \Big( \frac{ [\langle pb | g | aq \rangle - \langle pb | g| qa \rangle] } {\epsilon_a - \epsilon_p \pm \omega } \Omega_{b,q}^{\pm (k-1,1)} \nonumber \\ && + \Omega_{b,q}^{\mp {(k-1,1)}^{\dagger}} \frac{ [\langle pq | g | ab \rangle - \langle pq | g | ba \rangle] }{\epsilon_a-\epsilon_p \pm \omega} \Big) \label{rpawaveop1} \end{eqnarray} and \begin{eqnarray} \Omega_{v,p}^{\pm (k,1)} &=& \Omega_v^{p \pm} + \sum_{b,q} \Big( \frac{ [\langle pb | g | vq \rangle - \langle pb | g | qv \rangle] } {\epsilon_v - \epsilon_p \pm \omega} \Omega_{b,q}^{\pm (k-1,1)} \nonumber \\ && + \Omega_{b,q}^{{\mp (k-1,1)}^{\dagger}} \frac{ [\langle pq | g | vb \rangle - \langle pq | g | bv \rangle] }{\epsilon_v-\epsilon_p \pm \omega} \Big) , \label{rpawaveop2} \end{eqnarray} where we initially assume $\Omega_{a,p}^{\pm (0,1)} \approx \Omega_a^{p \pm}$ and $\Omega_{v,p}^{\pm (0,1)} \approx \Omega_v^{p \pm}$ for the iteration procedure. We also mention here that in Eqs. (\ref{rpawaveop1}) and (\ref{rpawaveop2}), the $\omega$ value can be taken from experiment, while in the {\it ab initio} framework it is taken from the DHF method. Following the explanation in the previous subsection, it is obvious that the RPA wave operators pick up the core-polarization correlations through the $D$ operator to all orders. Again, based on the classification adopted in this work, the Goldstone diagrams contributing to the amplitude-determining equation for the Core operator are shown in Fig. \ref{RPA1}, while the diagrams contributing to the amplitudes of the Valence operator are shown in Fig. \ref{RPA2}. \begin{figure}[t] \centering \includegraphics[scale=0.4]{S0.eps} \caption{Goldstone diagrams demonstrating the breakdown of the $S_v^{(0)}$ operators in terms of lower-order perturbative excitations.} \label{RCC-Sv0} \end{figure} Using the above operators, we show the final Goldstone diagrams that contribute to $E1_{PV}$ in the RPA in Fig. \ref{RPA3}. By analyzing these diagrams in terms of the Goldstone diagrams shown in Figs. \ref{RPA1} and \ref{RPA2}, one can follow how the core-polarization effects due to $D$ are included to all orders in the RPA. Again, by comparing these diagrams with the diagrams from the DHF and RMBPT(3) methods, the relations among the methods can be understood. Though the number of Goldstone diagrams appearing in the RPA and the CPDF method is the same, the amplitudes in the CPDF method are expected to converge faster than in the RPA owing to the stronger correlations arising through $D$ than through $H_W$. It can be noticed here that the Core correlations (without the DHF contributions) arising in the RPA are distinctly different from those that appear via the CPDF method.
\begin{figure}[t] \centering \includegraphics[scale=0.4]{T1.eps} \caption{Goldstone diagrams demonstrating the breakdown of the $T^{(1)}$ operators in terms of lower-order perturbative excitations.} \label{RCC-T1} \end{figure} \subsection{CPDF-RPA/TDHF method} As realized above, the CPDF method and the RPA include core-polarization effects only through the first-order perturbed wave functions, while the unperturbed wave functions and energies in both cases are taken from the DHF method. In order to include core-polarization effects through both sets of states, it is necessary to include both the $H_W$ and $D$ operators in the perturbation. The simplest approach for doing so would be to add the CPDF and RPA results and remove the repeated appearance of the DHF value from one of the approaches. However, such an approach would omit correlations between the $H_W$ and $D$ operators, which may not be negligible. Keeping the above in view, we define the total Hamiltonian as \begin{eqnarray} H_t &=& H + \lambda_2 H_W + \lambda_3 D \nonumber \\ &\equiv & H_{int} + \lambda_3 D . \end{eqnarray} Treating both the $H_W$ and $D$ operators perturbatively, the exact atomic wave function ($| \overline{\Psi}_v \rangle$) of $H_t$ can be expressed as \begin{eqnarray} | \overline{\Psi}_v \rangle &=& |\Psi_v^{(0,0)} \rangle + \lambda_2 |\Psi_v^{(1,0)} \rangle + \lambda_3 | \tilde{\Psi}_v^{(0,1)} \rangle \nonumber \\ && + \lambda_2 \lambda_3 |\Psi_v^{(1,1)} \rangle + \cdots . \end{eqnarray} As denoted earlier, $|\Psi_v^{(m,n)} \rangle $ represents the consideration of $m$ orders of $H_W$ and $n$ orders of $D$ in the atomic wave function $|\Psi_v \rangle$ of $H$. In the wave operator formalism, it is given by \begin{eqnarray} \overline{\Omega}_v | \Phi_v \rangle &=& \Omega_v^{(0,0)} |\Phi_v \rangle + \lambda_2 \Omega_v^{(1,0)} |\Phi_v \rangle + \lambda_3 \tilde{\Omega}_v^{(0,1)} |\Phi_v \rangle \nonumber \\ && + \lambda_2 \lambda_3 \Omega_v^{(1,1)} |\Phi_v \rangle + \cdots , \end{eqnarray} where the superscripts carry the same meaning as above. It can be noted that each $\Omega_v^{(m,n)}$ component will have parts carrying the Core and Valence correlations separately. \begin{figure}[t] \centering \includegraphics[scale=0.4]{S1.eps} \caption{Goldstone diagrams demonstrating the breakdown of the $S_v^{(1)}$ operators in terms of lower-order perturbative excitations.} \label{RCC-Sv1} \end{figure} In this case, we can determine the $E1_{PV}$ amplitude as the transition amplitude of $O\equiv \lambda_2 H_W + \lambda_3 D $ between the initial perturbed state and the final unperturbed state, or between the initial unperturbed state and the final perturbed state (see Eqs. (\ref{eq8a}) and (\ref{eq8b})), i.e.
\begin{eqnarray} E1_{PV} &=& \langle \Psi_f^{(0,0)} |\Psi_i^{(1,1)} \rangle + \langle \Psi_f^{(0,0)} | D |\Psi_i^{(1,0)} \rangle \nonumber \\ && + \langle \Psi_f^{(0,0)} | H_W | \tilde{\Psi}_i^{(0,1)} \rangle \nonumber \\ &=& \langle \Phi_f | \Omega_f^{(0,0)\dagger} \Omega_i^{(1,1)} | \Phi_i \rangle + \langle \Phi_f | \Omega_f^{(0,0)\dagger} D \Omega_i^{(1,0)} |\Phi_i \rangle \nonumber \\ &+& \langle \Phi_f | \Omega_f^{(0,0)\dagger} H_W \tilde{\Omega}_i^{(0,1)} |\Phi_i \rangle \label{cprpa1} \end{eqnarray} or \begin{eqnarray} E1_{PV} &=& \langle \Psi_f^{(1,1)} |\Psi_i^{(0,0)} \rangle + \langle \Psi_f^{(1,0)} | D |\Psi_i^{(0,0)} \rangle \nonumber \\ && + \langle \tilde{\Psi}_f^{(0,1)} | H_W |\Psi_i^{(0,0)} \rangle \nonumber \\ &=& \langle \Phi_f | \Omega_f^{(1,1)\dagger} \Omega_i^{(0,0)} | \Phi_i \rangle + \langle \Phi_f | \Omega_f^{(1,0)\dagger} D \Omega_i^{(0,0)} |\Phi_i \rangle \nonumber \\ &+& \langle \Phi_f | \tilde{\Omega}_f^{(0,1)\dagger} H_W \Omega_i^{(0,0)} |\Phi_i \rangle , \label{cprpa2} \end{eqnarray} where we keep terms of the order of $\lambda_2 \lambda_3$. Note that both the $H_W$ and $D$ operators are treated on an equal footing in this approach. Thus, the definitions of the Core and Valence contributions to $E1_{PV}$ are identical for Eqs. (\ref{cprpa1}) and (\ref{cprpa2}). Also, it would be prudent to use both expressions in an approximate method to verify the numerical uncertainty of the final result. However, if $\omega^{ex}$ is used (some earlier studies have done so through the scaling procedure), then the results from these two equations may not agree with each other due to inconsistencies in the treatment of the intermediate states. \begin{figure}[t] \centering \includegraphics[height=8.0cm,width=8.0cm]{PROPCC.eps} \caption{A few important $E1_{PV}$ evaluating diagrams in the RCCSD method. The diagrams shown as $D T^{(1)}$ and its complex conjugate (c.c.) term are the dominant Core contributing effects, which include all the core-polarization effects of the CPDF method, the pair-correlation effects of the RMBPT(3) method and their correlations. Similarly, the diagrams shown as $D S_{1i}^{(1)}$ and $S_{1f}^{(1)\dagger} D$ contain the Valence contributing core-polarization effects of the CPDF method, pair-correlation effects from the RMBPT(3) method and their correlations. The diagrams representing $D S_{2i}^{(1)}$ and $S_{2f}^{(1)\dagger} D$ contain Core contributions from the RPA and DCP contributions from the CPDF-RPA method, as well as contributions that are absent in those methods but appear in the RMBPT(3) method. Non-linear RCCSD terms incorporate higher-order correlation effects of the above terms and account for correlations among core-polarization and pair-correlation effects and their intercombinations.} \label{RCC-prop} \end{figure} The modified single particle Hamiltonian for the corresponding Hamiltonian $H_t = H_{int} + \lambda_3 D$ in the CPDF-RPA method can be written as $f_i^{PV\pm} = f_i^{PV} + \lambda_3 d \mp \omega$. It follows that \begin{eqnarray} f_i^{PV \pm} | \overline{i} \rangle = \overline{\epsilon}_i | \overline{i} \rangle \end{eqnarray} and \begin{eqnarray} \overline{u}_i = \sum_b^{N_c} \left [ \langle \overline{b} | g | \overline{b} \rangle | \overline{i} \rangle - \langle \overline{b} | g | \overline{i} \rangle | \overline{b} \rangle \right ] , \end{eqnarray} where the bar symbol denotes the solution for $H_t$. By expanding, we get $| \overline{i} \rangle = | i^{PV} \rangle + \lambda_3 | i^{PV \pm} \rangle + {\cal O}(\lambda_3^2) $.
It gives \begin{eqnarray} (f_i^{PV} - \epsilon_i^{PV} \mp \omega) |i^{PV\pm} \rangle = -d |i^{PV} \rangle - u_i^{PV(1)} |i^{PV} \rangle , \label{eqhf3} \end{eqnarray} where \begin{eqnarray} u_i^{PV(1)} | i^{PV} \rangle = \sum_b^{N_c} \left [ \langle b^{PV} | g |b^{PV} \rangle |i^{PV \pm} \rangle \right. \nonumber \\ \left. - \langle b^{PV} | g |i^{PV \pm} \rangle |b^{PV} \rangle \right. \nonumber \\ \left. + \langle b^{PV \mp} | g |b^{PV} \rangle |i^{PV} \rangle \right. \nonumber \\ \left. - \langle b^{PV \mp} | g |i^{PV} \rangle |b^{PV} \rangle \right ] . \label{eqhfu3} \end{eqnarray} Further expanding Eqs. (\ref{eqhf3}) and (\ref{eqhfu3}), and retaining terms of the order of $\lambda_2 \lambda_3$, we get \begin{eqnarray} (f_i - \epsilon_i \mp \omega) |i^{PV\pm} \rangle &=& -d |i^{PV} \rangle - u_i^{\pm} | i^{PV} \rangle - h_w |i^{\pm} \rangle \nonumber \\ && - u_i^{PV} | i^{\pm} \rangle - u_i^{PV \pm} |i \rangle , \label{eqhf4} \end{eqnarray} where \begin{eqnarray} u_i^{PV \pm} | i \rangle = \sum_b^{N_c} \left [ \langle b^{\mp} | g |b^{PV} \rangle |i \rangle \right. \nonumber \\ \left. - \langle b^{\mp} | g |i \rangle |b^{PV} \rangle \right. \nonumber \\ \left. + \langle b^{PV} | g |b^{\pm} \rangle |i \rangle \right. \nonumber \\ \left. - \langle b^{PV} | g |i \rangle |b^{\pm} \rangle \right. \nonumber \\ \left. + \langle b | g |b^{PV \pm} \rangle |i \rangle \right. \nonumber \\ \left. - \langle b | g |i \rangle |b^{PV \pm} \rangle \right. \nonumber \\ \left. + \langle b^{PV \mp} | g |b \rangle |i \rangle \right. \nonumber \\ \left. - \langle b^{PV \mp} | g |i \rangle |b \rangle \right ] . \label{eqhfu4} \end{eqnarray} \begin{table}[t] \caption{$E1_{PV}$ values, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, of the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ and $6s ~ ^2S_{1/2}- 5d ~ ^2D_{3/2}$ transitions in $^{133}$Cs from the DC Hamiltonian, as reported by various works. Methods shown as `Sum-over' and `Mixed' refer to results obtained using the sum-over-states approach and mixed many-body methods, respectively. Results shown in bold font are claimed to be within 0.5\% accuracy.} \begin{tabular}{lll ll } \hline \hline \\ Method & This work & Others & This work & Others \\ \hline \\ \multicolumn{3}{c}{\underline{$6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$}} & \multicolumn{2}{c}{\underline{$6s ~ ^2S_{1/2}-5d ~ ^2D_{3/2}$}} \\ & & & & \\ DHF & $0.7375$ & $0.736$ \cite{maartensson1985} & $-2.3933$ & \\ RMBPT(3)$^w$ & $1.0902$& &$-2.4639$ \\ CPDF & $0.9226$ & $0.924$ \cite{maartensson1985} & $-2.7989$ & \\ RPA & $0.7094$ &$0.707$ \cite{maartensson1985} & $-2.2362$ & \\ CPDF-RPA$^*$ & $0.8876$ & $0.8914$ \cite{Dzuba2012} & $-3.1665$ & $-3.70$ \cite{Roberts2013-2} \\ & & $0.8923^{\dagger}$ \cite{Dzuba2012} & \\ CPDF-RPA &$0.8844$ & $0.886$ \cite{maartensson1985} & $-3.0281$ & $-3.80$ \cite{Roberts2013} \\ & & $0.907$ \cite{Roberts2013} & \\ RCCSD & $0.8964$& $0.8961$ \cite{Sahoo2021} & $-3.5641$&$-3.210$\cite{Sahoosymm} \\ RCCSDT & &$\textbf{0.8967}$ \cite{Sahoo2021} & \\ Sum-over & & $0.9053$ \cite{Porsev2010} & & $-3.76$ \cite{Dzuba2001} \\ & & $\textbf{0.8998}$$^{\dagger}$ \cite{Porsev2010} & \\ Mixed-states & &$0.8967$ \cite{Dzuba2012} & & $-3.62$ \cite{Dzuba2001} \\ & & 0.8938$^{\dagger}$ \cite{Dzuba2012} & & \\ & & \textbf{0.9083}$^{\ddagger}$ \cite{Dzuba2012} & \\ \hline \hline \multicolumn{5}{l}{$^\dagger$Note: Scaled value.} \\ \multicolumn{5}{l}{$^\ddagger$Scaled value $+$ borrowed contribution from Ref. \cite{Porsev2010}.} \end{tabular} \label{tab1} \end{table} \begin{table*}[t] \caption{Reduced E1 (in a.u.)
and $H_W$ (in units of $10^{-11}i(-Q_w/Nn)|e|a_0$) matrix elements from the RCCSD method, and the excitation energies (in cm$^{-1}$) of the low-lying states of $^{133}$Cs that are used to estimate the `Main' contribution to $E1_{PV}$ for the $6S-7S$ transition. The E1 matrix elements and energies are compared with the most precise available experimental values.} \begin{ruledtabular} \begin{tabular}{l c c c c c} Transition & \multicolumn{2}{c}{E1 amplitude}& \multicolumn{2}{c}{Excitation energy} & $H_W$ amplitude \\ \cline{2-3} \cline{4-5} & This work & Experiment & This work & Experiment \cite{NIST} & \\ \hline $6P_{1/2}$-$6S$ & $4.5487$ & $4.5097(74)$ \cite{Young1994} & $-11243.93$ & $-11178.27$ & $-1.2541$\\ $7P_{1/2}$-$6S$ & $0.3006$ & $0.2825(20)$ \cite{Vasilyev2002} & $-21838.93$ & $-21765.35$ & $-0.7135$\\ $8P_{1/2}$-$6S$ & $0.0914$ & & $-25787.48$ & $-25708.83$ & $-0.4808$\\ $9P_{1/2}$-$6S$ & $-0.0388$ & & $-27735.96$ & $-27637.00$ & $0.3471$\\ $6P_{1/2}$-$7S$ & $-4.2500$ &$4.233(22)$\cite{Bouchiat1984} & $7352.53$ & $7357.26$ & $0.6067$\\ $7P_{1/2}$-$7S$ & $10.2967$ & $10.308(15)$\cite{Bennett1999} & $-3242.47$ & $-3229.82$ & $0.3445$\\ $8P_{1/2}$-$7S$ & $0.9492$ & & $-7191.02$ & $-7173.31$ & $0.2320$\\ $9P_{1/2}$-$7S$ & $-0.3867$ & & $-9139.50$ & $-9101.47$ & $-0.1674$\\ \end{tabular} \end{ruledtabular} \label{tab2} \end{table*} It can further be noted that in the CPDF method the perturbed core DHF orbital ($| a^{PV} \rangle $) is orthogonal to the unperturbed core orbital ($| a \rangle$), and the same is also true in the RPA, i.e., $\langle a | a^{PV} \rangle=0$ and $\langle a | a^{\pm} \rangle=0$. However, $\langle a| a^{PV \pm } \rangle \ne 0$ in the CPDF-RPA method. This necessitates the use of orthogonalized core orbitals ($| a^{o \pm} \rangle$), obtained by imposing the condition \begin{eqnarray} | a^{o \pm} \rangle &=& |a^{PV \pm } \rangle - \sum_b |b \rangle \langle b | a^{PV \pm } \rangle . \end{eqnarray} In Fig. \ref{CPHF-RPA}, we show the Goldstone diagrams contributing to the determination of $|a^{PV \pm } \rangle$, along with the extra diagrams that are subtracted to obtain $| a^{o \pm} \rangle$. As can be understood below, it is not necessary to obtain the modified virtual orbitals in the CPDF-RPA method in order to estimate the $E1_{PV}$ amplitude, so we do not show Goldstone diagrams contributing to the amplitudes of the valence operator here. Using the above expressions in the formula given by Eq. (\ref{cprpa1}), we can write \begin{eqnarray} E1_{PV} &=& \langle f | d + u_i^+ | i^{PV} \rangle + \langle f | h_w + u_i^{PV} | i^+ \rangle \nonumber \\ && + \langle f | u_i^{PV +} | i \rangle . \label{cprpa3} \end{eqnarray} Similarly, using the formula given by Eq. (\ref{cprpa2}), we get \begin{eqnarray} E1_{PV} &=& \langle f^{PV} | d + u_i^+ | i \rangle + \langle f^{-} | h_w + u_i^{PV} | i \rangle \nonumber \\ && + \langle f | u_f^{PV -} | i \rangle \nonumber \\ &=& \langle f^{PV} | d + u_i^+ | i \rangle + \langle f^{-} | h_w + u_i^{PV} | i \rangle \nonumber \\ && + \langle f | u_i^{PV +} | i \rangle . \label{cprpa4} \end{eqnarray} In the wave operator form, Eq. (\ref{cprpa3}) can be given by \begin{eqnarray} E1_{PV} &=& \langle \Phi_f | D \Omega^{i,CPDF} | \Phi_i \rangle + \langle \Phi_f | H_W \Omega^{i,+} | \Phi_i \rangle \nonumber \\ && + \langle \Phi_f | \Omega^{CPDF + } | \Phi_i \rangle.
\label{eqq1} \end{eqnarray} From Eq. (\ref{cprpa4}), we can write \begin{eqnarray} E1_{PV} &=& \langle \Phi_f | \Omega^{f,CPDF \dagger} D | \Phi_i \rangle + \langle \Phi_f | \Omega^{- \dagger} H_W | \Phi_i \rangle \nonumber \\ && + \langle \Phi_f | \Omega^{f,CPDF -} | \Phi_i \rangle \nonumber \\ &=& \langle \Phi_f | \Omega^{f,CPDF \dagger} D | \Phi_i \rangle + \langle \Phi_f | \Omega^{f,- \dagger} H_W | \Phi_i \rangle \nonumber \\ && + \langle \Phi_f | \Omega^{CPDF +} | \Phi_i \rangle . \label{eqq2} \end{eqnarray} In the above expressions, we define \begin{eqnarray} \Omega^{CPDF + } = \sum_{i, j} ( \langle j | u_i^+ | i^{PV} \rangle + \langle j | u_i^{PV} | i^+ \rangle \nonumber \\ + \langle j | u_i^{PV +} | i \rangle ) a_j^{\dagger} a_i \end{eqnarray} and \begin{eqnarray} \Omega^{CPDF - } = \sum_{i, j} ( \langle j^{PV} | u_i^- | i \rangle + \langle j^- | u_i^{PV} | i \rangle \nonumber \\ + \langle j | u_i^{PV -} | i \rangle ) a_j^{\dagger} a_i . \end{eqnarray} It is also worth noting that some works in the literature do not consider the contribution from $\langle f | u_i^{PV +} | i \rangle $ in the CPDF-RPA method, and quote its contribution separately as the `DCP' effect. In Fig. \ref{CPHF-RPA1}, we show the diagrams contributing to $E1_{PV}$ in the CPDF-RPA method including the DCP effects. Now, comparing the diagrams from Fig. \ref{CPHF-RPA1} with the diagrams from the RMBPT(3) method shown in Fig. \ref{MBPT}, it can be shown that some of the Core and Valence correlation diagrams of the RMBPT method appear as the Core diagrams of the CPDF-RPA method (and similarly for the Valence contributions). This, therefore, clearly demonstrates that the definitions of the Core and Valence correlation contributions to $E1_{PV}$ are not unique, and that their classifications differ based on the approach adopted in a many-body method. Among approximate methods where various physical effects, or the correlation effects through both the $H_W$ and $D$ operators, are not included on an equal footing, it may not be possible to make a one-to-one comparative analysis among the contributions arising through the Core and Valence correlations. In such a scenario, it is advisable to compare only the final results from different methods. The advantages of using the CPDF-RPA method are that it includes core-polarization effects to all orders, treats both the $H_W$ and $D$ operators on an equal footing (which means the results remain invariant regardless of the order in which $H_W$ or $D$ is included in $H_t$ along with $H$), and gives the DCP effects that are not present in the CPDF method or the RPA. However, it still misses many non-core-polarization effects, including pair-correlation contributions and correlations between core-polarization and pair-correlation effects, in the determination of $E1_{PV}$. Again, the orthogonalization of the perturbed occupied orbitals is incorporated by hand, since the approach does not take care of it in a natural manner. We would like to mention that some of the earlier calculations using the CPDF-RPA method neglected contributions from the DCP effects (see Ref. \cite{Roberts2013}). Results from such an approximation are denoted as the CPDF-RPA$^*$ method. In fact, the omission of some of the DCP contributions in that method is due to just such a reason. Moreover, the CPDF, RPA and CPDF-RPA methods cannot be derived using Bloch's prescription, though we have expressed them using wave operators just to make a one-to-one connection of these methods with the RMBPT method.
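Returning to Eq. (\ref{cprpa3}), its assembly from the pieces computed by the two sketches above can be written as follows; \texttt{u\_plus}, \texttt{u\_pv} and \texttt{u\_pv\_plus} stand for matrices of $\langle j|u^{+}|k\rangle$, $\langle j|u^{PV}|k\rangle$ and $\langle j|u^{PV+}|k\rangle$ built along the lines of Eqs. (\ref{eqhfu2}), (\ref{eqhfu1}) and (\ref{eqhfu4}); they are hypothetical helper inputs of this toy model, not part of any published implementation. \begin{verbatim}
# Xpv[:, i] and Xp[:, i] are the |i^PV> and |i^+> coefficients
# returned by the CPDF and RPA sketches; d and hw are the bare
# one-body operator matrices.
def e1pv_cpdf_rpa(d, hw, u_plus, u_pv, u_pv_plus, Xpv, Xp, i, f):
    n = d.shape[0]
    term1 = sum((d[f, p] + u_plus[f, p]) * Xpv[p, i] for p in range(n))
    term2 = sum((hw[f, p] + u_pv[f, p]) * Xp[p, i] for p in range(n))
    term3 = u_pv_plus[f, i]      # the DCP piece <f|u^{PV+}|i>
    return term1 + term2 + term3
\end{verbatim}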
These methods are thus simply extensions of the DHF method and cannot take into account effects that may have been neglected in generating the single particle orbitals. For example, the effects of the $V^N$, $V^{N-1}$, $V^{N-2}$, etc. potentials used in the determination of wave functions through the Bloch equation are taken care of through an effective Hamiltonian like $H_{eff}= P_v V_{res} \Omega_v^{(0)} P_v$, which appears in solving for the amplitudes of $\Omega_v^{(0)}$. However, the wave operator amplitude-solving equations in the CPDF, RPA and CPDF-RPA methods remain the same. Thus, the valence electron interaction neglected in the construction of the DHF potential (inactive valence orbital) is not amended in the above methods as it is in the RMBPT method. All these shortcomings of the CPDF-RPA method will be adequately addressed by the RCC method. \begin{table}[t] \caption{Estimated `Main' contributions to the $E1_{PV}$ values, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, of the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ transition in $^{133}$Cs using matrix elements involving the $np ~ ^2P_{1/2}$ intermediate states in the sum-over-states approach. Four cases are considered: (a) the $\textit{ab initio}$ result, in which the calculated values from Table \ref{tab2} are used; (b) replacing the calculated E1 matrix elements by their experimental values; (c) retaining the calculated E1 matrix elements and using experimental energies; and (d) using experimental values for both the E1 matrix elements and the energies.} \begin{ruledtabular} \begin{tabular}{c c c c} Approach & $\langle7S|D|6S^{PNC}\rangle$ & $\langle 7S^{PNC}|D|6S\rangle$ & Total \\ \hline (a) & $-0.4461$ & $1.3171$ & $0.8710$\\ (b) & $-0.4373$ & $1.3121$ & $0.8748$\\ (c) & $-0.4522$ & $1.3156$ & $0.8634$\\ (d) & $-0.4434$ & $1.3106$ & $0.8672$\\ \end{tabular} \end{ruledtabular} \label{tab3} \end{table} \subsection{RCC method} The RCC method, in both its non-relativistic and relativistic forms, has been extensively employed in recent times to accurately include electron correlation effects in the determination of properties of many different types of many-body systems, such as nuclear, atomic, molecular and solid-state systems. This is the reason it is nowadays commonly referred to as the gold standard of many-body theory. Compared to the CPDF, RPA and CPDF-RPA methods, the implementation and computational effort of the RCC method are considerably more complex and expensive. It can account for correlations through both the $H_W$ and $D$ operators to all orders, and it also takes care of the other shortcomings of the CPDF-RPA method. All the CPDF-RPA effects, along with other effects like pair correlations, inter-correlations among core-polarization and pair-correlation effects, corrections due to the choice of the $V^{N-1}$ DHF potential approximation, etc., are subsumed within our RCC method. This theory has already been implemented, and results obtained with it have been reported for Cs \cite{Sahoo2006, Sahoo2010}, Ba$^+$ \cite{Sahoo2006}, Yb$^+$ \cite{Sahooyb2011} and Ra$^+$ \cite{Wansbeek2008}. Here, we consider this theory in the singles and doubles approximation (the RCCSD method) only, to demonstrate how, compared to a mixed many-body method, it captures the correlations of the previously mentioned methods, including the appearance of the orthogonalization of the perturbed core orbitals in a natural fashion, all-order pair correlations, additional DCP effects, and the normalization of the wave functions.
Since all these effects are present within the RCC theory and the wave functions are obtained through an iterative scheme, all of these effects are inter-correlated. Additional effects from the Breit and QED interactions can also be treated in a similar fashion if their corresponding interaction potential terms are added to the atomic Hamiltonian. Going beyond the RCCSD approximation in the RCC theory, such as to the RCCSDT method, can capture even higher-order non-core-polarization effects and further inter-correlations among core-polarization and pair-correlation effects that are beyond the reach of the mixed many-body methods employed earlier to estimate the $E1_{PV}$ amplitudes. As shown in Ref. \cite{Sahoo2021}, though the differences in the energies, E1 matrix elements and magnetic dipole hyperfine structure constants between the RCCSD and RCCSDT methods were quite significant, the difference in the $E1_{PV}$ values from these two methods was rather small. This suggests that consideration of the RCCSD method is sufficient to address the aforementioned concerns. In the RCC theory framework, the exact wave function of an atomic state can be given by \begin{eqnarray} |\Psi_v \rangle = e^S |\Phi_v \rangle , \end{eqnarray} where $S$ is an excitation operator that excites electrons out of the core orbitals of the DHF wave function $|\Phi_v \rangle$ to generate the excited-state configurations due to $V_{res}$. In other words, these excitation configurations can be thought of as contributions taken from each order of corrections to the wave function in the RMBPT method so as to construct an all-order form. We can further define \begin{eqnarray} S = T + S_v , \end{eqnarray} in order to distinguish excitations of electrons from among the core orbitals, denoted by $T$, from excitations of electrons from the valence orbital, or from the valence orbital along with core orbitals, denoted by $S_v$, in the $V^{N-1}$ framework of constructing the DHF wave function $|\Phi_v \rangle= a_v^{\dagger} |\Phi_0 \rangle$. Accordingly, we can write \begin{eqnarray} |\Psi_v \rangle &=& e^{T+S_v} |\Phi_v \rangle \nonumber \\ &=& e^T \left \{ 1+ S_v \right \} |\Phi_v \rangle . \end{eqnarray} Here $e^{S_v} = 1+ S_v$ is the exact form for atomic states having one valence orbital $v$. In the RCCSD method, the excitation operators are denoted as \begin{eqnarray} T = T_1 + T_2 \end{eqnarray} and \begin{eqnarray} S_v = S_{1v} + S_{2v} , \end{eqnarray} where the subscripts 1 and 2 stand for the singles and doubles excitations, respectively. \begin{table}[t] \caption{Core and Valence correlation contributions to the $E1_{PV}$ values, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, for the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ and $6s ~ ^2S_{1/2}- 5d ~ ^2D_{3/2}$ transitions in $^{133}$Cs from the RMBPT(3)$^w$ and RMBPT(3)$^d$ approaches.
Results obtained (a) considering the $H_{eff}$ effect due to the $V^{N-1}$ potential and (b) using DHF orbital energies are shown for comparison.} \begin{ruledtabular} \begin{tabular}{lcccc} Approach&\multicolumn{2}{c}{RMBPT(3)$^w$}&\multicolumn{2}{c}{RMBPT(3)$^d$}\\ \cline{2-5} &Core&Valence&Core&Valence\\ \hline & &\multicolumn{2}{c}{{\underline{$6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$}}}&\\ (a)&$-0.00205$&$1.09220$&$-0.00003$&$1.24290$\\ (b)&$-0.00206$&$0.43938$&$-0.00031$&$0.43763$\\ \hline\hline\\ & &\multicolumn{2}{c}{\underline{$6s ~ ^2S_{1/2}-5d ~ ^2D_{3/2}$}}&\\ (a)&$-0.16391$&$-2.30000$&$-0.12208$&$-2.55427$\\ (b)&$-0.18267$&$-3.97004$&$-0.12513$&$-4.02758$\\ \end{tabular} \end{ruledtabular} \label{RMBPT3} \end{table} In the wave operator form given by Eqs. (\ref{eqp}) and (\ref{eqw}), this corresponds to \begin{eqnarray} \Omega_0 = e^T \end{eqnarray} and \begin{eqnarray} \Omega_v = e^T S_v . \end{eqnarray} Following Bloch's equations, given in general form by Eqs. (\ref{blw0}) and (\ref{blwv}), the amplitudes of the $T$ and $S_v$ excitation operators are obtained from \begin{eqnarray} \langle \Phi_0^* | (H e^T)_l | \Phi_0 \rangle =0 \end{eqnarray} and \begin{eqnarray} \langle \Phi_v^* | [(H e^T)_l - E_v] S_v | \Phi_v \rangle = - \langle \Phi_v^* | (H e^T)_l | \Phi_v \rangle , \label{eqsv1} \end{eqnarray} where the subscript $l$ denotes the linked terms, and bra states with a superscript $*$ denote excited states with respect to the respective DHF ket states appearing in the equations. In the {\it ab initio} procedure, the energy of $|\Psi_v \rangle$ is determined by calculating the expectation value of the effective Hamiltonian, i.e., \begin{eqnarray} E_v&=&\langle \Phi_v|H_{eff}|\Phi_v \rangle\nonumber\\ &=&\langle \Phi_v| P_v (H e^T )_l \left \{ 1 + S_v \right \} P_v|\Phi_v \rangle , \label{heff1} \end{eqnarray} with respect to $|\Phi_v \rangle$. As can be noticed, $E_v$ is a function of $S_v$, and $S_v$ itself depends on $E_v$. Thus, the non-linear Eqs. (\ref{eqsv1}) and (\ref{heff1}) are solved iteratively to obtain the amplitudes of $S_v$. As pointed out earlier, the appearance of $E_v$ in the determination of the $S_v$ amplitudes is a consequence of using orbitals from the $V^{N-1}$ potential. It is also possible to obtain the amplitudes of the RCC operators by substituting the experimental energy, in a semi-empirical approach. One can also scale the $S_v$ amplitudes by multiplying them by a suitable parameter $\lambda$ so that the calculated $E_v$ value matches the experimental energy, in the spirit of the {\it scaling} procedures adopted in Refs. \cite{Porsev2010} and \cite{Dzuba2012}. \begin{table}[t] \caption{{\it Ab initio} contributions to the Core and Valence parts of the $E1_{PV}$ values, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, for the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ and $6s ~ ^2S_{1/2}- 5d ~ ^2D_{3/2}$ transitions in $^{133}$Cs from the different methods considered in this work using the DC Hamiltonian.
Available results from previous calculations are also given for comparison.} \begin{tabular}{lcc cc} \hline \hline \\ Method & \multicolumn{2}{c}{This work} & \multicolumn{2}{c}{Others} \\ \cline{2-3} \cline{4-5} \\ & Core & Valence & Core & Valence \\ \hline \\ \multicolumn{5}{c}{\underline{$6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$}} \\ & & \\ DHF & $-0.00173$ & $0.73923$ &$-0.00174$\cite{Roberts2022} \\ RMBPT(3)$^w$ & $-0.00205$&$1.09220$ \\ RMBPT(3)$^d$ &$-0.00003$ &$1.24290$ \\ CPDF & $-0.00199$ & $0.92454$ &$-0.00201$ \cite{Roberts2022} \\ RPA & $0.00028$ &$0.70912$ \\ CPDF-RPA$^*$ & $0.00169$ & $0.88591$ & $0.00170$ \cite{Roberts2022} \\ CPDF-RPA & $0.00169$ & $0.88267$ & & \\ RCCSD & $-0.00197$& $0.89840$ & $-0.0019$ \cite{Sahoo2021} & $0.8980$ \cite{Sahoo2021}\\ \hline \\ \multicolumn{5}{c}{\underline{$6s ~ ^2S_{1/2}-5d ~ ^2D_{3/2}$}} \\ & & \\ DHF & $-0.11684$ & $-2.27646$ \\ RMBPT(3)$^w$ &$-0.16391$&$-2.30000$ \\ RMBPT(3)$^d$ & $-0.12208$&$-2.55427$ \\ CPDF & $-0.19122$ &$-2.60768$ \\ RPA & $-0.12037$ &$-2.11585$ \\ CPDF-RPA$^*$ & $-0.20786$ & $-2.95860$ \\ CPDF-RPA &$-0.20786$ &$-2.82021$ \\ RCCSD & $-0.14745$& $-3.41667$ & & \\ \hline \hline \end{tabular} \label{tab4} \end{table} To evaluate $E1_{PV}$, we need to express the RCC operators in terms of both the unperturbed and the first-order perturbed operators. As explained in Sec. \ref{sec3}, we have three different options to obtain the $E1_{PV}$ amplitude in the RCC theory framework. The first is to adopt an approach similar to the CPDF method, in which $H_W$ is considered as the external perturbation and the matrix element of $D$ is determined. The second is to consider $D$ as the external perturbative operator, as in the RPA. The third and most effective approach would be along the lines of the CPDF-RPA method, in which both the $H_W$ and $D$ operators are treated as external perturbations. The implementation of the third approach would be more challenging and computationally very expensive, as it demands storing the amplitudes of four different types of perturbed RCC operators, instead of only one type of perturbed amplitudes in the first case and two types in the second case. Between the first two approaches the computational efforts are almost similar, but considering $H_W$ as the perturbation is more natural implementation-wise, and its angular momentum couplings are easier to deal with owing to its scalar form. Moreover, the amplitudes of the perturbed operators due to $H_W$ converge faster than when $D$ is treated as the perturbation. Again, it is possible to use experimental energies in the first approach to obtain semi-empirical results if required, while this is problematic in the second case for the reason already discussed. In view of this, we adopt the first approach to estimate the $E1_{PV}$ value. We expand the $T$ and $S_v$ operators by treating $H_W$ as the perturbation, separating out the solutions for the unperturbed and the first-order wave functions by expressing \begin{eqnarray} T = T^{(0)} + \lambda_2 T^{(1)} \end{eqnarray} and \begin{eqnarray} S_v = S_v^{(0)} + \lambda_2 S_v^{(1)} , \end{eqnarray} where the superscripts have the same meanings as specified earlier.
These expansions yield \begin{eqnarray} |\Psi_v^{(0)}\rangle &=& (\Omega_0^{(0)} + \Omega_v^{(0)} ) |\Phi_v \rangle \end{eqnarray} and \begin{equation} |\Psi_v^{(1)}\rangle=(\Omega_0^{(1)} + \Omega_v^{(1)} ) |\Phi_v \rangle \end{equation} with the definitions $\Omega_0^{(0)}= e^{T^{(0)}}$, $\Omega_0^{(1)}= e^{T^{(0)}}T^{(1)}$, $\Omega_v^{(0)}= e^{T^{(0)}}S_v^{(0)}$ and $\Omega_v^{(1)}= e^{T^{(0)}} \left \{ T^{(1)} S_v^{(0)} + S_v^{(1)} \right \}$. The unperturbed operator amplitudes are obtained by solving the usual RCC theory equations mentioned above. The first-order perturbed RCC operator amplitudes are determined from \begin{eqnarray} \langle \Phi_0^* | (H e^{T^{(0)}})_l T^{(1)} | \Phi_0 \rangle = - \langle \Phi_0^* | (H_W e^{T^{(0)}})_l | \Phi_0 \rangle \end{eqnarray} and \begin{eqnarray} \langle \Phi_v^* | [(H e^{T^{(0)}})_l - E_v^{(0)}] S_v^{(1)} | \Phi_v \rangle = -\langle \Phi_v^* | [ (H_W e^{T^{(0)}})_l && \nonumber \\ + (H e^{T^{(0)}})_l T^{(1)} ] \{ 1+S_v^{(0)} \} | \Phi_v \rangle . \ \ \ \ \label{eqsv} \end{eqnarray} As can be seen, the exact calculated energy also enters the amplitude-determining equation of $S_v^{(1)}$ because of the $V^{N-1}$ potential; this is one of the advantages of the RCC method over the CPDF-RPA method. In Figs. \ref{RCC-T0}, \ref{RCC-Sv0}, \ref{RCC-T1} and \ref{RCC-Sv1}, we show some of the important Goldstone diagrams contributing to the $T_2^{(0)}$, $T_1^{(0)}$, $S_{1v}^{(0)}$, $T_2^{(1)}$, $T_1^{(1)}$, $S_{2v}^{(0)}$, $S_{1v}^{(1)}$ and $S_{2v}^{(1)}$ amplitudes. These diagrams can be compared with the amplitude-determining diagrams of the CPDF-RPA wave operators in order to understand how the latter are embedded within the RCC operators, apart from the fact that the denominators in the RCC method contain the exact energy of the state instead of the DHF energy used in the CPDF-RPA method. The $E1_{PV}$ expression between the states $| \Psi_i \rangle$ and $| \Psi_f \rangle$ in the RCC theory is given by \begin{eqnarray} E1_{PV} = \frac{\langle \Phi_f | \{S_f^{(1)\dagger} + (S_f^{(0)\dagger} +1) T^{(1)\dagger}\} \bar{D} \{ 1+ S_i^{(0)} \} |\Phi_i \rangle} {\langle \Phi_f | \{S_f^{(0)\dagger} +1 \} \bar{N} \{ 1+ S_i^{(0)} \} |\Phi_i \rangle} && \nonumber \\ + \frac{\langle \Phi_f |\{ S_f^{(0)\dagger} +1 \} \bar{D} \{T^{(1)}(1+ S_i^{(0)}) + S_i^{(1)}\} |\Phi_i \rangle}{\langle \Phi_f | \{S_f^{(0)\dagger} +1 \} \bar{N} \{ 1+ S_i^{(0)} \} |\Phi_i \rangle} , \ \ \ && \label{e1pnc} \end{eqnarray} where $\bar{D}=e^{T^{(0)\dagger}}De^{T^{(0)}}$ and $\bar{N}=e^{T^{(0)\dagger}}e^{T^{(0)}}$. Unlike in the CPDF, RPA and CPDF-RPA methods, the normalization factors appear explicitly in the RCC expression. Using the wave operator notation, one can easily identify which RCC terms contribute to the Core and which to the Valence correlations in the evaluation of $E1_{PV}$: any term connected either with the $S_{n=i,f}^{(0/1)}$ operators or with their conjugates is part of the Valence correlation, otherwise it is part of the Core correlation. We further clarify that the definitions of the Core and Valence correlation contributions to $E1_{PV}$ in our RCC theory are in line with those of the RMBPT(3)$^w$ and CPDF methods, and differ from those of the RPA and CPDF-RPA methods. In Fig. \ref{RCC-prop} we show a few important Goldstone diagrams of the RCC method contributing to the Core and Valence correlations. Also, for better understanding, the Goldstone diagrams of the RCCSD operators are further decomposed as sums of lower-order Goldstone diagrams of the RMBPT(3) method.
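For illustration, the self-consistent structure of Eqs. (\ref{eqsv1}) and (\ref{heff1}), together with the linear-response structure of Eq. (\ref{eqsv}), can be mimicked with a small toy model in which a dense matrix stands in for the projected $(H e^{T})_l$ block and a linear functional stands in for the $H_{eff}$ expectation value. The Python sketch below is only a schematic of the solution strategy, not the production RCC code; every quantity in it is an invented stand-in.
\begin{verbatim}
import numpy as np

# Toy stand-ins (NOT actual RCC quantities): A ~ projected (H e^T)_l block in
# the space of excited determinants, b ~ <Phi*_v|(H e^T)_l|Phi_v>, and
# E_v = e0 + c.s_v mimics the H_eff expectation value of Eq. (heff1).
rng = np.random.default_rng(7)
n = 8
A = np.diag(np.linspace(1.0, 3.0, n)) + 0.05 * rng.standard_normal((n, n))
b = 0.1 * rng.standard_normal(n)
c = 0.1 * rng.standard_normal(n)
e0 = -0.5

s_v, E_v = np.zeros(n), e0
for sweep in range(1, 201):
    s_new = np.linalg.solve(A - E_v * np.eye(n), -b)   # Eq. (eqsv1) at fixed E_v
    E_new = e0 + c @ s_new                             # Eq. (heff1) at fixed S_v
    delta = max(np.abs(s_new - s_v).max(), abs(E_new - E_v))
    s_v, E_v = s_new, E_new
    if delta < 1e-12:
        break
print(f"E_v converged to {E_v:.10f} in {sweep} sweeps")

# First-order response, Eq. (eqsv): the same matrix with E_v frozen at its
# converged unperturbed value; only the inhomogeneous side involves H_W.
b1 = 0.1 * rng.standard_normal(n)      # toy right-hand side built from H_W
s_v1 = np.linalg.solve(A - E_v * np.eye(n), -b1)
\end{verbatim}
In a production implementation the dense solve would typically be replaced by Jacobi-type sweeps with orbital-energy denominators, but the essential point is the same: $E_v$ is refreshed after every sweep, which is precisely how the $V^{N-1}$ potential renders the valence problem self-consistent, while the perturbed amplitudes follow from a single linear problem at fixed $E_v^{(0)}$.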
From these relations, it follows that the RCC method includes correlation effects from core-polarization, pair-correlation and DCP to all orders. It is also obvious from the above diagrams that orthogonalization to the core orbitals and extra DCP contributions appear in a natural manner in our RCC theory. Moreover, correlations among all these effects are implicitly present, since the singles and doubles excitation amplitude equations are coupled through many non-linear terms in the RCC theory. \begin{table}[t] \caption{Comparison of contributions from the initial and final perturbed states to $E1_{PV}$ of the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ transition of $^{133}$Cs, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, at different levels of approximation, between the present work and the values reported in Refs. \cite{maartensson1985,Roberts2022}.} \begin{tabular}{l cc cc} \hline\hline \\ Method & \multicolumn{2}{c}{$\langle{7S}^{PNC}|D|6S\rangle$} & \multicolumn{2}{c}{$\langle7S|D|6S^{PNC}\rangle$} \\ \cline{2-3} \cline{4-5} \\ & Ours & Ref. \cite{maartensson1985} & Ours & Ref. \cite{maartensson1985} \\ \hline \\ & \multicolumn{4}{c}{\underline{Total contribution}} \\ DHF &$1.01168$&$1.010$&$-0.27418$&$-0.274$ \\ CPDF & $1.26664$&$1.267$&$-0.34409$&$-0.344$ \\ RPA &$1.02557$&$1.023$&$-0.31617$&$-0.316$ \\ CPDF-RPA* &$1.27910$&$1.279$&$-0.39150$&$-0.391$ \\ \hline \\ & Ours & Ref. \cite{Roberts2022} & Ours & Ref. \cite{Roberts2022} \\ \hline \\ & \multicolumn{4}{c}{\underline{Core contribution}} \\ DHF & $-0.02638$ & $-0.02645$ & $0.02465$ & $0.02472$ \\ CPDF & $-0.04298$ & $-0.04319$ & $0.04099$ & $0.04119$ \\ RPA & $-0.03536$ & & $0.03564$ & \\ CPDF-RPA* & $ -0.05794$& $-0.05822$ & $0.05963$ & $0.05992$ \\ & & & & \\ \multicolumn{5}{c}{\underline{Valence contribution}} \\ DHF &$1.03806$& &$-0.29883$& \\ CPDF & $1.30962$& & $-0.38508$& \\ RPA & $1.06094$& &$-0.35181$& \\ CPDF-RPA* & $1.33704$& &$-0.45113$& \\ \hline \hline \end{tabular} \label{tab5} \end{table}

\section{Results \& Discussions} We present the calculated values of $E1_{PV}$ for the $6S-7S$ and $6S-5D_{3/2}$ transitions in $^{133}$Cs from the DHF, RMBPT(3), CPDF, RPA, CPDF-RPA and RCCSD methods. As mentioned in the Introduction, the main intention of this study is to demonstrate the similarities and differences among the various contributions to $E1_{PV}$ obtained through the above methods. This is useful for addressing the issue of the sign of the Core correlation contribution to the $E1_{PV}$ value of the $6S-7S$ transition in $^{133}$Cs, which is reported differently by various groups \cite{Porsev2010,Dzuba2012,Sahoo2021}. Moreover, this exercise is useful for understanding the contributions missing in one method compared with the others considered here, so that the accuracy of results reported earlier for an atomic system, including Cs, using a particular method can be further improved. In all the considered many-body methods we have allowed correlations from all the occupied orbitals and excitations of electrons to a given set of virtual orbitals, so as to make a comparative analysis of their results. In order to show that we have taken a sufficiently large set of basis functions, we validate our calculations by comparing our $E1_{PV}$ values from the DHF, CPDF, RPA and CPDF-RPA methods with the values reported earlier by M{\aa}rtensson \cite{maartensson1985}.
The reason for presenting the $E1_{PV}$ value of the $6S-5D_{3/2}$ transition in $^{133}$Cs is to respond to a Comment by Roberts and Ginges in Ref. \cite{Roberts2022}, where they point out that the sign of the Core contribution to the $E1_{PV}$ value of an $S-D$ transition reported earlier using the RCCSD method \cite{Wansbeek2008} agrees with theirs, while a sign difference is observed for the $6S-7S$ transition in $^{133}$Cs.

In Table \ref{tab1}, we present the $E1_{PV}$ values of the $6S-7S$ and $6S-5D_{3/2}$ transitions in $^{133}$Cs using the DC Hamiltonian from a number of methods, including the DHF method, in order to understand the importance of correlation effects in their determination and to demonstrate that the choice of method matters a great deal for their rigorous inclusion. The values shown in bold font in this table are claimed by the earlier works to be accurate to within 0.5\%. A careful look into these results reveals that some of them differ from each other by 1\%, which suggests that there could be issues with the accuracy estimates of these calculations that need to be investigated. The results from the sum-over-states approach, given as `Sum-over' in the table, use scaled E1 matrix elements and energies from the CCSDvT method to estimate the Main contribution to $E1_{PV}$ for the $6S-7S$ transition, while the X-factor is obtained using a blend of many-body methods \cite{Porsev2010}. In Ref. \cite{Dzuba2012}, the same `Main' contribution is utilized, but the Core and Tail contributions to the X-factor are estimated using the CPDF-RPA* method (denoted simply as RPA in the original paper). Pair-correlation effects in these estimations were accounted for using the BO-correlation method. The result from these RPA$+$BO methods is given under `Mixed-states' in the above table. Thus, the large discrepancy between the two results in the above table comes from the X-factors estimated in Refs. \cite{Porsev2010, Dzuba2012}. Had the total X-factors agreed between the two works while the individual contributions differed, the difference could have been attributed to the distribution of contributions between the Core and Valence correlations in the approaches considered in the two works. The significant differences seen between the X-factors from the Mixed-states approach and our RCCSD method for both the $6S-7S$ and $6S-5D_{3/2}$ transitions do not support such a redistribution. To understand the reasons for the significant discrepancies among the X-factors from the various works, we first analyze the Main contribution to the $6S-7S$ transition by using the calculated properties from our RCCSD method in the sum-over-states approach. We used the E1 matrix elements and energies from our calculations as well as from experiments in order to expose the differences. In Table \ref{tab2}, we present the $H_W$ matrix elements, E1 matrix elements and energies obtained using the DC Hamiltonian in the RCCSD method. The calculated E1 matrix elements and energies are also compared with the experimental values \cite{Young1994, Vasilyev2002, Bouchiat1984,Bennett1999, NIST} in the same table. Using these values, we estimate the Main contributions to $E1_{PV}$ of the $6S-7S$ transition; they are given in Table \ref{tab3}. Results (a) from {\it ab initio} calculations, (b) using experimental E1 values with calculated energies, (c) using calculated E1 values with experimental energies, and (d) using experimental values for both the E1 matrix elements and the energies are given separately.
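To make the bookkeeping behind Table \ref{tab3} transparent, the sketch below assembles the Main contribution in the standard sum-over-states form, schematically $E1_{PV}^{\rm Main} \simeq \sum_{n} \big[ \langle 7s|D|np_{1/2}\rangle\langle np_{1/2}|H_W|6s\rangle/(E_{6s}-E_{np_{1/2}}) + \langle 7s|H_W|np_{1/2}\rangle\langle np_{1/2}|D|6s\rangle/(E_{7s}-E_{np_{1/2}})\big]$, and swaps calculated for experimental inputs to produce the four variants (a)--(d); $H_W$ matrix elements are always taken from theory. All numerical values in the snippet are placeholders, the genuine inputs being the RCCSD and experimental values of Table \ref{tab2}.
\begin{verbatim}
# PLACEHOLDER inputs, not the Table II values: d = E1 matrix elements,
# hw = H_W matrix elements (always calculated), E = energies (a.u.).
states = ('6p', '7p', '8p', '9p')
d_cal = {('7s','6p'): 4.2, ('6p','6s'): 4.5, ('7s','7p'): 10.3,
         ('7p','6s'): 0.3, ('7s','8p'): 0.9, ('8p','6s'): 0.1,
         ('7s','9p'): 0.6, ('9p','6s'): 0.1}
d_exp = {k: 1.01*v for k, v in d_cal.items()}      # pretend 1% shifts
hw = {('7s', n): 0.01 for n in states}
hw.update({(n, '6s'): 0.01 for n in states})
E_cal = {'6s': 0.0, '7s': 0.084, '6p': 0.050,
         '7p': 0.095, '8p': 0.115, '9p': 0.125}
E_exp = {k: (0.0 if k == '6s' else 0.99*v) for k, v in E_cal.items()}

def main_term(d, E):
    # first-order perturbation theory with nP_{1/2} intermediate states
    return sum(d[('7s', n)]*hw[(n, '6s')]/(E['6s'] - E[n])
             + hw[('7s', n)]*d[(n, '6s')]/(E['7s'] - E[n]) for n in states)

for tag, (d, E) in {'(a)': (d_cal, E_cal), '(b)': (d_exp, E_cal),
                    '(c)': (d_cal, E_exp), '(d)': (d_exp, E_exp)}.items():
    print(tag, f"{main_term(d, E):+.5f}")
\end{verbatim}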
This analysis shows that the result from (b) is larger than (a), whereas the results from (c) and (d) are lower than (a). It means that the accuracy of the energies in a given method affects the results more than that of the E1 matrix elements. Later we demonstrate explicitly that using the experimental energy of only the initial or the final state within otherwise first-principle calculations introduces an error into the $E1_{PV}$ estimate. The differences between the {\it ab initio} and semi-empirical calculations can be minimised by including contributions from triple, quadruple and other higher-level excitations in the RCC method. In Ref. \cite{Sahoo2021}, Sahoo et al. demonstrated that the difference between the {\it ab initio} $E1_{PV}$ values of the $6S-7S$ transition from the RCCSD and RCCSDT methods is very small. They further showed that the Core contributions are almost the same in both methods, and that the agreement of the $E1_{PV}$ values from the two methods results from opposite trends of the correlation effects in the evaluation of the $H_W$ matrix elements compared with the E1 matrix elements and energies. A similar trend was anticipated for the Tail contribution. Subtracting the {\it ab initio} value of Main from the final RCCSD result, we find the X-factor of $E1_{PV}$ for the $6S-7S$ transition to be 0.0254, against 0.0175 and 0.0256 of Refs. \cite{Porsev2010} and \cite{Dzuba2012} respectively, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$. Thus, there is a large difference between the X-factor of Ref. \cite{Porsev2010} and our work, whereas the values from Ref. \cite{Dzuba2012} and the present work almost agree. Since there is a sign difference between the Core contribution of Ref. \cite{Dzuba2012} and the RCCSD value of Ref. \cite{Sahoo2021}, the above analysis suggests that the sign difference is solely due to the different definitions of the Core contribution used in the two works. In order to explain how the definition of the Core contribution changes with the choice of approach for estimating $E1_{PV}$, we present the Core and Valence contributions separately for the $6S-7S$ and $6S-5D_{3/2}$ transitions from both the RMBPT(3)$^w$ and RMBPT(3)$^d$ approaches in Table \ref{RMBPT3}. To demonstrate how the appearance of $H_{eff}$ in the wave-function-determining equation, due to the choice of the $V^{N-1}$ potential, modifies the result, we present RMBPT(3) results both considering the effect of $H_{eff}$ (given as (a) in the above table) and replacing it with the DHF energy, as in the CPDF-RPA method (given as (b) in the above table). As can be seen, the Core and Valence contributions from the RMBPT(3)$^w$ and RMBPT(3)$^d$ approaches come out differently, whereas the final results from the two methods are close to each other. It can also be seen that the changes in the results for both transitions are substantial when $H_{eff}$ is retained in the wave-function-solving equation. \begin{table*}[t] \caption{Contributions to the Core and Valence parts of $E1_{PV}$ from different terms, for the $6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$ and $6s ~ ^2S_{1/2}-5d ~ ^2D_{3/2}$ transitions in $^{133}$Cs, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, from Eqs. (\ref{cprpa3}) and (\ref{cprpa4}) of the CPDF-RPA method, quoted under `Expression a' and `Expression b' respectively.
Results are given using the calculated $\omega$ value, $\omega^{ex}$, and $\omega^{ex}$ together with the experimental energies of the initial and final states (denoted by $E_{i,f}^{expt}$).} \label{tab6} \begin{tabular}{lccc |ccc} \hline \hline Contribution & \multicolumn{3}{c|}{Expression a} & \multicolumn{3}{c}{Expression b} \\ \cline{2-4} \cline{5-7} \\ & \multicolumn{6}{c}{\underline{$6s ~ ^2S_{1/2}-7s ~ ^2S_{1/2}$}} \\ & $\langle7s|h_w|6s^+\rangle$ & $\langle7s|u_{6s}^{PV}|6s^+\rangle$ & Total & $\langle7s^{PV}|d|6s\rangle$ & $\langle7s^{PV}|u_{6s}^{+}|6s\rangle$ & Total \\ \hline \\ Core ($\omega$) &$-0.03536$&$-0.02257$ &$-0.05794$ & $-0.04299$&$-0.01495$ &$-0.05794$ \\ Valence ($\omega$) &$1.06094$ &$0.27610$ &$1.33704$ & $1.30962$&$0.02742$ &$1.33704$\\ & & & & & & \\ Core ($\omega^{ex}$) &$-0.03464$&$-0.02211$&$-0.05675$&$-0.04299$&$-0.01458$&$-0.05757$\\ Valence ($\omega^{ex}$)&$-0.19464$&$-0.04598$&$-0.24062$&$1.30962$&$0.02743$&$1.33705$\\ & & & & & & \\ Core ($E_{i,f}^{expt}$)&$-0.03546$&$-0.02283$&$-0.05829$&$-0.04331$&$-0.01498$&$-0.05829$\\ Valence ($E_{i,f}^{expt}$)&$1.21721$&$0.31956$&$1.53677$&$1.53384$&$0.00293$&$1.53677$\\ \hline \\ & $\langle 7s|d|6s^{PV}\rangle$ & $\langle 7s|u_{6s}^{+}|6s^{PV}\rangle$ & Total & $\langle 7s^{-}|h_w|6s\rangle$ & $\langle7s^{-}|u_{6s}^{PV}|6s\rangle$ & Total\\ \hline \\ Core ($\omega$) &$0.04099$ &$0.01864$ &$0.05963$ &$0.03564$ &$0.02399$ &$0.05963$ \\ Valence ($\omega$) &$-0.38508$ &$-0.06605$ &$-0.45113$ &$-0.35181$ &$-0.09932$ &$-0.45113$ \\ & & & & & & \\ Core ($\omega^{ex}$) &$0.04099$&$0.01915$&$0.06014$&$0.03651$&$0.02458$&$0.06109$\\ Valence ($\omega^{ex}$)&$-0.38508$&$-0.06644$&$-0.45152$&$-0.17081$&$-0.05022$&$-0.22103$\\ & & & & & & \\ Core ($E_{i,f}^{expt}$) &$0.04210$&$0.01999$&$0.06209$&$0.03686$&$0.02523$&$0.06209$\\ Valence ($E_{i,f}^{expt}$) &$-0.12128$&$-0.05800$&$-0.17928$&$-0.13743$&$-0.04185$&$-0.17928$\\ \hline \hline\\ & \multicolumn{6}{c}{\underline{$6s ~ ^2S_{1/2}-5d ~ ^2D_{3/2}$}} \\ & $\langle5d_{3/2}|h_w|6s^+\rangle$ & $\langle5d_{3/2}|u_{6s}^{PV}|6s^+\rangle$ & Total & $\langle5d_{3/2}^{PV}|d|6s\rangle$ & $\langle5d_{3/2}^{PV}|u_{6s}^{+}|6s\rangle$ & Total \\ \hline \\ Core ($\omega$) &$0.0$ &$-0.00616$ &$-0.00616$ &$-0.00451$ & $-0.00165$&$-0.00616$\\ Valence ($\omega$)&$0.0$ &$-0.27386$ &$-0.27386$&$-0.27878$ &$0.00492$ & $-0.27386$ \\ & & & & & & \\ Core ($\omega^{ex}$)&$0.0$&$-0.00612$&$-0.00612$&$-0.00451$&$-0.00164$&$-0.00615$\\ Valence ($\omega^{ex}$)&$0.0$&$-0.23287$&$-0.23287$&$-0.27878$&$0.00493$&$-0.27385$\\ & & & & & & \\ Core ($E_{i,f}^{expt}$) &$0.0$&$-0.00628$&$-0.00628$&$-0.00459$&$-0.00169$&$-0.00628$\\ Valence ($E_{i,f}^{expt}$) &$0.0$&$-0.83936$&$-0.83936$&$-0.87962$&$0.04028$&$-0.83936$\\ \hline \\ & $\langle 5d_{3/2}|d|6s^{PV}\rangle$ & $\langle 5d_{3/2}|u_{6s}^{+}|6s^{PV}\rangle$ & Total & $\langle 5d_{3/2}^{-}|h_w|6s\rangle$ & $\langle5d_{3/2}^{-}|u_{6s}^{PV}|6s\rangle$ & Total\\ \hline \\ Core ($\omega$)&$-0.19574$&$-0.00596$ &$-0.20170$& $-0.12037$&$-0.08133$ &$-0.20170$ \\ Valence ($\omega$) & $-2.88646$&$0.20172$ & $-2.68474$&$-2.11585$ & $-0.56889$&$-2.68474$ \\ & & & & & & \\ Core ($\omega^{ex}$)&$-0.19574$&$-0.00636$&$-0.20210$&$-0.12109$&$-0.08182$&$-0.20291$\\ Valence ($\omega^{ex}$)&$-2.88646$&$0.20201$&$-2.68445$&$-1.94829$&$-0.52407$&$-2.47236$\\ & & & & & & \\ Core ($E_{i,f}^{expt}$) &$-0.20110$&$-0.00744$&$-0.20854$&$-0.12363$&$-0.08491$&$-0.20854$\\ Valence ($E_{i,f}^{expt}$) &$-2.02306$&$0.16022$&$-1.86284$&$-1.46451$&$-0.39833$&$-1.86284$\\ \hline \hline \end{tabular}
\end{table*} To investigate further the mismatch among the X-factors from the various works, we present in Table \ref{tab4} the Core and Valence contributions to the $E1_{PV}$ values separately for both the $6S-7S$ and $6S-5D_{3/2}$ transitions, as obtained from first-principle calculations. As can be seen from the table, the signs of the Core contributions to $E1_{PV}$ of both transitions from the DHF method and from the many-body methods, at a given level of approximation employed by different groups, match each other. This indicates that there is no issue with the implementation of these theories in our code. To support the results from our methods further, we also compare in Table \ref{tab5} the Core and Valence contributions to the $6S-7S$ transition from the initial and final perturbed states, obtained through the DHF, CPDF, RPA and CPDF-RPA* methods, with the values reported in a Comment by Roberts and Ginges \cite{Roberts2022} and by M{\aa}rtensson \cite{maartensson1985}. We find reasonably good agreement between our results and the earlier estimations. As explained in the previous section, the definitions of the Core correlation effects arising through the CPDF, RPA and CPDF-RPA methods all differ. Thus, the exact reason why the sign of the Core contribution to $E1_{PV}$ of the $6S-7S$ transition in $^{133}$Cs differs between Ref. \cite{Porsev2010} and Ref. \cite{Dzuba2012} is not clear to us, as the exact method(s) employed in Ref. \cite{Porsev2010} for its estimation is not mentioned explicitly. Only from the sign of the Core contribution quoted in Ref. \cite{Porsev2010}, and by comparing it with the signs of the Core contributions from the RMBPT(3)$^w$, CPDF and RCCSD methods of the present work and from the RCCSD and RCCSDT methods of Ref. \cite{Sahoo2021}, can we surmise that Ref. \cite{Porsev2010} estimates the Core contribution by considering $H_W$ as the perturbation. In such a case, the Tail contributions to $E1_{PV}$ for the $6S-7S$ transition in $^{133}$Cs from Refs. \cite{Porsev2010} and \cite{Sahoo2021}, as well as from the RCCSD result of the present work, should nearly agree with each other, on the grounds that the net X-factor value must be independent of whether $H_W$ or $D$ is treated as the perturbation. The large differences among the X-factors from Refs. \cite{Porsev2010}, \cite{Dzuba2012} and this work, which are 0.0175, 0.0256 and 0.0254 respectively, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, suggest that the former work underestimates the Tail contribution. It should be noted that the Tail contributions are estimated without using the sum-over-states approach in all the works, so the differences in these values are mainly due to the different levels of approximation made in the many-body methods employed for their estimation. \begin{table*}[t] \caption{First-principle calculated $E1_{PV}$ values, in units of $10^{-11}i(-Q_w/Nn)|e|a_0$, of the $6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$ and $6s ~ ^2S_{1/2} - 5d ~ ^2D_{3/2}$ transitions in $^{133}$Cs from different terms of the RCCSD method. Both {\it ab initio} and scaled values are given for comparison. We have used two different types of scaling: (a) scaling only the amplitudes of the unperturbed $S_v^{(0)}$ operators and (b) scaling the amplitudes of both the $S_v^{(0)}$ and $S_v^{(1)}$ operators. Here, the contributions under `Norm' represent the difference between the contributions after and before normalizing the RCCSD wave functions.
`Others' denote contributions from those RCCSD terms that are not shown explicitly in this table.} \begin{ruledtabular} \begin{tabular}{lrrrrrr} RCC term & \multicolumn{3}{c}{$6s ~ ^2S_{1/2} - 7s ~ ^2S_{1/2}$} & \multicolumn{3}{c}{$6s ~ ^2S_{1/2} - 5d ~ ^2D_{3/2}$} \\ \cline{2-4} \cline{5-7} \\ & {\it Ab initio} & Scaled-a & Scaled-b & {\it Ab initio} & Scaled-a & Scaled-b \\ \hline \\ & \multicolumn{6}{c}{Core contribution} \\ $\overline{D}T_1^{(1)}$ &$-0.04161$ &$-0.04161$&$-0.04161$&$-0.00062$&$-0.00062$&$-0.00062$ \\ $T_1^{(1)\dagger} \overline{D}$ & $0.03964$&$0.03964$&$0.03964$&$-0.17132$& $-0.17132$&$-0.17132$ \\ Others &$-0.00005$ &$-0.00005$& $-0.00005$&$0.01757$&$0.01757$& $0.01757$\\ Norm&$0.00005$&$0.00005$&$0.00005$&$0.00692$&$0.00670$&$0.00670$\\ \hline \\ & \multicolumn{6}{c}{Valence contribution} \\ $\overline{D}S_{1i}^{(1)}$ &$-0.19363$&$-0.19363$&$-0.19688$&$-2.96310$&$-2.96310$&$-2.97589$ \\ $S_{1f}^{(1)\dagger} \overline{D}$ &$1.80382$ &$1.80382$&$1.80263$&$-0.89993$&$-0.89993$&$-1.30760$ \\ $S_{1f}^{(0)\dagger} \overline{D} S_{1i}^{(1)}$ &$-0.23184$&$-0.23187$&$-0.23297$&$-0.06863$&$-0.06548$&$-0.06487$ \\ $S_{1f}^{(1)\dagger} \overline{D} S_{1i}^{(0)}$ & $-0.41826$&$-0.41895$&$-0.41942$&$0.10487$&$0.10502$&$0.14626$ \\ $\overline{D}S_{2i}^{(1)}$ &$-0.00039$&$-0.00039$&$-0.00039$&$0.00107$&$0.00107$&$0.00108$ \\ $S_{2f}^{(1)\dagger} \overline{D}$ & $0.00033$&$0.00033$&$0.00033$&$-0.00023$& $-0.00023$&$-0.00023$ \\ Others & $-0.04040$&$-0.04222$&$-0.04025$&$0.24888$&$0.24806$&$0.27704$ \\ Norm& $-0.02122$&$-0.01942$&$-0.02110$ & $0.16040$&$0.15505$&$0.15309$\\ \hline\\ Total &$0.89643$&$0.89570$&$0.88998$&$-3.56412$& $-3.56721$&$-3.91879$ \end{tabular} \end{ruledtabular} \label{tab7} \end{table*}

Now we address the reason why Roberts and Ginges, using their RPA$+$BO method, obtained the same sign for the Core contribution to $E1_{PV}$ of the $7S-6D_{3/2}$ transition in Ra$^+$ as that reported using the RCC method in Ref. \cite{Wansbeek2008}. Since the correlation trends of $E1_{PV}$ in the $nS - (n-1)D_{3/2}$ transitions, with $n$ the ground-state principal quantum number of the respective system, are quite similar in Cs and Ra$^+$, we can understand the above point by analysing the Core contributions to $E1_{PV}$ of the $6S-5D_{3/2}$ transition from the different methods and comparing their trends with those of the $6S-7S$ transition of $^{133}$Cs. Looking at these contributions in Table \ref{tab4}, it is easily seen that there is an order-of-magnitude difference between the Core contributions to the $6S-7S$ transition from the RMBPT(3)$^{w}$ and RMBPT(3)$^d$ methods, and a sign change between the results from the CPDF method and the RPA. In contrast, the difference between the Core contributions from the RMBPT(3)$^{w}$ and RMBPT(3)$^d$ methods for the $6S-5D_{3/2}$ transition is small, and there is no sign difference between the CPDF and RPA results. These trends can be explained as follows. In the $6S-7S$ transition, the wave functions of both the associated states have a large overlap with the nucleus, while in the $6S-5D_{3/2}$ transition only the wave function of the ground state does. As a result, strong core-polarization effects contribute through both states in the former case. Also, the contributions from the individual diagrams of the CPDF-RPA method are all comparable in the $6S-7S$ transition, while only a few selected diagrams contribute predominantly in the $6S-5D_{3/2}$ transition.
Since the core-polarization effects arising through the $D$ operator are stronger than, and opposite in sign to, those arising through $H_W$, the net Core contributions in the $S-S$ and $S-D$ transitions behave very differently in the CPDF method and the RPA, and the same propagates to the CPDF-RPA*/CPDF-RPA method. Since the Core and Valence contributions are essentially redistributed between the CPDF-RPA* and RCCSD methods, the differences among the final values of Refs. \cite{Dzuba2012} and \cite{Sahoo2021} and the present work come out to be very small for the $6S-7S$ transition, while they are slightly noticeable for the $6S-5D_{3/2}$ transition (refer to Table \ref{tab1} for the comparison of results from the Mixed-states and RCCSD methods). The DCP contributions in our calculations can be estimated by taking the differences between the results from the CPDF-RPA* and CPDF-RPA methods. This difference for the $6S-7S$ transition from our work can be compared with the corresponding values from Refs. \cite{maartensson1985} and \cite{Roberts2013}. From this comparison, we find that our result agrees better with Ref. \cite{Roberts2013} than with Ref. \cite{maartensson1985}. However, our final CPDF-RPA result agrees better with Ref. \cite{maartensson1985} than with Ref. \cite{Roberts2013}. We also note that the CPDF-RPA* results in Refs. \cite{Dzuba2012} and \cite{Roberts2022} are scaled by using $\omega^{ex}=0.0844$ a.u. In the previous section, we justified theoretically why such an approach leads to errors in the determination of the $E1_{PV}$ values. To demonstrate this numerically, we give in Table \ref{tab6} results for both the $6S-7S$ and $6S-5D_{3/2}$ transitions from the CPDF-RPA method using Eqs. (\ref{cprpa3}) and (\ref{cprpa4}). We give these values using $\omega$, using $\omega^{ex}$, and then using $\omega^{ex}$ together with the experimental energies (denoted by $E_{i,f}^{expt}$) of the $6S$, $7S$ and $5D_{3/2}$ states. From the comparison of the results, we observe a very interesting trend: when $\omega$ and the energies of the atomic states are taken consistently, either both from theory or both from experiment, the results from Eqs. (\ref{cprpa3}) and (\ref{cprpa4}) match each other; otherwise, large discrepancies are seen. In the RMBPT, RPA or CPDF-RPA methods, it is possible to use $\omega^{ex}$ and the experimental energies of the initial and final states simultaneously in the $E1_{PV}$-evaluating expressions. However, in more involved methods like the RCC method, one can use either $\omega^{ex}$ alone or $\omega^{ex}$ with the experimental energy of only the valence state (whose perturbed wave function is evaluated). Since the energies of the doubly and triply excited configurations appear in the denominators of the RCC theory, their experimental counterparts cannot be used in the wave-function-determining equations. Corroborating this fact with the above finding, it can be said that scaling the wave function using the experimental energy of the valence state alone may not always give an accurate result; rather, it may introduce additional error into the calculation. As explained in the previous section, this fact can be understood theoretically using Eq. (\ref{eq04}). Nevertheless, it can be seen from Table \ref{tab6} that our result with the $\omega^{ex}$ value from the CPDF-RPA* method does not match the corresponding results from Refs. \cite{Dzuba2012,Roberts2022} for the $6S-7S$ transition. We are unable to understand the reason for this, although the results with the theoretical $\omega$ value from both works agree quite well. As mentioned in Sec.
\ref{sec4}, three different approaches can be adopted in any many-body theory framework for the evaluation of the $E1_{PV}$ amplitudes. The same applies to the RCC theory. However, we adopt the approach of evaluating the matrix element of the $D$ operator after treating $H_W$ as the external perturbation. Though this approach is in line with the CPDF method, it effectively takes care of the electron correlation effects through both the $H_W$ and $D$ operators, as in the CPDF-RPA method. In fact, it goes much beyond the CPDF-RPA method in including electron correlation effects, as will be evident from the discussion that follows. It is thus possible to deduce all the CPDF-RPA contributions from the RCC theory, even at the level of the RCCSD approximation. In this sense, the RCCSDT method employed by Sahoo et al. \cite{Sahoo2021} to estimate the $E1_{PV}$ amplitude of the $6S-7S$ transition in $^{133}$Cs includes the RPA contributions mentioned in Refs. \cite{Dzuba2012,Roberts2022}. However, some of these contributions are not part of the Core contribution; rather, they appear through the Valence contribution in our RCCSD method, owing to the fact that the $D$ operator is not treated here as an external perturbation in determining the perturbed atomic wave functions. This point can be comprehended from the comparison between the RMBPT(3)$^w$ and RMBPT(3)$^d$ results, which are propagated to all orders in the RCC theory. To define the Core contributions along the lines of the CPDF-RPA method, the RCC theory of $E1_{PV}$ could be derived either by treating the $H_W$ and $D$ operators simultaneously as external perturbations, or by perturbing the wave functions with one of these operators as the external perturbation and evaluating the matrix element of the other operator in the normal-order RCC theory framework, similar to what is discussed in Ref. \cite{SahooPRL}. Between the choices of treating one of them as the external perturbation, it is advisable to use $H_W$, for which the iterative evaluation of the perturbed wave function converges faster. In fact, this is also the natural choice from the APV theory point of view, and it is the one adopted here. In order to understand the Core and Valence contributions to the $E1_{PV}$ amplitudes from our RCCSD method, we can take the help of the diagrams of the RMBPT(3)$^w$ method. Since the wave operator amplitude-determining equations of both methods follow the same Bloch prescription, all physical effects appearing in the RMBPT(3)$^w$ method are present to all orders in the RCCSD method. In particular, the core-polarization effects and the additional lower-order DCP contributions arising in the RMBPT(3)$^w$ method are present to all orders in the RCCSD method. As in the CPDF-RPA method, the core-polarization effects appear through the single excitations, while the DCP effects are implicitly present through the double excitations in the RCCSD method. Similarly, the pair-correlation effects of the RMBPT(3)$^w$ method are present to all orders through the single excitations in the RCCSD method. Since the single and double excitation amplitude-solving equations are coupled in the RCCSD method, correlations among all these physical effects are taken into account. Moreover, through the non-linear terms of the exponential form of the RCCSD method, higher-order correlation effects that are part of neither the core-polarization nor the pair-correlation type are also included.
It is not possible to include these effects systematically using a blend of many-body methods. In Table \ref{tab7}, we present the $E1_{PV}$ values of the $6S-7S$ and $6S-5D_{3/2}$ transitions in $^{133}$Cs from the individual RCCSD terms, in order to quantify the discussion of the previous paragraph. Using the definitions of the $T$ and $S_v$ RCC excitation operators, we categorize the results into Core and Valence correlation contributions. By subtracting the Core contributions of the DHF method from the contributions of the $\bar{D} T_1^{(1)}$ term and its complex conjugate (c.c.), the net Core correlation contribution to $E1_{PV}$ in the RCCSD method can be inferred. Similarly, by subtracting the Valence contributions of the DHF method from the $\bar{D} S_{1i}^{(1)} + S_{1f}^{(1)\dagger} \bar{D}$ terms and adding the contributions from the other Valence-correlation-contributing terms, we can obtain the net Valence correlation contribution to $E1_{PV}$ in the RCCSD method. The Core correlations arising through the $\bar{D} T_1^{(1)}$ and c.c. terms contain correlation contributions from both the singly and doubly excited configurations. By analysing the RMBPT(3)$^w$ diagrams contributing to the $T_1^{(1)}$ amplitude-determining equation, shown in Fig. \ref{RCC-T1}, it can be understood that the $\bar{D} T_1^{(1)}$ and c.c. terms contain the Core contributions of the CPDF method, the pair-correlation contributions of the RMBPT(3) method to all orders, and many more. By carefully analyzing the diagrams from $\bar{D} T_1^{(1)}$ and their breakdown in terms of the RMBPT(3)$^w$ method, it becomes evident that this term does not include the Core contributions arising through the RPA, nor some of the contributions that arise through the CPDF-RPA method. Similarly, all the Valence correlation contributions from the CPDF method, the RPA and the CPDF-RPA* method are included through the $\bar{D} S_{1i}^{(1)} + S_{1f}^{(1)\dagger} \bar{D} $ terms in the RCCSD method. In addition, these also include many contributions that can appear through the BO-correlation technique and beyond. Many more correlation contributions to $E1_{PV}$ arise through other RCCSD terms, among them $\bar{D} S_{2i}^{(1)}$, $\bar{D} T_{1/2}^{(1)}S_{1/2i}^{(0)}$ and $T_{1/2}^{(1)\dagger} \bar{D} S_{1/2i}^{(0)}$, the analogous terms with the $S_{1/2i}^{(0/1)}$ operators replaced by $S_{1/2f}^{(0/1)\dagger}$, as well as $S_{1/2f}^{(0)\dagger} \bar{D} S_{1/2i}^{(1)}$, $S_{1/2f}^{(1)\dagger} \bar{D} S_{1/2i}^{(1)}$, etc. Obviously, these contributions are not present in the CPDF-RPA* method, and many of them cannot be considered part of the BO-correlation method. Moreover, the corrections to all the correlation contributions, including those appearing through the CPDF-RPA method, due to the normalization of the wave functions (given as `Norm') are quoted separately in the above table, and they are found to be non-negligible. The most prominent DCP contributions are absorbed through the $\bar{D} S_{2i}^{(1)} + S_{2f}^{(1)\dagger} \bar{D} $ terms in the RCCSD method. Some of the Core contributions of the CPDF-RPA method (like the ones appearing in the RPA) are also included through these terms. In addition, the non-linear terms $\bar{D} T_{1/2}^{(1)}S_{2i}^{(0)}$, $T_{2}^{(1)\dagger} \bar{D} S_{1i}^{(0)}$, $S_{2f}^{(0)\dagger} \bar{D} S_{2i}^{(1)}$, etc., including their c.c. terms, possess many more Valence correlation contributions that are beyond the scope of the combined CPDF-RPA and BO-correlation methods.
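As a simple numerical check of this bookkeeping, the {\it ab initio} $6S-7S$ column of Table \ref{tab7} can be resummed and compared against the RCCSD row of Table \ref{tab4}, with the DHF values removed to isolate the pure correlation pieces; the snippet below uses only numbers quoted in those two tables.
\begin{verbatim}
# Ab initio 6S-7S entries of Table VII, grouped as in the text.
core = {'D T1(1)': -0.04161, 'T1(1)+ D': 0.03964,
        'Others': -0.00005, 'Norm': 0.00005}
valence = {'D S1i(1)': -0.19363, 'S1f(1)+ D': 1.80382,
           'S1f(0)+ D S1i(1)': -0.23184, 'S1f(1)+ D S1i(0)': -0.41826,
           'D S2i(1)': -0.00039, 'S2f(1)+ D': 0.00033,
           'Others': -0.04040, 'Norm': -0.02122}
dhf_core, dhf_valence = -0.00173, 0.73923   # DHF row of Table IV

core_tot, val_tot = sum(core.values()), sum(valence.values())
print(core_tot)            # -0.00197: RCCSD Core of Table IV
print(val_tot)             #  0.89841 ~ 0.89840 of Table IV (rounding)
print(core_tot + val_tot)  #  0.89644 ~ 0.89643, the Table VII total
print(core_tot - dhf_core)     # net Core correlation beyond DHF
print(val_tot - dhf_valence)   # net Valence correlation beyond DHF
\end{verbatim}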
In Table \ref{tab7}, the contributions to $E1_{PV}$ of the $6S-7S$ and $6S-5D_{3/2}$ transitions from the RCCSD terms using the scaled $S_v^{(0)}$ and $S_v^{(1)}$ amplitudes are also given. To show how the results vary when the unperturbed and perturbed wave functions are scaled independently, we present results after (a) scaling only the amplitudes of the $S_v^{(0)}$ operators and (b) scaling the amplitudes of both the $S_v^{(0)}$ and $S_v^{(1)}$ operators. Significant differences between the two sets of scaled results are noticed. As mentioned before, it would not be correct to scale the $T^{(0/1)}$ amplitudes, as the orbitals used in their determination experience the $V^{N}$ potential, in contrast to the amplitude-determining equations of the $S_v^{(0/1)}$ operators, where the orbitals experience the $V^{N-1}$ potential. Thus, substituting the $\omega^{ex}$ value in the estimation of the Core contribution may not be theoretically proper. Again, we can only substitute the energy of the valence state from outside in the wave-function-solving equations, whereas the energies of the intermediate states have to be generated implicitly in the RCC theory. It follows that evaluating the $E1_{PV}$ amplitudes through the scaling procedure in the RCC method may introduce numerical errors into the calculations. Nonetheless, a comparison of the semi-empirical results obtained using experimental energies with the {\it ab initio} values of $E1_{PV}$ for both the $6S-7S$ and $6S-5D_{3/2}$ transitions shows that there are significant differences between them. These differences can be minimised by including higher-level excitations in the RCC theory. However, such higher-level excitations will not only improve the energy values but will also change the matrix elements of the $H_W$ and $D$ operators. As shown by Sahoo et al. \cite{Sahoo2021, SahooaRxiv}, the inclusion of triple excitations in the RCC theory modifies the energies and the matrix elements of the $H_W$ and $D$ operators in such a way that the $E1_{PV}$ values from the RCCSD and RCCSDT methods remain almost the same. On this basis, we cannot claim that the scaled $E1_{PV}$ values are more accurate than the {\it ab initio} values in the RCCSD approximation.

\section{Summary} By employing a number of relativistic many-body methods at different levels of approximation, such as finite-order perturbation theory, the coupled-perturbed Dirac-Fock method, the random phase approximation, the combined coupled-perturbed Dirac-Fock and random phase approximation method, and the relativistic coupled-cluster theory, we investigated the roles of core and valence correlation effects in the calculations of the parity violating electric dipole amplitudes of the $6S \rightarrow 7S$ and $6S \rightarrow 5D_{3/2}$ transitions in $^{133}$Cs. From this analysis, we were able to address the long-standing issue of opposite signs obtained for the core correlation contribution to the parity violating electric dipole amplitude of the aforementioned $6S \rightarrow 7S$ transition when the combined coupled-perturbed Dirac-Fock and random phase approximation methods are used. We also analysed the results from the sum-over-states approach and from first-principle calculations using the relativistic coupled-cluster method in the singles and doubles approximation, in order to identify the contributions missing in the former approach.
The inclusion of these missing contributions through the combined coupled-perturbed Dirac-Fock, random phase approximation and Br\"uckner-orbital correlation methods is compared with the first-principle calculations using the coupled-cluster method. This comparison shows that, in the evaluation of the parity violating electric dipole amplitudes in the $^{133}$Cs atom, the first-principle approach using the relativistic coupled-cluster theory incorporates the electron correlation effects due to the Dirac-Coulomb Hamiltonian more rigorously than the other methods mentioned above. \section*{Acknowledgement} The computations reported in the present work were carried out using the Vikram-100 HPC cluster of the Physical Research Laboratory (PRL), Ahmedabad, Gujarat, India.
\begin{titlepage} \begin{center} \hfill CERN-TH-2019-053 \vskip .4 cm \vskip .3 in {\large\bf An exact symmetry in $\lambda$-deformed CFTs} \vskip 0.4in {\bf George Georgiou},$^a$\ {\bf Eftychia Sagkrioti},$^a$\\ \vskip .09 cm {\bf Konstantinos Sfetsos}$^a$\ and {\bf Konstantinos Siampos}$^{a,b}$ \vskip 0.17in {\em${}^a$Department of Nuclear and Particle Physics,\\ Faculty of Physics, National and Kapodistrian University of Athens,\\15784 Athens, Greece } \vskip 0.1in {\em${}^b$Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland} \vskip 0.1in {\footnotesize \texttt george.georgiou, esagkrioti, ksfetsos, [email protected]} \vskip .5in \end{center} \centerline{\bf Abstract} \noindent We consider $\lambda$-deformed current algebra CFTs at level $k$, interpolating between an exact CFT in the UV and a PCM in the IR.
By employing gravitational techniques, we derive the two-loop $\beta$-function in the large $k$ expansion. We find that it is covariant under a remarkable exact symmetry involving the coupling $\lambda$, the level $k$ and the adjoint quadratic Casimir of the group. Using this symmetry and CFT techniques, we are able to compute the Zamolodchikov metric, the anomalous dimension of the bilinear operator and the Zamolodchikov $C$-function at two loops in the large $k$ expansion, as exact functions of the deformation parameter. Finally, we extend the above results to $\lambda$-deformed parafermionic algebra coset CFTs, which interpolate between exact coset CFTs in the UV and a symmetric coset space in the IR. \end{titlepage} \tableofcontents

\section{Introduction} The classical actions of field theories may easily possess certain global symmetries, depending on the field content and on the particular form of the constants coupling the various fields. Discovering emergent non-perturbative symmetries of quantum field theories acting also in their coupling space can be of major importance. These may arise unexpectedly and can provide strict constraints on the observables of the theory. An important example of the above is the maximally supersymmetric field theory, $\mathcal N=4$ SYM, which possesses a remarkable non-perturbative symmetry, similar in a sense to the exact symmetry presented in this work, called S-duality \cite{S-duality}; for zero theta angle this reads $g_{\rm YM}\to 1/g_{\rm YM}$.

\noindent There is a certain class of quantum field theories where one may test these ideas and which in recent years have been intensively explored. In particular, consider a current algebra theory at level $k$ realized by a two-dimensional $\sigma$-model action, e.g. a WZW model \cite{Witten:1983ar}, perturbed by current bilinear terms of the form $\lambda_{ab} J_+^a J_-^b$. Here, the $\lambda_{ab}$'s are couplings and elements of a matrix, and $a,b$ run over the dimensionality of the Lie algebra of a semisimple group $G$. As it stands, the action may have certain global symmetries depending on the particular form of $\lambda_{ab}$. However, another symmetry appears at the quantum level. Specifically, it was argued using path integral techniques \cite{Kutasov:1989aw} that the theory is quantum mechanically invariant under an additional remarkable master symmetry. In the space of couplings this acts as $\lambda\to \lambda^{-1}$ and $k\to- k$, where for the purposes of our introduction we have presented it for $k\gg 1$. This is a non-perturbative symmetry, not visible at any finite order in perturbation theory in the couplings $\lambda_{ab}$. The first class of theories where the above symmetry was explicitly realized classically in a $\sigma$-model was constructed in \cite{Sfetsos:2013wia}, whereas the symmetry itself was noticed and demonstrated in \cite{Itsios:2014lca,Sfetsos:2014jfa}. This action captures all loop effects in the deformation matrix $\lambda$ and is valid to leading order for large $k$. This effective action, in conjunction with results from conformal perturbation theory and the above symmetry, has been instrumental in extracting vast information about the quantum regime of the theory \cite{Georgiou:2015nka}.
This includes the $\beta$-function and the anomalous dimensions of current, primary \cite{Georgiou:2016iom} and composite operators \cite{Georgiou:2016zyo}. The prototype $\lambda$-deformed $\sigma$-model action of \cite{Sfetsos:2013wia} represents the exact deformation of a single WZW current algebra theory due to the interactions of currents belonging to the theory, i.e. self-interactions. Since then, this construction has been extended to cover cases with more than one current algebra theory, mutually and/or self-interacting \cite{Georgiou:2017jfi,Georgiou:2016urf,Georgiou:2018hpd,Georgiou:2018gpe}.\footnote{We mention in passing that perhaps the major reason these models have attracted attention is integrability. Such cases exist first for isotropic deformation matrices \cite{Sfetsos:2013wia,Georgiou:2017jfi,Georgiou:2018hpd,Georgiou:2018gpe} (for the $SU(2)$ group case, integrability was demonstrated in \cite{Balog:1993es}). Nevertheless, integrability holds for some anisotropic models as well, in particular for the $\lambda$-deformed $SU(2)$ based models in \cite{Sfetsos:2014lla,Sfetsos:2015nya}, as well as for subclasses of those in \cite{Georgiou:2018hpd,Georgiou:2018gpe}. Integrable deformations based on cosets, symmetric and semi-symmetric spaces have also been constructed in \cite{Sfetsos:2013wia,Sfetsos:2017sep}, \cite{Hollowood:2014rla} and \cite{Hollowood:2014qma}, respectively. Finally, deformed models of low dimensionality were promoted to solutions of type-II supergravity \cite{Sfetsos:2014cea,Demulder:2015lva,Borsato:2016zcf,Chervonyi:2016ajp,Borsato:2016ose}.} Compared to the single $\lambda$-deformed model, these models involve several deformation parameters, and their renormalization group flow has a very rich structure, possessing several fixed points. The use of non-trivial outer automorphisms in this context was put forward in \cite{Driezen:2019ykp} for the case of a single group $G$. Outer automorphisms for the product group $G\times G$ were considered earlier in \cite{Georgiou:2016urf}. In all cases there is an analog of the above-mentioned master symmetry involving the levels of the current algebras and the various deformation matrices \cite{Kutasov:1989aw}.

\noindent The next crucial question is how to proceed deeper into the quantum regime of these theories by going beyond the leading expressions for large $k$, that is, by going higher in the $\nicefrac1k$ expansion. Experience shows that we may perhaps compute the $\beta$-function to two loops by brute force, but unless we understand the fate of the above master symmetry when such corrections are taken into account, the progress will stay minimal. The major purpose of the present paper is precisely to make progress along the above line of research. The outline of this paper is as follows: In Sec. \ref{into.lambda.models}, we review the $\lambda$-deformed models constructed in \cite{Georgiou:2017jfi}, which admit two interesting limits -- the PCM and the pseudo-chiral model. In Sec. \ref{singlegeometry} we present the two-loop RG flows in the group case for an isotropic coupling, and we exhibit a symmetry of the $\beta$-functions in the coupling space $(\lambda,k)$ (\S~\ref{symmetryisosection}). Using the symmetry and CFT input we determine the Zamolodchikov metric of the current bilinear driving the conformal perturbation (\S~\ref{group.Zamolo}). Then we work out the Zamolodchikov $C$-function and the anomalous dimension of the current bilinear (\S~\ref{isoCfunction}).
Using the above, we determine the Zamolodchikov $C$-function for the $\lambda$-deformed $G_k$ (\S~\ref{group.connection}). In Sec. \ref{coset}, we generalize the above to the coset space $\displaystyle \frac{SU(2)_k\times SU(2)_k}{U(1)_k}$, working out the two-loop $\beta$-function and the corresponding symmetry in the coupling space $(\lambda,k)$. Using the symmetry and CFT data, we determine the Zamolodchikov metric of the parafermionic bilinear driving the conformal perturbation (\S~\ref{coset.Zamo}), and then the Zamolodchikov $C$-function and the anomalous dimension of the parafermionic bilinear (\S~\ref{cfunction.double.parafermionic}). Using the above, we work out the Zamolodchikov $C$-function for the $\lambda$-deformed $SU(2)_k/U(1)_k$ (\S~\ref{coset.connection}). Sec. \ref{conclu} contains some concluding remarks. In App. \ref{RGequalappend} we compute the two-loop RG flows for the group case at unequal levels. At equal levels this yields the result analyzed in Sec. \ref{singlegeometry}, and agreement with the corresponding limits already described in Sec. \ref{into.lambda.models} is found for the PCM and the pseudo-chiral model (\S~\ref{limits}). \subsection*{Note added} \vskip -.5 cm Extensive parts of this work, including the $\beta$-function equations for the group and coset cases, \eqref{betaonetwo} and \eqref{fjhskdjklswq} below, were presented in talks by one of the authors (K. Siampos) at the Recent Developments in Strings and Gravity workshop (Corfu, Greece, 10-16 September 2019) \cite{Siampos.Corfu} and at the $10^\text{th}$ Crete regional meeting in String Theory (Kolymbari, Greece, 15-22 September 2019) \cite{Siampos.Crete}. Towards the completion of the present work, Ref. \cite{Hoare:2019mcc} appeared, where similar issues concerning the two-loop $\beta$-function in $\lambda$-deformed models are discussed. \section{The $\lambda$-deformed models} \label{into.lambda.models} Consider the following deformed single-level action \cite{Georgiou:2016urf,Georgiou:2017jfi} \begin{equation} S= S_{k}(\frak{g}_1) + S_{k}(\frak{g}_2) + {k\lambda_{ab}\over \pi}\, \int \text{d}^2\sigma\,J^a_{1+}\,J^b_{2-}\,. \label{defactigen} \end{equation} We have denoted by $S_{k}(\frak{g})$ the WZW action at level $k$ \cite{Witten:1983ar} \begin{equation} \label{wzwacc} S_{k}(\frak{g}) = {k\over 2\pi} \int \text{d}^2\sigma\, \text{Tr}(\partial_+ \frak{g}^{-1} \partial_- \frak{g}) + S_{{\rm WZ},k}(\frak{g})\ ,\quad S_{{\rm WZ},k}(\frak{g})= {k\over 12\pi} \int_\text{B} \text{Tr}(\frak{g}^{-1} \text{d}\frak{g})^3\ , \end{equation} where $\frak{g}\in G$, with $G$ being a semi-simple group of dimension $\dim G$. The $t_a$'s are Hermitian matrices normalized as $\text{Tr}\left(t_at_b\right)=\delta_{ab}$, with $[t_a,t_b]=i f_{abc}t_c$ and $a=1,\dots,\dim G$, where the structure constants $f_{abc}$ are taken to be real. The currents $J^a_{\pm}$ are given by \begin{equation} \label{pfkdlddsks} J_{+}^a=-i\,\text{Tr}\big(t_a\partial_+\frak{g}\frak{g}^{-1}\big)\,,\quad J_{-}^a=-i\,\text{Tr}\big(t_a\frak{g}^{-1}\partial_-\frak{g}\big)\ . \end{equation} We also define the orthogonal matrix $D_{ab}=\text{Tr}\big(t_a\frak{g}t_b\frak{g}^{-1}\big)$. All of these may appear with an extra index $1$ or $2$, depending on which group element, $\frak{g}_1$ or $\frak{g}_2$, is used in the particular expression.
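As a quick sanity check of these definitions, one can verify symbolically that $D_{ab}$ is indeed orthogonal for an explicit group element. The SymPy snippet below does this for $SU(2)$ with $t_a=\sigma_a/\sqrt{2}$ and the element $\frak{g}=e^{i\theta\sigma_3}$ (chosen by us purely for illustration), for which $D$ is a rotation by $2\theta$ about the third axis.
\begin{verbatim}
import sympy as sp

# Pauli matrices; t_a = sigma_a/sqrt(2) obeys Tr(t_a t_b) = delta_ab
s = [sp.Matrix([[0, 1], [1, 0]]),
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),
     sp.Matrix([[1, 0], [0, -1]])]
t = [m / sp.sqrt(2) for m in s]

theta = sp.symbols('theta', real=True)
g = (sp.I * theta * s[2]).exp()      # g = exp(i theta sigma_3) in SU(2)

# D_ab = Tr(t_a g t_b g^{-1})
D = sp.Matrix(3, 3,
              lambda a, b: sp.simplify((t[a] * g * t[b] * g.inv()).trace()))

print(D)                     # rotation by 2*theta about the 3-axis
print(sp.simplify(D * D.T))  # identity matrix: D is orthogonal
\end{verbatim}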
\noindent The above model can be obtained as a limit of the doubly $\lambda$-deformed models constructed in \cite{Georgiou:2016urf} -- see also \cite{Georgiou:2017jfi} for the unequal level case -- by setting one of the deformation parameters to zero. In the same works it was also stressed that the linearized action \eqref{defactigen} is, in fact, the effective action incorporating all loop effects in the deformation parameter $\lambda_{ab}$, that is, it does not receive further $\lambda$-dependent corrections. This is the first reason for using \eqref{defactigen} instead of the prototype $\lambda$-deformed model. The second reason is that, as was shown in \cite{Georgiou:2017aei} using CFT arguments, both actions share the same $\beta$-function for the deformation parameter $\lambda_{ab}$ to all orders, not only in $\lambda_{ab}$ but also in the $\nicefrac1k$ expansion. This is strictly true only when we choose the chiral anti-chiral current two-point function to vanish. It is important to note that this is precisely the choice in which the symmetry of \cite{Kutasov:1989aw} is realized. The third reason is that the $\sigma$-model \eqref{defactigen} does not receive quantum corrections, in contradistinction to the action of the single $\lambda$-deformed model. The action \eqref{defactigen} has two interesting limits, for $\lambda_{ab}\to\pm\delta_{ab}$. They give rise to the PCM and pseudo-chiral models, respectively. To analyze the limit $\lambda_{ab}\to\delta_{ab}$ we rewrite \eqref{defactigen} as \begin{equation} \label{jfjkfhsjhdfjs} S=S_k\left(\frak{g}_2\frak{g}_1\right)+\left(\lambda_{ab}-\delta_{ab}\right)\int\text{d}^2\sigma J_{1+}^a J_{2-}^b\ , \end{equation} where we made use of the Polyakov--Wiegmann (PW) identity \cite{Polyakov:1983tt}.\footnote{In our conventions the PW identity reads \begin{equation*} S_{k}(\frak{g}_2\frak{g}_1)=S_{k}(\frak{g}_1)+S_{k}(\frak{g}_2)+\frac{k}{\pi}\int\text{d}^2\sigma\, J_{1+}^aJ_{2-}^a\,. \end{equation*} } We then perform the following zoom-in limit \begin{equation} \label{pcmlim} \lambda_{ab}=\delta_{ab}-\frac{E_{ab}}{k}\,,\quad k\gg1\,,\quad \frak{g}_1=\frak{g}^{-1}_2\left(\mathbb{I}+i\frac{u^at_a}{\sqrt{k}}\right)+\cdots\ . \end{equation} The action \eqref{jfjkfhsjhdfjs} then takes the form of a PCM model, with $\dim G$ additional spectator bosons $u^a$ \begin{equation} \label{PCMlimit} S_\text{PCM}=-\frac{E_{ab}}{\pi}\int\text{d}^2\sigma\, {\rm Tr}(t^a \frak{g}^{-1}_2 \partial_+ \frak{g}_2) {\rm Tr}(t^b \frak{g}^{-1}_2 \partial_- \frak{g}_2) + \frac{1}{2\pi}\int\text{d}^2\sigma\,\partial_+u^a\partial_-u^a \,. \end{equation} We note here that, similarly to \eqref{pcmlim}, a zoom-in limit of the prototype $\lambda$-deformed action of \cite{Sfetsos:2013wia} gives rise to the non-Abelian T-dual of the PCM $\sigma$-model. This fact is not a surprise, since \eqref{defactigen} is canonically equivalent \cite{Georgiou:2017oly} to the sum of a WZW action and the $\lambda$-deformed action of \cite{Sfetsos:2013wia}. The two zoom-in limits simply relate the PCM model and its non-Abelian T-dual, which are also known to be canonically equivalent \cite{Curtright:1994be,Alvarez:1994wj}. This limit is a way to make sense of the theory in the IR, when $\lambda$ approaches unity and strong coupling effects prevail.
\noindent To analyze the limit $\lambda_{ab}\to-\delta_{ab}$ we rewrite \eqref{defactigen}, by making use of the PW identity, as \begin{equation} \label{jfjkfhsjhdfjss} S=S_k\big(\frak{g}_2\frak{g}_1^{-1}\big)+2S_{\text{WZ},k}\left(\frak{g}_1\right)+ \frac{k}{\pi}\int\text{d}^2\sigma \left(D_1+\lambda\right)_{ab}J_{1+}^aJ_{2-}^b\ . \end{equation} Next, we perform the following slightly different zoom-in limit \begin{equation} \begin{split} \label{psliim} &\lambda_{ab}=-\delta_{ab}+\frac{E_{ab}}{k^{1/3}}\ ,\quad k\gg1\ , \\ &\frak{g}_1=\mathbb{I}+i\frac{v^at_a}{2k^{1/3}}-i\frac{u^at_a}{2k^{1/2}}+\cdots\ , \quad \frak{g}_2=\mathbb{I}+i\frac{v^at_a}{2k^{1/3}}+i\frac{u^at_a}{2k^{1/2}}+\cdots\ . \end{split} \end{equation} Then, the action \eqref{jfjkfhsjhdfjss} takes the form of the generalized pseudo-chiral model, found in \cite{Georgiou:2016iom} by performing a zoom-in limit similar to \eqref{psliim} in the prototype $\lambda$-deformed action, supplemented by the $\dim G$ spectator bosons $u^a$ \begin{equation} \label{pseudolimit} S_\text{pseudo}=\frac{1}{4\pi}\int\text{d}^2\sigma\, \Big(E_{ab}+\frac13f_{ab}\Big)\partial_+v^a\partial_-v^b + \frac{1}{2\pi}\int\text{d}^2\sigma\,\partial_+u^a\partial_-u^a\ , \end{equation} where $f_{ab}=f_{abc}v^c$. For diagonal $E_{ab}$ the first term is the prototype pseudo-dual model studied in \cite{Nappi:1979ig}. These limits should be well defined at the level of the physical quantities of the theory, such as the $\beta$-functions and the operators' anomalous dimensions. \section{The group space} \label{singlegeometry} We would like to compute the RG flow equations of \eqref{defactigen} at two-loop order in the $\nicefrac1k$ expansion for an isotropic coupling $\lambda_{ab}=\lambda\delta_{ab}$. This is a rather long but quite standard computation that is performed in App. \ref{RGequalappend}. The end result is that the model is renormalizable at order $\nicefrac{1}{k^2}$ and that there is no need for a diffeomorphism or the addition of a counter term. The $\beta$-function for $\lambda$ reads (see \eqref{betaonetwoappend}) \begin{equation} \label{betaonetwo} \beta^\lambda(\lambda)={\text{d}\lambda\over \text{d}t}= -{c_G\over 2 k} {\lambda^2\over (1+\lambda)^2} + {c_G^2\over 2 k^2 } {\lambda^4(1-2\lambda)\over (1-\lambda)(1+\lambda)^5}\,, \end{equation} where $t=\ln\mu^2$, $\mu$ is the RG scale and $c_G$ is the quadratic Casimir in the adjoint representation of the semi-simple group $G$, i.e. $f_{acd} f_{bcd}=c_G \delta_{ab}$. The level $k$ does not run, thus retaining its topological nature (also) at two-loop order. The above $\beta$-function is well defined in the two interesting zoom-in limits around $\lambda=\pm1$ performed in the previous section. These are studied in \S~\ref{limits}. \subsection{Symmetry} \label{symmetryisosection} It has been conjectured \cite{Kutasov:1989aw} that, beyond leading order in the $\nicefrac1k$-expansion, the theory is invariant under the symmetry \begin{equation} \lambda\to\lambda^{-1}\,,\quad k\to-k-c_G\,. \label{cksks} \end{equation} It can be easily checked that \eqref{betaonetwo} is not invariant under this symmetry at order $\nicefrac{1}{k^2}$. However, contrary to the one-loop result, the two-loop result is scheme dependent. Furthermore, as was mentioned in \cite{Kutasov:1989aw}, the symmetry \eqref{cksks} is realized only when we choose the chiral anti-chiral current two-point function to vanish.
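The failure at two loops can be made explicit. Covariance under \eqref{cksks} would require $\beta^\lambda(\lambda^{-1})$, evaluated at level $-k-c_G$, to equal $-\beta^\lambda(\lambda)/\lambda^2$, since $\text{d}\lambda^{-1}/\text{d}t=-\lambda^{-2}\,\text{d}\lambda/\text{d}t$. The following SymPy sketch (ours) confirms that this holds at order $\nicefrac1k$ but is obstructed at order $\nicefrac{1}{k^2}$:
\begin{verbatim}
# SymPy sketch (ours): the two-loop beta-function (betaonetwo) is not
# covariant under lambda -> 1/lambda, k -> -k - c_G.  Covariance would
# require beta(1/l; -k - c_G) = -beta(l; k)/l^2.
import sympy as sp

l, cG, eps = sp.symbols('l c_G epsilon', positive=True)

def beta(lam, lev):
    return (-cG/(2*lev) * lam**2/(1 + lam)**2
            + cG**2/(2*lev**2) * lam**4*(1 - 2*lam)/((1 - lam)*(1 + lam)**5))

k = 1/eps                                     # expand in eps = 1/k
mismatch = sp.series(beta(1/l, -k - cG) + beta(l, k)/l**2,
                     eps, 0, 3).removeO()
assert sp.simplify(mismatch.coeff(eps, 1)) == 0        # holds at one loop
obstruction = sp.simplify(mismatch.coeff(eps, 2))
assert obstruction != 0   # equals -c_G^2 (1 + l^2)/(2 (1 + l)^4) at two loops
\end{verbatim}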
The fact that the symmetry \eqn{cksks} is not respected by our two-loop $\beta$-function indicates that the scheme used in gravity calculations is not compatible with the left-right symmetric scheme of the CFT. However, it is possible to redefine the coupling $\lambda$ in such a way that the resulting $\beta$-function respects the aforementioned symmetry \eqn{cksks}. Based on the general structure of the one-loop in $\nicefrac1k$ results for the $\beta$-function, as well as for the anomalous dimensions of current operators \cite{Georgiou:2015nka}, we redefine $\lambda$ as \begin{equation} \lambda=\tilde\lambda\bigg(1+\frac{c_G}{k}\frac{P(\tilde\lambda)}{(1-\tilde\lambda)(1+\tilde\lambda)^3}\bigg)\ , \end{equation} where $P(\tilde\lambda)$ is an analytic function of $\tilde\lambda$. Subsequently, we demand that the symmetry of the $\beta$-function becomes \begin{equation} \label{sfjsldjsssk} \tilde\lambda\to\tilde\lambda^{-1}\,,\quad k\to-k-c_G\,. \end{equation} This forces $P(\tilde\lambda)$ to satisfy the first-order differential equation \begin{equation} \tilde\lambda^3 P'(\tilde\lambda^{-1})-\tilde \lambda P'(\tilde\lambda) + {\tilde\lambda^4(\tilde\lambda^2-3)\over 1-\tilde\lambda^2}P(\tilde\lambda^{-1})+{1-3\tilde\lambda^2\over 1-\tilde \lambda^2}P(\tilde\lambda) +1-\tilde\lambda^4=0\, . \end{equation} This has as a solution the fourth-order polynomial \begin{equation} P(\tilde\lambda)=(1-\tilde\lambda^2)\big[(1+d_0)\tilde \lambda^2 + d_1\tilde\lambda + d_0\big]\ , \end{equation} where $d_{0,1}$ are two arbitrary constants (one of them arises because the differential equation involves $\tilde \lambda$ as well as $\nicefrac{1}{\tilde \lambda}$ as arguments of $P$). Using the above reparametrization in \eqref{betaonetwo}, we find that \begin{equation} \label{fdjfhshdjs} \beta^{\tilde\lambda}(\tilde\lambda)=-\frac{c_G\tilde\lambda^2}{2k(1+\tilde\lambda)^2}-\frac{c_G^2\tilde\lambda^2\big[d_0(1-\tilde\lambda^2)^2+\tilde\lambda^2(\tilde\lambda^2 +2\tilde \lambda-2)\big]}{2k^2(1-\tilde\lambda)(1+\tilde\lambda)^5}\ . \end{equation} Note that the constant $d_1$ does not appear in this expression, while $d_0$ does, and it remains to be determined. To do so, first recall the scheme dependence of the above result concerning the level $k$. We would like to match this scheme to that corresponding to conformal perturbation theory. Using the latter, for small $\tilde \lambda$ the contribution to the $\beta$-function can only be of ${\cal O}(\nicefrac{\tilde \lambda^2}{k})$, and a term of ${\cal O}(\nicefrac{\tilde \lambda^2}{k^2})$ should be absent. Alternatively, one may establish this from the fact that the anomalous dimension of the composite operator $J^a\bar J^a$ is of one order less than the corresponding order of the $\beta$-function (see \eqref{anomalouscompositegeneral} below). This anomalous dimension cannot have a term of ${\cal O}(\nicefrac{\tilde \lambda}{k^2})$, since a term linear in $\tilde \lambda$ arises from a single operator insertion, giving rise to an integral involving the product of a three-point function of holomorphic currents with a similar one with just anti-holomorphic ones. In our normalizations each one of the two correlators contributes a factor of ${\cal O}(\nicefrac{1}{\sqrt{k}})$. This computation was performed in \cite{Georgiou:2015nka,Georgiou:2016zyo}. Therefore, one must require the vanishing of the term of ${\cal O}(\nicefrac{\tilde \lambda^2}{k^2})$ in \eqref{fdjfhshdjs}.
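To make the last requirement concrete, a short SymPy computation (a sketch; the symbol names are ours) expands \eqref{fdjfhshdjs} for small $\tilde\lambda$ and isolates the offending ${\cal O}(\nicefrac{\tilde\lambda^2}{k^2})$ term, whose coefficient is proportional to $d_0$:
\begin{verbatim}
# SymPy sketch (ours): expanding (fdjfhshdjs) for small lambda-tilde
# isolates the O(lt^2/k^2) term; its coefficient is proportional to d_0.
import sympy as sp

lt, cG, k, d0 = sp.symbols('lt c_G k d_0')
beta = (-cG*lt**2/(2*k*(1 + lt)**2)
        - cG**2*lt**2*(d0*(1 - lt**2)**2 + lt**2*(lt**2 + 2*lt - 2))
          / (2*k**2*(1 - lt)*(1 + lt)**5))
small = sp.series(beta, lt, 0, 3).removeO()
assert sp.expand(small - (-cG*lt**2/(2*k) - cG**2*d0*lt**2/(2*k**2))) == 0
\end{verbatim}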
This can be achieved, for instance, by choosing $d_0=0$, in which case the contribution of the second term in \eqref{fdjfhshdjs} becomes of ${\cal O}(\nicefrac{\tilde \lambda^4}{k^2})$. This choice is problematic, since it will give rise to non-analytic terms with branch cuts, i.e. $\ln\frac{1-\lambda}{1+\lambda}$, in the $C$-function, as will be discussed in \S~\ref{isoCfunction}. Their absence implies that $d_0=-\nicefrac12$, which is the choice we make. Then, the $\beta$-function \eqref{fdjfhshdjs} of course contains a term of ${\cal O}(\nicefrac{\tilde\lambda^2}{k^2})$. To get rid of it we redefine the perturbative parameter from $\nicefrac1k$ to $\nicefrac{1}{k_G}$, where $k_G$ is $k$ shifted by a constant proportional to $c_G$. It turns out that the correct such redefinition is \begin{equation} \label{kgkg} k_G=k+ {c_G\over 2}\ . \end{equation} Notably, this is the right combination of $k$ and $c_G$ appearing in the Sugawara construction of the energy--momentum tensor in current algebra CFTs and in the conformal dimension of the corresponding primary fields. Then \eqref{fdjfhshdjs} simplifies to \begin{equation} \boxed{ \label{fdjfhshdjs1} \beta^{\tilde\lambda}(\tilde\lambda)=-\frac{c_G\tilde\lambda^2}{2k_G(1+\tilde\lambda)^2}-\frac{c_G^2\tilde\lambda^3(1-\tilde\lambda+\tilde\lambda^2)}{2k_G^2(1-\tilde\lambda)(1+\tilde\lambda)^5} }\ . \end{equation} The above is covariant under \eqref{sfjsldjsssk} or, equivalently, in terms of $k_G$, \begin{equation} \label{sfjsldjsssk123} \boxed{ \tilde\lambda\to\tilde\lambda^{-1}\,,\quad k_G\to-k_G}\ . \end{equation} We thus see that the perturbation theory is naturally organized around the CFT with level $k_G$, deformed by the term \ $\displaystyle {k_G \tilde \lambda \over \pi} J_+ J_-$. In fact, the covariance is achieved for the two terms separately. We expect that this is an exact symmetry to all orders in the large-$k_G$ expansion. This can be very useful in trying to extend the $\beta$-function to ${\cal O}(\nicefrac{1}{k_G^3})$ or even higher. \subsection{Zamolodchikov metric} \label{group.Zamolo} Let us consider the two-point correlation function\footnote{ We pass to the Euclidean regime with complex coordinate $\displaystyle z={1\over \sqrt{2}} \left(\tau+i\,\sigma\right)$.} \begin{equation} G_{\tilde\lambda}(z_1,\bar z_1;z_2,\bar z_2)=\langle{\cal O}(z_1,\bar z_1){\cal O}(z_2,\bar z_2)\rangle_{\tilde\lambda}\ , \end{equation} where the perturbing current bilinear operator is \begin{equation} \label{sdtadkks} {\cal O}(z,\bar z)=J^a(z)\bar J^a(\bar z)\, . \end{equation} The currents $J^a$ satisfy a current algebra at level $k_G$ with OPEs (operator product expansions)\footnote{Note that we have rescaled the currents as $J^a \to \nicefrac{J^a}{{\sqrt{k_G}}}$.} \begin{equation} J^a(z_1)J^b(z_2)=\frac{\delta_{ab}}{z_{12}^2}+\frac{i}{\sqrt{k_G}}\frac{f_{abc}J^c(z_2)}{z_{12}}\ ,\qquad z_{12}=z_1-z_2\,, \end{equation} while the OPE of $J^a$ with $\bar J^a$ is regular. \noindent From the two-point function we can read off the Zamolodchikov metric as \begin{equation} g(\tilde\lambda;k)=|z_{12}|^{2(2+\gamma^{(\cal O)})}G_{\tilde\lambda}(z_1,\bar z_1;z_2,\bar z_2)\,, \end{equation} where $\gamma^{(\cal O)}$ is the anomalous dimension of ${\cal O}$, which is given by \cite{Kutasov:1989dt,Georgiou:2015nka} \begin{equation} \label{anomalouscompositegeneral} \gamma^{({\cal O})}=2\partial_{\tilde\lambda}\beta^{\tilde\lambda}(\tilde\lambda)+\beta^{\tilde\lambda}(\tilde\lambda)\partial_{\tilde\lambda}\ln g(\tilde\lambda;k_G)\,.
\end{equation} The finite part of the two-point function should behave as \begin{equation} \label{sjfklfhdsjs} g(\tilde\lambda;k_G)=\frac12\frac{\text{dim}G}{(1-\tilde\lambda^2)^2}\left(1+\frac{c_G}{k_G}\frac{Q(\tilde\lambda)}{(1-\tilde\lambda)(1+\tilde\lambda)^3}\right)\,, \end{equation} where the zeroth order in the $\nicefrac1k$ expansion was computed in \cite{Kutasov:1989dt,Georgiou:2015nka}. The poles of the sub-leading part at $\tilde\lambda=\pm1$, and their order, are chosen such that the line element \begin{equation} \label{lfkfdkdld} \text{d}\ell^2=g(\tilde\lambda;k_G)\text{d}\tilde\lambda^2\,, \end{equation} is finite in the PCM and pseudo-dual limits \eqref{limit1} and \eqref{limit2}, respectively. The function $Q(\tilde\lambda)$ is everywhere analytic with $Q(0)=0$, so that it agrees with the CFT result \cite{Georgiou:2018vbb} \begin{equation} \label{jkshfuswq} g(0;k_G)=\frac12\text{dim}G\,. \end{equation} Demanding that \eqref{lfkfdkdld} is invariant under the symmetry \eqref{sfjsldjsssk123} leads to the condition \begin{equation} \tilde\lambda^4Q(\tilde\lambda^{-1})=Q(\tilde\lambda)\,, \end{equation} having as a solution a quartic polynomial which, upon using \eqref{jkshfuswq}, takes the form \begin{equation} \label{fjjdjkd} Q(\tilde\lambda)=\tilde\lambda\left(c_1+c_2\tilde\lambda+c_1\tilde\lambda^2\right)\,. \end{equation} To proceed we note that the Zamolodchikov metric receives no finite contribution up to ${\cal O}(\tilde\lambda^2)$ \cite{Georgiou:2015nka,Georgiou:2016zyo}, fixing $c_{1,2}=0$. Then \eqref{sjfklfhdsjs} simplifies to \begin{equation} \label{sjfklfhdsjsa} g(\tilde\lambda;k_G)=\frac12\frac{\text{dim}G}{(1-\tilde\lambda^2)^2}\, , \end{equation} that is, the possible ${\cal O}(\nicefrac{1}{k_G})$-correction vanishes. \subsection{$C$-function and the anomalous dimension of the current bilinear} \label{isoCfunction} Next we compute the $C$-function from Zamolodchikov's $c$-theorem \cite{Zamolodchikov:1986gt}, following the procedure introduced in the present context in \cite{Georgiou:2018vbb}. We have that \cite{Zamolodchikov:1986gt} \begin{equation} \frac{\text{d}C}{\text{d}t}=\beta^i\partial_i C=24g_{ij}\beta^i\beta^j\geqslant0\,. \end{equation} For a single coupling $\tilde\lambda$, the above simplifies to the first-order ordinary differential equation \begin{equation} \partial_{\tilde\lambda} C_\text{single}(\tilde\lambda;k)=24g_{\tilde\lambda\tilde\lambda}\beta^{\tilde\lambda}(\tilde\lambda)\,,\quad g_{\tilde\lambda\tilde\lambda}=g(\tilde\lambda;k_G)\,, \end{equation} with solution \begin{equation} \label{djeksiskasjw} C_\text{single}(\tilde\lambda;k_G)=c_\text{UV}+24\int_0^{\tilde\lambda}\text{d}\tilde\lambda_1\, g(\tilde\lambda_1;k_G)\beta^{\tilde\lambda}(\tilde\lambda_1)\,, \end{equation} where $c_\text{UV}$ is the central charge of the UV CFT $G_k\times G_k$, namely \begin{equation} c_\text{UV}=2\, \frac{2k\text{dimG}}{2k+c_G}=\text{dimG}\Big(2\, -\frac{c_G}{k_G}\Big)\,. \end{equation} Integrating \eqref{djeksiskasjw}, we find that \begin{equation} \label{Cfunctioniso} C_\text{single}(\tilde\lambda;k_G)=2\text{dimG}-\frac{c_G\text{dimG}}{k_G}\frac{1+2\tilde\lambda}{(1-\tilde\lambda)(1+\tilde\lambda)^3}- \frac{3c_G^2\text{dimG}}{2k^2_G}\frac{\tilde\lambda^4}{(1-\tilde\lambda)^2(1+\tilde\lambda)^6}\,. \end{equation} This is in agreement with the results of \cite{Georgiou:2018vbb} to leading order in $\nicefrac{1}{k_G}$.
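The integration leading to \eqref{Cfunctioniso} can also be verified symbolically; the following SymPy sketch (ours) checks that it satisfies $\partial_{\tilde\lambda}C_\text{single}=24\,g\,\beta^{\tilde\lambda}$ exactly, with the correct UV value:
\begin{verbatim}
# SymPy sketch (ours): (Cfunctioniso) satisfies dC/dl = 24 g beta exactly,
# with beta from (fdjfhshdjs1), g from (sjfklfhdsjsa) and C(0) = c_UV.
import sympy as sp

l, cG, kG, dimG = sp.symbols('l c_G k_G dimG')
beta = (-cG*l**2/(2*kG*(1 + l)**2)
        - cG**2*l**3*(1 - l + l**2)/(2*kG**2*(1 - l)*(1 + l)**5))
g = dimG/(2*(1 - l**2)**2)
C = (2*dimG - cG*dimG/kG * (1 + 2*l)/((1 - l)*(1 + l)**3)
     - 3*cG**2*dimG/(2*kG**2) * l**4/((1 - l)**2*(1 + l)**6))
assert sp.simplify(sp.diff(C, l) - 24*g*beta) == 0
assert sp.simplify(C.subs(l, 0) - dimG*(2 - cG/kG)) == 0
\end{verbatim}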
In addition, \eqref{Cfunctioniso} is invariant under \eqref{sfjsldjsssk123} to order $\nicefrac{1}{k_G^2}$, up to a constant \begin{equation} C_\text{single}(\tilde\lambda^{-1};-k_G)= C_\text{single}(\tilde\lambda;k_G)+\frac{c_G\,\text{dimG}}{k_G}\,. \end{equation} Note the absence of non-analytic terms with branch cuts, i.e. $\ln\frac{1-\lambda}{1+\lambda}$, in the expression of the $C$-function. This is due to the choice of the parameter $d_0=-\nicefrac12$ in \eqref{fdjfhshdjs}, as has already been noted. Such terms cannot appear, as can be seen from a free field expansion around the identity group element \cite{Georgiou:2019aon}. Finally, we compute the anomalous dimension of ${\cal O}$ to order $\nicefrac{1}{k_G^2}$. Plugging \eqref{fdjfhshdjs1} and \eqref{sjfklfhdsjsa} into \eqref{anomalouscompositegeneral}, we find that \begin{equation} \label{anomalousisotwo} \boxed{\gamma^{({\cal O})}=-\frac{2c_G}{k_G}\frac{\tilde\lambda(1-\tilde\lambda+\tilde\lambda^2)}{(1-\tilde\lambda)(1+\tilde\lambda)^3}- \frac{c_G^2}{k_G^2}\frac{\tilde\lambda^2(3-2 \tilde\lambda +\tilde\lambda^2)(1-2 \tilde\lambda +3\tilde\lambda^2)}{(1-\tilde\lambda)^2(1+\tilde\lambda)^6} } \ . \end{equation} This is in agreement with the leading order in $\nicefrac1k$ results of \cite{Georgiou:2015nka,Georgiou:2018vbb}. In addition, \eqref{anomalousisotwo} is invariant under the symmetry \eqref{sfjsldjsssk123} to order $\nicefrac{1}{k_G^2}$. Again, invariance is achieved for each term separately. \subsection{Connection with the $\lambda$-deformed $G_k$} \label{group.connection} Let us now consider the $\lambda$-deformed $\sigma$-model of $\displaystyle G_k$ \cite{Sfetsos:2013wia}. This model shares the same $\beta$-function, Zamolodchikov metric and anomalous dimension as the $\lambda$-deformed $G_k\times G_k$. The equivalence is based on the fact that the perturbation of the current algebra CFTs is driven by the same current bilinears \cite{Georgiou:2017aei}. However, the UV fixed point differs and its central charge is given by \begin{equation} c_\text{UV}=\frac{2k\text{dimG}}{2k+c_G}=\text{dimG}\Big(1-\frac{c_G}{2k_G}\Big)\,. \end{equation} Thus, the corresponding $C$-function will be different from \eqref{Cfunctioniso}. It can be found through \eqref{fdjfhshdjs1}, \eqref{sjfklfhdsjsa} and \eqref{djeksiskasjw} and reads \begin{equation} \label{Cfunctioniso.original} C(\tilde\lambda;k_G)=\text{dimG}-\frac{c_G\text{dimG}}{2k_G}\frac{1+2\tilde\lambda+2\tilde\lambda^3+\tilde\lambda^4}{(1-\tilde\lambda)(1+\tilde\lambda)^3}- \frac{3c_G^2\text{dimG}}{2k^2_G}\frac{\tilde\lambda^4}{(1-\tilde\lambda)^2(1+\tilde\lambda)^6}\,. \end{equation} Note that this is invariant under the symmetry \eqref{cksks}. \section{The coset space} \label{coset} We now turn to the discussion of the coset case. Let us consider the single-level action \eqref{defactigen} for an anisotropic coupling $\lambda_{ab}$, where now we take the group elements $\frak{g}_{1,2}\in SU(2)$ and $\lambda_{ab}=\text{diag}(\lambda,\lambda,\lambda_3)$. We would like to compute its RG flow equations at two-loop order in the $\nicefrac1k$ expansion. It is a tour de force computation, analogous to the one performed in App. \ref{RGequalappend}.
The end result is that the model is renormalizable at order $\nicefrac{1}{k^2}$, there is no need for a diffeomorphism or the addition of a counter term, and its $\beta$-functions read \begin{eqnarray} \label{betaonetwosu2} && \frac{\text{d}\lambda}{\text{d}t}=-\frac{2\lambda(\lambda_3-\lambda^2)}{k(1+\lambda_3)(1-\lambda^2)}-\frac{4\lambda^3(3\lambda^2+4\lambda^4-2\lambda_3-10\lambda^2\lambda_3+5\lambda_3^2-\lambda^2\lambda_3^2+\lambda_3^4)}{k^2(1+\lambda_3)^2(1-\lambda^2)^3}\,, \nonumber\\ && \frac{\text{d}\lambda_3}{\text{d}t}=-\frac{2\lambda^2(1-\lambda_3)^2}{k(1-\lambda^2)^2}+\frac{8\lambda^2(1-\lambda_3)^2(\lambda^4-(3-\lambda_3)\lambda_3\lambda^2+\lambda_3^2)}{k^2(1+\lambda_3)(1-\lambda^2)^4}\,. \end{eqnarray} As a consistency check, the above result agrees with \eqref{betaonetwo} in the isotropic limit $\lambda_3=\lambda$, with $c_G=4$ for $SU(2)$ in our normalizations. \noindent Let us now consider $\lambda_3=1$, which is a consistent truncation of the RG flows \eqref{betaonetwosu2} \begin{equation} \label{fjhskdjklswq} \boxed{ \beta^\lambda(\lambda)=\frac{\text{d}\lambda}{\text{d}t}=-\frac{\lambda}{k}-\frac{4}{k^2}\frac{\lambda^3}{1-\lambda^2} }\ . \end{equation} It can be easily seen that \eqref{fjhskdjklswq} is invariant under the symmetry \eqref{cksks} (with $c_G=4$) \begin{equation} \label{jdkslsnsms} \lambda\to\lambda^{-1}\,,\quad k\to-k-4\,, \end{equation} to order $\nicefrac{1}{k^2}$. This $\beta$-function describes the RG flow from the UV CFT at $\lambda=0$ towards a strongly coupled model in the IR as $\lambda\to1^-$.\footnote{Analyzing the $\beta$-function \eqref{fjhskdjklswq} near $\lambda=1$, we obtain that \begin{equation*} \lambda=1-\frac{\kappa^2}{k}\,,\quad k\gg1\,,\quad \frac{\text{d}\kappa^2}{\text{d}t}=1+\frac{2}{\kappa^2}\,, \end{equation*} which matches the two-loop $\beta$-function \eqref{fhsjdjss} for the PCM on $S^2$, i.e. $\text{d}\ell^2=\kappa^2\left(\text{d}\vartheta^2+\sin^2\vartheta\text{d}\varphi^2\right)$. There is of course an associated limit taken in \eqref{action.coset} which gives the PCM for $S^2$ and three spectator bosons. This is most easily seen when one goes back to \eqref{PCMlimit} and sets $E_{33}=0$, since this corresponds to setting $\lambda_3=1$, as well as $E_{11}=E_{22}=\kappa^2$ and $E_{12}=E_{21}=0$. Obviously one may, more generally, have a symmetric space $G/H$ by choosing appropriately the matrix $E={\rm diag}(\mathbb{I}_{H},\kappa^2 \mathbb{I}_{G/H})$ in \eqref{PCMlimit}. } In what follows, we shall show that $\lambda_3=1$ corresponds to a parafermionic perturbation of the coset CFT $\displaystyle \frac{SU(2)_k\times SU(2)_k}{U(1)_k}$, a member of a class of coset CFTs discussed extensively in \cite{Guadagnini:1987ty}. Let us parametrize the group elements $\frak{g}_{1,2}$ as \begin{equation} \frak{g}_i=\text{e}^{i\sigma_3\frac{\varphi_i}{2}}\text{e}^{-i\sigma_2\frac{\vartheta_i}{2}}\text{e}^{i\sigma_3\frac{\psi_i}{2}}\,,\quad i=1,2\,, \end{equation} where $\sigma_a$ are the Pauli matrices, trace-normalized as $\text{Tr}(\sigma_a\sigma_b)=2\delta_{ab}$. Inserting the above parametrization and Eqs. \eqref{wzwacc} and \eqref{pfkdlddsks} (with $t_a=\nicefrac{\sigma_a}{\sqrt{2}}$) into \eqref{defactigen} for $\lambda_{ab}=\text{diag}(\lambda,\lambda,1)$, we find a five-dimensional target space $\sigma$-model, since the metric possesses the eigenvector $ \mathcal{X}=\partial_{\varphi_1}-\partial_{\psi_2} $ with vanishing eigenvalue.
To identify the corresponding isometry, we define $\psi=\varphi_1+\psi_2$ and we also relabel $\psi_1\to\varphi_1$, leading to the $\sigma$-model \begin{equation} \label{action.coset} S_\text{coset}=S_\text{CFT}+\frac{k\lambda}{4\pi}\int\text{d}^2\sigma\left(\Psi\bar\Psi+\Psi^\dagger\bar\Psi^\dagger\right)\,. \end{equation} In the above expression the coset CFT is $\displaystyle\frac{SU(2)_k\times SU(2)_k}{U(1)_k}$, whose metric and two-form field read \cite{PandoZayas:2000he} \begin{equation} \text{d}\ell^2=\left(\text{d}\psi+\cos\vartheta_1\text{d}\varphi_1+\cos\vartheta_2\text{d}\varphi_2\right)^2+\text{d}\vartheta_1^2+\sin^2\vartheta_1\text{d}\varphi_1^2 +\text{d}\vartheta_2^2+\sin^2\vartheta_2\text{d}\varphi_2^2 \end{equation} and \begin{equation} B=\left(\text{d}\psi+\cos\vartheta_1\text{d}\varphi_1\right)\wedge\left(\text{d}\psi +\cos\vartheta_2\text{d}\varphi_2\right)\, , \end{equation} where we have ignored an overall factor of $\nicefrac{k}{4\pi}$. The $(\Psi,\bar\Psi)$ are classical expressions for parafermionic operators given by \begin{equation} \label{fhjkj11} \begin{split} \Psi=\left(\partial_+\vartheta_1+i\sin\vartheta_1\,\partial_+\varphi_1\right)\text{e}^{-i(\psi/2+\bar\psi)}\,,\quad \bar\Psi=\left(\partial_-\vartheta_2+i\sin\vartheta_2\,\partial_-\varphi_2\right)\text{e}^{-i(\psi/2-\bar\psi)}\,, \end{split} \end{equation} and their complex conjugates $\Psi^\dagger$ and $\bar\Psi^\dagger$, respectively.\footnote{Note that the $\sigma$-model \eqref{action.coset} is invariant under the symmetry: $\lambda\to-\lambda\,,\quad \psi\to\pi+\psi\,.$ } Here $\bar\psi$ represents a non-local function of the angles, which effectively dresses the operators to ensure the conservation laws $\partial_-\Psi=0=\partial_+\bar\Psi$.\footnote{In particular, employing the equations of motion of \eqref{action.coset} leads to the non-local function $\bar\psi$ satisfying \begin{equation*} \partial_-\bar\psi=\frac12\partial_-\psi+\cos\vartheta_2\partial_-\varphi_2\ ,\quad \partial_+\bar\psi=-\frac12\partial_+\psi-\cos\vartheta_1\partial_+\varphi_1\,. \end{equation*} } As a consistency check, we have used the action \eqref{action.coset} and the two-loop RG flows \eqref{trosksk}, \eqref{djrjkdkd}, and re-derived the $\beta$-function of Eq. \eqref{fjhskdjklswq}. There is no need for a diffeomorphism or a counter term. Finally, we note the similarity of \eqref{fhjkj11} to the classical parafermions \cite{Bardacki:1990wj,Bardakci:1990ad} corresponding to the exact coset $SU(2)_k/U(1)_k$ CFT \cite{Fateev:1985mm}. \subsection{The Zamolodchikov metric} \label{coset.Zamo} Similarly to \eqref{sjfklfhdsjs}, the finite part of the two-point function should behave as \begin{equation} \label{sjfklfhdsjsqq} g(\lambda;k)=\frac{1}{(1-\lambda^2)^2}\left(1+\frac{1}{k}\frac{Q(\lambda)}{1-\lambda^2}\right)\,, \end{equation} where the pole structure in \eqref{sjfklfhdsjsqq} is inspired by the $\beta$-function in \eqref{fjhskdjklswq}. Demanding that the line element \begin{equation} \text{d}\ell^2=g(\lambda;k)\text{d}\lambda^2\,, \end{equation} is invariant under the symmetry \eqref{cksks} leads to the second-degree polynomial \begin{equation} Q(\lambda)=c_0+c_1\lambda+c_0\lambda^2\,. \end{equation} The constant $c_0=0$, since the unperturbed Zamolodchikov metric is $k$-independent. The order-$\lambda$ term also vanishes, since it is proportional to correlators involving an odd number (three) of parafermions. Therefore, $c_1=0$ as well.
Therefore, \eqref{sjfklfhdsjsqq} is simply given by the $k$-independent part \begin{equation} \label{sjfklfhdsjsqqq} g(\lambda;k)=\frac{1}{(1-\lambda^2)^2}\,. \end{equation} \subsection{$C$-function and the anomalous dimension of the parafermionic bilinear} \label{cfunction.double.parafermionic} Similarly to Sec.~\ref{isoCfunction}, $c_\text{UV}$ is the central charge of the coset CFT $\frac{SU(2)_k\times SU(2)_k}{U(1)_k}$ at $\lambda=0$, namely \begin{equation} c_\text{UV}=\frac{6k}{k+2}-1= 5-\frac{12}{k}+\frac{24}{k^2}+{\cal O}\left(\frac{1}{k^3}\right)\,, \end{equation} and the $C$-function can be found through \eqref{djeksiskasjw} \begin{equation} \label{jdadksdjsks} C_\text{single}(\lambda,k)=5-\frac{12}{k}\frac{1}{1-\lambda^2}+\frac{24}{k^2}\frac{1-2\lambda^2}{(1-\lambda^2)^2}\,, \end{equation} where we have used \eqref{fjhskdjklswq} and \eqref{sjfklfhdsjsqqq}. It is invariant under the symmetry \eqref{jdkslsnsms} to order $\nicefrac{1}{k^2}$, up to an additive constant \begin{equation} C_\text{single}(\lambda^{-1},-k)= C_\text{single}(\lambda,k)+\frac{12}{k}-\frac{24}{k^2}\,. \end{equation} We are now in a position to compute the anomalous dimension of the parafermionic bilinear ${\cal O}$, which was given in \eqref{anomalouscompositegeneral}. The end result reads \begin{equation} \label{wqoqosksap} \boxed{ \gamma^{({\cal O})}=-\frac{2}{k}\frac{1+\lambda^2}{1-\lambda^2}-\frac{8}{k^2}\frac{\lambda^2(3+\lambda^2)}{(1-\lambda^2)^2} }\ , \end{equation} which is invariant under the symmetry \eqref{cksks} to order $\nicefrac{1}{k^2}$. There is a non-trivial check of the above result. Namely, at the UV CFT point $\lambda=0$ one should obtain the exact conformal dimension of the parafermionic bilinear, $\Delta=2+\gamma^{({\cal O})}=2-\nicefrac2k$, which is indeed the case. \subsection{Connection with the $\lambda$-deformed $SU(2)_k/U(1)_k$} \label{coset.connection} Let us now consider the $\lambda$-deformed $\sigma$-model of $\displaystyle SU(2)_k/U(1)_k$ \cite{Sfetsos:2013wia}.\footnote{The two-loop RG equation of this model was also recently considered in \cite{Hoare:2019ark}. The background metric was modified by a quantum correction (determinant) arising from the integration of the gauge fields. It was found that the level $k$ runs with the RG scale.} This model shares the same $\beta$-function, Zamolodchikov metric and anomalous dimension as the $\lambda$-deformed $\displaystyle \frac{SU(2)_k\times SU(2)_k}{U(1)_k}$. The reason is essentially that the perturbation in both cases is driven by parafermion bilinears which have the same quantum properties, i.e. the same OPEs. The proof goes along the lines of the similar case in which the perturbation of current algebra CFTs is driven by the same current bilinears \cite{Georgiou:2017aei}. However, the UV fixed point differs, so that its central charge is given by \begin{equation} c_\text{UV}=\frac{3k}{k+2}-1= 2-\frac{6}{k}+\frac{12}{k^2}+{\cal O}\left(\frac{1}{k^3}\right)\, . \end{equation} Hence, the corresponding $C$-function will be different from \eqref{jdadksdjsks}. It can be found through \eqref{fjhskdjklswq}, \eqref{sjfklfhdsjsqqq} and \eqref{djeksiskasjw} and reads \begin{equation} C(\lambda,k)=2-\frac{6}{k}\frac{1+\lambda^2}{1-\lambda^2}+\frac{12}{k^2}\frac{1-2\lambda^2-\lambda^4}{(1-\lambda^2)^2}\ . \end{equation} Note that this is invariant under the symmetry \eqref{jdkslsnsms}.
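All of the coset expressions above can be verified symbolically; the following SymPy sketch (ours) checks the $C$-function, the anomalous dimension and the UV conformal dimension:
\begin{verbatim}
# SymPy sketch (ours) for the coset: with beta from (fjhskdjklswq) and the
# metric (sjfklfhdsjsqqq), the C-function (jdadksdjsks) obeys
# dC/dl = 24 g beta, and (anomalouscompositegeneral) gives (wqoqosksap);
# at l = 0 one recovers gamma = -2/k, i.e. Delta = 2 - 2/k.
import sympy as sp

l, k = sp.symbols('l k')
beta = -l/k - 4*l**3/(k**2*(1 - l**2))
g = 1/(1 - l**2)**2
C = 5 - 12/(k*(1 - l**2)) + 24*(1 - 2*l**2)/(k**2*(1 - l**2)**2)
gamma = 2*sp.diff(beta, l) + beta*sp.diff(sp.log(g), l)
gamma_claim = (-2*(1 + l**2)/(k*(1 - l**2))
               - 8*l**2*(3 + l**2)/(k**2*(1 - l**2)**2))
assert sp.simplify(sp.diff(C, l) - 24*g*beta) == 0
assert sp.simplify(gamma - gamma_claim) == 0
assert gamma_claim.subs(l, 0) == -2/k
\end{verbatim}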
\section{Concluding remarks} \label{conclu} In this paper we have uncovered an exact symmetry in the space of couplings of the $\lambda$-deformed $\sigma$-models constructed in \cite{Sfetsos:2013wia}. This goal was achieved by making use of one of the models constructed in \cite{Georgiou:2016urf,Georgiou:2017jfi}. More precisely, it is due to the fact that the single $\lambda$-deformed model and the doubly $\lambda$-deformed model with one of the deformation parameters set to zero share the same $\beta$-functions to all orders in both the $\lambda$ and $\nicefrac1k$ expansions \cite{Georgiou:2017aei}. For the group case this symmetry is simply stated by \eqref{sfjsldjsssk123}, with the definition \eqref{kgkg}. Due to its simplicity, it is conceivable that we may use it to push the computation of loop corrections to the $\beta$-function, operator anomalous dimensions and Zamolodchikov's $C$-function even further. This will be done using also some minimal input from conformal perturbation theory. This approach seems to be the most promising way to make progress in this direction, since attempting to use the gravitational approach in obtaining loop corrections higher than two is really cumbersome. Another promising approach could be to use the free field expansion of the $\lambda$-deformed action in \cite{Georgiou:2019aon} and to study, using standard field-theoretical methods, the renormalization of the interaction vertices. An advantage of this approach is that all the dependence on the deformation parameter $\lambda$ is already encoded in the vertices. Note that similar comments hold for the symmetric coset case as well. \noindent We have calculated the anomalous dimension of the composite operator $J\bar J$ that drives the perturbation away from the conformal point, as an exact function of $\lambda$ and at two loops in the $\nicefrac1k$ expansion. We have also calculated Zamolodchikov's $C$-function at the same order. It will be very interesting to extend our results to the single current as well as to composite current operators of higher rank. In this direction the method developed in \cite{Georgiou:2019jcf} should be useful. An important comment is in order. One may wonder if the relation \eqref{kgkg} may get further $\nicefrac1k$-corrections with coefficients that may be $\lambda$-dependent. Recalling that $k_G$ will be the coefficient in the topological WZ term, for a well defined theory it has to be an integer. Therefore, since $k$ is an integer itself, such corrections are not expected/allowed. To conclude, we conjecture that there exists a scheme where the symmetry \eqref{sfjsldjsssk123}, with the definition \eqref{kgkg}, persists to all orders in the $\nicefrac{1}{k}$ expansion. We have also seen that the $\sigma$-model \eqn{defactigen} is renormalizable without the need to correct the target space geometry, for the case of an isotropic coupling matrix, as well as for an anisotropic coupling in the $SU(2)$ case. For an isotropic coupling matrix this fact was also observed in \cite{Hoare:2019mcc}. Finally, we quote a partial result concerning the isotropic deformation of the two-level action \cite{Georgiou:2017jfi} \begin{equation} S_{k_1,k_2}(\frak{g}_1,\frak{g}_2) = S_{k_1}(\frak{g}_1) + S_{k_2}(\frak{g}_2) + {k\lambda\over \pi} \int \text{d}^2\sigma\, {\cal O}\,,\quad {\cal O}=J^a_{1+}\,J^a_{2-}\, , \label{defactigendif} \end{equation} in which, in contrast to \eqref{defactigen}, the two levels $k_1$ and $k_2$ are not equal. In the above action $k=\sqrt{k_1k_2}$ and we also define the parameter $\lambda_0=\sqrt{\frac{k_1}{k_2}}<1$.
These models interpolate between two exact CFTs, namely $G_{k_1}\times G_{k_2}$ at $\lambda=0$ and $G_{k_2-k_1}\times G_{k_1}$ at $\lambda=\lambda_0$, respectively \cite{Georgiou:2017jfi}. The computation performed in App. \ref{RGequalappend} reveals that the model is renormalizable at order $\nicefrac{1}{k^2}$ and that there is no need for a diffeomorphism or the addition of a counter term. Its $\beta$-function reads \begin{equation} \label{betaaniso} \begin{split} \beta^\lambda(\lambda;\lambda_0) & = \frac{\text{d}\lambda}{\text{d}t}=-\frac{c_G\lambda^2(\lambda-\lambda_0)(\lambda-\lambda_0^{-1})}{2k(1-\lambda^2)^2}\\ &+\frac{c_G^2\lambda^4(\lambda-\lambda_0)(\lambda-\lambda_0^{-1})((\lambda_0+\lambda_0^{-1})(1+5\lambda^2)-8\lambda-4\lambda^3)}{4k^2(1-\lambda^2)^5}\,. \end{split} \end{equation} The levels $k_{1,2}$ do not run, thus retaining their topological nature (also) at two-loop order. For equal levels \eqn{betaaniso} coincides with \eqref{betaonetwo}. Up to ${\cal O}(\nicefrac1k)$ the above expression is invariant under the symmetry $k_{1,2}\to -k_{2,1}$ and $\lambda \to \lambda^{-1}$. Extending this symmetry to two loops along the lines of Sec. \ref{singlegeometry} presents some technical challenges and work in this direction is in progress. \section*{Acknowledgements} We would like to thank Ben Hoare for useful correspondence. The work of G. Georgiou and K. Siampos has received funding from the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), under grant agreement No 15425.\\ The research of E. Sagkrioti is co-financed by Greece and the European Union (European Social Fund - ESF) through the Operational Programme ``Human Resources Development, Education and Lifelong Learning'' in the context of the project ``Strengthening Human Resources Research Potential via Doctorate Research'' (MIS-5000432), implemented by the State Scholarships Foundation (IKY). \begin{appendices} \section{Renormalization group flow at two-loops} \label{RGequalappend} The aim of this appendix is to work out the RG flow equations of the action \begin{equation} S_{k_1,k_2}(\frak{g}_1,\frak{g}_2)=S_{k_1}(\mathfrak{g}_1)+S_{k_2}(\mathfrak{g}_2)+\frac{k\lambda}{\pi}\int \text{d}^2\sigma J_{1+}^aJ_{2-}^a\,,\quad k=\sqrt{k_1k_2}\,, \end{equation} which is nothing but the action \eqref{defactigendif}. From the above we find the line element \begin{equation} \text{d}s^2=R^aR^a+\lambda_0^{-2}L^{\hat{a}} L^{\hat{a}} +2\lambda_0^{-1}\lambda R^aL^{\hat{a}}\,,\quad \lambda_0=\sqrt{\frac{k_1}{k_2}}\,, \end{equation} and the two-form \begin{equation} B=B_0+\lambda_0^{-1}\lambda R^a\wedge L^{\hat a}\ , \end{equation} where $B_0$ is the two-form which corresponds to the two WZW models at levels $k_{1,2}$ with \begin{equation} H_0=\text{d}B_0=-\frac16 f_{abc}\left(R^a\wedge R^b\wedge R^c+\lambda_0^{-2} L^{\hat a}\wedge L^{\hat b}\wedge L^{\hat c}\right)\,. \end{equation} In the above we have disregarded an overall $\frac{k_1}{2\pi}$ factor and the Maurer--Cartan one-forms are given by \begin{equation} \begin{split} R^a=-i\text{Tr}(t^a\text{d}\frak{g}_1\frak{g}_1^{-1}),\qquad L^{\hat{a}}=-i\text{Tr}(t^a\frak{g}^{-1}_2\text{d}\frak{g}_2)\,,\\ \text{d}R^a=-\frac{1}{2}f_{abc}R^{{b}}\wedge R^{{c}},\qquad \text{d}L^{\hat{a}}=\frac{1}{2}f_{abc}L^{\hat{b}}\wedge L^{\hat{c}}\,. \end{split} \end{equation} Here, the unhatted and hatted indices denote the Maurer--Cartan one-forms evaluated at the group elements $\frak{g}_1$ and $\frak{g}_2$, respectively.
By introducing the vielbeins \begin{equation} \text{e}^a=R^a,\quad \text{e}^{\hat{a}}=\lambda R^a+\lambda_0^{-1}L^{\hat{a}} \label{veilbeins} \end{equation} and the double index notation $A=(a,\hat{a})$, the line element can be written as \begin{equation} \label{fskskfhjsk} \text{d}s^2=(1-\lambda^2) \text{e}^a\text{e}^a+\text{e}^{\hat{a}}\text{e}^{\hat{a}}=G_{AB}\,\text{e}^A\text{e}^B\,. \end{equation} The spin connection and the torsion for the action \eqref{defactigendif} have been found in Eqs. (2.14) and (2.16) of \cite{Sagkrioti:2018rwg}. For an isotropic coupling $\lambda_{ab}=\lambda\delta_{ab}$ they read \begin{align} \begin{split} &\omega_{ab}=-\frac{1}{2}(1-\lambda^2)f_{abc}\text{e}^c+\frac{\lambda}{2}(1-\lambda_0\lambda)f_{abc}\text{e}^{\hat{c}}\,,\\ &\omega_{\hat{a}b}=\omega_{a\hat{b}}=\frac{\lambda}{2}(\lambda_0\lambda-1)f_{abc}\text{e}^c\,,\\ &\omega_{\hat{a}\hat{b}}=-\lambda_0\lambda f_{abc}\text{e}^c+\frac{\lambda_0}{2}f_{abc}\text{e}^{\hat{c}}\ , \end{split} \end{align} where we note that, since the metric \eqref{fskskfhjsk} is constant, $\omega_{AB}$ is antisymmetric. Also \begin{align} \begin{split} &H=-\frac{1}{6}\left(1-\lambda^2(3-2\lambda_0\lambda)\right)f_{abc}\,\text{e}^a\wedge{}\text{e}^b\wedge \text{e}^c\\ &\phantom{xxx}-\frac{\lambda}{2}(1-\lambda_0\lambda)f_{abc}\,\text{e}^{\hat{a}}\wedge \text{e}^b \wedge \text{e}^c-\frac{\lambda_0}{6}f_{abc}\,\text{e}^{\hat{a}}\wedge \text{e}^{\hat{b}}\wedge \text{e}^{\hat{c}}\,. \end{split} \end{align} For the two-loop computation, we are going to need the torsionfull spin connection $\omega^-_{AB}$ \begin{equation} \omega^-_{AB}=\omega^-_{AB|C}\text{e}^C=\left(\omega_{AB|C}-\frac12\,H_{ABC}\right)\text{e}^C\,, \end{equation} which in terms of components is given by \cite{Sagkrioti:2018rwg} \begin{align} \begin{split} &\omega^-_{ab}=\lambda^2(\lambda_0\lambda-1)f_{abc}\text{e}^c+\lambda(1-\lambda_0\lambda)f_{abc}\text{e}^{\hat{c}}\,,\\ &\omega^-_{\hat{a}b}=\omega^-_{a\hat{b}}=0\,,\\ &\omega^-_{\hat{a}\hat{b}}=-\lambda_0\lambda f_{abc}\text{e}^c+\lambda_0f_{abc}\text{e}^{\hat{c}}\,. \end{split} \end{align} We can now compute the torsionfull Riemann two-form \begin{equation} \Omega^-_{AB}=\frac{1}{2}R^-_{ABCD}\,\text{e}^C\wedge \text{e}^D=\text{d}\omega^-_{AB}+\omega^-_{AC}\wedge \omega^{-C}{}_B\ , \end{equation} whose components read \begin{equation} R^-_{ABCD}=\left(\omega^K{}_{C|D}-\omega^K{}_{D|C}\right)\omega^-_{AB|K}+\omega^-_{AK|C}\omega^{-K}{}_{B|D}- \omega^-_{AK|D}\omega^{- K}{}_{B|C}\,, \label{Riemann} \end{equation} where we have used that the $\omega_{AB|C}$'s are constants. Employing the above and \eqref{Riemann}, we find the components of the torsionfull Riemann tensor \begin{equation} \begin{split} &R^-_{abcd}=R_1 f_{abe}f_{cde}\ , \quad R^-_{abc\hat{d}}=R_2 f_{abe}f_{cde}\ ,\quad R^-_{ab\hat{c}\hat{d}}=R_3 f_{abe}f_{cde}\ , \\ &R_1=\lambda^3\Lambda\,,\quad R_2=-\lambda^2\Lambda\ , \quad R_3=\lambda\Lambda\ , \quad \Lambda=\frac{(\lambda-\lambda_0)(\lambda_0\lambda-1)}{1-\lambda^2}\ , \end{split} \end{equation} while the other components vanish identically.
We are also going to need $H^2_{AB}=H_{ACD}H_B{}^{CD}$, where \begin{align} \begin{split} &(H^2)_{ab}=c_GH_1\delta_{ab}\ ,\qquad H_1=\frac{1-4\lambda^2+\lambda^4\left(7+2\lambda_0\big(\lambda_0-\lambda(4-\lambda_0\lambda)\big)\right)}{(1-\lambda^2)^2}\ , \\ &(H^2)_{\hat{a}b}=c_GH_2\delta_{ab}\ , \qquad H_2=\frac{\lambda(1-\lambda_0\lambda)\big(1-\lambda^2(3-2\lambda_0\lambda)\big)}{(1-\lambda^2)^2}\ , \\ &(H^2)_{\hat{a}\hat{b}}=c_GH_3\delta_{ab} \ , \qquad H_3=\frac{\lambda^2(1-\lambda_0\lambda)^2+\lambda_0^2(1-\lambda^2)^2}{(1-\lambda^2)^2}\ . \end{split} \end{align} We are now in a position to compute the two-loop $\beta$-functions of \eqref{defactigendif}. These are given by \begin{equation} \label{trosksk} \frac{\text{d}}{\text{d}t}\left(G_{MN}+B_{MN}\right)=\left(\beta^{(1)}_{AB}+\beta^{(2)}_{AB}\right)\text{e}^A{}_M\text{e}^B{}_N\,, \end{equation} where $t=\ln\mu^2$, $\mu$ is the RG scale and \cite{Curtright:1984dz,Braaten:1985is,Hull:1987pc,Hull:1987yi,Metsaev:1987bc,Metsaev:1987zx,Osborn:1989bu}\footnote{We are using Eq.(7) in Hull--Townsend \cite{Hull:1987pc} or, equivalently, Eq.(4.26) in Osborn \cite{Osborn:1989bu}. Note that in our conventions of the generalized Riemann tensor we replace $+\to -$ and we also rescale $H \to \nicefrac12 H$, due to our different normalization of the $H = \text{d}B$ field.} \begin{equation} \label{djrjkdkd} \beta^{(1)}_{AB}=R^-_{AB}\,,\quad \beta^{(2)}_{AB}=R^-_{ACDE}\left(R^{-CDE}{}_B-\frac{1}{2}R^{-DEC}{}_B\right)+\frac{1}{2}(H^2)^{CD}R^-_{CABD}\,. \end{equation} To proceed, we analyze the left-hand side of \eqref{trosksk}, which equals \begin{equation} \label{qiakmd} \frac{\text{d}}{\text{d}t}\left(G_{MN}+B_{MN}\right)=2\frac{\text{d}\lambda}{\text{d}t}\left(\text{e}^a{}_M\text{e}^{\hat{a}}{}_N-\lambda\,\text{e}^a{}_M\text{e}^a{}_N\right)\ . \end{equation} The one-loop contribution $\beta^{(1)}$ was analyzed in \cite{Sagkrioti:2018rwg} and we simply present the end result \begin{equation} \label{qiakmd1} \beta^{(1)}_{ab}=c_G\delta_{ab}\frac{R_1}{1-\lambda^2}\,,\quad \beta^{(1)}_{\hat{a}b}=\beta^{(1)}_{\hat{a}\hat{b}}=0\,,\quad \beta^{(1)}_{a\hat{b}}=c_G\delta_{ab}\frac{R_2}{1-\lambda^2}\,, \end{equation} with $\beta^{(1)}_{ab}=-\lambda\beta^{(1)}_{a\hat{b}}$. Then, we move to the two-loop contribution $\beta_{AB}^{(2)}$. Employing the above results we find\footnote{Here we have used the identity $f_{aa_1a_2}f_{ba_2a_3}f_{ca_3a_1}=\frac{c_G}{2}f_{abc}$, easily proved using the Jacobi identity.} \begin{align} \label{qiakmd2} \begin{split} &\beta^{(2)}_{ab}=c_G^2\delta_{ab}\left(\frac{R_1^2}{(1-\lambda^2)^3}+\frac{1}{2}\frac{R_2^2-H_1R_1}{(1-\lambda^2)^2}-\frac{1}{2}\frac{H_2R_2}{1-\lambda^2}\right)\ , \\ &\beta^{(2)}_{\hat{a}b}=\beta^{(2)}_{\hat{a}\hat{b}}=0\ , \\ &\beta^{(2)}_{a\hat{b}}=c_G^2\delta_{ab}\left(\frac{R_1R_2}{(1-\lambda^2)^3}+\frac{1}{2}\frac{R_2R_3-H_1R_2}{(1-\lambda^2)^2}-\frac{1}{2}\frac{H_2R_3}{1-\lambda^2}\right)\ , \end{split} \end{align} where $\beta^{(2)}_{ab}=-\lambda\beta^{(2)}_{a\hat{b}}$.
Inserting \eqref{qiakmd}, \eqref{qiakmd1} and \eqref{qiakmd2} into \eqref{trosksk}, and reinserting the overall $k_1$ factors in the line element and two-form field, one finds \begin{align} \label{betaonetwoappendaniso} \begin{split} \beta^\lambda(\lambda;\lambda_0)=& \frac{\text{d}\lambda}{\text{d}t}=-\frac{c_G}{2k}\frac{\lambda^2(\lambda-\lambda_0)(\lambda-\lambda_0^{-1})}{(1-\lambda^2)^2}\\ &+\frac{c_G^2}{4k^2}\frac{\lambda^4(\lambda-\lambda_0)(\lambda-\lambda_0^{-1})\big((\lambda_0+\lambda_0^{-1})(1+5\lambda^2)-8\lambda-4\lambda^3\big)}{(1-\lambda^2)^5}\ \end{split} \end{align} and the levels $k_{1,2}$ do not flow. \subsection{Equal levels} \label{limits} For equal levels $k_1=k=k_2$, \eqref{betaonetwoappendaniso} drastically simplifies to \begin{equation} \label{betaonetwoappend} \beta^\lambda(\lambda)={\text{d}\lambda\over \text{d}t}= -{c_G\over 2 k} {\lambda^2\over (1+\lambda)^2} + {c_G^2\over 2 k^2 } {\lambda^4(1-2\lambda)\over (1-\lambda)(1+\lambda)^5} \ . \end{equation} Let us now analyze two interesting limits of the above expression, around $\lambda=1$ and $\lambda=-1$ with $k\to\infty$; recall that the level $k$ retains its topological nature at two loops in the $\nicefrac{1}{k}$ expansion. These limits were studied in detail in Sec.~\ref{into.lambda.models} and they correspond to the isotropic PCM and the pseudo-dual chiral model, respectively.\footnote{Analogous limits exist for the single $\lambda$-deformed model \cite{Sfetsos:2013wia,Georgiou:2016iom}, corresponding to the non-abelian T-dual of the isotropic PCM and the pseudo-dual chiral model respectively.} In particular, expanding around $\lambda=1$ and $\lambda=-1$ one finds \begin{equation} \label{limit1} \lambda=1-\frac{\kappa^2}{k}\ ,\quad k\gg1\,,\quad {\text{d}\kappa^2\over \text{d}t}=\frac{c_G}{8}+\frac{c_G^2}{64\kappa^2}\ \end{equation} and \begin{equation} \label{limit2} \lambda=-1+\frac{1}{b^{2/3}k^{1/3}}\ , \quad k\gg1\ , \quad {\text{d}b\over \text{d}t}=\frac{3}{4}\,c_Gb^3-\frac{9}{8}c_G^2b^5\ . \end{equation} In what follows, we shall prove that the above limiting expressions are in agreement with those found from the PCM and the pseudo-dual chiral model. Let us consider the action \eqref{PCMlimit} for an isotropic PCM with $E_{ab}=2\kappa^2\delta_{ab}$, where $\kappa$ is a coupling constant. This is a pure metric non-linear $\sigma$-model, whose $\beta$-function equations drastically simplify to \cite{Ecker:1972bm,Honerkamp:1971sh,Friedan:1980jf,Friedan:1980jm} \begin{equation} \label{fhsjdjss} \frac{\text{d}G_{\mu\nu}}{\text{d}t}=R_{\mu\nu}-R_{\mu\kappa\rho\sigma}R^{\rho\sigma\kappa}{}_\nu\ , \end{equation} where $G_{\mu\nu}=2\kappa^2R^a_\mu R^a_\nu$. Using the above we easily find \begin{equation} {\text{d}\kappa^2\over \text{d}t}=\frac{c_G}{8}+\frac{c_G^2}{64\kappa^2}\ , \end{equation} which is in agreement with \eqref{limit1}. \noindent Let us now consider the action \eqref{pseudolimit} for the pseudo-dual chiral model \cite{Nappi:1979ig}, with $\displaystyle G_{ab}=\frac{\delta_{ab}}{2b^{2/3}}$ and $\displaystyle B_{ab}=\frac{1}{6}f_{abc}v^c$. This is a torsionfull $\sigma$-model whose $\beta$-functions were given in \eqref{trosksk}, \eqref{djrjkdkd}. Using the above, one finds \begin{equation} {\text{d}b\over \text{d}t}=\frac{3}{4}\,c_Gb^3-\frac{9}{8}c_G^2b^5\,, \end{equation} which is in agreement with \eqref{limit2}.
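The reductions and limits quoted in this appendix can also be verified with a few lines of SymPy (a sketch; variable names are ours):
\begin{verbatim}
# SymPy sketch (ours) of the reductions and limits quoted above.
import sympy as sp

l, l0, cG, k, kap, b, K = sp.symbols('l lambda_0 c_G k kappa b K',
                                     positive=True)

beta_aniso = (-cG*l**2*(l - l0)*(l - 1/l0)/(2*k*(1 - l**2)**2)
              + cG**2*l**4*(l - l0)*(l - 1/l0)
                *((l0 + 1/l0)*(1 + 5*l**2) - 8*l - 4*l**3)
                /(4*k**2*(1 - l**2)**5))
beta_equal = (-cG*l**2/(2*k*(1 + l)**2)
              + cG**2*l**4*(1 - 2*l)/(2*k**2*(1 - l)*(1 + l)**5))

# (i) equal levels: lambda_0 = 1 reproduces (betaonetwoappend)
assert sp.simplify(beta_aniso.subs(l0, 1) - beta_equal) == 0

# (ii) PCM limit (limit1): d(kappa^2)/dt = -k dlambda/dt at lambda = 1 - kappa^2/k
pcm = sp.limit(-k*beta_equal.subs(l, 1 - kap**2/k), k, sp.oo)
assert sp.simplify(pcm - (cG/8 + cG**2/(64*kap**2))) == 0

# (iii) pseudo-dual limit (limit2): set k = K^3, lambda = -1 + 1/(b^(2/3) K),
# so that db/dt = -(3/2) b^(5/3) K dlambda/dt
lam = -1 + 1/(b**sp.Rational(2, 3)*K)
pseudo = sp.limit(-sp.Rational(3, 2)*b**sp.Rational(5, 3)*K
                  *beta_equal.subs([(l, lam), (k, K**3)]), K, sp.oo)
assert sp.simplify(pseudo - (sp.Rational(3, 4)*cG*b**3
                             - sp.Rational(9, 8)*cG**2*b**5)) == 0
\end{verbatim}
\end{appendices}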
\section{Introduction} \label{s:introduction} The geometry of the hyperbolic plane ${\mathbb H}$ appears in a large variety of mathematical contexts and, as such, has been extensively studied. Nonetheless, there are certain combinatorial questions about ${\mathbb H}$ about which not much is known. We're mainly interested in a type of chromatic number for ${\mathbb H}$. The celebrated Hadwiger-Nelson problem is the search for the minimal number of colors necessary to color the Euclidean plane such that any two points at distance $1$ are colored differently. This chromatic number, denoted by $\chi({\mathbb R}^2)$, has been known to be between $4$ and $7$ for a half-century, but significant progress has eluded mathematicians for decades (see \cite{Soifer11} for details). This can be - and has been - studied for other metric spaces such as ${\mathbb R}^n$ \cite{SoiferBook}. The choice of distance $1$ for a Euclidean space is not important thanks to homotheties. In general however, the chromatic number of a metric space will depend on a choice of $d>0$, and colorings are required to have points at distance exactly $d$ colored differently. For ${\mathbb H}$ the choice of $d$ is important and we denote the $d$-chromatic number $\chi({\mathbb H},d)$. As suggested in \cite{Kloeckner}, letting $d$ grow and studying the growth of $\chi({\mathbb H},d)$ could be compared to the study of $\chi({\mathbb R}^n)$ for growing $n$, which is known to grow exponentially in $n$ (see \cite{Taha-Kahle,SoiferBook} and references therein). The analogy will only be interesting if $\chi({\mathbb H},d)$ is shown to grow with $d$, but this is not known to be true. The same proof as for the Euclidean plane \cite{Kloeckner} gives a universal lower bound of $4$ for $\chi({\mathbb H},d)$ and that seems to be the extent of the current state of knowledge for lower bounds. Our focus will be on upper bounds. The following theorem summarizes some of our concrete results. \begin{theorem} For $d\leq 2 \log(2) \approx 1.386...$ we have $$\chi({\mathbb H},d) \leq 9.$$ For $d \leq 2 \log(3)$ $$\chi({\mathbb H},d) \leq 12.$$ For $d\geq 2 \log(3)$ the following holds: $$ \chi({\mathbb H},d) \leq 5 \left( \left\lceil \frac{d}{\log(4)} \right\rceil +1\right). $$ \end{theorem} Our methods and proof follow the general strategy of using a ``hyperbolic checkerboard'', a method outlined in \cite{Kloeckner} and attributed to Sz\'ekely. Kloeckner \cite{Kloeckner} explains how to get a linear upper bound (in $d$) and asks many interesting questions. Our bounds answer one of the questions (Problem {\bf R}). More importantly, we optimize the strategy (Theorems \ref{thm:chromahypupper1} and \ref{thm:chromahypupper2}) and provide some missing arguments. It is these additional details that allow for improved bounds for both small $d$ and larger $d$ (Theorems \ref{thm:smalld}, \ref{thm:larged} and Proposition \ref{prop:effectivesmall}). Note that these questions could also be asked more generally for any hyperbolic surface but, as was shown by the authors in \cite{ParlierPetit}, the bounds are very different and grow exponentially in $d$. We note that for small $d$, it seems very unlikely that the bounds we provide are close to optimal. This is illustrated in Proposition \ref{prop:funddom}, where we show how to use a fundamental domain to bound $\chi({\mathbb H},d)$ by $8$, but it only works for certain values of $d$.
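For concreteness, the following short Python sketch (ours; all logarithms are natural, as in the rest of the paper) evaluates the upper bounds of the first theorem above for a few sample values of $d$:
\begin{verbatim}
# A small numerical sketch (ours) of the upper bounds in the theorem.
import math

def upper_bound(d):
    """Best upper bound on chi(H, d) given by the theorem."""
    if d <= 2*math.log(2):
        return 9
    if d <= 2*math.log(3):
        return 12
    return 5*(math.ceil(d/math.log(4)) + 1)

for d in [1.0, 2.0, 2*math.log(3), 5.0, 10.0, 50.0]:
    print(f"d = {d:6.3f}:  chi(H, d) <= {upper_bound(d)}")
# Just above d = 2 log(3) ~ 2.197 the linear bound takes over at the
# value 5*(2 + 1) = 15, so the bound of 12 is better up to that point.
\end{verbatim}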
When studying the problem of the hyperbolic plane, we started looking for discrete analogs that might help us understand the structure of subgraphs of ${\mathbb H}$ that occur for larger $d$ and that have hyperbolicity properties. This led us to looking at infinite $q$-regular trees. Although they are bipartite, we can look at their $d$-chromatic number and study it analogously to ${\mathbb H}$. The upper bounds we obtained are close in spirit to those of ${\mathbb H}$ and are obtained by a similar method. We synthesize them as follows (see Theorem \ref{thm:puretree}). \begin{theorem} If $d$ is odd then $$\chi(T_q,d) = 2.$$ If $d$ is even then $$\chi(T_q,d) \leq (q-1) (d+1).$$ \end{theorem} Lower bounds seem difficult, just like for ${\mathbb H}$. One way of obtaining lower bounds is using a type of clique number, here the maximal number of points at pairwise distance $d$. For even $d$ this clique number is always $q$ (see Proposition \ref{prop:cliqueq}), which is quite far from our upper bounds. We can improve on that, but only slightly, by producing a generalized Moser spindle (Proposition \ref{prop:moserq}). This gives a lower bound of $q+1$. Nonetheless, we know this lower bound is not optimal as, by an extensive computer search, we found that $\chi(T_3,8) \geq 5$ (see Remark \ref{rem:ammar}). As for ${\mathbb H}$, the combinatorics seem to get out of hand pretty quickly. A property shared by both ${\mathbb H}$ and $T_q$ is that both are natural homogeneous Gromov hyperbolic spaces. In particular, they have thin triangles, by which we mean that geodesic triangles with long sides look roughly like tripods (and for $T_q$ they {\it are} tripods). This suggests that an interval chromatic problem might be relevant. In this adaptation, we fix an interval $[d,cd]$ with $d>0$ and $c>1$. We ask that points whose distance lies in $[d,cd]$ be colored differently. Kloeckner \cite{Kloeckner} points out that for the Euclidean plane this interval chromatic number grows like $c^2$ for fixed $d$ and growing $c$, and asks whether $$ \lim_{c\to \infty} \frac{\chi({\mathbb R}^2, [d,cd])}{c^2} $$ exists. He states a purposefully vague interval chromatic problem for the hyperbolic plane (Problem {\bf Z} from \cite{Kloeckner}). We're able to show the following results (Theorems \ref{thm:inthypupper} and \ref{thm:inthyplower}): \begin{theorem} For sufficiently large $d$, the quantity $\chi({\mathbb H},[d,cd])$ satisfies $$ 2 \, e^{\frac{cd-1}{2}} < \chi({\mathbb H}, [d,cd]) < 2 \left(2 e^{\frac{cd-1}{2}} + 1\right)(cd +1). $$ \end{theorem} For $T_q$, using the same techniques, we show the following (Theorems \ref{thm:inttreeupper} and \ref{thm:inttreelower}). \begin{theorem} The quantity $\chi(T_q,[d,cd])$ satisfies $$ q (q-1)^{\lfloor \frac{cd}{2} \rfloor - \lceil\frac{d}{2} \rceil}\leq \chi(T_q,[d,cd]) \leq (q-1)^{\lfloor \frac{cd}{2} +1 \rfloor} (\lfloor cd \rfloor+1). $$ \end{theorem} The lower bounds in both theorems above come from lower bounds on the (interval) clique numbers. {\bf Acknowledgements.} We heartily thank Ammar Halabi for graciously writing the code necessary to test chromatic numbers for regular trees. Remark \ref{rem:ammar} is thanks to him. \section{Preliminaries} \subsection{Chromatic numbers of metric spaces} The chromatic number of a graph $G$ is the minimal number of colors needed to color the vertices of the graph such that any two adjacent vertices are of different colors.
Given a metric space $(X,\delta)$ and a number $d>0$, we define the chromatic number $\chi((X,\delta),d)$ relative to $d>0$ to be the minimal number of colors needed to color all points of $X$ such that any $x,y \in X$ with $\delta(x,y) = d$ are colored differently. We'll sometimes refer to the $d$-chromatic number of $(X,\delta)$. One can define the chromatic number of a metric space via the chromatic number of graphs as follows. Given a metric space $(X, \delta)$ and a real number $d>0$, we construct a graph $G(\{X,\delta\},d)$ with vertex set the points of $X$ and an edge between two points if they are exactly at distance $d$. We'll refer to the above chromatic numbers as being {\it pure} chromatic numbers, as opposed to the notion we'll introduce now. One variant on the pure chromatic number is to ask that points that lie at a distance belonging to a given set be of different colors. An example of this is the chromatic number of the $k$-th power $G^k$ of a graph $G$. This is equivalent to asking that any two vertices at distance belonging to the set $\{1,\hdots,k\}$ be colored differently. More generally, for a metric space $(X,\delta)$ and a set of distances $\Delta$, the $\Delta$-chromatic number $\chi((X,\delta),\Delta)$ is the minimal number of colors necessary to color points of $X$ such that any two points at distance belonging to $\Delta$ are of a different color. As above, this can be seen as the chromatic number of a graph $G(\{X,\delta\}, \Delta)$ where vertices are points of $X$ and edges join points at distance belonging to $\Delta$. We'll be particularly interested in this problem when $\Delta$ is an interval $[a,b]$. We'll refer to these quantities as interval chromatic numbers. A straightforward way of obtaining a lower bound for chromatic numbers of graphs is via the {\it clique number}, which is the order of the largest embedded complete graph. The clique number $\Omega(G)$ clearly satisfies $\Omega(G) \leq \chi(G)$. Similarly we define $\Omega((X,\delta),d)$, resp. $\Omega((X,\delta),\Delta)$, to be the size of the largest set of points of $X$ all pairwise at distance exactly $d$, resp. all at pairwise distance lying in $\Delta$. \subsection{Our metric spaces} The two types of metric spaces we'll work with are the hyperbolic plane ${\mathbb H}$ and $q$-regular trees (for $q\geq 3$). The unique infinite tree of degree $q$ at every vertex will be denoted $T_q$. Both are viewed as metric spaces, ${\mathbb H}$ with the standard Poincar\'e metric (an explicit distance formula will be provided below) and $T_q$ as a metric space on its vertices, obtained by assigning length $1$ to each edge. Although we think of regular trees as a type of discrete analog of the hyperbolic plane, note that the two metric spaces are not even quasi-isometric to one another. In the next section we'll briefly describe a metric relationship between ${\mathbb H}$ and $T_q$, namely a quasi-isometric embedding of $T_q$ into ${\mathbb H}$. It is provided for motivational purposes and, as it will not be used in the sequel, it can be skipped by the less interested reader. \subsection{Locally flat models of the hyperbolic plane and geometrically embedded trees} We describe a locally ``flat'' model of the hyperbolic plane which is quasi-isometric to ${\mathbb H}$ and into which regular trees geometrically embed. One way of constructing a space which shares properties with the hyperbolic plane is to paste together copies of an equilateral Euclidean triangle $\tau$ with side lengths $1$. To do so, fix an integer $n\geq 6$ and construct a simply connected space as follows.
Starting with a base copy of $\tau$, paste $n$ copies of $\tau$ around each vertex to obtain a larger simply connected shape. Then we repeat the process indefinitely to get an unbounded simply connected domain which we'll denote $H_n$. For example: if $n=6$ then the result is the Euclidean plane. In particular, vertices of copies of $\tau$ map to points of angle $2 \pi$. However, for any $n\geq 7$, the set of vertices maps to singular points of angle $\frac{\pi }{3} n$. We note that, for all $n\geq 6$, $H_n$ is a $\mathrm{CAT}(0)$ metric space. The following is well-known to experts, but we provide a sketch proof for completeness. \begin{proposition} For $n\geq 7$, $H_n$ and ${\mathbb H}$ are quasi-isometric. \end{proposition} \begin{proof} To see this, it suffices to construct a quasi-isometry between ${\mathbb H}$ and $H_n$. Consider the cell decomposition of $H_n$ dual to its triangulation: each cell is an $n$-gon with a singularity in its center. As it is dual to a triangulation, the valency at each vertex of this $n$-gon decomposition is three. Denote by $P_n$ a copy of this singular $n$-gon. Now consider the unique tiling (up to isometry) of ${\mathbb H}$ by regular $n$-gons of angles $\frac{2\pi}{3}$. Denote by $Q_n$ this hyperbolic $n$-gon we use to tile. Note that both $P_n$ and $Q_n$ have the $n$-th dihedral group $D_n$ as isometry group. Let $f: P_n \to Q_n$ be a homeomorphism which, for simplicity, we'll suppose sends the boundary to the boundary and is equivariant with respect to the actions of $D_n$. (This is actually not strictly necessary but it simplifies the discussion somewhat.) The map $f$, by compactness of $P_n$ and $Q_n$, is of bounded distortion. There are now natural maps between ${\mathbb H}$ and $H_n$ which consist of replacing each regular hyperbolic $n$-gon of the tiling by the singular Euclidean analogue and vice-versa. Points are associated via $f$ and, by the $D_n$-equivariance, the identifications coincide with respect to the pasting. The result is a bijection between ${\mathbb H}$ and $H_n$ which is clearly of bounded distortion. \end{proof} The reason we've introduced $H_n$ is that we have the following embedding. By isometric embedding we mean an embedding between metric spaces $(X_1,\delta_1) \hookrightarrow (X_2,\delta_2)$ such that the induced metric on $X_1$ by $X_2$ coincides with the metric $\delta_1$. \begin{proposition}\label{prop:embed} For any $q \leq \lfloor \frac{n}{3} \rfloor$, $T_q$ isometrically embeds into $H_n$. \end{proposition} \begin{proof} There are two things to prove. The first is that there is an embedding. To do so, we think of $T_q$ as being embedded in the plane (this gives us an orientation at every vertex). By vertices of $H_n$ we mean the set of points that are the image of the vertices of the triangles used to construct $H_n$. By edges of $H_n$, we mean the image of the edges of the triangles (all of length $1$). We're going to map vertices and edges of $T_q$ to their counterparts in $H_n$. Take a base vertex $v_0$ of $T_q$ and map it to a base vertex $w_0$ of $H_n$. Now map an edge $e$ of $T_q$ incident to $v_0$ to an edge $e'$ of $H_n$ incident to $w_0$. We now map the remaining edges incident to $v_0$ to edges incident to $w_0$. Edges around $v_0$ and $w_0$ both have orientations and are ordered relative to $e$ and $e'$. Following this orientation, edges incident to $v_0$ are mapped to edges incident to $w_0$, ensuring that any two image edges form an angle of at least $\pi$.
In other terms, following the order around $w_0$, there are at least two edges between consecutive image edges. Note that this was possible thanks to the condition on $q$ and $n$. We've now mapped all edges of $T_q$ incident to $v_0$. We repeat this to map the edges incident to all vertices at distance $1$ from $v_0$, and then inductively, to those at distance $r \geq 2$. This provides us with an embedding $\varphi$. By construction, the embedding is isometric. Indeed, let $v,w$ be vertices of $\varphi(T_q)$ and let $\gamma$ be the unique simple path between them contained in $\varphi(T_q)$ (the image of the unique geodesic in $T_q$). We want to check that $\gamma$ is locally geodesic everywhere. As the space $H_n$ is $\mathrm{CAT}(0)$, this will guarantee that $\gamma$ is the unique geodesic in $H_n$ between $v$ and $w$. To do so we check the angle conditions along $\gamma$: the angle is $\pi$ along the flat portions of $\gamma$ and at least $\pi$ at every vertex by construction. This proves that our embedding is isometric. \end{proof} In terms of chromatic numbers, the isometric embedding above provides the following immediate lower bound. \begin{corollary} For all $d \geq 1$, $q\geq 3$ and $n$ satisfying $q\leq \lfloor \frac{n}{3} \rfloor$: $$ \chi(T_q, d) \leq \chi(H_n, d). $$ \end{corollary} \section{Pure chromatic number problem} In this section we'll be concerned with finding upper and lower bounds for the $d$-chromatic number for both the hyperbolic plane and for $q$-trees. We begin with the former. \subsection{Bounds for the hyperbolic plane} We'll be using the upper half plane model ${\mathbb H}$ of the hyperbolic plane. The hyperbolic distance formula for ${\mathbb H} = \{ (x,y) \in {\mathbb R}^2 \mid y >0 \}$ can be expressed as $$ d_{{\mathbb H}}\left((x,y), (x',y')\right) = {\,\rm arccosh}\left( 1+ \frac{(x-x')^2 + (y-y')^2}{2 y y'}\right). $$ We want to minimally color ${\mathbb H}$ for given $d>0$ such that any two points at distance $d$ are of different colors. One quick word about lower bounds for these quantities. Getting a good lower bound via induced complete graphs is futile: just like for the Euclidean plane, the clique number has an upper bound of $3$. And just like in the Euclidean plane, a lower bound of $4$ for any $d$ can be obtained by finding a metric copy of the Moser spindle. It seems likely that one can do better, at least for large $d$, but the combinatorics quickly get out of hand. So we focus on upper bounds. The general strategy will always be the same: cover ${\mathbb H}$ with monochromatic regions of diameter less than $d$ and ensure that any two regions of the same color are sufficiently far apart. \subsubsection{Construction of a hyperbolic checkerboard} To color the hyperbolic plane we use a horocyclic checkerboard constructed as follows. According to \cite{Kloeckner}, where it is used to color the hyperbolic plane, this construction is due to Sz\'ekely. Unfortunately some of the key details in \cite{Kloeckner} are incorrect, so for completeness we provide a detailed construction. The method consists in tiling the hyperbolic plane by isometric rectangles where two sides of each rectangle are subarcs of geodesics with a common point at infinity (so-called asymptotically parallel geodesics) and the other two sides are arcs of horocycles around that same point at infinity. To do so formally, we begin by cutting the hyperbolic plane into infinite strips bounded by horocycles as follows. 
We fix $h>0$ and, for $j\in {\mathbb Z}$, define the strip $S_j(h)$ to be the set $$S_j(h) := \left\{ (x,y) \in {\mathbb H} \mid y \in [e^{jh}, e^{(j+1) h}[ \right\}. $$ We'll sometimes call $S_j(h)$ a {\it stratum}. In this model the horizontal lines $y = e^{jh}$ are horocycles around $\infty$ and $h$ is the distance between the horocycles $y = e^{jh}$ and $y = e^{(j+1)h}$. Roughly speaking, we now subdivide the strips $S_j(h)$ by cutting them along vertical lines (geodesics). We fix a value $w>0$ and cut along vertical geodesic segments in a way that the two endpoints of the base of each rectangle are at distance $w$. As we don't want the rectangles to overlap even in their boundary, we stipulate that each vertical geodesic segment belongs to the rectangle on its right. There is some choice in doing the above procedure but we will not make use of that choice in any way (and in fact trying to use this horizontal parameter is tricky). One possible choice leads to the following rectangles which for fixed $h,w$ we can label with elements of ${\mathbb Z}^2$: $$ R_{i,j}(h,w) := \left\{ (x,y) \in {\mathbb H} \mid x \in [ r i e^{jh}, r(i+1) e^{jh} [,\, y \in [e^{jh}, e^{(j+1) h}[ \right\} $$ where $$r := \sqrt{2(\cosh(w) - 1)}.$$ \begin{figure}[h] {\color{linkblue} \leavevmode \SetLabels \L(0.68*.73) ${\,\rm arccosh}\left( 1+ \frac{\cosh(w)-1}{e^{2h}}\right)$\\ \L(0.68*.5) $h$\\ \L(0.68*.37) $w$\\ \L(0.34*.33) $(0,1)$\\ \L(0.34*.6) $(0,e^h)$\\ \L(0.58*.33) $(r,1)$\\ \L(0.58*.6) $(r,e^h)$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =Rectangle2.png,width=10.0cm,angle=0} }} \vspace{-30pt} \end{center} \caption{The rectangle $R_{0,0}(h,w)$} \label{fig:Rectangle} } \end{figure} If the above formula is slightly confusing, it's useful to keep in mind one particular copy of the rectangle since all of them are isometric. The rectangle $R_{0,0}(h,w)$ has its four vertices given by the points $(0,1)$, $(r,1)$, $(0, e^h)$ and $(r,e^{h})$. Although the base points of the rectangle are at distance $w$, the upper corners are closer to each other and their distance is in fact $$ {\,\rm arccosh}\left( 1+ \frac{\cosh(w)-1}{e^{2h}}\right). $$ Understanding the geometry of the rectangles is key in our argument and we want to understand the diameter of the (closed) rectangle. Via a simple variational argument, the diameter is realized by some pair of points that lie in the corners. As discussed above, the distance between the upper corners is smaller than the distance between points on the base, but one possibility is that the bottom corners realize the diameter, and in fact for fixed $w$ and small enough $h$ this is the case. The other possibility is that opposite corners realize the diameter; their distance is $$ {\,\rm arccosh}\left( 1+ \frac{2(\cosh(w)-1)+ (e^h-1)^2}{2 e^h}\right). $$ If $h$ is sufficiently large, this value will be the diameter. To see the above observations, it suffices to look at the distance formula for a pair of points $(0,y)$ and $(r,y')$. We think of $y'$ as being a variable beginning at $y'=y$. Now as $y'$ increases, their distance begins by decreasing until eventually reaching a minimum and then increases towards infinity. Thus there is a certain value of $y'>y$ for which the distance between $(0,y)$ and $(r,y')$ is exactly that of the distance between $(0,y)$ and $(r,y)$. This shows that the pair of points realizing the diameter is either a base pair or a diagonal pair, and which one occurs depends on how large $h$ is. All in all, we've shown the following. 
\begin{proposition} The rectangle $R_{i,j}(h,w)$ satisfies $$ {\rm diam}\left( R_{i,j}(h,w) \right) = \max \bigg\{ w, {\,\rm arccosh}\left( 1+ \frac{2(\cosh(w)-1)+ (e^h-1)^2}{2 e^h}\right) \bigg\}. $$ \end{proposition} We also need to get a handle on the distance between rectangles in the same stratum (so rectangles $R_{i,j}(h,w)$ and $R_{i',j}(h,w)$ for $i'>i$). By the same considerations as above the distance will be realized (in the closure of the rectangles) by the upper right corner of $R_{i,j}(h,w)$ and the upper left corner of $R_{i',j}(h,w)$. The distance formula gives us $$ d_{{\mathbb H}} \left( R_{i,j}(h,w), R_{i',j}(h,w) \right) = {\,\rm arccosh} \left( 1+ \frac{(i'-i-1)^2 (\cosh(w)-1)}{e^{2h}}\right). $$ With this in hand we can construct a checkerboard well adapted to the parameter $d$. The method is to use a checkerboard with rectangles of diameter $\leq d$. We color stratum by stratum cyclically, using the above formula to ensure that rectangles of the same color within a stratum are horizontally far enough apart. We then need to repeat the above process with completely new colors until the strata are sufficiently far apart (see Figure \ref{fig:distancerectangle}). This requires exactly $\lceil \frac{d}{h} \rceil + 1$ strata to be colored before repeating the procedure. This leads to the following general statement that we state as a theorem. \begin{theorem}\label{thm:chromahypupper1} The $d$-chromatic number of the hyperbolic plane satisfies $$ \chi({\mathbb H},d) \leq (k + 1) \left( \left\lceil \frac{d}{h} \right\rceil +1 \right) $$ for any $w,h$ that satisfy $$ \max \bigg\{ w, {\,\rm arccosh}\left( 1+ \frac{2(\cosh(w)-1)+ (e^h-1)^2}{2 e^h}\right) \bigg\} \leq d, $$ where $k$ is the smallest integer satisfying $$ k \geq e^h \sqrt{\frac{\cosh(d)-1}{\cosh(w)-1}}. $$ \end{theorem} We note that the above condition implies that $k\geq 2$. \begin{figure}[h] {\color{linkblue} \leavevmode \SetLabels \L(0.47*.95) $\geq d$\\ \L(0.08*.35) $\geq d$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =DistanceRectangles2.png,width=12.0cm,angle=0} }} \vspace{-30pt} \end{center} \caption{$k=3$ and $\left\lceil \frac{d}{h} \right\rceil=2$} \label{fig:distancerectangle} } \end{figure} As stated, it is not clear how to optimally apply the theorem. We state a formulation which, although not necessarily practical, will give us the optimal solution for a checkerboard coloring. Suppose we are given an $h>0$ which satisfies $h<d$. We want to optimize the checkerboard using this fixed $h$. There is now a clear choice of $w$: $$ w= \min\left\{d, {\,\rm arccosh}\left( \frac{1 + 2 e^h\cosh(d) - e^{2h}}{2}\right)\right\}. $$ Using this we again have a canonical choice of $k$: $$ \left\lceil e^h \sqrt{\frac{\cosh(d)-1}{\cosh(w)-1}}\right\rceil. $$ Everything is now expressed in terms of $h$ and thus we have the following. \begin{theorem}\label{thm:chromahypupper2} The $d$-chromatic number of the hyperbolic plane satisfies $$ \chi({\mathbb H},d) \leq \min_{h<d} \left\{ (k(h) + 1) \left( \left\lceil \frac{d}{h} \right\rceil +1 \right)\right\} $$ where $$ w(h):= \min\left\{d, {\,\rm arccosh}\left( \frac{1 + 2 e^h\cosh(d) - e^{2h}}{2}\right)\right\} $$ and $$ k(h):= \left\lceil e^h \sqrt{\frac{\cosh(d)-1}{\cosh(w(h))-1}}\right\rceil. $$ \end{theorem} We now apply these results to get effective bounds in terms of $d$. \subsubsection{Bounds on $\chi({\mathbb H},d)$ for small $d$} Note that the above method requires that $k(h), \lceil \frac{d}{h} \rceil >1$. 
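The bound of Theorem \ref{thm:chromahypupper2} is straightforward to evaluate numerically. The following Python sketch is purely illustrative and not part of any proof; the sampling grid for $h$ is an arbitrary choice.
\begin{verbatim}
import math

def checkerboard_bound(d, steps=10000):
    """Evaluate min over 0 < h < d of (k(h)+1)*(ceil(d/h)+1),
    with w(h) and k(h) as in the theorem."""
    best = float('inf')
    for s in range(1, steps):
        h = d * s / steps
        # w(h) = min{ d, arccosh((1 + 2 e^h cosh(d) - e^{2h})/2) }
        arg = (1 + 2 * math.exp(h) * math.cosh(d) - math.exp(2 * h)) / 2
        if arg <= 1:
            continue  # arccosh undefined or w = 0: this h is too large
        w = min(d, math.acosh(arg))
        k = math.ceil(math.exp(h)
                      * math.sqrt((math.cosh(d) - 1) / (math.cosh(w) - 1)))
        best = min(best, (k + 1) * (math.ceil(d / h) + 1))
    return best
\end{verbatim}
For instance, \texttt{checkerboard\_bound(1.0)} returns $9$ and \texttt{checkerboard\_bound(2.0)} returns $12$, in agreement with the statements proved below.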
Since $k(h)\geq 2$ and $\lceil \frac{d}{h} \rceil\geq 2$, the method will never give a bound better than $9$ on the chromatic number. We now show that this bound holds for sufficiently small $d$. \begin{theorem}\label{thm:smalld} For $d\leq 2 \log(2) \approx 1.386$ we have $$\chi({\mathbb H},d) \leq 9.$$ \end{theorem} \begin{proof} We'll apply the strategy from Theorem \ref{thm:chromahypupper2}. If we want to bound $\chi({\mathbb H},d)$ by $9$, we need to have $\lceil\frac{d}{h}\rceil \leq 2$. With this constraint in hand, we set $h = \frac{d}{2}$, as any larger $h$ can only increase $k$ and the diameter of a rectangle. We'll need to set $w(h)$ as in Theorem \ref{thm:chromahypupper2} and this depends on $d$. To determine our choice, we'll need to study the function $$ \min\left\{d, {\,\rm arccosh}\left( \frac{1 + 2 e^h\cosh(d) - e^{2h}}{2}\right)\right\} $$ for $h =\frac{d}{2}$. A straightforward analysis tells us to set $$ w(h) = {\,\rm arccosh}\left( \frac{1 + 2 e^h\cosh(d) - e^{2h}}{2}\right) $$ for $d\in ]0,d_0]$, where $d_0$ is the nonzero positive solution to the equation $$ \frac{1+ 2 e^{\frac{d_0}{2}} \cosh(d_0) - e^{d_0}}{2} - \cosh(d_0)=0. $$ The precise value for $d_0$ can be computed: $$ d_0 = 2\log\left( \frac{(108 + 12 \sqrt{69})^\frac{1}{3} + \frac{12}{(108 + 12 \sqrt{69})^\frac{1}{3}}}{6}\right)\approx 0.56... $$ For $d$ in this interval we can take $k =2$ as the following inequality is satisfied \begin{eqnarray*} 4 &>& e^{2h} \frac{\cosh(d)-1}{\cosh(w(h))-1}\\ & = & 2 \, e^d \frac{\cosh(d)-1}{2 e^{\frac{d}{2}} \cosh(d) - e^d - 1}. \end{eqnarray*} For $d > d_0$ we are required to set $w(d)=d$. In order to be able to set $k =2$ we need to satisfy $$ 2 \geq e^{\frac{d}{2}}, $$ which is true provided $$ d\leq 2 \log(2) $$ as desired. \end{proof} Even though we had the previous theorems in hand, the above argument still required a case by case analysis, which can be explained geometrically: the diameter of the rectangle for small $d$ was realized by diagonally opposite points, but for larger $d$ it was realized by the base points. We can argue similarly to obtain the following results, which again require a case by case analysis. Note that we needed to argue case by case in terms of $k$ and $\lceil \frac{d}{h} \rceil$ so we only include the upper bounds that work for larger intervals of $d$. The strategy is always the same: we want to bound $\chi({\mathbb H},d)$ by $N = (k+1)(m+1)$, so we set $h = \frac{d}{m}$ and we argue as above. As we've treated (very) small $d$ already, the diameter of the rectangle will generally be realized by the base of the rectangle. This will work for all $d$ that satisfy $$ d \leq m \log(k). $$ Now if $N = (a+1)(b+1)$, getting a larger interval requires comparing $a \log(b)$ and $b \log(a)$. \begin{proposition}\label{prop:effectivesmall} The chromatic numbers of the hyperbolic plane satisfy the following inequalities for certain $d$: \begin{itemize} \item For $d \leq 2 \log(3)$: $$\chi({\mathbb H},d) \leq 12.$$ \item For $d \leq 2 \log(4)$: $$\chi({\mathbb H},d) \leq 15.$$ \item For $d \leq 3 \log(3)$: $$\chi({\mathbb H},d) \leq 16.$$ \item For $d \leq 5 \log(2)$: $$\chi({\mathbb H},d) \leq 18.$$ \end{itemize} \end{proposition} The process can be continued to obtain optimal intervals where $\chi({\mathbb H},d)$ is bounded by integers of the form $N = (a+1)(b+1)$ where both $a$ and $b$ are greater than or equal to $2$. We now turn our attention to large values of $d$. \subsubsection{Bounds on $\chi({\mathbb H},d)$ for large $d$} For large values of $d$ we set $w:=d$ and $h:= \log(k)$. 
Provided $d$ is large enough, our bounds tell us that $$ \chi({\mathbb H},d) \leq (k+1) \left( \left\lceil \frac{d}{\log(k)} \right\rceil +1\right). $$ We want to optimize the asymptotic growth of this bound in terms of $d$. The relevant factor is $$ \frac{k+1}{\log(k)}, $$ which is minimized over the integers for $k = 4$. Note that the above bound, for $k=4$, will hold by Theorem \ref{thm:chromahypupper1} provided $$ d \geq {\,\rm arccosh}\left( \frac{1 + 2 e^h\cosh(d) - e^{2h}}{2}\right), $$ which is certainly true for all $d \geq 2$ (a sharper threshold could be computed, but we've proved better bounds for small $d$ above). We have thus proved the following. \begin{theorem}\label{thm:larged} For $d \geq 2 $ we have $$ \chi({\mathbb H},d) \leq 5 \left( \left\lceil \frac{d}{\log(4)} \right\rceil +1\right). $$ \end{theorem} \begin{remark} We end this analysis by observing that the same argument tells us that $$ \chi({\mathbb H},d) \leq 4 \left( \left\lceil \frac{d}{\log(3)} \right\rceil +1\right) $$ for $d \geq 2$. Although this bound is not asymptotically as good as the one in the theorem above, for certain $d$ up until approximately $143$ it provides a stronger estimate. This illustrates the touch-and-go aspect of the checkerboard method. \end{remark} \subsubsection{Using a fundamental domain} In this section, we briefly remark that there are certain $d$ for which we can bound $\chi({\mathbb H},d)$ by $8$. The method is really a hyperbolic analogue of the classical upper bound of $7$ on the chromatic number of the Euclidean plane. We provide it to illustrate the current lack of monotonicity results: one might expect that $$ \chi({\mathbb H},d) \leq \chi({\mathbb H},d') $$ provided $d'< d$, but it seems like a tricky question. The coloring is based on tilings that appear when studying Klein's quartic in genus $3$. We'll describe it in simple terms, and show how it's an adaptation of the $7$ upper bound for the Euclidean plane. One way of describing the classical Euclidean coloring (for $d=1$) is as follows. Take a tiling of ${\mathbb R}^2$ by a set of regular hexagons of diameter $<1$ (say $0.99$). Now consider the dual graph to this tiling. We fix a base tile and associate to all of its points color $1$. We color each of the adjacent hexagons colors $2$ to $7$. We now describe how to color all remaining hexagons. From a vertex $u$ of the dual graph, we travel along any edge and then travel along the unique edge at oriented angle $\frac{2\pi}{3}$ to reach a new vertex $v$. From $v$ we then travel along the unique edge at oriented angle $-\frac{2\pi}{3}$ to reach a new vertex $w$ and we color $w$ the same color as $u$. A standard argument tells us that we've colored the entire plane like this. We adapt this method as follows: we take a regular hyperbolic heptagon $H$ with all angles equal to $\frac{2\pi}{3}$. There is a unique such heptagon and it can be decomposed into $7$ triangles $T$ of angles $\frac{\pi}{3}, \frac{\pi}{3}$ and $\frac{2\pi}{7}$. The diameter of $H$ can be computed using standard hyperbolic trigonometry and it has a value of slightly more than $1.22$. We now consider a standard tiling of ${\mathbb H}$ by copies of $H$. Fixing a base copy, we color all points of $H$ the same color. Each of the $7$ surrounding heptagons is given a different color. We've now colored a shape $O$ consisting of $8$ copies of $H$. To describe how to color all other heptagons, we argue using the dual graph. Here the edges of the dual graph meet at angles that are multiples of $\frac{2\pi}{7}$. 
From a vertex $u$ of the dual graph, we travel along any edge and then travel along the unique edge at oriented angle $\frac{4\pi}{7}$ to reach a new vertex $v$. From $v$ we then travel along the unique edge at oriented angle $-\frac{4\pi}{7}$ to reach a new vertex $w$ and we color $w$ the same color as $u$. As above, this colors the entire hyperbolic plane. Of course this won't work for all $d$. We choose $d\geq 1.22$ to ensure that it is bigger than the diameter of the heptagons, but we also need to choose $d$ small enough so that translates of the same color are further than $d$ apart. Using standard hyperbolic trigonometry, one can see that any two heptagons of the same color are at distance at least $\approx 1.77$. The result of all of this is the following proposition. \begin{proposition}\label{prop:funddom} For $d\in [1.22, 1.77]$ we have $$ \chi({\mathbb H},d) \leq 8. $$ \end{proposition} \subsection{Bounds for $q$-trees} Recall that $\chi(T_q, d)$ is the minimum number of colors required to color a $q$-regular tree such that any two vertices at distance $d$ are of a different color. A first immediate bound on this quantity follows from a greedy coloring: the distance-$d$ graph associated to $T_q$ is a regular graph of degree $q (q-1)^{d-1}$, so $$ \chi(T_q, d) \leq q (q-1)^{d-1}+1. $$ We want to do much better, and to do so we emulate the method for ${\mathbb H}$ which required coloring strata. We begin by using a horocyclic decomposition of a tree. \subsubsection{Strata for horocyclic decompositions} We describe the method, which works identically for any $q$-regular tree $T_q$. We begin by choosing a base point $x_0 \in T_q$ and choosing an infinite geodesic ray leaving from this point $[x_0,x_1,\cdots]$ (where $d_{T_q}(x_k, x_{k+1}) = 1$). We think of $\eta = [x_0,x_1,\cdots]$ as a {\it boundary point} of $T_q$ (formally a boundary point is an equivalence class of rays, but we won't dwell on that here). We define the Busemann function associated to $\eta = [x_0,x_1,\cdots]$ as $$ h_\eta(x):= \lim_{y\to \eta} \left(d_{T_q}(y,x)- d_{T_q}(y,x_0) \right) \left(= \lim_{k\to \infty} \left(d_{T_q}(x_k,x)- d_{T_q}(x_k,x_0) \right)\right). $$ We can now define the strata $S_n$ as being level sets of the function $h_\eta$: $$ S_n:= \{ x\in T_q \mid h_\eta(x) = n \}, \, n\in {\mathbb Z}. $$ We note that the strata are, by analogy with the hyperbolic plane, generally called horocycles and can be thought of as circles centered around a point at infinity. Note that $x_0 \in S_0$ but $x_{k} \in S_{-k}$ for all $k\in {\mathbb N}$ (see Figure \ref{fig:horocyclic construction}). \begin{figure}[h] {\color{linkblue} \leavevmode \SetLabels \L(0.1*.93) $S_{-1}$\\ \L(0.1*.72) $S_0$\\ \L(0.1*.5) $S_1$\\ \L(0.1*.28) $S_2$\\ \L(0.1*.05) $S_3$\\ \L(0.52*.96) $x_1$\\ \L(0.52*.74) $x_0$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =Horocyclic.png,width=10.0cm,angle=0} }} \vspace{-30pt} \end{center} \caption{Horocyclic construction} \label{fig:horocyclic construction} } \end{figure} A first observation is that distances between points in the same stratum are always even. More generally, distances are even between points that lie respectively in $S_k$ and $S_{k'}$ with $k$ and $k'$ of the same parity. Thus as an immediate corollary of the horocyclic construction we obtain the following. \begin{corollary} If $d$ is odd then $\chi(T_q,d) = 2$. 
\end{corollary} \begin{proof} Clearly $\chi(T_q,d)\geq 2$ and we can color $T_q$ using one color for all points lying in $S_k$ with $k$ even and another for all points lying in $k$ odd. \end{proof} When $d$ is even, the problem is not so obvious. \subsubsection{Bounds for even $d$} We now prove upper bounds for even $d$. \begin{theorem}\label{thm:puretree} When $d$ is even $\chi(T_q,d) \leq (q-1) (d+1)$. \end{theorem} \begin{proof} We color one stratum at a time and by thinking of the tree as a rooted tree with root at infinity, we bundle vertices on a stratum in terms of their ``ancestors". More precisely we'll color all vertices of $S_k$ the same color if they have a common root at distance $\frac{d-2}{2}$. Note that this is possible because any two such vertices are at distance at most $d-1$. For a given monochromatic bundle $B$, we now consider all of the other bundles of $S_k$ that have a common ancestor at distance $\frac{d}{2}$. Note there are exactly $q-1$ of these in total (which we'll call a super bundle) and we'll color each bundle a different color requiring $q-1$ colors. We can color all other vertices of $S_k$ with the same $q-1$ colors using the same method as any two vertices lying in different super bundles are distance $>d$ apart. Now any two stata $S_k$ and $S_{k'}$ can be colored using the same colors provided $|k-k'|\geq d+1$ so we obtain a coloring with $(q-1) (d+1)$ as required. \end{proof} \subsubsection{Lower bounds} \begin{proposition}\label{prop:cliqueq} For any even $d\geq 2$, the clique number satisfies $\Omega(T_q,d) = q$. \end{proposition} \begin{proof} The lower bound comes from the following construction. Fix a base vertex: it divides the graph into $q$ branches. Now choosing $q$ vertices, one in each branch, at distance $d/2$ from the base vertex. Any two are at distance $d$, hence the lower bound. The upper bound works as follows. Suppose by contradiction that there is a clique of size $c> q$ and consider the subgraph of $T_q$ spanned by the distance paths between the $c$ vertices. The vertices of the clique are the leaves in this subgraph $G$. It must contain at least $2$ branching points $v,w$ (vertices of degree at least $3$) as the inner degree is at most $q$. Removing the edges between $v$ and $w$ separates $G$ into two parts $G_v$ and $G_w$. Because the degrees of $v$ and $w$ were at least $3$, both $G_v$ and $G_w$ must contain at least two leaves of $G$. Let $v_1,v_2$, resp. $w_1,w_2$, be leaves of $G_1$, resp. $G_2$. We have $$ d_{T_q}(v_1,v_2) = 2 d_{T_q}(v_1,v) $$ and $$ d_{T_q}(w_1,w_2) = 2 d_{T_q}(w_1,w) $$ but \begin{eqnarray*} d_{T_q}(v_1,w_1) & = & d_{T_q}(v_1,v) + d_{T_q}(v,w) + d_{T_q}(w_1,w) \\ &>& d_{T_q}(v_1,v) + d_{T_q}(w_1,w) \\ &\geq& 2 \min\{ d_{T_q}(v_1,v), d_{T_q}(w_1,w) \} \end{eqnarray*} and so either $v_1,v_2$ and $w_1$ or $w_1,w_2$ and $v_1$ cannot form a triangle, a contradiction. \end{proof} In certain low complexity cases, we can compute the chromatic number explicitly. \begin{proposition} $\chi(T_3, 2) = 3$. \end{proposition} \begin{proof} Consider the graph $G(T_3,2)$ consisting of vertices of $T_3$ and edges between vertices is they are distance $2$ in $T_3$. The $G(T_3,2)$ is pretty easy to visualize. First of all, observe it has two connected components as it is impossible to travel between two vertices at odd distance in $T_3$. By homogeneity, both connected components are isomorphic. Take a vertex $v_0$ in $T_3$ and the three vertices it is connected to. Together they form a tripod. 
The three end vertices of this tripod are all pairwise at distance $2$, so they form a triangle in $G(T_3,2)$. (Note they are not connected to $v_0$ in $G(T_3,2)$.) In particular $\chi(T_3,2)\geq 3$. Now each of these three vertices belongs to $2$ other triangles in $G(T_3,2)$ and the figure repeats itself (see Figure \ref{fig:GT32}). \begin{figure}[h] {\color{linkblue} \leavevmode \SetLabels \L(0.52*.94) $1$\\ \L(0.46*.80) $2$\\ \L(0.505*.765) $3$\\ \L(0.65*.82) $1$\\ \L(0.59*.78) $2$\\ \L(0.63*.65) $3$\\ \L(0.545*.60) $1$\\ \L(0.45*.4) $2$\\ \L(0.56*.375) $3$\\ \L(0.65*.41) $1$\\ \L(0.62*.24) $2$\\ \L(0.662*.205) $3$\\ \L(0.574*.155) $1$\\ \L(0.51*.12) $2$\\ \L(0.554*.00) $3$\\ \L(0.38*.64) $1$\\ \L(0.32*.51) $2$\\ \L(0.385*.534) $3$\\ \L(0.365*.33) $1$\\ \L(0.34*.186) $2$\\ \L(0.382*.154) $3$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =GT32.pdf,width=6.0cm,angle=0} }} \vspace{-30pt} \end{center} \caption{A connected component of $G(T_3,2)$ and its coloring} \label{fig:GT32} } \end{figure} There is an iterative $3$-coloring of this graph obtained by first coloring the vertices of a base triangle, and then those belonging to the triangles attached, level by level. The same colors can be used for both connected components, and this proves the proposition. \end{proof} \begin{proposition} \label{prop:moserq} For any even $d\geq 4$ we have $$\chi(T_q, d) \geq q+1.$$ \end{proposition} \begin{proof} We can embed a type of generalized Moser spindle in each of these graphs as follows. We take a base vertex $v_0$ and consider two sets of vertices $v_1,\hdots,v_{q-1}$ and $v'_1,\hdots,v'_{q-1}$, all at distance $d$ from $v_0$ and with the following property: any two $v_i,v_j$, resp. $v'_i,v'_j$, for distinct $i,j$ are at distance $d$. We then consider two additional vertices $v_q$ and $v'_q$ at distance $d$ from one another and such that $v_q$ is at distance $d$ from $v_i$ for $i=1,\hdots,q-1$ and $v'_q$ is at distance $d$ from $v'_i$ for $i=1,\hdots,q-1$. An example for $q=4$ is illustrated in Figure \ref{fig:MoserSpindle}. \begin{figure}[h] {\color{linkblue} \leavevmode \SetLabels \L(0.49*.93) $v_0$\\ \L(0.19*-0.06) $v_1$\\ \L(0.24*-0.06) $v_2$\\ \L(0.28*-0.06) $v_3$\\ \L(0.29*.28) $v_4$\\ \L(0.695*-0.06) $v'_1$\\ \L(0.735*-0.06) $v'_2$\\ \L(0.78*-0.06) $v'_3$\\ \L(0.623*.605) $v'_4$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =MoserSpindle.png,width=9.0cm,angle=0} }} \vspace{-30pt} \end{center} \caption{The Moser spindle} \label{fig:MoserSpindle} } \end{figure} Suppose now that it can be colored with $q$ colors. By construction, the vertices $v_0,v_1,\hdots,v_{q-1}$ are pairwise at distance $d$, so all $q$ colors appear among them; since $v_q$ is at distance $d$ from each of $v_1,\hdots,v_{q-1}$, the vertex $v_q$ must have the same color as $v_0$. By symmetry, $v'_q$ must have the same color as $v_0$ and thus $v_q$ and $v'_q$ are the same color. This is a contradiction since $v_q$ and $v'_q$ are at distance $d$. \end{proof} \begin{remark}\label{rem:ammar} By an exhaustive computer search, we checked the chromatic numbers $\chi(T_q,d)$ of certain finite subgraphs. Of what was computable, one notable result came up: $$ \chi(T_{3},8) \geq 5. $$ The subgraph of $T_{3}$ we considered to compute the lower bound was the graph consisting of all vertices at distance at most $8$ from a given base vertex. \end{remark} \section{Interval chromatic number problem} We now focus our attention on bounding the $\Delta$-chromatic number $\chi((X,\delta),\Delta)$ when the metric space is the hyperbolic plane or a $q$-regular tree and $\Delta := [d, cd]$ for some $d>0$ and some $c>1$. 
We re-use the same stratifications of our spaces ${\mathbb H}$ and $T_q$ and modify the colorings to obtain the upper bounds. The lower bounds are obtained by exhibiting cliques. \subsection{Bounds for the hyperbolic plane} By slightly adapting the proof of Theorem \ref{thm:chromahypupper1}, we obtain the following upper bound for large $d$. Note that our focus is how these bounds grow in terms of $d$, so in particular we'll use inequalities that possibly only hold for somewhat large values of $d$. Let's illustrate this by a simple example involving a bound on the $\arcsin(x)$ function. Although $\arcsin(x) > x$ for all $x \in\, ]0,1]$, the two functions have the same behavior close to $0$, and for sufficiently small $x$ we have $$ \arcsin(x) < 1.1\, x. $$ \begin{theorem}\label{thm:inthypupper} Let $d$ be sufficiently large. Then $$ \chi({\mathbb H}, [d,cd]) < 2 \left(2 e^{\frac{(c-1)d}{2}} + 1\right)(cd +1). $$ \end{theorem} \begin{proof} We use the checkerboard as in the bound for the $d$-chromatic number, choosing $w:=d$ and $h := \log(4)$ so as to ensure that each rectangle has diameter less than $d$ for sufficiently large $d$. We color stratum by stratum, coloring every $(\lfloor cd \rfloor +1)$th stratum with the same colors. The main difference is in how we color a stratum. This time we need $k+1$ colors to color a stratum, where $k$ is the smallest integer that satisfies \begin{equation}\label{eqn:boundk} k \geq e^h \sqrt{\frac{\cosh(cd)-1}{\cosh(d)-1}} = 4 \sqrt{\frac{\cosh(cd)-1}{\cosh(d)-1}} . \end{equation} The value $(k+1)(cd+1)$ is an upper bound. Via a small manipulation, Equation \eqref{eqn:boundk} is certainly satisfied provided $$ k \geq 4 \, e^{\frac{(c-1)d}{2}} $$ for large enough $d$. Thus $$ 2 \left(2 e^{\frac{(c-1)d}{2}} + 1\right)(cd +1) $$ is an upper bound. \end{proof} We now focus on lower bounds. To do so we will exhibit large cliques to bound $\Omega({\mathbb H},[d,cd])$ from below. \begin{theorem}\label{thm:inthyplower} For $d$ sufficiently large, $$\Omega({\mathbb H},[d,cd]) > 2 \, e^{\frac{(c-1)d}{2}}. $$ \end{theorem} \begin{proof} We choose a point $x_0 \in {\mathbb H}$ and consider the circle $C$ of radius $\frac{cd}{2}$ centered at $x_0$. We now choose a maximal set of points $x_1, \hdots, x_n$ on $C$ that are successively exactly $d$ apart and with $d_{{\mathbb H}}(x_1,x_n) \geq d$. By construction the points satisfy $d_{\mathbb H}(x_i,x_j) \in [d,cd]$ for $i,j\in \{1,\hdots,n\},$ $i\neq j$. We now need to estimate $n$ as a function of $d$ and $c$. To do so we look at the angle $\theta$ at $x_0$ in the triangle $x_0, x_j, x_{j+1}$. By hyperbolic trigonometry in this triangle we have $$ \sinh \left(\frac{d}{2}\right) = \sin\left( \frac{\theta}{2}\right) \sinh\left(\frac{cd}{2}\right) $$ so $$ \theta = 2 \arcsin \left(\frac{\sinh\left(\frac{d}{2} \right) }{ \sinh\left(\frac{cd}{2}\right) }\right). $$ From this $$ n\geq \frac{2\pi}{\theta} = \frac{\pi}{\arcsin\left( \frac{\sinh\left(\frac{d}{2}\right) }{ \sinh\left(\frac{cd}{2}\right) } \right) }> 2 \, e^{\frac{(c-1)d}{2}}. $$ \end{proof} Obviously in the above proof we could optimize the constant in front of the leading term, but it's really the order of growth we're interested in. Put together, Theorems \ref{thm:inthypupper} and \ref{thm:inthyplower} tell us that, up to a linear factor in $cd$, $\chi({\mathbb H}, [d,cd])$ grows like $e^{\frac{(c-1)d}{2}}$. \subsection{Bounds for $q$-trees} We begin with an upper bound which works almost identically to Theorem \ref{thm:puretree}. 
\begin{theorem}\label{thm:inttreeupper} $$\chi(T_q,[d,cd]) \leq (q-1)^{\lfloor \frac{cd}{2} +1 \rfloor} (\lfloor cd \rfloor+1).$$ \end{theorem} \begin{proof} The proof is very similar to the proof of Theorem \ref{thm:puretree} so we'll mainly highlight the differences. Using a horocyclic decomposition we color each stratum separately and reuse the colors for strata $\lfloor cd\rfloor+1$ apart. For a given stratum: we begin by creating bundles of vertices, where vertices belong to the same bundle if they have a common ancestor at distance $\frac{d-2}{2}$. We now create a super bundle consisting of all bundles with vertices that have a common ancestor at distance at most $\lfloor \frac{cd}{2} +1 \rfloor$. All vertices of a bundle are colored by the same color and any two bundles in a same super bundle are colored differently. This requires at most $(q-1)^{\lfloor \frac{cd}{2}+1 \rfloor}$ colors. These same colors can be used to color any other super bundle, as two vertices that lie in different super bundles are at least $2 \lfloor \frac{cd}{2}+1 \rfloor > cd$ apart. \end{proof} The lower bound follows the same idea as the lower bound of the corresponding theorem for the hyperbolic plane. \begin{theorem}\label{thm:inttreelower} $$\Omega(T_q,[d,cd]) \geq q (q-1)^{\lfloor \frac{cd}{2} \rfloor - \lceil\frac{d}{2} \rceil}.$$ \end{theorem} \begin{proof} Consider a vertex $v_0$ in $T_q$ and the set $S_1$ of all vertices at distance $\lfloor \frac{cd}{2} \rfloor - \lceil\frac{d}{2} \rceil + 1$ from $v_0$. Let $S_2$ be the set of vertices at distance $\lfloor \frac{cd}{2} \rfloor $ from $v_0$. Now to each vertex $v$ of $S_1$ we associate exactly one companion vertex $v' \in S_2$ such that $v$ is on the geodesic between $v'$ and $v_0$. Denote by $S$ the set of companion vertices. Now if $v',w' \in S$ are distinct, then $$\delta (v',w') \geq d$$ but $$ \delta (v',w') \leq cd.$$ Furthermore $| S | = | S_1 |$ and, as $T_q$ is $q$-regular, we have $$ | S_1 | = q (q-1)^{\lfloor \frac{cd}{2} \rfloor - \lceil\frac{d}{2} \rceil} $$ as desired. \end{proof}
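The clique construction in the proof above is easy to check on small examples. The following Python sketch is purely illustrative; the parameters $q=3$, $d=4$, $cd=8$ are arbitrary choices, not tied to any statement above.
\begin{verbatim}
from itertools import combinations

# Toy check of the clique construction: q = 3, d = 4, cd = 8,
# so pairwise distances must lie in [4, 8].
q, d, cd = 3, 4, 8
r1 = cd // 2 - (d + 1) // 2 + 1   # depth of S_1
r2 = cd // 2                      # depth of S_2

# Build the ball of radius r2 in T_q with parent pointers.
parent, depth, children = {0: None}, {0: 0}, {}
frontier, next_id = [0], 1
for rad in range(1, r2 + 1):
    new = []
    for v in frontier:
        for _ in range(q if v == 0 else q - 1):
            parent[next_id], depth[next_id] = v, rad
            children.setdefault(v, []).append(next_id)
            new.append(next_id)
            next_id += 1
    frontier = new

def dist(u, v):
    """Tree distance via the nearest common ancestor."""
    au, ancestors = u, set()
    while au is not None:
        ancestors.add(au)
        au = parent[au]
    w = v
    while w not in ancestors:
        w = parent[w]
    return (depth[u] - depth[w]) + (depth[v] - depth[w])

def companion(v):
    while depth[v] < r2:
        v = children[v][0]   # descend along first children
    return v

S = [companion(v) for v in depth if depth[v] == r1]
assert all(d <= dist(u, v) <= cd for u, v in combinations(S, 2))
print(len(S), q * (q - 1) ** (cd // 2 - (d + 1) // 2))  # both equal 12
\end{verbatim}
The assertion verifies all pairwise distances, and the two printed values agree with the bound of Theorem \ref{thm:inttreelower}.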
\section{INTRODUCTION} \indent A spinning magnetized neutron star provides huge electric potential differences between different parts of its surface as a result of unipolar induction \citep{gol69}. A part of the potential difference may be expended on an electric field along the magnetic field somewhere in the magnetosphere. Although a fully self-consistent model for the pulsar magnetosphere has not yet been constructed, several promising models have been proposed. Among them, the polar cap model \citep{stu71, rud75} assumes that an electric field $E_\parallel$ parallel to the magnetic field lines exists just above the magnetic poles. The electric field accelerates charged particles up to TeV energies, and resultant curvature radiation from these particles produces copious electron-positron pairs through magnetic pair production. These pairs are believed to be responsible for radio emission. The extremely high brightness temperature of pulsar radio emission requires a coherent source for this radiation. In the past decade, various maser processes have been proposed and studied: for example, curvature emission induced by curvature drift or torsion \citep{luo92,luo95} and emission due to the cyclotron-Cerenkov and Cerenkov drift instabilities \citep{lyu99a,lyu99b}. In most cases these models are sensitive to the magnetic field strength or other parameters. Although the torsion-driven maser process \citep{luo95} is not so sensitive to the physical parameters, there is no definite mechanism for achieving the required inverted energy spectrum of particles and twisted field lines. Therefore, there is still no widely accepted model of maser emission. On the other hand, one of the simplest mechanisms of radiation is the coherent curvature radiation from bunches of electron-positron pair plasma, which was discussed mainly in the 1970s. If $N$ particles are in a bunch of some characteristic size $\lambda$, emission with wavelengths longer than $\lambda$ will be coherent. Then the particles in the bunch radiate like a single particle with charge $N e$. The brightness temperature becomes $N$ times that estimated for the case when the $N$ particles radiate incoherently. \citet{che77} suggested that the bunching arises from the electrostatic two-stream instability of electron and positron streams. However, \citet{ben77} found that the growth time of the instability is far too long. Although some other mechanisms to excite instabilities \citep[e.g.][]{gol71,ass80, ass83,uso87} have been proposed, the problem remains unsolved so far. Another serious problem in the polar cap model is the screening of the electric field. The localized potential drop in the polar cap region is maintained by a pair of anode and cathode regions. For a space charge-limited flow model \citep{faw77, sch78, aro79}, where electrons can freely escape from the stellar surface, i.e., $E_\parallel = 0$ on the stellar surface, an anode has been considered to be provided by pair polarization. However, as \citet{shi98, shi02} found, the thickness of the screening layer is restricted to be extremely small in order to screen the electric field consistently. A huge number of pairs must be injected within this small thickness. The required pair multiplication factor per primary particle is enormously large and cannot be realized in the conventional pair creation models. In addition to the above problems, there are various unsolved problems in pulsar physics; for example, the current closure problem is a representative one. 
It may be worthwhile to explore the possibility of a novel model, even if the model has some ambiguities at present. Recently, \citet{asa04} (hereafter AT) have proposed a new mechanism to screen the electric field. In the AT model, nonrelativistic protons are supplied from the corotating magnetosphere and flow toward the stellar surface. Protons can provide an anode to screen the electric field that the standard polar cap model supposes. Injected electron-positron pairs yield an asymmetry of the electrostatic potential around the screening point. The required pair creation rate in this model is consistent with the conventional models. The existence of the proton counterflows is also favorable for the bunching of pair plasma: the presence of protons apparently makes the excitation of the two-stream instability easier \citep{che80}. In this Letter, we show that the pair and proton flows in the AT model excite electrostatic waves, which can explain the pulsar radio emission through coherent curvature radiation. In contrast to most modern theories of pulsar radio emission, the bunch models have been discussed since the early days of pulsar research and have received several criticisms, such as insufficient growth rates and rapid bunch dispersion due to random motion of bunching particles (see, e.g., Melrose 1995). However, reconsideration of this old idea without any prejudice may be valuable \citep{gil04} in order to overcome present difficulties in pulsar physics, although the maser models are still an alternative possibility for the radiation mechanism. Another interesting reason to reconsider the bunch model is that space charge density waves appear outside the screening region in the AT model, even when the velocity dispersion of pairs is taken into account (see also Shibata et al. 2002). The two-stream instability discussed here may also be useful for exciting nonlinear radiation processes in a relativistic plasma \citep{lyu99a,lyu99b}. In \S 2, we show that the two-stream instability is easily excited in the AT model. The excited wavelength is long enough to bunch particles. In \S 3, we discuss the counteraction of the excited waves on the pair flows. \S 4 is devoted to a summary and discussion. \section{TWO-STREAM INSTABILITY} In this section, we discuss the two-stream instability for the situation the AT model supposes. In the AT model, protons are assumed to flow from the corotating magnetosphere toward the stellar surface. The primary electron beam is accelerated from the stellar surface. Electron-positron pairs start to be injected at a certain height above the polar cap, and the electric field is screened there. Outside this point, the two-stream instability may be excited, and the resultant bunching of pair plasma yields coherent curvature radiation. The AT model requires that the proton current $J_{\rm p}$ is of the order of the Goldreich-Julian (GJ) current $J_{\rm GJ} \equiv -\mathbf{\Omega}_* \cdot \mathbf{B}/ 2 \pi$, where $\mathbf{\Omega}_*$ and $\mathbf{B}$ are the angular velocity of the star and the magnetic field, respectively. Hereafter, we assume $\mathbf{\Omega}_* \cdot \mathbf{B}>0$; in the case of opposite polarity, the electric fields are not screened in this model. There is, however, no observational evidence for the two classes of pulsars that such a polarity dependence would suggest; the polarity problem remains unsolved so far for all pulsar models. The proton flow is mildly relativistic (the average velocity in AT $\simeq -0.4 c$) in the pulsar frame. 
The current of the primary electron beam from the stellar surface is also of the order of $J_{\rm GJ}$. The Lorentz factor of pairs at injection is required to be more than $\sim 500$. From the above assumptions, we obtain the proton number density $n_{\rm p} \simeq \Omega B/ 2 \pi c e$. Then the proton plasma frequency, $\omega_{\rm pp}^2 \equiv 4 \pi e^2 n_{\rm p}/m_{\rm p}$, becomes \begin{eqnarray} \omega_{\rm pp}/2 \pi \simeq 100 T_{0.3}^{-1/2} B_{12}^{1/2} \quad \mbox{MHz,} \end{eqnarray} where $T_{0.3}$ and $B_{12}$ are the rotation period of the pulsar and $B$ in units of 0.3 s and $10^{12}$ G, respectively. In order to simplify the situation, we consider one-dimensional homogeneous flows of protons and electron-positron pairs. Since the Lorentz factor of the primary beam of electrons from the stellar surface is too large for the beam to contribute to the dispersion relation, we neglect the beam component hereafter. The distributions of protons and pairs are functions of the 4-velocity $u=\beta/(1-\beta^2)^{1/2}$, where $\beta=v/c$. In the linear perturbation theory, the dispersion relation for electrostatic waves \citep{bal69} is given by \begin{eqnarray} 1+\sum_a \frac{4 \pi q_a^2}{\omega m_a} \int d u \frac{\beta}{\omega-k v} \frac{\partial f_a}{\partial u}=0, \end{eqnarray} where $f_a$, $q_a$, and $m_a$ denote the distribution function, charge, and mass of the particle species $a$, respectively. Solutions of the dispersion relation usually yield a complex frequency $\omega=\omega_r+i \omega_i$. The imaginary part $\omega_i$ of $\omega$ corresponds to the growth rate of waves: a positive growth rate $\omega_i>0$ implies an exponentially growing wave, while a negative $\omega_i$ implies an exponentially damped wave. In the cold-plasma limit the distribution function of the proton flow may be written as $f_{\rm p}=n_{\rm p} \delta(u)$ in the rest frame of the proton flow. We assume that the number densities and flow velocities of pair electrons and positrons are the same. In this case, the contributions of pair electrons and positrons are degenerate, so that we write the total pair distribution as $f_\pm=n_\pm \delta(u-u_\pm)$. We neglect the injection of pairs, although the pair injection continues outside the polar cap region in general. Then we obtain \begin{eqnarray} 1-\frac{\omega_{\rm pp}^2}{\omega^2} -\frac{\omega_{\rm p \pm}^2}{(\omega-k v_{\pm})^2 \gamma_\pm^3}=0, \end{eqnarray} where $\omega_{\rm p \pm}^2 \equiv 4 \pi e^2 n_\pm/m_e$, and $\gamma_\pm=500 \gamma_{\pm,5}$ is the Lorentz factor of pairs. The charge density wave due to pairs is constructed from the density waves of electrons and positrons with phases displaced by $\pi$. We normalize $k$ and $\omega$ by the proton plasma frequency as $\tilde{k}=c k/\omega_{\rm pp}$ and $\tilde{\omega}=\omega/\omega_{\rm pp}$. We write $\omega_{\rm p \pm} \equiv \xi \omega_{\rm pp}$. The current of the primary electron beam (which we have neglected here) is also of the order of $J_{\rm GJ}$. The number of pairs one primary electron creates may be $10^3$--$10^4$. Therefore, $n_\pm/n_{\rm p} \equiv M$ may be in the range of $10^3$--$10^4$. As a result, $\xi=\sqrt{M m_{\rm p}/m_{\rm e}}$ is in the range of 1400--4300. The condition to yield a pair of complex solutions is given by \begin{eqnarray} \tilde{k} \beta_\pm <(1+\zeta)^{3/2}, \end{eqnarray} where $\zeta \equiv \xi^{2/3}/\gamma_\pm$. Since $\gamma_\pm$ is required to be larger than $\sim 500$ in the AT model, we assume $\gamma_\pm=500$--1000. 
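In the normalized variables the dispersion relation is a quartic in $\tilde{\omega}$ and can be solved directly. The following Python sketch is illustrative only; the parameter values $\xi=1400$ and $\gamma_\pm=1000$ are assumptions taken from the ranges quoted above.
\begin{verbatim}
import numpy as np

xi, gamma = 1400.0, 1000.0        # assumed xi and pair Lorentz factor
beta = np.sqrt(1.0 - 1.0 / gamma**2)
Z = xi**2 / gamma**3              # coefficient of the pair term
zeta = xi**(2.0 / 3.0) / gamma    # threshold parameter

def growth_rate(k):
    """Largest Im(omega) of  1 - 1/w^2 - Z/(w - k*beta)^2 = 0,
    with omega and k in units of omega_pp."""
    a = k * beta
    # multiplying through by w^2 (w - a)^2 gives a quartic in w:
    coeffs = [1.0, -2.0 * a, a * a - 1.0 - Z, 2.0 * a, -a * a]
    return max(np.roots(coeffs).imag)

ks = np.linspace(0.5, (1.0 + zeta)**1.5, 400)
print(max(growth_rate(k) for k in ks))  # of order 0.1, as estimated below
\end{verbatim}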
For these parameters, $\zeta$ is in the range of 0.1--0.5, and $\beta_\pm \simeq 1$. Thus, the threshold of $\tilde{k}$ is close to unity irrespective of the parameters. The phase velocity of the excited wave is obtained as \begin{eqnarray} \frac{\omega_r}{k} \simeq \frac{v_\pm}{1+\zeta}. \label{phase} \end{eqnarray} The maximum of $\omega_i$ is attained for $\tilde{k}$ slightly below the threshold $(1+\zeta)^{3/2} \sim 1$. Since the right-hand side of Eq. (\ref{phase}) is of the order of unity, $\tilde{\omega}_r$ for the growing wave is of the order of unity, too. In general, $\omega_r$ is much larger than $\omega_i$. Therefore, $\tilde{\omega}_i$ is about $\sim 0.1$ at most. Numerical solutions (see Fig. 1) confirm this estimate. The growth time $\sim 1/(0.1 \omega_{\rm pp}) \sim 10^{-8}$ s $\ll 1/\Omega$, $R/c$, where $R=10^7 R_7$ cm is the curvature radius of magnetic fields, is short enough to bunch particles. Therefore, the bunched particles emit coherent radiation just above the polar cap. The wavelength $\lambda$ of the excited wave is $2 \pi/k \sim 2 \pi c/\omega_{\rm pp} \sim 300$ cm. The maximum coherent amplification of curvature radiation by bunches of pairs is obtained at wavelengths long compared to the size of bunches $\lambda$. For wavelengths shorter than $\lambda$ (frequencies higher than $\nu_0 \equiv \omega_{\rm pp}/ 2 \pi \sim 100$ MHz), the flux of curvature radiation diminishes with a power law of some index $\alpha$ as $\propto \nu^{-\alpha}$ \citep[see, e.g.,][]{sag75,ben77,mic82} extending to the critical frequency $\nu_{\rm c}=3 \gamma_\pm^3 c/4 \pi R \sim 100 \gamma_{\pm,5}^3 R_7^{-1}$ GHz. The above frequencies are estimated in the proton rest frame. Since the proton flow is nonrelativistic, the redshift of frequencies by the flow is unimportant. Thus, the coherent radio emission in this model is well consistent with the observed pulsar radio spectra. The number of pairs $N$ that can radiate coherently may be written as $N \simeq \epsilon M n_{\rm GJ} \lambda^3 \simeq 6 \times 10^{21} \epsilon (M/10^3) (\lambda_3)^3 B_{12} T_{0.3}^{-1}$, where $\lambda_3=\lambda/300$ cm, and the factor $\epsilon<1$ is the fraction of particles that are in a coherent motion. The cooling time scale for coherent curvature radiation may be written as \begin{eqnarray} t_{\rm cool} \simeq \frac{\gamma_\pm m_{\rm e} c^2}{p N} \left( \frac{\nu_{\rm c}}{\nu_0} \right)^{4/3}= \frac{3 R^2 m_{\rm e} c} {2 e^2 \gamma_\pm^3 N} \left( \frac{\nu_{\rm c}}{\nu_0} \right)^{4/3}, \end{eqnarray} where $p=2 e^2 c \gamma_\pm^4/3 R^2$ is the power emitted by a single particle. Here we adopt $\nu_{\rm c}/\nu_0=10^3$. If $N>3 \times 10^{15}$, $t_{\rm cool}$ becomes shorter than $R/c \sim 3 \times 10^{-4} R_7$ s. This is realized for $\epsilon > 5 \times 10^{-7}$ and gives a bright enough radio luminosity. Since a high radio luminosity can be realized even for a smaller $N$ if the number of bunches is large enough, this estimate is not unique. On the other hand, the radio brightness temperature $T_{\rm b}$ may be limited by self-absorption \citep{che80}; $T_{\rm b}<\gamma_\pm N m_{\rm e} c^2/k_{\rm B}$, where $k_{\rm B}$ is the Boltzmann constant. The value $N=3 \times 10^{15}$ gives a limit $\sim 10^{28}$ K, which is high enough to be consistent with observations. The above estimates of $N$ and $\epsilon$ appear to be appropriate, although they are not unique. 
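These order-of-magnitude estimates are easy to reproduce. The following Python sketch (in Gaussian cgs units; all parameter values are the fiducial assumptions of the text) is illustrative only.
\begin{verbatim}
import math

e, m_e, c, k_B = 4.803e-10, 9.109e-28, 2.998e10, 1.381e-16  # cgs

B, T, R = 1e12, 0.3, 1e7          # field [G], period [s], curvature radius [cm]
gamma, M, eps = 500.0, 1e3, 5e-7  # pair Lorentz factor, multiplicity, fraction
lam = 300.0                       # excited wavelength ~ 2 pi c / omega_pp [cm]

n_GJ = (2 * math.pi / T) * B / (2 * math.pi * c * e)  # GJ number density
N = eps * M * n_GJ * lam**3                  # coherently radiating pairs
p = 2 * e**2 * c * gamma**4 / (3 * R**2)     # single-particle curvature power
t_cool = gamma * m_e * c**2 / (p * N) * 1e3**(4.0 / 3)  # nu_c/nu_0 = 10^3
T_b = gamma * N * m_e * c**2 / k_B           # self-absorption limit

print(N, t_cool, R / c, T_b)
# N ~ 3e15, t_cool comparable to R/c ~ 3e-4 s, T_b ~ 1e28 K
\end{verbatim}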
Thus, a small value of $\epsilon$ suffices for coherent radio emission, although it is hard to estimate $\epsilon$ from first principles because of nonlinear effects on the bunching process. \section{WAVE COUNTERACTION ON THE FLOWS} The excited waves may produce an effective frictional force on the flows. If the frictional force is too strong, the force destroys the structure of the two streams. We have to construct a model that satisfies the two requirements of wave excitation on a short timescale and a sustainable two-stream structure of the flows, which seem incompatible at first glance. This problem has not been considered seriously so far. In this section, we obtain a physical requirement to make the frictional force negligible. In the quasi-linear theory, the time-averaged distribution functions evolve according to \citep[see, e.g.,][]{hin78} \begin{eqnarray} \frac{\partial f_a}{\partial t}=\frac{\partial}{\partial u} \Biggl[ D \frac{\partial f_a}{\partial u} \Biggr], \end{eqnarray} where \begin{eqnarray} D=\frac{q_a^2}{m_a^2 c^2} {\rm Re} \Biggl[\int dk \frac{E_k^2} {i (k v-\omega_{r,k})+\omega_{i,k}} \Biggr], \end{eqnarray} and $E_k$ is the amplitude of the electric field with wavenumber $k$. Then the change of $u$ due to the friction per unit time is obtained as \begin{eqnarray} \dot{u}=\frac{\partial D}{\partial u}. \end{eqnarray} The amplitude $E_k$ is difficult to estimate. The electric field traps pairs if the amplitude of the excited electric field $E$ becomes larger than a threshold \begin{eqnarray} E_{\rm max} &\equiv& \frac{k \gamma_\pm m_{\rm e} c^2}{2 e \gamma_{\rm w}^2} \simeq \frac{\omega_{\rm pp} \gamma_\pm m_{\rm e} c}{2 e \gamma_{\rm w}^2} \nonumber \\ &\simeq& 10^3 \left( \frac{\gamma_{\rm w}^2}{10} \right)^{-1} \gamma_{\pm,5} B_{12}^{1/2} T_{0.3}^{-1/2} \quad \mbox{in esu,} \label{denba} \end{eqnarray} where $\gamma_{\rm w}$ is the Lorentz factor of the phase velocity of the wave. We have assumed $\gamma_{\rm w}^2 \gg 1$ in the above estimate. However, even if $\gamma_{\rm w}^2 \sim 1$, the correct value differs from the above at most by a factor of 2. In the relativistic limit, $\gamma_{\rm w}^2 \simeq (1+\zeta)/2 \zeta$. Thus, $\gamma_{\rm w}^2$ is 10 at most. Assuming the maximum charge density $\rho_{\rm max} \sim E_{\rm max}/\lambda$, we obtain $\rho_{\rm max} \sim \gamma_\pm m_{\rm e}/(\gamma_w^2 m_{\rm p}) \rho_{\rm GJ}$, where $\rho_{\rm GJ} \equiv \Omega B/(2 \pi c)$ is the GJ charge density. Since $\gamma_\pm \sim 10^3$ and $\gamma_{\rm w}^2=1$-$10$, the maximum charge density turns out to be of the order of $0.1$-$1$ times the GJ charge density. Although there may be various nonlinear effects to excite waves, the maximum amplitude of the electric field may not be much larger than $E_{\rm max}$. We approximate the excited electric field by a monochromatic wave of $E_k=E \delta(k-\omega_{\rm pp}/c)$. It is apparent that the time scale for the slowdown due to the friction $t_{\rm fric}$ is longer for the pair flow than for the proton flow, owing to the large value of $\gamma_\pm$. Taking into account $\omega_r \simeq \omega_{\rm pp}$ and $\omega_i \simeq 0.1 \omega_{\rm pp}$, $t_{\rm fric}$ for the proton flow is obtained as \begin{eqnarray} t_{\rm fric}=\frac{1}{|\dot{u}|} \simeq 5 \omega_{\rm pp} \left( \frac{m_{\rm p} c} {e E} \right)^2. \end{eqnarray} We consider that the physical condition remains unchanged within a scale $R/c$. In order to make $t_{\rm fric}$ longer than $R/c$, $E$ should be smaller than $E_{\rm max}$ by a factor of 3. 
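The factor-of-three statement can be checked with the same fiducial numbers; $\gamma_{\rm w}^2=10$ is the assumed value, and again this is only an illustrative Python sketch.
\begin{verbatim}
import math

e, m_e, m_p, c = 4.803e-10, 9.109e-28, 1.673e-24, 2.998e10  # cgs
omega_pp = 2 * math.pi * 1e8     # ~100 MHz for B_12 = 1, T = 0.3 s
gamma, gw2, R = 500.0, 10.0, 1e7

E_max = omega_pp * gamma * m_e * c / (2 * e * gw2)  # trapping threshold, ~1e3 esu

def t_fric(E):
    return 5 * omega_pp * (m_p * c / (e * E))**2

print(E_max, t_fric(E_max), t_fric(E_max / 3), R / c)
# t_fric(E_max) ~ 4e-5 s < R/c, while t_fric(E_max/3) ~ 4e-4 s > R/c ~ 3e-4 s
\end{verbatim}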
Experience has shown that longitudinal waves are frequently strongly damped by nonlinear processes. We therefore expect that $E$ may be smaller than $E_{\rm max}$: small enough to keep the frictional force small and strong enough to bunch a small fraction of pair particles. \section{SUMMARY AND DISCUSSION} We examined the two-stream instability and coherent curvature radiation in the proton counterflow model that we have recently proposed. The existence of proton flows is favorable not only for the screening of the electric field but also for the bunching of pair plasma by the two-stream instability. This model predicts a high growth rate and a wavelength of the electrostatic waves appropriate for reproducing the observed radio emission by coherent curvature radiation. The growth rate is basically determined by $\omega_{\rm pp} \propto \sqrt{B_{12}/T_{0.3}}$. Since rapidly rotating pulsars tend to have weak magnetic fields, $\omega_{\rm pp}$ is not so sensitive to the model parameters. For example, $\omega_{\rm pp}/2 \pi$ becomes about 50 MHz for $B=10^{9}$ G and $T=1$ ms. On the other hand, for $B=10^{15}$ G and $T=10$ s, $\omega_{\rm pp}/2 \pi \sim 500$ MHz. The resultant growth rate is also insensitive to the model parameters. It is interesting that the predicted wavelength is comparable to the wavelength of the space charge density wave which appears outside the screening region in AT. We noticed the interesting paper of \citet{les98} in which they claimed that coherent curvature radiation cannot be the source of the radio emission of pulsars. However, their treatment is based on several simplifying assumptions: a full coherence extending up to $\nu_{\rm c}$, a large coherence volume, and others. Basically, these assumptions lead to a short cooling time and a lower value for the upper limit of the luminosity. In our case, the coherence volume is smaller and at $\nu_{\rm c}$ only a partial coherence is supposed. Moreover, only a small fraction of pairs are bunched and the cooling time is much longer than their estimate. Thus, the limit claimed by \citet{les98} is irrelevant in our case. Examination of more general constraints is beyond the scope of this Letter. Although the AT model resolves both the screening and radio emission problems, there remain many ambiguous points: the frictional force discussed in \S 3, the mechanisms to achieve the proton counterflow, and so on. In the AT model, the proton counterflow comes from the corotating magnetosphere via anomalous diffusion \citep{lie85}. The currents of the primary beam and protons are most likely to be determined by the global dynamics in the magnetosphere. However, we should not insist on anomalous diffusion, because there may be other mechanisms to cause the proton counterflow. Considering that we still do not understand well the fundamental issues of pulsar physics, we should be free from any kind of prejudice. We cannot exclude any possibility at the present stage. \acknowledgments This work is supported in part by a Grant-in-Aid for Scientific Research from the Ministry of Education and Science (14079205 and 16540215; F. T.).
\section{Introduction and Main Results.} It is somewhat surprising that large classes of smooth systems can be accurately described by a symbolic model. In these cases, the extra tools of symbolic dynamics provide a concrete framework for studying the more complicated differentiable system, allowing one to work instead with concrete zero-dimensional models. A prototypical example here are the uniformly hyperbolic diffeomorphisms, where the existence of Markov partitions \cite{SinaiMarkov},\cite{BowenMarkov} makes it possible to reduce many aspects of their theory to properties of subshifts of finite type. The celebrated Sinai-Ruelle-Bowen theory for Axiom A attractors, one of the fundamental pieces in smooth ergodic theory, originated in this reduction. We refer the reader to \cite{EquSta} for an introduction to the techniques and background material. A similar structure is valid for Anosov flows \cite{RatnerMarkov} or, more generally, hyperbolic flows \cite{SymbHyp}. In this case R. Bowen showed that if $\phi_t:M\rightarrow M$ is a flow on a compact manifold and $\Omega$ is a non-trivial basic set, then $\phi_t:\Omega\rightarrow \Omega$ is finitely covered by the suspension of a subshift of finite type under a Lipschitz roof function. This means the following: in the above situation there exist a subshift of finite type $\Sigma_A$ with shift map $\sigma$ and a Lipschitz function $\rho:\Sigma_A\rightarrow \mathbb R_{>0}$ such that for the space $$ S=\{(\underline{a},t)\in \Sigma_A\times \mathbb R: t\in[0,\rho(\underline{a})]\}/(\sigma(\underline{a}),0)\sim (\underline{a},\rho(\underline{a}))\ $$ one can find a continuous function $h:S\rightarrow \Omega$ satisfying \begin{enumerate} \item $h$ is onto. \item There exists $K>0$ such that for every $x\in \Omega$, $\# h^{-1}(x)\leq K$. \item $h$ is one to one on a $\mathcal{G}_{\delta}$ set $X\subset S$, and moreover $X$ is of full probability. \item $h\circ s_t=\phi_t\circ h$ for every $t$, where $s_t:S\rightarrow S$ is the natural horizontal flow\ ($s_t[(\underline{a},t')]=[(\underline{a},t+t')]$). \end{enumerate} As in the case of hyperbolic diffeomorphisms, the above construction makes it possible to establish very subtle properties of hyperbolic flows, for example an extension of the Sinai-Ruelle-Bowen theory \cite{BowenRuelle}, the Central Limit Theorem for geodesic flows on manifolds of negative sectional curvature \cite{RatnerCLT}, and the decay of matrix coefficients for geodesic flows on negatively curved surfaces \cite{Dolgodecay}, just to cite a few. It is important to notice that Bowen's construction establishes the symbolic model for the flow, and not for a fixed map in the flow (say, the time-one map). The time-$t$ maps of the flow are not hyperbolic, and a simple entropy argument shows that one cannot hope for a symbolic representation of them as in the case of hyperbolic diffeomorphisms. The purpose of this paper is to present a symbolic model for a generalization of hyperbolic systems, the so-called partially hyperbolic diffeomorphisms. In these we allow, besides the hyperbolic directions, a third center direction where the dynamics is only known to be dominated by the hyperbolic ones. Time-$t$ maps of hyperbolic flows are partially hyperbolic, provided that $t$ is not zero, but there are other totally different systems which are also partially hyperbolic, such as the ergodic linear automorphisms of the torus, or the so-called skew products. 
We provide the precise definition in the next section and refer the reader to \cite{PesinLect} for an introduction to Partially Hyperbolic Dynamics. We will assume that, as is the case for the majority of the known examples, the center direction integrates to a foliation with $f$-invariant leaves. Although our main interest is the study of the hyperbolic directions, this transverse dynamics is in principle not well defined, and we need to make use of the holonomy pseudo-group of the center foliation to gain some control on its structure. The presence of this pseudo-group, on the other hand, is one of the main difficulties for the construction. Another problem appears because, in general, a partially hyperbolic diffeomorphism does not fix the center foliation. In any case, we are able to establish a Markov structure for the transverse hyperbolic directions (see the next section for the definition of $\ensuremath{\mathcal{W}^{cs}}_{loc},\ensuremath{\mathcal{W}^{cu}}_{loc}$). \begin{main} Let $f:M\rightarrow M$ be a partially hyperbolic diffeomorphism with (plaque expansive) center foliation $\ensuremath{\mathcal{W}^c}$. Then given $\delta>0,\rho>0$ there exist a family $\mathcal{D}=\{D_1,\ldots, D_N\}$ of $(s+u)$-dimensional (embedded) discs transverse to \ensuremath{\mathcal{W}^c}, a family $\mathcal{R}=\{R_1,\ldots,R_s\}$\ and a function $af:\mathcal{R}\rightarrow 2^{\mathcal{D}}$\ satisfying the following. \begin{enumerate} \item The family $\mathcal{D}$ is pairwise disjoint. \item Each $R_{\mu}$ is contained in some disc $D_i$ away from the boundary, and with respect to the relative topology of $D_i$ it holds that $R_{\mu}=cl\, int R_{\mu}$. Moreover $diam R_{\mu}<\rho$. \item If $R_{\mu},R_{\nu}$ are contained in the same disc $D_i$ and their relative interiors share a common point, then $R_{\mu}= R_{\nu}$. \item $M=\cup_{\mu=1}^s sat^c(R_{\mu},\delta)$, where $sat^c(R_{\mu},\delta)$ consists of all the center plaques of size $2\delta$ centered at points of $R_{\mu}$. \item Suppose that $R_{\mu}\subset D_i,R_{\nu}\subset D_{i'}$ satisfy $D_{i'}\in af(R_{\mu})$. If $x\in int\, R_{\mu}, \phi_{\mu,i'}^+(x)\in int\, R_{\nu}$, then it holds that \begin{enumerate} \item $\phi_{\mu,i'}^+(\Wst{x,R_{\mu}})\subset \Wst{\phi_{\mu,i'}^+(x),R_{\nu}}$. \item $\phi_{\mu,i'}^{-}(\Wut{\phi_{\mu,i'}^+(x),R_{\nu}})\subset \Wut{x,R_{\mu}}$, \end{enumerate} where $\phi_{\mu,i'}^+=proj_{D_{i'}}\circ f$, $\phi_{\mu,i'}^-=proj_{D_i}\circ f^{-1}$, $proj_{D_i}:sat^c(D_i,\delta)\rightarrow D_i$ is the map that collapses center plaques, and $$ \Wst{x,R_{\mu}}=\Wcsloc{x}\cap R_{\mu}, \quad \Wut{x,R_{\mu}}=\Wculoc{x}\cap R_{\mu}. $$ \end{enumerate} \end{main} Fix families $\mathcal{D}=\{D_i\}_{i=1}^N,\mathcal{R}=\{R_{\mu}\}_{\mu=1}^s$ as the ones given in the previous theorem. Denote $\Gamma(\mathcal{R})=\cup_{\mu}R_{\mu}$ and consider the pseudo-group $\mathcal{G}(\mathcal{R})$ of homeomorphisms of $\Gamma(\mathcal{R})$ generated by $\{\phi_{\mu,i'}^+,\phi_{\mu,i'}^-:i'\in af(R_{\mu})\}$. Consider the subshift of finite type $\Sigma_S$ determined by the matrix $S$ where $$ S_{\mu,\nu}=1\Leftrightarrow R_{\nu}\subset D_{i'},\ i'\in af(R_{\mu})\text{ and } \phi_{\mu,i'}^+(int R_{\mu})\cap int R_{\nu}\neq\emptyset. $$ \begin{corollary}\label{coroA} There exists a continuous surjective map $h:\Sigma_S\rightarrow \Gamma(\mathcal{R})$ that semiconjugates the action of the shift $\sigma:\Sigma_S\rightarrow\Sigma_S$ with the action of the pseudo-group $\mathcal{G}(\mathcal{R})\curvearrowright\Gamma(\mathcal{R})$.
\end{corollary} The previous Corollary can be seen as an analogue of the Ratner-Bowen construction for hyperbolic flows, although the map $h$ will not be bounded-to-one, due to the fact that we have to take into account the center holonomy, which in general is much worse behaved than in the case of $\mathbb{R}$-actions. \vspace{0.5cm} The remainder of the article is divided into three sections. In the first we provide some background material and notation that will be used in the sequel. The third section is devoted to the proof of the main theorem, and in the final one we discuss the coding associated to this transverse model. \section{Complete Markov Transversal} Throughout this section we will work with a fixed plaque expansive partially hyperbolic diffeomorphism $f: M\rightarrow M$ and establish the existence of a complete transversal to its center foliation $\ensuremath{\mathcal{W}^c}_f$ consisting of closed sets with an additional Markov property (Cf. Theorem A). Precision at this stage seems crucial, so for the convenience of the reader we have divided the construction into several parts. \subsection{Symbols and Transition Matrices.} We fix an $(\epsilon,\delta)$-adapted family $\{D_i\}_{i=1}^N$ satisfying the conclusion of Lemma \ref{projectain}. Subdivide each $B_i$ into finitely many (relatively) open sets $\Delta^i_1,\ldots \Delta^i_{r_i}$ with the following property: given $i\in\{1,\ldots N\},\alpha\in\{1,\ldots r_i\}$ there exist $j(i,\alpha)$, $k(i,\alpha) \in \{1,\ldots N\}$ so that $$ f(\Delta^i_{\alpha})\subset sat^c(B_{j(i,\alpha)};\delta) $$ $$ f^{-1}(\Delta^i_{\alpha})\subset sat^c(B_{k(i,\alpha)};\delta). $$ Let $\nu=\max_{i,\alpha}\{diam(\Delta^i_{\alpha}),diam(f(\Delta^i_{\alpha})),diam(f^{-1}(\Delta^i_{\alpha}))\}$; note that $\nu$ can be taken arbitrarily small, and we assume that $3Lip(f)\nu<\delta$. The previous consideration allows us to define a map $\phi: \sqcup_{i,\alpha} \Delta^i_{\alpha}\rightarrow \cup_{i} B_i$ simply by $$ \phi(x)=\Wcl[\delta]{fx}\cap B_{j(i,\alpha)} \text{ if }x\in \Delta^i_{\alpha}. $$ Nonetheless, the presence of the holonomy pseudo-group of $\ensuremath{\mathcal{W}^c}$ forces a complicated combinatorics which makes $\phi$ very ill behaved, even in simple examples. Instead, we will use symbolic dynamics to control the center pseudo-group. We proceed as follows. On each $\Delta^i_{\alpha}$ choose a point $x^i_{\alpha}$, and consider $I=\{x^i_{\alpha}:i,\alpha\}$. Define the 0--1 matrix $A$ with indices on $I$ by the rule that $A_{x^i_{\alpha},x^{i'}_{\alpha'}}=1$ iff either \begin{enumerate} \item $f(\Delta^i_{\alpha})\subset sat^c(B_{i'};\delta)$, or \item there exists $\Delta^{i'}_{\beta}\text{ s.t. }f^{-1}(\Delta^{i'}_{\beta})\subset sat^c(B_{i};\delta)\text{ and } \ proj_{D_i}(f^{-1}(\Delta^{i'}_{\beta}))\cap \Delta^i_{\alpha}\neq\emptyset$. \end{enumerate} We remark the following. \begin{lemma}\label{combinatoria} For every $i,\alpha,\alpha'$ we have $$ A_{x^{i}_{\alpha},x^{j(i,\alpha)}_{\alpha'}}=1=A_{x^{k(i,\alpha)}_{\alpha'},x^i_{\alpha}}. $$ \end{lemma} \vspace{0.5cm} We consider the subshift of finite type $\Sigma_{A}$ and observe that every $\underline{a}\in \Sigma_{A}$ defines a $(3+Lip(f|\ensuremath{\mathcal{W}^c}))\delta$-pseudo orbit, hence it is $C\delta$-shadowed by a $C\delta$-pseudo orbit $\underline{a}'$ that preserves \ensuremath{\mathcal{W}^c}.
By Lemma \ref{projectain}, the point \begin{equation}\label{deftheta} \theta(\underline{a}):=\Wcl[C\delta]{a'_0}\cap E_i, \end{equation} where $a_0\in D_i$, is well defined, and thus we have a map $\theta=\theta_{A}:\Sigma_{A}\rightarrow \cup_i E_i$. \vspace{0.5cm} \begin{proposition} The map $\theta :\Sigma_{A}\rightarrow\cup_{i=1}^N E_i$\ is continuous. \end{proposition} \begin{proof} We need the following lemma. \begin{lemma} Assume that $(\underline{a}^k)_{k\in \mathbb{N}}$\ is a sequence in $\Sigma_{A}$ that converges to $\underline{a}$. Then there exists a subsequence $(\underline{a}^k)_{k\in S}$\ such that for every $n$, $$ (a^k)'_n \xrightarrow[k\in S]{} a_n'.$$ \end{lemma} \begin{proof} We recall that the construction of $\underline{a}'$\ goes by finding for each $N>0$\ a finite pseudo-orbit $\{z_{-N}^N(\underline{a}),\ldots$ $z_0^N(\underline{a}),\ldots z_N^N(\underline{a})\}$\ which shadows the finite segment $\{a_{-N},\ldots,a_0,\ldots , a_{N}\}$\ and then passing to a converging subsequence $z_n^{N_j}(\underline{a})\xrightarrow[j\mapsto\infty]{} a_n'$ (Cf. Theorem 7A-2 in \cite{HPS}). \vspace{0.5cm} Now, since $\underline{a}^k\xrightarrow[k\mapsto\infty]{} \underline{a}$, there exists $k_1$\ such that $\forall k\geq k_1\ a_n^k=a_n$\ if $|n|\leq N_1$. Thus $\forall k\geq k_1$\ we can take $$z_n^{N_1}(a^k)=z_n^{N_1}(a) \ \text{if } |n|\leq N_1. $$ By induction, we can find $k_j>k_{j-1}$\ such that \begin{equation*} \forall k\geq k_j , \ z_n^{N_j}(a^k)=z_n^{N_j}(a) \ \text{if } |n|\leq N_j, \end{equation*} and from this it follows that $$ z_n^{N_j}(a^{k_j})\xrightarrow[j\mapsto\infty]{} a_n'. $$ \end{proof} Back to the proof of the proposition, suppose that there exist two sequences $(\underline{a}^k)_k , (\underline{b}^k)_k$\ in $\Sigma_{A}$ converging to the same limit $\underline{a}$, but with $$x=\lim_k \theta(\underline{a}^k)\neq \lim_k \theta(\underline{b}^k)=y.$$ By the previous lemma we can assume with no loss of generality that for every $n$, \begin{equation}\label{conv1} (a^k)'_n\xrightarrow[k\mapsto\infty]{}a'_n \end{equation} \begin{equation}\label{conv2} (b^k)'_n\xrightarrow[k\mapsto\infty]{}a'_n. \end{equation} Consider the adapted disc $D_i$ that contains $x,y$ and denote by $proj: sat^c(D_{i})\rightarrow D_i$ the projection map: $proj$ is continuous. Thus by \eqref{conv1},\eqref{conv2} $$x=\lim_{k\rightarrow \infty}proj((a^k)'_0)=proj(a'_0)=\lim_{k\rightarrow \infty}proj((b^k)'_0)=y,$$ contrary to what we assumed. Since $\cup_{i=1}^{N}E_i$ is pre-compact, we conclude that $\theta$\ is continuous. \end{proof} We now study the image of the map $\theta$. First of all we have the following. \begin{proposition}\label{sobre} The image of $\theta$\ is compact in $\cup_{i=1}^N E_i$\ and covers $\cup_{i=1}^N B_i$. \end{proposition} \begin{proof} The image of $\theta$\ is clearly compact due to the previous Proposition. To prove the second part we fix $x\in B_i$\ and define an element $\underline{a}\in\Sigma_{A}$\ by the following procedure. Let $i_0,\alpha_0$ be such that $x\in \Delta^{i_0}_{\alpha_0}$, and set $a_0=x^{i_0}_{\alpha_0},x_0=x$. Now we inductively define $a_n,x_n$\ as follows. \begin{itemize} \item Assume first that $n>0$, and that we have determined $x_k,a_k$\ for $k=0,\ldots n$, with $a_n=x^{i_n}_{\alpha_n},x_n\in \Delta^{i_n}_{\alpha_n}$. Define $x_{n+1}=proj_{D_{j(i_n,\alpha_n)}}(fx_n)$ and choose $\Delta^{j(i_n,\alpha_n)}_{\alpha_{n+1}}$ containing $x_{n+1}$. Finally define $a_{n+1}=x^{j(i_n,\alpha_n)}_{\alpha_{n+1}}$. Note that $A_{a_n,a_{n+1}}=1$.
\item For $-n<0$, assuming we have determined $a_{-k},x_{-k}$\ for $k=0,\ldots,n$, note that $f^{-1}\Delta^{i_{-n}}_{\alpha_{-n}}\subset sat^c(B_{k(i_{-n},\alpha_{-n})},\delta)$; hence, denoting $i_{-n-1}=k(i_{-n},\alpha_{-n})$, there exist $\alpha_{-n-1}$ and $x_{-n-1}\in\Delta^{i_{-n-1}}_{\alpha_{-n-1}}$ satisfying $proj_{D_{i_{-n}}}(fx_{-n-1})=x_{-n}$. Labelling $a_{-n-1}=x^{i_{-n-1}}_{\alpha_{-n-1}}$ we get by Lemma \ref{combinatoria} that $A_{a_{-n-1},a_{-n}}=1$. \end{itemize} In the end we have constructed two bi-infinite sequences $\underline{x},\underline{a}$ satisfying: \begin{enumerate} \item the sequence $\underline{a}\in\Sigma_{A}$; \item the sequence $\underline{x}$\ is a central $(2\nu+Lip(f|\ensuremath{\mathcal{W}^c}))\delta$-pseudo orbit; \item for every $n$, $d(a_n,x_n)\leq \nu$. \end{enumerate} By plaque expansiveness we conclude $x=x_0=\theta(\underline{a})$. \end{proof} \vspace{0.5cm} \begin{remark}\label{sobresymbol} For later use we record that in the previous Proposition the sequence $\underline{a}=(x^{i_n}_{\alpha_n})_n$ constructed is such that for every $n$, $proj(f\Delta_{x^{i_n}_{\alpha_n}})\cap \Delta_{x^{i_{n+1}}_{\alpha_{n+1}}}\neq \emptyset$. \end{remark} \subsection{Rectangles and Markov Structure} As explained in the first section, the local product structure allows us to define a bracket for points in $\cup_{i=1}^N E_i$, where for $x,y\in E_i$\ we have $\langle x,y\rangle_{D_i}=proj_{D_i}(\Wsloc{x}\cap\Wculoc{y})$. We now relate this bracket structure with the one of $\Sigma_{A}$. \begin{proposition}\label{brapre} Let $\underline{a}\in \Sigma_{A}$ be such that $a_0\in D_i$. Then \begin{enumerate} \item $\theta(\Wsloc{\underline{a}})\subset proj_{D_{i}}(\Wcsloc{\theta(\underline{a})})$; \item $\theta(\Wuloc{\underline{a}})\subset proj_{D_{i}}(\Wculoc{\theta(\underline{a})})$. \end{enumerate} \end{proposition} \begin{proof} Take $\underline{b}\in W^s_{loc}(\underline{a})$. Since $a_n=b_n\ \forall n\geq 0$\ we have $$d(a_n',b_n')\leq 2C\delta\quad \forall n\geq 0.$$ Theorem 6.1 of \cite{HPS} then implies that $b_0'\in\Wcsloc{a_0'}$, hence the result. The second part is analogous. \end{proof} \vspace{0.5cm} \begin{corollary}\label{conmuta} The map $\theta$\ is bracket preserving. \end{corollary} \begin{proof} Let $\underline{a},\underline{b}\in\Sigma_{A}$\ with $a_0=b_0\in D_i$. Then $\langle\theta\underline{a},\theta \underline{b}\rangle_{D_i}$\ is well defined and \begin{equation*} \begin{split} \theta(\langle \underline{a},\underline{b}\rangle) &= \theta(W^s_{loc}(\underline{a})\cap W^u_{loc}(\underline{b}))\subset proj_{D_i}(\Wcsloc{\theta \underline{a}})\cap proj_{D_i}(\Wculoc{\theta \underline{b}})=\{\langle\theta \underline{a},\theta \underline{b}\rangle_{D_i}\}. \end{split} \end{equation*} \end{proof} \begin{definition} For $i=1,\ldots N$, a subset $R\subset D_i$ will be called a rectangle if it is invariant under the bracket $\langle \cdot, \cdot \rangle_{D_i}$. Similarly, a subset $R'\subset \Sigma_{A}$ will be called a rectangle if it is invariant under the bracket. \end{definition} One verifies easily that for arbitrary $x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}\in I$ the set \begin{equation} [x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}]:=\{\underline{a}\in\Sigma_{A} : a_0=x_{\alpha_0}^{i_0},\ldots a_r=x_{\alpha_r}^{i_r}\} \end{equation} is a compact rectangle, hence by Proposition \ref{brapre} the set \begin{equation} C_{x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}}=\theta[x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}]
\end{equation} is a compact rectangle as well. We denote \begin{align} \Wst{x,C_{x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}}}&=\Wst{x,D_i}\cap C_{x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}}\\ \Wut{x,C_{x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}}}&=\Wut{x,D_i}\cap C_{x_{\alpha_0}^{i_0},\ldots x_{\alpha_r}^{i_r}}. \end{align} Observe that $\emph{im}\,\theta=\cup_{i,\alpha} C_{x_{\alpha}^i}$, and each one of these rectangles has diameter less than or equal to $2C\delta$; thus $$A_{x^i_{\alpha},x^{i'}_{\alpha'}}=1\Rightarrow f(C_{x_{\alpha}^i})\subset sat^c(D_{i'},r_0).$$ \vspace{0.5cm} \begin{lemma} For $\underline{a}\in [x^i_{\alpha}]$ it holds that \begin{enumerate} \item[a)] $\theta\left( W^s_{loc}(\underline{a}) \right)=\Wst{\theta(\underline{a}), C_{x_{\alpha}^i}}$. \item[b)] $\theta\left( W^u_{loc}(\underline{a}) \right)=\Wut{\theta(\underline{a}), C_{x_{\alpha}^i}}$. \end{enumerate} \end{lemma} \begin{proof} By Proposition \ref{brapre}, $\theta\left( W^s_{loc}(\underline{a}) \right)\subset\Wst{\theta(\underline{a}),C_{x_{\alpha}^i}}$. Consider a point $x\in$ $\Wst{\theta(\underline{a}),C_{x_{\alpha}^i}}$\ and take $\underline{b}\in [x_{\alpha}^i]$\ such that $\theta(\underline{b})=x$. Now $\underline{c}=\langle \underline{a},\underline{b}\rangle \in W^s_{loc}(\underline{a})$\ and $$ \theta(\underline{c})=\langle\theta(\underline{a}),\theta (\underline{b})\rangle=\langle\theta( \underline{a}),x\rangle =x,$$ since $x$\ is in $\Wst{\theta \underline{a},D_i}$. Therefore $x\in\theta(W^s_{loc}(\underline{a}))$. The second part is similar. \end{proof} Next we analyse the behaviour of $\theta$ with respect to the shift dynamics: it will be established below that the natural Markov structure of the rectangles $[x^i_{\alpha}]$ with respect to $\sigma$ induces a similar property on $\emph{im}\,\theta$. First note: \begin{lemma}\label{dynrectangulo} Consider $\underline{a}\in [x^i_\alpha,x^{i'}_{\alpha'}]$. Then $proj_{D_{i'}}(f\theta\underline{a})=\theta\sigma\underline{a}$. \end{lemma} \begin{proof} Let $x=\theta(\underline{a})$. Observe that the sequence $(b_n:=a_{n+1}')_{n\in\mathbb{Z}}$\ $C\delta$-shadows $\sigma (\underline{a})$, and since it respects the center foliation we conclude $$\theta(\sigma\underline{a})=proj_{D_{i'}}(a_1'),$$ $$ fx\in \Wcloc{a_1'}, $$ hence the result. \end{proof} \vspace{0.5cm} \begin{proposition}\label{markovseq} Let $\underline{a}\in \Sigma_{A}$ and consider $x=\theta(\underline{a}) , y=\theta\sigma\underline{a}, z=\theta\sigma^{-1}\underline{a}$. If $i,i',i''$ are such that $x\in D_i,y\in D_{i'},z\in D_{i''}$, then \begin{align} proj_{D_{i'}}(f\Wst{x,C_{a_0}})&\subset\Wst{y,C_{a_1}} \\ proj_{D_{i}}(f\Wut{z,C_{a_{-1}}})&\supset\Wut{x,C_{a_0}} \end{align} \end{proposition} \begin{proof} This is an immediate consequence of the two previous lemmas: \begin{equation*} \begin{split} proj_{D_{i'}}(f\Wst{x,C_{a_0}})& =proj_{D_{i'}}(f\theta(W^s_{loc}(\underline{a})))= \theta(\sigma W^s_{loc}(\underline{a})) \\ &\subset\theta(W^s_{loc}(\sigma \underline{a}))=\Wst{y,C_{a_1}}, \end{split} \end{equation*} and likewise for the unstable part. \end{proof} \vspace{0.5cm} \subsection{Bowen's refinement procedure.} The reader may have observed before that we overdetermined the possible transitions allowed by $A$. We did so in order to guarantee that every $\Delta^i_{\alpha}$ is contained in the image of $\theta$ (Cf. Proposition \ref{sobre}).
There appears to be a trade-off between this ``surjectivity'' and the overdeterminacy, related to the fact that we are using projections along centers, and those are difficult to control. Now we are going to eliminate some unnecessary redundancy using a method of R. Bowen (compare \cite{Shub}, pages 133--134) to cut the rectangles $C_{x^i_{\alpha}}$ into disjoint sub-rectangles, while maintaining the Markov property. For rectangles $C_{x^i_{\alpha}},C_{x^i_{\beta}}$ with non-empty intersection, define: \begin{align*} C_{x^i_{\alpha}:x^i_{\beta}}^1=\{x\in C_{x^i_{\alpha}}:\Wst{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}\neq \emptyset, \Wut{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}\neq \emptyset\} \\ C_{x^i_{\alpha}:x^i_{\beta}}^2=\{x\in C_{x^i_{\alpha}}:\Wst{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}\neq \emptyset, \Wut{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}= \emptyset\} \\ C_{x^i_{\alpha}:x^i_{\beta}}^3=\{x\in C_{x^i_{\alpha}}:\Wst{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}= \emptyset, \Wut{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}\neq \emptyset\} \\ C_{x^i_{\alpha}:x^i_{\beta}}^4=\{x\in C_{x^i_{\alpha}}:\Wst{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}= \emptyset, \Wut{x,C_{x^i_{\alpha}}}\cap C_{x^i_{\beta}}= \emptyset\}. \end{align*} It is not too hard to show that each $C_{x^i_{\alpha}:x^i_{\beta}}^m, m=1,\ldots, 4$ is a rectangle, and furthermore \begin{enumerate} \item[i)] $C_{x^i_{\alpha}:x^i_{\beta}}^1=C_{x^i_{\alpha}}\cap C_{x^i_{\beta}}$, \item[ii)] the sets $C_{x^i_{\alpha}:x^i_{\beta}}^1,C_{x^i_{\alpha}:x^i_{\beta}}^1\cup C_{x^i_{\alpha}:x^i_{\beta}}^2,C_{x^i_{\alpha}:x^i_{\beta}}^1\cup C_{x^i_{\alpha}:x^i_{\beta}}^3$ are closed in $\cup_{i} D_{i}$, and \item[iii)] $\cup_{m=1}^4 C_{x^i_{\alpha}:x^i_{\beta}}^m= C_{x^i_{\alpha}}$. \end{enumerate} Now define \begin{align*} R_{x^i_{\alpha}:x^i_{\beta}}^{m}&=cl\ C_{x^i_{\alpha}:x^i_{\beta}}^m,\ m=1,\ldots, 4\\ F_{x^i_{\alpha}:x^i_{\beta}}&=\cup_{m=1}^4 \partial C_{x^i_{\alpha}:x^i_{\beta}}^m, \end{align*} where the closure and the boundary are taken with respect to the induced topology of the disc $D_{i}$. We have that $\{R_{x^i_{\alpha}:x^i_{\beta}}^{m}\}_{m=1}^4$ is a covering of $C_{x^i_{\alpha}}$ by closed rectangles whose interiors (with respect to the induced topology)\ are disjoint. See Lemma 10.24 in \cite{Shub}. \vspace{0.5cm} Consider the set \begin{equation*} \mathcal{N}:=\emph{im}\,\theta\setminus \bigcup_{i,\alpha,\beta}F_{x^i_{\alpha}:x^i_{\beta}}, \end{equation*} which is relatively open and also dense in $\emph{im}\,\theta$, due to Baire's category theorem. For $x\in \mathcal{N}$ define: \begin{align*} \mathcal{J}(x)&=\{C_{x^i_{\alpha}}:x\in C_{x^i_{\alpha}}\}\\ \mathcal{J}^{*}(x)&=\{C_{x^i_{\beta}}:\exists C_{x^i_{\alpha}}\in\mathcal{J}(x), C_{x^i_{\alpha}}\cap C_{x^i_{\beta}}\neq\emptyset\}\\ P(x)&=\bigcap\{int\, R_{x^i_{\alpha}:x^i_{\beta}}^m: C_{x^i_{\alpha}}\in\mathcal{J}(x), C_{x^i_{\beta}}\in\mathcal{J}^{*}(x), x \in C_{x^i_{\alpha}:x^i_{\beta}}^m\}. \end{align*} \vspace{0.5cm} \begin{lemma}\label{rectanglePx} Each $P(x)$\ is an open rectangle, and if $x,x'\in \mathcal{N}$ and $P(x)\cap P(x')\neq\emptyset$, then $P(x)=P(x')$. In particular there are finitely many rectangles $P(x)$. \end{lemma} \begin{proof} It follows easily that each $P(x)$ is an open rectangle.
Since $\mathcal{N}$\ is dense in $\emph{im}\,\theta$, it suffices to show that $$x'\in P(x)\cap \mathcal{N}\Rightarrow P(x)=P(x'),$$ and for this it will be enough to show $\mathcal{J}(x)=\mathcal{J}(x')$, as the rectangles $C_{x^i_{\alpha}:x^i_{\beta}}^m$\ have disjoint interiors. It is immediate that $\mathcal{J}(x)\subset\mathcal{J}(x')$; for the other inclusion take $C_{x^i_{\beta}}\ni x'$ and consider $C_{x^i_{\alpha}}\in\mathcal{J}(x)$\ such that $C_{x^i_{\alpha}}\cap C_{x^i_{\beta}}\neq\emptyset$. By checking the possibilities, one sees that necessarily $x\in C_{x^i_{\alpha}:x^i_{\beta}}^1=C_{x^i_{\alpha}}\cap C_{x^i_{\beta}}$, and thus $x\in C_{x^i_{\beta}}$\ as well, i.e. $\mathcal{J}(x')\subset\mathcal{J}(x)$. \end{proof} \vspace{0.5cm} We now discuss the dynamics of the rectangles $P(x)$. \begin{definition} We say that $D_{i'}$ is an allowed future for $P(x)\subset D_i$ if there exist $\alpha,\alpha'$ such that $C_{x^i_{\alpha}}\in \mathcal{J}(x)$ and $A_{x^i_{\alpha},x^{i'}_{\alpha'}}=1$. \end{definition} \begin{lemma}\label{rectangulosB} Suppose that $x\in C_{x^i_{\alpha},x^{i'}_{\alpha'}}$, and $y=proj_{D_{i'}}(fx)\in C_{x^{i'}_{\beta}}$. Then there exists $\underline{b}\in\Sigma_A$ satisfying \begin{enumerate} \item $\underline{b}\in [x^i_{\alpha},x^{i'}_{\beta}]$, \item $\theta(\underline{b})=x, \theta(\sigma\underline{b})=y$. \end{enumerate} \end{lemma} \begin{proof} Observe that by definition of $A$ one has $A_{x^i_{\alpha},x^{i'}_{\beta}}=1$. By hypothesis there exist $\underline{a},\underline{c}\in\Sigma_A$ such that \begin{enumerate} \item[i)] $a_0=x^i_{\alpha},c_0=x^{i'}_{\beta}$, \item[ii)] $\theta(\underline{a})=x,\theta(\underline{c})=y$. \end{enumerate} Consider the sequence defined by $$ b_n=\begin{cases} a_n&\text{if }n\leq 0 \\ c_{n-1}&\text{if }n\geq 1, \end{cases}$$ and observe that $\underline{b}\in\Sigma_A$. Let $z=\theta\underline{b}$: we claim that $z=x$. To see this, observe that since $\underline{b}\in W^u_{loc}(\underline{a})$, $z\in\Wut{x,C_{a_0}}$. Furthermore $\sigma \underline{b} \in W^s_{loc}(\underline{c})$, so we get $proj_{D_{i'}}(fz)\in\Wst{y,C_{c_0}}$ and thus $fz\in \Wcs{fx}$. We conclude $z\in\Wcsloc{x}\cap\Wut{x,C_{a_0}}=\{x\}$. \end{proof} \begin{proposition}\label{dynamicaP} Let $x,x'\in \mathcal{N}\cap D_i$ be such that \begin{enumerate} \item $P(x)=P(x')$. \item $x'\in \Wst{x,D_{i}}$. \item $i'$ is an allowed future for $P(x)$. \end{enumerate} Consider the points $y=proj_{D_{i'}}(fx),y'=proj_{D_{i'}}(fx')$, and suppose that $y,y'\in\mathcal{N}$. Then $P(y)=P(y')$. \end{proposition} \begin{proof} We will first establish that $\mathcal{J}(y)=\mathcal{J}(y')$, and for this we note that due to symmetry it suffices to show only one inclusion. Suppose that $y\in C_{x^{i'}_{\beta}}$: since $i'$ is an allowed future for $P(x)$, the previous Lemma allows us to find a sequence $\underline{a}\in [x^{i}_{\alpha},x^{i'}_{\beta}]$ satisfying $\theta(\underline{a})=x,\theta(\sigma\underline{a})=y$. By Proposition \ref{markovseq}, $$ y'\in proj_{D_{i'}}(f\Wst{x,C_{a_0}})\subset\Wst{y,C_{a_1}}\subset C_{x^{i'}_{\beta}}, $$ i.e. $C_{x^{i'}_{\beta}}\in \mathcal{J}(y')$, hence $\mathcal{J}(y)\subset \mathcal{J}(y')$ as we wanted to show. It then follows that $\mathcal{J}^{\ast}(y)=\mathcal{J}^{\ast}(y')$, and $$ \Wst{y,C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}\neq \emptyset\Leftrightarrow \Wst{y',C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}\neq \emptyset. $$ We want to show that the same is true for the unstable manifolds, and thus conclude $P(y)=P(y')$.
Consider a point $w\in\Wut{y,C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}$\ and take a sequence $\underline{a}\in [x^{i}_{\alpha},x^{i'}_{\beta}]$ as above. We have $$ proj_{D_{i'}}(f\Wut{x;C_{a_0}})\supset \Wut{y;C_{a_1}}, $$ hence there exists $z\in \Wut{x;C_{a_0}}$ such that $proj_{D_{i'}}(fz)=w$. The set $C_{a_0}$ is a rectangle, and since $P(x)=P(x')$ we get $u=\langle z,x' \rangle\in \Wut{x';C_{a_0}}$. As $\theta$ is bracket preserving and $C_{a_1}$ is a rectangle, $$ u'=proj_{D_{i'}}(fu)=\langle w,y' \rangle \in C_{a_1}. $$ On the other hand, $z\in C_{x^{i}_{\alpha}},w\in C_{x^{i'}_{\beta'}}$, and thus there exists $\underline{c}\in\Sigma_A$\ such that $$ \theta(\underline{c})=z,\theta(\sigma\underline{c})=w,c_0=x_{\alpha}^{i},c_1=x^{i'}_{\beta'}. $$ Using that $proj_{D_{i'}}(f\Wst{z;C_{c_0}})\subset \Wst{w,C_{c_1}}$ we conclude $u'\in C_{c_1}$, i.e. $$ u'\in \Wut{y',C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}. $$ We have shown that $$ \Wut{y,C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}\neq \emptyset\Rightarrow \Wut{y',C_{x^{i'}_{\beta}}}\cap C_{x^{i'}_{\beta'}}\neq \emptyset, $$ and by symmetry we have the other implication. This completes the proof of the Proposition. \end{proof} \vspace{0.5cm} \subsection{The Markov Partition for the transversal.} Lemma \ref{rectanglePx} implies that the family $\mathcal{R}=\{R=cl\, P(x):x\in\mathcal{N}\}$ is finite, say $\mathcal{R}=\{R_1,\ldots R_s\}$. Each $R_m$ is a compact rectangle, with $R_m=cl\, int\, R_m$. Furthermore, if $P(x)\neq P(y)$ then $P(x)\cap P(y)=\emptyset$, and since these are open sets a simple topological argument implies that the rectangles in $\mathcal{R}$ have pairwise disjoint interiors. We extend the notion of `allowed future' to rectangles in $\mathcal{R}$. \begin{definition} If $R_{\mu}\in \mathcal{R}$ and $D_{i'}$ is an allowed future for $R_{\mu}$, we define the map $\phi_{\mu,i'}^+:R_{\mu}\rightarrow D_{i'}$ by \begin{equation} \phi_{\mu,i'}^+(x)=proj_{D_{i'}}(fx). \end{equation} \end{definition} \begin{proposition}[Markov Property of $\mathcal{R}$]\label{dynamicaR} Let $R_{\mu},R_{\nu}\in \mathcal{R}$ with $D_{i'}$ an allowed future for $R_{\mu}$, and suppose that $x\in int\, R_{\mu}, \phi_{\mu,i'}^+(x)\in int\, R_{\nu}$. Then \begin{align} \phi_{\mu,i'}^+(\Wst{x,R_{\mu}})\subset \Wst{\phi_{\mu,i'}^+(x),R_{\nu}}\\ \phi_{\mu,i'}^+(\Wut{x,R_{\mu}})\supset \Wut{\phi_{\mu,i'}^+(x),R_{\nu}} \end{align} \end{proposition} \begin{proof} For a rectangle $R_{\eta}$ denote $$ \partial^s R_{\eta}:=\{y\in R_{\eta}:\Wst{y;R_{\eta}}\cap int\, R_{\eta}=\emptyset\}. $$ Now take $x\in R_{\mu}\cap \mathcal{N}$ such that $y=\phi_{\mu,i'}^+(x)\in R_{\nu}\cap \mathcal{N}$. Then $R_{\mu}=cl\, P(x),R_{\nu}=cl\, P(y)$, and $$ \Wst{x,R_{\mu}}=cl\, \Wst{x;P(x)},\ \Wst{y,R_{\nu}}=cl\, \Wst{y;P(y)}. $$ By Proposition \ref{dynamicaP} $$ \phi_{\mu,i'}^+(\Wst{x,P(x)})\subset \Wst{y;P(y)}, $$ and thus by continuity $$ \phi_{\mu,i'}^+(\Wst{x,R_{\mu}})\subset \Wst{\phi_{\mu,i'}^+(x),R_{\nu}}. $$ Points $x$ as above are dense in $\cup_{\eta} R_{\eta}$: standard arguments then permit us to extend the conclusion to any point $x'\in int\, R_{\mu}$ with $\phi_{\mu,i'}^+(x')\in int\, R_{\nu}$. Details can be found for example in \cite{Shub}, page 137. Likewise for the unstable part. \end{proof} \vspace{0.5cm} \begin{definition} The disc $D_k$ is an allowed past of $R_{\mu}\subset D_i$ if $D_i$ is an allowed future of some rectangle $R_{\nu}\subset D_k$ with $\phi_{\nu,i}^+(int R_{\nu})\cap int R_{\mu}\neq \emptyset$.
In this case define the map $\phi_{\mu,k}^{-}:R_{\mu}\rightarrow D_{k}$ by \begin{equation} \phi_{\mu,k}^{-}(x)=proj_{D_{k}}(f^{-1}x). \end{equation} \end{definition} \vspace{0.5cm} The following is immediate from Proposition \ref{dynamicaR}. \begin{corollary}\label{CorodynamicaR} Let $R_{\mu},R_{\nu}\in \mathcal{R}$ with $D_{k}$ an allowed past for $R_{\mu}$ and $R_{\nu}\subset D_k$, and suppose that $x\in int\, R_{\mu}$, $\phi_{\mu,k}^{-}(x)\in int\, R_{\nu}$. Then \begin{equation} \phi_{\mu,k}^{-}(\Wut{x,R_{\mu}})\subset \Wut{\phi_{\mu,k}^{-}(x);R_{\nu}} \end{equation} \end{corollary} We summarize the results of this section in the following Theorem. \begin{theorem}\label{MarkovTransversal} Given $\delta>0,\rho>0$ there exist a family $\mathcal{D}=\{D_1,\ldots, D_N\}$ of $(s+u)$-dimensional (embedded) discs transverse to \ensuremath{\mathcal{W}^c}, a family $\mathcal{R}=\{R_1,\ldots,R_s\}$ and a function $af:\mathcal{R}\rightarrow 2^{\mathcal{D}}$ satisfying the following. \begin{enumerate} \item The family $\mathcal{D}$ is pairwise disjoint. \item Each $R_{\mu}$ is a rectangle contained in some disc $D_i$, and with respect to the relative topology of $D_i$ it holds that $R_{\mu}=cl\, int R_{\mu}$. Moreover $diam R_{\mu}<\rho$. \item If $R_{\mu},R_{\nu}$ are contained in the same disc $D_i$ and their interiors share a common point, then $R_{\mu}= R_{\nu}$. \item $M=\cup_{\mu=1}^s sat^c(R_{\mu},\delta)$. \item Suppose that $R_{\mu}\subset D_i,R_{\nu}\subset D_{i'}$ satisfy $D_{i'}\in af(R_{\mu})$. If $x\in int\, R_{\mu}, \phi_{\mu,i'}^+(x)\in int\, R_{\nu}$, then it holds that \begin{enumerate} \item $\phi_{\mu,i'}^+(\Wst{x,R_{\mu}})\subset \Wst{\phi_{\mu,i'}^+(x),R_{\nu}}$. \item $\phi_{\mu,i'}^{-}(\Wut{\phi_{\mu,i'}^+(x),R_{\nu}})\subset \Wut{x,R_{\mu}}$, \end{enumerate} where $\phi_{\mu,i'}^+=proj_{D_{i'}}\circ f,\phi_{\mu,i'}^-=proj_{D_i}\circ f^{-1}$. \end{enumerate} \end{theorem} In summary, the rectangles $R_1,\ldots R_s$ are proper, with disjoint interiors, and satisfy the Markov property for suitable choices of the future. Observe nonetheless that these rectangles do not have a canonically defined future, nor a (canonically defined) past. Theorem A is proved. \section{Preliminaries} In this section we collect some definitions and establish some notation used in the sequel. Throughout the paper, $M$ will denote a closed (compact, without boundary) differentiable manifold. We are interested in the following generalization of Anosov systems. \begin{definition} A $\mathcal{C}^1$\ diffeomorphism $f: M \rightarrow M$\ is \emph{partially hyperbolic} if there exist a continuous splitting of the tangent bundle of the form $$TM=E^u\oplus E^c\oplus E^s,$$\ where both bundles $E^s,E^u$\ have positive dimension, and a continuous Riemannian metric $\norm{\cdot}$ on $M$ such that \begin{enumerate} \item all bundles $E^u,E^s,E^c$\ are $df$-invariant. \item For every $x\in M$ and every unit vector $v^{\sigma}\in E^{\sigma}_x, \sigma=s,c,u$, \begin{gather*} \norm{d_xf(v^{s})}<1<\norm{d_xf(v^{u})}\\ \norm{d_xf(v^{s})}< \norm{d_xf(v^{c})}<\norm{d_xf(v^{u})} \end{gather*} \end{enumerate} \end{definition} The bundles $E^s,E^u,E^c$\ are called the \emph{stable, unstable} and \emph{center} bundle, respectively. The well known \emph{Stable Manifold Theorem} implies that the continuous bundles $E^s,E^u$ are integrable to $f$-invariant continuous foliations $W^s,W^u$, whose leaves are of class $C^1$. These are called the stable and unstable foliations, respectively. See chapter $4$ of \cite{HPS} for the proof.
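To make the definition concrete, the following minimal numerical check (a sketch for illustration only, not part of the arguments of this paper) verifies the inequalities above for the simplest linear example mentioned in the Introduction, namely the product of a linear Anosov automorphism of $\mathbb{T}^2$ with the identity on $S^1$. Since the matrix below is symmetric, its eigendirections are orthogonal, and the defining inequalities reduce to comparisons of the moduli of its eigenvalues.
\begin{verbatim}
import numpy as np

# Cat map on T^2 times the identity on S^1, acting linearly on T^3.
A = np.array([[2, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

rates = np.sort(np.abs(np.linalg.eigvals(A)))
lam_s, lam_c, lam_u = rates   # ~0.382, 1.0, ~2.618

# The defining inequalities of partial hyperbolicity for this map:
assert lam_s < 1 < lam_u      # stable/unstable directions are hyperbolic
assert lam_s < lam_c < lam_u  # center direction is dominated by both
\end{verbatim}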
On the other hand, the center bundle is not always integrable (Cf. \cite{SmaleBull},\cite{Nodyncoh}), although integrability holds in almost all known examples. As one of our main objectives is providing a model for the foliation tangent to $E^c$, the center foliation, we will assume that this bundle is integrable and in fact, to avoid some technical subtleties, we will assume something (seemingly) stronger. \begin{definition} A partially hyperbolic map $f:M\rightarrow M$ is \emph{dynamically coherent} if the bundles $E^c$, $E^{cs}=E^c\oplus E^s$, $E^{cu}=E^c\oplus E^u$ are integrable to continuous $f$-invariant foliations $\ensuremath{\mathcal{W}^c}, \ensuremath{\mathcal{W}^{cs}}, \ensuremath{\mathcal{W}^{cu}}$. \end{definition} \textbf{Convention:} From now on, we will omit ``dynamically coherent'' when referring to partially hyperbolic maps. \vspace{0.5cm} For a point $x\in M$, a positive number $\gamma>0$\ and $\sigma\in \{s,u,c,cs,cu\}$ we will denote by $W^{\sigma}(x;\gamma)$\ the open disc of size $\gamma$\ inside the leaf $W^{\sigma}(x)$, where distances are measured with the corresponding induced metric. Moreover, if $A\subset M$ we denote \begin{align} W^{\sigma}(A;\gamma)&:=\cup_{x\in A}W^{\sigma}(x;\gamma),\quad \sigma\neq c\\ sat^c(A;\gamma)&:=\cup_{x\in A}W^{c}(x;\gamma). \end{align} In the definition of partial hyperbolicity the metric used can be assumed to be such that the bundles $E^c,E^s$\ and $E^u$\ are mutually orthogonal \cite{AdaptedMet}. It follows that we have \emph{local product structure}: there exists some $r_0>0$\ such that for $0<r\leq r_0$, $$d(x,y)<r\Rightarrow \exists\ z\text{ s.t. } H_x^r\cap V_y^r\supset \Wcl[r]{z},$$ where $H_x^r=\Wul[2r]{\Wcl[2r]{x}},V_x^r=\Wsl[2r]{\Wcl[2r]{x}}$. See Section 7 of \cite{HPS}. \begin{definition} The local plaque of $\ensuremath{\mathcal{F}}^{\sigma}$ centered at $x$ is defined as $$ W^{\sigma}_{loc}(x):=\begin{cases} W^{\sigma}(x,2r_0)&\quad \sigma\in\{cs,cu\}\\ W^{\sigma}(x,r_0)&\quad \sigma\in\{c,s,u\}. \end{cases} $$ \end{definition} One of the main consequences of dynamical coherence is the property of shadowing. \begin{definition} Let $f: M\rightarrow M$\ be partially hyperbolic. \begin{enumerate} \item A sequence $\underline{x}=\left\{ x_n \right\}_{-N}^N $ where $N \in \mathbb{N}\cup \{\infty\}$\ is called a $\delta$-pseudo orbit for $f$ if for every $n=-N,\ldots , N-1$, $d(fx_n,x_{n+1})\leq \delta$. If furthermore $f(x_n)\in \Wcl[\delta]{x_{n+1}}$ for every $n=-N,\ldots,N-1$, we say that $\underline{x}$ is a central $\delta$-pseudo orbit. Center stable and center unstable pseudo orbits are defined similarly using the foliations \ensuremath{\mathcal{W}^{cs}},\ensuremath{\mathcal{W}^{cu}}. \item For $\epsilon>0$, we say that the pseudo orbit $\underline{y}=\left\{ y_n \right\}_{-N}^N$\ $\epsilon$-shadows the pseudo orbit $\underline{x}=\left\{ x_n \right\}_{-N}^N$\ if $d(x_n,y_n)<\epsilon$ for every $n=-N,\ldots , N$. \end{enumerate} \end{definition} \begin{theorem}[Shadowing]\label{shadowing} Let $f: M\rightarrow M$\ be partially hyperbolic. Then there exist constants $C>1,\delta_0>0$ such that for $0<\delta\leq \delta_0$ any $\delta$-pseudo orbit\ can be $C\delta$-shadowed by a $C\delta$-central pseudo-orbit. \end{theorem} This theorem is a generalization of the classical shadowing theorem for Anosov maps due to Hirsch-Pugh-Shub, and appears as Theorem 7A-2 in \cite{HPS} in a slightly different formulation, although the version above follows directly from their arguments.
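The defining inequality of a pseudo-orbit is straightforward to test numerically. The following sketch checks $d(fx_n,x_{n+1})\leq\delta$ for a finite segment; the circle doubling map and the noise level used here are hypothetical stand-ins chosen purely for illustration.
\begin{verbatim}
import numpy as np

def is_pseudo_orbit(f, xs, delta, dist):
    # d(f(x_n), x_{n+1}) <= delta for all consecutive pairs
    return all(dist(f(x), y) <= delta for x, y in zip(xs, xs[1:]))

def dist_circle(a, b):            # distance on R/Z
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

f = lambda x: (2.0 * x) % 1.0     # doubling map (illustration only)
rng = np.random.default_rng(0)
xs = [0.1]
for _ in range(20):               # a true orbit perturbed by noise <= 1e-3
    xs.append((f(xs[-1]) + rng.uniform(-1e-3, 1e-3)) % 1.0)

print(is_pseudo_orbit(f, xs, delta=1e-3, dist=dist_circle))   # True
\end{verbatim}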
In principle, there could be more than one bi-infinite central pseudo-orbit shadowing a given one. To remedy this we will work with plaque expansive systems. \begin{definition} Let $f:M\rightarrow M$ be a partially hyperbolic diffeomorphism. We say that: \begin{enumerate} \item The foliation \ensuremath{\mathcal{W}^c} is plaque expansive if there exists $e>0$ such that if $\underline{x}=\left\{ x_n \right\}_{-\infty}^{\infty},\underline{y}=\left\{ y_n \right\}_{-\infty}^{\infty}$ are central $e$-pseudo orbits satisfying $d(x_n,y_n)\leq e$ for every $n$, then $x_0\in \Wc{y_0;e}$. In this case we usually say that the map $f$ is plaque expansive. \item The foliation \ensuremath{\mathcal{W}^{cs}}\ is plaque expansive if there exists $e>0$ such that if $\underline{x}=\left\{ x_n \right\}_{0}^{\infty},\underline{y}=\left\{ y_n \right\}_{0}^{\infty}$ are center stable $e$-pseudo orbits satisfying $d(x_n,y_n)\leq e$ for every $n$, then $x_0\in \Wcs{y_0;e}$. Similarly for \ensuremath{\mathcal{W}^{cu}}. \end{enumerate} \end{definition} Plaque expansiveness holds in every known example, and is guaranteed if $\ensuremath{\mathcal{W}^c}$ is $C^1$, or if $f$ is a center isometry, i.e., $\norm{df|E^c}=1=m(df|E^c)$. It is also $C^1$-stable: if $f$ is plaque expansive then there exists a $C^1$ neighbourhood $U$ of $f$ such that every $g\in U$ is plaque expansive. See Chapter 6 in \cite{HPS}. \begin{lemma}\label{plaqueexpother} If $f$ is plaque expansive, the foliations $\ensuremath{\mathcal{W}^{cs}},\ensuremath{\mathcal{W}^{cu}}$ are also plaque expansive. \end{lemma} The proof is not hard. \subsection{Adapted families of discs.} We will make extensive use of transverse discs to the center foliation. \begin{definition} For $0<\epsilon<r_0$ we say that $D\subset M$ is an adapted disc of size $\epsilon$ if there exists an embedding $h_D:D^{s+u}(\epsilon):=\{p\in \mathbb{R}^{s+u}:\|p\|< \epsilon\}\rightarrow M$ satisfying \begin{enumerate} \item $im\, h_D=D$ and $D$ is transverse to \ensuremath{\mathcal{W}^c}. \item For all $p$, $\frac{1}{K_1}\leq m(d(h_D)_p)\leq \|d(h_D)_p\| \leq K_1$, where $K_1\approx 1$ is a constant. \item If $x,y\in D$, $\Wcloc{x}\cap\Wcloc{y}\neq \emptyset$ implies $x=y$. \end{enumerate} \end{definition} \begin{remark} The constant $K_1$ is chosen to guarantee that $h_D$ is almost an isometry, and only depends on $M$ and $f$. The center of $D$ is $h_D(0)$. \end{remark} For $D$ as above we have a projection $proj_D:sat^{c}(D,r_0)\rightarrow D$ given by \begin{equation} proj_D(y)=\Wcloc{y}\cap D, \end{equation} and if $z\in D$ we define the sets \begin{align} \Wut{z,D}:=H_z\cap D\\ \Wst{z,D}:=V_z\cap D. \end{align} Observe that if $D$ is an adapted disc of size $\epsilon\geq 3r$ centred at $x$ and $y\in D$ satisfies $d(x,y)<2r$, there exist uniquely defined points $y^u_D\in \Wut{x,D},y^s_D\in\Wst{x,D}$\ such that $\{y\}=D\cap V_{y^u_D}^r\cap H_{y^s_D}^r$. The map $\Psi_x^D$ given by $$ \Psi_x^D(y)=(y^u_D,y^s_D) $$ is an open embedding, and thus defines a continuous system of coordinates on $D$. We write $\langle\cdot,\cdot\rangle_D:\Wut{x,D}\times\Wst{x,D}\rightarrow D$\ for the inverse of $\Psi_x^D$. \vspace{0.5cm} \begin{definition} Let $0<\delta<\epsilon<r_0$. We say that $\{D_i\}_{i=1}^N$ is an $(\epsilon,\delta)$-adapted family if the following conditions hold. \begin{enumerate} \item Each $D_i$ is an adapted disc of size $\epsilon$. \item If $i\neq j$ then $D_i\cap D_j=\emptyset$. \item For each $i$ there exist open sub-discs $B_i\subset E_i \subset D_i$ of the form \begin{itemize} \item $B_i=h_{D_i}(D^{s+u}(\delta))$.
\item $E_i=h_{D_i}(D^{s+u}(\frac{\epsilon}{2}))$. \end{itemize} \item $M=\cup_{i=1}^N sat^c(B_i,\delta)$. \end{enumerate} \end{definition} \textbf{Convention:} Recall Theorem \ref{shadowing}. For the previous definition we will assume that $\delta$ is chosen so that \begin{enumerate} \item $8C\delta\ll\epsilon$. \item Every $(3+Lip(f|\ensuremath{\mathcal{W}^c}))\delta$-pseudo orbit can be $C\delta$-shadowed by a pseudo-orbit subordinated to \ensuremath{\mathcal{W}^c}. \end{enumerate} \begin{lemma} For every $\epsilon>0$ sufficiently small and $0<\delta<\frac{\epsilon}{8C}$ there exists an $(\epsilon,\delta)$-adapted family. \end{lemma} \begin{proof} This is an easy consequence of the existence of nice (finite) atlases of arbitrarily small diameter for the center foliation\footnote{A foliated atlas $\{U_k\}_k$ is nice if whenever $U_k\cap U_{k'}\neq \emptyset$ there exists a distinguished open set containing $cl(U_k\cup U_{k'})$.}. See Lemma 1.2.17 in \cite{FoliationsI}. \end{proof} For later use, we also record the following simple remark. \begin{lemma}\label{projectain} There exists $\epsilon_0>0$ such that every $(\epsilon,\delta)$-adapted family $\{D_i\}_{i=1}^N$ with $0<\epsilon\leq \epsilon_0$ satisfies the following: $$ x\in M, d(x,B_i)<C\delta\Rightarrow \#\left(E_i\cap \Wcl[C\delta]{x}\right)=1. $$ \end{lemma} \vspace{0.5cm} \subsection{Shift Spaces} Let $k$ be a natural number. If $A$ is a $0$--$1$ $k\times k$ square matrix we denote by $\Sigma_A,\Sigma_A^+$ the two-sided and one-sided subshifts of finite type determined by $A$, namely \begin{align} \Sigma_A&:=\{\underline{x}\in \{1,\ldots,k\}^{\mathbb{Z}}:A_{x_n,x_{n+1}}=1\ \forall n\in\mathbb{Z} \}\\ \Sigma_A^+&:=\{\underline{x}\in \{1,\ldots,k\}^{\mathbb{N}}:A_{x_n,x_{n+1}}=1\ \forall n\in\mathbb{N} \}. \end{align} Equipped with the product topology, these are compact metrizable spaces, where compatible metrics are given by $d(\underline{x},\underline{y})=\frac{1}{2^{N(\underline{x},\underline{y})}}$ with \begin{equation} N(\underline{x},\underline{y})=\begin{cases}sup\{n:x_i=y_i\ \forall\ |i|<n\}&\text{ for }\underline{x},\underline{y}\in \Sigma_A\\ sup\{n:x_i=y_i\ \forall\ 0\leq i<n\}&\text{ for }\underline{x},\underline{y}\in \Sigma_A^+. \end{cases} \end{equation} It is well known that both $\Sigma_A,\Sigma_A^+$ are zero-dimensional perfect compact sets, and thus homeomorphic to Cantor sets. The corresponding shift map in each of them will be denoted simply by $\sigma$. The two-sided subshift has a natural bracket structure: for sequences $\underline{a},\underline{b}$\ with $a_0=b_0$\ their bracket is the sequence $\underline{c}$\ given by $$c_{n}=\begin{cases} a_n&\text{if }n\geq 0 \\ b_{n}&\text{if }n<0. \end{cases}$$ To understand this structure we introduce the local stable set and the local unstable set of a point $\underline{a}\in \Sigma_{A}$ as \begin{align} W^s_{loc}(\underline{a})&=\{\underline{b}\in\Sigma_{A}:d(\sigma^n\underline{a},\sigma^n\underline{b})\leq 1/3\ \forall\ n\geq 0\}\\ W^u_{loc}(\underline{a})&=\{\underline{b}\in\Sigma_{A}:d(\sigma^n\underline{a},\sigma^n\underline{b})\leq 1/3\ \forall\ n\leq 0\}. \end{align} Observe that $$ W^s_{loc}(\underline{a})=\{\underline{b}\in\Sigma_{A}:a_n=b_n\ \forall n\geq 0\} $$ $$ W^u_{loc}(\underline{a})=\{\underline{b}\in\Sigma_{A}:a_n=b_n\ \forall n\leq 0\}, $$ and in particular $\{\langle\underline{a},\underline{b}\rangle\}=W^s_{loc}(\underline{a})\cap W^u_{loc}(\underline{b})$.
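These combinatorial notions are easy to experiment with on a computer. The sketch below, in which the golden-mean matrix serves as a hypothetical example of $A$, checks admissibility of finite words and computes the bracket on a finite window; sequences are represented as dictionaries $n\mapsto$ symbol.
\begin{verbatim}
import numpy as np

def admissible(A, word):
    # A[w_n, w_{n+1}] == 1 for every consecutive pair of symbols
    return all(A[a, b] == 1 for a, b in zip(word, word[1:]))

def bracket(a, b, N):
    # c_n = a_n for n >= 0 and c_n = b_n for n < 0 (needs a_0 == b_0)
    assert a[0] == b[0]
    return {n: (a[n] if n >= 0 else b[n]) for n in range(-N, N + 1)}

A = np.array([[1, 1],
              [1, 0]])                  # forbids the word 11
print(admissible(A, [0, 1, 0, 0, 1]))   # True
print(admissible(A, [0, 1, 1, 0]))      # False
a = {n: 0 for n in range(-3, 4)}
b = dict(a); b[-1] = 1                  # agrees with a at n = 0
print(bracket(a, b, 3))                 # future of a, past of b
\end{verbatim}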
\vspace{0.5cm} \section{Symbolic representation of the transversal.} Consider the subshift of finite type $\Sigma_S$ determined by the matrix $S$ where $$ S_{\mu,\nu}=1\Leftrightarrow R_{\nu}\subset D_{i'},\ i'\in af(R_{\mu})\text{ and } \phi_{\mu,i'}^+(int R_{\mu})\cap int R_{\nu}\neq\emptyset. $$ For $\underline{a}\in \Sigma_S$ and $l\leq l'\in \mathbb{Z}$ denote $a[l,l']=a_l,\ldots,a_{l'}$. Observe that for the string $a[l,l']$ the rectangles $R_{a_l},\ldots, R_{a_{l'}}$ are ``linked'', in the sense that if $D_{i_{\eta}}$ denotes the disc containing $R_{a_{\eta}}$, $\eta=l,\ldots,l'$, there exists a non-empty relatively open set $V_{a[l,l']}\subset R_{a_l}$ on which the map $$\ \phi_{a[l,l']}^+=\phi_{a_{l'-1},i_{l'}}^+\circ\cdots\circ\phi_{a_{l},i_{l+1}}^+:R_{a_l}\rightarrow R_{a_{l'}} $$ is well defined and continuous. Similarly, $$ \phi_{a[l,l']}^-:W_{a[l,l']}\subset R_{a_{l'}}\rightarrow R_{a_l} $$ is constructed using the maps $\phi^{-}_{a_{\eta},i_{\eta}}$. These considerations allow us to define \begin{equation} h^{s}(\underline{a})=\bigcap_{n\geq 0}(\phi_{a[0,n]}^{+})^{-1}(R_{a_n})=\bigcap_{n\geq 0} V_{a[0,n]} \end{equation} \begin{equation} h^{u}(\underline{a})=\bigcap_{n\geq 0}(\phi_{a[-n,0]}^{-})^{-1}(R_{a_{-n}})=\bigcap_{n\geq 0} W_{a[-n,0]} \end{equation} \begin{equation} h(\underline{a})=h^{s}(\underline{a})\cap h^{u}(\underline{a}). \end{equation} \vspace{0.5cm} \begin{lemma}\label{haches} If $\underline{a}\in\Sigma_S$, there exists $x\in R_{a_0}$ such that \begin{enumerate} \item $h^s(\underline{a})=\Wst{x,R_{a_0}}$ \item $h^{u}(\underline{a})=\Wut{x,R_{a_0}}$ \item $h(\underline{a})=x$. \end{enumerate} \end{lemma} \begin{proof} Part 5 of Theorem \ref{MarkovTransversal} implies that there exists a subset $T\subset R_{a_0}$ such that $\cap_{n\geq 0} V_{a[0,n]}=\cup_{x\in T} \Wst{x,R_{a_0}}$. As $\ensuremath{\mathcal{W}^c}$ is plaque expansive, $\ensuremath{\mathcal{W}^{cs}}$ is plaque expansive as well (Cf. Lemma \ref{plaqueexpother}), and thus for some $x\in R_{a_0}$, $T=\Wst{x,R_{a_0}}$. The second part is similar, and the third is a consequence of the first two. \end{proof} We now discuss the continuity of these maps. Continuity of $h:\Sigma_S\rightarrow \cup_{\eta} R_{\eta}$ will be understood with respect to the product metric of $\Sigma_S$. In the case of $h^{s},h^{u}$ we arbitrarily pick points $x_{\mu}\in int R_{\mu}$: continuity of $h^{s}$ will be understood as continuity of the map $$\Sigma_S\xrightarrow[]{h^{s}}\cup_{\mu} R_{\mu}\rightarrow \cup_{\mu}\Wut{x_{\mu},R_{\mu}},$$ where the last map collapses each $R_{\mu}$ to $\Wut{x_{\mu},R_{\mu}}$ along the partition $\{\Wst{x,R_{\mu}}\}_{x\in R_{\mu}}$. Similarly for $h^{u}$. \vspace{0.5cm} \begin{proposition}\label{projcont} The maps $h^{s},h^{u},h$ are continuous, where $\Sigma_S$ is equipped with the product metric. If furthermore $\ensuremath{\mathcal{W}^c}$ is Lipschitz, then they are Lipschitz continuous provided $\delta$ is sufficiently small. \end{proposition} \begin{proof} It suffices to establish the continuity of $h^{s}$. Since $\mathcal{R}$ is finite and $\delta$ is fixed, we have that for $n\geq0$ it holds that $$ r_n:=\inf\{diam(\Wut{x,V_{a[0,n]}}):\underline{a}\in\Sigma_S, x\in V_{a[0,n]}\}>0. $$ Now assume by contradiction that $h^{s}$ is not continuous. Then there exists a converging sequence $\underline{a}^N\xrightarrow[N\mapsto\infty]{}\underline{a}$ in $\Sigma_S$ such that $x_N=h^{s}(\underline{a}^N)\xrightarrow[N\mapsto\infty]{} x\neq h^{s}(\underline{a})$. Fix $n$ and take $N_0$ such that for $N\geq N_0,d(x_N,x)<r_n$.
Then by definition of $h^{s}$ $$ x\in V_{a^N[0,n]}, $$ and since $\underline{a}^N\xrightarrow[N\mapsto\infty]{}\underline{a}$, we deduce that $x\in V_{a[0,n]}$, which in turn implies that $x\in\cap_{n\geq 0} V_{a[0,n]}$. This contradicts the fact that $\ensuremath{\mathcal{W}^{cs}}$ is plaque expansive. If we further assume that $\ensuremath{\mathcal{W}^c}$ is Lipschitz, we can choose $\delta$ sufficiently small such that for every holonomy transport $g$ associated to an $(\epsilon,\delta)$-adapted family, the map $g\circ f$ contracts stable distances by a factor of $0<k<1$. Then for every $\underline{a}\in \Sigma_S,n\geq0$, $$ sup\{diam(\Wut{x,V_{a[0,n]}}):x\in V_{a[0,n]}\}\leq k^n sup\{diam(\Wut{x,R_{a_0}}):x\in R_{a_0}\}, $$ and it follows that $h^{s}$ is Lipschitz. \end{proof} \begin{remark} For a general partially hyperbolic diffeomorphism, the center foliation is only H\"older continuous \cite{HolFol}. Nonetheless, establishing that $h,h^{s},h^{u}$ are H\"older in general seems difficult without assuming some condition that forces the diameter of the sets $V$ to diminish after every iteration. The argument seems similar to the one necessary to establish plaque expansiveness in general, which is, at the moment of writing, still unknown. \end{remark} We now investigate the dynamics of the subshifts. Recall that given a topological space $X$, a pseudo-group on $X$ is a set $\mathcal{G}=\{g:d(g)\rightarrow r(g)\}$ of local homeomorphisms between proper sets\footnote{In the literature, the notion of pseudo-group is usually defined using open sets instead of proper ones, but the extension to the latter case is straightforward.} of $X$ such that \begin{enumerate} \item $Id:X\rightarrow X\in \mathcal{G}$. \item $g\in \mathcal{G}\Rightarrow g^{-1}\in\mathcal{G}$. \item If $g:d(g)\rightarrow r(g),g':d(g')\rightarrow r(g')\in \mathcal{G}$ with $d(g')\cap r(g)$ proper, then $$ g'\circ g:g^{-1}d(g')\rightarrow g'(r(g))\in \mathcal{G}. $$ \item $g\in \mathcal{G}, U\subset d(g)$ proper implies $g|:U\rightarrow g(U)\in\mathcal{G}$. \item Suppose that $U,V\subset X$ are proper, $g:U\rightarrow V$ is a homeomorphism, and there exists a family $\{g_i:d(g_i)\rightarrow r(g_i)\}_i\subset \mathcal{G}$ such that $\cup_i d(g_i)=U$ and $g|d(g_i)=g_i$. Then $g\in\mathcal{G}$. \end{enumerate} See for example \cite{FolMeasPres}. Note that there is a natural action $\mathcal{G}\curvearrowright X$. \vspace{0.5cm} Let $\Gamma(\mathcal{R})=\cup_{\mu}R_{\mu}$ and consider the pseudo-group $\mathcal{G}(\mathcal{R})$ of homeomorphisms of $\Gamma(\mathcal{R})$ generated by $\{\phi_{\mu,i'}^+,\phi_{\mu,i'}^-:i'\in af(R_{\mu})\}$. Our previous discussion can be summarized by the following (Cf. Corollary \ref{coroA}). \vspace{0.5cm} \begin{theorem} There exists a continuous surjective map $h:\Sigma_S\rightarrow \Gamma(\mathcal{R})$ which semi-conjugates the action of the shift $\sigma\curvearrowright\Sigma_S$ with the natural action $\mathcal{G}(\mathcal{R})\curvearrowright \Gamma(\mathcal{R})$. \end{theorem} Similar conclusions can be drawn for the dynamics of the center-stable and center-unstable plaques using the maps $h^s,h^u$. \vspace{0.5cm} The symbolic model presented is not well behaved under center holonomy. This is to be expected, since there is no reason for this holonomy to have any hyperbolic behavior.
Take for example $f:\mathbb{T}^3\rightarrow \mathbb{T}^3$ a product of an Anosov diffeomorphism of $\mathbb{T}^2$ and the identity of $S^1$: here the center foliation is a circle fibration, and thus the center holonomies are just the identity. Even when there is some hyperbolicity associated to the center foliation, it is not clear how to deal with the interchange of leaves. The simplest case would be when $f:M\rightarrow M$ is a regular element in an (abelian) Anosov action. \vspace{0.5cm} \textbf{Question:} Let $\mathbb{R}^k\curvearrowright M$ be an Anosov action with regular element $f$. Can one get an analogue of Theorem A where the rectangles are invariant by all elements of the action, or at least by a subgroup strictly larger than $\langle f\rangle$? \vspace{0.5cm} The plausible answer is no, at least without imposing some strong conditions on the action. Otherwise it seems that one would get many different invariant measures for the action, something very unusual. \section*{Acknowledgements} The present results are a generalization of the ones obtained by the author in his MSc project under the supervision of Mike Shub. I am very grateful to him for all the ideas and encouragement that he gave me. I have also benefited a lot from conversations with Charles Pugh, Enrique Pujals and Federico Rodriguez-Hertz. To all of them, my most sincere thanks.
\section{Introduction} \label{intro} We are interested in the numerical approximation of solutions to certain classes of moving boundary problems for $d$-dimensional ($d = 2,3$) bounded domains that include, specifically, the so-called single phase Hele-Shaw problem. The classical Hele-Shaw moving boundary problem seeks a solution to Laplace's equation in an unknown region whose boundary changes with time. In the present study, we are actually interested in the more general Hele-Shaw problem that also arises in shape optimization problems. Let $T>0$ be fixed and $B$ be an open bounded set in $\mathbb{R}^d$ $(d=2,3)$ with a smooth boundary $\partial B$. For $t\in[0,T]$, consider a larger open bounded set $\Omega(t) \subset \mathbb{R}^d$ containing $\overline{B}$ with boundary $\Gamma(t):={\partial\Omega}(t)$ such that $\partial B \cap \Gamma(0) = \emptyset$ (i.e., $\operatorname{dist}(\partial B, \Gamma(0)) > 0$). Denote by ${\nu}$ the outward unit normal vector on the boundary of $\Omega(t) \setminus \overline{B}$ as illustrated in Fig. \ref{fig:Fig1}. Given the functions $f:\mathbb{R}^d \times [0,T] \to \mathbb{R}$, $q_B:\partial B \times [0,T] \to \mathbb{R}$, $\bb{\gamma}:\mathbb{R}^d \times [0,T] \to \mathbb{R}^d$, the constant $\lambda \in \mathbb{R}$, and the initial profile $\Omega_0$ of $\Omega(t)$, with $V_{n}:=V_{n}(x,t)$, $x \in \Gamma(t)$, describing the outward normal velocity of the moving interface $\Gamma(t)$, we consider the following moving boundary problem: \begin{prob} \label{prob:general_Hele-Shaw} Find $\Omega(t) \supset \overline{B}$ and $u(\cdot,\ t):\overline{\Omega(t)} \setminus B \to \mathbb{R}$ such that \begin{equation} \label{eq:general_Hele-Shaw} \left\{\arraycolsep=1.4pt \begin{array}{rcll} -\Delta u &=&f &\quad\text{in $\Omega(t)\setminus \overline{B}$, \quad $t \in [0,T]$},\\ (1-\alpha)u + \alpha \nabla u \cdot \nu &=&q_{B} &\quad\text{on $\partial B$},\\ u &=&0 &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ V_{n} &=&(- \nabla u + \bb{\gamma})\cdot {\nu} +\lambda &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ \Omega(0) &=& \Omega_0, \end{array} \right. \end{equation} where $\alpha \in \{0,1\}$. \end{prob} Here, for simplicity, we assume that the boundaries $\partial B$ and $\Gamma(t)$ are smooth, or equivalently, of class $C^{\infty}$. The topological situation illustrating the above problem is depicted in Fig. \ref{fig:Fig1}. \begin{figure}[htbp] \centering \scalebox{0.33}{\includegraphics{fig1}} \caption{The moving domain $\Omega(t)$ and the fixed domain $B$} \label{fig:Fig1} \end{figure} In \eqref{eq:general_Hele-Shaw}, the parameter $\alpha$ indicates whether the boundary condition on the fixed boundary $\partial B$ is a Dirichlet boundary condition $(\alpha = 0)$ or a Neumann boundary condition ($\alpha = 1$). The fourth equation in \eqref{eq:general_Hele-Shaw} expresses the motion of the free boundary, which evolves according to $V_{n} = (- \nabla u + \bb{\gamma})\cdot {\nu}+\lambda$, where the function $u$ satisfies the first three equations in \eqref{eq:general_Hele-Shaw}. Equation \eqref{eq:general_Hele-Shaw} with $f \equiv 0$, $\bb{\gamma} \equiv \bb{0}$ and $\lambda = 0$ is also known in the literature as the classical Hele-Shaw problem, or simply the Hele-Shaw problem (see, e.g., \cite{EscherSimonnet1997}). Let us discuss further the case $f \equiv 0$ and $\lambda = 0$ in \eqref{eq:general_Hele-Shaw}.
If $\alpha = 1$, $q_{B} > 0$, and $\bb{\gamma} \equiv 0$, problem \eqref{eq:general_Hele-Shaw} describes a model of the expanding (two-dimensional) Hele-Shaw flow (see, e.g., \cite{Crank1984,ElliottOckendon1982,Elliott1980,ElliottJanovsky1981,Richardson1972}), which provides a simple description either of the flow of a viscous Newtonian liquid between two horizontal plates separated by a thin gap, or of a viscous liquid moving under Darcy's law in a porous medium \cite{CummingsHowisonKing1999} (see also \cite{MilneThomsonBook1996}). In a typical situation, $u$ represents the pressure in an incompressible viscous fluid blob $\Omega \setminus \overline{B}$, and because the Neumann flux $q_B$ is positive, more fluid is injected through the fixed boundary $\partial B$. As a result, the blob expands in time, and its evolution is described by the moving boundary $\Gamma$. The problem is sometimes formulated with a prescribed pressure (i.e., $q_{B}$ is then interpreted as a given pressure instead of a Neumann flux) on the fixed boundary, i.e., with the non-homogeneous Dirichlet boundary condition $u = q_{B}$ on $\partial B$ (see, e.g., \cite{FasanoPrimecerio1993}). This situation corresponds to the case $\alpha = 0$ in \eqref{eq:general_Hele-Shaw}. For further classical applications of \eqref{eq:general_Hele-Shaw} in the current setting, we refer the readers, for example, to \cite{Crank1984,Elliott1980,ElliottJanovsky1981,Friedman1979}. In the case that $\bb{\gamma} \not\equiv 0$, the given quantity may, in a sense, be interpreted as an (external) background flow. Here, we do not consider the interesting question of the existence of a unique \emph{classical} solution to the general problem \eqref{eq:general_Hele-Shaw}, but readers may refer to \cite{EscherSimonnet1997} for an existence result in the case of $f \equiv 0$ and $q_{B} > 0$. Nevertheless, this issue will be the topic of our future investigation. Meanwhile, results regarding the existence of a weak solution to the Hele-Shaw problem via variational inequalities can be found in \cite{Elliott1980,ElliottJanovsky1981,Gustafsson1985}. Of course, it would be nice if we could actually transform equation \eqref{eq:general_Hele-Shaw} into an elliptic variational inequality formulation, as in the case of the classical Hele-Shaw problem (see \cite{ElliottJanovsky1981}). However, it seems that such a method, which employs the so-called Baiocchi transform \cite{BaiocchiBook1984}, does not apply directly to our problem due to the presence of the external background flow $\bb{\gamma}$. Moreover, we emphasize that we are not aware of any existing solution methods that treat the given problem. So, as in many past studies, this motivates us to at least find an approximate numerical solution to the problem for concrete cases by providing a simple and convenient numerical method for accomplishing the task. Problem \ref{prob:general_Hele-Shaw} is also related to the Bernoulli free boundary problem.
Suppose now that $f=f(x)$, $\bb{\gamma} \equiv 0$, and $\lambda < 0$, and that the shape solution to \eqref{eq:general_Hele-Shaw} happens to converge to a stationary point as $t$ increases indefinitely; that is, there exists a domain $\Omega^*$ such that $ V_{n} = 0$ on $\Gamma^* = \partial \Omega^*$. Then we call \eqref{eq:general_Hele-Shaw} a generalized \emph{exterior} Bernoulli-like free boundary problem: \begin{prob} Given a negative constant $\lambda$ and a fixed open bounded set $B$, find a bounded domain $\Omega \supset \overline{B}$ and a function $u:\overline{\Omega} \setminus B \rightarrow \mathbb{R}$ such that \begin{equation} \label{eq:generallized_Bernoulli_problem} \left\{\arraycolsep=1.4pt \begin{array}{rcll} -\Delta u &=&f &\quad\text{in $\Omega\setminus \overline{B}$},\\ (1-\alpha)u + \alpha \nabla u \cdot \nu &=&q_{B} &\quad\text{on $\partial B$},\\ u\ =\ 0 \quad \text{and}\quad \nabla u\cdot{\nu}&=& \lambda &\quad\text{on $\Gamma$}. \end{array} \right. \end{equation} \end{prob} Bernoulli problems find their origin in the description of free surfaces of ideal fluids (see, e.g., \cite{Friedman1984,Friedrichs1934}). However, they also arise in the context of optimal design, such as in electrochemical machining and galvanization \cite{Crank1984,LaceyShillor1987}, as well as in insulation problems \cite{Acker1981,Flucher1993}. For some qualitative properties of solutions to the Bernoulli problem, including existence, classification, and uniqueness of its solution, and some ideas about numerical approximations of its solutions via fixed-point iterations, we refer the readers to \cite{FlucherRumpf1997}, as well as to the references therein (see also \cite{RabagoThesis2020}). As mentioned earlier, our main objective in this study is to present a simple numerical scheme for solving the moving boundary problem \eqref{eq:general_Hele-Shaw}. Of course, there are already several numerical approaches for solving the present problem, especially in the case of the Hele-Shaw flow $V_{n} = -\nabla u \cdot {\nu}$ (with $f \equiv 0$, $\bb{\gamma} \equiv \bb{0}$, $\lambda = 0$, $\alpha = 1$, and $q_{B} > 0$ in \eqref{eq:general_Hele-Shaw}). In fact, it is well known that the Hele-Shaw problem can be solved numerically using the boundary element method, which was employed, for instance, in \cite{GustafssonVasilev2006}, or the charge simulation method (CSM) applied in \cite{Kimura1997,SakakibaraYazaki2015}. The latter method can also be applied to other two-dimensional moving boundary problems, but is not easy to utilize for three-dimensional problems. To address this difficulty, the authors in \cite{KimuraNotsu2002} proposed an improvement of CSM by combining it with the level-set method. However, to the best of our knowledge, no convenient and effective numerical approach has yet been developed to numerically solve the more general equation \eqref{eq:general_Hele-Shaw} with $f \not\equiv 0$ and $\bb{\gamma} \not\equiv \bb{0}$.
The purpose of this investigation, therefore, is to fill this gap by developing a numerical method to solve \eqref{eq:general_Hele-Shaw} with the following three main characteristics: \begin{itemize}[label=$\bullet$] \item firstly, as opposed to CSM, our proposed method is easier to implement and can easily treat three-dimensional moving boundary problems without ad hoc procedures; \item secondly, in contrast to existing traditional finite element methods used to solve many moving boundary problems, our proposed scheme does not require mesh regeneration at every time step in the approximation process; \item and, lastly, our method can easily be adapted to solve other classes of moving boundary problems, such as the mean curvature flow problem. \end{itemize} The rest of the paper is organized as follows. In Section \ref{sec:CMM}, we formally introduce and give the motivation behind our proposed method, which we term the `comoving mesh method'. We also write out the structure of the numerical algorithm for the method, and then illustrate its applicability in solving the Hele-Shaw problem. Moreover, we evaluate the correctness and accuracy of the scheme through the method of manufactured solutions. Then, in Section \ref{sec:Bernoulli}, we discuss how equation \eqref{eq:general_Hele-Shaw} is closely related to the so-called exterior Bernoulli problem in connection with a shape optimization formulation of the said free boundary problem (FBP). In addition, we numerically solve the FBP using our proposed scheme. Meanwhile, in Section \ref{sec:MCF}, as a further application of CMM, we apply our method to the curve shortening problem, thus showcasing the versatility of the method. Furthermore, in Section \ref{sec:properties}, we state and prove two simple qualitative properties of the proposed numerical approximation procedure. Finally, we end the paper with a concluding statement in Section \ref{sec:conclusion} and a brief remark about our future work. \section{The Comoving Mesh Method for the Hele-Shaw Problem} \label{sec:CMM} This section is mainly devoted to the introduction of the proposed method. The motivations behind its formulation are also given in this section. Moreover, the structure of the algorithm that will be used in the numerical implementation of the method is also provided here. This is followed by a presentation of two simple numerical examples illustrating the applicability of the scheme in solving concrete cases of problem \eqref{eq:general_Hele-Shaw}, one with $\bb{\gamma}(\cdot,t) \equiv \bb{0}$ and the other one with a non-constant $\bb{\gamma}(\cdot,t) \not\equiv \bb{0}$ on $\Gamma(t)$ ($t \in [0,T]$). To check the accuracy of the proposed scheme, we also examine the experimental order of convergence, or EOC, of the method with the help of the method of manufactured solutions \cite{SalariKnupp2000}. \subsection{Idea and motivation behind CMM} \label{sec:idea} As alluded to in the Introduction, the main purpose of the present paper is the development of a simple Lagrangian-type numerical scheme that we call the ``comoving mesh method,'' or simply CMM, for solving a class of moving boundary problems. To begin with, we give a naive idea of the method. For simplicity, we set $\alpha = 1$. Let $T>0$ be a given final time of interest, $N_T$ be a fixed positive integer, and set the time discretization step-size as $\tau := T/N_T$. 
For each time-step index $k = 0,1,\cdots,N_T$, we denote the time discretized domain by $\Omega^k \approx \Omega(k \tau)$ (similarly, $\Gamma^k \approx \Gamma(k \tau)$) and the associated time discretized functions by $u^k \approx u(\cdot, k \tau)$, $f^k \approx f(\cdot, k \tau)$, $q_B^k \approx q_B(\cdot, k \tau)$, and $\bb{\gamma}^k \approx \bb{\gamma}(\cdot, k \tau)$. The rest of the notations used below are standard and will only be emphasized when needed for clarity. After specifying the final time of interest $T>0$ and deciding the value of $N_T \in \mathbb{N}$, a naive numerical method for the Hele-Shaw problem \eqref{eq:general_Hele-Shaw} consists of the following three steps: \begin{itembox}{{Conventional scheme for \eqref{eq:general_Hele-Shaw}}} \vspace{5pt} At each time $t = k \tau$: \begin{description} \item{\underline{Step 1.}} The first step is to solve for $u^k$ over the domain $\Omega^k \setminus \overline{B}$: \[ -\Delta u^k = f^k \ \ \text{in $\Omega^k\setminus \overline{B}$},\quad \nabla u^k \cdot \nu^k =q_{B}^k \ \ \text{on $\partial B$},\quad u^k = 0 \ \ \text{on $\Gamma^k$}. \] \item{\underline{Step 2.}} Then, we define the normal velocity of $\Gamma^k$ in terms of the function $u^k$ and the normal vector ${\nu}^k$ to $\Gamma^k$, i.e., we set $V_{n}^k := (-\nabla u^k + \bb{\gamma}^k) \cdot \nu ^k + \lambda$ on $\Gamma ^k$. \item{\underline{Step 3.}} Finally, we move the boundary along the direction of the velocity field $V_{n}^k$, i.e., we update the moving boundary according to \[\Gamma^{k+1} := \left\{ x+\tau V_{n}^k (x) \nu^k (x) \ \middle\vert \ x \in \Gamma^k \right\}.\] \end{description} \end{itembox} However, there are two obstacles to the realization of this naive idea in a finite element method (FEM). The first main difficulty is that if $u^k$ is a piecewise linear function on a triangular finite element mesh, then $V_{n}^k$ only lives in the space $P_0(\Gamma^k_h)$ (here, of course, $\Gamma^k_h$ denotes the exterior boundary of the triangulation $\Omega_h^k \setminus \overline{B_h}$ of the domain $\Omega^k \setminus \overline{B}$ with the maximum mesh size $h>0$, at the current time step $k$). Unfortunately, this local finite element space is not enough to uniquely define $V_{n}^k$ at the nodal points of the mesh, and, in fact, the velocity must belong to the (conforming piecewise) linear finite element space $P_1(\Gamma^k_h)$ in the third step. The second one is not actually an impediment to implementing the method, but more of a preference issue in relation to mesh generation. Typically, moving boundary problems require mesh regeneration when solved using finite element methods; that is, one needs to generate a triangulation of the domain $\Omega^k \setminus \overline{B}$ at each time step $k$ after the boundary moves. To circumvent these issues, we offer the following remedies. We first address the second issue. In order to avoid generating a triangulation of the domain at every time step, we move not only the boundary, but also the internal nodes of the mesh triangulation at every time step. By doing so, the mesh only needs to be generated at the initial time step $k=0$. This is the main reason behind the terminology used to name the present method (i.e., the `comoving mesh' method). In order to move the boundary and internal nodes simultaneously, we first create a smooth extension ${\bb{w}^k}$ of the velocity field $V_{n}^k{\nu}^k$ into the entire domain $\Omega^k \setminus \overline{B}$ using the Laplace operator. 
This is done more precisely by finding ${\bb{w}^k_h} \in P_1(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d)$, the finite element solution to the following Laplace equation: \begin{equation} \label{eq:naive} - \Delta {\bb{w}^k} = \bb{0} \ \ \text{in $\Omega_h^k \setminus \overline{B_h}$},\qquad {\bb{w}^k} = \bb{0} \ \ \text{on $\partial B_h$},\qquad {\bb{w}^k} = V_{n}^k {\nu}^k \ \ \text{on $\Gamma_h^k$}, \end{equation} where we suppose that a polygonal domain $\overline{\Omega_h^k}$ at $t=k \tau$ and its triangular mesh $\mathcal{T}_h(\overline{\Omega_h^k}\setminus B_h) = \{ K^k_l \} ^{N_e}_{l=1}$ ($K^{k}_l$ is a closed triangle $(d=2)$, or a closed tetrahedron $(d=3)$) are given, and $P_1(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d)$ denotes the $\mathbb{R}^d$-valued piecewise linear function space on $\mathcal{T}_h(\overline{\Omega_h^k}\setminus B_h)$. Then, $\Omega_h^{k+1}$ and $\mathcal{T}_h(\overline{\Omega_h^{k+1}}\setminus B_h) = \{ K^{k+1}_l \} ^{N_e}_{l=1}$ are defined as follows: \begin{eqnarray} \label{eq:mesh-update} \overline{\Omega^{k+1}_h}\setminus B_h &:=&\left \{ x + \tau \bb{w}^k_h(x) \ \middle\vert \ x \in \overline{\Omega^{k}_h}\setminus B_h \right\}, \\ \label{eq:triangle-update} K^{k+1}_l &:=&\left\{ x + \tau \bb{w}^k_h(x) \ \middle\vert \ x \in K^{k}_l \right\}, \end{eqnarray} for all $k = 0,1,\cdots,N_T$; see Fig. \ref{fig:Fig2} for an illustration. \begin{remark} If ${\bb{w}^k_h}$ belongs to $P_2$ or a higher-order finite element space, then, instead of \eqref{eq:triangle-update}, we define the triangular mesh $\mathcal{T}_h(\overline{\Omega_h^{k+1}}\setminus B_h)$ through the set of nodal points $\mathcal{N}^k_h = \{ p^k_j \} ^{N_p}_{j=1}$: \begin{equation} \label{eq:node-update} \mathcal{T}_h(\overline{\Omega_h^{k+1}}\setminus B_h) := \left\{\arraycolsep=1.4pt\def\arraystretch{1.5} \begin{array}{rcll} p^{k+1}_j &:=& p^k_j + \tau {\bb{w}^k_h}(p^k_j), \\[0.2em] K^{k+1}_l \cap \mathcal{N}^{k+1}_h &=& K^k_l \cap \mathcal{N}^k_h . \end{array} \right. \end{equation} \end{remark} Note that the (discrete) time evolution of the annular domain $\overline{\Omega} \setminus B$ defined in \eqref{eq:mesh-update} clearly agrees with the desired evolution, at least on the interior boundary. This is because the choice of extension for the vector field $V_{n}{\nu}$ fixes the boundary $\partial B$ of the interior domain $B$ throughout the entire time evolution interval $[0,T]$. It is worth emphasizing that a similar idea is adopted in the so-called \emph{traction method} developed by Azegami \cite{Azegami1994} for shape optimization problems (see also \cite{AzegamiBook2020}). 
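To make this step concrete, note that the node-relocation \eqref{eq:mesh-update}--\eqref{eq:triangle-update} amounts to a single array update once the extended velocity ${\bb{w}^k_h}$ has been computed: since ${\bb{w}^k_h}$ is piecewise linear, moving the nodal points moves every element with them, while the mesh connectivity never changes. The following minimal Python/NumPy sketch (with hypothetical array names, independent of any particular finite element library) illustrates this update:
\begin{verbatim}
import numpy as np

def move_mesh(nodes, w_nodes, tau):
    # One comoving-mesh update: `nodes` is an (N_p, d) array of nodal
    # coordinates and `w_nodes` an (N_p, d) array with the values of the
    # extended velocity field w^k at the nodes; `tau` is the time step.
    # The element connectivity is untouched; only the coordinates move.
    return nodes + tau * w_nodes

# Example: the three nodes of a single triangle pushed to the right.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w_nodes = np.array([[0.1, 0.0], [0.1, 0.0], [0.1, 0.0]])
print(move_mesh(nodes, w_nodes, tau=0.1))
\end{verbatim}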
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.55\textwidth} \centering \raisebox{-0.5\height}{\resizebox{\textwidth}{!}{\includegraphics{fig2a}}} \caption{Nodal points relocation} \label{fig:Fig2a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \raisebox{-0.5\height}{\resizebox{0.6\textwidth}{!}{\includegraphics{fig2b}}} \caption{A superimposed sectional illustration} \label{fig:Fig2b} \end{subfigure}% \caption{Plot \ref{fig:Fig2a}: initial and deformed mesh after node relocation (scaled with the time-step parameter $\tau$ and moved in accordance with the direction of the velocity field $\bb{w}$); plot \ref{fig:Fig2b}: a superimposed comparison of corresponding sections of the domains $\overline{\Omega^{k}} \setminus B$ and $\overline{\Omega^{k+1}} \setminus B$} \label{fig:Fig2} \end{figure} On the other hand, concerning the first issue, we treat the $P_0(\Gamma^k_h)$-function by using a Robin approximation ${\varepsilon} \nabla {\bb{w}^k} \cdot {\nu}^k + {\bb{w}^k} = V_{n}^k {\nu}^k$ of ${\bb{w}^k} = V_{n}^k {\nu}^k$ in \eqref{eq:naive}, where $\varepsilon > 0$ is a sufficiently small fixed real number. In other words, given $\varepsilon > 0$, we define ${\bb{w}^k_h} \in P_1(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d)$ as the finite element solution to the following mixed Dirichlet-Robin boundary value problem: \begin{equation} \label{eq:velocity} \left\{\arraycolsep=1.4pt\def\arraystretch{1} \begin{array}{rcll} - \Delta {\bb{w}^k} &=& \bb{0} &\quad \text{in $\Omega_h^k \setminus \overline{B_h}$},\\[0.3em] {\bb{w}^k} &=& \bb{0} &\quad \text{on $\partial B_h$},\\[0.3em] {\varepsilon} \nabla \bb{w}^k \cdot \nu^k + {\bb{w}^k} &=& V_{n}^k {\nu}^k &\quad \text{on $\Gamma_h^k$}. \end{array} \right. \end{equation} In variational form, the system of partial differential equations \eqref{eq:velocity} is given as follows: find ${\bb{w}^k} \in H^1_{\partial B,\bb{0}}(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d)$ such that \begin{align} &\displaystyle \int_{\Omega_h^k \setminus \overline{B_h}} \nabla {\bb{w}^k} : \nabla \bb{\varphi} \ {\rm d}x + \frac{1}{{\varepsilon}} \int_{\Gamma_h^k} {\bb{w}^k} \cdot \bb{\varphi}\ {\rm d}s \nonumber\\ &\displaystyle \hspace{1in} = \frac{1}{{\varepsilon}} \int_{\Gamma_h^k} V_{n}^k {\nu}^k \cdot \bb{\varphi}\ {\rm d}s, \quad \forall \bb{\varphi} \in H_{\partial B,\bb{0}}^1(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d), \label{eq:velocity_weakform} \end{align} where $H_{\partial B, \bb{0}}^1(\Omega \setminus \overline{B} ;\mathbb{R}^d)$ denotes the Hilbert space $\{\bb{\varphi} \in H^1(\Omega \setminus \overline{B} ;\mathbb{R}^d) \mid \bb{\varphi} = \bb{0} \ \text{on}\ \partial B\}$. Obviously, the integrals in \eqref{eq:velocity_weakform} can be evaluated even for $V_{n}^k {\nu}^k \in P_0(\Gamma_h^k)$. To summarize the above idea, we provide the following algorithm for the comoving mesh method. \begin{algorithm}[H] \caption{Comoving mesh method} \label{alg:cmm} {\fontsize{9}{10}\selectfont \begin{algorithmic}[1] \STATE Specify $T>0$, $N_T\in \mathbb{N}$, $\varepsilon>0$, and set $k=0$. Also, generate a finite element mesh of the initial domain $\overline{\Omega_h^0} \setminus B_h \approx \overline{\Omega^0} \setminus B$. 
\WHILE{$k \leqslant N_T$} \STATE Compute the finite element solution $u^k_h \in P_1(\Omega_h^k \setminus \overline{B_h})$ of the following: \vspace{-1mm} \[ - \Delta u^k = f^k\quad \text{in $\Omega_h^k \setminus \overline{B_h}$},\qquad \nabla u^k \cdot {\nu}^k = q_B^k\quad \text{on $\partial B_h$},\qquad u^k = 0\quad \text{on $\Gamma_h^k$}. \] \vspace{-3mm} \STATE Define the normal velocity as $V_{n}^k := (-\nabla u^k_h + \bb{\gamma}^k) \cdot \nu ^k +\lambda$ on $\Gamma_h^k$. \STATE Create an extension of $V_{n}^k \nu^k$ by computing the finite element solution ${\bb{w}^k_h} \in P_1(\Omega_h^k\setminus \overline{B_h};\mathbb{R}^d)$ of the following: \vspace{-1mm} \[ - \Delta {\bb{w}^k} = \bb{0}\ \ \text{in $\Omega_h^k \setminus \overline{B_h}$},\quad {\bb{w}^k} = \bb{0}\ \ \text{on $\partial B_h$},\quad \varepsilon \nabla \bb{w}^k \cdot \nu^k + {\bb{w}^k} = V_{n}^k {\nu}^k\ \ \text{on $\Gamma_h^k$}. \] \vspace{-3mm} \STATE Update the current domain by moving the mesh according to \eqref{eq:mesh-update} and \eqref{eq:triangle-update}. \STATE $k \leftarrow k+1$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Application of CMM to a classical Hele-Shaw problem} \label{sec:HS} In this subsection, we apply the comoving mesh method to solve two concrete examples of problem \eqref{eq:general_Hele-Shaw}. First, let us consider the classical Hele-Shaw problem: \begin{equation} \label{eq:classical_Hele-Shaw} \left\{\arraycolsep=1.4pt\def\arraystretch{1.1} \begin{array}{rcll} -\Delta u &=&0 &\quad\text{in $\Omega(t)\setminus \overline{B}$, \quad $t \in [0,T]$},\\ \nabla u\cdot{\nu} &=&1 &\quad\text{on $\partial B$},\\ u &=&0 &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ V_{n} &=& - \nabla u\cdot{\nu} &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ \Omega(0) &=& \Omega_0. \end{array} \right. \end{equation} Note that, because of the maximum principle and the unique continuation property \cite{HormanderBook}, $u$ is positive in $\Omega\setminus \overline{B}$. This means that $\nabla u\cdot{\nu} < 0$ on the moving boundary; hence, the normal velocity $V_{n} = - \nabla u\cdot{\nu}$ is always positive and the hypersurface expands. \begin{numex} \label{example1} In this example, the initial profile $\Omega_0$ of the moving domain $\Omega(t)$ ($t \in [0, T]$) is given as an ellipse, as shown in Fig. \ref{fig:Fig3a} (along with its mesh triangulation), and the final time of interest is set to $T=2$. Algorithm \ref{alg:cmm} is executed using a mesh of uniform width $h \approx 0.1$ with parameter value ${\varepsilon} = 0.1$ and time step size $\tau = 0.1$. \end{numex} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig3a}} \caption{Initial mesh profile $\Omega_h^0$} \label{fig:Fig3a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig3b}} \caption{Mesh profile of $\Omega_h^{N_T}$ at $T=2$} \label{fig:Fig3b} \end{subfigure}% \par\bigskip \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig3c}} \caption{Trajectory of boundary nodes } \label{fig:Fig3c} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig3d}} \caption{Time evolution of the moving boundary} \label{fig:Fig3d} \end{subfigure}% \caption{Computational results of Example \ref{example1}} \label{fig:Fig3} \end{figure} The numerical results of the present experiment are shown in Fig. \ref{fig:Fig3}. Fig. 
\ref{fig:Fig3b}, in particular, shows the shape of the annular domain at the final time of interest $T=2$ (with its mesh profile). Meanwhile, Fig. \ref{fig:Fig3c} plots the trajectory of the boundary nodes, and we see from this figure that the nodes are well-spaced at every time step. The last plot, Fig. \ref{fig:Fig3d}, depicts the evolution of the (exterior) moving boundary $\Gamma^k_h$. Here, the innermost exterior boundary represents the initial profile of $\Gamma^0_h$ and the outermost corresponds to its final shape. As expected, the annular domain $\Omega(t)\setminus \overline{B}$ expands through time. \subsection{EOC of CMM for the Hele-Shaw problem} \label{sec:EOC-HS} To check the accuracy of CMM for Hele-Shaw problems of the form \eqref{eq:general_Hele-Shaw}, with $\alpha=1$ and $\lambda=0$, we use the method of manufactured solutions \cite{SalariKnupp2000}. To this end, we construct a suitable manufactured solution for the Hele-Shaw problem \eqref{eq:general_Hele-Shaw}. \begin{proposition}\label{prop:Manufactured1} We suppose $\phi(x,t)$ is a smooth function with $\phi<0$ for $x \in \overline{B}$ and $|\nabla \phi| \neq 0$ on $\{\phi = 0\}$, for $t \in [0,T]$. We define $f:=\Delta \phi$, $q_B:= -\nabla \phi \cdot {\nu}$, $\bb{\gamma}:= \left(-\frac{\phi _t}{|\nabla \phi|^2}+1\right) \nabla \phi$ (where $\phi_t$ means the partial derivative of $\phi$ with respect to $t$), and $\Omega_0:=\{ \phi(x,0)<0 \}$. Then, $u(x,t)=-\phi(x,t)$ and $\Omega (t):= \{ x \in \mathbb{R}^d \mid \phi(x,t) < 0 \}$ satisfy the moving boundary value problem \eqref{eq:general_Hele-Shaw} with $\alpha = 1$ and $\lambda=0$. \end{proposition} \begin{proof} The proposition is easily verified by straightforward computation, noting that the normal velocity of the moving boundary $V_{n}$ and the unit normal vector ${\nu}$ with respect to the moving boundary $\Gamma(t)$ can be expressed in terms of the level set function $\phi$; that is, $V_{n} = -\frac{\phi_t}{|\nabla \phi|}$ and ${\nu} = \frac{\nabla \phi}{|\nabla \phi|}$ (see, e.g., \cite{KimuraNotes2008}). \end{proof} We check the experimental order of convergence (EOC) by comparing the approximate solution $u^k_h$ with the manufactured solution $u(\cdot, k\tau) = -\phi(\cdot, k\tau)$. In this case, the manufactured solution is viewed as the exact solution interpolated into the solution space of the discretized problem. Now, with regard to the EOC, we define the numerical errors as follows: \[ \operatorname{err}_{\Gamma} := \max_{0 \leqslant k \leqslant N_T} \max_{x \in \Gamma^k_h} \ \operatorname{dist}(x,\Gamma(k \tau)),\qquad \operatorname{err}_{\mathcal{X}^k} := \max_{0 \leqslant k \leqslant N_T} \left\{ \left\| u^k_h-\Pi_h u(\cdot,k\tau) \right\|_{\mathcal{X}^k} \right\}, \] where $\mathcal{X}^k \in \{L^2(\Omega_h^k \setminus \overline{B}), H^1(\Omega_h^k \setminus \overline{B})\}$, and $\Pi_h :H^1(\Omega) \rightarrow P_1(\mathcal{T}_h(\Omega))$ is the projection map such that $\Pi_h u(p) = u(p)$ for all nodal points $p \in \mathcal{N}_h$ of $\mathcal{T}_h(\Omega)$. \begin{numex} \label{example2} As an example, we perform a numerical experiment with the following conditions: $\varepsilon \in \{ 10^{-4}, 10^{-2} \}$, $h \approx \tau = 0.05$, \begin{align*} \phi(x,t) &:=\frac{x_1^2}{2(t+1)}+\frac{x_2^2}{t+1}-1, \quad t\in [0,1], \quad (x:=(x_1,x_2)), \end{align*} \sloppy so that $\overline{B} := \left\{x\in \mathbb{R}^2 \ \middle\vert \ x_1^2+x_2^2 \leqslant 0.5^2 \right\}$ and $\Omega_0 := \left\{x\in \mathbb{R}^2 \ \middle\vert \ 0.5 x_1^2+x_2^2 < 1 \right\}$. 
\end{numex} The computational results of Example \ref{example2} are shown in Fig. \ref{fig:Fig4}. The initial profile of $\Gamma$ is depicted in Fig. \ref{fig:Fig4a}, while its shape at time $T=1$ is shown in Fig. \ref{fig:Fig4b}. Meanwhile, Fig. \ref{fig:Fig4c} and Fig. \ref{fig:Fig4d} plot the evolution of the moving boundary $\Gamma$ (on the first quadrant) from the initial to the final time of interest $T=1$ with $\varepsilon = 10^{-4}$ and $\varepsilon = 10^{-2}$, respectively. Notice that we get a more stable evolution of the moving boundary, in the sense that the boundary nodes are well-spaced at every time step, for a higher value of $\varepsilon$ than for a lower one (refer, in particular, to the encircled region in the plots). In fact, for higher values of $\varepsilon$, we observe better mesh quality than when $\varepsilon$ is of small magnitude. Consequently, we notice in our experiment that there is a trade-off between accuracy and stability of the scheme when $\varepsilon$ is made small compared to the time step size $\tau \approx h$. In fact, as expected, the scheme is stable when the step size is taken relatively small compared to $\varepsilon$. Results regarding accuracy are illustrated below. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.95\textwidth}{!}{\includegraphics{fig4a}} \caption{Initial mesh profile $\Omega^{0}_h$ ($\varepsilon = 10^{-4}$)} \label{fig:Fig4a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.95\textwidth}{!}{\includegraphics{fig4b}} \caption{Mesh profile of $\Omega^{N_T}_h$ at $T=1$ ($\varepsilon = 10^{-4}$)} \label{fig:Fig4b} \end{subfigure}% % \par \bigskip \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.95\textwidth}{!}{\includegraphics{fig4c}} \caption{Boundary nodes trajectory ($\varepsilon = 10^{-4}$)} \label{fig:Fig4c} \end{subfigure}% \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.95\textwidth}{!}{\includegraphics{fig4d}} \caption{Boundary nodes trajectory ($\varepsilon = 10^{-2}$)} \label{fig:Fig4d} \end{subfigure}% \caption{Computational results for Example \ref{example2}} \label{fig:Fig4} \end{figure} We also check how the error changes with the magnitude of the time step $\tau$ along with the maximum mesh size $h$ of the triangulation $\mathcal{T}_h$ by calculating the EOC for the present numerical example. Here, the mesh size $h$ is as large as the time step $\tau$, i.e., $h \approx \tau$. The results are depicted in Fig. \ref{fig:Fig5}. Notice in these figures that the orders are mostly linear when $\varepsilon$ is sufficiently small, except, of course, in Fig. \ref{fig:Fig5b}. Nevertheless, we can expect that the numerical solution converges to the exact solution upon reducing the time step as well as the mesh size in the numerical procedure. Based on these figures, the error is evidently reduced by choosing smaller $\varepsilon$. However, in Fig. \ref{fig:Fig5b}, for sufficiently small $\varepsilon$, the errors become saturated, and the saturated values decrease with order $O(\tau)$. 
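For clarity, the EOC values discussed here are computed in the standard way from the errors of two consecutive runs, namely $\mathrm{EOC}_m = \log(e_{m}/e_{m+1})/\log(\tau_{m}/\tau_{m+1})$. A minimal Python/NumPy helper of our own (not part of the scheme itself) reads:
\begin{verbatim}
import numpy as np

def eoc(taus, errs):
    # Experimental order of convergence between consecutive runs:
    # EOC_m = log(e_m / e_{m+1}) / log(tau_m / tau_{m+1}).
    taus, errs = np.asarray(taus), np.asarray(errs)
    return np.log(errs[:-1] / errs[1:]) / np.log(taus[:-1] / taus[1:])

# Example with synthetic first-order data e = 2*tau (EOC should be 1).
taus = np.array([0.1, 0.05, 0.025, 0.0125])
print(eoc(taus, 2.0 * taus))   # -> [1. 1. 1.]
\end{verbatim}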
\begin{figure}[htbp] \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig5a}} \caption{$\tau$ vs $\operatorname{err}_{\Gamma}$ } \label{fig:Fig5a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig5b}} \caption{$\varepsilon$ vs $\operatorname{err}_{\Gamma}$} \label{fig:Fig5b} \end{subfigure}% \par\bigskip \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig5c}} \caption{$\tau$ vs $\operatorname{err}_{L^2}$ } \label{fig:Fig5c} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig5d}} \caption{$\tau$ vs $\operatorname{err}_{H^1}$} \label{fig:Fig5d} \end{subfigure}% \caption{Errors of convergence for Example \ref{example2}} \label{fig:Fig5} \end{figure} \section{Bernoulli Free Boundary Problem} \label{sec:Bernoulli} In this section, we showcase the practicality of the method for solving stationary free boundary problems. \subsection{Application of CMM in solving the Bernoulli free boundary problem} \label{sec:BP} Here, we shall show that our proposed finite element scheme can actually be applied to numerically solve the well-known Bernoulli problem, a prototype of stationary free boundary problems \cite{FlucherRumpf1997}. The applicability of our method in solving the said problem is not surprising since the kinematic boundary condition $V_{n} = (- \nabla u + \bb{\gamma})\cdot {\nu} +\lambda$ with $\bb{\gamma} \equiv \bb{0}$ and $\lambda < 0$ in fact provides a descent direction for a gradient-based descent algorithm for solving the Bernoulli problem in the context of shape optimization. To see this, let us briefly discuss how the Bernoulli problem can be solved using the method of shape optimization, a well-established tool for solving FBPs \cite{DelfourZolesio2011}. The Bernoulli problem splits into two types: (i) the exterior case, similar to the topological profile of the domain $\Omega\setminus \overline{B}$ examined in previous sections, and (ii) the interior case, which is the exact opposite of the exterior problem (i.e., the free boundary is the interior part of the disjoint boundaries). In the discussion that follows, we shall focus on the former case, which can be described by the following overdetermined boundary value problem: \begin{equation} \label{Bernoulli_problem} \left\{\arraycolsep=1.4pt\def\arraystretch{1.} \begin{array}{rcll} -\Delta u &=&0 &\quad\text{in $\Omega\setminus \overline{B}$},\\ u &=&1 &\quad\text{on $\partial B$},\\ u=0\quad\text{and}\quad\nabla u\cdot{\nu}&=&\lambda &\quad\text{on $\Gamma$}. \end{array} \right. \end{equation} There are several ways to reformulate the above problem into a shape optimization setting (see, e.g., \cite{RabagoThesis2020} and the references therein), and the one we are concerned with here is the minimization of the shape functional \cite{EpplerHarbrecht2006} \[ J(\Omega) = \int_{\Omega}\left( |\nabla (u(\Omega))|^2 + \lambda^2 \right) \operatorname{d}\!x, \] where $u = u(\Omega)$ is the unique weak solution to the underlying well-posed state problem: \vspace{2mm} Find $u \in H^1_{\Gamma,0}(\Omega \setminus \overline{B})$, with $u = 1$ on $\partial B$, such that \begin{equation}\label{eq:state} \int_{\Omega \setminus \overline{B}} \nabla u \cdot \nabla \varphi \ {\rm d}x = 0, \quad \forall \varphi \in H^1_{\Gamma,0}(\Omega \setminus \overline{B}). 
\end{equation} We would like to emphasize here that the positivity of the Dirichlet data on the fixed boundary $\partial B$ implies that the state solution $u$ is positive in $\Omega \setminus \overline{B}$. This, in turn, yields the identity $|\nabla u| \equiv - \nabla u \cdot \nu$ on $\Gamma$ because $u$ takes homogeneous Dirichlet data on the free boundary $\Gamma$. The solution to the exterior Bernoulli problem \eqref{Bernoulli_problem} is equivalent to finding the solution pair $(\Omega, u(\Omega))$ to the shape optimization problem \begin{equation} \label{shape_problem} \min_{\Omega} J(\Omega), \end{equation} where $u(\Omega) \in H^1_{\Gamma,0}(\Omega \setminus \overline{B})$, with $u = 1$ on $\partial B$, satisfies the variational problem \eqref{eq:state}. % This results from the necessary condition for a minimizer of the cost functional $J(\Omega)$; that is, \[ \operatorname{d}\!J(\Omega)[\boldsymbol{V}] = \left. \frac{\operatorname{d}}{\operatorname{d}\varepsilon} J(\Omega_{\varepsilon})\right|_{\varepsilon = 0} = \int_{\Gamma} \left[ \lambda^2 - \left( \nabla u \cdot {\nu} \right)^2\right] V_{n} \operatorname{d}\!s =0, \quad V_{n}:= \boldsymbol{V} \cdot {\nu}, \] has to hold for all sufficiently smooth perturbation fields $\boldsymbol{V}$. Here, $\Omega_\varepsilon$ stands for a deformation of $\Omega$ along the deformation field $\boldsymbol{V}$ vanishing on $\partial B$. For more details on how to compute $\operatorname{d}\!J(\Omega)[\boldsymbol{V}]$, and for more discussion on shape optimization methods in general, we refer the readers to \cite{DelfourZolesio2011} and \cite{SokolowskiZolesio1992}. To numerically solve \eqref{shape_problem}, a typical approach is to utilize the \emph{shape gradient} (i.e., the kernel of the shape derivative $\operatorname{d}\!J(\Omega)[\boldsymbol{V}]$; see, e.g., \cite[Thm. 3.6, p. 479--480]{DelfourZolesio2011}) in a gradient-based descent algorithm. For instance, given enough regularity on the boundary $\Gamma$ and on the state $u$, we can take $\boldsymbol{0} \not\equiv \boldsymbol{V} = -\left[ \lambda^2 - \left(\nabla u \cdot {\nu}\right)^2\right]{\nu} \in L^2(\Gamma)$. This implies that, formally, for small $t>0$, we have the following inequality \begin{align*} J(\Omega_t) &= J(\Omega) + t \left. \frac{\operatorname{d}}{\operatorname{d}\varepsilon} J(\Omega_{\varepsilon})\right|_{\varepsilon = 0} + O(t^2)\\ &= J(\Omega) + t \int_{\Gamma} \left[ \lambda^2 - \left(\nabla u \cdot {\nu} \right)^2\right] V_{n} \operatorname{d}\!s + O(t^2)\\ &= J(\Omega) - t \int_{\Gamma} |V_{n}|^2 \operatorname{d}\!s + O(t^2) < J(\Omega). \end{align*} Here, we observe that we can simply take $(-\nabla u \cdot {\nu} + \lambda){\nu}$ as the descent vector $\boldsymbol{V}$. This follows from the fact that $\nabla u \cdot {\nu} + \lambda < 0$ on $\Gamma$ since $|\nabla u| \equiv - \nabla u \cdot \nu$ on $\Gamma$. Indeed, with $\boldsymbol{V} = (-\nabla u \cdot {\nu} + \lambda){\nu}$, we see that \begin{align*} J(\Omega_t) &= J(\Omega) + t \int_{\Gamma} \left(\nabla u \cdot {\nu} + \lambda \right) \left(-\nabla u \cdot {\nu} + \lambda\right) V_{n} \operatorname{d}\!s + O(t^2)\\ &= J(\Omega) + t \int_{\Gamma} \underbrace{\left(\nabla u \cdot {\nu} + \lambda \right) }_{<\ 0} |V_{n}|^2 \operatorname{d}\!s + O(t^2) < J(\Omega). 
\end{align*} It is worth mentioning here that simply taking the kernel of the shape derivative of the cost function (multiplied by the normal vector on the free boundary) as the deformation field $\boldsymbol{V}$ may lead to a subsequent loss of regularity of the free boundary, hence producing oscillations of the free boundary. To avoid such phenomena, the descent vector is, in most cases, replaced by the so-called \emph{Sobolev gradient} \cite{Neuberger2010}. A strategy to do this is to apply the \emph{traction method} or the $H^1$ gradient method, which are popular smoothing techniques in the field of shape design problems (see, e.g., \cite{AzegamiBook2020}). Now, the evolution of the free boundary $\Gamma(t)$ of the Bernoulli problem according to a shape gradient-based descent algorithm (see, e.g., \cite{EpplerHarbrecht2006}) is described by an evolution equation similar to the Hele-Shaw problem, with the moving boundary given as $\Gamma(t)$: \begin{equation} \label{Hele-Shaw-Bernoulli} \left\{\arraycolsep=1.4pt\def\arraystretch{1.1} \begin{array}{rcll} -\Delta u &=&0 &\quad\text{in $\Omega(t)\setminus \overline{B}$, \quad $t\in[0, T]$},\\ u &=&1 &\quad\text{on $\partial B$},\\ u &=&0 &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ V_{n} &=&- \nabla u\cdot{\nu} + \lambda &\quad\text{on $\Gamma(t)$, \quad $t \in [0,T]$},\\ \Omega(0) &=& \Omega_0, \end{array} \right. \end{equation} where $T>0$. Before we give a concrete numerical example illustrating the evolution of the solution of \eqref{Hele-Shaw-Bernoulli}, note that the convergence of the solution of the moving boundary problem in CMM to a stationary (non-degenerate) shape solution will be given in Section \ref{sec:properties} (see Proposition \ref{prop:convergence_to_a_stationary_point}). Furthermore, we infer from this claim that the convergence of $\Omega(t)$ to $\Omega^\ast$, as time $t$ increases indefinitely, does not depend on the choice of the value of the parameter $\varepsilon$ in the $\varepsilon$-approximation of the normal-velocity flow $V_{n}\nu$ of the moving boundary $\Gamma(t)$. \begin{numex} \label{example3} Let us now consider a concrete example of the exterior Bernoulli problem and apply CMM to approximate its numerical solution. We consider the problem with $\lambda = -10$, and define the fixed interior domain as the \texttt{L}-shaped domain $B = (-0.25,0.25)^2\setminus [0,0.25]^2$. Also, we solve the problem for different choices of the initial boundary $\Gamma(0)$. In particular, we consider it to be a circle $\Gamma_1^0$, a square with rounded corners $\Gamma_2^0$, and a rectangle with rounded corners $\Gamma_3^0$. We carry out the approximation procedure by discretizing these domains with (initial) triangulations having a mesh of width $h \approx 5 \times 2^3$. We set the time step to $\tau = 0.001$ and take $T = 1$ as the final time of interest. Hence, the procedure terminates after $N_T = 1000$ time steps. Lastly, we set the CMM parameter $\varepsilon$ to $0.1$. \end{numex} The results of the experiments are summarized in Fig. \ref{fig:Fig6}--Fig. \ref{fig:Fig9}. Fig. \ref{fig:Fig6} depicts the initial mesh triangulation for each of the mentioned test cases. Fig. \ref{fig:Fig7}, on the other hand, shows the mesh profile after $N_T$ time steps (i.e., the computational mesh profile at time $T=1$). Notice from these plots that the mesh quality actually deteriorates in the sense that the areas of some of the triangles become very small (see the part of the discretized shape near the concave region of the domain in Fig. \ref{fig:Fig7}). 
This is not actually surprising since we did not impose any kind of mesh improvement or re-meshing during the approximation process. Of course, as a consequence, the step size may become too large in comparison with the minimum mesh size of the triangulation after a large number of time steps have passed, and this may cause instability in the approximation scheme. Even so, we did not encounter this issue in the present test examples. Meanwhile, to illustrate how the nodes change after each time step, we plot the boundary nodes' trajectories from the initial to the final time step for each test case; these are shown in Fig. \ref{fig:Fig8}. Moreover, in Fig. \ref{fig:Fig9a}, we plot the shapes at $T=1$ (i.e., the shape $\Gamma_i^k$, $i=1,2,3$, at final time step $k=N_T$) for each of the test cases. Notice that the computed shapes are slightly different, but are nevertheless close to the shape obtained via shape optimization methods (see \cite{RabagoThesis2020}). This is primarily due to the fact that the numbers of triangles in the initial mesh profiles generated for the test cases are also different. Nonetheless, as we verified numerically, the resulting shapes at time $T=1$ for the different cases coincide with one another under smaller time steps and finer meshes. Lastly, Fig. \ref{fig:Fig9b} graphs the histories of the $L^2(\Gamma)$-norm of $\nabla u^k \cdot \nu^k - \lambda$ for each case. We mention here that we also tested the case where the initial shape entirely contains the closure of the stationary shape (which is typically the experimental setup examined in the literature), and, as expected, we obtained almost identical shapes to the ones computed for the given cases. Here, we opted to consider the above-mentioned test setups to see whether our scheme works well in the case that the initial shape does not contain some regions of the stationary shape. 
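As an elementary, CMM-independent sanity check of the stationarity condition $\nabla u \cdot \nu = \lambda$ on $\Gamma^\ast$, one may consider the radially symmetric exterior Bernoulli problem: if $B$ is a disk of radius $R$ (a geometry that differs from the \texttt{L}-shaped $B$ of Example \ref{example3}) and $\Gamma^\ast$ is a concentric circle of radius $\rho > R$, then $u(r) = \log(\rho/r)/\log(\rho/R)$, and the free boundary condition reduces to the scalar equation $-1/(\rho \log(\rho/R)) = \lambda$. The following Python sketch (our own illustration) solves this equation for $\rho$ by bisection:
\begin{verbatim}
import math

def bernoulli_radius(R, lam, hi=10.0, tol=1e-12):
    # Stationary radius rho > R of the radially symmetric exterior
    # Bernoulli problem: -1/(rho*log(rho/R)) = lam with lam < 0, i.e.
    # rho*log(rho/R) = -1/lam; the left side is increasing for rho > R.
    target = -1.0 / lam
    f = lambda rho: rho * math.log(rho / R) - target
    lo = R * (1.0 + 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

rho = bernoulli_radius(R=0.5, lam=-10.0)
print(rho, -1.0 / (rho * math.log(rho / 0.5)))  # rho and recovered lambda
\end{verbatim}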
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig6a}} \caption{Mesh profile of $\Gamma_1^0$} \label{fig:Fig6a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig6b}} \caption{Mesh profile of $ \Gamma_2^0$} \label{fig:Fig6b} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig6c}} \caption{Mesh profile of $\Gamma_3^0$} \label{fig:Fig6c} \end{subfigure}% \caption{Initial computational meshes for each test case in Example \ref{example3}} \label{fig:Fig6} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig7a}} \caption{Mesh profile of $\Gamma_1^{N_T}$} \label{fig:Fig7a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig7b}} \caption{Mesh profile of $\Gamma_2^{N_T}$} \label{fig:Fig7b} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig7c}} \caption{Mesh profile of $\Gamma_3^{N_T}$} \label{fig:Fig7c} \end{subfigure}% \caption{Computational mesh profiles for each test case in Example \ref{example3} at $T=1$} \label{fig:Fig7} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig8a}} \caption{Case $\Gamma(0)=\Gamma_1^0$} \label{fig:Fig8a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig8b}} \caption{Case $\Gamma(0)=\Gamma_2^0$} \label{fig:Fig8b} \end{subfigure}% \hfill \begin{subfigure}[b]{0.32\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig8c}} \caption{Case $\Gamma(0)=\Gamma_3^0$} \label{fig:Fig8c} \end{subfigure}% \caption{Boundary nodes' trajectories for each test case in Example \ref{example3}} \label{fig:Fig8} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig9a}} \caption{Computed shapes} \label{fig:Fig9a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig9b}} \caption{Histories of values of $\|\nabla u^k \cdot \nu^k - \lambda\|_{L^2(\Gamma_i^k)}$} \label{fig:Fig9b} \end{subfigure}% \caption{Plot \ref{fig:Fig9a}: Cross comparison of computed shapes at $T=1$; plot \ref{fig:Fig9b}: history of the $L^2$-norm $\|\nabla u^k \cdot \nu^k - \lambda\|_{L^2(\Gamma_i^k)}$, $i=1,2,3$, for Example \ref{example3}} \label{fig:Fig9} \end{figure} \section{Mean Curvature Flow Problem} \label{sec:MCF} \subsection{Application of CMM to mean curvature flow problem} As a further application of CMM, we showcase in this section how the method can easily be adapted to handle mean curvature flows: \begin{equation} \label{eq:curvature flow} V_{n} = -\kappa \qquad \text{on $\Gamma(t)$}, \end{equation} where $\kappa$ denotes the curvature of $\Gamma(t)$ for $d=2$, or the sum of the principal curvatures of $\Gamma(t)$ for $d \geqslant 3$. The corresponding problem in this situation is often referred to in the literature as the \emph{curve shortening problem} when $d=2$ (see, e.g., \cite{GageHamilton1986,Grayson1987}), and, in general (i.e., $d \geqslant 3$), as the \emph{mean curvature flow problem} (see, e.g., \cite{Dziuk1991,Huisken1984}). 
Here, we use the latter terminology in any dimension. For other numerical methods used to solve the problem, such as the CSM coupled with the level-set method, or a finite element method using an approximation by a reaction-diffusion equation, we refer the readers to \cite{KimuraNotsu2002} and \cite{NochettoVerdi1996}, respectively. Now, let $\kappa^k$ be the curvature of $\Gamma^k = \partial \Omega ^k$. Similarly to \eqref{eq:velocity}, the smooth extension of $V_{n}{\nu}$ according to CMM satisfies the following problem for ${\bb{w}^k_h} : \Omega_h^k\setminus \overline{B_h} \to \mathbb{R}^d$: \begin{equation} \label{eq:curv-velocity} \left\{\arraycolsep=1.4pt\def\arraystretch{1} \begin{array}{rcll} - \Delta {\bb{w}^k} &=& \bb{0} &\quad \text{in $\Omega_h^k \setminus \overline{B_h}$},\\[0.3em] {\bb{w}^k} &=& \bb{0} &\quad \text{on $\partial B_h$},\\[0.3em] {\varepsilon} \nabla \bb{w}^k \cdot \nu^k + {\bb{w}^k} &=& -\kappa^k {\nu}^k &\quad \text{on $\Gamma_h^k$}. \end{array} \right. \end{equation} In variational form, the system of partial differential equations \eqref{eq:curv-velocity} is given as follows: find ${\bb{w}^k} \in H^1_{\partial B,\bb{0}}(\Omega^k\setminus \overline{B};\mathbb{R}^d)$ such that \begin{align} &\displaystyle \int_{\Omega^k \setminus \overline{B}} \nabla {\bb{w}^k} : \nabla \bb{\varphi} \ {\rm d}x + \frac{1}{{\varepsilon}} \int_{\Gamma^k} {\bb{w}^k} \cdot \bb{\varphi}\ {\rm d}s \nonumber\\ &\displaystyle \hspace{1in} = -\frac{1}{{\varepsilon}} \int_{\Gamma^k} \kappa^k {\nu}^k \cdot \bb{\varphi}\ {\rm d}s\nonumber\\ &\displaystyle \hspace{1in} = -\frac{1}{{\varepsilon}} \int_{\Gamma^k} \operatorname{div}_{\Gamma} \bb{\varphi} \ {\rm d}s, \quad \forall \bb{\varphi} \in H_{\partial B,\bb{0}}^1(\Omega^k\setminus \overline{B};\mathbb{R}^d),\label{curvatureflow_weakform} \end{align} where $\operatorname{div}_{\Gamma}$ denotes the \emph{tangential divergence} (see, e.g., \cite[Chap. 9, Sec. 5.2, eq. (5.6), p. 495]{DelfourZolesio2011} or \cite[Chap. 3, Sec. 1, Def. 2.3, p. 53]{KimuraNotes2008}). Evaluating the mean curvature term numerically is quite problematic, especially when implemented in a finite element method. Here, however, we point out that, to numerically evaluate the integral involving the mean curvature $\kappa$, one may utilize the so-called \emph{Gauss-Green formula} on $\Gamma$ (see, e.g., \cite[Chap. 2, Sec. 2, Thm. 2.18, p. 56]{KimuraNotes2008} or \cite[eq. (5.27), p. 498]{DelfourZolesio2011}): \begin{equation} \label{eq:Gauss_Green_formula} \int_{{\Gamma}}{\kappa {\nu} \cdot \bb{v}}\ {\rm d}s = \int_{{\Gamma}}{ \operatorname{div}_{\Gamma} \bb{v} } \ {\rm d}s, \end{equation} which is valid for a $C^2$ regular boundary $\Gamma$ and a vector-valued function $ \bb{v} : \Gamma \to \mathbb{R}^d$ that belongs at least to $C^1(\Gamma; \mathbb{R}^d)$. Hence, the variational problem \eqref{curvatureflow_weakform} can be solved at once without the need to evaluate the mean curvature $\kappa^k$ at every time step $k = 0,1,\cdots,N_T$. To implement in a finite element method the right side integral appearing in the variational problem \eqref{curvatureflow_weakform}, we remark that the identity $\operatorname{div}_{\Gamma} \bb{\varphi} = \operatorname{div} \bb{\varphi} - \frac{\partial \bb{\varphi}}{\partial \nu} \cdot\nu$ on $\Gamma$ actually holds for smooth $\Gamma$ and $\bb{\varphi} : \overline{\Omega} \to \mathbb{R}^d$. 
So, for a polygonal mesh $\Omega_h$ and $\Gamma_h:= \partial \Omega_h$, with triangular mesh $\mathcal{T}_h$ and elements $\varphi_{ih} \in P_l(\mathcal{T}_h)$ ($i=1, 2, \ldots, d$), $l \in \mathbb{N}$, we have \[ \int_{\Gamma_h} \operatorname{div}_{\Gamma_h} \bb{\varphi}_h \ {\rm d}s = \int_{\Gamma_h} \left( \operatorname{div} \bb{\varphi}_h - \frac{\partial \bb{\varphi}_h}{\partial \nu} \cdot\nu \right) {\rm d}s . \] \begin{numex} \label{example4} With the above at hand, we perform a numerical experiment for the mean curvature flow problem, which we execute under the following conditions: ${\varepsilon} = 0.1$, $\tau = 5 \cdot 10^{-4}$, with the maximum mesh size of width $h \approx 0.2$, $t \in [0,T]$, $T=1$, $\Omega_0 := \left\{ (r,\theta) \in \mathbb{R}^2 \middle\vert\ 0 \leqslant r < \frac{2}{2-\cos(5\theta)},\ 0 \leqslant \theta \leqslant 2\pi \right\}$, and $\overline{B}$ is the closed disk $C(\bb{0},0.5)$ as in Example \ref{example2}. \end{numex} The results of the experiment are summarized in Fig. \ref{fig:Fig10}. Here, the initial profile, plotted with its mesh triangulation, is shown in Fig. \ref{fig:Fig10a}. Fig. \ref{fig:Fig10b}, on the other hand, plots the mesh profile at selected time steps. The third figure, Fig. \ref{fig:Fig10c}, depicts the evolution of the moving boundary from its initial profile (outermost exterior boundary) up to its final shape (innermost exterior boundary), and at some intermediate time steps. Fig. \ref{fig:Fig10d} again plots the time evolution of the moving boundary, but now viewed on the first quadrant and with emphasis on the location of the boundary nodes at time steps $k = 100j$, for $j=0, 1, \ldots, 20$. As expected, the curvature flow equation $V_{n} = -\kappa$ on $\Gamma(t)$ has the effect of flattening uneven parts of the boundary, hence shrinking the whole domain towards the geometric profile of the interior boundary $\partial B$ (as evident in the figures) after a sufficiently large time has passed. \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig10a}} \caption{Initial mesh profile of $\Omega_h^0$} \label{fig:Fig10a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.445\textwidth}{!}{\includegraphics{fig10b1}} \resizebox{0.445\textwidth}{!}{\includegraphics{fig10b2}} \vskip1pt \resizebox{0.445\textwidth}{!}{\includegraphics{fig10b3}} \resizebox{0.445\textwidth}{!}{\includegraphics{fig10b4}} \caption{Mesh profile at $k=400, 800, 1200, 1600$} \label{fig:Fig10b} \end{subfigure}% \par\bigskip \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig10c}} \caption{Time evolution of the moving boundary} \label{fig:Fig10c} \end{subfigure}% \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \resizebox{0.9\textwidth}{!}{\includegraphics{fig10d}} \caption{Boundary nodes location at selected $k$s} \label{fig:Fig10d} \end{subfigure}% \caption{Computational results for Example \ref{example4}} \label{fig:Fig10} \end{figure} \subsection{EOC of CMM for the mean curvature flow problem} \label{sec:EOC-CF} We also check the accuracy of CMM for curvature flows in the same way as in subsection \ref{sec:EOC-HS}. That is, we construct a manufactured solution and then compare the numerical solution obtained through the proposed scheme against it. For this purpose, we state the following construction of an appropriate manufactured solution. 
\begin{proposition}\label{prop:MS-CF} We suppose $\phi(x,t)$ is a smooth function with $\phi<0$ for $x \in \overline{B}$ and $|\nabla \phi| \neq 0$ on $\{\phi = 0\}$ for $t \in [0,T]$. We define $g:\mathbb{R}^d \times [0,T] \to \mathbb{R}$ as $g:= -\frac{\phi _t}{|\nabla \phi|} + \frac{\Delta \phi}{|\nabla \phi|} - \frac{((D^2 \phi)\nabla \phi) \cdot \nabla \phi}{|\nabla \phi|^3} $, and $\Omega_0:=\{ \phi(x,0)<0 \}$. Then, the moving domain $\Omega (t):= \{ x \in \mathbb{R}^d \mid \phi(x,t) < 0 \}$ satisfies $V_n = - \kappa + g$ on $\Gamma(t)$, $t \in [0,T]$, and $\Omega(0)=\Omega_0$. \end{proposition} \begin{proof} The proposition easily follows from a straightforward computation of $V_n$, ${\nu}$, and the mean curvature $\kappa$ in terms of the level set function $\phi$. \end{proof} We now examine the EOC of the scheme when applied to the mean curvature flow problem using Proposition \ref{prop:MS-CF}. In this experiment, the domains are initially discretized with a uniform mesh size of width $h \approx 100 \times \tau$, and we set $\tau = 1/(100 \cdot 2^m)$, where $m=0,1,\ldots,5$. The results are depicted in Fig. \ref{fig:Fig11}. We observe from Fig. \ref{fig:Fig11a} an EOC of order one for $\tau$ against the boundary error ${\rm err}_{\Gamma}$. On the other hand, it seems that, for $h \approx 100 \times \tau$, we only have a sub-linear EOC with respect to $\varepsilon$ against ${\rm err}_{\Gamma}$. In fact, the plot shown in Fig. \ref{fig:Fig11b} shows that the behavior due to the change of $\varepsilon$ is similar to Fig. \ref{fig:Fig5b}. This is because CMM is an explicit method, and since the right side of the variational problem \eqref{curvatureflow_weakform} contains the mean curvature term, which involves second-order derivatives, the time step size $\tau$ must be well below $h$ to stabilize the numerical computation. In relation to this, notice in Fig. \ref{fig:Fig11a} that there is no corresponding error value for $\tau = 1/(100 \cdot 2^5)$ in the case $\varepsilon = 10^{-3}$. This is because the scheme becomes unstable after several time steps under this set of parameter values, causing the algorithm to stop. For these reasons, we perform another experiment where $h \approx 200 \times \tau$ and consider different values for $\tau$. The results are summarized in Fig. \ref{fig:Fig12}, where we now observe an almost linear convergence behavior of the scheme with respect to $\varepsilon$ against the boundary error ${\rm err}_{\Gamma}$, as is evident in Fig. \ref{fig:Fig12b}. However, for small time steps, ${\rm err}_{\Gamma}$ is already saturated for $\tau$ of magnitude around or less than $10^{-3}$, as evident in Fig. \ref{fig:Fig12a}. Nevertheless, the error values become smaller, which implies that the numerical solution is improved by taking sufficiently small time steps. 
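As a quick sanity check of Proposition \ref{prop:MS-CF}, the formula for $g$ can be verified symbolically on a simple level-set function of our own choosing, say $\phi(x,t) = x_1^2 + x_2^2 - R(t)^2$ with $R(t) = 1 + t/2$: the moving boundary is then a circle of radius $R(t)$ with $V_n = R'(t)$ and $\kappa = 1/R(t)$, so the proposition forces $g = R'(t) + 1/R(t)$ on $\Gamma(t)$. A short Python (SymPy) sketch confirming this reads:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)
t = sp.symbols('t', positive=True)
theta = sp.symbols('theta', real=True)

R = 1 + t/2                # radius of the expanding circle
phi = x**2 + y**2 - R**2   # level-set function, Omega(t) = {phi < 0}

grad = sp.Matrix([phi.diff(x), phi.diff(y)])
ng = sp.sqrt(grad.dot(grad))           # |grad phi|
lap = phi.diff(x, 2) + phi.diff(y, 2)  # Delta phi
hess = sp.hessian(phi, (x, y))         # D^2 phi

# g = -phi_t/|grad phi| + Delta phi/|grad phi|
#     - ((D^2 phi) grad phi . grad phi)/|grad phi|^3
g = -phi.diff(t)/ng + lap/ng - (hess*grad).dot(grad)/ng**3

# restrict to Gamma(t) by substituting x = R cos(theta), y = R sin(theta)
g_on_gamma = g.subs({x: R*sp.cos(theta), y: R*sp.sin(theta)})

# expected value on Gamma(t): g = R'(t) + 1/R; the difference must vanish
print(sp.simplify(g_on_gamma - (R.diff(t) + 1/R)))   # -> 0
\end{verbatim}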
\begin{figure}[htbp] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig11a}} \caption{$\tau$ vs $\operatorname{err}_{\Gamma}$ } \label{fig:Fig11a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig11b}} \caption{$\varepsilon$ vs $\operatorname{err}_{\Gamma}$ } \label{fig:Fig11b} \end{subfigure}% \caption{Errors of convergence when $h \approx 100 \times \tau$} \label{fig:Fig11} \end{figure} \vskip-15pt \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig12a}} \caption{$\tau$ vs $\operatorname{err}_{\Gamma}$ } \label{fig:Fig12a} \end{subfigure}% \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \resizebox{\textwidth}{!}{\includegraphics{fig12b}} \caption{$\varepsilon$ vs $\operatorname{err}_{\Gamma}$ } \label{fig:Fig12b} \end{subfigure}% \caption{Errors of convergence when $h \approx 200 \times \tau$} \label{fig:Fig12} \end{figure} \section{Some Qualitative Properties of CMM} \label{sec:properties} In this section, we state and prove two simple properties of CMM: one related to the convergence of the moving boundary $\Gamma(t)$, $t \geqslant 0$, to a stationary point $\Gamma^\ast$ under a general description of the normal flow $V_{n}$, and another property which we call the $\varepsilon$-approximation property of CMM. In relation to the former result, let us consider the following abstract autonomous moving boundary problem. \begin{prob} \label{prob:abstractMBP} Given the initial profile $\Gamma_0$ and a real-valued function $F(\,\cdot\,;\Gamma):\Gamma \to \mathbb{R}$, find a moving surface $\Gamma(t)$, $t\geqslant 0$, which satisfies \begin{equation} \label{eq:abstractMBP} \left\{\arraycolsep=1.4pt\def\arraystretch{1.1} \begin{array}{rcll} V_{n}(x,t) &=& F(x;\Gamma(t)), &\quad x \in \Gamma(t), \quad t \geqslant 0,\\[0.5em] \Gamma(0) &=& \Gamma_0. \end{array} \right. \end{equation} \end{prob} The particular forms of $F(x;\Gamma(t))$ that are of interest here are as follows: \begin{itemize} \item $F(x;\Gamma(t)) = (-\nabla u + \bb{\gamma}) \cdot {\nu} + \lambda$ in \eqref{eq:general_Hele-Shaw}; \vspace{2pt} \item $F(x;\Gamma(t)) = - \kappa$ in \eqref{eq:curvature flow}. \end{itemize} Next, we define a stationary solution to Problem \ref{prob:abstractMBP}. \begin{dfn} A domain $\Omega^*$ is said to be a \textit{stationary solution} to Problem \ref{prob:abstractMBP} if $\Gamma^* = \partial \Omega^*$, and $F(x;\Gamma^*) = 0$ for almost every $x \in \Gamma^*$. \end{dfn} Then, we associate with Problem \ref{prob:abstractMBP} the $\varepsilon$-regularized moving boundary problem given as follows: \begin{prob} \label{prob:epsilon_regularized} Let $B$ and $\Omega$ be two bounded domains with respective Lipschitz boundaries $\partial B$ and $\Gamma:=\partial\Omega$ such that $\overline{B} \subset \Omega$. 
Given the initial profile $\Gamma_0$, a real-valued function $F(\,\cdot\, ;\Gamma) \in L^2(\Gamma)$, and a fixed number $\varepsilon > 0$, we seek to find a moving surface $\Gamma(t)$ which satisfies \begin{equation} \label{eq:epsilon_regularized} \left\{\arraycolsep=1.4pt\def\arraystretch{1.} \begin{array}{rcll} - \Delta {\bb{w}} &=& \bb{0} &\quad \text{in $\Omega(t) \setminus \overline{B}$,\quad $t \geqslant 0$},\\ {\bb{w}} &=& \bb{0} &\quad \text{on $\partial B$},\\ \varepsilon \nabla \bb{w} \cdot \nu + {\bb{w}} &=& F(\,\cdot\, ;\Gamma(t)) {\nu} &\quad \text{on $\Gamma(t)$,\quad $t \geqslant 0$},\\ V_{n} &=& \bb{w} \cdot \nu &\quad \text{on $\Gamma(t)$,\quad $t \geqslant 0$}. \end{array} \right. \end{equation} \end{prob} With respect to Problem \ref{prob:epsilon_regularized}, a stationary solution $\Omega^*$ is defined as follows. \begin{dfn} A domain $\Omega^*$ is said to be a \textit{stationary solution} to Problem \ref{prob:epsilon_regularized} if $\Gamma^* = \partial \Omega^*$, and ${\bb{w}} \in H^1_{\partial B, \bb{0}}(\Omega^\ast\setminus \overline{B};\mathbb{R}^d)$ satisfies the variational equation \begin{align} &\displaystyle \varepsilon \int_{\Omega^\ast \setminus \overline{B}} \nabla {\bb{w}} : \nabla \bb{\varphi} \ {\rm d}x + \int_{\Gamma^\ast} {\bb{w}} \cdot \bb{\varphi}\ {\rm d}s \nonumber\\ &\displaystyle \hspace{0.75in} = \int_{\Gamma^\ast} F(\cdot;\Gamma ^*) {\nu} \cdot \bb{\varphi}\ {\rm d}s, \quad \forall \bb{\varphi} \in H_{\partial B, \bb{0}}^1(\Omega^\ast \setminus \overline{B};\mathbb{R}^d),\label{cmm_weakform}\\ & \text{and}\qquad \bb{w} \cdot \nu = 0 \quad \text{on $\Gamma^\ast$}.\label{cmm_bc} \end{align} \end{dfn} For a Lipschitz domain $\Omega^\ast\setminus \overline{B}$ and $F(\,\cdot\, ;\Gamma) \in L^2(\Gamma)$, the variational problem \eqref{cmm_weakform} can be shown to have a unique weak solution ${\bb{w}} \in H^1(\Omega^\ast\setminus \overline{B};\mathbb{R}^d)$ via the Lax--Milgram lemma. With the above definition of a stationary point, we now state and prove our first result. \begin{proposition} \label{prop:convergence_to_a_stationary_point} We suppose $\Omega^* \supset \overline{B}$, $\Gamma^* = \partial \Omega^*$ is Lipschitz, and $F(\,\cdot\, ;\Gamma) \in L^2(\Gamma)$. Then, the following conditions are equivalent: \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $\Omega^*$ is a stationary solution to Problem \ref{prob:abstractMBP}, \item $\Omega^*$ is a stationary solution to Problem \ref{prob:epsilon_regularized} for any $\varepsilon >0$, \item $\Omega^*$ is a stationary solution to Problem \ref{prob:epsilon_regularized} for some $\varepsilon >0$. \end{enumerate} \end{proposition} \begin{proof} Consider equation \eqref{eq:epsilon_regularized} over the stationary shape $\overline{\Omega^\ast}$ with Lipschitz boundary $\Gamma^\ast$. For the implication $(i) \Rightarrow (ii)$, we assume that $F(\,\cdot\, ;\Gamma^\ast) = 0$ a.e. on $\Gamma^\ast$, and we need to show that $\bb{w} \cdot \nu = 0$ on $\Gamma^\ast$. To do this, we apply integration by parts to \eqref{cmm_weakform}, noting that $\bb{w} = \bb{0}$ on $\partial B$, to obtain \begin{align*} 0 \leqslant \int_{\Gamma^\ast} |\bb{w}|^2 \ {\rm d}s = - \varepsilon \int_{\Gamma^\ast} \ddn{\bb{w}} \cdot \bb{w} \, {\rm d}s = - \varepsilon \int_{\Omega^\ast \setminus \overline{B}} |\nabla \bb{w}|^2 \, {\rm d}x \leqslant 0. \end{align*} Evidently, $\bb{w} \equiv \bb{0}$ on $\overline{\Omega^{\ast}} \setminus B$, and, in particular, $\bb{w}\cdot {\nu} = 0$ on $\Gamma^\ast$. The proof of the implication $(ii) \Rightarrow (iii)$ is trivial. 
Finally, for the implication $(iii) \Rightarrow (i)$, we need to prove that if $\bb{w} \cdot \nu = 0$ on $\Gamma^\ast$, where $\bb{w}$ satisfies the system \eqref{eq:epsilon_regularized} on $\overline{\Omega^\ast} \setminus B$ for some $\varepsilon > 0$, then $F = 0$ on $\Gamma^\ast$. In \eqref{cmm_weakform}, we take $\bb{\varphi} = \bb{w} \in H_{\partial B, \bb{0}}^1(\Omega^\ast \setminus \overline{B};\mathbb{R}^d)$ so that we get \[ \varepsilon \int_{\Omega^\ast \setminus \overline{B}} \nabla {\bb{w}} : \nabla \bb{w} \ {\rm d}x + \int_{\Gamma^\ast} |{\bb{w}}|^2\ {\rm d}s = \int_{\Gamma^\ast} F(\,\cdot\, ;\Gamma^*)\, {\nu} \cdot \bb{w}\ {\rm d}s = 0, \] where the last equality holds because ${\nu} \cdot \bb{w} = 0$ on $\Gamma^\ast$. This implies, obviously, that $\bb{w} \equiv \bb{0}$ on $\overline{\Omega^{\ast}} \setminus B$. Going back to \eqref{cmm_weakform}, we see that $\int_{\Gamma^\ast} F(\,\cdot\, ;\Gamma^*) {\nu} \cdot \bb{\varphi}\ {\rm d}s = 0$, for all $\bb{\varphi} \in H_{\partial B, \bb{0}}^1(\Omega^\ast \setminus \overline{B};\mathbb{R}^d)$, from which we conclude that $F = 0$ on $\Gamma^\ast$. This proves the assertion. \end{proof} In the rest of this section, we want to prove what we call the $\varepsilon$-approximation property of CMM. For this purpose, we again fix $\Omega$ and $B$ and suppose that $\Gamma$ and $\partial B$ are Lipschitz regular. Given a function $\bb{g} : \Gamma \to \mathbb{R}^d$, our main concern is the convergence of its Robin approximation to the original Dirichlet boundary condition in the following Laplace equation with pure Dirichlet boundary conditions: \begin{equation} \label{eq:origsystem} - \Delta {\bb{v}} = \bb{0} \quad \text{in $\Omega \setminus \overline{B}$},\qquad {\bb{v}} = \bb{0} \quad \text{on $\partial B$},\qquad {\bb{v}} = \bb{g} \quad \text{on $\Gamma$}. \end{equation} For given data $\bb{g} \in H^{1/2}(\Gamma; \mathbb{R}^d)$ and a Lipschitz domain $\Omega \setminus \overline{B}$, it can be shown via the Lax--Milgram lemma that the corresponding variational equation of \eqref{eq:origsystem} admits a unique weak solution $\bb{v} \in H^1(\Omega \setminus \overline{B}; \mathbb{R}^d)$. Now, we consider system \eqref{eq:origsystem} and denote its solution, depending on the data $\bb{g}^i \in H^{1/2}(\Gamma;\mathbb{R}^d)$, by $\bb{v}^i:=\bb{v}(\bb{g}^i)$. Also, we define the Dirichlet-to-Neumann map $\Lambda: H^{1/2}(\Gamma;\mathbb{R}^d) \to H^{-1/2}(\Gamma;\mathbb{R}^d)$ by $\Lambda \bb{g} := \nabla \bb{v}(\bb{g}) \cdot \nu$ on $\Gamma$. Then, we have the following lemma whose proof is given in the Appendix. \begin{lemma} \label{lem:inner_product} The map $(\,\cdot\,,\,\cdot\,)_{\Lambda} : H^{1/2}(\Gamma;\mathbb{R}^d) \times H^{1/2}(\Gamma;\mathbb{R}^d) \to \mathbb{R}$ defined as $(\bb{g}^1, \bb{g}^2)_{\Lambda} := (\Lambda\bb{g}^1, \bb{g}^2)_{L^2(\Gamma;\mathbb{R}^d)}$, for $\bb{g}^1, \bb{g}^2 \in H^{1/2}(\Gamma;\mathbb{R}^d)$, is an inner product on $H^{1/2}(\Gamma;\mathbb{R}^d)$, and the induced norm is equivalent to the usual norm on $H^{1/2}(\Gamma;\mathbb{R}^d)$. \end{lemma} Now, for $\varepsilon > 0$ and $\bb{g}\in H^{1/2}(\Gamma;\mathbb{R}^d)$, we define $\bb{g}^{\varepsilon}$ through the relation $\varepsilon \Lambda \bb{g}^{\varepsilon} + \bb{g}^{\varepsilon} = \bb{g}$; that is, we consider the boundary value problem \eqref{eq:origsystem} with $\bb{v}$ and $\bb{g}$ replaced by $\bb{v}_{\varepsilon}$ and $\bb{g}^{\varepsilon}$, respectively, so that $\varepsilon \nabla {\bb{v}_{\varepsilon}} \cdot \nu + \bb{v}_{\varepsilon} = \varepsilon \Lambda \bb{g}^{\varepsilon} + \bb{g}^{\varepsilon} = \bb{g}$ on $\Gamma$, and, instead of the Dirichlet condition, we impose on $\Gamma$ the Robin condition $\varepsilon \nabla {\bb{v}_{\varepsilon}} \cdot \nu + \bb{v}_{\varepsilon} = \bb{g}$. 
More precisely, we consider the mixed Dirichlet-Robin problem \begin{equation} \label{eq:epssystem} - \Delta {\bb{v}_{\varepsilon}} = \bb{0} \quad \text{in $\Omega \setminus \overline{B}$},\quad {\bb{v}_{\varepsilon}} = \bb{0} \quad \text{on $\partial B$},\quad \varepsilon \nabla \bb{v}_{\varepsilon} \cdot \nu + \bb{v}_{\varepsilon} = \bb{g} \quad \text{on $\Gamma$}. \end{equation} Let us define the bilinear form $a^\varepsilon(\,\cdot\,, \, \cdot \,)$ as follows: \[ a^\varepsilon(\bb{\varphi}, \bb{\psi}):= \ _{H^{-1/2}}\langle (\varepsilon \Lambda + \bb{I})\bb{\varphi}, \bb{\psi} \rangle_{H^{1/2}} = \varepsilon(\bb{\varphi}, \bb{\psi})_{\Lambda} + (\bb{\varphi}, \bb{\psi} )_{L^2(\Gamma;\mathbb{R}^d)}. \] Then, we may write a weak formulation on $\Gamma$ for $\bb{g}^\varepsilon$ as follows: find $\bb{g}^\varepsilon \in H^{1/2}(\Gamma;\mathbb{R}^d)$ such that \begin{equation} \label{eq:weak_form_on_gamma} a^\varepsilon(\bb{g}^\varepsilon, \bb{\varphi}) = (\bb{g}, \bb{\varphi})_{L^2(\Gamma;\mathbb{R}^d)},\qquad \text{for all $\bb{\varphi} \in H^{1/2}(\Gamma;\mathbb{R}^d)$}. \end{equation} Again, the existence of a unique weak solution $\bb{g}^\varepsilon\in H^{1/2}(\Gamma;\mathbb{R}^d)$ to the above variational problem can be proven using the Lax--Milgram lemma. We now exhibit our second convergence result in the following proposition, which states the convergence of the Robin approximation to the original Dirichlet data in the $L^2(\Gamma)$ sense as the parameter $\varepsilon$ goes to zero, provided that the Neumann data $\Lambda \bb{g}$ is square integrable. \begin{proposition} \label{prop:epsilon_approximation} Let $\bb{g} \in H^{1/2}(\Gamma;\mathbb{R}^d)$ and $\Gamma$ be Lipschitz regular. If $\Lambda \bb{g} \in L^2(\Gamma;\mathbb{R}^d)$, then the following estimate holds: $\left\| \bb{g}^\varepsilon - \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)} \leqslant \varepsilon \left\| \Lambda \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)}$. \end{proposition} \begin{proof} Taking the test function in \eqref{eq:weak_form_on_gamma} as $\bb{\varphi}:= \bb{g}^\varepsilon - \bb{g} \in H^{1/2}(\Gamma;\mathbb{R}^d)$ gives us the following sequence of equations: $a^\varepsilon(\bb{g}^\varepsilon - \bb{g}, \bb{g}^\varepsilon - \bb{g}) = (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{L^2(\Gamma;\mathbb{R}^d)} - \varepsilon (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{\Lambda} - (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{L^2(\Gamma;\mathbb{R}^d)} = - \varepsilon (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{\Lambda}$. Since $a^\varepsilon(\bb{\varphi}, \bb{\varphi}) \geqslant \left\| \bb{\varphi} \right\|^2_{L^2(\Gamma;\mathbb{R}^d)}$, this gives us the estimate $\left\| \bb{g}^\varepsilon - \bb{g} \right\|^2_{L^2(\Gamma;\mathbb{R}^d)} \leqslant - \varepsilon (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{\Lambda}$. Furthermore, if $\Lambda \bb{g} \in L^2(\Gamma;\mathbb{R}^d)$, then $- \varepsilon (\bb{g}, \bb{g}^\varepsilon - \bb{g})_{\Lambda} = - \varepsilon (\Lambda \bb{g}, \bb{g}^\varepsilon - \bb{g})_{L^2(\Gamma;\mathbb{R}^d)} \leqslant \varepsilon \left\| \Lambda \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)} \left\| \bb{g}^\varepsilon - \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)}$ by the Cauchy--Schwarz inequality, and dividing through by $\left\| \bb{g}^\varepsilon - \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)}$ yields $\left\| \bb{g}^\varepsilon - \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)} \leqslant \varepsilon \left\| \Lambda \bb{g} \right\|_{L^2(\Gamma;\mathbb{R}^d)}$, as desired. \end{proof} \section{Conclusion} \label{sec:conclusion} We have developed a finite element scheme, which we call the `comoving mesh method' (CMM), for solving certain families of moving boundary problems. We applied the proposed scheme to the classical Hele-Shaw problem and to the exterior Bernoulli free boundary problem. In the latter case, we found that the generalized Hele-Shaw problem with normal flow velocity $V_{n} = -\nabla u \cdot {\nu} + \lambda$, where $\lambda <0$, converges to a stationary point which coincides with the optimal shape solution of the said free boundary problem.
We have also demonstrated the applicability of the CMM to a moving boundary problem involving the mean curvature flow equation $V_{n} = -\kappa$. The numerical experiments performed here showed that the experimental order of convergence of the approximate solutions obtained using the CMM is mostly linear for both the Hele-Shaw problem and the mean curvature problem. In the case of the former problem, this linear order of convergence was observed for time step sizes as large as the mesh size. For the mean curvature problem, on the other hand, it was observed that the time step size has to be well below the mesh width for the numerical scheme to be stable, and a (nearly) linear order of convergence of the boundary shape error with respect to the parameter $\varepsilon$ was obtained. Finally, we have also presented two simple properties of the CMM: a characterization of its stationary solutions and a convergence result regarding the $\varepsilon$-approximation of $V_{n}$. In our next investigation, we will apply the method to more general moving boundary problems such as the Stefan problem and the two-phase Navier-Stokes equations. Moreover, we want to treat the Gibbs--Thomson law, which imposes the condition $u=\sigma \kappa$ on the moving boundary.
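As a closing illustration of Proposition \ref{prop:epsilon_approximation}, consider a scalar, radially symmetric model of \eqref{eq:origsystem} on the annulus $\{\rho < |x| < 1\} \subset \mathbb{R}^2$, with $\partial B = \{|x| = \rho\}$ and $\Gamma = \{|x| = 1\}$. There the Dirichlet-to-Neumann map diagonalizes over Fourier modes, $\Lambda e^{ik\theta} = \lambda_k e^{ik\theta}$, with $\lambda_0 = 1/\log(1/\rho)$ and $\lambda_k = |k|(1+\rho^{2|k|})/(1-\rho^{2|k|})$ for $k \neq 0$, so that $\bb{g}^\varepsilon$ is obtained mode by mode from $\widehat{g^\varepsilon}_k = \widehat{g}_k/(1+\varepsilon\lambda_k)$. The following minimal Python sketch (the geometry, the Fourier data of $\bb{g}$, and the truncation order are illustrative choices, not data from our experiments) verifies the estimate:
\begin{verbatim}
# Check ||g_eps - g||_{L2(Gamma)} <= eps * ||Lambda g||_{L2(Gamma)} on an
# annulus rho < r < 1, where the Dirichlet-to-Neumann map Lambda is
# diagonal over Fourier modes (the common 2*pi Parseval factor cancels).
import numpy as np

rho, K = 0.5, 64
k = np.arange(-K, K + 1)

lam = np.empty(k.shape)                  # DtN eigenvalues on Gamma
nz = k != 0
ak = np.abs(k[nz]).astype(float)
lam[nz] = ak * (1 + rho**(2 * ak)) / (1 - rho**(2 * ak))
lam[~nz] = 1.0 / np.log(1.0 / rho)

ghat = np.exp(-np.abs(k) / 4.0)          # Fourier data of a smooth g
for eps in [1e-1, 1e-2, 1e-3]:
    err = np.sqrt(np.sum((eps * lam * ghat / (1 + eps * lam))**2))
    bnd = eps * np.sqrt(np.sum((lam * ghat)**2))
    print(f"eps={eps:.0e}: error={err:.3e} <= bound={bnd:.3e}")
\end{verbatim}
The printed error decreases linearly in $\varepsilon$ and always stays below $\varepsilon \Vert \Lambda \bb{g} \Vert_{L^2(\Gamma)}$, in line with the proposition.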
\section{Introduction} The QCD Multipole Expansion (QCDME) has been widely used to calculate transition rates among heavy quarkonia by pion emission \cite{Yan:1980uh,Kuang:1981se}. Since this theory refers to non-perturbative QCD, it has a limited range of applicability, beyond which it is no longer valid. When the masses of the charmonia (bottomonia) are sufficiently large, beyond the production thresholds of $D^{(*)}\bar D^{(*)}$ ($B^{(*)}\bar B^{(*)}$) which may become on-shell intermediate states, the decay widths evaluated in terms of the QCDME are, as Eichten et al. indicate, smaller than the data by three orders of magnitude \cite{eichten2013}. In other words, the dominant modes of, say, $\Upsilon(nS)\to \Upsilon(mS)+\pi^+\pi^-$ or $\Upsilon(nS)\to \Upsilon(mS)+\pi^0$ can be realized via $\Upsilon(nS)\to B^{(*)}\bar B^{(*)}\to \Upsilon(mS)+\pi^+\pi^-$, which is usually referred to as the final state interaction or re-scattering process. Even though the re-scattering process dominates the transition, the direct pion emission still contributes, and it is evaluated in terms of the QCDME. It is interesting to theoretically estimate how small the contribution of the direct pion emission is in comparison with the dominant one. To this end, we adopt a simple decay mode, i.e.\ we calculate the contribution of direct $\pi^0$ emission to the decay rate of $D^*\to D+\pi^0$. For $D^{*+} \to D^+ \pi^0$, the direct $\pi^0$ emission is OZI suppressed and, moreover, violates isospin conservation. This double suppression implies that the contribution from the direct pion emission must be small. In fact, unless other mechanisms are forbidden for some reason, such as constraints of the available phase space or other symmetries, the direct pion emission cannot make a substantial contribution to the decay rates, as Eichten et al. suggest. To quantitatively confirm Eichten's statement, we use both the $^3P_0$ model and the QCDME to calculate their respective contributions to the decay rate. Our numerical results show that the effective coupling constant $g_{D^*D\pi}$ determined by the QCDME is 60$\sim$70 times smaller than that obtained from quark pair creation (QPC). After the introduction, in section \ref{s2}, we evaluate the contributions to the decay rate of $D^{*+} \to D^+ \pi^0$ from the quark pair creation (QPC) described by the $^3P_0$ model and from the direct pion emission described by the QCDME, in subsections \ref{ss21} and \ref{ss22}, respectively. The numerical results are presented following the formulations in that section, and comparisons with the corresponding experimental data are made. In the final section we discuss the framework in some detail and then draw our conclusion. \section{$D^{*+} \to D^+ \pi^0$ decays\label{s2}} \subsection{The quark pair creation model and its application to $\pi^0$ radiation\label{ss21}} In the framework of the QPC model\cite{Micu,yaouanc,yaouanc-1,yaouanc-book,Beveren,BSG,sb,qpc-1,qpc-2,qpc-90, ackleh,Zou,liu,Close:2005se,lujie,xiangliu-2860,xiangliu-heavy,Li:2008mz}, the decay $D^{*+} \to D^+ \pi^0$ occurs via a quark-antiquark pair created from the vacuum. It is an Okubo-Zweig-Iizuka (OZI) allowed process. The decay mechanism is displayed graphically in Fig.\ref{3p0}. The picture is that many soft gluons are emitted from the quark and anti-quark legs and then convert into a quark-antiquark pair. Equivalently, the physics scenario can be described as a quark pair being excited out of the vacuum.
The $^3P_0$ model has been widely applied to calculate such hadronic strong decays. \begin{figure}[H] \centering \begin{minipage}[!htbp]{0.5\textwidth} \centering \includegraphics[width=0.98\textwidth]{3p0.eps} \caption{The quark-pair creation from the vacuum serves as the decay mechanism for $D^{*+} \to D^+ \pi$. } \label{3p0} \end{minipage} \end{figure} For readers' convenience, we collect the relevant formulations for the calculation in terms of the $^3P_0$ model in the appendix. The transition operator for the quark pair creation reads \begin{eqnarray} T&=& - 3 \gamma \sum_m\: \langle 1\;m;1\;-m|0\;0 \rangle\, \int\!{\rm d}{\textbf{k}}_3\; {\rm d}{\textbf{k}}_4 \delta^3({\textbf{k}}_3+{\textbf{k}}_4) {\cal Y}_{1m}\left(\frac{{\textbf{k}}_3-{\textbf{k}_4}}{2}\right)\; \nonumber\\&&\times\chi^{3 4}_{1, -\!m}\; \varphi^{3 4}_0\;\, \omega^{3 4}_0\; d^\dagger_{3i}({\textbf{k}}_3)\; b^\dagger_{4j}({\textbf{k}}_4)\,, \label{tmatrix} \end{eqnarray} and the hadronic matrix element is determined as \begin{eqnarray} \langle D^{+}\pi^0|S|D^{*+}\rangle=I-i2\pi\delta(E_{\text{final}}-E_{\text{initial}})\langle D^{+}\pi^0|T|D^{*+}\rangle.\label{smatrix} \end{eqnarray} In Eq.~(\ref{tmatrix}), $i$ and $j$ are the SU(3)-color indices of the created quark and anti-quark. $\varphi^{34}_{0}=(u\bar u +d\bar d +s \bar s)/\sqrt 3$ and $\omega_{0}^{34}=\delta_{ij}$ are the flavor and color singlets, respectively. $\chi_{{1,-m}}^{34}$ is the spin wave function. $\mathcal{Y}_{\ell m}(\mathbf{k})\equiv |\mathbf{k}|^{\ell}Y_{\ell m}(\theta_{k},\phi_{k})$ is the $\ell$th solid harmonic polynomial. $\gamma$ is a dimensionless constant which denotes the strength of the quark pair creation from the vacuum and is fixed by fitting data. Following Ref.\cite{Blundell:1995ev}, we take $\gamma=13.4$ in this work. For Eq.~(\ref{smatrix}), the explicit expressions for the wave function of a meson and for the hadronic matrix elements are presented in the appendix. The helicity amplitude $\mathcal{M}^{M_{J_{D^{*+}}}M_{J_{D^{+}}}M_{J_{\pi^0}}}$ of this process can be extracted from the relation $\langle D^{+}\pi^0|S|D^{*+}\rangle=\delta^3(\mathbf{K}_{D^{+}}+\mathbf{K}_{\pi^0}-\mathbf{K}_{D^{*+}}) \mathcal{M}^{M_{J_{D^{*+}}}M_{J_{D^{+}}}M_{J_{\pi^0}}}$. Then, the decay width of the process is written in terms of the helicity amplitude as \begin{eqnarray*} \Gamma=\pi^2\frac{|\mathbf{K}|}{M_{D^{*+}}^2}\frac{1}{2J_{D^{*+}}+1} \sum_{M_{J_{D^{*+}}},M_{J_{D^{+}}},M_{J_{\pi^0}}} \Big|\mathcal{M}^{M_{J_{D^{*+}}}M_{J_{D^{+}}}M_{J_{\pi^0}}}\Big|^2\,, \end{eqnarray*} where we take $\textbf{K}_{D^{+}}=-\textbf{K}_{\pi^0}=\textbf{K}$ in the center of mass frame of the ${D^{*+}}$. Numerically, we take a typical $R$ value for the $D$ meson from Ref.~\cite{Close:2005se} as 2.3 GeV$^{-1}$ and $R=2.1$ GeV$^{-1}$ for the $\pi^0$ from Ref.~\cite{Blundell:1995ev}. With this parameter setup, the decay width of $D^{*+} \to D^+ \pi^0$ is easily obtained as $21.9$ keV. Experimentally, the decay width of this mode is $29.5^{+7.3}_{-7.2}$ keV \cite{Agashe:2014kda}. The consistency of the numerical results evaluated in terms of the $^3P_0$ model with the data indicates that the theoretical framework is well established and applicable to such processes.
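For reference, the two-body kinematics entering the width formula above, together with the effective-coupling extraction carried out in the next paragraphs, can be cross-checked with a few lines of Python. The closed-form width $\Gamma = g^2 m_D^2 |\mathbf{k}|^3/(12\pi m_{D^*}^2 f_\pi^2)$ used in this sketch is our own rearrangement of Eq.~(\ref{decay}) below and the amplitude following it, so its overall normalization should be treated as an assumption to be checked against those equations:
\begin{verbatim}
# Two-body kinematics of D*+ -> D+ pi0 and the coupling g_{D*Dpi}
# extracted from the 3P0 width of 21.9 keV. Units: GeV.
import math

mDst, mD, mpi = 2.010, 1.869, 0.135
fpi = math.sqrt(2.0) * 0.093          # f_pi = sqrt(2) * F_pi

# pion momentum in the D* rest frame
k = math.sqrt((mDst**2 - (mD + mpi)**2)
              * (mDst**2 - (mD - mpi)**2)) / (2 * mDst)
print(f"|k| = {1e3 * k:.1f} MeV")     # ~39 MeV: strong P-wave suppression

Gamma = 21.9e-6                       # 3P0 result quoted above, in GeV
g = math.sqrt(Gamma * 12 * math.pi * mDst**2 * fpi**2 / (mD**2 * k**3))
print(f"g_D*Dpi = {g:.2f}")           # ~0.52
\end{verbatim}
The sketch returns $|\mathbf{k}| \approx 39$ MeV, making the $P$-wave factor $|\mathbf{k}|^3$ explicit, and recovers $g_{D^*D\pi} \approx 0.52$, consistent within rounding with the value $0.51$ quoted below.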
On the other hand, based on the heavy quark effective theory (HQET), one can extract the effective coupling constant of $D^*D\pi$ from the decay width calculated above, which may offer useful information for applications of the effective theory. Following Ref.\cite{Casalbuoni:1996pg}, the relevant effective Lagrangian can be written as \begin{equation} \begin{array}{rl} \mathcal{L}=-\frac{2g_{D^*D\pi}}{f_\pi}D^*_\mu\partial^\mu\frac{\phi_\pi}{\sqrt 2}D^\dag+h.c. \end{array} \end{equation} Then we can get the decay width as \begin{equation} \begin{array}{rl} \Gamma(D^{*+} \to D^+ \pi^0)=\frac{1}{2m_{D^*}}\frac{4\pi|\textbf{k}|}{(2\pi)^2 4m_{D^*}} \frac{|\mathcal{T}|^2}{3}, \end{array} \label{decay} \end{equation} with $|\textbf{k}|=\frac{1}{{2m_{D^*}}}[(m^2_{D^*}-(m_\pi+m_{D})^2)(m^2_{D^*}-(m_\pi-m_{D})^2)]^{1/2}$. The transition amplitude of $D^{*+} \to D^+ \pi^0$ is\cite{Casalbuoni:1996pg} \begin{equation} \begin{array}{rl} \mathcal{T}(D^{*+} \to D^+ \pi^0)&=g_{D^*D\pi} \frac{1}{\sqrt 2}\frac{2m_D}{f_\pi}\textbf{k}\cdot\boldsymbol{\epsilon}, \end{array} \end{equation} where $\boldsymbol{\epsilon}$ is the polarization vector of the $D^*$. From Eq.~(\ref{decay}) we obtain $g_{D^*D\pi}=0.51$. \subsection{The QCDME and the contribution of direct $\pi^0$ emission to the decay width\label{ss22}} It is noted that the pion can also be emitted directly before the $D^*$ transits into the $D$; thus this amplitude, in principle, should be added to that of the process depicted in the above subsection, with which it interferes. By a qualitative analysis, the one-pion emission is an OZI suppressed process and, moreover, violates isospin conservation; therefore it must be much smaller than the vacuum pair-creation contribution. It is obviously interesting to investigate such an effect in other processes as well, i.e.\ to ask, as long as the one-pion emission is not a leading term, how small it is compared with the leading ones. Below, we quantitatively investigate the direct one-pion emission in $D^*\to D+\pi^0$. The corresponding diagrams for the direct pion emission in $D^{*+} \to D^+ \pi^0$, for which the QCDME is responsible, are shown in Fig.\ref{qcdme}. \begin{figure}[H] \centering \begin{minipage}[!htbp]{1\textwidth} \centering \includegraphics[width=0.98\textwidth]{qcdme.eps} \caption{The QCDME diagrams responsible for pion emission in the process $D^{*+} \to D^+ \pi^0$.} \label{qcdme} \end{minipage} \end{figure} The reader should note that Fig.\ref{qcdme} is just obtained by distorting Fig.\ref{3p0}, as shown in Fig.\ref{me3}. \begin{figure}[H] \centering \begin{minipage}[!htbp]{1\textwidth} \centering \includegraphics[width=0.98\textwidth]{me3.eps} \caption{Distortion of the $^3P_0$ decay mechanism for $D^{*+} \to D^+ \pi$ into an OZI suppressed process.} \label{me3} \end{minipage} \end{figure} Here we only draw two gluon field lines; but, as is well understood, in the scenario of the QCDME these lines correspond to fields of $E_n$ or $M_n$ modes, which are by no means free gluons, and a line does not correspond to a single-gluon propagator. Thus each line indeed denotes a collection of many soft gluons, just as shown in Fig.\ref{3p0}. Now let us calculate the rate contributed by the processes shown in Fig.\ref{qcdme} in the framework of the QCDME. The process of directly emitting a soft $\pi^0$ from the $D^*$ in the decay $D^*\to D+\pi^0$ is dominated by a combined E1-M2 transition. This is an OZI suppressed process and violates isospin conservation.
The transition amplitude is\cite{Kuang:2006me} \begin{equation} \begin{array}{rl} \mathcal{M}_{E1M2}=i\frac{g_E g_M}{12m}\sum_{NL} (\frac{\langle\Phi_F|x_i|NL\rangle \langle NL|S_j x_k|\Phi_I\rangle}{M_I-E_{NL}}+ \frac{\langle\Phi_F|S_j x_k|NL\rangle \langle NL|x_i|\Phi_I\rangle}{M_I-E_{NL}})\langle\pi|E^a_i \partial_k B^a_j |0\rangle, \end{array} \end{equation} where $S$ is the spin operator acting on the total spin of the heavy-quark--light-antiquark system, $N$ and $L$ are the principal quantum number and the orbital angular momentum of the intermediate hybrid state, $M_I$ and $E_{NL}$ are the mass of the initial meson $D^*$ and the energy eigenvalues of the hybrid states, and $m$ is the energy scale of the M2 transition, which we set to $m_c$ and $\frac{m_c}{2}$ in our numerical computations. The amplitude reduces to\cite{Kuang:1988bz,Kuang:1990kd} \begin{equation} \begin{array}{rl} \mathcal{M}_{E1M2}=i\frac{g_E g_M}{18m}\sum_{NL} \frac{\int R_F(r) r R_{NL}^*(r)r^2 dr \int R_{NL}^*(r') r' R_I(r')r'^2 dr'}{M_I-E_{NL}}\boldsymbol{\epsilon}_k\langle\pi|E^a_l \partial_l B^a_k |0\rangle, \end{array}\label{ecc} \end{equation} where $\boldsymbol{\epsilon}$ is the polarization vector of the $D^*$, and $R_I$, $R_F$ and $R_{NL}$ are the radial wave functions of the initial, final and intermediate hybrid states, respectively. The radial wave functions are calculated by solving the relativistic Schr\"odinger equation \cite{Liu:2013maa}. The potentials for the initial and final $D^{(*)}$ mesons are taken from Ref.~\cite{Liu:2013maa}, and the potential for the intermediate hybrid states is taken from Ref.~\cite{Ke:2007ih}. The matrix element $\langle\pi|g_E g_M E^a_l \partial_l B^a_k |0\rangle$ is of the form\cite{Kuang:1988bz,Kuang:1990kd} \begin{equation} \begin{array}{rl} \langle\pi|g_E g_M E^a_l \partial_l B^a_k |0\rangle=\frac{1}{12}K_k\frac{g_E g_M }{\alpha_s} \frac{4\pi}{\sqrt 2}\frac{m_d-m_u}{m_d+m_u}f_\pi m_\pi^2, \end{array} \end{equation} where $g_E$ and $g_M$ are the coupling constants for the color electric and color magnetic fields, and $\mathbf{K}$ (with components $K_k$) is the momentum of the $\pi^0$. In order to compare the results with the effective coupling constant $g_{D^*D\pi}$ obtained using the QPC model, the transition amplitude of $D^{*+} \to D^+ \pi^0$ can be rewritten as\cite{Casalbuoni:1996pg} \begin{equation} \begin{array}{rl} \mathcal{M}(D^{*+} \to D^+ \pi^0)&=g^{(ME)}_{D^*D\pi} \frac{2m_D}{f_\pi}\textbf{k}\cdot\boldsymbol{\epsilon}. \end{array} \end{equation} $g^{(ME)}_{D^*D\pi}$ is the effective coupling constant obtained by means of the QCDME; it reads \begin{equation} \begin{array}{rl} g^{(ME)}_{D^*D\pi}=\frac{1}{18 m} f_{1110}^{111} \frac{g_E g_M}{\alpha_s}\frac{\pi}{3\sqrt2}\frac{m_u-m_d}{m_u+m_d} f_\pi m_\pi^2\sqrt{2 m_{D^*}}\sqrt{2m_D} \frac{f_\pi}{2m_D}, \end{array} \end{equation} with \begin{equation} \begin{array}{rl} f_{1110}^{111}=\sum_{NL}\frac{\int R_{F}(r) r R_{NL}^*(r)r^2 dr \int R_{NL}^*(r') r' R_I(r')r'^2 dr'}{M_I-E_{NL}}. \end{array} \end{equation} For our numerical analysis, the input parameters are taken from Ref.\cite{Agashe:2014kda}: $m_{D}=1.869$ GeV, $m_{D^*}=2.010$ GeV, $m_\pi=0.135$ GeV, $m_c=1.800$ GeV, $m_u=0.3$ GeV, $\frac{m_d-m_u}{m_d+m_u}=\frac{1}{3}$, $f_{B^*}=0.230$ GeV, $f_K=0.160$ GeV, $F_\pi=0.093$ GeV and $f_\pi=\sqrt 2F_\pi$; $\alpha_s=0.31$ at $\sqrt s=2.010$ GeV. Following Refs.
\cite{Yan:1980uh,Kuang:1981se,Kuang:1988bz,Kuang:1990kd}, we set $\alpha_E=\frac{g_E^2}{4\pi}$ and $\alpha_M=\frac{g_M^2}{4\pi}$, with $\alpha_E=0.6$, and, to cover a possible error range according to the literature, we let $\alpha_M$ vary from $\alpha_E$ to $10\alpha_E$. The constant $f_{1110}^{111}$ and the effective coupling constant obtained in terms of the QCDME are listed in Tab.\ref{cc}. \begin{center} \begin{table}[h] \begin{tabular}[c]{|l|l|l|l|l|} \hline & $\alpha_M=\alpha_E$ &$\alpha_M=3\alpha_E$ &$\alpha_M=10\alpha_E$&$\alpha_M=30\alpha_E$ \\\hline $f_{1110}^{111}$& 5.677 &9.833 &17.952 &31.094 \\\hline $g^{(ME)}_{D^*D\pi}$ & 0.00145 & 0.00251 &0.00459 &0.00794 \\\hline \end{tabular} \caption{The coupling constant $g^{(ME)}_{D^*D\pi}$ and $f_{1110}^{111}$ (in units of GeV$^{-3}$), with $\alpha_M$ set to $\alpha_E$, $3\alpha_E$, $10\alpha_E$ and $30\alpha_E$ separately; the value of $m$ is set to $\frac{m_c}{2}$.\label{cc}} \end{table} \end{center} In order to explore the possible validity range of the QCDME, we extend the maximum value of $\alpha_M$ to $30\alpha_E$. One can see that even when $\alpha_M$ takes the value $30\alpha_E$, $g^{(ME)}_{D^*D\pi}$ only reaches 0.00794. Within possible errors, this result is at least 60 times smaller than that obtained with the QPC model\cite{Agashe:2014kda,Meng:2008bq}. \begin{center} \begin{table}[h] \begin{tabular}[c]{|l|l|l|l|l|} \hline & $\alpha_M=\alpha_E$ &$\alpha_M=3\alpha_E$ &$\alpha_M=10\alpha_E$&$\alpha_M=30\alpha_E$ \\\hline $\Gamma(D^{*+} \to D^+ \pi^0)$ with $m=\frac{m_c}{2}$&$4.43\times10^{-5}$ &$5.31\times10^{-4}$&$4.44\times10^{-4}$ &$5.31\times10^{-3}$ \\\hline $\Gamma(D^{*+} \to D^+ \pi^0)$ with $m=m_c $&$1.77\times10^{-4}$&$1.33\times10^{-4}$&$1.78\times10^{-3}$&$1.33\times10^{-3}$ \\\hline \end{tabular} \caption{The decay width $\Gamma(D^{*+} \to D^+ \pi^0)$ (in units of keV) from the QCDME contribution, with $\alpha_M$ taking the values $\alpha_E$, $3\alpha_E$, $10\alpha_E$ and $30\alpha_E$, respectively. \label{dec}} \end{table} \end{center} In Tab.\ref{dec} we also list the decay width $\Gamma(D^{*+} \to D^+ \pi^0)$ from the QCDME contribution; from this table we can see that when $\alpha_M$ takes the maximum value $30\alpha_E$, the decay width from the QCDME contribution is $5.31\times10^{-3}$ keV. This result is more than three orders of magnitude smaller than that obtained using the QPC model ($21.9$ keV). \section{Conclusion and discussion} Our numerical results for the decay mode $D^*\to D\pi^0$ confirm that the direct pion emission is not the leading term and that the contribution determined by the QCDME is much smaller than that induced by other mechanisms. Let us try to understand the smallness of the contribution from direct pion emission, namely by using data to show that our estimate is reasonable. There are two suppression factors: the one-pion emission violates isospin conservation, and the QCDME describes an OZI suppressed mechanism. Compared with the vacuum quark pair creation, its contribution must be small, and indeed we find that its contribution to the decay width is of order $10^{-4}\sim 10^{-5}$ keV. As we know, the decay $\psi(2S)\to J/\psi +\pi^0$ is also an isospin-violating and OZI-suppressed process; its branching ratio is $1.27\times 10^{-3}$\cite{Agashe:2014kda}, slightly larger than our estimate for $D^*\to D\pi^0$.
A further suppression factor comes from the fact that the transition $\psi(2S)\to J/\psi +\pi^0$ is an $S$-wave process, whereas $D^{*+} \to D^+ \pi^0$ is a $P$-wave one whose decay width is proportional to $|{\bf k}|^3$, with the three-momentum ${\bf k}$ being small; hence the decay width $\Gamma(D^{*+} \to D^+ \pi^0)$ suffers from an additional $P$-wave suppression. We may also look at $\psi(2S)$ decays as examples of direct $\pi$ emission. The decay $\psi(2S)\to \pi^0+h_c(1P)$ can only occur via a direct pion emission, so it is completely determined by the QCDME mechanism, and its partial width is about 0.26 keV. Since it is an $S$-wave process, it does not suffer from the $P$-wave suppression. Among $\psi(2S)$ decays, the mode $\psi(2S)\to \eta_c+\pi^0$ is not seen, but $\psi(2S)\to\eta_c+\pi^+\pi^-\pi^0$ has been measured, and its branching ratio is not too small ($< 1.0\times 10^{-3}$); that is because the direct emission of three pions does not violate isospin conservation. Equivalently, the QCDME can be replaced by chiral perturbation theory; for example, the transition $\Upsilon(nS)\to \Upsilon(mS)+\pi^+\pi^-$ was studied in terms of the chiral theory in Ref.\cite{Guo:2006ai}. Moreover, we have extended our mechanism to study the non-resonant three-body decays of $B$ mesons, where the weak interaction gets involved. Their contribution has been studied by Cheng et al. in terms of chiral perturbation theory, and we have re-done the evaluation by means of the QCDME; we will present the relevant results in our next work. As a conclusion, we confirm the validity of the QCDME and determine its region of applicability. It is indicated that, since the framework applies only to the direct pion emission, if other mechanisms contribute, such as the quark-pair creation from the vacuum for $D^*\to D+\pi^0$, or $\Upsilon(5S)\to B\bar B^*\to \Upsilon(1S)+\pi^+\pi^-$ where the intermediate $B\bar B^*$ channel is open, i.e.\ the available energy is above the production threshold of $B\bar B^*$, then the direct emission described by the QCDME is no longer the leading term and contributes only a tiny fraction. \section*{Acknowledgments} We greatly benefited from the talk given by Prof. Eichten at Nankai University, and we sincerely thank Prof. H.Y. Cheng for enlightening discussions. This work is supported by the National Natural Science Foundation of China under Grant Number 11375128.
\section{Introduction} \label{intro} The Large Hadron Collider (LHC) at CERN started high energy collisions nine years ago. During this period a large amount of data has been collected for $pp$ collisions at $\sqrt{s}$ = 0.9, 2.76, 7, 8 and 13 TeV, $pPb$ collisions at $\sqrt{s} =$ 5 and 8.2 TeV, as well as $PbPb$ collisions at $\sqrt{s}$ = 2.76 and 5 TeV. Currently, there is a great expectation that the LHC will discover new physics beyond the Standard Model, such as supersymmetry or extra dimensions. However, we should remember that one of the main contributions of the LHC is that it probes a new kinematical regime at high energy, where several questions related to the description of Quantum Chromodynamics (QCD) remain without satisfactory answers. In particular, the study of photon-induced interactions in hadronic collisions at the LHC \cite{upc,review_forward} is expected to constrain the nuclear and nucleon gluon distributions \cite{bert} and the description of the QCD dynamics at high energies \cite{vicmag}, where a hadron becomes a dense system and the nonlinear effects inherent to the QCD dynamics may become visible \cite{hdqcd}. During the last years, the study of these interactions in $pp/pA/AA$ collisions \cite{upc} became a reality \cite{cdf,star,phenix,alice,alice2,alice3,lhcb,lhcb2,lhcb3,lhcbconf}, and new data associated with Run 2 of the LHC are expected to be released soon. A complementary kinematical range can be studied in fixed-target collisions at the LHC. Such an alternative, originally proposed in Ref. \cite{after}, is expected to probe, for example, nucleon and nuclear matter in the domain of high Feynman $x_F$, the transverse spin asymmetries in Drell-Yan and quarkonium production, as well as the Quark Gluon Plasma formation in the energy and density range between the SPS and RHIC experiments \cite{after2}. The basic idea of the AFTER@LHC experiment is to develop a fixed-target programme using the proton and heavy ion beams of the LHC, extracted by a bent crystal, to collide with a fixed proton or nuclear target, reaching high luminosities when a target with a high density is considered. The typical energies that are expected to be reached in this experiment are $\sqrt{s} \approx 110$ GeV for $pA$ collisions and $\sqrt{s} \approx 70$ GeV for $PbA$ collisions. Very recently, the study of fixed-target collisions at the LHC became a reality with the injection of noble gases ($He, \, Ne, \, Ar$) into the LHC beam pipe by the LHCb Collaboration \cite{lhcfixed}, using the System for Measuring Overlap with Gas (SMOG) device \cite{smog}. The typical fixed-target $pA$ and $PbA$ configurations that have already been run are $pAr$ collisions at $\sqrt{s} = 69$ GeV, $pNe$ at $\sqrt{s} = 87$ GeV, $pHe/\, pNe/\, pAr$ at $\sqrt{s } = 110$ GeV, $Pb Ne$ at $\sqrt{s} = 54$ GeV and $Pb Ar$ at $\sqrt{s} = 69$ GeV. The associated experimental results are expected to improve our understanding of the nuclear effects present in $pA$ collisions \cite{lhcbD} and, in the particular case of $pHe$ collisions, to shed light on antiproton production (see e.g. Ref. \cite{antiproton}). The study of photon-induced interactions is also expected to be possible in fixed-target collisions \cite{after}. As demonstrated in Refs.
\cite{lands_dilepton,vicodderon2}, the analysis of the exclusive dilepton and $\eta_c$ photoproduction in ultraperipheral collisions at AFTER@LHC can be useful to probe the inner hadronic structure and the existence of the Odderon, which is one of the main open questions of the theory of strong interactions \cite{review_odderon}. Assuming $\sqrt{s} = 115/\,72/\,72$ GeV for $pp/Pbp/PbPb$ collisions, the typical values for the maximum photon-hadron and photon-photon center-of-mass energies are $\sqrt{s_{\gamma h}} \lesssim 44/\, 12/\, 9$ GeV and $\sqrt{s_{\gamma \gamma}} \lesssim 17/\,2.0/\,1.0$ GeV, respectively. Therefore, fixed-target collisions allow us to probe photon-induced interactions in a limited energy range, dominated by low-energy interactions. Such analyses can be considered complementary to those performed in the collider mode, where the maximum energies can reach values of ${\cal{O}}(TeV)$ and the cross sections receive contributions from low- and high-energy $\gamma \gamma$ and $\gamma h$ interactions \cite{upc}. In this paper we study, for the first time, the exclusive vector meson photoproduction in fixed-target collisions. In particular, we estimate the rapidity and transverse momentum distributions for the exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in $pA$ and $PbA$ collisions at the energies and configurations considered by the LHCb experiment. Our goal in this exploratory study is to provide a first estimate of the total cross sections and associated distributions that can be measured at the LHC, taking into account some realistic experimental requirements on the final states. In our analysis we use the STARlight Monte Carlo \cite{starlight} to treat the vector meson photoproduction and its decay. Recent results demonstrate that this Monte Carlo is able to successfully describe the main aspects of the photon-induced interactions at the LHC (see e.g. Refs. \cite{alice,alice2,lhcb,lhcb2,lhcb3}). We postpone to a future publication a more detailed discussion of the description of the vector meson production in the soft and/or low-energy regimes probed in fixed-target collisions (see e.g. Ref. \cite{vicmagvector}). Before presenting our results in Section \ref{res}, we present in the next section a brief review of photon-induced interactions and their description in the STARlight MC. Finally, in Section \ref{conc} we summarize our main conclusions and results. \begin{figure}[t] \includegraphics[scale=0.3]{vm_fixedtarget.eps} \caption{Vector meson photoproduction in a hadronic $h_1 h_2$ collision. The vertical blue ellipse represents the strong interaction between the vector meson and the hadronic target. } \label{diagram} \end{figure} \section{Formalism} \label{form} The basic idea in photon-induced processes is that an ultrarelativistic charged hadron (proton or nucleus) gives rise to strong electromagnetic fields, such that a photon stemming from the electromagnetic field of one of the two hadrons can either interact with a photon of the other hadron (photon-photon process) or can interact directly with the other hadron (photon-hadron process) \cite{upc}. In these processes the total cross section can be factorized in terms of the equivalent flux of photons of the incident hadrons and the photon-photon or photon-target cross section. In what follows we focus on the exclusive vector meson production in photon-hadron interactions. In this process, represented in Fig.
\ref{diagram}, the topology of the final state will be characterized by two empty regions in pseudorapidity, called rapidity gaps, separating the intact hadrons from the vector meson. These events will also be characterized by a low hadronic multiplicity and one vector meson with small transverse momentum. Such characteristics can be used, in principle, to separate the photon-induced processes from the inelastic ones \cite{review_forward}. Our focus will be on ultraperipheral collisions (UPCs), characterized by large impact parameters ($b > R_{h_1} + R_{h_2}$), in which the photon-induced interactions become dominant. In this case, the cross section for the exclusive vector meson photoproduction in photon-induced interactions can be expressed as \begin{widetext} \begin{equation} \sigma(h_1 + h_2 \rightarrow h_1 \otimes V \otimes h_2;\,s) = \int d\omega \,\, n_{h_1}(\omega) \, \sigma_{\gamma h_2 \rightarrow V \otimes h_2}\left(W_{\gamma h_2} \right) + \int d \omega \,\, n_{h_2}(\omega) \, \sigma_{\gamma h_1 \rightarrow V \otimes h_1}\left(W_{\gamma h_1} \right)\, \; , \label{eq:sigma_pp} \end{equation} \end{widetext} where $\sqrt{s}$ is the center-of-mass energy of the $h_1 h_2$ collision ($h_i$ = p,A), $\otimes$ represents the presence of a rapidity gap in the final state, $\omega$ is the energy of the photon emitted by the hadron and $n_h$ is the equivalent photon flux of the hadron $h$ integrated over the impact parameter. Moreover, $\sigma_{\gamma h \rightarrow V \otimes h}$ describes the vector meson production in photon-hadron interactions. In the STARlight MC, the photon spectrum is calculated as follows \cite{klein,starlight}: \begin{eqnarray} n(\omega) = \int \mbox{d}^{2} {\mathbf b} \, P_{NH} ({\mathbf b}) \, N\left(\omega,{\mathbf b}\right) \,\,, \end{eqnarray} where $P_{NH} ({\mathbf b})$ is the probability of not having a hadronic interaction at impact parameter ${\mathbf b}$, and the number of photons per unit area, per unit energy, derived assuming a point-like form factor, is given by \begin{equation} N(\omega,{\mathbf b}) = \frac{Z^{2}\alpha_{em}}{\pi^2} \frac{\omega}{\gamma^{2}} \left[K_1^2\,({\zeta}) + \frac{1}{\gamma^{2}} \, K_0^2({\zeta}) \right]\, \label{fluxo} \end{equation} where $\zeta \equiv \omega b/\gamma$ and $K_0(\zeta)$ and $K_1(\zeta)$ are the modified Bessel functions. Different models for $P_{NH} ({\mathbf b})$ are assumed for $pp$, $pA$ and $AA$ collisions (for details see Ref. \cite{starlight}). Additionally, the description of the cross section for the $\gamma h \rightarrow V \otimes h$ process depends on whether the target is a proton or a nucleus. In the proton case, the vector meson photoproduction is described by a parametrization inspired by Regge theory, given by \begin{eqnarray} \sigma_{\gamma p \rightarrow V \otimes p} = \sigma_{{I\!\!P}} \times W_{\gamma p}^{\epsilon} + \sigma_{{I\!\!R}} \times W_{\gamma p}^{\eta} \,\,, \label{sig_gamp} \end{eqnarray} where the first term is associated with Pomeron exchange, dominant at high energies, and the second one with Reggeon exchange, which describes the behavior of the cross section at low energies. For $\rho$ and $\omega$ production both terms contribute, with $\epsilon = 0.22$ and $\eta$ negative, in the range $-1.9 \le \eta \le -1.2$. Therefore, the cross section for light mesons rises slowly with increasing $W_{\gamma p}$.
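The interplay between the two terms of Eq. (\ref{sig_gamp}) can be made concrete with a few lines of Python. In the sketch below, the normalizations $\sigma_{{I\!\!P}}$ and $\sigma_{{I\!\!R}}$ are illustrative assumptions of the typical size used for the $\rho$, and $\eta$ is fixed at $-1.2$; only the $W_{\gamma p}$ dependence matters here:
\begin{verbatim}
# Energy dependence of Eq. (sig_gamp) for a light vector meson:
# sigma = sig_P * W^eps + sig_R * W^eta. The normalizations are
# assumed (illustrative); the exponents are those quoted in the text.
eps, eta = 0.22, -1.2
sig_P, sig_R = 5.0, 26.0                  # microbarn (assumption)

for W in [2.0, 5.0, 10.0, 20.0, 44.0]:    # GeV
    pom, reg = sig_P * W**eps, sig_R * W**eta
    print(f"W = {W:5.1f} GeV: Pomeron = {pom:5.2f} ub, Reggeon = {reg:5.2f} ub")
\end{verbatim}
Near the production threshold the Reggeon term dominates and falls quickly, while for $W_{\gamma p}$ of a few tens of GeV the slowly rising Pomeron term takes over, which is the energy dependence described above.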
On the other hand, the $J/\Psi$ production is assumed to be described only by the Pomeron contribution, with $\epsilon = 0.65$, and the cross section is supplemented by a factor that accounts for its behavior at energies near the production threshold. The free parameters in the parametrization are fitted using the HERA data \cite{hera}. In the nuclear case, STARlight allows one to estimate the vector meson photoproduction in incoherent ($\gamma A \rightarrow V A^*$) and coherent ($\gamma A \rightarrow V A$) photon-nucleus interactions. In what follows we present our results for the coherent production, where the cross section is determined using a classical Glauber approach \cite{klein,glauber}. In this case the coherent cross section is given by \begin{widetext} \begin{eqnarray} \sigma(h_1 h_2 \rightarrow h_1 \otimes V \otimes h_2) = \int_0^{\infty} d\omega \,n_{h_1}(\omega) \, \int_{t_{min}}^{\infty} dt \, \frac{d\sigma(\gamma h_2 \rightarrow V h_2)}{dt}|_{t=0} \,|F(t)|^2 + [h_1 \leftrightarrow h_2]\,\, \label{star1} \end{eqnarray} \end{widetext} where $F(t)$ is the nuclear form factor and $t_{min} = (M_V^2/4 \omega \gamma)^2$. For heavy ions, the form factor is assumed to be the convolution of a hard sphere potential with a Yukawa potential of range 0.7 fm. Such a form factor implies the presence of diffractive minima when $t$ is a multiple of $(\pi/R_A)^2$, if nuclear shadowing and saturation effects are negligible. The differential cross section for a photon-nucleus interaction is determined using the optical theorem and the generalized vector dominance model (GVDM) \cite{sakurai}: \begin{eqnarray} \frac{d\sigma(\gamma A \rightarrow V A)}{dt}|_{t=0} = \frac{\alpha \sigma^2_{tot}(VA)}{4 f_v^2} \,\,, \label{gvdm} \end{eqnarray} where $f_v$ is the vector meson-photon coupling, and the total cross section for the vector meson-nucleus interaction is found using the classical Glauber calculation \begin{eqnarray} \sigma_{tot}(VA) = \int d^2\mbox{\boldmath $b$} \{ 1 - \exp[-\sigma_{tot}(Vp)T_{AA}(\mbox{\boldmath $b$})]\} \label{glauber} \end{eqnarray} with $\sigma_{tot}(Vp)$ being determined by $\sigma(\gamma p \rightarrow Vp)$ [see Eq. (9) in Ref. \cite{klein}] and $T_{AA}$ the overlap function at a given impact parameter $\mbox{\boldmath $b$}$. Eq. (\ref{glauber}) is denoted classical since it disregards the fact that at high energies the wave packet that propagates through the nucleus can be different from the projectile wave function, and nondiagonal $V - V^{\prime}$ terms can contribute to the total $VA$ cross section. Such corrections were estimated originally in Ref. \cite{frank02}, which demonstrated that the quantum calculation implies higher cross sections. Recent results \cite{alice3} indicate that the classical Glauber approach gives a better description of the experimental data for the $\rho$ photoproduction in ultraperipheral collisions. The discrepancy between the quantum calculation and the data is explained as being associated with the presence of nuclear shadowing effects \cite{frank16}. Consequently, the classical calculation can still be considered a good first approximation to estimate the magnitude of the vector meson photoproduction cross sections. Finally, it is important to emphasize that STARlight assumes a dipole form factor for a proton target. Such a form factor implies a $t$ spectrum that decreases with increasing $t$, without the presence of diffractive dips. Such behavior differs from the results obtained in Refs.
\cite{armesto,diego} using phenomenological models based on saturation physics, which predict that dips should be present in the spectrum. \begin{widetext} \begin{center} \begin{table}[t] \begin{tabular}{|c|c|c|c|c|} \hline Final State & p-Ar & p-He & Pb-Ar & Pb-He\tabularnewline \hline \hline $\rho^{0}\rightarrow\pi^{+}\pi^{-}$ & 318.60 (16.50) $\mu b$ & 6.97 (1.09) $\mu b$ & 42.50 (24.50) mb & 5.60 (2.44) mb\tabularnewline \hline $\omega\rightarrow\pi^{+}\pi^{-}$ & 1160.12 (30.71) nb & 21.86 (2.29) nb & 76.32 (46.21) $\mu b$ & 12.81 (5.35) $\mu b$\tabularnewline \hline $J/\psi\rightarrow\mu^{+}\mu^{-}$ & 3.88 (0.14) nb & 118.41 (14.29) pb & 88.67 (39.68) nb & 13.31 (7.15) nb\tabularnewline \hline \hline \end{tabular} \caption{Total cross sections for the exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in fixed-target collisions at the LHC, considering $pA$ ($Pb A$) collisions at $\sqrt{s} = 110 \,(69)$ GeV. The predictions obtained assuming the LHCb requirements are presented in parentheses.} \label{table:XSeC} \end{table} \end{center} \end{widetext} \begin{figure}[t] \begin{tabular}{cc} \includegraphics[scale=0.35]{rapidityVM_rho0_110GeV_pAr.eps} & \includegraphics[scale=0.35]{ptpair_rho0_110GeV_pAr.eps} \end{tabular} \caption{Rapidity (left panel) and transverse momentum (right panel) distributions for the exclusive $\rho$ photoproduction in $pAr$ collisions at $\sqrt{s} = 110$ GeV. The predictions associated with $\gamma p$ and $\gamma Ar$ interactions are presented separately, as well as the sum of both contributions. The $\gamma$ in parentheses indicates the particle that is the source of the photons.} \label{fig:rho} \end{figure} \section{Results} \label{res} In what follows we present our estimates for the exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in fixed-target $pA$ and $Pb A$ collisions at $\sqrt{s} = 110$ GeV and 69 GeV, respectively, assuming $A = He, Ar$. The masses and widths are the standard Particle Data Group values. In the case of $\rho$ and $\omega$ production we present our predictions taking into account the decay of these vector mesons into $\pi^+ \pi^-$ pairs. For $J/\Psi$ production, we present our predictions considering its decay into a $\mu^+ \mu^-$ pair. Finally, we consider the full LHC kinematical range as well as the kinematical range probed by the LHCb detector. In the latter case, we select the events in which the $\pi^+ \pi^-$ and $\mu^+ \mu^-$ pairs are produced in the pseudorapidity range $2 \le \eta \le 5$ with $p_T \ge 0.2$ GeV. Our predictions for the total cross sections are presented in Table \ref{table:XSeC}. In our analysis we consider that both initial state particles can be the source of the photons that generate the photon-hadron interactions. As expected from previous studies of the exclusive vector meson photoproduction (see e.g. Refs. \cite{vicmag,brunoall}), the cross sections decrease with increasing vector meson mass, being substantially larger for $\rho$ production. Moreover, the cross sections in $PbA$ collisions are enhanced by the $Z^2$ factor present in the nuclear photon flux. Our predictions for the light meson production are approximately three orders of magnitude smaller than those expected in $pPb$ and $PbPb$ collisions at Run 2 of the LHC in the collider mode \cite{brunoall}.
In the $J/\Psi$ case, the predictions are smaller by four orders of magnitude, which is directly associated with the fact that $\sigma(\gamma h \rightarrow J/\Psi \otimes h)$ has a steeper energy dependence than the cross section for light meson production ($\epsilon_{J/\Psi} = 0.65 \gg \epsilon_{\rho,\omega} = 0.22$) and that in the collider mode a larger range of values of $W_{\gamma h}$ contributes to the hadronic cross section. In comparison with the predictions for RHIC energies \cite{vicmag}, our results are smaller by approximately two orders of magnitude. However, as already emphasized above, the fixed-target collisions at the LHC are expected to reach high luminosities [${\cal{O}}(100 \, nb^{-1})$ per year], which implies that approximately $16\times 10^9$ ($34000$) events per year will be associated with a $\rho$ ($J/\Psi$) produced in an exclusive photon-hadron interaction. As a consequence, the experimental analysis of this process in fixed-target collisions at the LHC is, in principle, feasible. Another important aspect is that if we assume the LHCb requirements for the selection of an exclusive process, the impact on the total cross sections for $PbA$ collisions is small (see Table \ref{table:XSeC}), which implies that the LHCb detector is ideal for the study of this process. In contrast, for $pA$ collisions, the LHCb requirements have a large impact on the predictions. In order to understand this result, in Fig. \ref{fig:rho} (left panel) we present the rapidity distribution for the exclusive $\rho$ photoproduction in $pAr$ collisions at $\sqrt{s} = 110$ GeV. The contributions associated with $\gamma p$ and $\gamma Ar$ interactions are presented separately, as well as the sum of both. At small rapidities ($y_{\pi^+ \pi^-} \le 3.0$) the distribution is dominated by $\gamma Ar$ interactions, with the photon coming from the proton. This contribution decreases at large rapidities. On the other hand, for $y_{\pi^+ \pi^-} > 3.0$, the $\gamma p$ interactions, present when the photon is emitted by the nucleus, become dominant, and the maximum occurs at very forward rapidities, beyond those probed by the LHCb detector. As a consequence, the LHCb requirements have a large impact on the prediction of the total cross section in $pAr$ collisions. Moreover, as the $\gamma p$ contribution increases with $Z^2$, the impact of these requirements is larger in $pAr$ than in $pHe$ collisions, as verified in Table \ref{table:XSeC}. For completeness, in Fig. \ref{fig:rho} (right panel) we also present our predictions for the transverse momentum distribution. We can see that this distribution is also dominated by $\gamma p$ interactions. Such behavior is expected, since the $t$-behavior of the $\gamma Ar$ interactions is defined by the nuclear form factor, while in the $\gamma p$ case the behavior is determined by the nucleon form factor. Due to the difference between these quantities, a narrower transverse momentum distribution is expected in $\gamma Ar$ than in $\gamma p$ interactions. We have verified that similar conclusions are obtained from the analysis of the exclusive $\omega$ and $J/\Psi$ photoproduction in $pA$ collisions. In what follows we will only present the sum of the $\gamma p$ and $\gamma A$ contributions.
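The form factor argument above can be made quantitative with a short numerical sketch. Below, the nuclear form factor is the normalized convolution of a hard sphere of radius $R_A$ with a Yukawa potential of range 0.7 fm, as described in the Formalism section; the radius parametrization $R_A = 1.2\, A^{1/3}$ fm and the dipole scale $0.71$ GeV$^2$ for the proton are standard assumptions of ours, not values read off the STARlight code:
\begin{verbatim}
# Compare the normalized squared form factors that shape the coherent
# p_T spectra: hard sphere (x) Yukawa for argon vs. a dipole for the
# proton. R_A = 1.2 A^(1/3) fm and the 0.71 GeV^2 dipole scale are
# standard assumptions, not values extracted from STARlight.
import numpy as np

hbarc = 0.1973                         # GeV fm
A, a = 40, 0.7                         # argon; Yukawa range in fm
R = 1.2 * A**(1.0 / 3.0)               # fm

def F_nucleus(q):                      # q in GeV, normalized so F(0) = 1
    x = q * R / hbarc
    return 3 * (np.sin(x) - x * np.cos(x)) / x**3 / (1 + (a * q / hbarc)**2)

def F_proton(q):                       # dipole form factor
    return 1.0 / (1 + q**2 / 0.71)**2

for pT in [0.02, 0.05, 0.10, 0.20, 0.40]:   # GeV
    print(f"pT = {pT:4.2f} GeV:  |F_A|^2 = {F_nucleus(pT)**2:9.2e}"
          f"   |F_p|^2 = {F_proton(pT)**2:5.3f}")
\end{verbatim}
The nuclear $|F|^2$ collapses for $p_T \gtrsim \hbar c/R_A \approx 50$ MeV, while the proton dipole remains of order one up to several hundred MeV, illustrating why the coherent $\gamma Ar$ spectrum is much narrower than the $\gamma p$ one.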
\begin{widetext} \begin{figure}[t] \begin{tabular}{ccc} \includegraphics[scale=0.3]{ptpair_rho0_110GeV_LHCb.eps} & \includegraphics[scale=0.3]{ptpair_omega_110GeV_LHCb.eps} & \includegraphics[scale=0.3]{ptpair_Jpsi_110GeV_LHCb.eps} \end{tabular} \caption{Transverse momentum distributions for the exclusive vector meson photoproduction in $pAr$ and $PbAr$ collisions at $\sqrt{s} = 110$ and 69 GeV, respectively. The predictions obtained assuming the LHCb requirements are also presented for comparison. } \label{fig:pt} \end{figure} \end{widetext} In Fig. \ref{fig:pt} we present our predictions for the transverse momentum distributions associated with the exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in $pAr$ and $PbAr$ collisions at $\sqrt{s} = 110$ and 69 GeV, respectively. For comparison, we also present the predictions obtained assuming the LHCb requirements. The meson $p_T$ spectrum is determined by the sum of the photon momentum and the momentum exchanged in the interaction between the vector meson and the target. The photon momentum is defined by the equivalent photon approximation, while the exchanged one is determined, in the case of coherent interactions, by the form factor of the target. As pointed out in Ref. \cite{klein_int}, in the case of symmetric collisions ($h_1 = h_2$) the overall $p_T$ spectrum is also affected by interference between the two production sources. Such an effect is not present in our case, since we are only considering asymmetric collisions ($h_1 \neq h_2$). As both the photon and scattering transverse momenta are small, we expect the meson $p_T$ spectrum to be dominated by small values of transverse momentum, being strongly suppressed at large $p_T^2$. This behavior is observed in Fig. \ref{fig:pt}. Considering the predictions for $pAr$ collisions, as demonstrated before, the $p_T$ distribution is determined by $\gamma p$ interactions, with the photon coming from the nucleus. As a consequence, the associated predictions for the production of the different vector mesons are distinct from those obtained in the case of $PbAr$ collisions, which are determined by $\gamma A$ interactions. In the $pAr$ case, we predict wider distributions. In the case of $PbAr$ collisions, the $p_T$-behavior of the distributions is determined by the nuclear form factor $F(t)$ [see Eq. (\ref{star1})], which implies a faster decrease at large transverse momenta in comparison with the proton case. The main impact of the LHCb requirements is a modification of the normalization of the distributions. In Fig. \ref{fig:rap} we present our predictions for the rapidity distributions. As we are considering collisions of non-identical hadrons, the magnitudes of the photon fluxes associated with the two incident hadrons are different, which implies asymmetric rapidity distributions. Moreover, in contrast to the predictions for the collider mode, where the maximum of the distributions occurs at central rapidities ($y \approx 0$), in fixed-target collisions the maximum is shifted to forward rapidities. In particular, it occurs in the kinematical range probed by the LHCb detector in the case of $PbAr$ collisions. As discussed before, this result explains the small impact of the LHCb requirements on the predictions of the total cross sections for $PbAr$ collisions presented in Table \ref{table:XSeC}. In the case of light meson production, the rapidity distribution is determined by the Reggeon and Pomeron contributions [see Eq.
(\ref{sig_gamp})], with the cross section being determined by the Reggeon contribution at low energies near the production threshold. Such energies are probed at small ($y \le 2$) and large ($y \ge 7$) rapidities. Our results indicate that the Reggeon contribution will be strongly reduced in the kinematical range probed by the LHCb detector. The absence of a reggeonic term and the higher threshold in the $J/\Psi$ case imply a narrower rapidity distribution. On the other hand, in the case of $pAr$ collisions, the maximum of the distributions occurs at very forward rapidities, beyond those probed by the LHCb. Although the LHCb requirements have a large impact on the predictions, in particular for larger nuclei, the values presented in Table \ref{table:XSeC} indicate that the analysis of the exclusive vector meson photoproduction in $pA$ collisions is still feasible. Finally, our results indicate a faster increase of the rapidity distribution for $J/\Psi$ production with increasing $y$ in the region $y\le 4$, in comparison with that predicted for the other mesons. Such behavior is directly associated with the steeper energy dependence of the $\gamma p \rightarrow J/\Psi p$ cross section, which can be directly probed by the LHCb detector. \section{Summary} \label{conc} During the last years, the experimental results from the Tevatron, RHIC and the LHC have demonstrated that the study of hadronic physics using photon-induced interactions in $pp/pA/AA$ colliders is feasible and provides important information about the QCD dynamics and vector meson production. In this paper, we have complemented previous analyses of the exclusive vector meson photoproduction by considering, for the first time, the possibility of studying this process in fixed-target collisions at the LHC. Our analysis has been motivated by the proposal of the AFTER@LHC experiment and by the study of beam-gas interactions recently performed by the LHCb detector. We have estimated the total cross sections and the rapidity and transverse momentum distributions for the exclusive $\rho$, $\omega$ and $J/\Psi$ photoproduction in $pA$ and $PbA$ collisions at $\sqrt{s} = 110$ and $69$ GeV using the STARlight Monte Carlo, which allowed us to take into account some typical LHCb requirements for the selection of exclusive events. Our results indicate that the experimental analysis of this process is, in principle, feasible. If performed, such an analysis will probe the vector meson production at low energies, which will allow us to improve the description of this process in a kinematical regime unexplored by previous fixed-target experiments and current colliders. \begin{widetext} \begin{figure}[t] \begin{tabular}{ccc} \includegraphics[scale=0.3]{rapidityVM_rho0_110GeV_LHCb.eps} & \includegraphics[scale=0.3]{rapidityVM_omega_110GeV_LHCb.eps} &\includegraphics[scale=0.3]{rapidityVM_Jpsi_110GeV_LHCb.eps} \end{tabular} \caption{Rapidity distributions for the exclusive vector meson photoproduction in $pAr$ and $PbAr$ collisions at $\sqrt{s} = 110$ and 69 GeV, respectively. The predictions obtained assuming the LHCb requirements are also presented for comparison. } \label{fig:rap} \end{figure} \end{widetext} \section*{Acknowledgements} VPG would like to thank M. S. Rangel, J. G. Contreras and S. R. Klein for useful discussions. This work was partially financed by the Brazilian funding agencies CNPq, CAPES, FAPERGS and INCT-FNA (process number 464898/2014-5).
{\it Note added in proof --} Recently we became aware that the study of exclusive $J/\Psi$ photoproduction in fixed-target collisions as a probe of the generalized parton distribution $E_g$ has been briefly discussed in Ref. \cite{lansfixed}.
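{\it Back-of-envelope check --} The maximal photon-hadron center-of-mass energies quoted in the Introduction can be reproduced from the standard ultraperipheral estimate $W^{max}_{\gamma h} \approx (2\, \omega_{max} \sqrt{s})^{1/2}$ with $\omega_{max} \approx \gamma_{CM}\, \hbar c/(R_1+R_2)$; the hadron radii and the identification of the impact parameter cutoff with $R_1+R_2$ are rough assumptions of this sketch:
\begin{verbatim}
# Rough estimate of the maximal W_gamma-h for the three configurations
# quoted in the Introduction. The radii (and b_min = R1 + R2) are
# assumptions of this back-of-envelope check.
import math

hbarc, mp = 0.1973, 0.938              # GeV fm, GeV
Rp, RPb = 0.7, 7.1                     # fm

for tag, sqrt_s, bmin in [("pp   at 115 GeV", 115.0, 2 * Rp),
                          ("Pbp  at  72 GeV",  72.0, RPb + Rp),
                          ("PbPb at  72 GeV",  72.0, 2 * RPb)]:
    gamma = sqrt_s / (2 * mp)          # CM Lorentz factor per nucleon
    w_max = gamma * hbarc / bmin       # maximal photon energy (CM frame)
    print(f"{tag}: W_max ~ {math.sqrt(2 * w_max * sqrt_s):4.0f} GeV")
\end{verbatim}
The sketch returns approximately $45$, $12$ and $9$ GeV, in line with the values $44/\,12/\,9$ GeV quoted in the Introduction.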
\subsection[#1]{\sc #1}} \renewcommand{\mod}{{\ \operatorname{mod}\ }} \newcommand\E{\mathbb{E}} \newcommand\Z{\mathbb{Z}} \newcommand\R{\mathbb{R}} \newcommand\T{\mathbb{T}} \newcommand\C{\mathbb{C}} \newcommand\N{\mathbb{N}} \newcommand\g{\mathbf{g}} \newcommand\X{\mathrm{X}} \newcommand\W{\mathrm{W}} \newcommand\Y{\mathrm{Y}} \newcommand\D{\mathcal{D}} \newcommand\B{\mathcal{B}} \newcommand\Hcal{\mathcal{H}} \newcommand\G{\mathcal{G}} \newcommand\I{\mathcal{I}} \newcommand\J{\mathcal{J}} \renewcommand\P{\mathbb{P}} \newcommand\Q{\mathbb{Q}} \newcommand\Lip{\operatorname{Lip}} \newcommand\Sym{\operatorname{Sym}} \newcommand\Symb{\operatorname{Symb}} \newcommand\PP{\operatorname{PP}} \newcommand\x{{\bf x}} \newcommand\f{{\bf f}} \newcommand\w{{\bf w}} \newcommand\1{{\bf 1}} \newcommand\2{{\bf 2}} \newcommand\z{{\bf z}} \newcommand\s{{\bf s}} \newcommand\n{{\bf n}} \newcommand\y{{\bf y}} \renewcommand\i{{\bf i}} \newcommand\h{{\bf h}} \newcommand\m{{\bf m}} \renewcommand\u{{\bf u}} \newcommand\eps{\varepsilon} \newcommand\rank{\operatorname{rank}} \renewcommand\deg{\operatorname{deg}} \newcommand\degrank{\operatorname{degrank}} \newcommand\degrankform{{\bf degrank}} \newcommand\sgn{\operatorname{sgn}} \newcommand\id{\operatorname{id}} \renewcommand\th{{\operatorname{th}}} \newcommand\BP{{\operatorname{BP}}} \newcommand\poly{{\operatorname{poly}}} \newcommand\GI{{\operatorname{GI}}} \newcommand\DR{{\operatorname{DR}}} \newcommand\MD{{\operatorname{Multi}}} \newcommand\Nil{{\operatorname{Nil}}} \newcommand\ind{{\operatorname{ind}}} \newcommand\rat{{\operatorname{rat}}} \newcommand\sml{{\operatorname{sml}}} \newcommand\lin{{\operatorname{lin}}} \newcommand\mes{{\operatorname{mes}}} \newcommand\Taylor{{\operatorname{Taylor}}} \newcommand\petal{{\operatorname{petal}}} \newcommand\Horiz{{\operatorname{Horiz}}} \newcommand\dist{{\operatorname{dist}}} \newcommand\orbit{{\mathcal{O}}} \newcommand\F{{\mathcal{F}}} \newcommand\HK{\operatorname{HK}} \newcommand\ultra{{{}^*}} \renewcommand{\labelenumi}{(\roman{enumi})} \begin{document} \title{An inverse theorem for the Gowers $U^{s+1}[N]$-norm} \author{Ben Green} \address{Centre for Mathematical Sciences\\ Wilberforce Road\\ Cambridge CB3 0WA\\ England } \email{[email protected]} \author{Terence Tao} \address{Department of Mathematics\\ UCLA\\ Los Angeles, CA 90095\\ USA} \email{[email protected]} \author{Tamar Ziegler} \address{Department of Mathematics \\ Technion - Israel Institute of Technology\\ Haifa, Israel 32000} \email{[email protected]} \subjclass{11B30} \begin{abstract} We prove the \emph{inverse conjecture for the Gowers $U^{s+1}[N]$-norm} for all $s \geq 1$; this is new for $s \geq 4$. More precisely, we establish that if $f : [N] \rightarrow [-1,1]$ is a function with $\Vert f \Vert_{U^{s+1}[N]} \geq \delta$ then there is a bounded-complexity $s$-step nilsequence $F(g(n)\Gamma)$ which correlates with $f$, where the bounds on the complexity and correlation depend only on $s$ and $\delta$. From previous results, this conjecture implies the Hardy-Littlewood prime tuples conjecture for any linear system of finite complexity. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} The purpose of this paper is to establish the general case of a conjecture named the \emph{Inverse Conjecture for the Gowers norms} by the first two authors in \cite[Conjecture 8.3]{green-tao-linearprimes}. If $N$ is a (typically large) positive integer then we write $[N] := \{1,\dots,N\}$. 
For each integer $s \geq 1$ the inverse conjecture $\GI(s)$, whose statement we recall shortly, describes the structure of $1$-bounded functions $f : [N] \rightarrow \C$ whose $(s+1)^{\operatorname{st}}$ Gowers norm $\Vert f \Vert_{U^{s+1}[N]}$ is large. These conjectures together with a good deal of motivation and background to them are discussed in \cite{green-icm,green-tao-u3inverse,green-tao-linearprimes}. The conjectures $\GI(1)$ and $\GI(2)$ have been known for some time, the former being a straightforward application of Fourier analysis, and the latter being the main result of \cite{green-tao-u3inverse} (see also \cite{sam} for the characteristic $2$ analogue). The case $\GI(3)$ was also recently established by the authors in \cite{u4-inverse}. The aim of the present paper is to establish the remaining cases $\GI(s)$ for $s \geq 3$, in particular reestablishing the results in \cite{u4-inverse}. We begin by recalling the definition of the Gowers norms. If $G$ is a finite abelian group, $d \geq 1$ is an integer, and $f : G \rightarrow \C$ is a function then we define \begin{equation}\label{ukdef} \Vert f \Vert_{U^{d}(G)} := \left( \E_{x,h_1,\dots,h_d \in G} \Delta_{h_1} \ldots \Delta_{h_d} f(x)\right)^{1/2^d}, \end{equation} where $\Delta_h f$ is the multiplicative derivative $$ \Delta_h f(x) := f(x+h) \overline{f(x)}$$ and $\E_{x \in X} f(x) := \frac{1}{|X|} \sum_{x \in X} f(x)$ denotes the average of a function $f: X \to \C$ on a finite set $X$. Thus for instance we have \[ \Vert f \Vert_{U^2(G)} := \left( \E_{x,h_1,h_2 \in G} f(x) \overline{f(x+h_1) f(x+h_2)} f(x+h_1 + h_2)\right)^{1/4}.\] One can show that $U^d(G)$ is indeed a norm on the functions $f: G \to \C$ for any $d \geq 2$, though we will not need this fact here. In this paper we will be concerned with functions on $[N]$, which is not quite a group. To define the Gowers norms of a function $f : [N] \rightarrow \C$, set $G := \Z/\tilde N\Z$ for some integer $\tilde N \geq 2^d N$, define a function $\tilde f : G \rightarrow \C$ by $\tilde f(x) = f(x)$ for $x = 1,\dots,N$ and $\tilde f(x) = 0$ otherwise, and set \[ \Vert f \Vert_{U^d[N]} := \Vert \tilde f \Vert_{U^d(G)} / \Vert 1_{[N]} \Vert_{U^d(G)},\] where $1_{[N]}$ is the indicator function of $[N]$. It is easy to see that this definition is independent of the choice of $\tilde N$. One could take $\tilde N := 2^d N$ for definiteness if desired. The \emph{Inverse conjecture for the Gowers $U^{s+1}[N]$-norm}, abbreviated as $\GI(s)$, posits an answer to the following question. \begin{question} Suppose that $f : [N] \rightarrow \C$ is a function bounded in magnitude by $1$, and let $\delta > 0$ be a positive real number. What can be said if $\Vert f \Vert_{U^{s+1}[N]} \geq \delta$? \end{question} Note that in the extreme case $\delta = 1$ one can easily show that $f$ is a phase polynomial, namely $f(n)=e(P(n))$ for some polynomial $P$ of degree at most $s$. Furthermore, if $f$ correlates with a phase polynomial, that is to say if $|\E_{n \in [N]} f(n) \overline{e( P(n))}| \geq \delta$, then it is easy to show that $\Vert f \Vert_{U^{s+1}[N]} \geq c(\delta)$. It is natural to ask whether the converse is also true: does a large Gowers norm imply correlation with a polynomial phase function? Surprisingly, the answer is no, as was observed by Gowers \cite{gowers-4aps} and, in the related context of \emph{multiple recurrence}, somewhat earlier by Furstenberg and Weiss \cite{furst, fw-char}.
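For concreteness, these definitions are easy to probe numerically on small examples. The following minimal Python sketch (with illustrative parameters) evaluates $\Vert f \Vert_{U^2[N]}$ and $\Vert f \Vert_{U^3[N]}$ exactly as defined above, using the standard identities $\Vert F \Vert_{U^2(G)}^4 = \sum_{\xi} |\hat F(\xi)|^4$ and $\Vert F \Vert_{U^3(G)}^8 = \E_{h \in G} \Vert \Delta_h F \Vert_{U^2(G)}^4$; for the quadratic phase $f(n) = e(\sqrt{2}\, n^2)$ it returns a $U^3[N]$-norm equal to $1$ but a markedly smaller $U^2[N]$-norm (tending to zero as $N$ grows), exhibiting quadratic structure that is invisible to the Fourier ($U^2$) norm:
\begin{verbatim}
# Gowers norms U^2[N], U^3[N] by zero-padding into G = Z/(8N)Z and
# normalizing by the indicator of [N], as in the definition above.
import numpy as np

def u2_fourth(F):                  # ||F||_{U^2(G)}^4 = sum_xi |Fhat(xi)|^4
    Fhat = np.fft.fft(F) / len(F)  # Fhat(xi) = E_x F(x) e(-x xi/|G|)
    return np.sum(np.abs(Fhat)**4)

def u_norm(f, d, M):               # ||f||_{U^d[N]} for d = 2 or 3
    def gowers(F):
        if d == 2:
            return u2_fourth(F)**(1.0 / 4)
        avg = np.mean([u2_fourth(np.roll(F, -h) * F.conj())
                       for h in range(M)])   # E_h ||Delta_h F||_{U^2}^4
        return avg**(1.0 / 8)
    F = np.zeros(M, dtype=complex); F[:len(f)] = f
    I = np.zeros(M, dtype=complex); I[:len(f)] = 1.0
    return gowers(F) / gowers(I)

N, M = 64, 512                     # M plays the role of Ntilde >= 2^3 N
n = np.arange(N)
f = np.exp(2j * np.pi * np.sqrt(2) * n**2)    # quadratic phase e(alpha n^2)
print("U^2[N] =", round(u_norm(f, 2, M), 3))  # small: no Fourier bias
print("U^3[N] =", round(u_norm(f, 3, M), 3))  # = 1.0: full U^3 norm
\end{verbatim}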
The work of Furstenberg-Weiss and Conze-Lesigne \cite{conze} draws attention to the role of homogeneous spaces $G/\Gamma$ of nilpotent Lie groups, and subsequent work of Host and Kra \cite{host-kra} provides a link, in an ergodic-theoretic context, between these spaces and certain seminorms with a formal similarity to the Gowers norms under discussion here. Later work of Bergelson, Host and Kra \cite{bhk} highlights the role of a class of functions arising from these spaces $G/\Gamma$ called \emph{nilsequences}. The inverse conjecture for the Gowers norms, first formulated precisely in \cite[\S 8]{green-tao-linearprimes}, postulates that this class of functions (which contains the polynomial phases) represents the full set of obstructions to having large Gowers norm. We now recall that precise formulation. Recall that an \emph{$s$-step nilmanifold} is a manifold of the form $G/\Gamma$, where $G$ is a connected, simply-connected nilpotent Lie group of step at most $s$ (i.e. all $(s+1)$-fold commutators of $G$ are trivial), and $\Gamma$ is a discrete, cocompact\footnote{A subgroup $\Gamma$ of a topological group $G$ is \emph{cocompact} if the quotient space $G/\Gamma$ is compact.} subgroup of $G$. \begin{conjecture}[$\GI(s)$]\label{gis-conj} Let $s \geq 0$ be an integer, and let $0 < \delta \leq 1$. Then there exists a finite collection ${\mathcal M}_{s,\delta}$ of $s$-step nilmanifolds $G/\Gamma$, each equipped with some smooth Riemannian metric $d_{G/\Gamma}$ as well as constants $C(s,\delta), c(s,\delta) > 0$ with the following property. Whenever $N \geq 1$ and $f : [N] \rightarrow \C$ is a function bounded in magnitude by $1$ such that $\Vert f \Vert_{U^{s+1}[N]} \geq \delta$, there exists a nilmanifold $G/\Gamma \in {\mathcal M}_{s,\delta}$, some $g \in G$ and $x \in G/\Gamma$, and a function $F: G/\Gamma \to \C$ bounded in magnitude by $1$ and with Lipschitz constant at most $C(s,\delta)$ with respect to the metric $d_{G/\Gamma}$ such that $$ |\E_{n \in [N]} f(n) \overline{F(g^n x)}| \geq c(s,\delta).$$ \end{conjecture} We remark that there are many equivalent ways to reformulate this conjecture. For instance, instead of working with a finite family ${\mathcal M}_{s,\delta}$ of nilmanifolds, one could work with a single nilmanifold $G/\Gamma = G_{s,\delta}/\Gamma_{s,\delta}$, by taking the Cartesian product of all the nilmanifolds in the family. Other reformulations include an equivalent formulation using polynomial nilsequences rather than linear ones (see Conjecture \ref{gis-poly}) and an ultralimit formulation (see Conjecture \ref{gis-conj-nonst}). One can also formulate the conjecture using bracket polynomials, or local polynomials; see \cite{green-tao-u3inverse} for a discussion of these equivalences in the $s=2$ case. Let us briefly review the known partial results on this conjecture: \begin{enumerate} \item $\GI(0)$ is trivial. \item $\GI(1)$ follows from a short Fourier-analytic computation. \item $\GI(2)$ was established about five years ago in \cite{green-tao-u3inverse}, building on work of Gowers \cite{gowers-4aps}. \item $\GI(3)$ was established, quite recently, in \cite{u4-inverse}. \item In the extreme case $\delta = 1$ one can easily show that $f(n)=e(P(n))$ for some polynomial $P$ of degree at most $s$, and every such function \emph{is} an $s$-step nilsequence by a direct construction. See, for example, \cite{green-tao-u3inverse} for the case $s = 2$.
\item In the almost extremal case $\delta \geq 1- \eps_s$, for some $\eps_s > 0$, one may see that $f$ correlates with a phase $e(P(n))$ by adapting arguments first used in the theoretical computer-science literature \cite{akklr}. \item The analogue of $\GI(s)$ in ergodic theory (which, roughly speaking, corresponds to the asymptotic limit $N \to \infty$ of the theory here; see \cite{host-kra-uniformity} for further discussion) was formulated and established in \cite{host-kra}, work done independently of the work of Gowers (see also the earlier paper \cite{hk1}). This work was the first place in the literature to link objects of Gowers-norm type (associated to functions on a measure-preserving system $(X, T,\mu)$) with flows on nilmanifolds, and the subsequent paper \cite{bhk} was the first work to underline the importance of \emph{nilsequences}. The formulation of $\GI(s)$ by the first two authors in \cite{green-tao-linearprimes} was very strongly influenced by these works. For the closely related problem of analysing multiple ergodic averages, the relevance of flows on nilmanifolds was earlier pointed out in \cite{furst, fw-char,lesigne-nil}, building upon earlier work in \cite{conze}. See also \cite{hk0,ziegler} for related work on multiple averages and nilmanifolds in ergodic theory. \item The analogue of $\GI(s)$ in finite fields of large characteristic was established by ergodic-theoretic methods in \cite{bergelson-tao-ziegler,tao-ziegler}. \item A weaker ``local'' version of the inverse theorem (in which correlation takes place on a subprogression of $[N]$ of size $\sim N^{c_s}$) was established by Gowers \cite{gowers-longaps}. This paper provided a good deal of inspiration for our work here. \item The converse statement to $\GI(s)$, namely that correlation with a function of the form $n \mapsto F(g^n x)$ implies that $f$ has large $U^{s+1}[N]$-norm, is also known. This was first established in \cite[Proposition 12.6]{green-tao-u3inverse}, following arguments of Host and Kra \cite{host-kra} rather closely. A rather simple proof of this result is given in \cite[Appendix G]{u4-inverse}. \end{enumerate} The main result of this paper is a proof of Conjecture \ref{gis-conj}: \begin{theorem}\label{mainthm} For any $s \geq 3$, the inverse conjecture for the $U^{s+1}[N]$-norm, $\GI(s)$, is true. \end{theorem} By combining this result with the previous results in \cite{green-tao-linearprimes,green-tao-mobiusnilsequences} we obtain a quantitative Hardy-Littlewood prime tuples conjecture for all linear systems of finite complexity; in particular, we now have the expected asymptotic for the number of primes $p_1 < \ldots < p_k \leq X$ in arithmetic progression, for every fixed positive integer $k$. We refer to \cite{green-tao-linearprimes} for further discussion, as we have nothing new to add here regarding these applications. Several further applications of the $\GI(s)$ conjectures are given in \cite{fhk,green-tao-arithmetic-regularity}. \vspace{11pt} \section{Strategy of the proof}\label{strategy-sec} The proof of Theorem \ref{mainthm} is long and complicated, but broadly speaking it follows the strategy laid out in previous works \cite{gowers-4aps,gowers-longaps,green-tao-u3inverse,u4-inverse,sam}. We induct on $s$, assuming that $\GI(s-1)$ has already been established and using this to prove $\GI(s)$. 
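Underlying this induction is the elementary recursion obeyed by the Gowers norms: directly from \eqref{ukdef} and the fact that the derivatives $\Delta_h$ commute with one another, one has $$ \Vert f \Vert_{U^{d}(G)}^{2^{d}} = \E_{h \in G} \Vert \Delta_h f \Vert_{U^{d-1}(G)}^{2^{d-1}} $$ for every $d \geq 2$, with an analogous statement (after accounting for the normalisation by $1_{[N]}$) for the $U^{d}[N]$ norms. In particular, if $\Vert f \Vert_{U^{s+1}[N]} \gg 1$, then $\Vert \Delta_h f \Vert_{U^{s}[N]} \gg 1$ for many shifts $h$, which is what allows the inductive hypothesis $\GI(s-1)$ to be applied to the derivatives $\Delta_h f$.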
To explain the argument, let us first summarise the main steps taken in \cite{u4-inverse} in order to deduce $\GI(3)$, the inverse theorem for the $U^4$-norm, from $\GI(2)$, the inverse theorem for the $U^3$ norm (established in \cite{green-tao-u3inverse}). Once this is done we will explain some of the extra difficulties involved in handling the general case. For a more extensive (but informal) discussion of the proof strategy, see \cite{gtz-announce}. Once we set up some technical machinery, we will also be able to give a more detailed description of the strategy in \S \ref{overview-sec}. Here, then, is an overview of the argument in \cite{u4-inverse}. \begin{enumerate} \item (Apply induction) If $\Vert f \Vert_{U^4[N]} \gg 1$ then, for many $h$, $\Vert \Delta_h f \Vert_{U^3[N]} \gg 1$ and so $\Delta_h f$ correlates with a $2$-step nilsequence $\chi_h$. \item (Nilcharacter decomposition) $\chi_h$ may be decomposed as a sum of a special type of nilsequence called a \emph{nilcharacter}, essentially by a Fourier decomposition. For the sake of illustration, these $2$-step nilcharacters may be supposed to have the form \[ \chi_h(n) = e(\{\alpha_h n\} \beta_h n),\] although these are not quite nilcharacters due to the discontinuous nature of the fractional part function $x \mapsto \{x\}$, and in any event a general $2$-step nilcharacter will be modeled by a linear combination of such ``bracket quadratic monomials'', rather than by a single such monomial (see \cite{green-tao-u3inverse} for further discussion). \item (Rough linearity) The fact that $\Delta_h f$ correlates with $\chi_h$ forces $\chi_h$ to behave weakly linearly in $h$. To get a feel for why this is so, suppose that $|f| \equiv 1$; then we have the cocycle identity \[ \Delta_{h+k} f(n) = \Delta_h f(n+k) \Delta_k f(n).\] To capture something like the same behaviour in the much weaker setting where $\Delta_h f$ correlates with $\chi_h$, we use an extraordinary argument of Gowers \cite{gowers-4aps} relying on the Cauchy-Schwarz inequality. Roughly speaking, the information obtained is of the form \begin{equation}\label{linear-eq} \chi_{h_1} \chi_{h_2} \sim \chi_{h_3} \chi_{h_4} \quad \mbox{modulo lower order terms} \end{equation} for many $h_1, h_2, h_3, h_4$ with $h_1 + h_2 = h_3 + h_4$. \item (Furstenberg-Weiss) An argument of Furstenberg and Weiss \cite{fw-char} is adapted in order to study \eqref{linear-eq}. The quantitative distribution theory of nilsequences developed in \cite{green-tao-nilratner} is a major input here. It is concluded that we may assume that the frequency $\beta_h$ does not actually depend on $h$. Note that this step appeared for the first time in the proof of $\GI(3)$; it did not feature in the proof of $\GI(2)$ in \cite{green-tao-u3inverse}. \item (Linearisation) A similar argument allows one to then assert that \begin{equation}\label{additive-eq} \alpha_{h_1} + \alpha_{h_2} \approx \alpha_{h_3} + \alpha_{h_4} \pmod{1} \end{equation} for many $h_1,h_2,h_3,h_4$ with $h_1 + h_2 = h_3 + h_4$. \item (Additive Combinatorics) By arguments from additive combinatorics related to the Balog-Szemer\'edi-Gowers theorem \cite{balog,gowers-4aps} and Freiman's theorem, as well as some geometry of numbers, we may then assume that $\alpha_h$ varies ``bracket-linearly'' in $h$, thus \begin{equation}\label{bracket-lin} \alpha_h = \gamma_1 \{ \eta_1 h\} + \dots + \gamma_d \{\eta_d h\}. 
\end{equation} Up to top order, then, the nilcharacter $\chi_h(n)$ can now be assumed to take the form $e(\psi(h,n,n))$, where $\psi$ is ``bracket-multilinear''; it is a sum of terms such as $\{\gamma \{\eta h\} n\} \beta n$. \item (Symmetry argument) The bracket multilinear form $\psi$ obeys an additional symmetry property. This is a reflection of the identity $\Delta_h \Delta_k f = \Delta_k \Delta_h f$, but transferring this to the much weaker setting in which we merely have correlation of $\Delta_h f$ with $\chi_h$ requires another appeal to Gowers' Cauchy-Schwarz argument from (iii). In fact, the key point is to look at the second order terms in \eqref{linear-eq}. \item (Integration) Assuming this symmetry, one is able to express \[ \chi_h(n) \sim \Theta(n+h) \overline{\Theta'(n)}\] for some bracket cubic functions $\Theta, \Theta'$, which morally take the form \[ \Theta(n), \Theta'(n) \sim e(\psi(n,n,n)/3)\] (for much the same reason that $x^3/3$ is an antiderivative of $x^{2}$). Thus we morally have \[ \Delta_h f(n) \sim \Theta(n+h) \overline{\Theta'(n)}\] \item (Construction of a nilsequence) Any bracket cubic form like $e(\psi(n,n,n))$ ``comes from'' a 3-step nilmanifold; this construction is accomplished in \cite{u4-inverse} in a rather \emph{ad hoc} manner. \item From here, one can analyse lower order terms by the induction hypothesis $\GI(2)$. This is a relatively easy matter. \end{enumerate} Let us now discuss the argument of this paper in the light of each point of this outline. A more detailed outline is given in \S \ref{overview-sec}. Assume that $\GI(s-1)$ has been established. \begin{enumerate} \item (Apply induction) If $\Vert f \Vert_{U^{s+1}[N]} \gg 1$ then, for many $h$, $\Vert \Delta_h f \Vert_{U^s[N]} \gg 1$ and so $\Delta_h f$ correlates with an $(s-1)$-step nilsequence $\chi_h$. This is straightforward (see \S \ref{overview-sec}). \item (Nilcharacter decomposition) $\chi_h$ may be decomposed into nilcharacters; this is fairly straightforward as well. It is somewhat reassuring to think of $\chi_h(n)$ as having the form $e(\psi_h(n))$, where $\psi_h(n)$ is a bracket polynomial ``of degree $s-1$'', but we will not be working explicitly with bracket polynomials much in this paper, except as motivation and as a source of examples. One of the main challenges in attempting to prove $\GI(4)$ by a direct generalisation of our arguments from \cite{u4-inverse} is that bracket cubic polynomials are already rather complicated to deal with, and can take different forms such as $\{\alpha n\}\{\beta n\}\gamma n$ and $\{ \{\alpha n\} \beta n\} \gamma n$. Instead of objects such as $e(\alpha n\{\beta n\})$, then, we will work with the rather more abstract notion of a \emph{symbol}. This notion, which is fairly central to our paper, is defined and discussed in \S \ref{nilcharacters}. One additional technical point is worth mentioning here. This is the fact that $e(\alpha n\{\beta n\})$ (say) cannot be realised as a nilsequence $F(g^n \Gamma)$ with $F$ \emph{continuous}, and therefore the distributional results of \cite{green-tao-nilratner} do not directly apply. In \cite{u4-inverse} these discontinuities could be understood quite explicitly, but here we take a different approach: we decompose $G/\Gamma$ into $D$ pieces using a smooth partition of unity for some $D=O(1)$, and then work instead with the (smooth) $\C^D$-valued nilsequence consisting of these pieces.
We discuss this device more fully in \S \ref{nilcharacters}, but we emphasise that this is a technical device and the reader is advised not to give this particular aspect of the proof too much attention. \item (Rough linearity) $\chi_h$ varies roughly linearly in $h$; this is another fairly straightforward modification of the arguments of Gowers, already employed in \cite{u4-inverse}, which is performed in \S \ref{cs-sec}. \item (Furstenberg-Weiss) This proceeds along similar lines to the corresponding argument in \cite{u4-inverse} but is, in a sense, rather easier once one has developed the device of $\C^D$-valued nilsequences, which allow one to remain in the smooth category; this is accomplished in \S \ref{linear-sec}, after a substantial amount of preparatory material in \S \ref{freq-sec}, \S \ref{reg-sec} and Appendix \ref{equiapp}. \item (Linearisation) This is also quite similar to the corresponding argument in \cite{u4-inverse}, and is performed in \S \ref{linear-sec}. In both of parts (iv) and (v), the ``bracket calculus'' from \cite{u4-inverse} is replaced by the more conceptual ``symbol calculus'' developed in Appendix \ref{basic-sec}. \item (Additive Combinatorics) The additive combinatorial input is much the same as in \cite{u4-inverse}. For the convenience of the reader we sketch it in Appendix \ref{app-f}. \item (Construction of a nilsequence) Our argument differs quite substantially from that in \cite{u4-inverse} at this point. The $s$-step nilobject, which is now a two-variable object $\chi(h,n)$, is constructed \emph{before} the symmetry argument and in a more conceptual manner. This may be compared with the rather \emph{ad hoc} approach taken in \cite{green-tao-u3inverse, u4-inverse}, where various bracket polynomials were merely exhibited as arising from nilsequences. We perform this construction in \S \ref{multi-sec}. \item (Symmetry argument) We replace $\chi(h,n)$ with an equivalent nilcharacter $\tilde \chi(h,n,\ldots,n)$ where $\tilde \chi$ is a nilcharacter in $s$ variables, that is symmetric in the last $s-1$ variables. The symmetry argument given in \S \ref{symsec} shows that $\tilde \chi(h,n,\ldots,n)$ is equivalent to $\tilde \chi(n,h,\ldots,n)$. Again the key idea in the analysis is to look at the second order terms in \eqref{linear-eq}. \item (Integration) With the symmetry in hand, we can use the calculus of multilinear nilcharacters to essentially express $\tilde \chi(h,n,\ldots,n)$ as the derivative of an expression which is roughly of the form $\tilde \chi(n,\ldots,n)/s$; see \S \ref{symsec} for details. \item The final step of the argument is relatively straightforward, as before; see \S \ref{overview-sec}. \end{enumerate} In our previous paper \cite{u4-inverse} it was already rather painful to keep proper track of such notions as ``many'' and ``correlates with''. Here matters are even worse, and so to organise the above tasks it turns out to be quite convenient to first take an ultralimit of all objects being studied, effectively placing one in the setting of \emph{nonstandard analysis}. This allows one to easily import results from infinitary mathematics, notably the theory of Lie groups and basic linear algebra, into the finitary setting of functions on $[N]$. In \S \ref{nsa-sec} and Appendix \ref{nsa-app} we review the basic machinery of ultralimits that we will need here; we will not be exploiting any particularly advanced aspects of this framework.
The reader does not really need to understand the ultrafilter language in order to comprehend the basic structure of the paper, provided that he/she is happy to deal with concepts like ``dense'' and ``correlates with'' in a somewhat informal way, resembling the way in which analysts actually talk about ideas with one another (and, in fact, analogous to the way we wrote this paper). It is possible to go through the paper and properly quantify all of these notions using appropriate parameters $\delta$ and (many) growth functions $\mathcal{F}$. This would have the advantage of making the paper on some level comprehensible to the reader with an absolute distrust of ultrafilters, and it would also remove the dependence on the axiom of choice and in principle provide explicit but very poor bounds. However it would cause the argument to be significantly longer, and the notation would be much bulkier. Our exposition will be as follows. We will begin by spending some time introducing the ultrafilter language and then, motivated by examples, the notions of nilsequence, nilcharacter and symbol. Once that is done we will, in \S \ref{overview-sec}, give the high-level argument for Theorem \ref{mainthm}; this consists of detailing points (i), (ii) and (x) of the outline above and giving proper statements of the other main points. The discussion above concerning points (iii), (iv), (v) and (vi) has been simplified for the sake of exposition. In actual fact, these points are dealt with together by a kind of iterative loop, in which more and more bracket-linear structure is placed on the nilcharacters $\chi_h(n)$ by cycling from (iii) to (vi) repeatedly. We remark that a quite different approach to the structural theory of the Gowers norms, also using ultrafilters, is in the process of being carried out in \cite{szeg-1,szeg-2,szeg-3}; this seems related to the work of Host and Kra, whereas our work ultimately derives from the work of Gowers. We also make the minor remark that our proof of $\GI(s)$ is restricted to the case $s \geq 3$ for minor technical reasons. In particular, we take advantage of the non-trivial nature of the degree $s-2$ ``lower order terms'' in the Gowers Cauchy-Schwarz argument (Proposition \ref{gcs-prop}) in the symmetry argument step; and we will also observe that the various ``smooth'' and ``periodic'' error terms arising from the equidistribution theory in Appendix \ref{equiapp} are of degree $1$ and thus negligible compared with the main terms in the analysis, which are of degree $s-1$. The arguments can be modified to give a proof of $\GI(2)$, although this proof would basically be a notationally intensive repackaging of the arguments in \cite{green-tao-u3inverse}. \emph{Acknowledgements.} BG was, for some of the period during which this work was carried out, a fellow of the Radcliffe Institute at Harvard. He is very grateful to the Radcliffe Institute for providing excellent working conditions. TT is supported by NSF Research Award DMS-0649473, the NSF Waterman award and a grant from the MacArthur Foundation. TZ is supported by ISF grant 557/08, an Alon fellowship and a Landau fellowship of the Taub foundation. All three authors are very grateful to the University of Verona for allowing them to use classrooms at Canazei during a week in July 2009. This work was largely completed during that week. \section{Basic notation}\label{notation-sec} We write $\N := \{0,1,2,\ldots\}$ for the natural numbers, and $\N^+ := \{1,2,\ldots\}$ for the positive natural numbers.
Given two integers $N,M$, we write $[N,M]$ for the discrete interval $[N,M] := \{ n: N \leq n \leq M\}$. We also make the abbreviations $[N] := [1,N]$ and $[[N]] := [-N,N]$. If $x$ is a real number, we write $x \mod 1$ for the associated residue class in the unit circle $\T := \R/\Z$, and write $x=y \mod 1$ if $x$ and $y$ differ by an integer. We will rely frequently on the following two elementary functions: the \emph{fundamental character} $e: \R \to \C$ (or $e: \T \to \C$) defined by $$ e(x) := e^{2\pi i x},$$ and the \emph{signed fractional part function}\footnote{The signed fractional part will be slightly more convenient to work with than the unsigned fractional part, as it is equal to the identity near the origin.} $\{\}: \R \to I_0$, where $I_0$ is the \emph{fundamental domain} $$ I_0 := \{ x \in \R: -1/2 < x \leq 1/2\}$$ and $\{x\}$ is the unique real number in $I_0$ such that $x = \{x\} \mod 1$. We will often rely on the identity $$ e(x) = e(\{x\}) = e( x \mod 1 )$$ without further comment. For technical reasons, we will need to manipulate vector-valued complex quantities in a manner analogous to scalar complex quantities. If $v = (v_i)_{i=1}^D$ and $w = (w_i)_{i = 1}^{D'}$ are vectors in $\C^D$ and $\C^{D'}$ respectively then we form the \emph{tensor product} $v \otimes w \in \C^{DD'}$ by the formula \[ v \otimes w := (v_i w_{i'})_{1 \leq i \leq D,\ 1 \leq i' \leq D'}\] and the \emph{complex conjugate} $\overline{v}\in \C^D$ by the formula \[ \overline{v} := (\overline{v_1},\dots,\overline{v_D}).\] Similarly, if $X$ is some set and $f : X \rightarrow \C^D$ and $g : X \rightarrow \C^{D'}$ are functions then we write $f \otimes g: X \rightarrow \C^{DD'}$ for the function defined by $(f\otimes g)(x) := f(x) \otimes g(x)$, and similarly define $\overline{f}: X \to \C^D$. If $G = (G,+)$ is an additive group, $k \in \N$, $\vec g = (g_1,\ldots,g_k) \in G^k$, and $\vec a = (a_1,\ldots,a_k) \in \Z^k$, we define the dot product $$ \vec a \cdot \vec g := a_1 g_1 + \ldots + a_k g_k.$$ Given a set $H$ in an additive group, define an \emph{additive quadruple} in $H$ to be a quadruple $(h_1,h_2,h_3,h_4) \in H^4$ with $h_1+h_2=h_3+h_4$. The number of additive quadruples in $H$ is known as the \emph{additive energy} of $H$ and is denoted $E(H)$. A map $\phi: H \to G$ from $H$ to another additive group $G$ is said to be a \emph{Freiman homomorphism} if it preserves additive quadruples, i.e. if $\phi(h_1)+\phi(h_2)=\phi(h_3)+\phi(h_4)$ for all additive quadruples $(h_1,h_2,h_3,h_4)$ in $H$. Given a multi-index $\vec d = (d_1,\ldots,d_k) \in \N^k$, we write $|\vec d| := d_1+\ldots+d_k$. We now briefly review and clarify some standard notation from group theory. When we do not assume a group $G$ to be abelian, we will always write $G$ multiplicatively: $G = (G,\cdot)$. However, when dealing with abelian groups, we reserve the right to use additive notation instead. We view an $n$-tuple $(a_1,\ldots,a_n)$ of labels as a finite ordered set with the ordering $a_1 < \ldots < a_n$. If $A = (a_1,\ldots,a_n)$ is a finite ordered set and $(g_a)_{a \in A}$ is a collection of group elements in a multiplicative group $G$, we define the ordered products $$ \prod_{a \in A} g_a := g_{a_1} \ldots g_{a_n}, \; \; \prod_{i=1}^n g_i := g_1 \ldots g_n \; \; \mbox{and} \; \; \prod_{i=n}^1 g_i := g_n \ldots g_1$$ for any $n \geq 0$, with the convention that the empty product is the identity. We extend this notation to infinite products under the assumption that all but finitely many of the factors are equal to the identity.
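We illustrate two of these conventions with simple examples, which may be kept in mind for later use. For $D = D' = 2$, the tensor product of $v = (v_1,v_2)$ and $w = (w_1,w_2)$ is $$ v \otimes w = (v_1 w_1, v_1 w_2, v_2 w_1, v_2 w_2) \in \C^4,$$ so that $|v \otimes w| = |v| |w|$; in particular the tensor product of two unit vectors is again a unit vector, which is what will allow us to multiply unit-magnitude vector-valued sequences together later in the paper. As for Freiman homomorphisms, any map of the form $\phi(h) := \psi(h) + c$, where $\psi: H \to G$ is the restriction of a genuine additive homomorphism and $c \in G$ is fixed, clearly preserves additive quadruples and is thus a Freiman homomorphism; the point of the definition is that the converse can fail when $H$ is a proper subset of the ambient group.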
Given a subset $A$ of a group $G$, we let $\langle A \rangle$ denote the subgroup of $G$ generated by $A$. Given a family $(H_i)_{i \in I}$ of subgroups of $G$, we write $\bigvee_{i \in I} H_i$ for the smallest subgroup of $G$ that contains all of the $H_i$. Given two elements $g, h$ of a multiplicative group $G$, we define the \emph{commutator} $$ [g,h] := g^{-1}h^{-1}gh.$$ We write $H \leq G$ to denote the statement that $H$ is a subgroup of $G$. If $H, K \leq G$, we let $[H,K]$ be the subgroup generated by the commutators $[h,k]$ with $h \in H$ and $k \in K$, thus $[H,K] = \left\langle \{ [h,k]: h \in H, k \in K \} \right\rangle$. If $r \geq 1$ is an integer and $g_1,\ldots,g_r \in G$, we define an $(r-1)$-\emph{fold iterated commutator} of $g_1,\ldots,g_r$ inductively by declaring $g_1$ to be the only $0$-fold iterated commutator of $g_1$, and for $r>1$ defining an $(r-1)$-fold iterated commutator to be any expression of the form $[w,w']$, where $w$ and $w'$ are $(s-1)$-fold and $(s'-1)$-fold commutators of $g_{i_1},\ldots,g_{i_s}$ and $g_{i'_1},\ldots,g_{i'_{s'}}$ respectively, where $s, s' \geq 1$ are such that $s+s'=r$, and $\{i_1,\ldots,i_s\} \cup \{ i'_1,\ldots,i'_{s'} \} = \{1,\ldots,r\}$ is a partition of $\{1,\ldots,r\}$ into two classes. Thus for instance $[[g_3,g_1],[g_2,g_4]]$ and $[g_2,[g_1,[g_3,g_4]]]$ are $3$-fold iterated commutators of $g_1,\ldots,g_4$. The following lemma will be useful for computing commutator groups. \begin{lemma}\label{normal} Let $H = \langle A \rangle, K = \langle B \rangle$ be normal subgroups of a nilpotent group $G$ that are generated by sets $A \subset H$, $B \subset K$ respectively. Then $[H,K]$ is normal, and is also the subgroup generated by the $(i+j-1)$-fold iterated commutators of $a_1,\ldots,a_i,b_1,\ldots,b_j$ with $a_1,\ldots,a_i \in A$, $b_1,\ldots,b_j \in B$ and $i,j \geq 1$. \end{lemma} \begin{proof} The normality of $[H,K]$ follows from the identity \[ g[H,K]g^{-1} = [gHg^{-1},gKg^{-1}]. \] It is then clear that $[H,K]$ contains the group generated by the iterated commutators of elements in $A,B$ that involve at least one element from each. The converse follows inductively using the identities \begin{equation}\label{com-ident} [x,y]=[y,x]^{-1}, \; \; [xy,z]=[x,z][[x,z],y][y,z] \; \; \mbox{and} \; \; [x,y^{-1}]=[y,x][[y,x],y^{-1}]. \end{equation} This concludes the proof. \end{proof} As a corollary of the above lemma, we have the distributive law $$ \left[ \bigvee_{i \in I} H_i, \bigvee_{j \in J} K_j \right] = \bigvee_{i \in I, j \in J} [H_i, K_j]$$ whenever $(H_i)_{i \in I}, (K_j)_{j \in J}$ are families of normal subgroups of a nilpotent group $G$. If $H \lhd G$ is a normal subgroup of $G$, and $g \in G$, we use $g \mod H$ to denote the coset $gH$ of $g$ in $G/H$. For instance, $g = g' \mod H$ if $gH = g' H$.\vspace{11pt} At various stages in the paper we will need the (discrete) \emph{Baker-Campbell-Hausdorff formula} in the following weak form: \begin{equation}\label{bch} g_1^{n_1} g_2^{n_2} = g_2^{n_2} g_1^{n_1} \prod_a g_a^{P_a(n_1,n_2)} \end{equation} for all $g_1,g_2$ in a nilpotent group $G$ and all integers $n_1,n_2$, where $g_a$ ranges over all iterated commutators of $g_1, g_2$ that involve at least one copy of each (note from nilpotency that there are only finitely many non-trivial $g_a$), with the $a$ ordered in some arbitrary fashion, and $P_a: \Z \times \Z \to \Z$ are polynomials.
Furthermore, if $g_a$ involves $d_1$ copies of $g_1$ and $d_2$ copies of $g_2$, then $P_a$ has degree at most $d_1$ in the $n_1$ variable and $d_2$ in the $n_2$ variable. Let $G$ be a connected, simply connected, nilpotent Lie group (or \emph{nilpotent Lie group} for short). Then we denote the Lie algebra of $G$ as $\log G$. As is well known (see e.g. \cite{bourbaki}), the exponential map $\exp: \log G \to G$ is a homeomorphism, inverted by the logarithm map $\log: G \to \log G$, and we can then define the exponentiation operation $g^t$ for any $g \in G$ and $t \in \R$ by the formula $$ g^t := \exp( t \log g ).$$ There is a continuous version of the Baker-Campbell-Hausdorff formula: \begin{equation}\label{bch-cont} g_1^{t_1} g_2^{t_2} = g_2^{t_2} g_1^{t_1} \prod_a g_a^{P_a(t_1,t_2)} \end{equation} for all $t_1,t_2 \in \R$ and $g_1, g_2 \in G$, where $P_a$ are the polynomials occurring in \eqref{bch}. We also observe the variant formulae $$ (g_1 g_2)^{t} = g_1^t g_2^t \prod_a g_a^{Q_a(t)}$$ for some polynomials $Q_a$ and all $t \in \R$, $g_1, g_2 \in G$, and $$ \exp( t_1 \log g_1 + t_2 \log g_2 ) = g_1^{t_1} g_2^{t_2} \prod_a g_a^{R_a(t_1,t_2)}$$ for some further polynomials $R_a$ and all $t_1, t_2 \in \R$, $g_1, g_2 \in G$. We refer to all of these formul{\ae} collectively as \emph{the Baker-Campbell-Hausdorff formula}. If $A$ is a subset of a nilpotent Lie group $G$, we let $\langle A \rangle_\R$ be the smallest connected Lie subgroup of $G$ containing $A$, or more explicitly $$ \langle A \rangle_\R := \langle \{ a^t: a \in A; t \in \R \} \rangle.$$ Equivalently, $\log \langle A \rangle_\R$ is the Lie algebra generated by $\log A$. A \emph{lattice} of a nilpotent Lie group $G$ is a discrete cocompact subgroup $\Gamma$ of $G$. Thus for instance, we see from \eqref{bch} that for any finite set $A$ in $G$, $\langle A \rangle$ will be a cocompact subgroup of $\langle A \rangle_\R$, and will thus be a lattice if $\langle A \rangle$ is discrete. A connected Lie subgroup $H$ of $G$ is said to be \emph{rational} with respect to $\Gamma$ if $\Gamma \cap H$ is cocompact in $H$. For instance, if $G = \R^2$, $\Gamma$ is the standard lattice $\Z^2$, and $\alpha \in \R$, then the connected Lie subgroup $H := \{ (x,\alpha x): x \in \R \}$ is rational if and only if $\alpha$ is rational.\vspace{11pt} \textsc{Further notation.} Here is a list of further notation used in the paper for reference, together with the place in the paper where each piece is defined and discussed. 
\noindent\begin{tabular}{lll} $\poly(H_\N \to G_\N)$ & polynomial maps from one filtered group $H_\N$ to $G_\N$ & \ref{poly-map-def}\\ $\poly(\Z_\N \to G_\N)$ & polynomial maps with the degree filtration & \ref{poly-map-def} \\ $\poly(\Z^k_{\N^k} \to G_{\N^k})$ & polynomial maps with the multidegree filtration & \ref{poly-map-def} \\ $\poly(\Z_{\DR} \to G_{\DR})$ & polynomial maps with the degree-rank filtration & \ref{poly-map-def} \\ $L^\infty(\Omega \to \overline{\C}^D)$ & bounded limit functions to $\ultra \C^D$& \eqref{sigma-bounded}\\ $L^\infty(\Omega \to \overline{\C}^\omega)$ & bounded limit functions (also $L^{\infty}(\Omega)$) & \eqref{sigma-bounded}\\ $\Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ & bounded limit functions with bounded Lipschitz constant & \ref{lip-def} \\ $\Nil^{d}([N])$ & nilsequences of degree $\le d$ on $[N]$ & \ref{nilseq} \\ $\Nil^{\subset J}(\Omega)$ & nilsequences of degree $\subset J$ & \ref{nilch-def-gen} \\ $\Xi^d([N])$ & space of degree $d$ nilcharacters on $[N]$ & \ref{nilch-def} \\ $\Xi^{(d_1,\ldots,d_k)}_\MD(\Omega)$ & multidegree nilcharacters & \ref{nilch-def-gen} \\ $\Xi^{(d,r)}_\DR(\Omega)$ & degree-rank nilcharacters & \ref{nilch-def-gen} \\ $\Symb^d([N])$ & equiv. classes of degree $d$ nilcharacters in $\Xi^d([N])$ & \ref{symbol-def} \\ $\Symb^{(d_1,\ldots,d_k)}_{\MD}(\Omega)$ & equiv. classes of multidegree nilcharacters & \ref{equiv-def} \\ $\Symb^{(d,r)}_\DR(\Omega)$ & equiv. classes of degree-rank nilcharacters & \ref{equiv-def} \\ $G^{\vec D},G^{\vec D, \leq (s-1,r_*)}$ & universal nilpotent Lie group of degree-rank $(s-1,r_*)$ & \ref{universal-nil}\\ $\Horiz_i(G)$ & $i$'th horizontal space of $G$ & \ref{horton} \\ $\Taylor_i(g)$ & $i$'th horizontal Taylor coefficient of a polynomial map & \ref{horton} \\ $(\vec D, \eta, \F)$ & total frequency representation of a nilcharacter & \ref{representation-def} \end{tabular} \section{The polynomial formulation of $\GI(s)$}\label{polysec} The inverse conjecture $\GI(s)$, Conjecture \ref{gis-conj}, has been formulated using \emph{linear} nilsequences $F(g^n x)$. This is largely for compatibility with the earlier paper \cite{green-tao-linearprimes} of the first two authors on linear equations in primes, where the conjecture was stated in precisely this form as Conjecture 8.3. Subsequently, however, it was discovered that it is more natural to deal with a somewhat more general class of object called a \emph{polynomial nilsequence} $F(g(n)\Gamma)$. This is particularly so when it comes to discussing the distributional properties of nilsequences, as was done in \cite{green-tao-nilratner}. Thus, we shall now recast the inverse conjecture in terms of polynomial nilsequences, which is the formulation we will work with throughout the rest of the paper. Let us first recall the definition of a polynomial nilsequence of degree $d$. \begin{definition}[Polynomial nilsequence] Let $G$ be a (connected, simply-connected) nilpotent Lie group. By a \emph{filtration} $G_\N = (G_i)_{i \in \N}$ of degree $\leq d$ we mean a nested sequence $G \supseteq G_{0} \supseteq G_{1} \supseteq G_{2} \supseteq \dots \supseteq G_{d+1} = \{\id\}$ with the property that $[G_{i}, G_{j}] \subseteq G_{i+j}$ for all $i, j \geq 0$, adopting the convention that $G_{i}=\{\id\}$ for all $i>d$. By a \emph{polynomial sequence} adapted to $G_\N$ we mean a map $g : \Z \rightarrow G$ such that $\partial_{h_i} \dots \partial_{h_1} g \in G_i$ for all $i \geq 0$ and $h_1,\dots, h_i \in \Z$, where $\partial_h g(n) := g(n+h) g(n)^{-1}$.
Write $\poly(\Z_\N \to G_{\N})$ for the collection of all such polynomial sequences. Let $\Gamma \leq G$ be a lattice in $G$ (i.e. a discrete and cocompact subgroup), so that the quotient $G/\Gamma$ is a nilmanifold, and assume that each of the $G_i$ is a \emph{rational} subgroup (i.e. $\Gamma_i := \Gamma \cap G_i$ is a cocompact subgroup of $G_i$). We refer to the pair $G/\Gamma = (G/\Gamma,G_\N)$ as a \emph{filtered nilmanifold}. A \emph{polynomial orbit} $\orbit: \Z \to G/\Gamma$ is a sequence of the form $\orbit(n) := g(n) \Gamma$, where $g \in \poly(\Z_\N \to G_\N)$; we let $\poly(\Z_\N \to (G/\Gamma)_\N)$ denote the space of all such polynomial orbits. If $F : G/\Gamma \rightarrow \C$ is a $1$-bounded, Lipschitz function then the sequence $F \circ \orbit = (F(g(n)\Gamma))_{n \in \Z}$ is called a \emph{polynomial nilsequence} of degree $d$. \end{definition} The subscripts $\N$ will become more relevant later in this paper, when we start filtering nilpotent groups and nilmanifolds by other index sets $I$ than the natural numbers $\N$. Note that we do not require $G_0$ or $G_1$ to equal $G$; this freedom will be convenient for some minor technical reasons, although ultimately it will not enlarge the space of polynomial nilsequences. Let us give the basic examples of nilsequences and polynomials: \begin{example}[Linear nilsequences are polynomial nilsequences]\label{polylin} Let $G$ be a $d$-step nilpotent Lie group, and let $\Gamma$ be a lattice of $G$. Then, as is well known (see e.g. \cite{bourbaki}), the \emph{lower central series filtration} defined by $G_{0} = G_1 := G$, $G_{2} := [G, G_{1}]$, $G_{3} := [G, G_{2}], \dots, G_{d+1} := [G, G_{d}] = \{\id\}$ is a filtration on $G$. Using the Baker-Campbell-Hausdorff formula \eqref{bch-cont} it is not difficult to show that the lower central series filtration is rational with respect to $\Gamma$, so the nilmanifold $G/\Gamma$ becomes a filtered nilmanifold. If $g(n) := g_1^n g_0$ for some $g_0, g_1 \in G$, then $\partial_{h_1} g(n) = g_1^{h_1}$ and $\partial_{h_i} \dots \partial_{h_1} g(n) = \id$ for $i \geq 2$: therefore $g$ is a polynomial sequence, and so every linear orbit $n \mapsto g^n x$ with $g \in G$ and $x \in G/\Gamma$ is a polynomial orbit also. As a consequence we see that every $d$-step linear nilsequence $n \mapsto F(g^n x)$ is automatically a polynomial nilsequence of degree $\leq d$. \end{example} \begin{example}[Polynomial phases are polynomial nilsequences]\label{polyphase} Let $d \geq 0$ be an integer. Then we can give the unit circle $\T$ the structure of a degree $\leq d$ filtered nilmanifold by setting $G := \R$ and $\Gamma := \Z$, with $G_i := \R$ for $i \leq d$ and $G_i := \{0\}$ for $i>d$. This is clearly a filtered nilmanifold. If $\alpha_0,\ldots,\alpha_d$ are real numbers, then the polynomial $P(n) := \alpha_0 + \ldots + \alpha_d n^d$ is polynomial with respect to this filtration, with $n \mapsto P(n) \mod 1$ being a polynomial orbit in $\T$. Thus, for any Lipschitz function $F: \T \to \C$, the sequence $n \mapsto F(P(n))$ is a polynomial nilsequence of degree $\leq d$; in particular, the polynomial phase $n \mapsto e(P(n))$ is a polynomial nilsequence.
\end{example} \begin{example}[Combinations of monomials are polynomials]\label{lazard-ex} By Corollary \ref{laz}, we see that if $G = (G,(G_i)_{i \in\N})$ is a filtered group of degree $\leq d$, then any sequence of the form $$ n \mapsto \prod_{j=1}^k g_j^{P_j(n)},$$ in which $g_j \in G_{d_j}$ for some $d_j \in \N$, and $P_j: \Z \to \R$ is a polynomial of degree $\leq d_j$, will be a polynomial map. Thus for instance $$ n \mapsto g_d^{\binom{n}{d}} \ldots g_2^{\binom{n}{2}} g_1^n g_0$$ is a polynomial map whenever $g_j \in G_j$ for $j=0,\ldots,d$. In fact, all polynomial maps can be expressed in such a fashion via a \emph{Taylor expansion}; see Lemma \ref{taylo}. \end{example} We will give several further examples and properties of polynomial maps and polynomial nilsequences in \S \ref{nilcharacters}. As a consequence of Example \ref{polylin}, the following variant of the inverse conjecture $\GI(s)$ is ostensibly weaker than that stated in the introduction. \begin{conjecture}[$\GI(s)$, polynomial formulation]\label{gis-poly} Let $s \geq 0$ be an integer, and let $0 < \delta \leq 1$. Then there exists a finite collection ${\mathcal M}_{s,\delta}$ of filtered nilmanifolds $G/\Gamma = (G/\Gamma,G_\N)$, each equipped with some smooth Riemannian metric $d_{G/\Gamma}$ as well as constants $C(s,\delta), c(s,\delta) > 0$ with the following property. Whenever $N \geq 1$ and $f : [N] \rightarrow \C$ is a function bounded in magnitude by $1$ such that $\Vert f \Vert_{U^{s+1}[N]} \geq \delta$, there exists a filtered nilmanifold $G/\Gamma \in {\mathcal M}_{s,\delta}$, some $g \in \poly(\Z_\N \to G_{\N})$ and a function $F: G/\Gamma \to \C$ bounded in magnitude by $1$ and with Lipschitz constant at most $C(s,\delta)$ with respect to the metric $d_{G/\Gamma}$ such that $$ |\E_{n \in [N]} f(n) \overline{F(g(n)\Gamma)}| \geq c(s,\delta).$$ \end{conjecture} It turns out that this conjecture is actually \emph{equivalent} to Conjecture \ref{gis-conj}; we shall prove this equivalence in Appendix \ref{lift-app}. We remark that, though it might seem odd to put a non-trivial part of the proof of our main theorem in an appendix, we would rather encourage the reader to regard the proof of Conjecture \ref{gis-poly} as our main theorem. The rationale behind this is that everything that is done with linear nilsequences $F(g^nx \Gamma)$ in \cite{green-tao-linearprimes} could have been done equally well, and perhaps more naturally, with polynomial nilsequences $F(g(n)\Gamma)$. Further remarks along these lines were made in the introduction to our earlier paper \cite{u4-inverse}, where the polynomial formulation was emphasised from the outset. Here, however, we have felt a sense of duty to formally complete the programme outlined in \cite{green-tao-linearprimes}. Henceforth we shall refer simply to a \emph{nilsequence}, rather than a polynomial nilsequence. In \S \ref{nilcharacters} we will need to generalise the notion of a (polynomial) nilsequence by allowing more exotic filtrations $G_I$ on the group $G$, indexed by more complicated index sets $I$ than the natural numbers $\N$. In particular, we shall introduce the \emph{multidegree filtration}, which allows us to define nilsequences of several variables, as well as the \emph{degree-rank} filtration which provides a finer classification of polynomial sequences than merely the degree. We will discuss these using examples, and then develop a more unified theory that contains all three. 
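Before leaving this section, let us verify the simplest nonlinear instance of Example \ref{lazard-ex} directly from the definitions, as the computation is instructive. Suppose that $G_\N$ is a filtration of degree $\leq 2$ and that $g(n) := g_2^{\binom{n}{2}}$ for some $g_2 \in G_2$. All the group elements appearing here are powers of $g_2$ and therefore commute with one another, so we may compute $$ \partial_h g(n) = g_2^{\binom{n+h}{2} - \binom{n}{2}} = g_2^{nh + \binom{h}{2}} \in G_2 \subseteq G_1$$ and then $$ \partial_k \partial_h g(n) = g_2^{(n+k)h - nh} = g_2^{hk} \in G_2,$$ while every third derivative is the identity and thus lies in $G_3 = \{\id\}$. Hence $g \in \poly(\Z_\N \to G_\N)$, as claimed.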
\section{Taking ultralimits}\label{nsa-sec} The inverse conjecture, Conjecture \ref{gis-poly}, is a purely finitary statement, involving functions on a finite set $[N] = \{1,\ldots,N\}$ of integers. As such, it is natural to look for proofs of this conjecture which are also purely finitary, and much of the previous literature on these types of problems is indeed of this nature. However there is a very notable exception, namely the portion of the literature that exploits the \emph{Furstenberg correspondence principle} between combinatorial problems and ergodic theory. See \cite{furstenberg} for the original application to Szemer\'edi's theorem, or \cite{tao-ziegler} for a more recent application to Gowers norms over finite fields. Here we use a somewhat different type of limit object, namely an \emph{ultralimit}. We are certainly not the first to employ ultralimits (a.k.a. \emph{nonstandard analysis}) in additive number theory; see for example \cite{jin}. The ultralimit formalism allows us to convert a ``finitary'' or ``standard'' statement such as Conjecture \ref{gis-poly} into an equivalent statement concerning \emph{limit objects}, constructed as ultralimits of standard objects. This procedure is closely related to the use of the \emph{transfer principle} in nonstandard analysis, but we have elected to eschew the language of nonstandard analysis in order to reduce confusion, instead focusing on the machinery of ultralimits. Here is a brief and somewhat vague list of the advantages of using the ultralimit approach. \begin{itemize} \item Pigeonholing arguments are straightforward (due to the fact that a limit function taking finitely many values is constant); \item Book-keeping of constants: one can talk rigorously about such concepts as ``bounded'' functions without a need to quantify the bounds; \item One may make rigorous sense of such statements as ``the function $f: [N] \to \C$ and the function $g: [N] \to \C$ are equivalent modulo degree $s$ nilsequences''. \item In the infinitary context one may easily perform \emph{rank reduction} arguments in which one seeks to find the ``minimal bounded-complexity'' representation of a given system. \end{itemize} There are also some drawbacks of the approach: \begin{itemize} \item It becomes quite difficult to extract any quantitative bounds from our results, in particular we do not give explicit bounds on the constant $c(s,\delta)$ or on the complexity of the nilsequence in Conjecture \ref{gis-conj} or Conjecture \ref{gis-poly}. It is in principle possible to expand the ultralimit proof into a standard proof, but the bounds are quite poor (of Ackermann type) due to the repeated use of ``rank reduction arguments'' and other highly iterative schemes that arise in the conversion of ultralimit arguments to standard ones. For further discussion of the relation of ultralimit analysis to finitary analysis see \cite[\S 1.3, \S 1.5]{structure}. \item The language of ultrafilters adds one more layer of notational complexity to an already notationally-intensive paper; however, there are gains to be made elsewhere, most notably in eliminating many quantitative constants (e.g. $\eps$, $N$) and growth functions (e.g. ${\mathcal F}$). \end{itemize} \textsc{Limit formulation of $\GI(s)$.} The basic notation and theory of ultralimits are reviewed in Appendix \ref{nsa-app}. We now use this formalism to convert the inverse conjecture, $\GI(s)$, into an equivalent statement formulated in the framework of ultralimits. 
We first consider a limit version of the concept of a Lipschitz function on a nilmanifold. For technical reasons we will need to consider vector-valued functions, taking values in $\C^D$ or $\overline \C^D$ rather than $\C$ or $\overline\C$. \begin{definition}[Lipschitz functions]\label{lip-def} Let $G/\Gamma$ be a standard nilmanifold, and let $D \in \N^+$ be standard. \begin{itemize} \item We let $\Lip(G/\Gamma \to \C^D)$ be the space of standard Lipschitz functions $F: G/\Gamma \to \C^D$. (Here we endow the compact manifold $G/\Gamma$ with a smooth metric in an arbitrary fashion; the exact choice of metric is not relevant.) \item We let $\Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ be the space of bounded limit functions $F: \ultra(G/\Gamma) \to \overline{\C}^D$ whose Lipschitz constant is bounded (or equivalently, $F$ is an ultralimit of uniformly bounded functions $F_\n: G/\Gamma \to \C^D$ with uniformly bounded Lipschitz constant). \item We let $\Lip(\ultra(G/\Gamma) \to \overline{S^{2D-1}})$ be the functions in $\Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ that take values in the (limit) complex sphere $$ \overline{S^{2D-1}} := \{ z \in \overline{\C}^D: |z| = 1\}.$$ \item We write \[ \Lip(\ultra(G/\Gamma) \to \overline{\C}^\omega) := \bigcup_{D \in \N^+} \Lip(\ultra(G/\Gamma) \to \overline{\C}^D)\] and \[\Lip(\ultra(G/\Gamma) \to \overline{S^\omega}) := \bigcup_{D \in \N^+} \Lip(\ultra(G/\Gamma) \to \overline{S^{2D-1}}).\] \end{itemize} We will often abbreviate these spaces as $\Lip(G/\Gamma)$ or $\Lip(\ultra(G/\Gamma))$ when the range of the functions involved is not relevant to the discussion. \end{definition} \emph{Remark.} As $G/\Gamma$ is compact, we see from the Arzel\`a-Ascoli theorem that, for each fixed standard $M$, the subset of $\Lip(G/\Gamma \to \C^D)$ consisting of functions bounded in magnitude and in Lipschitz constant by $M$ is compact in the $L^\infty(G/\Gamma \to \C^D)$ topology. As a consequence, if we embed $\Lip(G/\Gamma \to \C^D)$ into $\Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ in the obvious manner, then the former is a dense subspace of the latter in the (standard) uniform topology, in the sense that for every $F \in \Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ and every standard $\eps > 0$ there exists $F' \in \Lip(G/\Gamma \to \C^D)$ such that $|F(x)-F'(x)| \leq \eps$ for all $x \in \ultra(G/\Gamma)$. \emph{Remark.} Observe that the spaces $\Lip(\ultra(G/\Gamma) \to \overline{\C}^D)$ and $\Lip(\ultra(G/\Gamma) \to \overline{\C}^\omega)$ are vector spaces over $\overline{\C}$. The spaces $\Lip(\ultra(G/\Gamma) \to \overline{\C}^\omega)$ and $\Lip(\ultra(G/\Gamma) \to \overline{S^\omega})$ are also closed under tensor product (as defined in \S \ref{notation-sec}). All the spaces defined in Definition \ref{lip-def} are closed under complex conjugation. Using the above notion, we can define the limit version of a (polynomial) nilsequence. \begin{definition}[Nilsequence]\label{nilseq} Let $s \geq 0$ be standard. A \emph{nilsequence} of degree $\leq s$ is any limit function $\psi: \ultra \Z \to \ultra \C$ of the form $\psi(n) := F(g(n) \ultra \Gamma)$, where $G/\Gamma = (G/\Gamma,G_\N)$ is a standard filtered nilmanifold of degree $\leq s$, $g: \ultra \Z \to \ultra G$ is a limit polynomial sequence (i.e. an ultralimit of polynomial sequences $g_\n: \Z \to G$), and $F \in \Lip(\ultra(G/\Gamma) \to \overline{\C})$. \end{definition} Given any limit subset $\Omega$ of $\ultra \Z$, we denote the space of degree $d$ nilsequences, restricted to $\Omega$, as $\Nil^{d}(\Omega) = \Nil^{d}(\Omega \to \overline{\C}^\omega)$; this is a subset of $L^\infty(\Omega \to \overline{\C}^\omega)$.
We write $\Nil^{d}(\Omega \to \overline{\C}^D)$ for the nilsequences that take values in $\overline{\C}^D$; this is a subspace (over $\overline{\C}$) of $L^\infty(\Omega \to \overline{\C}^D)$. We make the technical remark that $\Nil^{d}(\Omega)$ is a $\sigma$-limit set, since one can express this space as the union, over all standard $M$ and dimensions $D$, of the nilsequences taking values in $\overline{\C}^D$ arising from a nilmanifold of ``complexity'' $M$ and a Lipschitz function of constant at most $M$, where one defines the complexity of a nilmanifold in some suitable fashion. In particular, the limit selection lemma in Corollary \ref{mes-select} can be applied to this set. We also define the Gowers uniformity norm $\Vert f\Vert_{U^{s+1}[N]}$ of an ultralimit $f= \lim_{\n \to p} f_\n$ of standard functions $f_\n: [N_\n] \to \C$ in the usual limit fashion $$ \|f\|_{U^{s+1}[N]} := \lim_{\n \to p} \|f_\n\|_{U^{s+1}[N_\n]}.$$ If $f$ is vector-valued instead of scalar valued, say $f = (f_1,\ldots,f_d)$, then we define the uniformity norm by the formula $$ \|f\|_{U^{s+1}[N]} := (\sum_{i=1}^d \|f_i\|_{U^{s+1}[N]}^{2^{s+1}})^{1/2^{s+1}}.$$ (The exponent $2^{s+1}$ is not important here, but has some very slight aesthetic advantages over other equivalent formulations of the vector-valued norm.) The ultralimit formulation of $\GI(s)$ can then be given as follows: \begin{conjecture}[Ultralimit formulation of $\GI(s)$]\label{gis-conj-nonst} Let $s \geq 0$ be standard and $N \geq 1$ be a limit natural number. Suppose that $f \in L^\infty([N] \to \overline{\C})$ is such that $\Vert f \Vert_{U^{s+1}[N]} \gg 1$. Then $f$ correlates with a degree $\leq s$ nilsequence on $[N]$. \end{conjecture} See Definition \ref{linfty} for the definition of \emph{correlation} in this context. We now show why, for any fixed standard $s$, Conjecture \ref{gis-conj-nonst} is equivalent to its more traditional counterpart, Conjecture \ref{gis-poly}. \begin{proof}[Proof of Conjecture \ref{gis-conj-nonst} assuming Conjecture \ref{gis-poly}] Let $f$ be as in Conjecture \ref{gis-conj-nonst}. We may normalise the bounded function $f$ to be bounded by $1$ in magnitude throughout. By hypothesis, there exists a standard $\delta > 0$ such that $\Vert f \Vert_{U^{s+1}[N]} \geq \delta$. Writing $N$ and $f$ as the ultralimits of $N_\n$, $f_\n$ respectively for some $f_\n: [N_\n] \to \C$ bounded in magnitude by $1$, and applying Conjecture \ref{gis-poly}, we conclude that for $\n$ sufficiently close to $p$, we have the correlation bound $$ |\E_{n_\n \in [N_\n]} f_\n(n_\n) \overline{F_\n(g_\n(n_\n) \Gamma_\n)}| \geq c(s,\delta)> 0$$ where $G_\n/\Gamma_\n, g_\n, F_\n$ are as in Conjecture \ref{gis-poly}. Writing $G/\Gamma, g, F$ for the ultralimits of $G_\n/\Gamma_\n, g_\n, F_\n$ respectively, we thus have $$ |\E_{n \in [N]} f(n) \overline{F(g(n) \ultra \Gamma)}| \gg 1.$$ By the pigeonhole principle (cf. Appendix \ref{nsa-app}), we see that $G/\Gamma$ is a standard degree $\leq s$ filtered nilmanifold, while $g: \ultra \Z \to \ultra G$ remains a limit object. The limit function $F$ lies in $\Lip(\ultra(G/\Gamma) \to \overline{\C})$ by construction, and the claim follows. \end{proof} \emph{Proof of Conjecture \ref{gis-poly} assuming Conjecture \ref{gis-conj-nonst}.} Observe (from the theory of Mal'cev bases \cite{malcev}) that there are only countably many degree $\leq s$ nilmanifolds $G/\Gamma$ up to isomorphism, which we may enumerate as $G_\n/\Gamma_\n$.
We endow each of these nilmanifolds arbitrarily with some smooth Riemannian metric $d_{G_\n/\Gamma_\n}$. Suppose for contradiction that Conjecture \ref{gis-poly} failed. Carefully negating all the quantifiers, we may thus find a $\delta > 0$, a sequence $N_\n$ of standard integers, and a function $f_\n: [N_\n] \to \C$ bounded in magnitude by $1$ with $\|f_\n\|_{U^{s+1}[N_\n]} \geq \delta$, such that \begin{equation}\label{george} |\E_{n_\n \in [N_\n]} f_\n(n_\n) \overline{F(g(n_\n) \Gamma_{\n'})}| \leq 1/\n \end{equation} whenever $\n' \leq \n$, $g \in \poly(\Z_\N \to (G_{\n'})_\N)$, and $F: G_{\n'}/\Gamma_{\n'} \to \C$ is bounded in magnitude by $1$ and has a Lipschitz constant of at most $\n$ with respect to $d_{G_{\n'}/\Gamma_{\n'}}$. On the other hand, viewing the ultralimit $f$ of the $f_\n$ as a bounded limit function, we can apply Conjecture \ref{gis-conj-nonst} and conclude that there exists a standard filtered nilmanifold $G/\Gamma$ with some smooth Riemannian metric $d_{G/\Gamma}$, a limit polynomial $g: \ultra \Z \to \ultra G$, and some ultralimit $F \in \Lip(\ultra(G/\Gamma) \to \overline{\C})$ of functions $F_\n: G/\Gamma \to \C$ with uniformly bounded Lipschitz norm, such that $$ |\E_{n \in [N]} f(n) \overline{F(g(n) \ultra \Gamma)}| \geq \eps$$ for some standard $\eps > 0$. By construction, $G/\Gamma$ is isomorphic to $G_{\n_0}/\Gamma_{\n_0}$ for some $\n_0$, so we may assume without loss of generality that $G/\Gamma = G_{\n_0}/\Gamma_{\n_0}$; since all smooth Riemannian metrics on a compact manifold are equivalent, we can also assume that $d_{G/\Gamma} = d_{G_{\n_0}/\Gamma_{\n_0}}$. We may also normalise $F$ to be bounded in magnitude by $1$. But this contradicts \eqref{george} for $\n$ sufficiently large, and the claim follows.\endproof Thus, to establish Theorem \ref{mainthm}, it will suffice to establish Conjecture \ref{gis-conj-nonst} for $s \geq 3$. This is the objective of the remainder of the paper. \emph{Remark.} We transformed the finitary linear inverse conjecture, Conjecture \ref{gis-conj}, into a nonstandard polynomial formulation, Conjecture \ref{gis-conj-nonst}, via the finitary polynomial inverse conjecture, Conjecture \ref{gis-poly}. One can also swap the order of these equivalences, transforming the finitary linear inverse conjecture into a nonstandard linear formulation by arguing as above, and then transforming the latter into a nonstandard polynomial formulation by using Proposition \ref{lift}. Of course the two arguments are essentially equivalent. Conjecture \ref{gis-conj-nonst} is trivial when $N$ is bounded, since every function in $L^\infty[N]$ is then a nilsequence of degree at most $s$. For the remainder of the paper we shall thus adopt the convention that $N$ denotes a fixed \emph{unbounded} limit integer. To conclude this section we reformulate Conjecture \ref{gis-conj-nonst} by introducing the important notion of \emph{bias}. \begin{definition}[Bias and correlation] Let $\Omega$ be a limit finite subset of $\ultra \Z$, and let $d \in \N$. We say that $f, g \in L^\infty(\Omega \to \overline{\C}^\omega)$ \emph{$d$-correlate} if we have $$|\E_{n \in \Omega} f(n) \otimes \overline{g(n)} \otimes \psi(n)| \gg 1$$ for some degree $d$ nilsequence $\psi \in \Nil^{d}(\Omega \to \overline{\C}^\omega)$. We say that $f$ is \emph{$d$-biased} if $f$ $d$-correlates with the constant function $1$, and \emph{$d$-unbiased} otherwise. \end{definition} With this definition, Conjecture \ref{gis-conj-nonst} can be reformulated in the following manner.
\begin{conjecture}[Limit formulation of $\GI(s)$, II]\label{gis-conj-nonst-2} Let $s \geq 0$ be standard. Suppose that $f \in L^\infty([N] \to \overline{\C})$ is such that $\Vert f \Vert_{U^{s+1}[N]} \gg 1$. Then $f$ is $s$-biased. \end{conjecture} From previous literature, we see that Conjecture \ref{gis-conj-nonst-2} has already been proven for $s \leq 2$; we need to establish it for all $s \geq 3$. We also make the basic remark that while the conjecture is only phrased for scalar-valued functions $f \in L^\infty([N] \to \overline \C)$, it automatically generalises to vector-valued functions $f \in L^\infty([N] \to \overline \C^\omega)$, since if a vector-valued function $f$ has large $U^{s+1}[N]$ norm, then so does one of its components. Finally we remark that the converse implication is known. \begin{proposition}[Converse $\GI(s)$, ultralimit formulation]\label{inv-nec-nonst} Let $s \geq 0$ be standard. Suppose that $f \in L^\infty([N] \to \overline{\C})$ is $s$-biased. Then $\Vert f \Vert_{U^{s+1}[N]} \gg 1$. \end{proposition} \begin{proof} This follows from \cite[Proposition 12.6]{green-tao-u3inverse}, \cite[\S 11]{green-tao-linearprimes}, or \cite[Proposition 1.4]{u4-inverse}, transferred to the ultralimit setting in the usual fashion. \end{proof} \section{Nilcharacters and symbols in one and several variables}\label{nilcharacters} Conjecture \ref{gis-conj-nonst} asserts that a function in $L^\infty([N] \to \overline{\C})$ on an unbounded interval $[N]$ with $\Vert f \Vert_{U^{s+1}[N]} \gg 1$ correlates with a degree $\leq s$ nilsequence. For inductive reasons, it is useful to observe that this conclusion implies a strengthened version of itself, in which $f$ correlates with a special type of degree $\leq s$ nilsequence, namely a degree $s$ \emph{nilcharacter}. A nilcharacter is a special type of nilsequence and should be thought of, very roughly speaking, as a generalisation of characters $e(\alpha n)$ in the degree $1$ setting, or objects such as $e(\alpha n \{\beta n\})$ in the degree $2$ setting; these were crucial in our paper on $\GI(3)$ \cite{u4-inverse}, although the notation there was slightly different. See \cite{gtz-announce} for further informal discussion of nilcharacters. In the $s=1$ case, a nilcharacter is essentially (ignoring constants) the same thing as a linear phase function $n \mapsto e(\xi n)$, and the frequency $\xi$ can be viewed as living in the Pontryagin dual of $\ultra \Z$ (or, in some sense, of $[N]$, even though the latter set is not quite a locally compact abelian group). It will turn out that more generally, a degree $s$ nilcharacter will have a ``symbol'' (analogous to the frequency $\xi$) that takes values in a ``higher order Pontryagin dual'' $\Symb^s([N])$ of $[N]$; this symbol can be interpreted as the ``top order term'' of a nilcharacter, for instance the symbol of the degree $3$ nilcharacter $n \mapsto e(\alpha n^3 + \beta n^2 + \gamma n + \delta)$ is basically\footnote{This is an oversimplification; it would be more accurate to say that the symbol is given by $\alpha$ modulo $\ultra \Z + \Q + O(N^{-3})$.} $\alpha$. This higher order dual obeys a number of pleasant algebraic properties, and the primary purpose of this section is to develop those properties.
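To see informally why the symbol should only be defined modulo corrections of the shape indicated in the footnote, suppose that $\alpha' = \alpha + m + p/q + \theta$ with $m \in \ultra \Z$, $p/q \in \Q$ and $\theta = O(N^{-3})$. Then $$ e(\alpha' n^3) \overline{e(\alpha n^3)} = e( m n^3 )\, e( (p/q) n^3 )\, e( \theta n^3 ),$$ and on $[N]$ the first factor is identically $1$, the second is periodic with period $q = O(1)$, and the third has a phase which varies by only $O(1)$ across the whole interval and is therefore ``smooth'' in the sense alluded to in the introduction. All three factors are thus of lower order than a genuine degree $3$ term, and so $e(\alpha n^3 + \ldots)$ and $e(\alpha' n^3 + \ldots)$ should be regarded as having the same top order behaviour.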
There are various additional complications to be taken into account: \begin{itemize} \item We will require multidimensional generalisations of these concepts (think of the two-dimensional sequence $(n_1,n_2) \mapsto e(\alpha n_1 \{\beta n_2\})$) together with appropriate notions of \emph{multidegree} in order to make sense of ``top-order'' and ``lower-order terms''; \item We will be dealing with $\C^D$-valued (or, rather, $S^{2D-1}$-valued) nilsequences rather than merely scalar ones. This is so that we may continue to work in the smooth category, as discussed in the introduction; \item The language of ultrafilters will be used. \end{itemize} Our main focus here will be on the first of these points. The second is largely a technicality, whilst the third is actually helpful in that the notion of symbol (for example) is rather clean and does not require discussion of complexity bounds. \vspace{11pt} \textsc{Motivation and one-dimensional definitions.} We now give the definitions of a (one-dimensional) nilcharacter and its symbol, and give a few examples. However, we will hold off for now on actually proving too much about these concepts, because we will shortly need to generalise these notions to a more abstract setting in which one also allows multidimensional nilcharacters, and nilcharacters that are attuned not just to a specific degree, but also to a specific ``rank'' inside that degree. \begin{definition}[Nilcharacter]\label{nilch-def} Let $d \geq 0$ be a standard integer. A \emph{nilcharacter} $\chi$ of degree $d$ on $[N]$ is a nilsequence $\chi(n) = F(\orbit(n)) = F(g(n) \ultra \Gamma)$ on $[N]$ of degree $\leq d$, where the function $F \in \Lip(\ultra(G/\Gamma) \to \overline{\C}^\omega)$ obeys two additional properties: \begin{itemize} \item $F \in \Lip(\ultra(G/\Gamma) \to \overline{S^{\omega}})$ (thus $|F|=1$ pointwise, and hence $|\chi|=1$ pointwise also); and \item $F( g_d x ) = e( \eta(g_d) ) F(x)$ for all $x \in G/\Gamma$ and $g_d \in G_{d}$, where $\eta: G_{d} \to \R$ is a continuous standard homomorphism which maps $\Gamma_{d}$ to the integers (or equivalently, $\eta$ is an element of the Pontryagin dual of the torus $G_d/\Gamma_d$). We call $\eta$ the \emph{vertical frequency} of $F$. \end{itemize} The space of all nilcharacters of degree $d$ on $[N]$ is denoted $\Xi^d([N])$. \end{definition} \begin{example} When $d=1$, the only examples of nilcharacters are the linear phases $n \mapsto e( \alpha n + \beta )$ for $\alpha, \beta \in \ultra \R$. \end{example} \begin{example} For any $\alpha_0,\ldots,\alpha_d \in \ultra \R$, the function $n \mapsto e(\alpha_0 + \ldots + \alpha_d n^d)$ is a nilcharacter of degree $d$. To see this, we set $G/\Gamma$ to be the unit circle $\T$ with the filtration $G_i := \R$ for $i \leq d$ and $G_i := \{0\}$ for $i>d$ (thus $G/\Gamma$ is of degree $d$), let $g(n) := \alpha_0 + \ldots + \alpha_d n^d$, and let $F(x) := e(x)$. The vertical frequency $\eta: \R \to \R$ is then just the identity function. \end{example} Now we give an instructive \emph{near}-example of a nilcharacter. Let $G$ be the free $2$-step nilpotent Lie group on two generators $e_1,e_2$, thus \begin{equation}\label{heisen} G := \langle e_1,e_2\rangle_\R = \{ e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}}: t_1,t_2,t_{12} \in \R\} \end{equation} with the element $[e_1,e_2]$ being central, but with no other relations between $e_1, e_2$ and $[e_1,e_2]$.
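Before proceeding, it may help to record the multiplication law of $G$ explicitly; with the commutator convention $[a,b] := a^{-1}b^{-1}ab$ (so that $e_1^{s} e_2^{t} = e_2^{t} e_1^{s} [e_1,e_2]^{st}$, a special case of the Baker-Campbell-Hausdorff formula \eqref{bch}), one computes $$ \left(e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}}\right)\left(e_1^{s_1} e_2^{s_2} [e_1,e_2]^{s_{12}}\right) = e_1^{t_1+s_1} e_2^{t_2+s_2} [e_1,e_2]^{t_{12}+s_{12}-s_1 t_2},$$ which can be used to verify the computations below.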
The group $G$ becomes a degree $\leq 2$ filtered nilpotent Lie group if we set $G_0 = G_1 := G$ and $$G_2 := \langle [e_1,e_2] \rangle_\R = \{ [e_1,e_2]^{t_{12}}: t_{12} \in \R \}.$$ We let $$\Gamma := \langle e_1,e_2 \rangle = \{ e_1^{n_1} e_2^{n_2} [e_1,e_2]^{n_{12}}: n_1,n_2,n_{12} \in \Z\}$$ be the discrete subgroup of $G$ generated by $e_1,e_2$; then $G/\Gamma$ is a degree $\leq 2$ filtered nilmanifold, known as the \emph{Heisenberg nilmanifold}, and elements of $G/\Gamma$ can be uniquely expressed using the fundamental domain $$ G/\Gamma = \{ e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}} \Gamma: t_1,t_2,t_{12} \in I_0 := (-1/2,1/2]\}.$$ If we then set $g: \ultra \Z \to \ultra G$ to be the limit polynomial sequence $g(n) := e_2^{\beta n} e_1^{\alpha n}$ for some fixed $\alpha,\beta \in \ultra \R$, and let $F: G/\Gamma \to \C$ be the function defined on the fundamental domain by the formula \begin{equation}\label{fdef} F( e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}} \Gamma ) := e( -t_{12} ) \end{equation} for $t_1,t_2,t_{12} \in I_0$, then one easily computes that $$ F( g(n) \ultra \Gamma ) = e( \{\alpha n\} \beta n )$$ where $\{\cdot\}: \R \to I_0$ is the signed fractional part function. The function $n \mapsto e( \{\alpha n\} \beta n )$ is then \emph{almost} a nilcharacter of degree $2$, with vertical frequency given by the function $\eta: [e_1,e_2]^{t_{12}} \mapsto -t_{12}$. All the properties required to give a nilcharacter in Definition \ref{nilch-def} are satisfied, save for one: the function $F$ is not Lipschitz on all of $G/\Gamma$, but is instead merely \emph{piecewise} Lipschitz, being discontinuous at some portions of the boundary of the fundamental domain. To put it another way, one can view $n \mapsto e(\{ \alpha n \} \beta n)$ as a \emph{piecewise} nilcharacter of degree $2$. Indeed, a topological obstruction prevents one from constructing \emph{any} scalar function $F \in \Lip(\ultra(G/\Gamma) \to \overline{S^1})$ of unit magnitude on the Heisenberg nilmanifold with the above vertical frequency. By taking standard parts, we may assume that $F$ comes from a standard Lipschitz function $F: G/\Gamma \to S^1$ with the same vertical frequency. For any standard $t \in [-1/2,1/2]$, consider the loop $\gamma_t := \{ e_1^t e_2^s \Gamma: s \in I_0\}$. The image $F(\gamma_t)$ of this loop lives on the unit circle and thus has a well-defined winding number (or degree). As this degree must vary continuously in $t$ while remaining an integer, it is constant in $t$; in particular, $F(\gamma_{-1/2})$ and $F(\gamma_{1/2})$ must have the same winding number. On the other hand, from the Baker-Campbell-Hausdorff formula \eqref{bch} we see that $$F( e_1^{1/2} e_2^s \Gamma ) = F( e_1^{-1/2} e_2^s e_1 [e_1,e_2]^s \Gamma ) = e(-s) F( e_1^{-1/2} e_2^s \Gamma )$$ and so the winding number of $F(\gamma_{1/2})$ is one less than the winding number of $F(\gamma_{-1/2})$, a contradiction. If however we allow ourselves to work with higher dimensions $D$, then this topological obstruction disappears. Indeed, let us take a smooth partition of unity $1 = \sum_{k=1}^D \varphi_k^2(t,s)$ on $\T^2$, where $D \in \N^+$ and each $\varphi_k$ is supported in $B_k \mod \Z^2$, where $B_k$ is a ball of radius $1/100$ (say) in $\R^2$.
If we define $F := (F_1,F_2,\ldots,F_D)$, where \begin{equation}\label{fkts} F_k( e_1^t e_2^s [e_1,e_2]^u \ultra \Gamma) := \varphi_k(t,s) e(-u) \end{equation} whenever $(t,s) \in \ultra B_k$ and $u \in \ultra \R$, with $F_k = 0$ if no such representation of the above form exists, then one easily verifies that $F$ lies in $\Lip(\ultra(G/\Gamma) \to \overline{S^{2D-1}})$ with the vertical frequency $\eta$, and so the vector-valued sequence $\chi: n \mapsto F( g(n) \ultra \Gamma)$ is a nilcharacter of degree $2$. A computation shows that each component $\chi_k$ of this nilcharacter $\chi = (\chi_1,\ldots,\chi_D)$ takes the form $$ \chi_k(n) = e( \{ \alpha n - \theta_k \} \beta n ) \psi_k(n)$$ for some offset $\theta_k \in \ultra \R$ and some degree $1$ nilsequence $\psi_k$. Thus we see that $\chi$ is in some sense ``equivalent modulo lower order terms'' with the bracket polynomial phase $n \mapsto e( \{ \alpha n \} \beta n)$. We refer to the vector-valued nilsequence $\chi$ as a \emph{vector-valued smoothing} of the piecewise nilsequence $n \mapsto e(\{\alpha n \} \beta n)$; we will informally refer to this smoothing operation several times in the sequel when discussing further examples of nilsequences that are associated with bracket polynomials. Similar computations can be made in higher degree. For instance, bracket cubic phases such as $n \mapsto e( \{ \{ \alpha n \} \beta n \} \gamma n )$ or $n \mapsto e( \{ \alpha n^2 \} \beta n )$ with $\alpha,\beta,\gamma \in \ultra \R$ can be viewed as near-examples of degree $3$ nilcharacters (with the problem again being that $F$ is discontinuous on the boundary of the fundamental domain), but there exist vector-valued smoothings of these phases which are genuine degree $3$ nilcharacters. We will not detail these computations here, but they can essentially be found in \cite[Appendix E]{u4-inverse}. More generally, one can view bracket polynomial phases of degree $d$ as near-examples of nilcharacters of degree $d$ that can be converted to genuine examples using vector-valued smoothings; this fact can be made precise using the machinery from \cite{leibman}, but we will not need this machinery here. \emph{Remark.} The above topological obstruction is quite annoying; it is the sole reason that we are forced to work with vector-valued functions. There are two other approaches to avoid this topological obstruction that we know of. One is to work with \emph{piecewise} Lipschitz functions rather than Lipschitz functions. This allows one in particular to build (piecewise) nilcharacters out of \emph{bracket polynomials}. This is the approach taken in \cite{u4-inverse}; however, it requires one to develop a certain amount of ``bracket calculus'' to manipulate these polynomials, and some additional arguments are also needed to deal with the discontinuities at the edges of the piecewise components of the nilmanifold. Another approach is to work with randomly selected fundamental domains of the nilmanifold (cf. \cite{green-tao-longaps}), which eliminates topological obstructions, with the randomness being used to ``average out'' the effects of the boundary of the domain. While all three methods will eventually work for the purposes of establishing the inverse conjecture, we believe that the vector-valued approach introduces the least amount of artificial technicality. By definition, every nilcharacter of degree $d$ is a nilsequence of degree $\leq d$.
The converse is far from being true; however, one can approximate nilsequences of degree $\leq d$ by bounded linear combinations of nilcharacters of degree $d$. More precisely, we have the following lemma. \begin{lemma} Let $\psi \in \Nil^{\leq d}([N] \to \overline{\C})$ be a scalar nilsequence of degree $\leq d$, and let $\eps > 0$ be standard. Then one can approximate $\psi$ uniformly to error $\eps$ by a bounded linear combination \textup{(}over $\overline{\C}$\textup{)} of the components of nilcharacters in $\Xi^d([N])$. \end{lemma} \begin{proof} Unpacking the definitions, it suffices to show that for every degree $d$ filtered nilmanifold $G/\Gamma$, every $F \in \Lip(\ultra(G/\Gamma) \to \overline{\C})$, and every standard $\eps>0$, one can approximate $F$ uniformly to error $\eps$ by a bounded linear combination of functions in the class ${\mathcal F}(G/\Gamma)$ of components of standard Lipschitz functions $F' \in \Lip( G/\Gamma \to S^\omega )$ that have a vertical frequency in the sense of Definition \ref{nilch-def}. By taking standard parts, we may assume that $F$ is a standard Lipschitz function. Observe that ${\mathcal F}(G/\Gamma)$ is closed under multiplication and complex conjugation. By the Stone-Weierstrass theorem, it thus suffices to show that ${\mathcal F}(G/\Gamma)$ separates any two distinct points $x, y \in G/\Gamma$. If $x, y$ do not lie in the same orbit of $G_d$, then this is clear from a partition of unity (taking $\eta = 0$). If instead $x = g_d y$ for some $g_d \in G_d$, then the distinctness of $x,y$ forces $g_d \not \in \Gamma_d$, and hence by Pontryagin duality there exists a vertical frequency $\eta$ with $\eta(g_d) \neq 0$. If one then builds a nilcharacter with this frequency (by adapting the vector-valued smoothing construction \eqref{fkts}), we obtain the claim. \end{proof} We remark that this lemma can also be proven, with better quantitative bounds, by Fourier-analytic methods: see \cite[Lemma 3.7]{green-tao-nilratner}. As a corollary of the lemma, we have the following. \begin{corollary}\label{nilch-cor} Suppose that $f \in L^\infty([N] \to \overline{\C}^\omega)$. Then $f$ is $d$-biased if and only if $f$ correlates with a nilcharacter $\chi \in \Xi^d([N])$. \end{corollary} It is easy to see that if $\chi, \chi'$ are two nilcharacters of degree $d$, then the tensor product $\chi \otimes \chi'$ and complex conjugate $\overline{\chi}$ are also nilcharacters. If all nilcharacters were scalar, this would mean that the space $\Xi^d([N])$ of degree $d$ nilcharacters forms a multiplicative abelian group. Unfortunately, nilcharacters can be vector-valued, and so this statement is not quite true. However, it becomes true if one only focuses on the ``top order'' behaviour of a nilcharacter. To isolate this behaviour, we adopt the following key definition. \begin{definition}[Symbol]\label{symbol-def} Let $d \geq 0$ be a standard integer. Two nilcharacters $\chi, \chi' \in \Xi^d([N])$ of degree $d$ are \emph{equivalent} if $\chi \otimes \overline{\chi'}$ is equal on $[N]$ to a nilsequence of degree $\leq d-1$. This can be shown to be an equivalence relation (see Lemma \ref{equiv-lemma}); the equivalence class of a nilcharacter $\chi$ will be called the \emph{symbol} of $\chi$ and is denoted $[\chi]_{\Symb^d([N])}$. The space of all such symbols will be denoted $\Symb^d([N])$; we will show later (see Lemma \ref{symbolic}) that this is an abelian multiplicative group.
\end{definition} When $d=1$, two nilcharacters $n \mapsto e(\alpha n + \beta)$ and $n \mapsto e( \alpha' n + \beta')$ are equivalent if and only if $\alpha-\alpha'$ is a limit integer, and $\Symb^1([N])$ is just $\ultra\T$ in this case. However, the situation is more complicated in higher degree. To get some feel for this, consider two polynomial phases $$ \chi: n \mapsto e(\alpha_0 + \ldots + \alpha_d n^d)$$ and $$ \chi': n \mapsto e(\alpha'_0 + \ldots + \alpha'_d n^d)$$ with $\alpha_0,\ldots,\alpha_d,\alpha'_0,\ldots,\alpha'_d \in \ultra \R$, and consider the problem of determining when $\chi$ and $\chi'$ are equivalent nilcharacters of degree $d$. Certainly this is the case if $\alpha_d$ and $\alpha'_d$ are equal, or differ by a limit integer. When $d \geq 2$, there are two further important cases in which equivalence occurs. The first is when $\alpha'_d = \alpha_d + O(N^{-d})$, because in this case the top degree component $e( (\alpha_d - \alpha'_d) n^d)$ of $\chi \overline{\chi'}$ can be viewed as a Lipschitz function of $n/2N \mod 1$ (say) on $[N]$ and is thus a $1$-step nilsequence. The second is when $\alpha'_d = \alpha_d + a/q$ for some standard rational $a/q$, since in this case the top degree component $e( (\alpha_d - \alpha'_d) n^d)$ of $\chi \overline{\chi'}$ is periodic with period $q$ and can thus be viewed as a Lipschitz function of $n/q \mod 1$ and is therefore again a $1$-step nilsequence. We can combine all these cases together, and observe that $\chi$ and $\chi'$ are equivalent when $\alpha'_d = \alpha_d + a/q + O(N^{-d}) \mod 1$ for some standard rational $a/q$. It is possible to use the quantitative equidistribution theory of nilmanifolds (see \cite{green-tao-nilratner}) to show that these are in fact the \emph{only} cases in which $\chi$ and $\chi'$ are equivalent; this is a variant of the classical theorem of Weyl that a polynomial sequence is (totally) equidistributed modulo $1$ if and only if at least one non-constant coefficient is irrational. In view of this, we see that $\Symb^d([N])$ contains $\ultra \R / (\ultra \Z + \Q + N^{-d} \overline{\R})$ as a subgroup, and the symbol of $n \mapsto e(\alpha_0 + \ldots + \alpha_d n^d)$ can be identified with $$\alpha_d \hbox{ mod } 1, \Q, O(N^{-d}) := \alpha_d + \ultra \Z + \Q + N^{-d} \overline{\R}.$$ However, the presence of bracket polynomials (suitably modified to avoid the topological obstruction mentioned earlier) means that when $d \geq 2$, $\Symb^d([N])$ is somewhat larger than the above mentioned subgroup. We illustrate this with the following (non-rigorous) discussion. Take $d=2$ and consider two degree $2$ nilcharacters $\chi, \chi'$ of the form $$ \chi(n) \approx e( \{ \alpha n \} \beta n + \gamma n^2 )$$ and $$ \chi'(n) \approx e( \{ \alpha' n \} \beta' n + \gamma' n^2 )$$ for some $\alpha, \beta, \gamma, \alpha', \beta', \gamma' \in \ultra \R$, where we interpret the symbol $\approx$ loosely to mean that $\chi, \chi'$ are suitable vector-valued smoothings of the indicated bracket phases, of the type discussed earlier in this section. These smoothings may also involve some lower order nilsequences of degree $1$. As before, we consider the question of determining those values of $\alpha,\beta,\gamma,\alpha',\beta',\gamma'$ for which $\chi$ and $\chi'$ are equivalent. There are a number of fairly obvious ways in which equivalence can occur.
For instance, by modifying the previous arguments, one can show that equivalence holds when $\alpha=\alpha', \beta=\beta'$, and $\gamma-\gamma'$ is equal to a limit integer, a standard rational, or is of size $O(N^{-2})$. Similarly, equivalence occurs when $\beta=\beta'$, $\gamma=\gamma'$, and $\alpha-\alpha'$ is equal to a limit integer, a standard rational, or is of size $O(N^{-1})$. However, there are also some slightly less obvious ways in which equivalence can occur. Observe that the expression $e( \{ \alpha n\} \{\beta n\} )$ is a Lipschitz function of the fractional parts of $\alpha n$ and $\beta n$ and is thus a (piecewise) nilsequence of degree $1$ (and will become a genuine nilsequence after one performs an appropriate vector-valued smoothing). On the other hand, we have the obvious identity $$ e( (\alpha n - \{ \alpha n \}) (\beta n - \{\beta n\}) ) = 1$$ since the exponent is the product of two (limit) integers. Expanding this out and rearranging, we obtain the (slightly imprecise) relation \begin{equation}\label{brackalg} e( \{ \alpha n \} \beta n ) \approx e( - \{ \beta n \} \alpha n + \alpha \beta n^2 ) \end{equation} where we again interpret $\approx$ loosely to mean ``after a suitable vector-valued smoothing, and ignoring lower order factors''. This gives an additional route for $\chi$ and $\chi'$ to be equivalent. A similar argument also gives the variant $$ e( \{ \alpha n \} \beta n ) \approx e( \frac{1}{2} \alpha \beta n^2 )$$ whenever $\alpha,\beta$ are \emph{commensurate} in the sense that $\alpha/\beta$ is a standard rational. We thus see that the notion of equivalence is in fact already somewhat complicated in degree $2$, and the situation only becomes worse in higher degree. One can describe equivalence of bracket polynomials explicitly using \emph{bracket calculus}, as developed in \cite{leibman} (see also the earlier works \cite{bl,ha1,ha2,ha3}), but this requires a fair amount of notation and machinery. Fortunately, in this paper we will be able to treat the notion of a symbol \emph{abstractly}, without requiring an explicit description of the space $\Symb^d([N])$.\vspace{11pt} \textsc{More general types of filtration.} The notion of a one-dimensional polynomial $n \mapsto \alpha_0 + \ldots + \alpha_d n^d$ of degree $\leq d$ can of course be generalised to higher dimensions. For instance, we have the notion of a multidimensional polynomial $$ (n_1,\ldots,n_k) \mapsto \sum_{i_1,\ldots,i_k \geq 0: i_1+\ldots+i_k \leq d} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k}$$ of degree $\leq d$. We also have the slightly different notion of a multidimensional polynomial $$ (n_1,\ldots,n_k) \mapsto \sum_{i_1,\ldots,i_k \geq 0: i_j \leq d_j \hbox{ for } 1 \leq j \leq k} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k}$$ of \emph{multidegree} $\leq (d_1,\ldots,d_k)$ for some integers $d_1,\ldots,d_k\geq 0$. We can unify these two concepts into the notion of a multidimensional polynomial \begin{equation}\label{multipoly} (n_1,\ldots,n_k) \mapsto \sum_{(i_1,\ldots,i_k) \in J} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k} \end{equation} of \emph{multidegree} $\subset J$ for some finite \emph{downset} $J \subset \N^k$, i.e. a finite set of tuples with the property that $(i_1,\ldots,i_k) \in J$ whenever $(i_1,\ldots,i_k) \in \N^k$ and $i_j \leq i'_j$ for all $j=1,\ldots,k$ for some $(i'_1,\ldots,i'_k) \in J$.
Thus for instance the two-dimensional polynomial $$ (h,n) \mapsto \alpha h n + \beta h n^2 + \gamma n^3$$ for $\alpha,\beta,\gamma \in \ultra \R$ is of multidegree $\subset J$ for \[ J := \{ (0,0), (0,1), (0,2), (0,3), (1,0), (1,1), (1,2) \},\] and is also of multidegree $\leq (1,3)$ and of degree $\leq 3$. (One can view the downset $J$ as a variant of the \emph{Newton polytope} of the polynomial.) In our subsequent arguments, we will need to similarly generalise the notion of a one-dimensional nilcharacter $n \mapsto \chi(n)$ of degree $\leq d$ to a multidimensional nilcharacter $(n_1,\ldots,n_k) \mapsto \chi(n_1,\ldots,n_k)$ of degree $\leq d$, of multidegree $\leq (d_1,\ldots,d_k)$, or of multidegree $\subset J$. We will define these concepts precisely in a short while, but we mention for now that the polynomial phase $$ (h,n) \mapsto e( \alpha h n + \beta h n^2 + \gamma n^3 )$$ will be a two-dimensional nilcharacter of multidegree $\subset J$, multidegree $\leq (1,3)$, and degree $\leq 3$, where $J$ is as above. Moreover, variants of this phase, such as (a suitable vector-valued smoothing of) $$ (h,n) \mapsto e( \{ \alpha_1 h\} \alpha_2 n + \{ \{ \beta_1 n \} \beta_2 h \} \beta_3 n + \{ \gamma_1 n^2 \} \gamma_2 n ),$$ will also have the same multidegree and degree as the preceding example. The multidegree of a nilcharacter $\chi(n_1,\ldots,n_k)$ is a more precise measurement of the complexity of $\chi$ than the degree, because it separates the behaviour of the different variables $n_1,\ldots,n_k$. We will also need a different refinement of the notion of degree, this time for a one-dimensional nilcharacter $n \mapsto \chi(n)$, which now separates the behaviour of different top degree components of $\chi$, according to their ``rank''. Heuristically, the rank of such a component is the number of fractional part operations $x \mapsto \{ x \}$ that are needed to construct that component, plus one; thus for instance $$ n \mapsto e( \alpha n^3 ) $$ has degree $3$ and rank $1$, $$ n \mapsto e( \{ \alpha n^2 \} \beta n ) $$ has degree $3$ and rank $2$ (after vector-valued smoothing), $$ n \mapsto e( \{ \{ \alpha n \} \beta n \} \gamma n) $$ has degree $3$ and rank $3$ (after vector-valued smoothing), and so forth. We will then need a notion of a nilcharacter $\chi$ \emph{of degree-rank $\leq (d,r)$}, which roughly speaking means that all the components used to build $\chi$ either are of degree $<d$, or else are of degree exactly $d$ but rank at most $r$. Thus for instance, $$ n \mapsto e( \{ \alpha n \} \beta n + \gamma n^3 )$$ has degree-rank $\leq (3,1)$ (after vector-valued smoothing), while $$ n \mapsto e( \{ \alpha n \} \beta n + \gamma n^3 + \{ \delta n^2 \} \epsilon n )$$ has degree-rank $\leq (3,2)$ (after vector-valued smoothing), and $$ n \mapsto e( \{ \alpha n \} \beta n + \gamma n^3 + \{ \delta n^2 \} \epsilon n+ \{ \{ \mu n \} \nu n \} \rho n)$$ has degree-rank $\leq (3,3)$ (after vector-valued smoothing). In order to make precise the notions of multidegree and degree-rank for nilcharacters, it is convenient to adopt an abstract formalism that unifies degree, multidegree, and degree-rank into a single theory. We need the following abstract definition.
\begin{definition}[Ordering]\label{order-def} An \emph{ordering} $I = (I, \prec, +, 0)$ is a set $I$ equipped with a partial ordering $\prec$, a binary operation $+: I \times I \to I$, and a distinguished element $0 \in I$ with the following properties: \begin{enumerate} \item The operation $+$ is commutative and associative, and has $0$ as the identity element. \item The partial ordering $\prec$ has $0$ as the minimal element. \item If $i, j \in I$ are such that $i \prec j$, then $i + k \prec j+k$ for all $k \in I$. \item For every $d \in I$, the initial segment $\{ i \in I: i \prec d \}$ is finite. \end{enumerate} A \emph{finite downset} in $I$ is a finite subset $J$ of $I$ with the property that $j \in J$ whenever $j \in I$ and $j \prec i$ for some $i \in J$. \end{definition} In this paper, we will only need the following three specific orderings (with $k$ a standard positive integer): \begin{enumerate} \item The \emph{degree ordering}, in which $I = \N$ with the usual ordering, addition, and zero element. \item The \emph{multidegree ordering}, in which $I = \N^k$ with the usual addition and zero element, and with the product ordering, thus $(i'_1,\ldots,i'_k) \preceq (i_1,\ldots,i_k)$ if $i'_j \leq i_j$ for all $1 \leq j \leq k$. \item The \emph{degree-rank ordering}, in which $I$ is the sector $\DR := \{ (d,r) \in \N^2: 0 \leq r \leq d \}$ with the usual addition and zero element, and the lexicographical ordering, that is to say $(d',r') \prec (d,r)$ if $d' < d$ or if $d'=d$ and $r'<r$. \end{enumerate} It is easy to verify that each of these three explicit orderings obeys the abstract axioms in Definition \ref{order-def}. In the case of the degree or degree-rank orderings, $I$ is totally ordered (for instance, the first few degree-ranks are $(0,0), (1,0), (1,1), (2,0)$, $(2,1), (2,2), (3,0), \ldots$), and so the only finite downsets are the initial segments. For the multidegree ordering, however, the initial segments are not the only finite downsets that can occur. The one-dimensional notions of a filtration, nilsequence, nilcharacter, and symbol can be easily generalised to arbitrary orderings. We give the bare definitions here, and defer the more thorough treatment of these concepts to Appendix \ref{poly-app} and Appendix \ref{basic-sec}. We remark however that when $I$ is the degree ordering, all of the notions defined below simplify to the one-dimensional counterparts defined earlier. \begin{definition}[Filtered group]\label{filtered-group} Let $I$ be an ordering and let $G$ be a group. By an \emph{$I$-filtration} on $G$ we mean a collection $G_{I} = (G_{i})_{i \in I}$ of subgroups indexed by $I$, with the following properties: \begin{enumerate} \item (Nesting) If $i,j \in I$ are such that $i \prec j$, then $G_i \supseteq G_j$. \item (Commutators) For every $i,j \in I$, we have $[G_{i}, G_{j}] \subseteq G_{i+j}$. \end{enumerate} If $d \in I$, we say that $G$ has \emph{degree} $\leq d$ if $G_i$ is trivial whenever $i \not \preceq d$. More generally, if $J$ is a downset in $I$, we say that $G$ has \emph{degree} $\subseteq J$ if $G_i$ is trivial whenever $i \not \in J$. \end{definition} Let us explicitly adapt the above abstract definitions to the three specific orderings mentioned earlier. \begin{definition} If $(d_1,\ldots,d_k) \in \N^k$, we define a \emph{nilpotent Lie group of multidegree $\leq (d_1,\ldots,d_k)$} to be a nilpotent $I$-filtered Lie group of degree $\leq (d_1,\ldots,d_k)$, where $I = \N^k$ is the multidegree ordering.
Similarly, if $J$ is a downset, we define the notion of a nilpotent Lie group of multidegree $\subseteq J$. If $(d,r) \in \DR$, define a \emph{nilpotent Lie group of degree-rank $\leq (d,r)$} to be a nilpotent $\DR$-filtered Lie group $G$ of degree $\leq (d,r)$, with the additional axioms $G_{(0,0)}=G$ and $G_{(d,0)} = G_{(d,1)}$ for all $d \geq 1$. We define the notion of a filtered nilmanifold of multidegree $\leq (d_1,\ldots,d_k)$, multidegree $\subseteq J$, or degree-rank $\leq (d,r)$ similarly. \end{definition} Note that a degree-rank filtration must obey some additional axioms in order for the rank $r$ to play a non-trivial role. As such, the unification here of degree, multidegree, and degree-rank is not quite perfect; however this wrinkle is only of minor technical importance and should be largely ignored on a first reading. \begin{example} If $G$ is a filtered nilpotent group of multidegree $\leq (1,1)$, then the groups $G_{(1,0)}$ and $G_{(0,1)}$ must be abelian normal subgroups of $G_{(0,0)}$, and their commutator $[G_{(1,0)}, G_{(0,1)}]$ must lie inside the group $G_{(1,1)}$, which is a central subgroup of $G_{(0,0)}$. If $G$ is a filtered nilpotent group of degree-rank $\leq (d,d)$, then $(G_{(i,0)})_{i \geq 0}$ is a $\N$-filtration of degree $\leq d$. But if we reduce the rank $r$ to be strictly less than $d$, then we obtain some additional relations between the $G_{(i,0)}$ that do not come from the filtration property. For instance, if $G$ has degree-rank $\leq (3,2)$, then the group $[G_{(1,0)},[G_{(1,0)},G_{(1,0)}]]$ must now be trivial; if $G$ has degree-rank $\leq (3,1)$, then the group $[G_{(1,0)}, G_{(2,0)}]$ must also be trivial. More generally, if $G$ has degree-rank $\leq (d,r)$, then any iterated commutator of $g_{i_1},\ldots,g_{i_m}$ with $g_j \in G_{(i_j,0)}$ for $j=1,\ldots,m$ will be trivial whenever $i_1+\ldots+i_m > d$, or if $i_1+\ldots+i_m=d$ and $m>r$. \end{example} \begin{example}\label{inclusions} If $(G_i)_{i \in \N}$ is an $\N$-filtration of $G$ of degree $\leq d$, then $(G_{|\vec i|})_{\vec i \in \N^k}$ is an $\N^k$-filtration of $G$ of multidegree $\subset \{\vec i \in \N^k: |\vec i| \leq d \}$, where we recall the notational convention $|(i_1,\ldots,i_k)| = i_1 + \ldots + i_k$. Conversely, if $J$ is a finite downset of $\N^k$ and $(G_{\vec i})_{\vec i \in \N^k}$ is an $\N^k$-filtration of $G$ of multidegree $\subset J$, then $$ \left( \bigvee_{\vec i: |\vec i| \geq i} G_{\vec i} \right)_{i \in \N}$$ is easily verified (using Lemma \ref{normal}) to be an $\N$-filtration of degree $\leq \max_{\vec i \in J} |\vec i|$, where $\bigvee_{a \in A} G_a$ is the group generated by $\bigcup_{a \in A} G_a$. In particular, any multidegree $\leq (d_1,\ldots,d_k)$ filtration induces a degree $\leq d_1+\ldots+d_k$ filtration. In a similar spirit, every degree-rank $\leq (d,r)$ filtration $(G_{(d',r')})_{(d',r') \in \DR}$ of a group $G$ induces a degree $\leq d$ filtration $(G_{(i,0)})_{i \in \N}$. In the converse direction, if $(G_i)_{i \in \N}$ is a degree $\leq d$ filtration of $G$ with $G=G_0$, then we can create a degree-rank $\leq (d,d)$ filtration $(G_{(d',r')})_{(d',r') \in \DR}$ by setting $G_{(d',r')}$ to be the group generated by all the iterated commutators of $g_{i_1},\ldots,g_{i_m}$ with $g_j \in G_{i_j}$ for $j=1,\ldots,m$ for which either $i_1+\ldots+i_m > d'$, or $i_1+\ldots+i_m=d'$ and $m \geq \max(r',1)$; this can easily be verified to indeed be a filtration, thanks to Lemma \ref{normal}.
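For instance (as a simple sanity check of this last construction), if $G$ is abelian then all iterated commutators with $m \geq 2$ are trivial, and the construction collapses to $G_{(d',r')} = G_{d'}$ for $r' \leq 1$ and $G_{(d',r')} = G_{d'+1}$ for $r' \geq 2$; this is precisely the computation underlying Example \ref{dr-f} below.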
\end{example} \begin{example}\label{dr-f} Let $d \geq 1$ be a standard integer. We can give the unit circle $\T$ the structure of a degree-rank filtered nilmanifold of degree-rank $\leq (d,1)$ by setting $G=\R$ and $\Gamma=\Z$ with $G_{(d',r')} := \R$ for $(d',r') \leq (d,1)$ and $G_{(d',r')} := \{0\}$ otherwise. This is also the filtration obtained from the degree $\leq d$ filtration (see Example \ref{polyphase}) using the construction in Example \ref{inclusions}. \end{example} \begin{example}[Products]\label{prodeq} If $G_{I}$ and $G'_I$ are $I$-filtrations on groups $G, G'$ then we can give the product $G \times G'$ an $I$-filtration in an obvious way by setting $(G \times G')_i := G_i \times G'_i$. The degree of $G \times G'$ is the union of the degrees of $G$ and $G'$. Similarly the product $G_1/\Gamma_1 \times G_2/\Gamma_2$ of two $I$-filtered nilmanifolds is an $I$-filtered nilmanifold. \end{example} \begin{example}[Pushforward and pullback]\label{pushpull} Let $\phi: G \to H$ be a homomorphism of groups. Then any $I$-filtration $H_I = (H_{i})_{i \in I}$ of $H$ induces a \emph{pullback $I$-filtration} $\phi^* H_I := (\phi^{-1}(H_{i}))_{i \in I}$. Similarly, any $I$-filtration $G_{I} = (G_{i})_{i \in I}$ on $G$ induces a \emph{pushforward $I$-filtration} $\phi_* G_{I} := (\phi(G_{i}))_{i \in I}$ on $H$. In particular, if $\Gamma$ is a subgroup of $G$, then we can pull back a filtration $G_{I} = (G_{i})_{i \in I}$ of $G$ by the inclusion map $\iota : \Gamma \hookrightarrow G$ to create the \emph{restriction} $\Gamma_{I} := (\Gamma_{i})_{i \in I}$ of that filtration. It is a trivial matter to check that the subgroups of this filtration are given by $\Gamma_{i} := \Gamma \cap G_{i}$. \end{example} \begin{definition}[Filtered quotient space]\label{quot} An \emph{$I$-filtered quotient space} is a quotient $G/\Gamma$, where $G$ is an $I$-filtered group and $\Gamma$ is a subgroup of $G$ (with the induced filtration, see Example \ref{pushpull}). An \emph{$I$-filtered homomorphism} $\phi: G/\Gamma \to G'/\Gamma'$ between $I$-filtered quotient spaces is a group homomorphism $\phi: G \to G'$ which maps $\Gamma$ to $\Gamma'$, and also maps $G_i$ to $G'_i$ for all $i \in I$. Note that such a homomorphism descends to a map from $G/\Gamma$ to $G'/\Gamma'$. If $G$ is a nilpotent $I$-filtered Lie group, and $\Gamma$ is a discrete cocompact subgroup of $G$ which is rational with respect to $G_I$ (thus $\Gamma_i := \Gamma \cap G_i$ is cocompact in $G_i$ for each $i \in I$), we call $G/\Gamma = (G/\Gamma, G_I)$ an \emph{$I$-filtered nilmanifold}. We say that $G/\Gamma$ has degree $\leq d$ or $\subseteq J$ if $G$ has degree $\leq d$ or $\subseteq J$. \end{definition} \begin{example}[Subnilmanifolds] Let $G/\Gamma$ be an $I$-filtered nilmanifold of degree $\subset J$. If $H$ is a rational subgroup of $G$, then $H/(H \cap \Gamma)$ is also a filtered nilmanifold of degree $\subset J$ (using Example \ref{pushpull}), with an inclusion homomorphism from $H/(H \cap \Gamma)$ to $G/\Gamma$; we refer to $H/(H \cap \Gamma)$ as a \emph{subnilmanifold} of $G/\Gamma$. \end{example} We isolate three important examples of a filtered group, in which $G$ is the additive group $\Z$ or $\Z^k$. \begin{definition}[Basic filtrations]\label{basic-filter} We define the following filtrations: \begin{itemize} \item The \emph{degree filtration} $\Z^k_\N$ on $G = \Z^k$, in which $I = \N$ is the degree ordering and $G_i = G$ for $i \leq 1$ and $G_i = \{0\}$ otherwise. In many cases $k$ will equal $1$ or $2$.
\item The \emph{multidegree filtration} $\Z^k_{\N^k}$ on $G = \Z^k$, in which $I=\N^k$ is the multidegree ordering and $G_{\vec{0}} = \Z^k$, $G_{\vec{e}_i} = \langle \vec{e}_i\rangle$, $i = 1,\dots,k$, and $G_{\vec{v}} = \{ 0\}$ otherwise, with $\vec{e}_1,\ldots,\vec{e}_k$ being the standard basis for $\Z^k$; \item The \emph{degree-rank filtration} $\Z_\DR$ on $G = \Z$, in which $I=\DR$ is the degree-rank ordering and $G_{(0,0)} = G_{(1,0)} = \Z$ and $G_{(d,r)} = \{0\}$ otherwise. \end{itemize} \end{definition} \begin{definition}[Polynomial map]\label{poly-map-def} Suppose that $H$ and $G$ are $I$-filtered groups with $H = (H,+)$ abelian\footnote{This is not actually a necessary assumption; see Appendix \ref{poly-app}. However, in the main body of the paper we will only be concerned with polynomial maps on additive domains.}. Then for any map $g : H \rightarrow G$ we define the derivative \begin{equation}\label{partial-def} \partial_h g(n) := g(n+h) g(n)^{-1}.\end{equation} We say that $g : H \rightarrow G$ is \emph{polynomial} if \begin{equation}\label{polynomial-sequence-def} \partial_{h_1} \dots \partial_{h_m} g (n) \in G_{i_1 + \dots + i_m}\end{equation} for all $m \geq 0$, all $i_1,\dots, i_m \in I$ and all $h_j \in H_{i_j}$ for $j = 1,\dots, m$, and for all $n \in H_0$. We denote by $\poly(H_I \to G_I)$ the space of all polynomial maps from $H_I$ to $G_I$. As usual, we use $\ultra \poly(H_I \to G_I)$ to denote the space of all limit polynomial maps from $\ultra H_I$ to $\ultra G_I$ (i.e. ultralimits of polynomial maps in $\poly(H_I \to G_I)$). \end{definition} Many facts about these spaces (in some generality) are established in Appendix \ref{poly-app}; in particular, a remarkable result essentially due to Lazard and Leibman \cite{lazard,leibman-group-1,leibman-group-2} is proven there: $\poly(H_I \to G_I)$ is a group. The material in Appendix \ref{poly-app} is formulated in the general setting of abstract orderings $I$ and for arbitrary (and possibly non-abelian) groups $H_I$, but for our applications we are only interested in the special case when $H_I$ is $\Z$ or $\Z^k$ with the degree, multidegree, or degree-rank filtration as defined above. Before moving on let us be quite explicit about what the notion of a polynomial map is in each of the three cases, since the definitions take a certain amount of unravelling. \begin{itemize} \item (Degree filtration) If $H = \Z^k$ with the degree filtration $\Z^k_\N$, then $\poly(\Z^k_\N \to G_\N)$ consists of maps $g : \Z^k \rightarrow G$ with the property that \[ \partial_{h_1} \dots \partial_{h_m} g(n) \in G_m\] for all $m \geq 0$, $h_1,\dots,h_m \in \Z^k$ and all $n \in \Z^k$. This space is precisely the same space as the one considered in \cite[\S 6]{green-tao-nilratner}. The space $\ultra \poly(\Z^k_\N \to G_\N)$ is defined similarly, except that $g: \ultra \Z^k \to \ultra G$ is now a limit map, and all spaces such as $\Z$ and $G_m$ need to be replaced by their ultrapowers. (Similarly for the other two examples in this list.) \item (Multidegree filtration) If $H = \Z^k$ with the multidegree filtration $\Z^k_{\N^k}$, then $\poly(\Z^k_{\N^k} \to G_{\N^k})$ consists of maps $g : \Z^k \rightarrow G$ with the property that \[ \partial_{\vec{e}_{i_1}} \dots \partial_{\vec{e}_{i_m}} g(\vec{n}) \in G_{\vec{e}_{i_1} + \dots + \vec{e}_{i_m}}\] for all $m \ge 0$, all $i_1,\dots, i_m \in \{1,\dots,k\}$ and all $\vec{n} \in \Z^k$.
To relate this space to the analogous spaces for the degree ordering, observe (using Example \ref{inclusions}) that $$ \poly(\Z^k_\N \to (G_i)_{i \in \N} ) = \poly(\Z^k_{\N^k} \to (G_{|\vec i|})_{\vec i \in \N^k} )$$ for any $\N$-filtration $(G_i)_{i \in \N}$, and conversely one has $$ \poly(\Z^k_{\N^k} \to (G_{\vec i})_{\vec i \in \N^k} ) \subset \poly\left(\Z^k_\N \to ( \bigvee_{|\vec i| = i} G_{\vec i} )_{i \in \N} \right)$$ for any $\N^k$-filtration $(G_{\vec i})_{\vec i \in \N^k}$. This is of course related to the obvious fact that a polynomial of multidegree $\leq (d_1,\ldots,d_k)$ is automatically of degree $\leq d_1+\ldots+d_k$. \item (Degree-rank filtration) If $H = \Z$ with the degree-rank filtration $\Z_\DR$, then $\poly(\Z_{\DR} \to G_{\DR})$ consists of maps $g : \Z \rightarrow G$ with the property that \[ \partial_{h_1} \dots \partial_{h_m} g(n) \in G_{(m,0)}\] whenever $m \geq 0$, $h_1,\dots,h_m \in \Z$ and $n \in \Z$. We observe (using Example \ref{inclusions}) the obvious equality \begin{equation}\label{dreq} \poly(\Z_\DR \to (G_{(d,r)})_{(d,r) \in \DR} ) = \poly(\Z_\N \to (G_{(i,0)})_{i \in \N} ) \end{equation} for any $\DR$-filtration $(G_{(d,r)})_{(d,r) \in \DR}$. Thus, a degree-rank filtration $G_\DR$ on $G$ does not change the notion of a polynomial sequence, but instead gives some finer information on the group $G$ (and in particular, it indicates that certain iterated commutators of the $G_{(d,r)}$ vanish, which is information that cannot be discerned just from the knowledge that $(G_{(i,0)})_{i \in \N}$ is a $\N$-filtration). \end{itemize} \begin{definition}[Nilsequences and nilcharacters]\label{nilch-def-gen} Let $I$ be an ordering, and let $J$ be a finite downset in $I$. Let $H$ be an abelian $I$-filtered group. A (polynomial) nilsequence of degree $\subset J$ is any function of the form \[\chi(n) = F(g(n) \ultra \Gamma),\] where \begin{itemize} \item $G/\Gamma = (G/\Gamma,G_I)$ is an $I$-filtered nilmanifold of degree $\subset J$; \item $g \in \ultra\poly(H_{I} \to G_{I})$ is a limit polynomial map from $\ultra H_I$ to $\ultra G_I$; and \item $F \in \Lip(\ultra (G/\Gamma) \rightarrow \overline{\C}^{\omega})$. \end{itemize} The space of all such nilsequences will be denoted $\Nil^{\subset J}(\ultra H)$. We define the notion of a nilsequence of degree $\leq d$ for some $d \in I$, and the space $\Nil^{\leq d}(\ultra H)$, similarly. If $\Omega$ is a limit subset of $\ultra H$, the restriction of the nilsequences in $\Nil^{\subset J}(\ultra H)$ to $\Omega$ will be denoted $\Nil^{\subset J}(\Omega)$, and we define $\Nil^{\leq d}(\Omega)$ similarly. We refer to the map $n \mapsto g(n) \ultra \Gamma$ as a \emph{limit polynomial orbit} in $G/\Gamma$, and denote the space of such orbits as $\ultra \poly(H_I \to (G/\Gamma)_I)$. Suppose that $d \in I$. Then $\chi$ is said to be a \emph{degree $d$ nilcharacter} if $\chi$ is a degree $\leq d$ nilsequence with the following additional properties: \begin{itemize} \item $F \in \Lip(\ultra(G/\Gamma) \to \overline{S^{\omega}})$ (thus $|F|=1$) and \item $F( g_d x ) = e( \eta(g_d) ) F(x)$ for all $x \in G/\Gamma$ and $g_d \in G_{d}$, where $\eta: G_{d} \to \R$ is a continuous standard homomorphism which maps $\Gamma_{d}$ to the integers. We call $\eta$ the \emph{vertical frequency} of $F$. \end{itemize} The space of all degree $d$ nilcharacters on $\ultra H$ will be denoted $\Xi^d(\ultra H)$.
If $\Omega$ is a limit subset of $\ultra H$, the restriction of the nilcharacters in $\Xi^d(\ultra H)$ to $\Omega$ will be denoted $\Xi^d(\Omega)$. With the multidegree ordering, a degree $(d_1,\ldots,d_k)$ nilcharacter will be referred to as a multidegree $(d_1,\ldots,d_k)$ nilcharacter, and the space of such characters on $\Omega$ denoted $\Xi^{(d_1,\ldots,d_k)}_\MD(\Omega)$; we similarly write $\Nil^{\subset J}(\Omega)$ or $\Nil^{\leq (d_1,\ldots,d_k)}(\Omega)$ as $\Nil^{\subset J}_\MD(\Omega)$ or $\Nil^{\leq (d_1,\ldots,d_k)}_\MD(\Omega)$ for emphasis. Similarly, with the degree-rank ordering, and assuming $G/\Gamma$ is a filtered nilmanifold of degree-rank $\leq (d,r)$ (so in particular, we enforce the axioms $G_{(0,0)} = G$ and $G_{(d,0)}=G_{(d,1)}$), a degree $(d,r)$ nilcharacter will be referred to as a degree-rank $(d,r)$ nilcharacter. The space of nilcharacters on $\Omega$ of degree-rank $(d,r)$ will be denoted $\Xi^{(d,r)}_\DR(\Omega)$ (note that this is distinct from the space $\Xi^{(d_1,d_2)}_\MD(\Omega)$ of two-dimensional nilcharacters of multidegree $(d_1,d_2)$), and the nilsequences on $\Omega$ of degree-rank $\leq (d,r)$ will similarly be denoted $\Nil^{\leq (d,r)}_\DR(\Omega)$. \end{definition} \begin{example} Let $J \subset \N^k$ be a finite downset. Then any sequence of the form $$ (n_1,\ldots,n_k) \mapsto F\left( \sum_{(i_1,\ldots,i_k) \in J} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k} \mod 1\right),$$ where $\alpha_{i_1,\ldots,i_k} \in \ultra \R$ and $F \in \Lip( \ultra \T \to \overline{\C}^\omega )$, is a nilsequence on $\Z^k$ of multidegree $\subseteq J$, as can easily be seen by giving $G := \R$ the $\N^k$-filtration $G_i := \R$ for $i \in J$ and $G_i := \{0\}$ otherwise, and setting $\Gamma := \Z$ and $g \in \ultra \poly( \Z^k_{\N^k} \to G_{\N^k} )$ to be the limit polynomial $n \mapsto \sum_{(i_1,\ldots,i_k) \in J} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k}$. For similar reasons, any sequence of the form $$ (n_1,\ldots,n_k) \mapsto e\left( \sum_{(i_1,\ldots,i_k) \in \N^k: i_1+\ldots+i_k \leq d} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k} \mod 1\right),$$ is a nilcharacter on $\Z^k$ of degree $d$, and any sequence of the form $$ (n_1,\ldots,n_k) \mapsto e\left( \sum_{(i_1,\ldots,i_k) \in \N^k: i_j \leq d_j \hbox{ for } j=1,\ldots,k} \alpha_{i_1,\ldots,i_k} n_1^{i_1} \ldots n_k^{i_k} \mod 1\right),$$ is a multidegree $(d_1,\ldots,d_k)$ nilcharacter on $\Z^k$. \end{example} \begin{example}\label{abn} Any degree $2$ nilsequence of magnitude $1$ is automatically a degree-rank $\leq (3,0)$ nilcharacter, since every degree $\leq 2$ nilmanifold is automatically a degree-rank $\leq (2,2)$ nilmanifold, which can then be converted trivially to a degree-rank $\leq (3,0)$ nilmanifold (with a trivial group $G_{(3,0)}$). Thus for instance for $\alpha,\beta \in \R$, $$ n \mapsto e( \{ \alpha n \} \beta n )$$ is nearly a degree-rank $(3,0)$ nilcharacter, and becomes a genuine degree-rank $(3,0)$ nilcharacter after vector-valued smoothing. If $\alpha \in \ultra \R$, then the sequence $$ n \mapsto e( \alpha n^3 )$$ is a degree-rank $(3,1)$ nilcharacter. Indeed, we can give $G=\R$ a degree-rank $\leq (3,1)$ filtration $G_\DR$ by setting $G_{(d,r)} := \R$ for $(d,r) \leq (3,1)$, and $G_{(d,r)} := \{0\}$ otherwise. Next, if $\alpha, \beta \in \ultra \R$, then the sequence \begin{equation}\label{abn-eq} n \mapsto e( \{ \alpha n^2 \} \beta n ) \end{equation} is \emph{nearly} a degree-rank $(3,2)$ nilcharacter (and becomes genuinely so after vector-valued smoothing).
To see this, let $G$ be the Heisenberg nilpotent group \eqref{heisen}, which we give the following degree-rank filtration: \begin{align*} G_{(0,0)} = G_{(1,0)} = G_{(1,1)} &:= G \\ G_{(2,0)} = G_{(2,1)} &:= \langle e_1, [e_1,e_2] \rangle_\R = \{ e_1^{t_1} [e_1,e_2]^{t_{12}}: t_1,t_{12} \in \R \} \\ G_{(2,2)} = G_{(3,0)} = G_{(3,1)} = G_{(3,2)} &:= \langle [e_1,e_2] \rangle_\R = \{ [e_1,e_2]^{t_{12}}: t_{12} \in \R \} \\ G_{(d,r)} &:= \{\id\} \hbox{ for all other } (d,r) \in \DR. \end{align*} One easily verifies that this is a degree-rank $\leq (3,2)$ filtration. If we then set $g: \ultra \Z \to \ultra G$ to be the limit sequence $g(n) := e_2^{\beta n} e_1^{\alpha n^2}$, one easily verifies that $g$ is a limit polynomial with respect to this degree-rank filtration. Letting $F$ be the piecewise Lipschitz function \eqref{fdef}, we then see that $$ F( g(n) \ultra \Gamma ) = e( \{ \alpha n^2 \} \beta n )$$ and so $n \mapsto e( \{ \alpha n^2 \} \beta n )$ is indeed a piecewise degree-rank $(3,2)$ nilcharacter. A similar argument (using the free $3$-step nilpotent Lie group on three generators, which has degree $\leq 3$ and hence degree-rank $\leq (3,3)$) shows that $$ n \mapsto e( \{ \{ \alpha n \} \beta n \} \gamma n )$$ is nearly a degree-rank $(3,3)$ nilcharacter, and becomes a genuine degree-rank $(3,3)$ nilcharacter after applying vector-valued smoothing; see \cite[Appendix E]{u4-inverse} for the relevant calculations. These examples should help illustrate the heuristic that a degree-rank $(d,r)$ nilcharacter is built up using (suitable vector-valued smoothings of) bracket monomials which either have degree less than $d$, or have degree exactly $d$ and involve at most $r-1$ applications of the fractional part operation. \end{example} We observe (using Example \ref{inclusions}) the following obvious inclusions: \begin{enumerate} \item A multidegree $\leq (d_1,\ldots,d_k)$ nilsequence on $\Z^k$ is automatically a degree $\leq d_1+\ldots+d_k$ nilsequence. \item A multidegree $(d_1,\ldots,d_k)$ nilcharacter on $\Z^k$ is automatically a degree $d_1+\ldots+d_k$ nilcharacter. \item A multidegree $\leq (d_1,\ldots,d_{k-1},0)$ nilsequence on $\Z^k$ is constant in the $n_k$ variable, and descends to a multidegree $\leq (d_1,\ldots,d_{k-1})$ nilsequence on $\Z^{k-1}$. \item A degree-rank $\leq (d,r)$ nilsequence on $\Z$ is automatically a degree $\leq d$ nilsequence. \item A degree $\leq d$ nilsequence on $\Z$ is automatically a degree-rank $\leq (d,d)$ nilsequence. \item A degree $d$ nilcharacter on $\Z$ is automatically a degree-rank $\leq (d,d)$ nilcharacter. \end{enumerate} It is not quite true, though, that a degree-rank $(d,r)$ nilcharacter is a degree $d$ nilcharacter if $r>1$, because the former need not exhibit vertical frequency behaviour for degree-ranks $(d,r')$ with $r'<r$. \begin{definition}[Equivalence and symbols]\label{equiv-def} Let $H$ be an $I$-filtered group, let $d \in I$, and let $\Omega$ be a limit subset of $\ultra H$. Two nilcharacters $\chi, \chi' \in \Xi^d(\Omega)$ are said to be \emph{equivalent} if $\chi\otimes\overline{\chi'}$ is a nilsequence of degree strictly less than $d$. Write $[\chi]_{\Symb^d(\Omega)}$ for the equivalence class of $\chi$ with respect to this relation; this we shall refer to as the \emph{symbol} of $\chi$. Write $\Symb^d(\Omega)$ for the space of all such equivalence classes.
\end{definition} We write $\Symb^{(d_1,\ldots,d_k)}_{\MD}(\Omega)$ for the symbols of nilcharacters $\chi \in \Xi^{(d_1,\ldots,d_k)}_\MD(\Omega)$ of multidegree $(d_1,\ldots,d_k)$, and $\Symb^{(d,r)}_\DR(\Omega)$ for the symbols of nilcharacters $\chi \in \Xi^{(d,r)}_\DR(\Omega)$ of degree-rank $(d,r)$. The basic properties of such symbols are set out in Appendix \ref{basic-sec}. \section{A more detailed outline of the argument}\label{overview-sec} Now that we have set up the notation to describe nilcharacters and their symbols, we are ready to give a high-level proof of Conjecture \ref{gis-conj-nonst-2} (and hence Theorem \ref{mainthm}), contingent on some key sub-theorems which will be proven in later sections. This corresponds to the realisation of points (i), (ii) and (ix) from the overview in \S \ref{strategy-sec}. As the cases $s=1,2$ of this conjecture are already known, we assume that $s \geq 3$. We also assume inductively that the claim has already been proven for smaller values of $s$. Henceforth $s$ is fixed. Let $f \in L^\infty[N]$ be such that \begin{equation}\label{fus} \|f\|_{U^{s+1}[N]} \gg 1. \end{equation} Define $f$ to be zero outside of $[N]$. Raising \eqref{fus} to the power $2^{s+1}$, we see that $$ \E_{h \in [[N]]} \| \Delta_h f \|_{U^{s}[N]}^{2^s} \gg 1$$ and thus $$ \| \Delta_h f \|_{U^{s}[N]} \gg 1$$ for all $h$ in a dense subset $H$ of $[[N]]$. Applying the inductive hypothesis, we thus see that $\Delta_h f$ is $(s-1)$-biased for all $h \in H$. By definition, we now know that $\Delta_h f$ correlates with a nilsequence of degree $\leq s-1$. By Corollary \ref{nilch-cor}, we see that for each $h \in H$, $\Delta_h f$ correlates with a nilcharacter $\chi_h \in \Xi^{s-1}([N])$. It is not hard to see that the space of such nilcharacters is a $\sigma$-limit set (see Definition \ref{separ}), so by Lemma \ref{mes-select} we can ensure that $\chi_h$ depends in a limit fashion on $h$. The aim at this point is to obtain, in several stages, information about the dependence of $\chi_h$ on $h$. A key milestone in this analysis is a \emph{linearisation} of $\chi_h$ in $h$. In the case $s = 2$, treated in \cite{gowers-4aps,green-tao-u3inverse}, the $\chi_h(n)$ were essentially just linear phases $e(\xi_h n)$, and the outcome of the linearisation analysis was that the frequencies $\xi_h$ may be assumed to vary in a bracket-linear fashion with $h$. In the case $s = 3$ (treated in \cite{u4-inverse} but also dealt with in our present work), a model special case occurs when $\chi_h(n) \approx e(\{\alpha_h n\} \beta_h n)$ (interpreting $\approx$ loosely). The outcome of the linearisation analysis in that case was that at most one of $\alpha_h, \beta_h$ really depends on $h$, and furthermore that this dependence on $h$ is bracket-linear in nature. Now we formally set out the general case of this linearisation process. \begin{theorem}[Linearisation]\label{linear-thm} Let $f \in L^\infty[N]$, let $H$ be a dense subset of $[[N]]$, and let $(\chi_h)_{h \in H}$ be a family of nilcharacters in $\Xi^{s-1}([N])$ depending in a limit fashion on $h$, such that $\Delta_h f$ correlates with $\chi_h$ for all $h \in H$. Then there exists a multidegree $(1, s-1)$-nilcharacter $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ such that $\Delta_h f$ $(s-2)$-correlates with $\chi(h,\cdot)$ for many $h \in H$.
\end{theorem} This statement represents the outcome of points (iii) to (vii) of the outline in \S \ref{strategy-sec} and must therefore address the following points: \begin{itemize} \item For some suitable notion of ``frequency'', the symbol of $\chi_h(n)$ contains only one frequency that genuinely depends on $h$; \item That frequency depends on $h$ in a bracket-linear manner; \item Once this is known, it follows that, for many $h$, $\Delta_h f$ $(s-2)$-correlates with $\chi(h, n)$, where $\chi$ is a certain $2$-variable nilsequence. \end{itemize} These three tasks are, in fact, established together and in an incremental fashion. The nilcharacter $\chi_h(n)$ is gradually replaced by objects of the form $\chi'(h,n)\otimes \chi'_h(n)$ where $\chi'(h,n)$ is a $2$-dimensional nilcharacter of multidegree $(1, s-1)$ and, at each stage, the nilcharacter $\chi'_h(n)$ (which has so far not been shown to vary in any nice way with $h$) is ``simpler'' than $\chi_h(n)$. The notion of \emph{simpler} in this context is measured by the degree-rank filtration, a concept that was introduced in the previous section. Thus the result of a single pass over the three points listed above is the following subclaim. \begin{theorem}[Linearisation, inductive step]\label{linear-induct} Let $1 \leq r_* \leq s-1$, let $f \in L^\infty[N]$, let $H$ be a dense subset of $[[N]]$, let $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$, let $(\chi_h)_{h \in H}$ be a family of nilcharacters of degree-rank $(s-1,r_*)$ depending in a limit fashion on $h$, such that $\Delta_h f$ $(s-2)$-correlates with $\chi(h,\cdot) \otimes \chi_h$ for all $h \in H$. Then there exists a dense subset $H'$ of $H$, a multidegree $(1, s-1)$-nilcharacter $\chi' \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ and a family $(\chi'_h)_{h \in H'}$ of nilcharacters of degree-rank $(s-1,r_*-1)$ depending in a limit fashion on $h$, such that $\Delta_h f$ $(s-2)$-correlates with $\chi'(h,\cdot) \otimes \chi'_h$ for all $h \in H'$. \end{theorem} Theorem \ref{linear-thm} follows easily by inductive use of this statement, starting with $r_*$ equal to $s-1$ and using Theorem \ref{linear-induct} iteratively to decrease $r_*$ all the way to zero. To prove Theorem \ref{linear-induct}, we follow steps (iii) to (vii) in the outline quite closely. The first step, which is the realisation of (iii), is a Gowers-style Cauchy-Schwarz inequality to eliminate the function $f$ as well as the $2$-dimensional nilcharacter $\chi(h,n)$ and therefore obtain a statement concerning only the (so far) unstructured-in-$h$ object $\chi_h(n)$. Here is a precise statement of the outcome of this procedure; the proof of this proposition is the main business of \S \ref{cs-sec}. \begin{proposition}[Gowers Cauchy-Schwarz argument]\label{gcs-prop} Let $f,H,\chi,(\chi_h)_{h \in H}$ be as in Theorem \ref{linear-induct}. Then the sequence \begin{equation}\label{gowers-cs-arg} n \mapsto \chi_{h_1}(n) \otimes \chi_{h_2} (n + h_1 - h_4) \otimes \overline{\chi_{h_3}(n)} \otimes \overline{\chi_{h_4}(n + h_1 - h_4)} \end{equation} is $(s-2)$-biased for many additive quadruples $(h_1,h_2,h_3,h_4)$ in $H$. \end{proposition} With this in hand, we reach the most complicated part of the argument. This is the use of Proposition \ref{gcs-prop} to study the ``frequencies'' of the nilcharacters $\chi_h$ and the way they depend on $h$.
Roughly speaking, the aim is to interpret the tensor product \eqref{gowers-cs-arg} as a nilsequence itself (depending on $h_1, h_2, h_3, h_4$) and use results from \cite{green-tao-nilratner} to analyse its equidistribution and bias properties. To make proper sense of this, one must first find a suitable ``representation'' of the $\chi_h(n)$ in which the frequencies are either independent of $h$, depend in a bracket-linear fashion on $h$, or are appropriately \emph{dissociated} in $h$, in the sense that the frequencies associated to \eqref{gowers-cs-arg} are ``linearly independent'' for most additive quadruples $h_1+h_2=h_3+h_4$. This task is one of the more technical parts of the paper and is performed in \S \ref{reg-sec}; it incorporates the additive combinatorial step (vi) of the outline from \S \ref{strategy-sec}. The precise statement of what we prove is Lemma \ref{sunflower}, the ``sunflower decomposition''. The representation of the $\chi_h$ (and hence of \eqref{gowers-cs-arg}) involves constructing a suitable polynomial orbit on something resembling a free nilpotent Lie group $\tilde G$; this device also featured in \cite[\S 5]{u4-inverse}. Once this is done, one applies the results from \cite{green-tao-nilratner} to examine the orbit of this polynomial sequence on the corresponding nilmanifold $\tilde G/\tilde \Gamma$. The results of \cite{green-tao-nilratner} assert (roughly speaking) that this orbit is close to the uniform measure on a subnilmanifold $H\tilde\Gamma/\tilde\Gamma$, where $H \leq \tilde G$ is some closed subgroup. In \S \ref{linear-sec}, we then crucially apply a commutator argument of Furstenberg and Weiss that exploits some equidistribution information on projections of $H$ to say something about this group $H$. The upshot of this critical phase of the argument is that the $h$-dependence of the frequencies of $\chi_h$ cannot be dissociated in nature, and must instead be completely bracket-linear; the precise statement here is Theorem \ref{slang-petal}. At this point in the argument, we have basically shown that the top-order behaviour (in the degree-rank order) of the nilcharacters $\chi_h(n)$ is bracket-linear in $h$. To complete the proof of Theorem \ref{linear-induct} (and hence of Theorem \ref{linear-thm}) it remains to carry out part (vii) of the outline, that is to say to interpret this bracket-linear part of $\chi_h(n)$ as a multidegree $(1,s-1)$ nilcharacter $\chi'(h,n)$. This is the first part of the argument where some sort of ``degree $s$ nil-object'' is actually constructed, and is thus a key milestone in the inductive derivation of $\GI(s)$ from $\GI(s-1)$. As remarked previously, our construction here is a little more conceptual (and abstractly algebraic) than in previous works, which have been somewhat \emph{ad hoc}. The construction is given in \S \ref{multi-sec}. At the end of that section we wrap up the proof of Theorem \ref{linear-thm}: by this point, all the hard work has been done. With Theorem \ref{linear-thm} in hand, we have completed the first seven steps of the outline. The only remaining substantial step is step (viii), the symmetry argument. Here is a formal statement of it: \begin{theorem}[Symmetrisation]\label{aderiv} Let $f \in L^\infty[N]$, let $H$ be a dense subset of $[[N]]$, and let $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ be such that $\Delta_h f$ $(s-2)$-correlates with $\chi(h,\cdot)$ for all $h \in H$.
Then there exists a nilcharacter $\Theta \in \Xi^{s}(\ultra \Z)$ \textup{(}with the degree filtration\textup{)} and a nilsequence $\Psi \in \Nil^{\subset J}_\MD(\ultra \Z^2)$, with $J \subset \N^2$ given by the downset \begin{equation}\label{lower} J := \{ (i,j) \in \N^2: i+j \leq s-1 \} \cup \{ (i,s-i): 2 \leq i \leq s \}, \end{equation} such that $\chi(h,n)$ is a bounded linear combination of $\Theta(n+h) \otimes \overline{\Theta(n)} \otimes \Psi(h,n)$. \end{theorem} The proof is given in \S \ref{symsec}. Informally, this theorem asserts that the multidimensional degree $(1,s-1)$ nilcharacter $\chi(h,n)$ can be expressed as a derivative $\Theta(n+h) \otimes \overline{\Theta(n)}$ of a degree $s$ nilcharacter $\Theta$, modulo ``lower order terms'', which in this context means multidimensional nilsequences $\Psi(h,n)$ that either have total degree $\leq s-1$, or are of degree at most $s-2$ in the $n$ variable. The remaining task for this section is to show how to complete the proof of Conjecture \ref{gis-conj-nonst} (and Theorem \ref{mainthm}) from this point. From the discussion at the beginning of this section, we have already arrived at a situation in which the given function $f \in L^\infty[N]$ has the property that $\Delta_h f$ correlates with $\chi_h$ for all $h$ in a dense subset $H$ of $[[N]]$, where $(\chi_h)_{h \in H}$ be a family of nilcharacters in $\Xi^{s-1}([N])$ depending in a limit fashion on $h$. From Theorem \ref{linear-thm} and Theorem \ref{aderiv} we see that for many $h \in [[N]]$, $\Delta_h f$ $\leq s-2$-correlates with the sequence $$ n \mapsto \Theta(n+h) \otimes \overline{\Theta(n)} \otimes \Psi(h,n).$$ The next step is to break up $J$ and $\Psi$ into simpler components, and our tool for this purpose shall be Lemma \ref{approx}. Applying this lemma for $\eps$ sufficiently small, followed by the pigeonhole principle, one can thus find scalar-valued nilsequences $\psi, \psi'$ on $\ultra \Z^2$ (with the multidegree filtration) of multidegree $$ \subset \{ (i,0) \in \N^2: i \leq s-1 \}$$ and $$ \subset \{ (i,j) \in \N^2: i \leq s-2; i+j \leq s \}$$ respectively, such that for many $h \in [[N]]$, $\Delta_h f$ $\leq (s-2)$-correlates with $$ n \mapsto \Theta(n+h) \otimes \overline{\Theta(n)} \psi(h,n) \psi'(h,n).$$ For fixed $h$, the nilsequence $\psi'(h,n)$ has degree $\leq s-2$ and can thus be ignored. Also, $\psi(h,n) = \psi(n)$ is of multidegree $\leq (s-1,0)$ and is thus independent of $h$, with $n \mapsto \psi(n)$ being a degree $\leq s-1$ nilsequence. Thus, for many $h \in [[N]]$, $\Delta_h f$ $\leq s-2$-correlates with $$ n \mapsto \Theta(n+h) \otimes \overline{\Theta(n)} \psi(n).$$ Applying the pigeonhole principle again, we can thus find scalar nilsequences $\theta, \theta' \in \Nil^{\leq s}(\ultra \Z)$ such that for many $h \in [[N]]$, $\Delta_h f$ $\leq (s-2)$-correlates with $$ n \mapsto \theta(n+h) \theta'(n)$$ (indeed one takes $\theta, \theta'$ to be coefficients of $\Theta$ and $\overline{\Theta} \psi$ respectively). Applying the converse to $\GI(s)$ (Proposition \ref{inv-nec-nonst}), we conclude $$ \| f\overline{\theta}(\cdot+h) \overline{f\theta'}(\cdot) \|_{U^{s-1}[N]} \gg 1$$ for many $h \in H$. Averaging over $h$ (using Corollary \ref{auton-2} to obtain the required uniformity), we conclude that $$ \E_{h \in [[N]]} \| f\overline{\theta}(\cdot+h) \overline{f\theta'}(\cdot) \|_{U^{s-1}[N]}^{2^{s-1}} \gg 1.$$ Applying the Cauchy-Schwarz-Gowers inequality (see e.g. 
\cite[Equation (11.6)]{tao-vu}) we conclude that $$ \| f\overline{\theta} \|_{U^s[N]} \gg 1$$ and hence by the inductive hypothesis (Conjecture \ref{gis-conj-nonst-2} for $s-1$), $f\overline{\theta}$ is $\leq (s-1)$-biased. Since $\theta$ is a degree $\leq s$ nilsequence, we conclude that $f$ is $\leq s$-biased, as required. This concludes the proof of Conjecture \ref{gis-conj-nonst-2}, Conjecture \ref{gis-conj-nonst}, and hence Theorem \ref{mainthm}, contingent on Theorem \ref{linear-thm} and Theorem \ref{aderiv}. \section{A variant of Gowers's Cauchy-Schwarz argument}\label{cs-sec} The aim of this section is prove Proposition \ref{gcs-prop}. Thus, we have standard integers $1 \leq r_* \leq s-1$, a function $f \in L^\infty[N]$, a dense subset $H$ of $[[N]]$, a two-dimensional nilcharacter $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ of multidegree $(1,s-1)$, and a family $(\chi_h)_{h \in H}$ of nilcharacters of degree-rank $(s-1,r_*)$ depending in a limit fashion on $h$. We are given that $\Delta_h f$ $(s-2)$-correlates with $\chi(h,\cdot) \otimes \chi_h$ for all $h \in H$. Our objective is to show that, for many additive quadruples $(h_1,h_2,h_3,h_4)$ in $H$, the expression \begin{equation}\label{biasing} n \mapsto \chi_{h_1}(n) \otimes \chi_{h_2} (n + h_1 - h_4) \otimes \overline{\chi_{h_3}(n)} \otimes \overline{\chi_{h_4}(n + h_1 - h_4)} \end{equation} (where we extend the $\chi_h$ by zero outside of $[N]$) is $(s-2)$-biased. The strategy, following the work of Gowers \cite{gowers-4aps}, is to start with the $\leq s-2$-correlation between $\Delta_h f$ and $\chi(h,\cdot) \chi_h$ and then apply the Cauchy-Schwarz inequality repeatedly to eliminate all terms involving $f$, $\chi(h,\cdot)$, finally arriving at a correlation statement that only involves $\chi_h$ (and lower order terms). Unfortunately, there is a technical issue that prevents one from doing this directly, namely that the behaviour of $\chi(h,\cdot)$ in $h$ is not quite linear enough to ensure that these terms are completely eliminated by a Cauchy-Schwarz procedure. In order to overcome this issue, one must first prepare $\chi$ into a better form, as follows. We need the following technical notion (which will not be used outside of this section): \begin{definition}\label{lindef} A \emph{linearised $(1,s-1)$-function} is a limit function $\chi: (h,n) \to \overline{\C}^\omega$ which has a factorisation \begin{equation}\label{chan} \chi(h,n) = c(n)^h \psi(n) \end{equation} where $\psi \in L^\infty(\Z \to \overline{\C}^\omega)$ and $c \in L^\infty(\Z \to S^1)$ are such that, for every $h,l \in \Z$, the sequence $$ n \mapsto c(n-l)^h \overline{c(n)}^h$$ is a degree $\leq s-2$ nilsequence. \end{definition} \begin{remark} Heuristically, one should think of a linearised $(1,s-1)$-function as (a vector-valued smoothing of) a function of the form $$ (h,n) \mapsto e( P(n) + h Q(n) )$$ where $P, Q$ are bracket polynomials of degree $s-1$; for instance, $$ (h,n) \mapsto e( \{ \alpha n \} \beta n + \{ \gamma n \} \delta n h )$$ is morally a linearised $(1,2)$ function. This should be compared with more general multidegree $(1,2)$ nilcharacters, such as $$ (h,n) \mapsto e( \{ \{ \alpha h \} \beta n \} \gamma n )$$ which are not quite linear in $h$ because the dependence on $h$ is buried inside one or more fractional part operations. Intuitively, the point is that one can use the laws of bracket algebra (such as \eqref{brackalg}) to move the $h$ outside of all the fractional part expressions (modulo lower order terms). 
While one can indeed develop enough of the machinery of bracket calculus to realise this intuition concretely, we will instead proceed by the more abstract machinery of nilmanifolds in order to avoid having to set up the bracket calculus. \end{remark} The key preparation for this is the following. \begin{proposition}\label{prepare} Let $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ be a two-dimensional nilcharacter of multidegree $(1,s-1)$, and let $\eps > 0$ be standard. Then one can approximate $\chi$ to within $\eps$ in the uniform norm by a bounded linear combination of linearised $(1,s-1)$-functions. \end{proposition} \begin{proof} From Definition \ref{nilch-def}, we can express $$ \chi(h,n) = F(g(h,n) \ultra \Gamma)$$ where $G/\Gamma$ is a $\N^2$-filtered nilmanifold of multidegree $\leq (1,s-1)$, $g \in \ultra \poly(\Z^2_{\N^2} \to G_{\N^2})$ (with $\Z^2$ being given the multidegree filtration $\Z^2_{\N^2}$), and $F \in \Lip(\ultra(G/\Gamma) \to \overline{S^\omega})$ has a vertical frequency $\eta: G_{(1,s-1)} \to \R$. We consider the quotient map $\pi: G/\Gamma \to G/(G_{(1,0)}\Gamma)$ from $G/\Gamma$ onto the nilmanifold $G/(G_{(1,0)}\Gamma)$, which can be viewed as an $\N$-filtered nilmanifold of degree $\leq s-1$ (where we $\N$-filter $G/G_{(1,0)}$ using the subgroups $G_{(0,i)} G_{(1,0)} / G_{(1,0)}$). The fibers of this map are isomorphic to $T := G_{(1,0)} / \Gamma_{(1,0)}$. Observe that $G_{(1,0)}$ is abelian, and so $T$ is a torus; thus $G/\Gamma$ is a torus bundle over $G/(G_{(1,0)}\Gamma)$ with structure group $T$. The idea is to perform Fourier analysis on this large torus $T$, as opposed to the smaller torus $G_{(1,s-1)}/\Gamma_{(1,s-1)}$, to improve the behaviour of the nilcharacter $\chi$. We pick a metric on the base nilmanifold $G/(G_{(1,0)}\Gamma)$ and a small standard radius $\delta>0$, and form a smooth partition of unity $1 = \sum_{k=1}^K \varphi_k$ on $G/(G_{(1,0)}\Gamma)$, where each $\varphi_k \in \Lip(G/(G_{(1,0)}\Gamma) \to \C)$ is supported on an open ball $B_k$ of radius $r$. This induces a partition $\chi = \sum_{k=1}^K \tilde \chi_k$, where $$ \tilde \chi_k(h,n) = F(g(h,n) \ultra \Gamma) \varphi_k(\pi(g(h,n) \ultra \Gamma)).$$ Now fix one of the $k$. Then we have $$ \tilde \chi_k(h,n) = \tilde F_k(g(h,n) \ultra \Gamma)$$ where $\tilde F_k$ is compactly supported in the cylinder $\pi^{-1}(B_k)$. If $r$ is small enough, we have a smooth section $\iota: B_k \to G$ that partially inverts the projection from $G$ to $G/(G_{(1,0)}\Gamma)$, and so we can parameterise any element $x$ of $\pi^{-1}(B_k)$ uniquely as $\iota(x_0) t \Gamma$ for some $x_0 \in B_k$ and $t \in T$ (noting that $t\Gamma$ is well-defined as an element of $G/\Gamma$). Similarly, we can parameterise any element of $\ultra \pi^{-1}(B_k)$ uniquely as $\iota(x_0) t \Gamma$ for $x_0 \in \ultra B_k$ and $t\in \ultra T$. We can now view the Lipschitz function $F_k \in \Lip(\ultra(G/\Gamma))$ as a compactly supported Lipschitz function in $\Lip(\ultra(B_k \times T))$. Applying a Fourier (or Stone-Weierstrass) decomposition in the $T$ directions (cf. 
Lemma \ref{limone}), we thus see that for any standard $\eps > 0$ we can approximate $\tilde F_k$ uniformly to error $\eps/K$ by a sum $\sum_{k'=1}^{K'} \tilde F_{k,k'}$, where $K'$ is standard and each $F_{k,k'} \in \Lip(\ultra(B_k \times T))$ is compactly supported and has a character $\xi_{k'}: T \to \T$ such that \begin{equation}\label{fan} \tilde F_{k,k'}(\iota(x_0) t\Gamma) = e(\xi_{k'}(t)) \tilde F_{k,k'}(\iota(x_0) \Gamma) \end{equation} for all $x_0 \in \ultra(2B_k)$ and $t \in \ultra T$. It thus suffices to show that for each $k, k'$, the sequence $$ \tilde \chi_{k,k'}: (h,n) \mapsto \tilde F_{k,k'}( g(h,n) \ultra \Gamma )$$ is a linearised $(1,s-1)$-function. Fix $k,k'$. Performing a Taylor expansion (Lemma \ref{taylo}) of the polynomial sequence $g \in \ultra\poly(\Z^2_{\N^2} \to G_{\N^2})$, we may write $$ g(h,n) = g_0(n) g_1(n)^h$$ where $g_0 \in \ultra \poly(\Z_\N \to G_\N)$ is a one-dimensional polynomial map (giving $G$ the $\N$-filtration $G_\N := (G_{(i,0)})_{i \in \N}$), and $g_1 \in \ultra \poly(\Z \to (G_{(1,0)})_\N)$ is another one-dimensional polynomial map (giving the abelian group $G_{(1,0)}$ the $\N$-filtration $(G_{(1,0)})_\N := (G_{(1,i)})_{i \in \N}$). In particular, we see that $\tilde \chi_{k,k'}(h,n)$ is only non-vanishing when $\pi( g_0(n) \ultra \Gamma ) \in B$. Furthermore, in that case we see from \eqref{fan} that \begin{equation}\label{chimn} \tilde \chi_{k,k'}(h,n) = e( h \xi( g_1(n) \mod \Gamma_{(1,0)} ) ) \tilde F_{k,k'}(g_0(n) \ultra \Gamma), \end{equation} which gives the required factorisation \eqref{chan} with $c(n) := e( \xi( g_1(n) \mod \Gamma_{(1,0)} ) )$ and $\psi(n) := \tilde F_{k,k'}(g_0(n) \ultra \Gamma)$. The only remaining task is to establish that for any given $h, l$, the sequence $n \mapsto c(n-l)^h \overline{c(n)}^h$ is a degree $\leq s-2$ nilsequence. We expand this sequence as $$n \mapsto e( h ( \xi( g_1(n-l) \mod \Gamma_{(1,0)} ) - \xi( g_1(n) \mod \Gamma_{(1,0)} ) ) )$$ But from the abelian nature of $G_{(1,0)}$, the map $n \mapsto \xi(g_1(n) \mod \Gamma_{(1,0)})$ is a polynomial map from $\ultra \Z$ to $\ultra \T$ of degree at most $s-1$, and the claim follows. \end{proof} We now return to the proof of Theorem \ref{gcs-prop}. With this multiplicative structure, we can now begin the Cauchy-Schwarz argument. By hypothesis, for each $h \in H$ we can find a scalar nilsequence $\psi_h$ of degree $\leq s-2$ such that $$ |\E_{n \in [N]} \Delta_h f(n) \overline{\chi(h,n)} \otimes \overline{\chi_h(n)} \overline{\psi_h(n)}| \gg 1.$$ By Corollary \ref{mes-select}, we may ensure that $\psi_h$ varies in a limit fashion on $h$. Applying Corollary \ref{auton-2}, this lower bound is uniform in $h$. Applying Proposition \ref{prepare} (with a sufficiently small $\eps$) and using the pigeonhole principle, we may then find a linearised $(1,s-1)$-function $(h,n) \mapsto c(n)^h \psi(n)$ such that $$ |\E_{n \in [N]} \Delta_h f(n) c(n)^{-h} \overline{\psi(n)} \otimes \overline{\chi_h(n)} \overline{\psi_h(n)}| \gg 1.$$ By Corollary \ref{auton-2} again, the lower bound is still uniform in $h$. 
We may then average in $h$ (extending $\psi_h, \chi_h$ by zero for $h$ outside of $H$) and conclude that $$ \E_{h \in [[N]]} |\E_{n \in [N]} \Delta_h f(n) c(n)^{-h} \overline{\psi(n)} \otimes \overline{\chi_h(n)} \overline{\psi_h(n)}| \gg 1,$$ thus there exists a scalar function $b \in L^\infty[[N]]$ such that $$ |\E_{h \in [[N]]} \E_{n \in [N]} b(h) f(n+h) \overline{f}(n) c(n)^{-h} \overline{\psi(n)} \otimes \overline{\chi_h(n)} \overline{\psi_h(n)}| \gg 1.$$ By absorbing $b(h)$ into the $\psi_h$ factor, we may now drop the $b(h)$ factor. We write $n+h = m$ and obtain $$|\E_{m \in [N]} f(m) \E_{h \in [[N]]} c(m-h)^{-h} f'(m-h) \otimes \overline{\chi_h(m-h)} \overline{\psi_h(m-h)}| \gg 1$$ where $f' := \overline{f} \overline{\psi}$ (recall that $f$ is extended by zero outside of $[N]$), which by Cauchy-Schwarz implies that \begin{align*} |\E_{m \in [N]} \E_{h,h' \in [[N]]} c(m-h)^{-h} c(m-h')^{h'} &f'(m-h) \otimes \overline{f(m-h')} \\ \otimes \overline{\chi_h(m-h)} \otimes \chi_{h'}(m-h') &\overline{\psi_h(m-h)} \psi_{h'}(m-h')| \gg 1. \end{align*} Making the change of variables $h' = h+l$, $n = m-h$, we obtain \begin{align*} |\E_{h,l \in [[2N]]; n \in [N]} c(n)^{-h} c(n-l)^{h+l} &f'(n) \otimes \overline{f'}(n-l) \\ \otimes \overline{\chi_h(n)}\otimes \chi_{h+l}(n-l) &\overline{\psi_h(n)} \psi_{h+l}(n-l)| \gg 1. \end{align*} We then simplify this as \begin{equation}\label{hank} |\E_{h,l \in [[2N]];n \in [N]} c_2(l,n) \otimes \overline{\chi_h(n)} \otimes \chi_{h+l}(n-l) \psi_{h,l}(n)| \gg 1 \end{equation} where \begin{align*} c_2(l,n) &:= c(n-l)^l f'(n) \otimes \overline{f'(n-l)} \\ \psi_{h,l}(n) &= c(n-l)^h c(n)^{-h} \overline{\psi_h(n)} \psi_{h+l}(n-l) \end{align*} Clearly $c_2$ is bounded. As for $\psi_{h,l}$, we see from Definition \ref{lindef} and Corollary \ref{alg} that $\psi_{h,l}$ is a nilsequence of degree $\leq s-2$ for each $h,l$. Returning to \eqref{hank}, we use the pigeonhole principle to conclude that for many $k\in [[2N]]$, we have $$ |\E_{h \in [[2N]]; n \in [N]} c_2(k,n) \otimes \overline{\chi_h(n)} \otimes \chi_{h+k}(n-k) \psi_{h,k}(n)| \gg 1.$$ Let $k$ be such that the above estimate holds. Applying Cauchy-Schwarz in the $n$ variable to eliminate the $c_2(k,n)$ term, we have $$ |\E_{h,h' \in [[2N]]; n \in [N]} \overline{\chi_h(n)} \otimes \chi_{h+k}(n-k) \otimes \overline{\chi_{h'}(n)} \otimes \chi_{h'+k}(n-k) \psi_{h,k}(n)| \gg 1$$ and thus for many $k,h,h' \in [[2N]]$, we have $$ |\E_{n \in [N]} \overline{\chi_h(n)} \otimes \chi_{h+k}(n-k) \otimes \overline{\chi_{h'}(n)} \otimes \chi_{h'+k}(n-k) \psi_{h,k}(n)| \gg 1,$$ which implies that $$ n \mapsto \overline{\chi_h(n)} \otimes \chi_{h+k}(n-k) \otimes \overline{\chi_{h'}(n)} \otimes \chi_{h'+k}(n-k)$$ is $(s-2)$-biased on $[N]$. Note that this forces $h,h+k, h',h'+k$ to be an additive quadruple in $H$, as otherwise the expression vanishes. Applying a change of variables, we obtain Proposition \ref{gcs-prop}. For future reference we observe that a simpler version of the same argument (in which the $\chi$ and $\psi_h$ factors are not present) gives \begin{proposition}[Cauchy-Schwarz]\label{cs} Let $f \in L^\infty[N]$, let $H$ be a dense subset of $[[N]]$, and suppose that one has a family of functions $\chi_h \in L^\infty(\ultra \Z)$ depending in a limit fashion on $h$, such that $\Delta_h f$ correlates with $\chi_h$ on $[N]$ for all $h \in H$. Then for many \textup{(}i.e. 
for $\gg N^3$\textup{)} additive quadruples $(h_1,h_2,h_3,h_4)$ in $H$, the sequence \begin{equation}\label{slam} n \mapsto \chi_{h_1}(n) \otimes \chi_{h_2} (n + h_1 - h_4) \otimes \overline{\chi_{h_3}(n)} \otimes \overline{\chi_{h_4}(n + h_1 - h_4)} \end{equation} is biased. \end{proposition} This proposition in fact has quite a simple proof; see \cite{gtz-announce}. Note how we can conclude \eqref{slam} to be biased and not merely $(s-2)$-biased. As such, Proposition \ref{cs} saves some ``lower order'' information that was not present in Proposition \ref{gcs-prop}; this lower order information will be crucial later in the argument, when we establish the symmetry property in Theorem \ref{aderiv}. \section{Frequencies and representations}\label{freq-sec} We will use Proposition \ref{gcs-prop} to analyse the ``frequency'' of the nilcharacters $(\chi_h)_{h \in H}$ appearing in Theorem \ref{linear-induct}. To motivate the discussion, let us first suppose that we are in the (significantly simpler) $s=2$ case, rather than the actual case $s \geq 3$ of interest. When $s=2$, we can represent $\chi_h$ as a linear phase $\chi_h(n) = e(\xi_h n + \theta_h)$ for some $\xi_h, \theta_h \in \ultra\T$; one can then interpret $\xi_h$ as the \emph{frequency} of $h$. In order to describe how this frequency $\xi_h$ behaves in $h$, it will be convenient to \emph{represent} $\xi_h$ as a linear combination \begin{equation}\label{xih} \xi_h = a_{1,h} \xi_{1,h} + \ldots + a_{D,h} \xi_{D,h} \end{equation} of other frequencies $\xi_{1,h},\ldots,\xi_{D,h} \in \ultra\T$, where the $a_{i,h} \in \Z$ are (standard) integer coefficients, and the $(\xi_{i,h})_{h \in H}$ are families of frequencies which have better properties with regards to their dependence on $h$; for instance, they might be ``core frequencies'' $\xi_{i,h} = \xi_{*,i}$ that are independent of $h$, or they might be ``bracket-linear petal'' frequencies that depend in a bracket-linear fashion on $h$, or they might be ``regular petal'' frequencies which behave in a suitably ``dissociated'' manner in $h$. We can schematically depict the relationship \eqref{xih} as $$ [\chi_h] \approx \eta_h(\F_h) $$ where $[\chi_h]$ is some sort of ``symbol'' of $\chi_h$ (which, in the linear case $s=2$, is just $\xi_h \mod 1$), $\F_h \in \ultra \T^D$ is the \emph{frequency vector} $\F_h = (\xi_{1,h},\ldots,\xi_{D,h})$, and $\eta_h: \ultra \T^D \to \ultra \T$ is the \emph{vertical frequency} \begin{equation}\label{etaxd} \eta_h(x_1,\ldots,x_D) := a_{1,h} x_1 + \ldots + a_{D,h} x_D. \end{equation} We will need to find analogues of the above type of representation in higher degree $s \geq 3$. Heuristically, we will wish to represent the symbol $[\chi]_{\Xi^{(s-1,r_*)}_\DR([N])}$ of a nilcharacter $\chi$ on $[N]$ of degree-rank $(s-1,r_*)$ (which will ultimately depend on a parameter $h$, though we will not need this parameter in the current discussion) heuristically as \begin{equation}\label{chih-abstract} [\chi]_{\Xi^{(s-1,r_*)}_\DR([N])} \approx \eta(\F) \end{equation} where $\F = (\xi_{i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_i}$ is a \emph{horizontal frequency vector} of frequencies $\xi_{i,j} \in \ultra \T$ associated to a \emph{dimension vector} $\vec D = (D_1,\ldots,D_{s-1})$, and $\eta$ is a \emph{vertical frequency} that generalises \eqref{etaxd}, but whose precise form we are not yet ready to describe precisely. We then say that the triple $(\vec D, \eta, \F)$ forms a \emph{total frequency representation} of $\chi$. 
In the previous paper \cite{u4-inverse} that treated the $s=3$ case, such a representation was implicitly used via the description of degree-rank $(2,2)$ nilcharacters $\chi_h$ as essentially being bracket quadratic phases $e(\sum_{j=1}^J \{ \alpha_{h,j} n \} \beta_{h,j} n)$ modulo lower order terms (and ignoring the issue of vector-valued smoothing for now). In our current language, this would correspond to a dimension vector $\vec D = (2J,0)$ and a horizontal frequency vector of the form $(\alpha_{h,1},\ldots,\alpha_{h,J},\beta_{h,1},\ldots,\beta_{h,J})$, and a certain vertical frequency $\eta$ depending only on $J$ that we are not yet ready to describe explicitly here. Bracket-calculus identities such as \eqref{brackalg} could then be used to manipulate such a universal frequency representation into a suitably ``regularised'' form. In principle, one could also use bracket calculus to extract the symbol of $\chi_h$ in terms of frequencies such as $\alpha_{h,j}$ and $\beta_{h,j}$ for higher values of $s$. However, as we are avoiding the use of bracket calculus machinery here, we will proceed instead using the language of nilmanifolds, and in particular by lifting the nilmanifold $G_h/\Gamma_h$ up to a \emph{universal nilmanifold} in order to obtain a suitable space (independent of $h$) in which to detect relationships between frequencies such as $\alpha_{h,j}, \beta_{h,j}$. In some sense, this universal nilmanifold will play the role that the unit circle $\T$ plays in Fourier analysis. We first define the notion of universal nilmanifold that we need. \begin{definition}[Universal nilmanifold]\label{universal-nil} A \emph{dimension vector} is a tuple \[ \vec D = (D_1,\ldots,D_{s-1}) \in \N^{s-1} \] of standard natural numbers. Given a dimension vector, we define the \emph{universal nilpotent group} $G^{\vec D} = G^{\vec D, \leq (s-1,r_*)}$ of degree-rank $(s-1,r_*)$ to be the Lie group generated by formal generators $e_{i,j}$ for $1 \leq i \leq s-1$ and $1 \leq j \leq D_i$, subject to the following constraints: \begin{itemize} \item Any $(m-1)$-fold iterated commutator of $e_{i_1,j_1},\ldots,e_{i_m,j_m}$ with $i_1+\ldots+i_m \geq s$ is trivial. \item Any $(m-1)$-fold iterated commutator of $e_{i_1,j_1},\ldots,e_{i_m,j_m}$ with $i_1+\ldots+i_m = s-1$ and $m \geq r+1$ is trivial. \end{itemize} We give this group a degree-rank filtration $(G^{\vec D}_{(d,r)})_{(d,r) \in \DR}$ by defining $G^{\vec D}_{(d,r)}$ to be the Lie group generated by $(m-1)$-fold iterated commutators of $e_{i_1,j_1},\ldots,e_{i_m,j_m}$ with $1 \leq i_l \leq s-1$ and $1 \leq j_l \leq D_{i_l}$ for all $1 \leq l \leq n$ for which either $i_1+\ldots+i_m > d$, or $i_1+\ldots+i_m=d$ and $m \geq r$. It is not hard to verify that this is indeed a filtration of degree-rank $\leq (s-1,r_*)$. We then let $\Gamma^{\vec D}$ be the discrete group generated by the $e_{i,j}$ with $1 \leq i \leq s-1$ and $1 \leq j \leq D_i$, and refer to $G^{\vec D}/\Gamma^{\vec D}$ as the \emph{universal nilmanifold} with dimension vector $\vec D$. A \emph{universal vertical frequency} at dimension vector $\vec D$ is a continuous homomorphism $\eta: G^{\vec D}_{(s-1,r_*)} \to \R$ which sends $\Gamma^{\vec D}_{(s-1,r_*)}$ to the integers (i.e. a filtered homomorphism from $G^{\vec D}_{(s-1,r_*)} / \Gamma^{\vec D}_{(s-1,r_*)}$ to $\T$). \end{definition} \emph{Remark.} One can give an explicit basis for this nilmanifold in terms of certain iterated commutators of the $e_{i,j}$, following \cite{leibman,mks}. 
This can then be used to relate nilcharacters to bracket polynomials, as in \cite{leibman}, and it is then possible to develop enough of a ``bracket calculus'' to substitute for some of the nilpotent algebra performed in this paper. However, we will not proceed by such a route here (as it would make the paper even longer than it currently is), and in fact will not need an explicit basis for universal nilmanifolds at all. \begin{example} The unit circle with the degree $\leq d$ filtration (see Example \ref{polyphase}) is isomorphic to the universal nilmanifold $G^{(0,\ldots,0,1),\leq (d,1)}$, thus for instance the unit circle with the lower central series filtration is isomorphic to $G^{(1),\leq (1,1)}$. A universal vertical frequency for any of these nilmanifolds is essentially just a map of the form $\eta: x \mapsto nx$ for some integer $n$. \end{example} \begin{example} The Heisenberg group \eqref{heisen} (with the lower central series filtration) is the universal nilpotent group $G^{(2,0)} = G^{(2,0), \leq (2,2)}$ of degree-rank $(2,2)$ (after identifying $e_1,e_2$ with $e_{1,1}$ and $e_{1,2}$ respectively), and the Heisenberg nilmanifold $G/\Gamma$ is the corresponding universal nilmanifold $G^{(2,0)}/\Gamma^{(2,0)}$. If we reduce the degree-rank from $(2,2)$ to $(2,1)$, then the commutator $[e_1,e_2]$ now trivialises, and $G^{(2,0), \leq (2,1)}$ collapses to the abelian Lie group $\R^2 \equiv G^{2, \leq (1,1)}$, with universal nilmanifold $\T^2$. If, instead of the lower central series filtration, one gives the Heisenberg group \eqref{heisen} the filtration used in Example \ref{abn} to model the sequence \eqref{abn-eq}, then this group is isomorphic to the universal nilpotent group $G^{(1,1), \leq (3,2)}$, with the two generators $e_1, e_2$ of the Heisenberg group now being interpreted as $e_{1,1}$ and $e_{2,1}$ respectively. \end{example} \begin{example} Consider the universal nilpotent group $G^{(D_1,D_2,D_3),\leq (3,3)}$. This group is generated by ``degree $1$'' generators $e_{1,1},\ldots,e_{1,D_1}$, ``degree $2$'' generators $e_{2,1},\ldots,e_{2,D_2}$, and ``degree $3$'' generators $e_{3,1},\ldots,e_{3,D_3}$, with any iterated commutator of total degree exceeding three vanishing (thus for instance the degree $3$ generators are central, and the degree $2$ generators commute with each other). If one drops the degree-rank from $(3,3)$ to $(3,2)$, then all triple commutators of degree $1$-generators, such as $[[e_{1,i}, e_{1,j}],e_{1,k}]$ now vanish, reducing the dimension of the nilpotent group. Dropping the degree-rank further to $(3,1)$ also eliminates the commutators of degree $1$ and degree $2$ generators (thus making the degree $2$ generators central). Finally, dropping the degree-rank to $(3,0)$ eliminates the degree $3$ generators completely, and indeed $G^{(D_1,D_2,D_3), \leq (3,0)}$ is isomorphic to $G^{(D_1,D_2), \leq (2,2)}$. \end{example} \begin{example} The free $s$-step nilpotent group on $D$ generators, in our notation, becomes $G^{(D,0,\ldots,0), \leq (s,s)}$. We may thus view the universal nilpotent groups $G^{\vec D, \leq (d,r)}$ as generalisations of the free nilpotent groups, in which some of the generators are allowed to be weighted to have degrees greater than $1$, and there is an additional rank parameter to cut down some of the top-order behaviour. 
\end{example} It will be an easy matter to lift a nilcharacter $\chi$ from a general degree-rank $\leq (s-1,r_*)$ nilmanifold $G/\Gamma$ to a universal nilmanifold $G^{\vec D}/\Gamma^{\vec D}$ for some sufficiently large dimension vector $\vec D$ (see Lemma \ref{existence} below). Once one does so, we will need to extract the various ``top order frequencies'' present in that nilcharacter. For instance, if $s=4$ and $\chi$ is (some vector-valued smoothing of) the degree $3$ phase $$ n \mapsto e( \{ \alpha n \} \beta n^2 + \gamma n^3 + \delta n^2 + \{ \epsilon n \} \mu n + \nu n + \theta )$$ then we will need to extract out the ``degree $3$'' frequency $\gamma$, the ``degree $2$'' frequency $\beta$, and the ``degree $1$'' frequency $\alpha$. (The remaining parameters $\delta,\epsilon,\mu,\nu,\theta$ only contribute to terms of degree strictly less than $3$, and will not need to be extracted.) As it turns out, the degree $i$ frequencies will most naturally live in the \emph{$i^\th$ horizontal torus} of the relevant universal nilmanifold; we now pause to define these torii precisely. (These torii also implicitly appeared in \cite[Appendix A]{green-tao-arithmetic-regularity}.) \begin{definition}[Horizontal Taylor coefficients]\label{horton} Let $G = (G, (G_{(d,r)})_{(d,r) \in \DR})$ be a degree-rank-filtered nilpotent group. For every $i \geq 0$, define the \emph{$i^{\th}$ horizontal space} $\Horiz_i(G)$ to be the abelian group $$ \Horiz_i(G) := G_{(i,1)} / G_{(i,2)},$$ with the convention that $G_{(d,r)} := G_{(d+1,0)}$ if $r>d$ (so in particular, $G_{(1,2)} = G_{(2,0)}$). For any polynomial map $g \in \poly(\Z_\N \to G_\N)$, we define the \emph{$i^{th}$ horizontal Taylor coefficient} $\Taylor_i(g) \in \Horiz_i(G)$ to be the quantity $$ \Taylor_i(g) := \partial_{1} \ldots \partial_{1} g(n) \mod G_{(i,2)}$$ for any $n \in \Z$. Note that this map is well-defined since $\partial_{1} \ldots \partial_{1} g$ takes values in $G_{(i,1)}$ and has first derivatives in $G_{(i+1,1)}$ and hence in $G_{(i,2)}$. If $\Gamma$ is a subgroup of $G$, we define $$ \Horiz_i(G/\Gamma) := \Horiz_i(G) / \Horiz_i(\Gamma)$$ and for a polynomial orbit $\orbit \in \poly(\Z_\N \to (G/\Gamma)_\N) := \poly(\Z_\N \to G_\N) / \poly(\Z_\N \to \Gamma_\N)$, we define the \emph{$i^{\th}$ horizontal Taylor coefficient} $\Taylor_i(\orbit) \in \Horiz_i(G/\Gamma)$ to be the quantity defined by $$ \Taylor_i( g \Gamma ) := \Taylor_i(g) \mod \Horiz_i(\Gamma)$$ for any $g \in \poly(\Z_\N \to G_\N)$; it is easy to see that this quantity is well-defined. These concepts extend to the ultralimit setting in the obvious manner; thus for instance, if $\orbit \in \ultra \poly(H_\N \to (G/\Gamma)_\N)$, then $\Taylor_i(\orbit)$ is an element to $\ultra \Horiz_i(G/\Gamma)$. \end{definition} If $G/\Gamma$ is a degree-rank filtered nilmanifold, it is easy to see that the horizontal spaces $\Horiz_i(G)$ are abelian Lie groups, and that $\Horiz_i(\Gamma)$ is a sublattice of $\Horiz_i(G)$, so $\Horiz_i(G/\Gamma)$ is a torus, which we call the \emph{$i^{\th}$ horizontal torus} of $G/\Gamma$.\vspace{11pt} \emph{Remark.} The above definition can be generalised by replacing the domain $\Z$ with an arbitrary additive group $H = (H,+)$. In that case, the Taylor coefficient $\Taylor_i(g)$ is not a single element of $\Horiz_i(G)$, but is instead a map $\Taylor_i(g): H^i \to \Horiz_i(G)$ defined by the formula $$ \Taylor_i(g)(h_1,\ldots,h_k) := \partial_{h_1} \ldots \partial_{h_k} g(n) \mod G_{(i,2)}$$ for $h_1,\ldots,h_k \in H$. 
Using Corollary \ref{collox} we easily see that this map is symmetric and multilinear; thus for instance when $H=\Z$ we have $$ \Taylor_i(g)(h_1,\ldots,h_k) = h_1 \ldots h_k \Taylor_i(g).$$ However, we will not need this generalisation here. A further application of Corollary \ref{collox} shows that the map $g \mapsto \Taylor_i(g)$ is a homomorphism. As a corollary, we see that any translate $g(\cdot+h) = (\partial_h g) g$ of $g$ will have the same Taylor coefficients as $g$: $\Taylor_i(g(\cdot+h)) = \Taylor_i(g)$. \begin{example} Consider the unit circle $G/\Gamma = \T$ with the degree $\leq d$ filtration (see Example \ref{polyphase}). Then the $d^\th$ horizontal torus is $\T$, and all other horizontal tori are trivial. If $\alpha_0,\ldots,\alpha_d \in \ultra \R$, then the map $\orbit: n \mapsto \alpha_0 + \ldots + \alpha_d n^d \mod 1$ is a polynomial orbit in $\ultra \poly(\Z_\N \to \T_\N)$, and the $d^{th}$ horizontal Taylor coefficient is the quantity $d! \alpha_d \mod 1$ from $\ultra \Z^d$ to $\ultra\T$. (All other horizontal Taylor coefficients are of course trivial.) Thus we see that the horizontal coefficient captures most of the top order coefficient $\alpha_d$, but totally ignores all lower order terms. \end{example} \begin{example} Let $G=G^{(2,1)}=G^{(2,1),\leq (2,2)}$ be the universal nilpotent group of degree-rank $(2,2)$. Thus $G$ is generated by $e_{1,1},e_{1,2},e_{2,1}$, with relations \[ [[e_{1,1},e_{1,2}], e_{1,i}]=[e_{1,i},e_{2,1}]=1 \quad \text{ for $i=1,2$}. \] and with the degree-rank filtration \begin{align*} G_{(0,0)}=G_{(1,0)}=G_{(1,1)}&=G \\ G_{(2,0)}=G_{(2,1)}&= \langle [e_{1,1},e_{1,2}], e_{2,1}\rangle_\R \\ G_{(2,2)}&=\langle [e_{1,1},e_{1,2}] \rangle_\R \end{align*} and the lattice $$ \Gamma = \Gamma^{(2,2)} = \Gamma^{(2,2), \leq (2,1)} := \langle e_{1,1},e_{1,2},e_{2,1} \rangle.$$ Let $\alpha,\beta,\gamma \in \ultra \R$, and consider the orbit $\orbit \in \ultra\poly(\Z_\N \to (G/\Gamma)_\N)$ defined by the formula \[ \orbit(n):=e^{n \alpha}_{1,1} e_{1,2}^{n \beta} e_{2,1}^{n^2 \gamma}; \] this is polynomial by Example \ref{lazard-ex}. Then \[ \Taylor_1(g) = \partial_{1} g(n) \mod \ultra G_{(2,0)} = e^{\alpha}_{1,1}e_{1,2}^{\beta} \mod \ultra G_{(2,0)}, \] and \[ \Taylor_2(g) = e_{2,1}^{2\gamma} \mod \ultra G_{(2,2)}. \] Then $\Taylor_0(g(n)\ultra \Gamma) = g(n)\ultra \Gamma$, \[ \Taylor_1(g\ultra \Gamma) = e^{\alpha}_{1,1}e_{1,2}^{\beta} \mod G_{(2,0)}\ultra \Gamma \] and \[ \Taylor_2(g\ultra \Gamma) = e_{2,1}^{2\gamma} \mod \ultra G_{(2,2)}\Gamma_{(2,0)}. \] \end{example} \begin{example}\label{heist} Let $G/\Gamma$ be the Heisenberg nilmanifold \eqref{heisen} with the lower central series filtration. Thus $G/\Gamma$ is a degree $\leq 2$ nilmanifold, which can then be viewed as a degree-rank $\leq (2,2)$ nilmanifold by Example \ref{inclusions}. The first horizontal torus $\Horiz_1(G/\Gamma)$ is isomorphic to the $2$-torus $\T^2$, with generators given by $e_1, e_2 \mod G_2 \Gamma$. The second horizontal torus $\Horiz_2(G/\Gamma)$ is trivial, since $G_{(2,1)} = [G,G]$ is equal to $G_{(2,0)} = G_2$. If $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$ is the polynomial orbit $\orbit: n \mapsto e_2^{\beta n} e_1^{\alpha n} \ultra \Gamma$, then the first Taylor coefficient is the quantity $(\alpha, \beta)$. Note also that if one modified the polynomial orbit by a further factor of $[e_1,e_2]^{\gamma n^2 + \delta n + \epsilon}$, this would not impact the Taylor coefficients at all. 
Thus we see that the Taylor coefficients only capture the frequencies associated to raw generators such as $e_1$ and $e_2$, and not to commutators such as $[e_1,e_2]$. \end{example} \begin{example} Now consider the Heisenberg group \eqref{heisen} with the filtration used in Example \ref{abn} to model the sequence \eqref{abn-eq}. This is now a degree $\leq 3$ nilmanifold, whose first horizontal torus $\Horiz_1(G/\Gamma)$ is isomorphic to the one-torus $\T$ with generator $e_2 \mod G_{(2,0)} \Gamma$, whose second horizontal torus $\Horiz_2(G/\Gamma)$ is isomorphic to the one-torus $\T$ with generator $e_1 \mod G_{(2,2)} \Gamma_{(2,1)}$, and whose third horizontal torus $\Horiz_3(G/\Gamma)$ is trivial. If $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$ is the polynomial orbit $\orbit: n \mapsto e_2^{\beta n} e_1^{\alpha n^2} \ultra \Gamma$, then the first Taylor coefficient is the linear limit map $n \mapsto \beta n \mod 1$, and the second Taylor coefficient is the quantity $2! \alpha \mod 1$. \end{example} We now have enough notation to be able to formally assign frequencies to a nilcharacter, by means of a package of data which we shall call a \emph{representation}. \begin{definition}[Representation]\label{representation-def} Let $\chi \in L^\infty[N]$ be a nilcharacter of degree-rank $\leq (s-1,r_*)$. A \emph{representation} of $\chi$ is a collection of the following data: \begin{enumerate} \item A filtered nilmanifold $G/\Gamma$ of degree-rank $\leq (s-1,r_*)$; \item A filtered nilmanifold $G_0/\Gamma_0$ of degree-rank $\leq (s-1,r_*-1)$; \item A function $F \in \Lip(\ultra(G/\Gamma \times G_0/\Gamma_0) \to \overline{S^\omega})$; \item Polynomial orbits $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$ and $\orbit_0 \in \ultra \poly(\Z_\N \to (G_0/\Gamma_0)_\N)$; \item A dimension vector $\vec D = (D_1,\ldots,D_{s-1}) \in \N^{s-1}$; \item A universal vertical frequency $\eta: G^{\vec D}_{(s-1,r_*)} \to \R$ at dimension $\vec D$ on the universal nilmanifold $G^{\vec D}/\Gamma^{\vec D}$ of degree-rank $(s-1,r_*)$; \item A filtered homomorphism $\phi: G^{\vec D}/\Gamma^{\vec D} \to G/\Gamma$ (see Definition \ref{quot}); \item A \emph{horizontal frequency vector} $\F = (\xi_{i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_i}$ of frequencies $\xi_{i,j} \in \ultra\T$. \end{enumerate} which obeys the following properties: \begin{enumerate} \item For all $n \in [N]$, one has \begin{equation}\label{chin} \chi(n) = F( \orbit(n), \orbit_0(n)). \end{equation} \item For every $t \in G^{\vec D}_{(s-1,r_*)}$, all $x \in G/\Gamma$, and $x_0 \in G_0/\Gamma_0$, one has \begin{equation}\label{vert} F( \phi(t) x, x_0 ) = e( \eta(t) ) F(x,x_0). \end{equation} \item For every $1 \leq i \leq s-1$, one has \begin{equation}\label{taylor} \Taylor_i(\orbit) = \pi_{\Horiz_i(G/\Gamma)}\left(\phi( \prod_{j=1}^{D_i} e_{i,j}^{\xi_{i,j}} )\right), \end{equation} where $\pi_{\Horiz_i(G/\Gamma)}: G_{i} \to \Horiz_i(G/\Gamma)$ is the projection map; observe that the right-hand side is well-defined even though $\xi_{i,j}$ is only defined modulo $1$. \end{enumerate} We call the triplet $(\vec D, \F, \eta)$ a \emph{total frequency representation} of the nilcharacter $\chi$. \end{definition} This is a rather complicated definition, and we now illustrate it with a number of examples. We begin with the $s=2$, $r_*=1$ case, taking $\chi$ to be the degree-rank $(1,1)$ nilcharacter $$ \chi(n) := e( \xi n + \theta )$$ for some $\xi, \theta \in \ultra \R$. 
Let $D_1 \geq 1$ be an integer, let $\F = (\xi_{1,1},\ldots,\xi_{1,D_1}) \in \ultra \T^{D_1}$ be a collection of frequencies, and let $\eta: \R^{D_1} \to \R$ be the universal vertical frequency $\eta(x_1,\ldots,x_{D_1}) := a_1 x_1 + \ldots + a_{D_1} x_{D_1}$ for some integers $a_1,\ldots,a_{D_1} \in \Z$. Then $((D_1),\F,\eta)$ will be a total frequency representation of $\chi$ if $\xi = a_1 \xi_{1,1} + \ldots + a_{D_1} \xi_{1,D_1}$. Indeed, in that case, one can take $G/\Gamma = \T$ (with the degree-rank $\leq (1,1)$ filtration, see Example \ref{dr-f}), $G_0/\Gamma_0$ to be trivial, $F$ equal to the exponential function $(x,()) \mapsto e(x)$, $\phi: \T^{D_1} \to \T$ to be the filtered homomorphism $$ \phi(x_1,\ldots,x_{D_1}) := a_1 x_1 + \ldots + a_{D_1} x_{D_1},$$ and $\orbit \in \ultra \poly(\Z_\N \to \T_\N)$ to be the orbit $n \mapsto \xi n + \theta \mod 1$. This should be compared with \eqref{chih-abstract} and the discussion at the start of the section. For a slightly more complicated example, we take $s=3, r_* = 1$, and let $\chi$ be the degree-rank $(2,1)$ nilcharacter $$ \chi(n) := e( \alpha n^2 + \beta n + \gamma ).$$ We let $D_2 \geq 1$ be an integer, set $D_1 := 0$, let $\F = ((),(\xi_{2,1},\ldots,\xi_{2,D_2})) \in \ultra \T^{0} \times \ultra \T^{D_2}$ be a collection of frequencies, and let $\eta: \R^{D_2} \to \R$ be the universal vertical frequency $\eta(x_1,\ldots,x_{D_2}) := a_1 x_1 + \ldots + a_{D_2} x_{D_2}$ for some integers $a_1,\ldots,a_{D_2} \in \Z$. Then $((0,D_2), \F, \eta)$ will be a total frequency representation of $\chi$ if $\xi = a_1 \xi_{2,1} + \ldots + a_{D_2} \xi_{2,D_2}$ (cf. \eqref{chih-abstract}). Indeed, we can take $G/\Gamma = \T$ with the degree-rank $\leq (2,1)$ filtration (see Example \ref{dr-f}), $G_0/\Gamma_0 = \T$ with the degree-rank $\leq (1,1)$ filtration, the orbit $$ \orbit(n) := ( \alpha n^2 \mod 1, \beta n + \gamma \mod 1 )$$ and $F: G/\Gamma \times G_0/\Gamma_0 \to S^1$ to be the function $$ F(x, y) := e(x) e(y),$$ and $\phi: \T^{D_2} \to \T$ to be the filtered homomorphism $$ \phi(x_1,\ldots,x_{D_1}) := a_1 x_1 + \ldots + a_{D_1} x_{D_1}.$$ Note how the lower order terms $\beta_n + \gamma$ in the phase of $\chi$ are shunted off to the lower degree-rank nilmanifold $G_0/\Gamma_0$ and thus do not interact at all with the data $\F, \eta$. In this particular case, this shunting off was unnecessary, and one could have easily folded these lower order terms into the dynamics of the primary nilmanifold $G/\Gamma$; but in the next example we give, the lower order behaviour does genuinely need to be separated from the top order behaviour by placing it in a separate nilmanifold. We now turn to a genuinely non-abelian example of a universal representation. 
For this, we take $s=3$, $r_*=2$, and let $\chi$ be a degree-rank $(2,2)$ nilcharacter that is a suitable vector-valued smoothing of the bracket polynomial phase $$ n \mapsto e( \{ \alpha n \} \beta n + \gamma n^2 ).$$ We can express this nilcharacter as $$ \chi(n) = F( \orbit(n), \orbit_0(n) ),$$ where $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$ is the orbit $$ \orbit(n) := e_2^{\beta n} e_1^{\alpha n} \Gamma$$ into the Heisenberg nilmanifold \eqref{heisen} (which we give the degree-rank $\leq (2,2)$ filtration), $\orbit_0 \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$ is the orbit $$ \orbit_0(n) := \gamma n^2 \mod 1$$ into the unit circle $G_0/\Gamma_0 = \T$ (which we give the degree-rank $\leq (2,1)$ filtration, see Example \ref{dr-f}), and $F$ is a suitable vector-valued smoothing of the map $$ ( e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}} \Gamma, y ) \mapsto e( t_{12} ) e(y) $$ for $t_1, t_2, t_{12} \in I_0$. By Example \ref{heist}, we have $\Taylor_1(\orbit) = (\alpha \mod 1,\beta \mod 1)$ and $\Taylor_2(\orbit)$ is trivial. Now let $D_1 \geq 1$ be an integer, set $D_2 := 0$, let $\F = ((\xi_{1,1},\ldots,\xi_{1,D_1}),()) \in \ultra \T^{D_1} \times \ultra \T^{0}$ be a collection of frequencies. The subgroup $G^{(D_1,0)}_{(2,2)}$ of the universal nilmanifold $G^{(D_1,0)} = G^{(D_1,0),\leq (2,2)}$ is then the abelian Lie group generated by the commutators $[e_{1,i},e_{1,j}]$ for $1 \leq i < j \leq D_1$. We let $a_1,\ldots,a_{D_1},b_1,\ldots,b_{D_1} \in \Z$ be integers, and let $\phi: G^{(D_1,0)}/\Gamma^{(D_1,0)} \to G/\Gamma$ be the filtered homomorphism that maps $e_{1,i}$ to $e_1^{a_i} e_2^{b_i}$ for $i=1,\ldots,D_1$, thus \begin{align*} \phi( &\prod_{i=1}^{D_1} e_{1,i}^{t_i} \prod_{1 \leq i < j \leq D_1} [e_{1,i},e_{1,j}]^{t_{i,j}} \Gamma^{(D_1,0)} ) \\ &= \prod_{i=1}^{D_1} (e_1^{a_1} e_2^{b_i})^{t_i} \prod_{1 \leq i < j \leq D_1} [e_1^{a_i} e_2^{b_i}, e_1^{a_j} e_2^{b_j}]^{t_{i,j}} \Gamma \\ &= e_1^{\sum_{i=1}^{D_1} a_i t_i} e_2^{\sum_{i=1}^{D_1} b_i t_i} [e_1,e_2]^{-\sum_{i=1}^{D_1} a_i b_i \binom{t_i}{2} - \sum_{1 \leq i < j \leq d} b_i a_j t_i t_j + \sum_{1 \leq i < j \leq d} (a_i b_j - a_j b_i)t_{i,j}} \Gamma. \end{align*} Let us now see what conditions are required for $((D_1,0),\eta,\F)$ to be a total frequency representation of $\chi$. The condition \eqref{taylor} becomes the constraints \begin{align*} \alpha &= \sum_{i=1}^{D_1} a_i \xi_{1,i} \\ \beta &= \sum_{i=1}^{D_1} b_i \xi_{1,i}, \end{align*} while the condition \eqref{vert} becomes \begin{equation}\label{etaij} \eta( [e_{1,i}, e_{1,j}] ) = a_i b_j - a_j b_i \end{equation} for all $1 \leq i < j \leq D_1$, or equivalently $$ \eta( \prod_{1 \leq i < j \leq D_1} [e_{1,i}, e_{1,j}]^{t_{i,j}} ) = \sum_{1 \leq i < j \leq D_1} (a_i b_j - a_j b_i) t_{i,j}$$ Conversely, with these constraints we obtain a total frequency representation of $\chi$ by $((D_1,0),\eta,\F)$. This should be compared with the heuristic \eqref{chih-abstract}. (Note from \eqref{brackalg} that the top order component $\{\alpha n \} \beta n$ of $\chi$ is morally anti-symmetric in $\alpha,\beta$ modulo lower order terms, which is consistent with the anti-symmetry observed in \eqref{etaij}.) Note also that the term $\gamma n^2$, which has lesser degree-rank than the top order term $\{ \alpha n \} \beta n$, plays no role, due to it being shunted off to the lower degree-rank nilmanifold $G_0/\Gamma_0$. 
If instead we placed this term as part of the principal nilmanifold, then this would create a non-trivial second Taylor coefficient $\Taylor_2(\orbit)$ which would then require a non-zero value of $D_2$ in order to recover a total frequency representation. Thus we see that in order to neglect terms of lesser degree-rank (but equal degree) it is necessary to create the secondary nilmanifold $G_0/\Gamma_0$ as a sort of ``junk nilmanifold'' to hold all such terms. We make the easy remark that every nilcharacter $\chi$ of degree-rank $\leq (s-1,r_*)$ has at least one representation. \begin{lemma}[Existence of representation]\label{existence} Let $\chi$ be a nilcharacter of degree-rank $(s-1,r_*)$ on $[N]$. Then there exists at least one total frequency representation $(\vec D, \F, \eta)$ of $\chi$. \end{lemma} \begin{proof} By definition, $\chi = F \circ \orbit$ for some degree-rank $\leq (s-1,r_*)$ nilmanifold $G/\Gamma$, some $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$, and some $F \in \Lip(\ultra(G/\Gamma))$ with a vertical frequency. For each $1 \leq i \leq s-1$, let $f_{i,1},\ldots,f_{i,D_i}$ be a basis of generators for $\Gamma_{i}$, and let $\vec D := (D_1,\ldots,D_{s-1})$ be the associated dimension vector. Then we have a filtered homomorphism $\phi: G^{\vec D} \to G$ which maps $e_{i,j}$ to $f_{i,j}$ for all $1 \leq i \leq s-1$ and $1 \leq j \leq D_i$. It is easy to see that $\phi$ is surjective from $G^{\vec D}_{i}$ to $G_{i}$ for each $i$, and so the map $\pi_{\Horiz_i(G/\Gamma)} \circ \phi$ is surjective from $G^{\vec D}_{i}$ to $\Horiz_i(G/\Gamma)$. It is now an easy matter to locate frequencies $\xi_{i,j}$ obeying \eqref{taylor}, and the vertical frequency property of $F$ can be pulled back via $\phi$ to give \eqref{vert}. Setting $G_0/\Gamma_0$ to be trivial, we obtain the claim. \end{proof} To conclude this section, we now give some basic facts about total frequency representations. These facts will not actually be used in this paper, but may serve to consolidate one's intuition about the nature of these representations. We first observe some linearity in the vertical frequency $\eta$. \begin{lemma}[Linearity] Suppose that $\chi, \chi'$ are two nilcharacters of degree-rank $(s-1,r_*)$ on $[N]$ that have total frequency representations $(\vec D, \F, \eta)$ and $(\vec D, \F, \eta')$ respectively. Then $\overline{\chi}$ has a total frequency representation $(\vec D, \F, -\eta)$, and $\chi \otimes \chi'$ has a total frequency representation $(\vec D, \F, \eta+\eta')$. \end{lemma} \begin{proof} This is a routine matter of chasing down the definitions, and noting that nilmanifolds, polynomial orbits, etc. behave well with respect to direct sums. \end{proof} \begin{lemma}[Triviality] Suppose that $\chi$ is a nilcharacter of degree-rank $(s-1,r_*)$ on $[N]$ that has a total frequency representation $(\vec D, \F, 0)$. Then $\chi$ is a nilsequence of degree-rank $\leq (s-1,r_*-1)$ \textup{(}i.e. $[\chi]_{\Symb^{(s-1,r_*)}_\DR([N])} = 0$\textup{)}. \end{lemma} \begin{proof} By construction, we have $$ \chi(n) = F( \orbit(n), \orbit_0(n) )$$ for some limit polynomial orbits $\orbit \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$, $\orbit_0 \in \ultra \poly(\Z_\N \to (G_0/\Gamma_0)_\N)$ into filtered nilmanifolds $G/\Gamma, G_0/\Gamma_0$ of degree-rank $\leq (s-1,r_*)$ and $\leq (s-1,r_*-1)$ respectively, where $F \in \Lip(\ultra(G/\Gamma \times G_0/\Gamma_0) \to \overline{S^\omega})$. 
Furthermore, there exists a filtered homomorphism $\phi: G^{\vec D}/\Gamma^{\vec D} \to G/\Gamma$ such that \eqref{taylor} holds, and such that \begin{equation}\label{flat} F( \phi(t) x, x_0 ) = F(x,x_0). \end{equation} for all $t \in G^{\vec D}_{(s-1,r_*)}$. Let $T$ be the closure of the set $\{ \phi(t) \mod \Gamma_{(s-1,r_*)}: t \in G^{\vec D}_{(s-1,r_*)}\}$; this is a subtorus of the torus $G_{(s-1,r_*)}/\Gamma_{(s-1,r_*)}$, and thus acts on $G/\Gamma$. As $F$ is continuous and obeys the invariance \eqref{flat}, we see that $F$ is $T$-invariant; we may thus quotient out by $T$ and assume that $T$ is trivial. In particular, $\phi$ now annihilates $G^{\vec D}_{(s-1,r_*)}$. We give $G$ a new degree-rank filtration $(G'_{(d,r)})_{(d,r) \in \DR}$ (smaller than the existing filtration $(G_{(d,r)})_{(d,r) \in \DR}$), by defining $G'_{(d,r)}$ to be the connected subgroup of $G$ generated by $G_{(d,r+1)}$ (recalling the convention $G_{(d,r)} := G_{(d+1,0)}$ when $r > d$) together with the image $\phi( G^{\vec D}_{(d,r)} )$ of $G^{\vec D}_{(d,r)}$. It is easy to see that this is still a filtration, and that $G/\Gamma$ remains a filtered nilmanifold with this filtration, but now the degree-rank is $\leq (s-1,r_*-1)$ rather than $\leq (s-1,r_*)$. Furthermore, from \eqref{taylor} we see that $\orbit$ is still a polynomial orbit with respect to this new filtration. As such, $\chi$ is a nilsequence of degree-rank $\leq (s-1,r_*-1)$ as required. \end{proof} Combining the above two lemmas we obtain the following corollary. \begin{corollary}[Representation determines symbol] Suppose that $\chi, \chi'$ are two nilcharacters of degree-rank $(s-1,r_*)$ on $[N]$ that have a common total frequency representation $(\vec D, \F, \eta)$. Then $\chi, \chi'$ are equivalent. In other words, the symbol $[\chi]_{\Xi^{(s-1,r_*)([N])}}$ depends only on $(\vec D, \F, \eta)$. \end{corollary} Note that the above results are consistent with the heuristic \eqref{chih-abstract}. \section{Linear independence and the sunflower lemma}\label{reg-sec} A basic fact of linear algebra is that every finitely generated vector space is finite-dimensional. In particular, if $v_1,\ldots,v_l$ are a finite collection of vectors in a vector space $V$ over a field $k$, then there exists a finite linearly independent set of vectors $v'_1,\ldots,v'_{l'}$ in $V$ such that each of the vectors $v_1,\ldots,v_l$ is a linear combination (over $k$) of the $v'_1,\ldots, v'_{l'}$. Indeed, one can take $v'_1,\ldots,v'_{l'}$ to be a set of vectors generating $v_1,\ldots,v_l$ for which $l'$ is minimal, since any linear relation amongst the $v'_1,\ldots,v'_{l'}$ can be used to decrease\footnote{Indeed, one can recast this argument as a rank reduction argument instead of a minimal rank argument, for the same reason that the principle of infinite descent is logically equivalent to the well-ordering principle. In this infinitary (ultralimit) setting, there is very little distinction between the two approaches, although the minimality approach allows for slightly more compact notation and proofs. But in the finitary setting, it becomes significantly more difficult to implement the minimality approach, and the rank reduction approach becomes preferable. See \cite{u4-inverse} for finitary ``rank reduction'' style arguments analogous to those given here.} the ``rank'' $l'$, contradicting minimality (cf. the proof of classical Steinitz exchange lemma in linear algebra). 
We will need analogues of this type of fact for frequencies $\xi_1,\ldots,\xi_l$ in the limit unit circle $\ultra \T$. However, this space is not a vector space over a field, but is merely a module over a commutative ring $\Z$. As such, the direct analogue of the above statement fails; indeed, any standard rational in $\ultra \T$, such as $\frac{1}{2} \mod 1$, clearly cannot be represented as a linear combination (over $\Z$) of a finite collection of frequencies in $\ultra \T$ that are linearly independent over $\Z$. However, the standard rationals are the \emph{only} obstruction to the above statement being true. More precisely, we have \begin{lemma}[Baby regularity lemma]\label{baby} Let $l \in \N$, and let $\xi_1,\ldots,\xi_l \in \ultra \T$. Then there exists $l',l'' \in \N$ and $\xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''} \in \ultra \T$ such that $\xi'_1,\ldots,\xi'_{l'}$ are linearly independent over $\Z$ \textup{(}i.e. there exist no standard integers $a_1,\ldots,a_{l'}$, not all zero, such that $a_1 \xi'_1+\ldots+a_{l'} \xi'_{l'} = 0$\textup{)}, each of the $\xi''_i$ are rational \textup{(}i.e. they live in $\Q \mod 1$\textup{)}, and each of the $\xi_1,\ldots,\xi_l$ are linear combinations \textup{(}over $\Z$\textup{)} of the $\xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''}$. \end{lemma} \begin{proof} Fix $l,\xi_1,\ldots,\xi_l$. Define a \emph{partial solution} to be a collection of objects $l', l''$, $\xi'_1, \ldots,\xi'_{l'}$, $\xi''_1,\ldots,\xi''_{l''}$ satisfying all of the required properties, except possibly for the linear independence of the $\xi'_1,\ldots,\xi'_{l'}$. Clearly at least one partial solution exists, since one can take $l' := l$, $l'' := 0$, and $\xi'_i := \xi_i$ for all $1 \leq i \leq l$. Now let $l',l'',\xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''}$ be a partial solution for which $l'$ is minimal. We claim that $\xi'_1,\ldots,\xi'_{l'}$ is linearly independent over $\Z$, which will give the lemma. To see this, suppose for contradiction that there existed $a_1,\ldots,a_{l'} \in \Z$, not all zero, such that $a_1 \xi'_1 + \ldots + a_{l'} \xi'_{l'} = 0$. Without loss of generality we may assume that $a_1$ is non-zero. For each $2 \leq j \leq l'$, let $\tilde \xi'_j \in \ultra \T$ be such that $a_1 \tilde \xi'_j = \xi'_j$. We then have $$ \xi'_1 = - \sum_{j=2}^{l'} \frac{a_j}{a_1} \xi'_j + q \mod 1$$ for some standard rational $q \in \Q$. If we then replace $\xi'_1,\ldots,\xi'_{l'}$ by $\tilde \xi'_2,\ldots,\tilde \xi'_{l'}$ (decrementing $l'$ to $l'-1$) and append $q$ to $\xi''_1,\ldots,\xi''_{l''}$, then we obtain a new partial solution with a smaller value of $l'$, contradicting minimality. The claim follows. \end{proof} This lemma is too simplistic for our applications, and we will need to modify it in a number of ways. The first is to introduce an error term. \begin{definition}[Linear independence] Let $\eps > 0$ be a limit real, and let $l \in \N$. A set of frequencies $\xi_1,\ldots,\xi_l \in \ultra \T$ is said to be \emph{independent modulo $O(\eps)$} if there do not exist any collection $a_1,\ldots,a_l \in \Z$ of standard integers, not all zero, for which $$ a_1 \xi_1 + \ldots + a_l \xi_l = O(\eps) \mod 1$$ (Thus, for instance, the empty set (with $k=0$) is trivially independent modulo $O(\eps)$.) Equivalently, $\xi_1,\ldots,\xi_l$ are linearly independent over $\Z$ after quotienting out by the subgroup $\eps \overline{\R} \mod 1$. \end{definition} This definition is only non-trivial when $\eps$ is an infinitesimal (i.e. $\eps=o(1)$). 
In practice, $\eps$ will be a negative power of the unbounded integer $N$. We have the following variant of Lemma \ref{baby}. \begin{lemma}[Regularising one collection of frequencies]\label{toddler} Let $l \in \N$, let $\xi_1,\ldots,\xi_l \in \ultra \T$, and let $\eps > 0$ be a limit real. Then there exist $l',l'',l''' \in \N$ and \[ \xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''},\xi'''_1,\ldots,\xi'''_{l'''} \in \ultra \T \] such that $\xi'_1,\ldots,\xi'_{l'}$ are linearly independent modulo $O(\eps)$, each of the $\xi''_i$ are rational, each of the $\xi'''_i$ are $O(\eps)$, and each of the $\xi_1,\ldots,\xi_l$ are linear combinations \textup{(}over $\Z$\textup{)} of the $\xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''}, \xi'''_1,\ldots,\xi'''_{l'''}$. \end{lemma} One can view Lemma \ref{baby} as the degenerate case $\eps=0$ of the above lemma. \begin{proof} We repeat the proof of Lemma \ref{baby}. Define a \emph{partial solution} to be a collection of objects $l',l'',l''$, $\xi'_1,\ldots,\xi'_{l'}, \xi''_1,\ldots,\xi''_{l''},\xi'''_1,\ldots,\xi'''_{l'''}$ obeying all the required properties except possibly for the linear independence property. Again it is clear that at least one partial solution exists, so we may find a partial solution for which $l'$ is minimal. We claim that this is a complete solution. For if this is not the case, we have $$ a_1 \xi'_1 + \ldots + a_{l'} \xi'_{l'} = O(\eps) \mod 1$$ for some $a_1,\ldots,a_{l'} \in \Z$, not all zero. Again, we may assume that $a_1 \neq 0$. We again select $\tilde \xi'_2,\ldots,\tilde \xi'_{l'} \in \ultra \T$ with $a_1 \tilde \xi'_j = \xi'_j$ for all $2 \leq j \leq l'$, and observe that $$ \xi'_1 = - \sum_{j=2}^{l'} \frac{a_j}{a_1} \xi'_j + q + s\mod 1$$ for some standard rational $q \in \Q$ and some $s = O(\eps)$. If we then replace $\xi'_1,\ldots,\xi'_{l'}$ by $\tilde \xi'_2,\ldots,\tilde \xi'_{l'}$, and append $q$ and $s$ to $\xi''_1,\ldots,\xi''_{l''}$ and $\xi'''_1,\ldots,\xi'''_{l'''}$ respectively, we contradict minimality, and the claim follows. \end{proof} This lemma is still far too simplistic for our needs, because we will not be needing to regularise just one collection $\xi_1,\ldots,\xi_l$ of frequencies, but a whole \emph{family} $\xi_{h,1},\ldots,\xi_{h,l}$ of frequencies, where $h$ ranges over a parameter set $H$. Such frequencies can exhibit a range of behaviour in $h$; at one extreme, they might be completely independent of $h$, while at the other extreme, the frequencies may vary substantially as $h$ does. It turns out that in some sense, the general case is a combination of these extreme cases. In this direction we have the following stronger version of Lemma \ref{toddler}. \begin{lemma}[Regularising many collections of frequencies]\label{kid} Let $l \in \N$, let $\eps > 0$ be a limit real, let $H$ be a limit finite set, and for each $h \in H$, let $\xi_{h,1},\ldots,\xi_{h,l}$ be frequencies in $\ultra \T$ that depend in a limit fashion on $h$. Then there exists a dense subset $H'$ of $H$, standard natural numbers, $l_*, l',l''_*,l''' \in \N$, ``core'' frequencies $\xi_{*,1},\ldots,\xi_{*,l_*}, \xi''_{*,1},\ldots,\xi''_{l''_*} \in \ultra \T$, and ``petal'' frequencies \[ \xi'_{h,1},\ldots,\xi'_{h,l'}, \xi'''_{h,1},\ldots,\xi'''_{h,l'''} \in \ultra \T\] for each $h \in H'$ depending in a limit fashion on $h$, and obeying the following properties: \begin{itemize} \item[(i)] \textup{(Independence)} For almost all triples $(h_1,h_2,h_3) \in (H')^3$ \textup{(}i.e. 
for all but $o(|H'|^3)$ such triples\textup{)}, the frequencies \[ \xi_{*,1},\ldots,\xi_{*,l_*}, \xi'_{h_1,1},\ldots,\xi'_{h_1,l'}, \xi'_{h_2,1},\ldots,\xi'_{h_2,l'}, \xi'_{h_3,1},\ldots,\xi'_{h_3,l'} \] are linearly independent modulo $O(\eps)$. \item[(ii)] \textup{(Rationality)} For each $1 \leq j \leq l''_*$, $\xi''_{*,j}$ is a standard rational. \item[(iii)] \textup{(Smallness)} For each $h \in H'$ and $1 \leq j \leq l'''$, $\xi'''_{h,j} = O(\eps)$. \item[(iv)] \textup{(Representation)} For each $h \in H'$, the $\xi_{h,1},\ldots,\xi_{h,l}$ are linear combinations over $\Z$ of the frequencies \[ \xi_{*,1},\ldots,\xi_{*,l_*}, \xi'_{h,1},\ldots,\xi'_{h,l'}, \xi''_{*,1},\ldots,\xi''_{*,l''_*}, \xi'''_{h,1},\ldots,\xi'''_{h,l'''}.\] \end{itemize} \end{lemma} Note that Lemma \ref{kid} collapses to Lemma \ref{toddler} if $H$ is a singleton set. \begin{proof} We again use the usual argument. Define a \emph{partial solution} to be a collection of objects $H', l_*, l', l''_*, l''', \xi_{*,j}, \xi'_{h,j}, \xi''_{*,j}, \xi'''_{h,j}$ obeying all the required properties except possibly for the independence property. Again, at least one partial solution exists, since we may take $H' := H$, $l_* := l''_* := l''' := 0$, $l' := l$, and $\xi'_{h,j} := \xi_{h,j}$ for all $h \in H$ and $1 \leq j \leq l$. We may thus select a partial solution for which $l'$ is minimal; and among all such partial solutions with $l'$ minimal, we choose a solution with $l_*$ minimal for fixed $l'$ (i.e. we minimise with respect to the lexicographical ordering on $l'$ and $l_*$). We claim that this doubly minimal solution obeys the independence property, which would give the claim. Suppose the independence property fails. Carefully negating the quantifiers and using Lemma \ref{dense-dich}, we conclude that there exist standard integers $a_{*,j}$ for $1 \leq j \leq l_*$ and $a'_{i,j}$ for $i=1,2,3$ and $1 \leq j \leq l'$, not all zero, such that one has the relation $$ a_{*,1} \xi_{*,1} + \ldots + a_{*,l_*} \xi_{*,l_*} + \sum_{i=1}^3 \sum_{j=1}^{l'} a'_{i,j} \xi'_{h_i,j} = O(\eps) \mod 1$$ for many triples $(h_1,h_2,h_3) \in (H')^3$. Suppose first that all of the $a'_{i,j}$ vanish, so that we have a linear relation $$ a_{*,1} \xi_{*,1} + \ldots + a_{*,l_*} \xi_{*,l_*} = O(\eps) \mod 1$$ that only involves core frequencies. Then the situation is basically the same as that of Lemma \ref{toddler}; without loss of generality we may take $a_{*,1} \neq 0$, and if we then choose $\tilde \xi_{*,2},\ldots,\tilde \xi_{*,l_*}$ so that $a_{*,1} \tilde \xi_{*,j} = \xi_{*,j}$, then we can rewrite $$ \xi_{*,1} = -\sum_{j=2}^{l_*} a_{*,j} \tilde \xi_{*,j} + q + s \mod 1$$ for some $q \in \Q$ and $s = O(\eps)$, and one can then replace the $\xi_{*,1},\ldots,\xi_{*,l_*}$ with $\tilde \xi_{*,2},\ldots,\tilde \xi_{*,l_*}$ (decrementing $l_*$ by $1$), append $q$ to $\xi''_{*,1},\ldots,\xi''_{*,l''_*}$, and append $s$ to $\xi'''_{h,1},\ldots,\xi'''_{h,l'''}$ for each $h \in H'$, contradicting minimality. Now suppose that not all of the $a'_{i,j}$ vanish; without loss of generality we may assume that $a'_{1,1}$ is non-zero. By the pigeonhole principle, we can find $h_2, h_3 \in H'$ such that $$ a_{*,1} \xi_{*,1} + \ldots + a_{*,l_*} \xi_{*,l_*} + \sum_{j=1}^{l'} a'_{1,j} \xi'_{h_1,j} + \sum_{i=2}^3 \sum_{j=1}^{l'} a'_{i,j} \xi'_{h_i,j} = O(\eps) \mod 1$$ for all $h_1$ in a dense subset $H''$ of $H'$. 
Now let $\tilde \xi_{*,j} \in \ultra \T$ for $1 \leq j \leq l_*$ and $\tilde \xi'_{h,j} \in \ultra \T$ for $h \in H'$ and $1 \leq j \leq l'$ be such that $a'_{1,1} \tilde \xi_{*,j} = \xi_{*,j}$ and $a'_{1,1} \tilde \xi'_{h,j} = \xi'_{h,j}$. We then have $$ \xi'_{h_1,1} = - \sum_{j=2}^{l'} a'_{1,j} \tilde \xi'_{h_1,j} - \sum_{j=1}^{l_*} a_{*,j} \tilde \xi_{*,j} - \sum_{i=2}^3 \sum_{j=1}^{l'} a'_{i,j} \tilde \xi'_{h_i,j} + q_{h_1} + s_{h_1} \mod 1$$ for some standard rational $q_{h_1}$ and some $s_{h_1} = O(\eps)$. Furthermore one can easily ensure that $q_{h_1}, s_{h_1}$ depend in a limit fashion on $h_1$. By Lemma \ref{dense-dich} (and refining $H''$ if necessary) we may assume that $q_{h_1} = q_*$ is independent of $h_1$. We may thus replace $H'$ by $H''$ and replace $\xi'_{h,1},\ldots,\xi'_{h,l'}$ by $\tilde \xi'_{h,2},\ldots,\tilde \xi'_{h,l'}$ (decrementing $l'$ by $1$), while appending $q_*$ and $s_h$ to $\xi''_{*,1},\ldots,\xi''_{*,l''_*}$ and $\xi'''_{h,1},\ldots,\xi'''_{h,l'''}$ respectively, and replacing $\xi_{*,1},\ldots,\xi_{*,l_*}$ by $\tilde \xi_{*,1},\ldots,\tilde \xi_{*,l_*}, \tilde \xi'_{h_2,1},\ldots,\tilde \xi'_{h_2,l'}, \tilde \xi'_{h_3,1},\ldots,\tilde \xi'_{h_3,l'}$ (incrementing $l_*$ as necessary). This contradicts the minimality of the partial solution, and the claim follows. \end{proof} This is still too simplistic for our applications, as the independence hypothesis on triples $(h_1,h_2,h_3)$ will not quite be strong enough to give everything we need. Ideally, (in view of Proposition \ref{gcs-prop}) we would like to have independence of the $\xi_{*,1},\ldots,\xi_{*,l_*}, \xi'_{h_1,1},\ldots,\xi'_{h_4,l'}$ for almost all additive quadruples $h_1+h_2=h_3+h_4$ in $H'$. Unfortunately, this need not be the case; indeed, if the original $\xi_{h,i}$ are linear in $h$, say $\xi_{h,i} = \alpha_i h$ for some $\alpha_i \in \ultra \T$ and all $1 \leq i \leq l$, then we have $\xi_{h_1,i} + \xi_{h_2,i} = \xi_{h_3,i} + \xi_{h_4,i}$ for all additive quadruples $h_1+h_2=h_3+h_4$ in $H'$ and all $1 \leq i \leq l$, and as a consequence it is not possible to obtain a decomposition as in Lemma \ref{kid} with the stronger independence property mentioned above. A similar obstruction occurs if the $\xi_{h,i}$ are \emph{bracket}-linear in $h$, for instance if $\xi_{h,i} = \{ \alpha_i h \} \beta_i \mod 1$ for some $\alpha_i \in \ultra \T$ and $\beta_i \in \ultra \R$. By using tools from additive combinatorics, we can show that bracket-linear frequencies are the \emph{only} obstructions to independence on additive quadruples. More precisely, we have \begin{lemma}\label{teenager} Let $l \in \N$, let $\eps > 0$ be a limit real, let $H$ be a dense limit subset of $[[N]]$, and for each $h \in H$, let $\xi_{h,1},\ldots,\xi_{h,l}$ be frequencies in $\ultra \T$ that depend in a limit fashion on $h$. Then there exist a dense subset $H'$ of $H$, standard natural numbers $l_*, l',l''_*,l''',l'''' \in \N$, ``core'' frequencies $\xi_{*,1},\ldots,\xi_{*,l_*}, \xi''_{*,1},\ldots,\xi''_{*,l''_*} \in \ultra \T$, and ``petal'' frequencies $\xi'_{h,1},\ldots,\xi'_{h,l'},\xi'''_{h,1},\ldots,\xi'''_{h,l'''}, \xi''''_{h,1},\ldots,\xi''''_{h,l''''} \in \ultra \T$ for each $h \in H'$ depending in a limit fashion on $h$, obeying the following properties: \begin{itemize} \item[(i)] \textup{(Independence)} For almost all additive quadruples $h_1+h_2=h_3+h_4$ in $H'$ (i.e. 
for all but $o(|H'|^3)$ such quadruples), the frequencies $\xi_{*,j}$ for $1 \leq j \leq l_*$, $\xi'_{h_i,j}$ for $i=1,2,3,4$ and $1 \leq j \leq l'$, and $\xi''''_{h_i,j}$ for $i=1,2,3$ and $1 \leq j \leq l''''$ are jointly linearly independent modulo $O(\eps)$. \item[(ii)] \textup{(Rationality)} For each $1 \leq j \leq l''_*$, $\xi''_{*,j}$ is a standard rational. \item[(iii)] \textup{(Smallness)} For each $h \in H'$ and $1 \leq j \leq l'''$, $\xi'''_{h,j} = O(\eps)$. \item[(iv)] \textup{(Bracket-linearity)} For each $1 \leq j \leq l''''$, there exist $\alpha_j \in \ultra \T$ and $\beta_j \in \ultra \R$ such that $\xi''''_{h,j} = \{ \alpha_j h \} \beta_j \mod 1$ for all $h \in H'$. Furthermore, the map $h \mapsto \xi''''_{h,j}$ is a Freiman homomorphism on $H'$ \textup{(}see \S \ref{notation-sec} for the definition of a Freiman homomorphism\textup{)}. \item[(v)] \textup{(Representation)} For each $h \in H'$, the $\xi_{h,1},\ldots,\xi_{h,l}$ are linear combinations over $\Z$ of the frequencies \[ \xi_{*,1},\ldots,\xi_{*,l_*}, \xi'_{h,1},\ldots,\xi'_{h,l'}, \xi''_{*,1},\ldots,\xi''_{*,l''_*}, \xi'''_{h,1},\ldots,\xi'''_{h,l'''}, \xi''''_{h,1},\ldots,\xi''''_{h,l''''}.\] \end{itemize} \end{lemma} \begin{proof} As usual, we define a \emph{partial solution} to be a collection of objects $H'$, $l_*, l',l''_*,l''',l''''$, $\xi_{*,1},\ldots,\xi''''_{h,l''''}$, obeying all of the required properties except possibly for the independence property. Again, there is clearly at least one partial solution, so we select a partial solution with a minimal value of $l'$, and then (for fixed $l'$) a minimal value of $l''''$, and then (for fixed $l',l''''$) a minimal value of $l_*$. We claim that this partial solution obeys the independence property, which will give the lemma. Suppose for contradiction that this were not the case; then by Lemma \ref{dense-dich}, there exist standard integers $a_{*,j}$ for $1 \leq j \leq l_*$, $a'_{i,j}$ for $1 \leq i \leq 4$ and $1 \leq j \leq l'$, and $a''''_{i,j}$ for $1 \leq i \leq 3$ and $1 \leq j \leq l''''$, not all zero, such that $$ \sum_{j=1}^{l_*} a_{*,j} \xi_{*,j} + \sum_{i=1}^4 \sum_{j=1}^{l'} a'_{i,j} \xi'_{h_i,j} + \sum_{i=1}^3 \sum_{j=1}^{l''''} a''''_{i,j} \xi''''_{h_i,j} = O(\eps) \mod 1$$ for many additive quadruples $h_1+h_2=h_3+h_4$ in $H'$. Suppose first that all the $a'_{i,j}$ and $a''''_{i,j}$ vanished. Then we have a relation $$ \sum_{j=1}^{l_*} a_{*,j} \xi_{*,j} = O(\eps) \mod 1$$ that only involves core frequencies; arguing as in Lemma \ref{kid} we can thus find another partial solution with a smaller value of $l_*$ (and the same value of $l'$, $l''''$), contradicting minimality. Next, suppose that the $a'_{i,j}$ all vanished, but the $a''''_{i,j}$ did not all vanish. Then we have a relation \begin{equation}\label{triplicate} \sum_{j=1}^{l_*} a_{*,j} \xi_{*,j} + \sum_{i=1}^3 \sum_{j=1}^{l''''} a''''_{i,j} \xi''''_{h_i,j} = O(\eps) \mod 1 \end{equation} for many triples $h_1,h_2,h_3$ in $H'$. Without loss of generality let us suppose that $a''''_{1,1}$ is non-zero. By the pigeonhole principle, we may find $h_2,h_3 \in H'$ such that \eqref{triplicate} holds for all $h_1$ in a dense subset $H''$ of $H'$. As in previous arguments, we then find $\tilde \xi_{*,j} \in \ultra \T$ such that $a''''_{1,1} \tilde \xi_{*,j} = \xi_{*,j}$ for each $1 \leq j \leq l_*$, and also find $\tilde \beta_j \in \ultra \R$ such that $a''''_{1,1} \tilde \beta_j = \beta_j$ for all $1 \leq j \leq l''''$. 
If we then set $\tilde \xi''''_{h,j} := \{ \alpha_j h \} \tilde \beta_j$ for each $h \in H'$ and $1 \leq j \leq l''''$, then $a''''_{1,1} \tilde \xi''''_{h,j} = \xi''''_{h,j}$, and so for any $h_1 \in H'$ we have $$ \xi''''_{h_1,1} = - \sum_{j=1}^{l_*} a_{*,j} \tilde \xi_{*,j} - \sum_{j=2}^{l''''} a''''_{1,j} \tilde \xi''''_{h_1,j} - \sum_{i=2}^3 \sum_{j=1}^{l''''} a''''_{i,j} \tilde \xi''''_{h_i,j} + q_{h_1} + s_{h_1} \mod 1$$ for some standard rational $q_{h_1}$ and some $s_{h_1} = O(\eps)$, both depending in a limit fashion on $h_1$. By refining $H'$ if necessary (and using the bracket-linear nature of the $\tilde \xi''''_{h,j}$) we may assume that the map $h \mapsto \tilde \xi''''_{h,j}$ is a Freiman homomorphism on $H'$, and by Lemma \ref{dense-dich} we may make $q_{h_1} = q_*$ independent of $h_1$. If we then argue as in the proof of Lemma \ref{kid}, we may find a new partial solution with a smaller value of $l''''$ and the same value of $l'$, contradicting minimality. Finally, suppose that the $a'_{i,j}$ did not all vanish. Using the Freiman homomorphism property to permute the $i$ indices if necessary, we may assume that $a'_{4,1}$ does not vanish. We then have $$ \Xi_1(h_1) + \Xi_2(h_2) + \Xi_3(h_3) + \Xi_4(h_4) = O(\eps) \mod 1$$ for many additive quadruples $h_1+h_2=h_3+h_4$ in $H'$, where the limit functions $\Xi_i: H \to \ultra \T$ are defined by $$ \Xi_i(h) := \sum_{j=1}^{l'} a'_{i,j} \xi'_{h,j} + \sum_{j=1}^{l''''} a''''_{i,j} \xi''''_{h,j} \mod 1$$ for $i=1,2,3$ and $h \in H$, and $$ \Xi_4(h) := \sum_{j=1}^{l_*} a_{*,j} \xi_{*,j} + \sum_{j=1}^{l'} a'_{4,j} \xi'_{h,j} \mod 1.$$ We can use this additive structure to ``solve'' for $\Xi_4$, using a result from additive combinatorics which we present here as Lemma \ref{lin}. Applying this lemma, we can then find a dense limit subset of $H'$ (which, abusing notation, we continue to call $H'$), a standard integer $K$, and frequencies $\alpha'_1,\ldots,\alpha'_K, \delta \in \ultra \T$ and $\beta'_1,\ldots,\beta'_K \in \ultra \R$ such that $$ \Xi_4(h) = \sum_{k=1}^K \{ \alpha'_k h \} \beta'_k + \delta + O(\eps) \mod 1$$ and thus $$ a'_{4,1} \xi'_{h,1} = \sum_{k=1}^K \{ \alpha'_k h \} \beta'_k + \delta - \sum_{j=1}^{l_*} a_{*,j} \xi_{*,j} - \sum_{j=2}^{l'} a'_{4,j} \xi'_{h,j} + O(\eps) \mod 1$$ for all $h \in H'$. As usual, we now find $\tilde \beta'_k \in \ultra \R$ for $1 \leq k \leq K$, $\tilde \beta_j \in \ultra \R$ for $1 \leq j \leq l''''$, $\tilde \delta \in \ultra \T$ and $\tilde \xi_{*,j}$ for $1 \leq j \leq l_*$ such that $a'_{4,1} \tilde \beta'_k = \beta'_k$, $a'_{4,1} \tilde \beta_j = \beta_j$, $a'_{4,1} \tilde \delta = \delta$, and $a'_{4,1} \tilde \xi_{*,j} = \xi_{*,j}$. We then set $\tilde \xi''''_{h,j} := \{ \alpha_j h \} \tilde \beta_j \mod 1$ for $1 \leq j \leq l''''$, and choose $\tilde \xi'_{h,j} \in \ultra \T$ with $a'_{4,1} \tilde \xi'_{h,j} = \xi'_{h,j}$ for $2 \leq j \leq l'$, and we conclude that $$ \xi'_{h,1} = \sum_{k=1}^K \{ \alpha'_k h \} \tilde \beta'_k + \tilde \delta - \sum_{j=1}^{l_*} a_{*,j} \tilde \xi_{*,j} - \sum_{j=2}^{l'} a'_{4,j} \tilde \xi'_{h,j} + q_h + s_h \mod 1$$ for all $h \in H'$, where $q_h \in \Q$ and $s_h = O(\eps)$ depend in a limit fashion on $h$. By refining $H'$ we may take $q_h = q_*$ independent of $h$. We can then use this relation to build a new partial solution that decreases $l'$ by $1$, at the expense of enlarging the other dimensions $l_*, l''_*, l''', l''''$ (and also refining $H$ to $H'$), again contradicting minimality, and the claim follows. \end{proof} We now apply the above lemma to the language of horizontal frequency vectors introduced in the previous section. 
We need some definitions: \begin{definition}[Properties of horizontal frequency vectors] Let \[ \F = (\xi_{i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_i}\; \; \mbox{and} \; \; \F' = (\xi'_{i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D'_i}\] be horizontal frequency vectors. \begin{itemize} \item We say that $\F$ is \emph{independent} if, for each $1 \leq i \leq s-1$, the tuple $(\xi_{i,j})_{1 \leq j \leq D_i}$ is independent modulo $O(N^{-i})$. \item We say that $\F$ is \emph{rational} if all the $\xi_{i,j}$ are standard rationals. \item We say that $\F$ is \emph{small} if one has $\xi_{i,j} = O(N^{-i})$ for all $1 \leq i \leq s-1$ and $1 \leq j \leq D_i$. \item We define the \emph{disjoint union} $\F \uplus \F' = (\xi''_{i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_i+D'_i}$ by declaring $\xi''_{i,j}$ to equal $\xi_{i,j}$ if $j \leq D_i$ and $\xi'_{i,j-D_i}$ if $D_i < j \leq D_i+D'_i$. This is clearly a horizontal frequency vector with dimensions $(D_1+D'_1,\ldots,D_{s-1}+D'_{s-1})$. \item We say that $\F$ is \emph{represented} by $\F'$ if for every $1 \leq i \leq s-1$ and $1 \leq j \leq D_i$, $\xi_{i,j}$ is a standard integer linear combination of the $\xi'_{i,j'}$ for $1 \leq j' \leq D'_i$. \end{itemize} \end{definition} \begin{lemma}[Sunflower lemma]\label{sunflower-basic} Let $H$ be a dense subset of $[[N]]$, and let $(\F_h)_{h \in H}$ be a family of horizontal frequency vectors depending in a limit fashion on $h$, whose dimension vector $\vec D = \vec D_h$ is independent of $h$. Then we can find the following objects: \begin{itemize} \item A dense subset $H'$ of $H$; \item Dimension vectors $\vec D_* = \vec D_{*,\ind} + \vec D_{*,\rat}$ and $\vec D' = \vec D'_\lin + \vec D'_\ind + \vec D'_\sml$, which we write as $\vec D_* = (D_{*,i})_{i=1}^{s-1}$, $\vec D_{*,\ind} = (D_{*,\ind,i})_{i=1}^{s-1}$, etc.; \item A \emph{core horizontal frequency vector} $\F_* = (\xi_{*,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_{*,i}}$, which is partitioned as $\F_* = \F_{*,\ind} \uplus \F_{*,\rat}$, with the indicated dimension vectors $\vec D_{*,\ind}, \vec D_{*,\rat}$; \item A \emph{petal horizontal frequency vector} $\F'_h = (\xi'_{h,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D'_i}$ for each $h \in H'$, which is partitioned as $\F'_h = \F'_{h,\lin} \uplus \F'_{h,\ind} \uplus \F'_{h,\sml}$, which is a limit function of $h$ and with the indicated dimension vectors $\vec D'_\lin, \vec D'_\ind, \vec D'_\sml$ \end{itemize} which obey the following properties: \begin{itemize} \item For all $h \in H'$, $\F'_{h,\sml}$ is small. \item $\F_{*,\rat}$ is rational. \item For every $1 \leq i \leq s-1$ and $1 \leq j \leq D'_{\lin,i}$, there exist $\alpha_{i,j} \in \ultra\T$ and $\beta_{i,j} \in \ultra \R$ such that \eqref{xih-def} holds for all $h \in H'$, and furthermore the map $h \mapsto \xi'_{h,i,j}$ is a Freiman homomorphism on $H'$. \item For all $h \in H'$, $\F_h$ is represented by $\F_* \uplus \F'_h$. \item \textup{(Independence property)} For almost all additive quadruples $(h_1,h_2,h_3,h_4)$ in $H'$, $$\F_{*,\ind} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=1}^3 \F'_{h_i,\lin}$$ is independent. \end{itemize} \end{lemma} \begin{proof} Write $\F_h = (\xi_{h,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_i}$. For each $1 \leq i \leq s-1$ in turn, apply Lemma \ref{teenager} to the collections $(\xi_{h,i,1},\ldots,\xi_{h,i,D_i})_{h \in H}$ with $\eps := N^{-i}$, refining $H$ once for each $i$. The claim then follows by relabeling. \end{proof} To apply this lemma to families of nilcharacters, we will need two additional lemmas. 
\begin{lemma}[Change of basis]\label{basis-change} Suppose that $\chi \in \Xi^{(s-1,r_*)}_\DR([N])$ is a degree-rank $(s-1,r_*)$ nilcharacter with a total frequency representation $(\vec D, \F, \eta)$, and suppose that $\F$ is represented by another horizontal frequency vector $\F'$ with a dimension vector $\vec D'$. Then there exists a vertical frequency $\eta': G^{\vec D'}_{(s-1,r_*)} \to \R$ such that $\chi$ has a total frequency representation $(\vec D', \F', \eta')$. \end{lemma} \begin{proof} By hypothesis, each element $\xi_{i,j}$ of $\F$ can be expressed as a standard linear combination $\xi_{i,j} = \sum_{j'=1}^{D'_i} c_{i,j,j'} \xi'_{i,j'}$ of elements $\xi'_{i,j'}$ of $\F'$ of the same degree, where $c_{i,j,j'} \in \Z$. Now let $\psi: G^{\vec D'} \to G^{\vec D}$ be the unique filtered homomorphism that maps $e'_{i,j'}$ to $\prod_{j=1}^{D_i} e_{i,j}^{c_{i,j,j'}}$ (this can be viewed as an ``adjoint'' of the representation of $\F$ by $\F'$). By hypothesis, we have a representation $\chi(n) = F( \orbit(n), \orbit_0(n))$ with $$ \Taylor_i(\orbit) = \pi_{\Horiz_i(G/\Gamma)}\left(\phi( \prod_{j=1}^{D_i} e_{i,j}^{\xi_{i,j}} )\right) $$ for some filtered homomorphism $\phi: G^{\vec D} \to G$. A brief calculation shows that the right-hand side can also be expressed as $$ \pi_{\Horiz_i(G/\Gamma)}\left(\phi \circ \psi( \prod_{j=1}^{D'_i} (e'_{i,j})^{\xi'_{i,j}} )\right).$$ As $\phi \circ \psi: G^{\vec D'} \to G$ is a filtered homomorphism, and $\eta \circ \psi: G^{\vec D'}_{(s-1,r_*)} \to \R$ is a vertical frequency, we obtain the claim. \end{proof} \begin{lemma}\label{discard} Let $\F$ be a horizontal frequency vector of dimension $\vec D$ of the form $$ \F = \F_{\rat} \uplus \F_{\sml} \uplus \F'$$ where $\F_\rat$ is rational and $\F_\sml$ is small, and $\F'$ has dimension $\vec D'$. Suppose that $\chi \in \Xi^{(s-1,r_*)}_\DR([N])$ is a nilcharacter with a total frequency representation $(\vec D, \F, \eta)$. Then there exists a vertical frequency $\eta': G^{\vec D'}_{(s-1,r_*)} \to \R$ such that $\chi$ has a total frequency representation $(\vec D', \F'/M, \eta')$ for some standard integer $M \geq 1$. \end{lemma} \emph{Remark.} This lemma crucially relies on the hypothesis $s \geq 3$, as this hypothesis makes the (degree $1$) contributions of rational and small frequencies of lower order. Because of this, the inverse conjecture for $s > 2$ is in this one minor respect slightly simpler than the $s \leq 2$ theory, though it is of course more complicated in many other ways. \begin{proof} By induction we may assume that $\F$ is formed from $\F'$ by adding a single frequency $\xi_{i_0,D_{i_0}}$, which is either rational or small. Let us first suppose that we are adding a single frequency which is not just rational, but is in fact an integer. Then if $\chi(n) = F(g(n)\ultra \Gamma, g_0(n) \ultra \Gamma_0)$ is a nilcharacter with a total frequency representation $(\vec D,\F,\eta)$, then we have a filtered homomorphism $\phi: G^{\vec D}/\Gamma^{\vec D} \to G/\Gamma$ such that $$ g_i = \prod_{j=1}^{D_i} \phi(e_{i,j})^{\xi_{i,j}} \hbox{ mod } G_{(i,1)} $$ for all $1 \leq i \leq s-1$, where $g_i$ are the Taylor coefficients of $g$. 
Specialising to the degree $i_0$ and using the integer nature of $\xi_{i_0,D_{i_0}}$, we have $$ g_{i_0} = g'_{i_0} \gamma_{i_0}$$ where $\gamma_{i_0}$ is an element of $\Gamma_{i_0}$, and $$g'_i = \prod_{j=1}^{D_i-1} \phi(e_{i,j})^{\xi_{i,j}} \hbox{ mod } G_{(i,1)}.$$ From this and the Baker-Campbell-Hausdorff formula \eqref{bch}, we can write $g(n) = g'(n) \gamma_{i_0}^{\binom{n}{i_0}}$, where $g'$ is a polynomial sequence with a horizontal frequency representation $(\vec D', \phi', \F')$, where $\vec D'$ is $\vec D$ with $D_{i_0}$ decremented by one, and $\phi'$ is the restriction of $\phi$ to the subnilmanifold $G^{\vec D'}/\Gamma^{\vec D'}$. Since $g(n) \ultra \Gamma = g'(n) \ultra \Gamma$, we see that $\chi$ has a total frequency representation $(\vec D', \F', \eta')$, where $\eta'$ is the restriction of $\eta: G^{\vec D}_{(s-1,r_*)} \to \R$ to $G^{\vec D'}_{(s-1,r_*)}$. This gives the claim in this case (with $M=1$). Now suppose that $\xi_{i_0,D_{i_0}}$ is merely rational rather than integer. Then we can argue as before, except that now $\gamma_{i_0}$ is a rational element of $G_{i_0}$, so that $\gamma_{i_0}^m \in \Gamma_{i_0}$ for some standard positive integer $m$. As such, there exists a standard positive integer $q$ such that $\gamma_{i_0}^{\binom{n}{i_0}} \mod \ultra \Gamma$ is periodic with period $q$. As a consequence, there exists a bounded index subgroup $\Gamma'$ of $\Gamma$ such that the point $$ g'(n) \gamma_{i_0}^{\binom{n}{i_0}} \mod \ultra \Gamma$$ in $G/\Gamma$ can be expressed as a Lipschitz function of $$ g'(n) \mod \ultra \Gamma'$$ and of the quantity $n/q \mod 1$. Repeating the previous arguments, we thus obtain a total frequency representation $(\vec D', \tilde \F', \eta')$ for some $\eta'$, and some $\tilde \F'$ whose coefficients are rational combinations of those of $\F'$; note that the $n/q$ dependence can be easily absorbed into the lower order term $G_0/\Gamma_0$ since $s \geq 3$. The claim then follows from Lemma \ref{basis-change}. Finally, suppose that $\xi_{i_0,D_{i_0}}$ is small rather than rational. Then we can write $$ g_{i_0} = c_{i_0} g'_{i_0}$$ where $g'_{i_0}$ is as before, and $c_{i_0} \in G_{i_0}$ is at a distance $O(N^{-i_0})$ from the origin. We can thus write $$ g(n) = c_{i_0}^{\binom{n}{i_0}} g'(n)$$ where $g'$ is a polynomial sequence with horizontal frequency representation \[ (\vec D', \phi',\F').\] On $[N]$, the sequence $c_{i_0}^{\binom{n}{i_0}}$ can be expressed as a bounded Lipschitz function of $n/2N \hbox{ mod } 1$. As a consequence, we can thus write $\chi$ in the form $$ \chi(n) = F'( g'(n) \ultra \Gamma, g_0(n) \ultra \Gamma_0, n/2N \hbox{ mod } 1 )$$ for some $F' \in \Lip(\ultra( G/\Gamma \times G_0/\Gamma_0 \times \T ))$. As $s \geq 3$, the final term $\T$ can be absorbed into the degree-rank $\leq (s-1,r_*-1)$ nilmanifold $G_0/\Gamma_0$, and the claim follows (with $M=1$). \end{proof} Finally, we can state the main result of this section. \begin{lemma}[Sunflower lemma]\label{sunflower} Let $H$ be a dense subset of $[[N]]$, and let $(\chi_h)_{h \in H}$ be a family of nilcharacters $\chi_h \in \Xi^{(s-1,r_*)}_\DR([N])$ depending in a limit fashion on $h$. 
Then we can find \begin{enumerate} \item A dense subset $H'$ of $H$; \item Dimension vectors $\vec D_*$ and $\vec D' = \vec D'_\lin + \vec D'_\ind$, which we write as $\vec D_* = (D_{*,i})_{i=1}^{s-1}$, $\vec D' = (D'_{i})_{i=1}^{s-1}$, $\vec D'_\lin = (D'_{\lin,i})_{i=1}^{s-1}$, $\vec D'_\ind = (D'_{\ind,i})_{i=1}^{s-1}$; \item A \emph{core horizontal frequency vector} $\F_* = (\xi_{*,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_{*,i}}$; \item A \emph{petal horizontal frequency vector} $\F'_h = (\xi'_{h,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D'_i}$, which is partitioned as $\F'_h = \F'_{h,\lin} \uplus \F'_{h,\ind}$, which is a limit function of $h$, where $\F'_{h,\lin}$, $\F'_{h,\ind}$ have dimensions $\vec D'_\lin$, $\vec D'_\ind$ respectively; \item A vertical frequency $\eta: G^{\vec D_* + \vec D'}_{(s-1,r_*)} \to \R$ with dimension vector $\vec D_* + \vec D'$ \end{enumerate} which obey the following properties: \begin{enumerate} \item \textup{($\F'_{h,\lin}$ is bracket-linear)} For every $1 \leq i \leq s-1$ and $1 \leq j \leq D'_{\lin,i}$, there exist $\alpha_{i,j} \in \ultra\T$ and $\beta_{i,j} \in \ultra \R$ such that \begin{equation}\label{xih-def} \xi'_{h,i,j} = \{ \alpha_{i,j} h \} \beta_{i,j} \mod 1 \end{equation} for all $h \in H'$, and furthermore the map $h \mapsto \xi'_{h,i,j}$ is a Freiman homomorphism on $H'$. \item \textup{(Independence)} For almost all additive quadruples $(h_1,h_2,h_3,h_4)$ in $H'$, $$\F_{*} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=1}^3 \F'_{h_i,\lin}$$ is independent. \item \textup{(Representation)} For all $h \in H'$, $\chi_h$ has a total frequency representation $( \vec D_* + \vec D', \F_* \uplus \F'_h, \eta )$. \end{enumerate} \end{lemma} \begin{proof} Each $\chi_h$ has a total frequency representation $(\vec D_h, \F_h, \eta_h)$. The space of representations is a $\sigma$-limit set, so by Lemma \ref{int-select} we may assume that $(\vec D_h, \F_h, \eta_h)$ depends in a limit fashion on $h$. The number of possible dimension vectors is countable. Applying Lemma \ref{dense-dich}, and passing from $H$ to a dense subset, we may assume that $\vec D = \vec D_h$ is independent of $h$. We then apply Lemma \ref{sunflower-basic} to the $(\F_h)_{h \in H}$, obtaining a dense subset $H'$ of $H$, dimension vectors $\vec D_* = \vec D_{*,\ind} + \vec D_{*,\rat}$ and $\vec D' = \vec D'_\lin + \vec D'_\ind + \vec D'_\sml$, a core horizontal frequency vector $\F_* = \F_{*,\ind} \uplus \F_{*,\rat}$, and petal horizontal frequency vectors $\F'_h = \F'_{h,\lin} \uplus \F'_{h,\ind} \uplus \F'_{h,\sml}$ for each $h \in H'$ with the stated properties. Applying Lemma \ref{basis-change}, we see that for each $h \in H'$, $\chi_h$ has a total frequency representation $$ (\vec D_* + \vec D', \F_* \uplus \F'_h, \eta'_h )$$ for some vertical frequency $\eta'_h$. Applying Lemma \ref{discard}, we conclude that $\chi_h$ has a total frequency representation $$ (\vec D_{*,\ind} + \vec D'_\lin + \vec D'_\ind, \F_{*,\ind} \uplus \F'_{h,\lin} \uplus \F'_{h,\ind}, \eta''_h )$$ for some vertical frequency $\eta''_h$. The number of vertical frequencies $\eta''_h$ is countable, so by Lemma \ref{dense-dich} we may assume that $\eta = \eta''_h$ is also independent of $h$. The claim then follows after relabeling $\F_{*,\ind}$ as $\F_*$ (and $\vec D_{*,\ind}$ as $\vec D_*$). \end{proof} \section{Obtaining bracket-linear behaviour}\label{linear-sec} We return now to the task of proving Theorem \ref{linear-induct}. 
To recall the situation thus far, we are given a two-dimensional nilcharacter $\chi \in \Xi^{(1,s-1)}_\MD(\ultra \Z^2)$ and a family of degree-rank $(s-1,r_*)$ nilcharacters $(\chi_h)_{h \in H}$ depending in a limit fashion on a parameter $h$ in a dense subset $H$ of $[[N]]$, with the property that there is a function $f \in L^\infty[N]$ such that $\chi(h,\cdot) \otimes \chi_h$ $(s-2)$-correlates with $f$ for all $h \in H$. Using Proposition \ref{gcs-prop} to eliminate $f$ and $\chi$, and refining $H$ to a dense subset if necessary, we conclude that the nilcharacter \eqref{gowers-cs-arg} is $(s-2)$-biased for many additive quadruples $h_1+h_2=h_3+h_4$ in $H$. We make the simple but important remark that this conclusion is ``hereditary'' in the sense that it continues to hold if we replace $H$ with an arbitrary dense subset $H'$ of $H$, since the hypothesis of Proposition \ref{gcs-prop} clearly restricts from $H$ to $H'$ in this fashion. Next, we apply Lemma \ref{sunflower} to obtain a dense refinement $H'$ of $H$ for which the $\chi_h$ have a frequency representation involving various types of frequencies: a core set of frequencies $\F_*$, a bracket-linear family $(\F'_{h,\lin})_{h \in H'}$ of petal frequencies and an independent family $(\F'_{h,\ind})_{h \in H'}$ of petal frequencies. The main result of this section uses the bias of \eqref{gowers-cs-arg}, combined with the quantitative equidistribution theory on nilmanifolds (as reviewed in Appendix \ref{equiapp}) to obtain an important milestone towards establishing Theorem \ref{linear-induct}, namely that the independent petal frequencies $\F'_{h,\ind}$ do not actually have any influence on the top-order behaviour of the nilcharacters $\chi_h$, and that the bracket-linear frequencies only influence this top-order behaviour in a linear fashion. For this, we use an argument of Furstenberg and Weiss \cite{fw-char} that was also used in the predecessor \cite{u4-inverse} to this paper. See \cite{gtz-announce} for another, somewhat simplified, exposition of this argument. We begin by formally stating the result we will prove in this section. \begin{theorem}[No petal-petal or regular terms]\label{slang-petal} Let $f,H,\chi,(\chi_h)_{h \in H}$ be as in Theorem \ref{linear-induct} and let $H', \vec D_*, \vec D', \vec D'_\lin, \vec D'_\ind, \F_*, \F'_h, \F'_{h,\lin}, \F'_{h,\ind}, \eta$ be as in Lemma \ref{sunflower}. Let $w \in G^{\vec D_* + \vec D'}$ be an $(r_*-1)$-fold commutator of $e_{i_1,j_1},\ldots,e_{i_{r_*},j_{r_*}}$, where $1 \leq i_1,\ldots,i_{r_*} \leq s-1$, $i_1+\ldots+i_{r_*}=s-1$, and $1 \leq j_l \leq D_{*,i_l} + D'_{i_l}$ for all $l$ with $1 \leq l \leq r_*$. \begin{enumerate} \item \textup{(No petal-petal terms)} If $j_l > D_{*,i_l}$ for at least two values of $l$, then $\eta(w)=0$. 
\item \textup{(No regular terms)} If $j_l > D_{*,i_l} + D'_{\lin,i_l}$ for at least one value of $l$, then $\eta(w)=0$. \end{enumerate} \end{theorem} The remainder of this section is devoted to the proof of Theorem \ref{slang-petal}. Let the notation and assumptions be as in the above theorem. From Proposition \ref{gcs-prop} we know that, for many additive quadruples $(h_1,h_2,h_3,h_4)$ in $H'$, the sequence \eqref{gowers-cs-arg} is $(s-2)$-biased. Also, from Lemma \ref{sunflower}, we see that for almost all of these quadruples, the horizontal frequency vectors \begin{equation}\label{jinnai-1} \F_{*} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=a,b,c} \F'_{h_i,\lin} \end{equation} are independent for all distinct $a,b,c \in \{1,2,3,4\}$. We may therefore find an additive quadruple $(h_1,h_2,h_3,h_4)$ for which \eqref{gowers-cs-arg} is $(s-2)$-biased, and for which \eqref{jinnai-1} is independent for all choices of distinct $a,b,c \in \{1,2,3,4\}$. Fix $(h_1,h_2,h_3,h_4)$ with these properties. We convert the above information to a non-equidistribution result concerning a polynomial orbit. For each $i=1,2,3,4$, we see from Lemma \ref{sunflower} that $\chi_{h_i}$ has a total frequency representation $$ ( \vec D_* + \vec D', \F_* \uplus \F'_{h_i}, \eta ).$$ We write $$ \F_* \uplus \F'_{h_i} = ( \xi_{h_i,j,k} )_{1 \leq j \leq s-1; 1 \leq k \leq D_j},$$ where $$ D_j = D_{*,j} + D'_j;$$ thus the frequencies associated to $\F_{*}$, $\F'_{h_i,\ind}$, $\F'_{h_i,\lin}$ correspond to the ranges $1 \leq k \leq D_{*,j}$, $D_{*,j} < k \leq D_{*,j}+D'_{\ind,j}$, and $D_{*,j} + D'_{\ind,j} < k \leq D_j$ respectively. As \eqref{gowers-cs-arg} is $(s-2)$-biased, we conclude that \begin{equation}\label{expect} |\E_{n \in [N]} \chi_{h_1}(n) \otimes \chi_{h_2}(n+h_1-h_4) \otimes \overline{\chi_{h_3}}(n) \otimes \overline{\chi_{h_4}}(n+h_1-h_4) \psi_{h_1,h_2,h_3,h_4}(n)| \gg 1 \end{equation} for some degree $\leq (s-2)$ nilsequence $\psi_{h_1,h_2,h_3,h_4}$, where $\chi_h$ is defined to be zero outside of $[N]$. As any cutoff to an interval can be approximated to arbitrary standard accuracy by a degree $1$ nilsequence, and $s \geq 3$, we see that the same claim holds if $\chi_h$ is instead extended to be a nilsequence on all of $\ultra \Z$. From Definition \ref{nilch-def} and the total frequency representation of the $\chi_{h_i}$, we can rewrite the sequence inside the expectation of \eqref{expect} as a degree-rank $\leq (s-1,r_*)$ nilsequence $n \mapsto F(\orbit(n))$. Here $G/\Gamma$ is the product nilmanifold\footnote{Unfortunately, there will be several types of subscripts on nilpotent Lie groups $G$ in this argument. Firstly one has the factor groups $G_{(i)}$. Then one also has the degree filtration groups $G_d$ and the degree-rank filtration groups $G_{(d,r)}$ of $G$ (and also the analogous subgroups $(G_{(i)})_d$, $(G_{(i)})_{(d,r)}$ of the factor groups $G_{(i)}$), as well as the free nilpotent groups $G^{\vec D} = G^{\vec D}_{(s-1,r_*)}$. Finally, a Ratner subgroup $G_P$ of $G$ will also make an appearance later. 
We hope that these notations can be kept separate from each other.} $$ G/\Gamma := \left(\prod_{i=1}^4 G_{(i)}/\Gamma_{(i)}\right) \times G_{(0)}/\Gamma_{(0)}$$ for some filtered nilmanifold $G_{(0)}/\Gamma_{(0)}$ of degree-rank $<(s-1,r_*-1)$ and filtered nilmanifolds $G_{(i)}/\Gamma_{(i)}$ of degree-rank $\leq(s-1,r_*)$ for $i=1,2,3,4$. The orbit $\orbit$ is defined by $$\orbit = (\orbit_{(1)},\orbit_{(2)},\orbit_{(3)},\orbit_{(4)},\orbit_{(0)}) \in \ultra \poly(\Z_\N \to (G/\Gamma)_\N)$$ where, for each $i,j$ with $1 \leq i \leq 4$ and $1 \leq j \leq s-1$ we have \begin{equation}\label{gij-spin} \Taylor_j(\orbit_{(i)}) = \pi_{\Horiz_j(G_{(i)}/\Gamma_{(i)})}\left(\phi_{(i)}(\prod_{1 \leq k \leq D_j} e_{j,k}^{\xi_{h_i,j,k}})\right) \end{equation} where $\vec D := (D_1,\ldots,D_{s-1})$, $\phi_{(i)}: G^{\vec D}/\Gamma^{\vec D} \to G_{(i)}/\Gamma_{(i)}$ is a filtered homomorphism and $\pi_{\Horiz_j(G_{(i)}/\Gamma_{(i)})}: (G_{(i)})_j \to \Horiz_j(G_{(i)}/\Gamma_{(i)})$ is the projection to the $j^{\operatorname{th}}$ horizontal torus. Finally $F \in \Lip(\ultra(G/\Gamma))$ is defined by \begin{align}\nonumber F( \phi_{(1)} & (t_{(1)}) x_{(1)}, \ldots, \phi_{(4)}(t_{(4)}) x_{(4)}, y ) = \\ & e( (\eta(t_{(1)})+\eta(t_{(2)})-\eta(t_{(3)})-\eta(t_{(4)})) ) F(x_{(1)},\ldots,x_{(4)},y)\label{fallow} \end{align} for all $(x_{(1)},\ldots,x_{(4)},y) \in G/\Gamma$ and $t_{(1)},\ldots,t_{(4)} \in G^{\vec D}_{(s-1,r_*)}$. (Note that the shifts by $h_1-h_4$ in \eqref{expect} do not affect the Taylor coefficients of $\orbit_{(i)}$, thanks to the remarks following Definition \ref{horton}.) By hypothesis, we have $$ |\E_{n \in [N]} F( \orbit(n) )| \gg 1.$$ Applying Theorem \ref{ratt}, we conclude that \begin{equation}\label{gapp} |\int_{G_P / \Gamma_P} F(\eps x)\ d\mu(x)| \gg 1 \end{equation} for some bounded $\eps \in G$ and some rational subgroup $G_P$ of $G$ with the property that \begin{equation}\label{soo} \pi_{\Horiz_j(G)}(G_P \cap G_{j}) \geq \Xi_j^\perp \end{equation} for all $1 \leq j \leq s-1$, where $$ \Xi_j^\perp := \{ x \in \Horiz_j(G) : \xi_j(x) = 0 \hbox{ for all } \xi_j \in \Xi_j \}$$ and $\Xi_j \leq \widehat{\Horiz_j(G/\Gamma)}$ is the group of all (standard) continuous homomorphisms $\xi_j: \Horiz_j(G/\Gamma) \to \T$ such that $$ \xi_j( \Taylor_j(\orbit) ) = O( N^{-j} ).$$ From \eqref{fallow} and \eqref{gapp} we conclude the following lemma. \begin{lemma}\label{gapp-vanish} The group $G_P \cap ((G_{(1)})_{(s-1,r_*)} \times \{\id\} \times \{\id\} \times \{\id\} \times \{\id\})$ is annihilated by $\eta$. \end{lemma} \begin{proof} Let $g = (g_{(1)},\id,\id,\id,\id)$ lie in the indicated group. Then $g$ is central, and so from the invariance of Haar measure we have $$ \int_{G_P / \Gamma_P} F(\eps x)\ d\mu(x) = \int_{G_P / \Gamma_P} F(g \eps x)\ d\mu(x).$$ On the other hand, from \eqref{fallow} we have $$ \int_{G_P / \Gamma_P} F(g \eps x)\ d\mu(x) = e(\eta(g)) \int_{G_P / \Gamma_P} F(\eps x)\ d\mu(x).$$ Comparing these relationships with \eqref{gapp} we obtain the claim. \end{proof} We now analyse the group $G_P$ further. For each $1 \leq j \leq s-1$, let $V_{123,j}$ denote the subgroup of $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)})$ generated by the diagonal elements $$ (\phi_{(1)}(e_{j,k}), \phi_{(2)}(e_{j,k}), \phi_{(3)}(e_{j,k}))$$ for $1 \leq k \leq D_{*,j}$, and by the elements $$ (\phi_{(1)}(e_{j,k}), \id, \id), (\id, \phi_{(2)}(e_{j,k}), \id), (\id, \id, \phi_{(3)}(e_{j,k}))$$ for $D_{*,j} < k \leq D_j$. 
We define the subgroup $V_{124,j}$ of $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(4)})$ similarly by replacing $(3)$ with $(4)$ throughout. \begin{lemma}[Components of $G_P$]\label{gp-comp} Let $1 \leq j \leq s-1$. Then the projection of $G_P \cap G_{j}$ to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)})$ contains $V_{123,j}$. Similarly, the projection to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(4)})$ contains $V_{124,j}$. \end{lemma} \begin{proof} We shall just prove the first claim; the second claim is similar (but uses $\{a,b,c\} = \{1,2,4\}$ instead of $\{a,b,c\}=\{1,2,3\}$). Suppose the claim failed for some $j$. Using \eqref{soo} and duality, we conclude that there exists a $\xi_j \in \Xi_j$ which annihilates the kernel of the projection to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)})$, and which is non-trivial on $V_{123,j}$. As $\xi_j$ annihilates the kernel of the projection to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)})$, we have a decomposition of the form $$ \xi_j(x_{(1)},x_{(2)},x_{(3)},x_{(4)},x_{(0)}) = \xi_{(1),j}(x_{(1)}) + \xi_{(2),j}(x_{(2)}) + \xi_{(3),j}(x_{(3)})$$ for $x_{(i)} \in \Horiz_j(G_{(i)})$ for $i=1,2,3,4,0$, where $\xi_{(i),j}: \Horiz_j(G_{(i)}) \to \T$ for $i=1,2,3$ are characters. By definition of $\Xi_j$, we conclude that $$ \xi_{(1),j}( \Taylor_j(\orbit_{(1)}) ) + \xi_{(2),j}( \Taylor_j(\orbit_{(2)}) ) + \xi_{(3),j}( \Taylor_j(\orbit_{(3)}) ) = O(N^{-j}).$$ However, from \eqref{gij-spin} we have \begin{equation}\label{star} \xi_{(i),j}(\Taylor_j(\orbit_{(i)})) = \sum_{k=1}^{D_j} c_{(i),j,k} \xi_{h_i,j,k} \end{equation} where the $c_{(i),j,k}$ are standard integers, defined by the formula \begin{equation}\label{cdef} c_{(i),j,k} := \xi_{(i),j}(\phi_{(i)}(e_{j,k})). \end{equation} From the independence of \eqref{jinnai-1} with $\{a,b,c\}=\{1,2,3\}$, we conclude that the $c_{(i),j,k}$ all vanish for $i=1,2,3$ and $D_{*,j} < k \leq D_j$, and that the sum $c_{(1),j,k}+c_{(2),j,k}+c_{(3),j,k}$ vanishes for $1 \leq k \leq D_{*,j}$. But this forces $\xi_j$ to vanish on $V_{123,j}$, a contradiction. \end{proof} We now take commutators in the spirit of an argument of Furstenberg and Weiss \cite{fw-char} (see also \cite{hrush,ribet} for similar arguments in completely different settings) to conclude the following result which roughly speaking asserts that all ``petal-petal interactions'' are trivial. \begin{corollary}[Furstenberg-Weiss commutator argument]\label{fw} Let $w$ be an $(r_*-1)$-fold iterated commutator of generators $e_{j_1,k_1},\ldots,e_{j_{r_*},k_{r_*}}$ with $1 \leq j_l \leq s-1$, $1 \leq k_l \leq D_{j_l}$ for $l=1,\ldots,r_*$ and $j_1+\ldots+j_{r_*} = s-1$ \textup{(}thus $w$ has ``degree-rank $(s-1,r_*)$'' in some sense\textup{)}. Suppose that at least two of the generators, say $e_{j_1,k_1}, e_{j_2,k_2}$, are ``petal'' generators in the sense that $k_1 > D_{*,j_1}$ and $k_2 > D_{*,j_2}$. Then $(\phi_{(1)}(w),\id,\id,\id,\id) \in G_P$. \end{corollary} \begin{proof} For $e_{j_1,k_1}$, we may invoke Lemma \ref{gp-comp} and find an element $g_{j_1,k_1}$ of $G_P \cap G_{j_1}$ for which the coordinates $1,2,3$ are equal (modulo projection to \[ \Horiz_{j_1}(G_{(1)}) \times \Horiz_{j_1}(G_{(2)}) \times \Horiz_{j_1}(G_{(3)}))\] to $(\phi_{(1)}(e_{j_1,k_1}),\id,\id)$. 
Similarly, we may find an element $g'_{j_2,k_2}$ of $G_P \cap G_{j_2}$ for which the coordinates $1,2,4$ are equal (modulo projection to \[ \Horiz_{j_2}(G_{(1)}) \times \Horiz_{j_2}(G_{(2)}) \times \Horiz_{j_2}(G_{(4)}))\] to $(\phi_{(1)}(e_{j_2,k_2}),\id,\id)$. Finally, for all of the other $e_{j,k}$, we can find elements $g''_{j,k}$ of $G_P \cap G_{j}$ for which the first coordinate is equal (modulo projection to $\Horiz_j(G_{(1)})$) to $\phi_{(1)}(e_{j,k})$. If one then takes iterated commutators of the $g_{j_1,k_1}, g'_{j_2,k_2}, g''_{j,k}$ in the order indicated by $w$, we see (using the filtration property, the homomorphism property of $\phi_{(1)}$, and the fact that the $G_{(i)}/\Gamma_{(i)}$ have degree-rank $\leq (s-1,r_*)$ for $i=1,2,3,4$ and degree-rank $<(s-1,r_*-1)$ for $i=0$) that we obtain the element $(\phi_{(1)}(w),\id,\id,\id,\id)$. Since the iterated commutator of elements in $G_P$ stays in $G_P$, the claim follows. \end{proof} From Lemma \ref{gapp-vanish} and Corollary \ref{fw} we immediately obtain the first part of Theorem \ref{slang-petal}. We now turn to the second part of the theorem. For this, we need a further variant of Lemma \ref{gp-comp}, together with a corresponding variant of Corollary \ref{fw}. For any $1 \leq j \leq s-1$, let $V_{\ind,j}$ be the subgroup of $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)}) \times \Horiz_j(G_{(4)})$ generated by the elements $$ (\phi_{(1)}(e_{j,k}), \phi_{(2)}(e_{j,k}), \phi_{(3)}(e_{j,k}),\phi_{(4)}(e_{j,k}))$$ for $1 \leq k \leq D_{*,j}$ and the elements $$ (\phi_{(1)}(e_{j,k}), \id, \id,\id), (\id, \phi_{(2)}(e_{j,k}), \id,\id), (\id, \id, \phi_{(3)}(e_{j,k}),\id), (\id, \id,\id, \phi_{(4)}(e_{j,k})) $$ for $D_{*,j} < k \leq D_{*,j}+D'_{\ind,j}$. \begin{lemma}[Components of $G_P$, II]\label{gp-comp2} Let $1 \leq j \leq s-1$. Then the projection of $G_P \cap G_{j}$ to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)}) \times \Horiz_j(G_{(4)})$ contains $V_{\ind,j}$. \end{lemma} \begin{proof} Suppose the claim failed for some $j$. Using \eqref{soo} and duality, we conclude that there exists a $\xi_j \in \Xi_j$ which annihilates the kernel of the projection to $\Horiz_j(G_{(1)}) \times \Horiz_j(G_{(2)}) \times \Horiz_j(G_{(3)}) \times \Horiz_j(G_{(4)})$, and which is non-trivial on $V_{\ind,j}$. In particular, we have a decomposition of the form \begin{equation}\label{xij} \xi_j(x_{(1)},x_{(2)},x_{(3)},x_{(4)},x_{(0)}) = \sum_{i=1}^4 \xi_{(i),j}(x_{(i)}) \end{equation} for $x_{(i)} \in \Horiz_j(G_{(i)})$ for $i=1,2,3,4,0$, where $\xi_{(i),j}: \Horiz_j(G_{(i)}) \to \T$ for $i=1,2,3,4$ are characters. By definition of $\Xi_j$, we conclude that $$\sum_{i=1}^4 \xi_{(i),j}( \Taylor_j(\orbit_{(i)}) ) = O(N^{-j}).$$ Inserting \eqref{star}, we conclude that \begin{equation}\label{star2} \sum_{k=1}^{D_j} \sum_{i=1}^4 c_{(i),j,k} \xi_{h_i,j,k} = O(N^{-j}). \end{equation} The left-hand side is an integer linear combination of the degree $j$ frequencies in $$ \F_{*} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=1}^4 \F'_{h_i,\lin}.$$ Using the Freiman homomorphism property from Lemma \ref{sunflower} we can eliminate the role of $\F'_{h_4,\lin}$ (since on an additive quadruple this property gives $\xi_{h_4,j,k} = \xi_{h_1,j,k} + \xi_{h_2,j,k} - \xi_{h_3,j,k} \mod 1$ for $k$ in the bracket-linear range $D_{*,j}+D'_{\ind,j} < k \leq D_j$), leaving only $$ \F_{*} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=1}^3 \F'_{h_i,\lin}.$$ But this is just \eqref{jinnai-1} for $\{a,b,c\}=\{1,2,3\}$. 
We conclude that the coefficients of the left-hand side of \eqref{star2} in this basis vanish, which in terms of the original coefficients $c_{(i),j,k}$ means that $$ \sum_{i=1}^4 c_{(i),j,k}=0$$ for $1 \leq k \leq D_{*,j}$, and $$ c_{(i),j,k} = 0$$ for $D_{*,j} < k \leq D_{*,j} + D'_{\ind,j}$. But this forces $\xi_j$ to vanish on $V_{\ind,j}$, a contradiction. \end{proof} We now apply the commutator argument to show that ``independent'' frequencies also ultimately have a trivial effect. \begin{corollary}[Furstenberg-Weiss commutator argument, II]\label{fw2} Let $w$ be an $(r_*-1)$-fold iterated commutator of generators $e_{j_1,k_1},\ldots,e_{j_{r_*},k_{r_*}}$ with $1 \leq j_l \leq s-1$, $1 \leq k_l \leq D_{j_l}$ for $l=1,\ldots,r_*$ and $j_1+\ldots+j_{r_*} = s-1$. Suppose that at least one of the generators, say $e_{j_1,k_1}$, is an ``independent'' generator in the sense that $D_{*,j_1} < k_1 \leq D_{*,j_1} + D'_{\ind,j_1}$. Then $(\phi_{(1)}(w),\id,\id,\id,\id) \in G_P$. \end{corollary} \begin{proof} We may assume that $k_l \leq D_{*,j_l}$ for all $2 \leq l \leq r_*$, as the claim would follow from Corollary \ref{fw} otherwise. For $e_{j_1,k_1}$, we may invoke Lemma \ref{gp-comp2} and find an element $g_{j_1,k_1}$ of $G_P \cap G_{j_1}$ for which the first $4$ coordinates are equal (modulo projection to $\Horiz_{j_1}(G_{(1)}) \times \Horiz_{j_1}(G_{(2)}) \times \Horiz_{j_1}(G_{(3)}) \times \Horiz_{j_1}(G_{(4)})$) to $(\phi_{(1)}(e_{j_1,k_1}),\id,\id,\id)$. For the other $e_{j,k}$, we can find elements $g'_{j,k}$ of $G_P \cap G_{j}$ for which the first coordinate is equal (modulo projection to $\Horiz_{j}(G_{(1)})$) to $\phi_{(1)}(e_{j,k})$. Taking commutators of $g_{j_1,k_1}$ and $g'_{j,k}$ in the order indicated by $w$, we obtain the claim. \end{proof} Combining Corollary \ref{fw2} with Lemma \ref{gapp-vanish} we obtain the second part of Theorem \ref{slang-petal}. \section{Building a nilobject}\label{multi-sec} The aim of this section is to at last build an object coming from an $s$-step nilmanifold. Recall from the discussion in \S \ref{overview-sec} that this object will be a multidegree $(1,s-1)$-nilcharacter $\chi'(h,n)$, and that this completes the proof of Theorem \ref{linear-induct}. This in turn was used iteratively to prove Theorem \ref{linear-thm}, the heart of our whole paper. It will then remain to supply the \emph{symmetry argument}, which will take us from a 2-dimensional nilsequence to a 1-dimensional one; this will be accomplished in the next section. Let $f,H,\chi,(\chi_h)_{h \in H}$ be as in Theorem \ref{linear-induct}. 
If we apply Lemma \ref{sunflower} (or more precisely, its proof, which invokes Lemma \ref{sunflower-basic}), we obtain the following objects: \begin{itemize} \item A dense subset $H'$ of $H$; \item Dimension vectors $\vec D_* = \vec D_{*,\ind} + \vec D_{*,\rat}$ and $\vec D' = \vec D'_\lin + \vec D'_\ind + \vec D'_\sml$, which we write as $\vec D_* = (D_{*,i})_{i=1}^{s-1}$, $\vec D_{*,\ind} = (D_{*,\ind,i})_{i=1}^{s-1}$, etc.; \item A core horizontal frequency vector $\F_* = (\xi_{*,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D_{*,i}}$, which is partitioned as $\F_* = \F_{*,\ind} \uplus \F_{*,\rat}$, with the indicated dimension vectors $\vec D_{*,\ind}, \vec D_{*,\rat}$; \item A petal horizontal frequency vector $\F'_h = (\xi'_{h,i,j})_{1 \leq i \leq s-1; 1 \leq j \leq D'_i}$, which is partitioned as $\F'_h = \F'_{h,\lin} \uplus \F'_{h,\ind} \uplus \F'_{h,\sml}$, which is a limit function of $h$ and with the indicated dimension vectors $\vec D'_\lin, \vec D'_\ind, \vec D'_\sml$; \item Nilmanifolds $G_h/\Gamma_h$ and $G_{0,h}/\Gamma_{0,h}$ of degree-rank $\leq (s-1,r_*)$ and $\leq (s-1,r_*-1)$ respectively for each $h \in H'$, depending in a limit fashion on $h$; \item Polynomial sequences $g_h \in \ultra \poly(\Z_\N \to (G_h)_\N)$ and $g_{0,h} \in \ultra \poly(\Z_\N \to (G_{0,h})_\N)$ for each $h \in H'$, depending in a limit fashion on $h$; \item Lipschitz functions $F_h \in \Lip(\ultra(G_h/\Gamma_h \times G_{0,h}/\Gamma_{0,h})\to \overline{S^{\omega}})$ for each $h \in H'$, depending in a limit fashion on $h$; \item a filtered homomorphism $\phi_h: G^{\vec D_* + \vec D'} \to G_h$ for each $h \in H'$, depending in a limit fashion on $h$; and \item a character $\eta_h: G^{\vec D_* + \vec D'}_{(s-1,r_*)} \to \R$ for each $h \in H'$, depending in a limit fashion on $h$ \end{itemize} that obey the following properties: \begin{itemize} \item For every $1 \leq i \leq s-1$ and $1 \leq j \leq D'_{\lin,i}$, there exist $\alpha_{i,j} \in \ultra\T$ and $\beta_{i,j} \in \ultra \R$ such that \eqref{xih-def} holds, and furthermore the map $h \mapsto \xi'_{h,i,j}$ is a Freiman homomorphism on $H'$. \item For almost all additive quadruples $(h_1,h_2,h_3,h_4)$ in $H'$, $$\F_{*,\ind} \uplus \biguplus_{i=1}^4 \F'_{h_i,\ind} \uplus \biguplus_{i=1}^3 \F'_{h_i,\lin}$$ is independent. \item We have the representation $$ \chi_h(n) = F_h( g_h(n) \ultra \Gamma_h, g_{0,h}(n) \ultra \Gamma_{0,h} )$$ for every $h \in H'$. \item $\phi_h: G^{\vec D_* + \vec D'} \to G_h$ is a filtered homomorphism such that \begin{equation}\label{phil} F_h( \phi_h(t) x, x_0 ) = e( \eta_h(t) ) F_h(x,x_0) \end{equation} for all $t \in G^{\vec D_* + \vec D'}_{(s-1,r_*)}$, $x \in G_h/\Gamma_h$, and $x_0 \in G_{0,h}/\Gamma_{0,h}$; \item One has the Taylor coefficients \begin{equation}\label{thune} \Taylor_i(g_h\Gamma_h) = \pi_{\Horiz_i(G_h/\Gamma_h)}(\phi_h( \prod_{j=1}^{D_{*,i}+D'_i} e_{i,j}^{\xi_{h,i,j}} )) \end{equation} for all $1 \leq i \leq s-1$. \end{itemize} There are only countably many nilmanifolds $G/\Gamma$ up to isomorphism, so by passing from $H'$ to a dense subset using Lemma \ref{dense-dich} we may assume that $$ G_h/\Gamma_h = G/\Gamma \quad \mbox{and} \quad G_{0,h}/\Gamma_{0,h} = G_0/\Gamma_0$$ are independent of $h$. Similarly we may take $\eta_h = \eta$ and $\phi_h = \phi$ to be independent of $h$. From the Arzel\`a-Ascoli theorem, the space of possible $F_h$ is totally bounded, and so (shrinking $\eps$ slightly if necessary) we may also assume that $F_h = F$ is independent of $h$. For $1 \leq j \leq D_{*,i}$, the frequency $\xi_{h,i,j} = \xi_{*,i,j}$ is a core frequency and is thus independent of $h$; we write $\xi_{h,i,j} = \gamma_{i,j}$. 
Meanwhile, for $D_{*,i} < j \leq D_{*,i}+D'_{i,\lin}$, from \eqref{xih-def} we may assume that $\xi_{h,i,j}$ takes the form $$ \xi_{h,i,j} = \{ \alpha_{i,j} h \} \beta_{i,j} \mod 1$$ for some $\alpha_{i,j} \in \ultra\T$ and $\beta_{i,j} \in \ultra \R$. By passing to a dense subset of $H'$ using the pigeonhole principle, we may assume, for each $i,j$, that $\{ \alpha_{i,j} h \}$ is contained in a subinterval $\ultra I_{i,j}$ around $\ultra 0$ of length at most $1/10$ (say). We now wish to apply Theorem \ref{slang-petal} to obtain more convenient equivalent representatives (in $\Xi_{\DR}^{(s-1,r_*)}([N])$) $\tilde \chi_h$ for the nilcharacters $\chi_h$. Let $\tilde G$ be the free Lie group generated by the generators $\tilde e_{i,j}$ for $1 \leq i \leq s-1$ and $1 \leq j \leq D_{*,i} + D'_{\lin,i}$ subject to the following relations: \begin{itemize} \item Any $(r-1)$-fold iterated commutator of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_r,j_r}$ with $i_1+\ldots+i_r > s-1$ vanishes; \item Any $(r-1)$-fold iterated commutator of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_r,j_r}$ with $i_1+\ldots+i_r = s-1$ and $r > r_*$ vanishes; \item Any $(r-1)$-fold iterated commutator of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_r,j_r}$ in which $j_l > D_{*,i_l}$ for at least two values of $l$ vanishes. \end{itemize} We give this group a $\DR$-filtration $\tilde G_\DR$ by defining $\tilde G_{(d,r)}$ to be the group generated by the $(r'-1)$-fold iterated commutators of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_{r'},j_{r'}}$ with $i_1+\ldots+i_{r'} \geq d$ and $r' \geq r$. We then let $\tilde \Gamma$ be the discrete group generated by the $\tilde e_{i,j}$; $\tilde G/\tilde \Gamma$ is then a nilmanifold of degree-rank $\leq (s-1,r_*)$. Let $G^*$ be the subgroup of $G^{\vec D_* + \vec D'}$ generated by the $(r-1)$-fold iterated commutators of $e_{i_1,j_1},\ldots,e_{i_r,j_r}$ with $i_1+\ldots+i_r = s-1$ in which $j_l > D_{*,i_l}$ for at least two values of $l$, or $j_l > D_{*,i_l} + D'_{\lin,i_l}$ for at least one value of $l$. Then $G^*$ is a subgroup of the central group $G^{\vec D_* + \vec D'}_{(s-1,r_*)}$ of $G^{\vec D_* + \vec D'}$, and $\tilde G$ is isomorphic to the quotient of $G^{\vec D_* + \vec D'}$ by $G^*$. We let $\tilde \phi: G^{\vec D_* + \vec D'} \to \tilde G$ denote the quotient map. From Theorem \ref{slang-petal}, the character $\eta: G^{\vec D_* + \vec D'}_{(s-1,r_*)} \to \R$ annihilates $G^*$, and thus descends to a vertical character $\tilde \eta: \tilde G_{(s-1,r_*)} \to \R$. We select a function $\tilde F \in \Lip( \tilde G/\tilde \Gamma \to S^\omega)$ with vertical frequency $\tilde \eta$; such a function can be built using the construction \eqref{fkts}. We then define the polynomial sequences $ g_0, \tilde g_h \in \ultra \poly(\Z_\N \to \tilde G_\N)$ by the formulae \begin{align} g_0(n) &:= \prod_{i=1}^{s-1} \prod_{j=1}^{D_{*,i}} \tilde e_{i,j}^{\gamma_{i,j} \binom{n}{i}}\label{g0-def}\\ \tilde g_h(n) &:= \prod_{i=1}^{s-1} \prod_{j=D_{*,i}+1}^{D_{*,i}+D'_{\lin,i}} \tilde e_{i,j}^{\{\alpha_{i,j} h\} \beta_{i,j} \binom{n}{i}}\label{gh-def} \end{align} and consider the nilcharacter \begin{equation}\label{chok} \tilde \chi_h(n) := \tilde F( g_0(n) \tilde g_h(n) \ultra \tilde \Gamma ). \end{equation} These nilcharacters are equivalent to $\chi_h$ in $\Symb_{\DR}^{(s-1,r_*)}([N])$, as the following lemma shows. \begin{lemma} For each $h \in H'$, $\chi_h$ and $\tilde \chi_h$ are equivalent \textup{(}as nilcharacters of degree-rank $(s-1,r_*)$\textup{)} on $[N]$. \end{lemma} \begin{proof} Fix $h$. 
It suffices to show that $\chi_h \otimes \overline{\tilde \chi_h}$ is a nilsequence of degree-rank $<(s-1,r_*)$. We can write this sequence as \begin{equation}\label{fhg} n \mapsto F'_h( g'_h(n) \ultra \Gamma'), \end{equation} where $G' := G \times G_0 \times \tilde G$, $\Gamma' := \Gamma \times \Gamma_0 \times \tilde \Gamma$, $g'_h \in \ultra \poly( \Z_\N \to G'_\N )$ is the sequence \[ g'_h(n) := ( g_h(n), g_{0,h}(n), g_0(n) \tilde g_h(n) ) \] and $F'_h \in \Lip(\ultra(G'/\Gamma'))$ is the function \[ F'_h(x,x_0,y) := F_h(x,x_0) \otimes \overline{\tilde F(y)}. \] We define a $\DR$-filtration $G'_\DR$ on $G'$ by defining $G'_{(d,r)}$ for $(d,r) \in \DR$ with $r \geq 1$ to be the Lie group generated by the following sets: \begin{enumerate} \item $G_{(d,r+1)} \times (G_0)_{(d,r)} \times \tilde G_{(d,r+1)}$; \item $\{ (\phi(g), \id, \tilde \phi (g) ): g \in G^{\vec D_*+\vec D'}_{(d,r)} \}$, \end{enumerate} with the convention that $(d,d+1) = (d+1,0)$. We also set $G'_{(d,0)} := G'_{(d,1)}$ for $d \geq 1$. One easily verifies that this is a filtration. We claim that $g'_h$ is polynomial with respect to this filtration. Indeed, the sequence $n \mapsto (\id,g_{0,h}(n),\id)$ is already polynomial in this filtration, so by Corollary \ref{laz} it suffices to verify that the sequence \begin{equation}\label{gig} n \mapsto (g_h(n), \id, g_0(n) \tilde g_h(n)) \end{equation} is polynomial. We use Lemma \ref{taylo} to Taylor expand $g_h(n) = \prod_{i=0}^{s-1} g_{h,i}^{\binom{n}{i}}$ where $g_{h,i} \in G_{(i,0)}$. From \eqref{thune}, one has \[ g_{h,i} = \phi\big( \prod_{j=1}^{D_{*,i}+D'_i} e_{i,j}^{\xi_{h,i,j}} \big) \mod G_{(i,2)}. \] By construction of the filtration of $G'$, this implies that \[ \big( g_{h,i}, \id, \prod_{j=1}^{D_{*,i}+D_i'} e_{i,j}^{ \xi_{h,i,j}} \mod G^* \big) \in G'_{(i,1)}. \] Applying Corollary \ref{laz}, we conclude that the sequence \[ n \mapsto \big(g_h(n), \id, \prod_{i=0}^{s-1} ( \prod_{j=1}^{D_{*,i}+D_i'} e_{i,j}^{ \xi_{h,i,j}})^{\binom{n}{i}} \mod G^* \big) \] is polynomial with respect to the $G'$ filtration. Applying the Baker-Campbell-Hausdorff formula repeatedly, and using \eqref{g0-def}, \eqref{gh-def}, we see that \[ n \mapsto \prod_{i=0}^{s-1} (\prod_{j=1}^{D_{*,i}+D_i'} e_{i,j}^{ \xi_{h,i,j}})^{\binom{n}{i}} \mod G^* \] differs from the sequence $n \mapsto g_0(n) \tilde g_h(n)$ by a sequence which is polynomial in the shifted filtration $(\tilde G_{(d,r+1)})_{(d,r) \in \DR}$. We conclude that \eqref{gig} is polynomial as required. Next, we claim that $F'_h$ is invariant with respect to the action of the central group $$ G'_{(s-1,r_*)} = \{ (\phi(g), \id, \tilde \phi (g) ): g \in G^{\vec D_* + \vec D'}_{(s-1,r_*)} \}. $$ It suffices to check this for generators $(\phi(w),\id,w \mod G^*)$, where $w$ is an $(r_*-1)$-fold commutator of $e_{i_1,j_1},\ldots,e_{i_{r_*},j_{r_*}}$ in $G^{\vec D_* + \vec D'}$ with $i_1+\ldots+i_{r_*} = s-1$. There are two cases. If one has $j_l > D_{*,i_l} + D'_{\lin,i_l}$ for some $l$, then $w$ lies in $G^*$ and is also annihilated by $\eta$, and the claim follows from \eqref{phil}. If instead one has $j_l \leq D_{*,i_l} + D'_{\lin,i_l}$ for all $l$, then the claim again follows from \eqref{phil} together with the construction of $\tilde \eta$ and $\tilde F$. We may now quotient out $G'_{(0,0)}$ by $G'_{(s-1,r_*)}$ and obtain a representation of \eqref{fhg} as a nilsequence of degree-rank $<(s-1,r_*)$, as desired. 
\end{proof} From this lemma and Lemma \ref{symbolic}(ii) we can express $\chi_h$ as a bounded linear combination of $\tilde \chi_h \otimes \psi_h$ for some nilsequence $\psi_h$ of degree-rank $\leq (s-1,r_*-1)$. Thus, to prove Theorem \ref{linear-induct} it suffices to show that there is a nilcharacter $\tilde \chi \in \Xi^{(1,s-1)}(\ultra \Z^2)$ such that $\tilde \chi_h(n) = \tilde \chi(h,n)$ for many $h \in H'$ and all $n \in [N]$. We illustrate the construction with an example. Let $$G:= G^{(2,0)} = \{ e_1^{t_1} e_2^{t_2} [e_1,e_2]^{t_{12}}: t_1,t_2,t_{12} \in \R \}$$ be the universal degree $2$ nilpotent group \eqref{heisen} generated by $e_1,e_2$. Let $F$ be the Lipschitz function in equation (\ref{fkts}). Suppose \[ \chi_h(n) := F(g_h(n)\ultra \Gamma) \] with $g_h(n) := e_2^{\beta n} e_1^{\alpha_h n} $, where $\alpha_h:=\{\delta h\} \gamma$ and $\delta,\beta,\gamma \in \ultra \R$. As computed in \S \ref{nilcharacters}, we have \[F_k(g_h(n)\ultra \Gamma)= \phi_k(\alpha_hn \mod 1,\beta n \mod 1)e(\{\alpha_h n \} \beta n)\] for some Lipschitz function $\phi_k: \T^2 \to \C$. We would like to interpret the function $(h,n) \mapsto \chi_h(n)$ as a nilcharacter in $ \Xi_{\MD}^{(1,2)}(\ultra \Z^2)$. The first task is to identify a subgroup $G_{\petal}$ of the group $G$ representing that part of $G$ that is ``influenced by'' the petal frequency $\alpha_h$; more specifically, we take $G_{\petal}$ to be the subgroup of $G$ generated by $e_1$ and $[e_1, e_2]$, that is to say $$ G_\petal = \langle e_1, [e_1,e_2] \rangle_\R = \{ e_1^{t_1} [e_1,e_2]^{t_{12}}: t_1,t_{12} \in \R \}.$$ Note that $G_{\petal}$ is abelian and normal in $G$. In particular $G$ acts on $G_{\petal}$ by conjugation, and we may form the semidirect product $$G \ltimes G_{\petal} := \{ (g,g_1): g \in G, g_1 \in G_\petal \},$$ defining multiplication by \[ (g, g_1)\cdot (g', g'_1) = (gg', g_1^{g'} g'_1), \] where $a^b := b^{-1} a b$ denotes conjugation. Now consider the action $\rho$ of $\R$ on $G \ltimes G_{\petal}$ defined by \[ \rho(t)(g, g_1) := (g g_1^t, g_1). \] We may form a further semidirect product \[ G' := \R \ltimes_{\rho} (G \ltimes G_{\petal}),\] in which the product operation is defined by \[ (t, (g, g_1)) \cdot (t', (g', g'_1)) = (t + t', \rho(t')(g, g_1) \cdot (g', g'_1)). \] $G'$ is a Lie group; indeed, one easily verifies that it is $3$-step nilpotent. We give $G'$ a $\N^2$-filtration: \begin{align*} G'_{(0,0)}&:= G' \\ G'_{(1,0)}&:=\{(t,(g,\id)): t \in \R, g \in G_\petal \} \\ G'_{(1,1)}&:=\{(0,(g,\id)): g \in G_\petal\},\\ G'_{(1,2)}&:=\{(0,(g,\id)): g \in [G,G]\}, \\ G'_{(0,1)}&:=\{(0,(g,g_1)): g \in G_\petal; g_1 \in G_{\petal}\},\\ G'_{(0,2)}&:=\{(0,(g,g_1)): g, g_1\in [G,G]\}, \end{align*} with $G'_{(i,j)}:=\{\id\}$ for all other $(i,j) \in \N^2$. One easily verifies that this is a filtration. Inside $G'$ we take the lattice \[ \Gamma' := \Z \ltimes_{\rho} (\Gamma \ltimes \Gamma_{\petal}), \] where $\Gamma_{\petal} := \Gamma \cap G_{\petal}$. Now consider the polynomial $g':\Z^2 \to G'$ defined by \[ g'(h, n) := (0, (e_2^{\beta n}, e_1^{\gamma n})) \cdot (\delta h, (\id, \id)) \] and observe that \begin{align*} g'(h,n)\Gamma' & = (0, (e_2^{\beta n}, e_1^{\gamma n})) \cdot (\{\delta h\} , (\id, \id)) \Gamma' \\ & = (\{\delta h\}, (e_2^{\beta n} e_1^{\{\delta h\}\gamma n}, e_1^{\gamma n}))\Gamma'. \end{align*} For $h$ in a dense subset $H''$ of $H'$, $\{\delta h\}$ lies in a small interval $I$; let $\psi$ be a smooth cutoff function supported on $2I$. 
Take $ F' : G'/ \Gamma' \rightarrow \C^D$ to be the function defined by \[ F'((t, (g, g'))\Gamma') := \psi(t) F(g\Gamma)\] whenever $ t \in 2I$, and $0$ otherwise. Then we have for $h \in H''$ \[ F'(g'(h,n)\Gamma') = F(e_2^{\beta n} e_1^{\{\delta h\}\gamma n}\Gamma)=\chi_h(n),\] giving the desired representation of $(h,n) \mapsto \chi_h(n)$ as an (almost) degree $(1,2)$ nilcharacter.\vspace{11pt} We now turn to the general case, which proceeds by an abstract algebraic construction. Let $\tilde G_{\petal}$ be the subgroup of $\tilde G$ generated by the $(r-1)$-fold ($r \ge 1$) iterated commutators of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_r,j_r}$ in which $j_l > D_{*,i_l}$ for exactly one value of $l$. Then $\tilde G_{\petal}$ is a rational abelian normal subgroup of $\tilde G$. To see that $\tilde G_{\petal}$ is normal, one uses the equalities \[ \tilde e^{-1}_{i,j}[g,h] \tilde e_{i,j}=[\tilde e^{-1}_{i,j}g\tilde e_{i,j},\tilde e^{-1}_{i,j}h \tilde e_{i,j}] \quad \mbox{and} \quad \tilde e^{-1}_{i,j}g \tilde e_{i,j}= g[g,\tilde e_{i,j}], \] the commutator identities in equation (\ref{com-ident}), and the fact that any iterated commutator of $\tilde e_{i_1,j_1},\ldots,\tilde e_{i_r,j_r}$ in which $j_l > D_{*,i_l}$ for more than one value of $l$ is trivial in $\tilde G$. In particular, $\tilde G$ acts on $\tilde G_{\petal}$ by conjugation, leading to the semidirect product $\tilde G \ltimes \tilde G_{\petal}$ of pairs $(g,g_1)$ with the product $$ (g,g_1) (g',g'_1) := (gg', g_1^{g'} g'_1).$$ Next, let $R$ be the commutative ring of tuples $t = (t_{i,j})_{1 \leq i \leq s-1; D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}}$ with $t_{i,j} \in \R$, which we endow with the pointwise product. For each $t \in R$, we can define a homomorphism $g \mapsto g^t$ on $\tilde G$, defined on generators by mapping $\tilde e_{i,j}$ to $\tilde e_{i,j}^{t_{i,j}}$ for $D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}$, but preserving $\tilde e_{i,j}$ for $j \leq D_{*,i}$. Such a homomorphism is well-defined as it preserves the defining relations of $\tilde G$. We observe the composition law $$ (g^t)^{t'} = g^{tt'}$$ for $g \in \tilde G$ and $t,t' \in R$. Also, on the abelian subgroup $\tilde G_{\petal}$ of $\tilde G$, we see that \begin{equation}\label{g0g} g^t g^{t'} = g^{t+t'} \end{equation} as can be seen from the Baker-Campbell-Hausdorff formula \eqref{bch}. We can thus express \begin{equation}\label{chok2} \tilde g_h(n) = g_1(n)^{\{ \alpha h \}} \end{equation} where $g_1 \in \ultra \poly(\Z_\N \to (\tilde G_{\petal})_\N)$ is the polynomial sequence \[ g_1(n) := \prod_{i=1}^{s-1} \prod_{j=D_{*,i}+1}^{D_{*,i}+D'_{\lin,i}} \tilde e_{i,j}^{ \beta_{i,j} \binom{n}{i}} \] and $\{ \alpha h \} \in R$ is the element $$ \{ \alpha h \} := ( \{ \alpha_{i,j} h \} )_{1 \leq i \leq s-1; D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}}.$$ The homomorphism $g \mapsto g^t$ preserves $\tilde G_{\petal}$, and is the identity once $\tilde G_{\petal}$ is quotiented out. As a consequence we see that \begin{equation}\label{g1g} (g g_1 g^{-1})^t = g g_1^t g^{-1} \end{equation} for any $g \in \tilde G$ and $g_1 \in \tilde G_{\petal}$. We can now define an action $\rho$ of $R$ (viewed now as an additive group) on $\tilde G \ltimes \tilde G_{\petal}$ by defining $$ \rho(t)( g, g_1 ) := (g g_1^t, g_1);$$ the properties \eqref{g0g}, \eqref{g1g} ensure that this is indeed an action. 
We can then define the semi-direct product $G' := R \ltimes_\rho (\tilde G \ltimes \tilde G_{\petal})$ to be the set of pairs $(t, (g,g_1) )$ with the product $$ (t, (g,g_1)) (t', (g',g'_1)) = (t+t', \rho(t')(g,g_1) (g',g'_1)).$$ This is a Lie group. We can give it a $\N^2$-filtration $(G'_{(d_1,d_2)})_{(d_1,d_2) \in \N^2}$ as follows: \begin{enumerate} \item If $d_1 > 1$, then $G'_{(d_1,d_2)} := \{\id\}$. \item If $d_1=1$ and $d_2 > 0$, then $G'_{(1,d_2)}$ consists of the elements $(0,(g,\id))$ with $g \in \tilde G_{d_2} \cap \tilde G_\petal$. \item If $d_1=1$ and $d_2 = 0$, then $G'_{(1,0)}$ consists of the elements $(t,(g,\id))$ with $t \in R$ and $g \in \tilde G_\petal$. \item If $d_1=0$ and $d_2 > 0$, then $G'_{(0,d_2)}$ consists of the elements $(0,(g,g_1))$ with $g \in \tilde G_{d_2}$ and $g_1 \in \tilde G_{\petal} \cap \tilde G_{d_2}$. \item $G'_{(0,0)} = G'$. \end{enumerate} One easily verifies that this is a filtration of degree $\leq (1,s-1)$ with $G'_{(0,0)} = G'$. We let $\Gamma'$ be the subgroup of $G'$ consisting of pairs $(t,(g,g_1))$ with $g \in \tilde \Gamma$, $g_1 \in \tilde \Gamma_{\petal}$, and with all coefficients of $t$ integers. One easily verifies that $\Gamma'$ is a cocompact subgroup of $G'$, and that the above $\N^2$-filtration of $G'$ is rational with respect to $\Gamma'$, so that $G'/\Gamma'$ has the structure of a filtered nilmanifold. We consider the orbit $\orbit' \in \ultra \poly(\Z^2_{\N^2} \to (G'/\Gamma')_{\N^2})$ defined by $$ \orbit'(h,n) := (0,(g_0(n),g_1(n))) (\alpha h, (\id,\id)) \ultra \Gamma',$$ where $$ \alpha h := ( \alpha_{i,j} h )_{1 \leq i \leq s-1; D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}}.$$ As $g_0$, $g_1$ were already known to be polynomial maps, and the linear map $h \mapsto \alpha h$ is clearly polynomial also, we see from Corollary \ref{laz} and the choice of filtration on $G'$ that $\orbit'$ is a polynomial orbit. Now we simplify the orbit. Working on the abelian group $R$, we see that $$ (\alpha h, (\id,\id)) \ultra \Gamma' = (\{\alpha h\}, (\id,\id)) \ultra \Gamma',$$ and then commuting this with $(0,(g_0(n),g_1(n)))$, we obtain \begin{equation}\label{orb} \orbit'(h,n) = (\{\alpha h\}, (g_0(n) g_1(n)^{\{\alpha h\}}, g_1(n) ) ) \ultra \Gamma'. \end{equation} Recall that for many $h \in H'$, each component $\{ \alpha_{i,j} h\}$ of $\{\alpha h \}$ lies in an interval $I_{i,j}$ of length at most $1/10$. Let $2I_{i,j}$ be the interval of twice the length and with the same centre as $I_{i,j}$, and let $\varphi_{i,j}: \R \to \R$ be a smooth cutoff function supported on $2I_{i,j}$ and equal to $1$ on $I_{i,j}$. We then define a function $F': G'/\Gamma' \to \C^\omega$ by setting $$ F'( ((t_{i,j})_{1 \leq i \leq s-1; D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}}, (g, g_1)) \ultra \Gamma' ) := \big(\prod_{i=1}^{s-1} \prod_{j=D_{*,i}+1}^{D_{*,i}+D'_{\lin,i}} \varphi_{i,j}(t_{i,j})\big) \tilde F(g \ultra \tilde \Gamma)$$ whenever $(g,g_1) \in \tilde G \ltimes \tilde G_{\petal}$ and $t_{i,j} \in 2I_{i,j}$ for all $1 \leq i \leq s-1$ and $D_{*,i} < j \leq D_{*,i}+D'_{\lin,i}$, with $F'$ set equal to zero whenever no representation of the above form exists. One can easily verify that $F'$ is well-defined and Lipschitz. Since $\tilde F$ has vertical frequency $\tilde \eta$, $F'$ has vertical frequency $\eta': G'_{(1,s-1)} \to \R$, defined by the formula $$ \eta'( (0, (g, \id) ) ) := \tilde \eta(g)$$ for all $g \in \tilde G_{s-1}$. 
From \eqref{chok}, \eqref{chok2} and \eqref{orb}, we see that for many $h \in H'$ we have $$ \tilde \chi_h(n) = F' \circ \orbit'(h,n)$$ for all $n \in [N]$. By construction, $F' \circ \orbit' \in \Xi_{\MD}^{(1,s-1)}(\ultra \Z^2)$, and Theorem \ref{linear-induct} follows.\\ \section{The symmetry argument}\label{symsec} In this, the last section of the main part of the paper, we supply the symmetry argument, Theorem \ref{aderiv}; we recall that statement now. \begin{theorem74-repeat} Let $f \in L^\infty[N]$, let $H$ be a dense subset of $[[N]]$, and let $\chi \in \Xi^{(1,s-1)}(\ultra \Z^2)$ be such that $\Delta_h f$ $\leq (s-2)$-correlates with $\chi(h,\cdot)$ for all $h \in H$. Then there exists a nilcharacter $\Theta \in \Xi^{s}(\ultra \Z)$ \textup{(}with the degree filtration\textup{)} and a nilsequence $\Psi \in \Nil^{\subset J}(\ultra \Z^2)$ \textup{(}with the multidegree filtration\textup{)}, with $J$ given by the downset \begin{equation}\label{lower-again} J := \{ (i,j) \in \N^2: i+j \leq s-1 \} \cup \{ (i,s-i): 2 \leq i \leq s \}, \end{equation} such that $\chi(h,n)$ is a bounded linear combination of $\Theta(n+h) \otimes \overline{\Theta(n)} \otimes \Psi(h,n)$. \end{theorem74-repeat} \begin{example} Suppose that $s=2$, $\chi(h,n) = e(P(h,n))$, and $P(h,n): \ultra \Z^2 \to \ultra \R$ is a symmetric bilinear form in $n,h$. Then observe that \begin{equation}\label{chan-sym} \chi(h,n) = \Theta(n+h) \overline{\Theta(n)} \Psi(h,n) \end{equation} where $\Theta(n) := e( \frac{1}{2} P(n,n) )$ and $\Psi(h,n) := e( - \frac{1}{2} P(h,h) )$, which illustrates a special case of Theorem \ref{aderiv}. More generally, if $s \geq 2$ and $\chi(h,n) = e(P(h,n,\ldots,n))$ with $P(h,n_1,\ldots,n_{s-1}): \ultra \Z^s \to \ultra \R$ a symmetric multilinear form, then we have \eqref{chan-sym} with $\Theta(n) := e( \frac{1}{s} P(n,\ldots,n) )$, and $\Psi(h,n)$ a polynomial phase involving terms of multidegree $(i,s-i)$ in $h,n$ with $2 \leq i \leq s$. Thus we again obtain a special case of Theorem \ref{aderiv}. Note how the symmetry of $P$ is crucial in order to make these examples work, which explains why we refer to Theorem \ref{aderiv} as a symmetrisation result. Morally speaking, this type of symmetry property ultimately stems from the identity $\Delta_h \Delta_k f = \Delta_k \Delta_h f$. We remark that an analogous symmetrisation result was crucial to the proof of $\GI(2)$ in \cite{green-tao-u3inverse} (see also \cite{sam}), although our arguments here are slightly different. 
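Indeed, in the bilinear setting one can check \eqref{chan-sym} by direct expansion: since $P$ is symmetric and bilinear, $$\Theta(n+h)\overline{\Theta(n)} = e\left(\tfrac{1}{2}P(n+h,n+h)-\tfrac{1}{2}P(n,n)\right) = e\left(P(h,n)+\tfrac{1}{2}P(h,h)\right),$$ and multiplying by $\Psi(h,n)=e(-\tfrac{1}{2}P(h,h))$ recovers exactly $\chi(h,n)=e(P(h,n))$. 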
\end{example} From the inclusions at the end of \S \ref{nilcharacters}, $\chi(h,n)$ is a nilcharacter on $\Z^2$ (with the degree filtration) of degree $\leq s$. For similar reasons, any nilsequence $\Psi(h,n)$ of degree $\leq s-1$ (using the degree filtration on $\Z^2$) will automatically be of the form required for Theorem \ref{aderiv}. In view of this and Lemma \ref{symbolic}, we see that it will suffice to obtain a factorisation of the form $$ [\chi]_{\Xi^s([[N]] \times [N])} = [\Theta(n+h)]_{\Xi^s([[N]] \times [N])} - [\Theta(n)]_{\Xi^s([[N]] \times [N])} + [\Psi(h,n)]_{\Xi^s([[N]] \times [N])}$$ where $\Theta \in \Xi^s(\ultra \N)$ is a one-dimensional nilcharacter of degree $\leq s$ (which automatically makes $(h,n) \mapsto \Theta(n)$ and $(h,n) \mapsto \Theta(n+h)$ two-dimensional nilcharacters of degree $\leq s$, by Lemma \ref{symbolic}(vi)), and $\Psi \in \Xi^s(\ultra \N^2)$ is a two-dimensional nilcharacter of multidegree \begin{equation}\label{slosh} \subset \{ (i,j) \in \N^2: i+j \leq s; j \leq s-2 \}. \end{equation} The set of classes $[\Psi(h,n)]_{\Xi^s([[N]] \times [N])}$, with $\Psi$ of the above form, is a subgroup of the space $\Symb^s([[N]] \times [N])$ of all symbols of degree $s$ nilcharacters on $[[N]] \times [N]$. Denoting the equivalence relation induced by these classes as $\equiv$, our task is thus to show that $$ [\chi]_{\Xi^s([[N]] \times [N])} \equiv [\Theta(n+h)]_{\Xi^s([[N]] \times [N])} - [\Theta(n)]_{\Xi^s([[N]] \times [N])}.$$ In view of Theorem \ref{multilinearisation} and Lemma \ref{symbolic} (vii), there is a nilcharacter $\tilde \chi$ on $\ultra \Z^s$ of degree $(1,\ldots,1)$ which is symmetric in the last $s-1$ variables, and such that \begin{equation}\label{change} [ \chi(h,n) ]_{\Xi^s(\ultra \Z^2)} = s [ \tilde \chi(h,n,\ldots,n) ]_{\Xi^s(\ultra \Z^2)}. \end{equation} Inspired by the polynomial identity $$ s h n^{s-1} = (n+h)^s - n^s - \ldots$$ where the terms in $\ldots$ are of degree $s$ in $h,n$ but of degree at most $s-2$ in $n$, we now choose $$ \Theta(n) := \tilde \chi(n,\ldots,n).$$ From Lemma \ref{symbolic} (vi) we see that $\Theta$ is a nilcharacter of degree $\leq s$. Our task is now to show that \begin{align}\nonumber [\tilde \chi(n+h,\ldots,n+h)]_{\Xi^s([[N]] \times [N])} - [\tilde \chi(n,&\ldots,n)]_{\Xi^s([[N]] \times [N])}\\ - s[\tilde \chi(h,n,\ldots,n)]_{\Xi^s([[N]] \times [N])} &\equiv 0.\label{tilch} \end{align} To manipulate this, we use the following lemma. \begin{lemma}[Multilinearity]\label{multil} Let $\tilde \chi$ be a nilcharacter on $\Z^s$ \textup{(}with the multidegree filtration\textup{)} of degree $(1,\ldots,1)$. Let $m \geq 1$ be standard, and let $L_1,\ldots,L_s: \Z^m \to \Z$ and $L'_1: \Z^m \to \Z$ be homomorphisms. Then we have linearity in the first variable, in the sense that \begin{align*} [\tilde \chi(L_1(\vec n)+L'_1(\vec n),L_2(\vec n),\ldots,L_s(\vec n))]_{\Xi^s(\ultra \Z^m)} &= [\tilde \chi(L_1(\vec n),L_2(\vec n),\ldots,L_s(\vec n))]_{\Xi^s(\ultra \Z^m)}\\ &\quad + [\tilde \chi(L'_1(\vec n),L_2(\vec n),\ldots,L_s(\vec n))]_{\Xi^s(\ultra \Z^m)}, \end{align*} where $\vec n = (n_1,\ldots,n_m)$ are the $m$ independent variables of $\ultra \Z^m$, and $\Z^m$ is given the degree filtration. We similarly have linearity in the other $s-1$ variables. \end{lemma} \begin{proof} We prove the claim for the first variable; the other cases follow by the same argument. From Lemma \ref{baby-calculus} and Lemma \ref{symbolic}(vi), it will suffice to show that the expression \begin{equation}\label{touch} \tilde \chi(h_1+h'_1,h_2,\ldots,h_s) \otimes \overline{\tilde \chi}(h_1,h_2,\ldots,h_s) \otimes \overline{\tilde \chi}(h'_1,h_2,\ldots,h_s) \end{equation} is a degree $<s$ nilsequence in $h_1,h'_1,h_2,\ldots,h_s$ (using the degree filtration). Write $\tilde \chi(h_1,\ldots,h_s) = F( g(h_1,\ldots,h_s) \ultra \Gamma)$, where $G/\Gamma$ is a $\N^s$-filtered nilmanifold of degree $\leq (1,\ldots,1)$, $F \in \Lip(\ultra(G/\Gamma))$ has a vertical frequency, and $g \in \ultra \poly(\Z^s_{\N^s} \to G_{\N^s})$. 
Then the expression \eqref{touch} takes the form $$ \tilde F( \tilde g(h_1,h'_1,h_2,\ldots,h_s) \ultra \Gamma^3 )$$ where $\tilde g: \ultra \Z^{s+1} \to G^3$ is the map $$ \tilde g(h_1,h'_1,h_2,\ldots,h_s) := ( g( h_1+h'_1,h_2,\ldots,h_s), g( h_1,h_2,\ldots,h_s), g(h'_1,h_2,\ldots,h_s) )$$ and $\tilde F \in \Lip(\ultra(G/\Gamma)^3)$ is the map $$ \tilde F( x_1,x_2,x_3) = F(x_1) \otimes \overline{F(x_2)} \otimes \overline{F(x_3)}.$$ By Lemma \ref{taylo}, we can expand $$ g(h_1,\ldots,h_s) = \prod_{i_1,\ldots,i_s \in \{0,1\}} g_{i_1,\ldots,i_s}^{\binom{h_1}{i_1} \ldots \binom{h_s}{i_s}}$$ for some $g_{i_1,\ldots,i_s} \in G_{(i_1,\ldots,i_s)}$, where we order $\{0,1\}^s$ lexicographically (say). We now $\N$-filter $G^3$ by defining $(G^3)_{i}$ to be the group generated by $(G_{(i_1,\ldots,i_s)})^3$ for all $i_1,\ldots,i_s \in \N$ with $i_1+\ldots+i_s>i$, together with the groups $\{ (g_1g_2,g_1,g_2): g_1,g_2 \in G_{(i_1,\ldots,i_s)} \}$ for $i_1+\ldots+i_s = i$. From the Baker-Campbell-Hausdorff formula \eqref{bch} one verifies that this is a rational filtration of $G^3$. From the Taylor expansion we also see that $\tilde g$ is polynomial with respect to this filtration (giving $\Z^{s+1}$ the degree filtration). Finally, as $F$ has a vertical frequency, we see that $\tilde F$ is invariant with respect to the action of $(G^3)_{s} = \{ (g_1g_2,g_1,g_2): g_1,g_2 \in G_{(1,\ldots,1)}\}$. Restricting $G^3$ to $(G^3)_{0}$ and quotienting out by $(G^3)_{s}$ we obtain the claim. \end{proof} Using this lemma repeatedly, together with the symmetry of $\tilde \chi$ in the final $s-1$ variables, we see that we can expand \[\begin{split} & [\tilde \chi(n+h,\ldots,n+h)]_{\Xi^s(\ultra \Z^2)} =\\ & \sum_{j=0}^{s-1} \binom{s-1}{j} \left( [\tilde \chi(n,h,\ldots,h,n,\ldots,n)]_{\Xi^s(\ultra \Z^2)} + [\tilde \chi(h,h,\ldots,h,n,\ldots,n)]_{\Xi^s(\ultra \Z^2)} \right), \end{split}\] where in the terms on the right-hand side, the final $j$ entries are equal to $n$, the first entry is either $n$ or $h$, and the remaining entries are $h$. Note that a term with at least two entries equal to $h$ has multidegree in the set \eqref{slosh} and is thus negligible. Neglecting these terms, we obtain the simpler expression \[ \begin{split} [\tilde \chi(n+h,\ldots,n+h)]_{\Xi^s(\ultra \Z^2)} \equiv & [\tilde \chi(n,\ldots,n)]_{\Xi^s(\ultra \Z^2)} + [\tilde \chi(h,n,\ldots,n)]_{\Xi^s(\ultra \Z^2)} \\ & + (s-1) [\tilde \chi(n,h,n,\ldots,n)]_{\Xi^s(\ultra \Z^2)}. \end{split} \] Comparing this with \eqref{tilch}, we will be done as soon as we can show the symmetry property \begin{equation}\label{total-sym} (s-1) [\tilde \chi(h,n,\ldots,n)]_{\Xi^s([[N]] \times [N])} = (s-1) [\tilde \chi(n,h,n,\ldots,n)]_{\Xi^s([[N]] \times [N])}. \end{equation} This property does not automatically follow from the construction of $\tilde \chi$. Instead, we must use the correlation properties of $\chi$, as follows. By hypothesis and Lemma \ref{limone}, we have that for all $h$ in a dense subset $H$ of $[[N]]$, we can find a degree $\leq s-2$ nilcharacter $\varphi_h$ such that $\Delta_h f$ correlates with $\chi(h,\cdot) \otimes \varphi_h$. By Corollary \ref{mes-select}, we may assume that the map $h \mapsto \varphi_h$ is a limit map. We set $\varphi_h=0$ for $h \not \in H$. To use this information, we return\footnote{Here is a key place where we use the hypothesis $s \geq 3$ (the other is Lemma \ref{discard}). 
For $s=2$ the lower order terms in Proposition \ref{cs} are useless; however a variant of the argument below still works, see \cite{green-tao-u3inverse}.} to Proposition \ref{cs}. Invoking that proposition, we see that for many additive quadruples $(h_1,h_2,h_3,h_4)$ in $[[N]]$, the sequence \begin{align*} n &\mapsto \chi(h_1,n) \otimes \chi(h_2,n+h_1-h_4) \otimes \overline{\chi(h_3,n)} \otimes \overline{\chi(h_4,n+h_1-h_4)}\\ &\quad \otimes \varphi_{h_1}(n) \otimes \varphi_{h_2}(n + h_1 - h_4) \otimes \overline{\varphi_{h_3}(n)} \otimes \overline{\varphi_{h_4}(n + h_1 - h_4)} \end{align*} is biased. We make the change of variables $(h_1,h_2,h_3,h_4) = (h+a,h+b,h+a+b,h)$ and then pigeonhole in $h$, to conclude the existence of an $h_0$ for which $$ n \mapsto \tau(a,b,n) \otimes \varphi_{h_0+a}(n) \otimes \varphi_{h_0+b}(n+a) \otimes \overline{\varphi}_{h_0+a+b}(n) \otimes \overline{\varphi_{h_0}(n+a)}$$ is biased for many pairs $a,b \in [[2N]]$, where $\tau = \tau_{h_0}$ is the expression \begin{equation}\label{tabn} \tau(a,b,n) := \chi(h_0+a,n) \otimes \chi(h_0+b,n+a) \otimes \overline{\chi(h_0+a+b,n)} \otimes \overline{\chi(h_0,n+a)}. \end{equation} Henceforth $h_0$ is fixed, and we will suppress the dependence of various functions on this parameter. From Lemma \ref{baby-calculus}, $\tau$ is a degree $\leq s$ nilcharacter on $\ultra \Z^3$ (with the degree filtration). We record its top order symbol: \begin{lemma}\label{calc-1} We have $$ [\tau(a,b,n)]_{\Xi^s(\ultra \Z^3)} \equiv s(s-1) [\tilde \chi(b,a,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}$$ where by $\equiv$ we are quotienting by all symbols of degree $\leq s-3$ in $n$. \end{lemma} \begin{proof} From \eqref{change}, \eqref{tabn}, Lemma \ref{baby-calculus} and Lemma \ref{symbolic} one has \begin{align*} [\tau(a,b,n)]_{\Xi^s(\ultra \Z^3)} = & s( [\tilde \chi(a,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)} + [\tilde \chi(b,n+a,\ldots,n+a)]_{\Xi^s(\ultra \Z^3)}\\ & \quad - [\tilde \chi(a+b,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}). \end{align*} Applying Lemma \ref{multil} in the first variable we simplify this as $$ s ( [\tilde \chi(b,n+a,\ldots,n+a)]_{\Xi^s(\ultra \Z^3)} - [\tilde \chi(b,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}).$$ Applying Lemma \ref{multil} in all the other variables and gathering terms using the symmetry of $\tilde \chi$ in those variables, we arrive at $$ \sum_{j=0}^{s-2} s \binom{s-1}{j} [\tilde \chi(b,a,\ldots,a,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)},$$ where there are $j$ occurrences of $n$ and $s-1-j$ occurrences of $a$. All the terms with $j<s-2$ are of degree $\leq s-3$ in $n$, and the claim follows. \end{proof} From Lemma \ref{symbolic}, we know that $\varphi_{h_0+b}(n+a)$ is a bounded linear combination of $\varphi_{h_0+b}(n) \otimes \psi_{a,b}(n)$ for some degree $\leq s-3$ nilsequence $\psi_{a,b}$. Similarly for $\varphi_{h_0}(n+a)$. We conclude that $$ n \mapsto \tau(a,b,n) \otimes \varphi_{h_0+a}(n) \otimes \varphi_{h_0+b}(n) \otimes \overline{\varphi}_{h_0+a+b}(n) \otimes \overline{\varphi_{h_0}(n)}$$ is $\leq (s-3)$-biased for many $a,b \in [[2N]]$. We will now eliminate the $\varphi_h$ terms in order to focus attention on $\tau$. 
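The eliminations that follow all rest on the same standard Cauchy--Schwarz duplication step, which we record schematically as a guide to the computations: if $c$ is a bounded factor independent of the variable $b$ and $$\left|\E_{b \in B;\, n \in [N]} A(b,n)\, c(n)\right| \gg 1,$$ then Cauchy--Schwarz in $n$ gives $\E_{n \in [N]} |\E_{b \in B} A(b,n)|^2 \gg 1$, that is $$\left|\E_{b,b' \in B;\, n \in [N]} A(b,n) \otimes \overline{A(b',n)}\right| \gg 1,$$ which removes the factor $c(n)$ at the cost of duplicating the variable $b$. 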
Applying Corollary \ref{mes-select}, we may thus find a scalar degree $\leq s-3$ nilsequence $\psi_{a,b}$ depending in a limit fashion on $a, b \in [[2N]]$, such that \begin{align*} |\E_{a,b \in [[2N]]; n \in [N]} \tau(a,b,n) \otimes \varphi_{h_0+a}(n) \otimes \varphi_{h_0+b}(n) \otimes & \overline{\varphi_{h_0+a+b}(n)} \otimes \\ & \otimes \overline{\varphi_{h_0}(n)} \psi_{a,b}(n)| \gg 1.\end{align*} We pull out the $b$-independent factors $\varphi_{h_0+a}(n) \otimes \overline{\varphi}_{h_0}(n)$ and Cauchy-Schwarz in $a,n$ to conclude that \begin{align*} |\E_{a,b,b' \in [[2N]]; n \in [N]} \tau(a,b,n) &\otimes \overline{\tau(a,b',n)} \otimes \varphi_{h_0+b}(n) \otimes \overline{\varphi_{h_0+b'}(n)} \\ &\otimes \overline{\varphi_{h_0+a+b}(n)} \otimes \varphi_{h_0+a+b'}(n) \psi_{a,b,b'}(n)| \gg 1, \end{align*} where $(a,b,b') \mapsto \psi_{a,b,b'}$ is a limit map assigning a scalar degree $\leq s-3$ nilsequence to each $a,b,b'$. Next, we make the substitution $c := a+b+b'$ and conclude that \begin{align*} |\E_{c,b,b' \in [[3N]]; n \in [N]}& \tau(c-b-b',b,n) \otimes \overline{\tau(c-b-b',b',n)} \\ &\otimes \varphi_{h_0+b}(n) \otimes \overline{\varphi_{h_0+b'}}(n) \otimes \overline{\varphi_{h_0+c-b'}(n)} \varphi_{h_0+c-b}(n) \psi'_{c,b,b'}(n)| \gg 1 \end{align*} where $(c,b,b') \mapsto \psi'_{c,b,b'}$ is a limit map assigning a scalar degree $\leq s-3$ nilsequence to each $c,b,b'$. By the pigeonhole principle, we can thus find a $c_0$ such that \begin{equation}\label{retour} |\E_{b,b' \in [[3N]]; n \in [N]} \alpha(b,b',n) \otimes \varphi'_b(n) \otimes \overline{\varphi'_{b'}(n)} \psi'_{c_0,b,b'}(n)| \gg 1 \end{equation} where $\alpha = \alpha_{c_0}$ is the form \begin{equation}\label{abab} \alpha(b,b',n) := \tau(c_0-b-b',b,n) \otimes \overline{\tau(c_0-b-b',b',n)} \end{equation} and $\varphi'_b = \varphi'_{b,c_0}$ is the quantity $$ \varphi'_b(n) := \varphi_{h_0+b}(n) \otimes \overline{\varphi_{h_0+c_0-b}(n)}.$$ We fix this $c_0$. Again by Lemma \ref{baby-calculus}, $\alpha$ is a degree $\leq s$ nilcharacter on $\ultra \Z^3$, and we pause to record its symbol in the following lemma. \begin{lemma}\label{calc-2} We have $$ [\alpha(b,b',n)]_{\Xi^s(\ultra \Z^3)} \equiv -s(s-1) [\tilde \chi(b+b',b-b',n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}$$ where by $\equiv$ we are quotienting by all symbols of degree $\leq s-3$ in $n$. \end{lemma} \begin{proof} From \eqref{abab} and Lemma \ref{symbolic} we can write the left-hand side as $$ [\tau(-b-b',b,n)]_{\Xi^s(\ultra \Z^3)} - [\tau(-b-b',b',n)]_{\Xi^s(\ultra \Z^3)}.$$ Applying Lemma \ref{calc-1}, we can write this as $$ s(s-1) ( [\tilde \chi(-b-b',b,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)} - [\tilde \chi(-b-b',b',n,\ldots,n)]_{\Xi^s(\ultra \Z^3)} ).$$ The claim then follows from some applications of Lemma \ref{multil}. \end{proof} We return now to \eqref{retour}, and Cauchy-Schwarz in $b',n$ to eliminate the $\varphi'_{b'}(n)$ factor, yielding $$ |\E_{b_1,b_2,b' \in [[3N]]; n \in [N]} \alpha(b_1,b',n) \otimes \overline{\alpha(b_2,b',n)} \otimes \varphi'_{b_1}(n) \otimes \overline{\varphi'_{b_2}(n)} \psi''_{b_1,b_2,b'}(n)| \gg 1$$ where $(b_1,b_2,b') \mapsto \psi''_{b_1,b_2,b'}$ is a limit map assigning a scalar degree $\leq s-3$ nilsequence to each $b_1,b_2,b'$. 
Finally, we Cauchy-Schwarz in $b_1,b_2,n$ to eliminate the $\varphi'_{b_1}(n) \overline{\varphi'_{b_2}(n)}$ factor, yielding \begin{align*} |\E_{b_1,b_2,b'_1,b'_2 \in [[3N]]; n \in [N]} \alpha(b_1,b'_1,n) \otimes & \overline{\alpha(b_2,b'_1,n)} \otimes \overline{\alpha(b_1,b'_2,n)} \otimes \\ & \otimes \alpha(b_2,b'_2,n) \psi''_{b_1,b_2,b'_1,b'_2}(n)| \gg 1.\end{align*} Note how the $\varphi$ terms have now been completely eliminated. To eliminate the $\psi''$ terms, we first use the pigeonhole principle to find $b_0,b'_0$ such that \begin{equation}\label{ebony} |\E_{b,b' \in [[3N]]; n \in [N]} \alpha'(b,b',n) \psi''_{b,b_0,b',b'_0}(n)| \gg 1 \end{equation} where $\alpha' = \alpha'_{b_0,b'_0}$ is the expression \begin{equation}\label{abab2} \alpha'(b,b',n) := \alpha(b,b',n) \otimes \overline{\alpha(b_0,b',n)} \otimes \overline{\alpha(b,b'_0,n)} \otimes \alpha(b_0,b'_0,n). \end{equation} We fix this $b_0,b'_0$. Again, $\alpha'$ is a degree $\leq s$ nilcharacter on $\ultra \Z^3$. From Lemma \ref{calc-2} and Lemma \ref{multil} (and using Lemma \ref{symbolic} to eliminate shifts by $b_0$) we conclude \begin{equation}\label{calc-3} [\alpha'(b,b',n)]_{\Xi^s(\ultra \Z^3)} \equiv s(s-1) ([\tilde \chi(b,b',n,\ldots,n)]_{\Xi^s(\ultra \Z^3)} - [\tilde \chi(b',b,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}). \end{equation} Note the similarity here with \eqref{total-sym}. From \eqref{ebony}, we conclude that the sequence $n \mapsto \alpha'(b,b',n)$ is $\leq (s-3)$-biased for many $b,b' \in [[3N]]$. Applying Proposition \ref{inv-nec-nonst}, we conclude that $$ \| \alpha'(b,b',\cdot) \|_{U^{s-2}[N]} \gg 1$$ for many $b,b' \in [[3N]]$. We conclude (using Corollary \ref{auton-2} to obtain the needed uniformity) that $$ \E_{b,b' \in [[3N]]} \| \alpha'(b,b',\cdot) \|_{U^{s-2}[N]}^{2^{s-2}} \gg 1.$$ By definition of the Gowers norm, this implies that \begin{equation}\label{sorba} |\E_{b,b',h_1,\ldots,h_{s-2} \in [[3N]]; n \in [N]} \sigma( b, b', h_1, \ldots, h_{s-2}, n ) 1_\Omega(h_1,\ldots,h_{s-2},n) | \gg 1, \end{equation} where $\Omega$ is the polytope $$ \Omega := \{ (h_1,\ldots,h_{s-2},n): n+\sum_{j=1}^{s-2} \omega_j h_{j} \in [N] \hbox{ for all } \omega \in \{0,1\}^{s-2} \}$$ and $\sigma$ is the expression \begin{equation}\label{sdef} \sigma( b, b', h_1, \ldots, h_{s-2}, n) := \bigotimes_{\omega \in \{0,1\}^{s-2}} {\mathcal C}^{|\omega|} \alpha'(b,b',n+\sum_{j=1}^{s-2} \omega_j h_{j}), \end{equation} with ${\mathcal C}$ being the conjugation map. From Lemma \ref{baby-calculus}, $\sigma$ is a nilcharacter of degree $\leq s$ on $\ultra \Z^{s+1}$. In the following lemma we compute its symbol. \begin{lemma} We have \begin{equation}\label{sorba-2} \begin{split} [\sigma(b,b',h_1,\ldots,h_{s-2},n)]_{\Xi^s(\ultra \Z^{s+1})} = &s! ([\tilde \chi(b,b',h_1,\ldots,h_{s-2})]_{\Xi^s(\ultra \Z^{s+1})} \\ &\quad - [\tilde \chi(b',b,h_1,\ldots,h_{s-2})]_{\Xi^s(\ultra \Z^{s+1})}). \end{split} \end{equation} \end{lemma} \begin{proof} From \eqref{sdef} and Lemma \ref{symbolic} we can write the left-hand side as \begin{equation}\label{flip} \sum_{\omega \in \{0,1\}^{s-2}} (-1)^{|\omega|} [\alpha'(b,b',n+\sum_{j=1}^{s-2} \omega_j h_{j})]_{\Xi^s(\ultra \Z^{s+1})}; \end{equation} one should think of this as an $(s-2)$-fold ``derivative'' of $[\alpha'(b,b',n)]_{\Xi^s(\ultra \Z^3)}$ in the $n$ variable. 
From \eqref{calc-3} we can write \begin{align*} [\alpha'(b,b',n)]_{\Xi^s(\ultra \Z^3)} &= s(s-1) ([\tilde \chi(b,b',n,\ldots,n)]_{\Xi^s(\ultra \Z^3)} - [\tilde \chi(b',b,n,\ldots,n)]_{\Xi^s(\ultra \Z^3)}) \\ &\quad + [\beta(b,b',n)]_{\Xi^s(\ultra \Z^3)} \end{align*} where $\beta$ is of degree at most $s-3$ in $n$. In fact, by inspection of the derivation of $\beta$, and heavy use of Lemma \ref{multil}, one can express $[\beta(b,b',n)]_{\Xi^s(\ultra \Z^3)}$ as a linear combination of classes of the form $$ [\tilde \chi(n_1,\ldots,n_s)]_{\Xi^s(\ultra \Z^3)}$$ where each of $n_1,\ldots,n_s$ is equal to either $b$, $b'$, or $n$, with at most $s-3$ copies of $n$ occurring. If one then substitutes this expansion into \eqref{flip} and applies Lemma \ref{multil} repeatedly, one obtains the claim. \end{proof} On the other hand, from \eqref{sorba} and Lemma \ref{bias}, we see that on $[[3N]]^{s+1}$, $\sigma$ is equal to a nilsequence of degree $\leq s-1$, and thus by Lemma \ref{symbolic} $$ [\sigma(b,b',h_1,\ldots,h_{s-2},n)]_{\Xi^s([[3N]]^{s+1})} = 0$$ and thus by \eqref{sorba-2} $$ s! ([\tilde \chi(b,b',h_1,\ldots,h_{s-2})]_{\Xi^s([[3N]]^{s+1})} - [\tilde \chi(b',b,h_1,\ldots,h_{s-2})]_{\Xi^s([[3N]]^{s+1})}) = 0.$$ Applying Lemma \ref{baby-calculus} we conclude that $$ s! ([\tilde \chi(h,n,\ldots,n)]_{\Xi^s([[N]] \times [N])} - [\tilde \chi(n,h,n,\ldots,n)]_{\Xi^s([[N]] \times [N])}) = 0.$$ The claim \eqref{total-sym} now follows from Lemma \ref{torsion}. The proof of Theorem \ref{aderiv} is now complete.
\section{\bf Introduction and statement of results} \hspace{5mm} Let $P(z) $ be a polynomial of degree $n$; then \begin{equation}\label{e1} \underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq n\underset{\left|z\right|=1}{Max}\left|P(z)\right|. \end{equation} Inequality \eqref{e1} is an immediate consequence of S. Bernstein's Theorem on the derivative of a trigonometric polynomial (for reference, see \cite[p.531]{mm}, \cite[p.508]{rs} or \cite{asc}), and the result is best possible with equality holding for the polynomial $P(z)=az^n,$ $a\neq 0.$ \\ \indent If we restrict ourselves to the class of polynomials having no zero in $|z|<1$, then inequality \eqref{e1} can be replaced by \begin{equation}\label{e2} \underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq \frac{n}{2}\underset{\left|z\right|=1}{Max}\left|P(z)\right|. \end{equation} Inequality \eqref{e2} was conjectured by Erd\"{o}s and later verified by Lax \cite{el}. The result is sharp and equality holds for $P(z)=az^n+b,$ $|a|=|b|.$\\ \indent For the class of polynomials having all zeros in $|z|\leq 1,$ it was proved by Tur\'{a}n \cite{t} that \begin{equation}\label{e3} \underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq \frac{n}{2}\underset{\left|z\right|=1}{Max}\left|P(z)\right|. \end{equation} Inequality \eqref{e3} is best possible, with equality for the polynomial $P(z)=(z+1)^n.$ As an extension of \eqref{e2} and \eqref{e3}, Malik \cite{m} proved that if $P(z)\neq 0$ in $|z|<k$ where $k\geq 1,$ then \begin{equation}\label{e4} \underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\leq \frac{n}{1+k}\underset{\left|z\right|=1}{Max}\left|P(z)\right|, \end{equation} whereas if $P(z)$ has all its zeros in $|z|\leq k$ where $k\leq 1,$ then \begin{equation}\label{e5} \underset{\left|z\right|=1}{Max}\left|P^{\prime}(z)\right|\geq \frac{n}{1+k}\underset{\left|z\right|=1}{Max}\left|P(z)\right|. \end{equation} \indent Let $D_\alpha P(z)$ denote the polar derivative of the polynomial $P(z)$ of degree $n$ with respect to the point $\alpha$; then $$D_\alpha P(z)=nP(z)+(\alpha-z)P^{\prime}(z). $$ The polynomial $D_\alpha P(z)$ is a polynomial of degree at most $n-1$ and it generalizes the ordinary derivative in the sense that $$\underset{\alpha\rightarrow\infty}{Lim}\left[\dfrac{D_\alpha P(z)}{\alpha}\right]=P^{\prime}(z). $$ Now, corresponding to a given $n^{th}$ degree polynomial $P(z),$ we construct a sequence of polar derivatives \begin{equation*} D_{\alpha_1}P(z)=nP(z)+(\alpha_1-z)P^{\prime}(z)=P_1(z) \end{equation*} \begin{align*} D_{\alpha_s}D_{\alpha_{s-1}}\cdots D_{\alpha_2}D_{\alpha_1}P(z)=(n-s+1)&\left\{D_{\alpha_{s-1}}\cdots D_{\alpha_2}D_{\alpha_1}P(z)\right\}\\&+(\alpha_s-z)\left\{D_{\alpha_{s-1}}\cdots D_{\alpha_2}D_{\alpha_1}P(z)\right\}^{\prime}. \end{align*} The points $\alpha_1,\alpha_2,\cdots,\alpha_s,$ $s=1,2,\cdots,n,$ may be equal or unequal complex numbers. The $s^{th}$ polar derivative $D_{\alpha_s}D_{\alpha_{s-1}}\cdots D_{\alpha_2}D_{\alpha_1}P(z)$ of $P(z)$ is a polynomial of degree at most $n-s.$ For $$P_j(z)=D_{\alpha_j}D_{\alpha_{j-1}}\cdots D_{\alpha_2}D_{\alpha_1}P(z),$$ we have \begin{align*} P_{j}(z)&=(n-j+1)P_{j-1}(z)+(\alpha_j-z)P^{\prime}_{j-1}(z),\,\,\,\,\,\,j=1,2,\cdots,s,\\ P_{0}(z)&=P(z). \end{align*}
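To fix ideas, the following short numerical sketch (ours, for illustration only; it uses \texttt{numpy.poly1d}, and the test polynomial $P(z)=z^3+1$ and the evaluation points are arbitrary choices) computes $D_\alpha P$ directly from the definition, builds the iterated polar derivatives $P_s$, and checks the limit property above.
\begin{verbatim}
import numpy as np

def polar_derivative(P, alpha):
    # D_alpha P(z) = n P(z) + (alpha - z) P'(z), with n = deg P
    n = P.order
    return n * P + np.poly1d([-1, alpha]) * P.deriv()

def iterated_polar(P, alphas):
    # P_s = D_{alpha_s} ... D_{alpha_1} P; the factor n - j + 1 in the
    # recursion is the degree of P_{j-1}, which P.order tracks as long
    # as no leading coefficient happens to vanish along the way
    for a in alphas:
        P = polar_derivative(P, a)
    return P

P = np.poly1d([1, 0, 0, 1])                    # P(z) = z^3 + 1
print(polar_derivative(P, 2.0))                # D_2 P(z) = 6 z^2 + 3

big = 1e9                                      # D_alpha P / alpha -> P'
print((polar_derivative(P, big) / big).coeffs) # ~ [3, 0, 0]
print(P.deriv().coeffs)                        # P'(z) = 3 z^2
\end{verbatim}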
\indent A. Aziz \cite{a88} extended the inequality \eqref{e2} to the $s^{th}$ polar derivative by proving that if $P(z)$ is a polynomial of degree $n$ not vanishing in $|z|<1,$ then for $|z|\geq 1$ \begin{align}\label{ae} |D_{\alpha_s}\cdots &D_{\alpha_1}P(z)|\leq \dfrac{n(n-1)\cdots(n-s+1)}{2}\left\{|\alpha_1\cdots\alpha_sz^{n-s}|+1\right\}\underset{|z|=1}{Max}|P(z)|, \end{align} where $|\alpha_j|\geq 1,$ for $j=1,2,\cdots,s.$ The result is best possible and equality holds for the polynomial $P(z)=(z^n+1)/2.$\\ As a refinement of inequality \eqref{ae}, Aziz and Wali Mohammad \cite{aw} proved that if $P(z)$ is a polynomial of degree $n$ not vanishing in $|z|<1,$ then for $|z|\geq 1$ \begin{align}\label{awe}\nonumber |D_{\alpha_s}\cdots D_{\alpha_1}P(z)|\leq \dfrac{n(n-1)\cdots(n-s+1)}{2}&\Big\{\left(|\alpha_1\cdots\alpha_sz^{n-s}|+1\right)\underset{|z|=1}{Max}|P(z)|\\ &-\left( |\alpha_1\cdots\alpha_sz^{n-s}|-1 \right)\underset{|z|=1}{Min}|P(z)|\Big\}, \end{align} where $|\alpha_j|\geq 1,$ for $j=1,2,\cdots,s.$ The result is best possible and equality holds for the polynomial $P(z)=(z^n+1)/2.$ \\ \indent In this paper, we shall obtain several inequalities concerning the polar derivative of a polynomial and thereby obtain compact generalizations of inequalities \eqref{ae} and \eqref{awe}.\\ \indent We first prove the following result, from which certain interesting results follow as special cases. \begin{theorem}\label{t1} Let $ F(z)$ be a polynomial of degree $n$ having all its zeros in the disk $\left|z\right|\leq k$ where $k\leq 1$, and let $P(z)$ be a polynomial of degree $n$ such that \begin{equation*} \left|P(z)\right|\leq \left|F(z)\right| \,\,\,\textnormal{for}\,\,\, |z| = k. \end{equation*} Then for $\alpha_j,\beta\in\mathbb{C}$ with $ \left|\alpha_j\right|\geq k,\left|\beta\right|\leq 1 $, $j=1,2,\cdots,s$ and $|z|\geq 1$, \begin{align}\label{te1} \left|z^sP_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\right|\leq \left|z^sF_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}F(z)\right|, \end{align} where \begin{align}\label{te1'} n_s=n(n-1)\cdots(n-s+1) \,\,\,\textnormal{and}\,\,\,\Lambda_s=(|\alpha_1|-k)(|\alpha_2|-k)\cdots(|\alpha_s|-k). \end{align} \end{theorem} If we choose $ F(z)= z^{n}M/k^{n} $, where $ M = Max_{|z|=k}\left|P(z)\right|$, in Theorem \ref{t1}, we get the following result. \begin{corollary}\label{c1} If $P(z)$ is a polynomial of degree $n$, then for $\alpha_j,\beta\in\mathbb{C}$ with $ \left|\alpha_j\right|\geq k,\left|\beta\right|\leq 1 $, $j=1,2,\cdots,s$ and $|z|\geq 1$, \begin{align}\label{ce1} \left|z^sP_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\right|\leq \dfrac{n_s|z|^n}{k^n}\left|\alpha_1\alpha_2\cdots\alpha_s+\beta\dfrac{ \Lambda_s}{(1+k)^s}\right|\underset{\left|z\right|=k}{Max}\left|P(z)\right|, \end{align} where $n_s$ and $\Lambda_s$ are given by \eqref{te1'}. \end{corollary} If $\alpha_1=\alpha_2=\cdots=\alpha_s=\alpha,$ then dividing both sides of \eqref{ce1} by $|\alpha|^s$ and letting $|\alpha|\rightarrow\infty,$ we obtain the following result. \begin{corollary}\label{c2} If $ P(z)$ is a polynomial of degree $n$, then for $ \beta\in\mathbb{C}$ with $\left|\beta\right|\leq 1 $ and $|z|\geq 1$, \begin{align}\label{ce2} \left|z^sP^{(s)}(z)+\beta\dfrac{n_s}{(1+k)^s}P(z)\right|\leq \dfrac{n_s|z|^n}{k^n}\left|1+\dfrac{\beta }{(1+k)^s}\right|\underset{\left|z\right|=k}{Max}\left|P(z)\right|, \end{align} where $n_s$ is given by \eqref{te1'}. \end{corollary}
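As a quick numerical sanity check of the $s=1$ case of Corollary \ref{c2} (this sketch is ours, not part of the argument; the random polynomial and the values of $n$, $k$, $\beta$ are arbitrary), one can sample both sides on the circle $|z|=1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
z = np.exp(2j * np.pi * np.arange(2048) / 2048)     # the circle |z| = 1
n, k, beta = 6, 0.75, 0.4 - 0.3j                    # |beta| <= 1
coef = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
P = np.poly1d(coef)                                 # a generic degree-n polynomial
maxPk = np.abs(P(k * z)).max()                      # Max_{|z| = k} |P(z)|
lhs = np.abs((np.poly1d([1, 0]) * P.deriv() + n * beta / (1 + k) * P)(z))
rhs = (n / k**n) * abs(1 + beta / (1 + k)) * maxPk  # |z|^n = 1 on |z| = 1
print(np.all(lhs <= rhs))                           # expected: True
\end{verbatim}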
Again, if we take $\alpha_1=\alpha_2=\cdots=\alpha_s=\alpha,$ then dividing both sides of \eqref{te1} by $|\alpha|^s$ and letting $|\alpha|\rightarrow\infty,$ we obtain the following result. \begin{corollary}\label{c3} Let $ F(z)$ be a polynomial of degree $n$ having all its zeros in the disk $\left|z\right|\leq k$ where $k\leq 1$, and let $P(z)$ be a polynomial of degree $n$ such that \begin{equation*} \left|P(z)\right|\leq \left|F(z)\right| \,\,\,\textnormal{for}\,\,\, |z| = k. \end{equation*} Then for $\beta\in\mathbb{C}$ with $ \left|\beta\right|\leq 1 $ and $|z|\geq 1$, \begin{align}\label{ce3} \left|z^sP^{(s)}(z)+\beta\dfrac{n_s }{(1+k)^s}P(z)\right|\leq \left|z^sF^{(s)}(z)+\beta\dfrac{n_s }{(1+k)^s}F(z)\right|, \end{align} where $n_s$ is given by \eqref{te1'}. \end{corollary} If we choose $P(z)=mz^n/k^n,$ where $m=Min_{|z|=k}|F(z)|,$ in Theorem \ref{t1}, we get the following result. \begin{corollary}\label{c4} Let $ F(z)$ be a polynomial of degree $n$ having all its zeros in the disk $\left|z\right|\leq k$ where $k\leq 1$. Then for $\alpha_j,\beta\in\mathbb{C}$ with $ \left|\alpha_j\right|\geq k,\left|\beta\right|\leq 1 $, where $j=1,2,\cdots,s$, and $|z|\geq 1$, \begin{align}\label{ce4} \left|z^sF_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}F(z)\right|\geq \dfrac{n_s|z|^n}{k^n}\left|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right|\underset{|z|=k}{Min}|F(z)|, \end{align} where $n_s$ and $\Lambda_s$ are given by \eqref{te1'}. \end{corollary} \begin{remark} \textnormal{For $\beta=0$ and $k=1,$ we get the result due to Aziz and Wali Mohammad \cite[Theorem 1]{aw}.} \end{remark} Again, if we take $\alpha_1=\alpha_2=\cdots=\alpha_s=\alpha,$ then dividing both sides of \eqref{ce4} by $|\alpha|^s$ and letting $|\alpha|\rightarrow\infty,$ we obtain the following result. \begin{corollary}\label{c5} Let $ F(z)$ be a polynomial of degree $n$ having all its zeros in the disk $\left|z\right|\leq k$ where $k\leq 1$. Then for $ \beta\in\mathbb{C}$ with $ \left|\beta\right|\leq 1 $ and $|z|\geq 1$, \begin{align}\label{ce5} \left|z^sF^{(s)}(z)+\dfrac{\beta n_s }{(1+k)^s}F(z)\right|\geq \dfrac{n_s|z|^n}{k^n}\left|1+\dfrac{\beta }{(1+k)^s}\right|\underset{|z|=k}{Min}|F(z)|, \end{align} where $n_s$ is given by \eqref{te1'}. \end{corollary} For $s = 1$ and $\alpha_1=\alpha$ in Theorem \ref{t1}, we get the following result: \begin{corollary}\label{c} Let $ F(z)$ be a polynomial of degree $n$ having all its zeros in the disk $\left|z\right|\leq k$ where $k\leq 1$, and let $P(z)$ be a polynomial of degree $n$ such that \begin{equation*} \left|P(z)\right|\leq \left|F(z)\right| \,\,\,\textnormal{for}\,\,\, |z| = k. \end{equation*} Then for $\alpha, \beta\in\mathbb{C}$ with $ \left|\alpha\right|\geq k,\left|\beta\right|\leq 1 $, and $|z|\geq 1$, \begin{equation}\nonumber\label{ce} \left|zD_\alpha P(z)+n\beta \left(\dfrac{|\alpha|-k}{k+1}\right)P(z)\right| \leq \left|zD_\alpha F(z)+n\beta \left(\dfrac{|\alpha|-k}{k+1}\right)F(z)\right|. \end{equation} \end{corollary}
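The comparison in Corollary \ref{c} can also be illustrated numerically; the sketch below (ours, with arbitrary parameter choices) takes $F$ with all zeros in $|z|\leq k$ and $P(z)=\kappa z^n$ scaled so that $|P|\leq|F|$ on $|z|=k$, and tests the inequality on a circle with $|z|\geq 1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n, k = 4, 0.9
roots = k * rng.uniform(0, 1, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
F = np.poly1d(np.poly(roots))                  # all zeros of F in |z| <= k
zk = k * np.exp(2j * np.pi * np.arange(1024) / 1024)
kappa = 0.9 * np.abs(F(zk)).min() / k**n
P = np.poly1d([kappa] + [0] * n)               # P(z) = kappa z^n, |P| <= |F| on |z| = k
alpha, beta = 1.5, -0.7                        # |alpha| >= k, |beta| <= 1
c = n * beta * (abs(alpha) - k) / (k + 1)
def lhs(Q):                                    # z D_alpha Q(z) + n beta ((|alpha|-k)/(k+1)) Q(z)
    DQ = n * Q + np.poly1d([-1, alpha]) * Q.deriv()
    return np.poly1d([1, 0]) * DQ + c * Q
z = 1.3 * np.exp(2j * np.pi * np.arange(1024) / 1024)  # a circle with |z| >= 1
print(np.all(np.abs(lhs(P)(z)) <= np.abs(lhs(F)(z))))  # expected: True
\end{verbatim}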
\begin{theorem}\label{t2} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\alpha_1,\alpha_2,\cdots,\alpha_s,\beta\in\mathbb{C}$ with $|\alpha_1|\geq k,|\alpha_2|\geq k,\cdots,|\alpha_s|\geq k,$ $|\beta|\leq 1$ and $|z|\geq 1$, \begin{align}\nonumber\label{te2} \Bigg|z^sP_s(z)&+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\Bigg|\\&\leq \dfrac{n_s}{2}\left\{\dfrac{|z^n| }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|, \end{align} where $n_s$ and $\Lambda_s$ are given by \eqref{te1'}. \end{theorem} \begin{remark} \textnormal{If we take $\beta=0$ and $k=1,$ we get inequality \eqref{ae}.} \end{remark} We next prove the following refinement of Theorem \ref{t2}. \begin{theorem}\label{t3} If $P(z)$ is a polynomial of degree $n$ and does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\alpha_1,\alpha_2,\cdots,\alpha_s,\beta\in\mathbb{C}$ with $|\alpha_1|\geq k,|\alpha_2|\geq k,\cdots,|\alpha_s|\geq k,$ $|\beta|\leq 1$ and $|z|\geq 1,$ we have \begin{align}\nonumber\label{te3} \Bigg|z^s&P_s(z)+\beta\dfrac{ n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|\\\nonumber\leq & \dfrac{n_s}{2}\Bigg[\left\{\dfrac{|z|^n }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\beta\dfrac{ \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\beta\dfrac{ \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|\\ \,\,\,\,&-\left\{\dfrac{|z|^n}{k^{n}}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\beta\dfrac{ \Lambda_s}{(1+k)^s}\Bigg|-\Bigg|z^s+\beta\dfrac{ \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Min}|P(z)|\Bigg], \end{align} where $n_s$ and $\Lambda_s$ are given by \eqref{te1'}. \end{theorem} If we take $\alpha_1=\alpha_2=\cdots=\alpha_s=\alpha,$ then dividing both sides of \eqref{te3} by $|\alpha|^s$ and letting $|\alpha|\rightarrow\infty,$ we obtain the following result. \begin{corollary}\label{c7} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\beta\in\mathbb{C}$ with $|\beta|\leq 1$ and $|z|\geq 1$, \begin{align}\nonumber\label{ce7} \Bigg|z^sP^{(s)}(z)+\beta\dfrac{ n_s }{(1+k)^s} P(z)\Bigg|\leq& \dfrac{n_s}{2}\Bigg[\left\{\dfrac{|z|^n}{k^n}\Bigg|1+\dfrac{\beta }{(1+k)^s} \Bigg|+\Bigg|\dfrac{\beta}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|\\ \,\,\,\,\,&-\left\{\dfrac{|z|^n}{k^{n}}\Bigg|1+\dfrac{ \beta}{(1+k)^s}\Bigg|-\Bigg|\dfrac{ \beta}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Min}|P(z)|\Bigg], \end{align} where $n_s$ is given by \eqref{te1'}. \end{corollary} For $s=1$ in Corollary \ref{c7}, we get the following result. \begin{corollary}\label{c8} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\beta\in\mathbb{C}$ with $|\beta|\leq 1$ and $|z|\geq 1$, \begin{align}\nonumber\label{ce8} \Bigg|zP^{\prime}(z)+\dfrac{ n\beta }{1+k} P(z)\Bigg|\leq & \dfrac{n}{2}\Bigg[\left\{\dfrac{|z|^n}{k^n}\Bigg|1+\dfrac{\beta }{1+k} \Bigg|+\Bigg|\dfrac{\beta}{1+k}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|\\&-\left\{\dfrac{|z|^n}{k^{n}}\Bigg|1+\dfrac{ \beta}{1+k}\Bigg|-\Bigg|\dfrac{ \beta}{1+k}\Bigg|\right\}\underset{|z|=k}{Min}|P(z)|\Bigg]. \end{align} \end{corollary} For $\beta=0,$ Theorem \ref{t3} reduces to the following result. 
\begin{corollary}\label{c9} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\alpha_1,\alpha_2,\cdots,\alpha_s\in\mathbb{C}$ with $|\alpha_1|\geq k,|\alpha_2|\geq k,\cdots,|\alpha_s|\geq k$ and $|z|\geq 1$, \begin{align}\label{ce9}\nonumber \big|P_s(z)\big|\leq \dfrac{n_s}{2}\Bigg[&\left\{\dfrac{|z|^{n-s} }{k^n}|\alpha_1\alpha_2\cdots\alpha_s|+1\right\}\underset{|z|=k}{Max}|P(z)|\\&-\left\{\dfrac{|z|^{n-s}}{k^{n}}|\alpha_1\alpha_2\cdots\alpha_s|-1\right\}\underset{|z|=k}{Min}|P(z)|\Bigg], \end{align} where $n_s$ is given by \eqref{te1'}. \end{corollary} \begin{remark} \textnormal{For $k=1,$ inequality \eqref{ce9} reduces to \eqref{awe}.} \end{remark} If we take $\alpha_1=\alpha_2=\cdots=\alpha_s=\alpha,$ then dividing both sides of \eqref{ce9} by $|\alpha|^s$ and letting $|\alpha|\rightarrow\infty,$ we obtain the following result. \begin{corollary}\label{c10} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $|z|\geq 1$, \begin{align}\label{ce10} \big|P^{(s)}(z)\big|\leq \dfrac{n(n-1)\cdots(n-s+1)|z|^{n-s}}{2k^{n}}\Bigg[\underset{|z|=k}{Max}|P(z)|-\underset{|z|=k}{Min}|P(z)|\Bigg]. \end{align} \end{corollary} If $s=1$ and $\alpha_1=\alpha$, then inequality \eqref{te3} reduces to the following result. \begin{corollary}\label{c11} If $P(z)$ is a polynomial of degree $n$ which does not vanish in the disk $|z|< k$ where $k\leq 1,$ then for $\alpha,\beta\in\mathbb{C}$ with $|\alpha|\geq k,$ $|\beta|\leq 1$ and $|z|\geq 1$, \begin{align}\nonumber\label{ce11} \Bigg|z&D_\alpha P(z)+n\beta \left(\dfrac{ |\alpha|-k}{1+k}\right) P(z)\Bigg|\\\nonumber&\leq \dfrac{n}{2}\Bigg[\left\{\dfrac{|z|^n }{k^n}\Bigg|\alpha+\beta\left(\dfrac{ |\alpha|-k}{1+k}\right) \Bigg|+\Bigg|z+\beta\left(\dfrac{ |\alpha|-k}{1+k}\right)\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|\\&-\left\{\dfrac{|z|^n}{k^{n}}\Bigg|\alpha+\beta\left(\dfrac{ |\alpha|-k}{1+k}\right)\Bigg|-\Bigg|z+\beta\left(\dfrac{ |\alpha|-k}{1+k}\right)\Bigg|\right\}\underset{|z|=k}{Min}|P(z)|\Bigg]. \end{align} \end{corollary} \section{\bf Lemmas} For the proofs of the Theorems, we need the following Lemmas. The first Lemma follows by repeated application of Laguerre's theorem \cite{al} or \cite[p. 52]{marden}. \begin{lemma}\label{l1} If all the zeros of an $n$th degree polynomial $P(z)$ lie in a circular region $C$ and if none of the points $\alpha_1,\alpha_2,\cdots,\alpha_s$ lie in the circular region $C$, then each of the polar derivatives \begin{equation}\label{le1} D_{\alpha_s}D_{\alpha_{s-1}}\cdots D_{\alpha_1} P(z)=P_s(z),\,\,\,\,\,\,\,\,\,s=1,2,\cdots,n-1, \end{equation} has all its zeros in $C.$ \end{lemma} The next Lemma is due to Aziz and Rather \cite{ar98}. \begin{lemma}\label{l2} If $P(z)$ is a polynomial of degree $n,$ having all its zeros in the closed disk $|z|\leq k,$ $ k\leq 1,$ then for every real or complex number $\alpha$ with $|\alpha|\geq k$ and $|z|=1$, we have \begin{equation}\label{le2} |D_\alpha P(z)|\geq n\left(\dfrac{|\alpha|-k}{1+k}\right)|P(z)|. \end{equation} \end{lemma} \begin{lemma}\label{l3} If $P(z)=\sum_{j=0}^{n}a_jz^j$ is a polynomial of degree $n$ having all its zeros in $|z|\leq k,$ $k\leq 1,$ then \begin{equation}\label{le3} \dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_n}\right|\leq k. 
\end{equation} \end{lemma} The above lemma follows by taking $\mu=1$ in a result due to Aziz and Rather \cite{ar04}. \begin{lemma}\label{l4} Let $P(z)$ be a polynomial of degree $n$ having all its zeros in the disk $|z|\leq k$ where $k\leq 1$. Then for $\alpha_1,\alpha_2,\cdots,\alpha_s\in\mathbb{C}$ with $|\alpha_1|\geq k,|\alpha_2|\geq k,\cdots,|\alpha_s|\geq k,$ $(1\leq s<n),$ and $|z|=1$, \begin{align}\label{le4} |P_s(z)|\geq\dfrac{n_s \Lambda_s}{(1+k)^s}|P(z)|, \end{align} where $n_s$ and $\Lambda_s$ are defined in \eqref{te1'}. \end{lemma} \begin{proof} The result is trivial if $|\alpha_j|=k$ for at least one $j,$ $j=1,2,\cdots,s.$ Therefore, we assume $|\alpha_j|>k$ for all $j=1,2,\cdots,s.$ We shall prove the Lemma by induction on $s$. For $s=1$ the result follows from Lemma \ref{l2}. \\ We assume that the result is true for $s=q,$ which means that for $|z|=1,$ we have \begin{equation}\label{l4pe1} |P_q(z)|\geq \dfrac{n_q\Lambda_q}{(1+k)^q}|P(z)|,\,\,\,q\geq 1, \end{equation} and we will prove that the Lemma is true for $s=q+1$ also.\\ Since $D_{\alpha_1}P(z)=(na_n\alpha_1+a_{n-1})z^{n-1}+\cdots+(na_0+\alpha_1a_1)$ and $|\alpha_1|>k,$ $D_{\alpha_1}P(z)$ is a polynomial of degree $n-1.$ If this were not true, then $$ na_n\alpha_1+a_{n-1}=0, $$ which implies $$ |\alpha_1|=\dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_n}\right|. $$ By Lemma \ref{l3}, we would then have $$ |\alpha_1|=\dfrac{1}{n}\left|\dfrac{a_{n-1}}{a_n}\right|\leq k, $$ contradicting the fact that $|\alpha_1|>k.$ Hence, $D_{\alpha_1}P(z)$ is a polynomial of degree $n-1$ and, by Lemma \ref{l1}, $D_{\alpha_1}P(z)$ has all its zeros in $|z|\leq k.$ By a similar argument, $D_{\alpha_2}D_{\alpha_1}P(z)$ must be a polynomial of degree $n-2$ for $|\alpha_1|>k,$ $|\alpha_2|>k,$ with all its zeros in $|z|\leq k.$ Continuing in this way, we conclude that $D_{\alpha_q}D_{\alpha_{q-1}}\cdots D_{\alpha_1}P(z)=P_q(z)$ is a polynomial of degree $n-q$ for all $|\alpha_j|>k,$ $j=1,2,\dots,q,$ and has all its zeros in $|z|\leq k.$ Applying Lemma \ref{l2} to $P_q(z),$ we get for $|\alpha_{q+1}|>k,$ \begin{equation}\label{l4pe2} |P_{q+1}(z)|=|D_{\alpha_{q+1}}P_q(z)|\geq \dfrac{(n-q)(|\alpha_{q+1}|-k)}{1+k}|P_q(z)|\,\,\,\textnormal{for}\,\,\,\,|z|=1. \end{equation} Inequality \eqref{l4pe2} in conjunction with \eqref{l4pe1} gives, for $|z|=1,$ \begin{equation} |P_{q+1}(z)|\geq \dfrac{n_{q+1}\Lambda_{q+1}}{(1+k)^{q+1}}|P(z)|, \end{equation} where $n_{q+1}=n(n-1)\cdots(n-q)$ and $\Lambda_{q+1}=(|\alpha_1|-k)(|\alpha_2|-k)\cdots(|\alpha_{q+1}|-k).$\\ This shows that the result is true for $s=q+1$ also, which completes the proof of Lemma \ref{l4}. \end{proof}
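Before proceeding, here is a small numerical illustration of the $s=1$ case of Lemma \ref{l4}, i.e.\ Lemma \ref{l2} (this sketch is ours, with arbitrary parameter choices; the bound is tested pointwise on the sampled circle $|z|=1$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
z = np.exp(2j * np.pi * np.arange(2048) / 2048)        # |z| = 1
n, k, alpha = 4, 0.7, 2.0                              # |alpha| >= k
# all zeros placed inside |z| <= k
roots = k * rng.uniform(0, 1, n) * np.exp(2j * np.pi * rng.uniform(0, 1, n))
P = np.poly1d(np.poly(roots))
DP = n * P + np.poly1d([-1, alpha]) * P.deriv()        # D_alpha P
ratio = np.abs(DP(z)) / np.abs(P(z))                   # P has no zero on |z| = 1
print(ratio.min() >= n * (abs(alpha) - k) / (1 + k))   # expected: True
\end{verbatim}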
\begin{lemma}\label{l5} Let $P(z)$ be a polynomial of degree $n.$ Then for $\alpha_1,\alpha_2,\cdots,\alpha_s,\beta\in\mathbb{C}$ with $|\alpha_1|\geq k,|\alpha_2|\geq k,\cdots,|\alpha_s|\geq k,$ $(1\leq s<n),$ $|\beta|\leq 1$ and $|z|\geq 1,$ \begin{align}\nonumber\label{le5} \Bigg|z^s&P_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\Bigg|+k^n\left|z^sQ_s(z/k^2)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}Q(z/k^2)\right|\\\leq &n_s\left\{\dfrac{|z^n| }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|, \end{align} where $n_s$ and $\Lambda_s$ are defined in \eqref{te1'} and $Q(z)=z^n\overline{P(1/\overline{z})}.$ \end{lemma} \begin{proof} Let $M=\underset{|z|=k}{Max}|P(z)|.$ Therefore, for every $\lambda$ with $|\lambda|>1,$ $|P(z)|<|\lambda Mz^n/k^n|$ on $|z|=k.$ By Rouche's theorem it follows that all the zeros of $F(z)=P(z)+\lambda Mz^n/k^n$ lie in $|z|<k.$ If $G(z)=z^n\overline{F(1/\overline{z})}$ then $|k^nG(z/k^2)|=|F(z)|$ for $|z|=k$ and hence for any $\delta$ with $|\delta|>1,$ the polynomial $H(z)= k^nG(z/k^2)+\delta F(z)$ has all its zeros in $|z|<k.$ By applying Lemma \ref{l4} to $H(z),$ we have for $\alpha_1,\alpha_2,\cdots,\alpha_s\in\mathbb{C}$ with $|\alpha_1|> k,|\alpha_2|> k,\cdots,|\alpha_s|> k,$ $(1\leq s<n),$ \begin{equation*} |z^sH_s(z)|\geq\dfrac{n_s \Lambda_s}{(1+k)^s}|H(z)| \,\,\,\,\,\textnormal{for}\,\,\,\,|z|=1. \end{equation*} Therefore, for any $\beta$ with $|\beta|<1$ and $|z|=1,$ we have \begin{equation*} |z^sH_s(z)|>|\beta|\dfrac{n_s \Lambda_s}{(1+k)^s}|H(z)| . \end{equation*} Since $|\alpha_j|> k$ for $j=1,2,\cdots,s$, by Lemma \ref{l1} the polynomial $ z^sH_s(z)$ has all its zeros in $|z|<1$ and by Rouche's theorem, the polynomial $$ T(z)=z^sH_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s} H(z) $$ has all its zeros in $|z|<1.$ Replacing $H(z)$ by $k^nG(z/k^2)+\delta F(z),$ we conclude that the polynomial $$ T(z)=k^n\left\{ z^sG_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} G(z/k^2)\right\}+\delta\left\{ z^sF_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z)\right\}$$ has all its zeros in $|z|<1.$ This gives, for $|\beta|<1,$ $|\alpha_j|\geq k$ where $j=1,2,\cdots,s$, and $|z|\geq 1,$ \begin{equation}\label{le34} k^n\left|z^sG_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} G(z/k^2)\right|\leq \left|z^sF_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z)\right|. \end{equation} If inequality \eqref{le34} is not true, then there is a point $z_0$ with $|z_0|\geq 1$ such that \begin{equation*} k^n\left|z_0^sG_s(z_0/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} G(z_0/k^2)\right|> \left|z_0^sF_s(z_0)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z_0)\right|. \end{equation*} Since all the zeros of $F(z)$ lie in $|z|<k,$ proceeding as in the case of $H(z),$ it follows that the polynomial $z^sF_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z)$ has all its zeros in $|z|< 1,$ and hence $z_0^sF_s(z_0)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z_0)\neq 0.$ Now, choosing $$\delta=-\dfrac{k^n\left\{z_0^sG_s(z_0/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} G(z_0/k^2)\right\}}{z_0^sF_s(z_0)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} F(z_0)},$$ we find that $\delta$ is a well-defined real or complex number with $|\delta|>1$ and $T(z_0)=0,$ which contradicts the fact that $T(z)$ has all its zeros in $|z|<1.$ Thus, \eqref{le34} holds. 
Now, replacing $F(z)$ by $P(z)+\lambda Mz^n/k^n$ and $G(z)$ by $Q(z)+\overline{\lambda}M/k^n$ in \eqref{le34}, we have for $|z|\geq 1,$ \begin{align}\nonumber\label{le35} k^n\Bigg|z^s&Q_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)+\dfrac{\overline{\lambda}n_s}{k^n}\left\{z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right\}M\Bigg|\\&\leq\Bigg|z^sP_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)+\dfrac{\lambda n_s}{k^n}\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \right\}Mz^n\Bigg|. \end{align} Choosing the argument of $\lambda$ in the right hand side of \eqref{le35} such that \begin{align*} \Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)+\dfrac{\lambda n_s}{k^n}\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \right\}Mz^n\Bigg|\\=&\Bigg|\dfrac{\lambda n_s}{k^n}\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \right\}Mz^n\Bigg|-\Bigg|z^sP_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|, \end{align*} which is possible by Corollary \ref{c1}, we have for $|z|\geq 1,$ \begin{align}\nonumber\label{le36} \Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|+k^n\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)\Bigg|\\&\leq|\lambda|n_s\Bigg[\dfrac{|z^n| }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\Bigg]M. \end{align} Letting $|\lambda|\rightarrow 1$ and using continuity for $|\beta|=1$ and $|\alpha_j|=k,$ $j=1,2,\cdots,s,$ in \eqref{le36}, we get the inequality \eqref{le5}. \end{proof} \section{\textbf{Proof of Theorems}} \begin{proof}[\textnormal{\textbf{Proof of Theorem \ref{t1}}}] By hypothesis $F(z)$ is a polynomial of degree $n$ having all its zeros in the closed disk $|z|\leq k$ and $P(z)$ is a polynomial of degree $n$ such that \begin{equation}\label{t1e1} |P(z)|\leq |F(z)|\,\,\,\, \textrm{for} \,\,\,\,|z|= k; \end{equation} therefore, if $F(z)$ has a zero of multiplicity $\ell$ at $z=ke^{i\theta_{0}}$, then $P(z)$ has a zero of multiplicity at least $\ell$ at $z=ke^{i\theta_{0}}$. If $P(z)/F(z)$ is a constant, then inequality \eqref{te1} is obvious. We now assume that $P(z)/F(z)$ is not a constant, so that by the maximum modulus principle, it follows that \[|P(z)|<|F(z)|\,\,\,\textrm{for}\,\, |z|>k.\] Suppose $F(z)$ has $m$ zeros on $|z|=k$ where $0\leq m < n$, so that we can write \[F(z) = F_{1}(z)F_{2}(z)\] where $F_{1}(z)$ is a polynomial of degree $m$ whose zeros all lie on $|z|=k$ and $F_{2}(z)$ is a polynomial of degree exactly $n-m$ having all its zeros in $|z|<k$. This implies, with the help of inequality \eqref{t1e1}, that \[P(z) = P_{1}(z)F_{1}(z)\] where $P_{1}(z)$ is a polynomial of degree at most $n-m$. Again, from inequality \eqref{t1e1}, we have \[|P_{1}(z)| \leq |F_{2}(z)|\,\,\,\textrm{for} \,\, |z|=k,\] where $F_{2}(z) \neq 0$ for $|z|=k$. Therefore, for every real or complex number $\lambda$ with $|\lambda|>1$, a direct application of Rouche's theorem shows that the zeros of the polynomial $P_{1}(z)- \lambda F_{2}(z)$ of degree $n-m \geq 1$ lie in $|z|<k$; hence the polynomial \[ G(z) = F_{1}(z)\left(P_{1}(z) - \lambda F_{2}(z)\right)=P(z) - \lambda F(z)\] has all its zeros in $|z|\leq k.$ Therefore, for $r> 1,$ all the zeros of $G(rz)$ lie in $|z|\leq k/r< k.$ Applying Lemma \ref{l4} to the polynomial $G(rz),$ we have for $|z|=1$ \begin{equation*} |z^sG_s(rz)|\geq\dfrac{n_s \Lambda_s}{(1+k)^s}|G(rz)|. 
\end{equation*} Equivalently, for $|z|=1,$ we have \begin{equation} |z^sP_s(rz)-\lambda z^s F_s(rz)|\geq\dfrac{n_s \Lambda_s}{(1+k)^s}|P(rz)-\lambda F(rz)|. \end{equation} Therefore, we have for any $\beta$ with $|\beta|<1$ and $|z|=1,$ \begin{equation} |z^sP_s(rz)-\lambda z^sF_s(rz)|>\dfrac{n_s \Lambda_s}{(1+k)^s}|\beta||P(rz)-\lambda F(rz)|. \end{equation} Since $|\alpha_j|>k,$ $j=1,2,\cdots,s,$ the polynomial $z^sG_s(rz)=z^sP_s(rz)-\lambda z^sF_s(rz)$ has all its zeros in $|z|<1$ by Lemma \ref{l1}, and hence by Rouche's theorem, the polynomial \begin{align*} T(rz)&= \left\{ z^sP_s(rz)-\lambda z^sF_s(rz)\right\}+\dfrac{n_s \Lambda_s}{(1+k)^s}\beta \left\{P(rz)-\lambda F(rz)\right\}\\& = z^sP_s(rz)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}P(rz)- \lambda\left\{z^sF_s(rz)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz) \right\} \end{align*} has all its zeros in $|z|< 1.$ This implies for $|z|\geq 1,$ \begin{equation}\label{t1p1} |z^sP_s(rz)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}P(rz)|\leq |z^sF_s(rz)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz)|. \end{equation} If inequality \eqref{t1p1} is not true, then there is a point $z_0$ with $|z_0|\geq 1$ such that \begin{equation} |z_0^sP_s(rz_0)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}P(rz_0)|> |z_0^sF_s(rz_0)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz_0)|. \end{equation} Since all the zeros of $F(rz)$ lie in $|z|<k,$ it follows, as in the proof of Lemma \ref{l5}, that the polynomial $z^sF_s(rz)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz)$ has all its zeros in $|z|<1,$ and hence $z_0^sF_s(rz_0)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz_0)\neq 0.$ We can therefore choose $$ \lambda=\dfrac{z_0^sP_s(rz_0)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}P(rz_0)}{z_0^sF_s(rz_0)+\dfrac{n_s \Lambda_s\beta}{(1+k)^s}F(rz_0)}; $$ then $\lambda$ is a well-defined real or complex number with $|\lambda|>1,$ and with this choice of $\lambda,$ $T(rz_0)=0$ for some $z_0$ with $|z_0|\geq 1.$ But this contradicts the fact that $T(rz)\neq 0$ for $|z|\geq 1.$ Thus \eqref{t1p1} holds. Letting $r\rightarrow 1$ in \eqref{t1p1}, we get the desired result. \end{proof} \begin{proof}[\textnormal{\textbf{Proof of Theorem \textbf{\ref{t2}}}}] Let $Q(z)=z^{n}\overline{P(1/\overline{z})}.$ Since $ P(z) $ does not vanish in the disk $ |z|<k,\,\, k\leq 1 $, the polynomial $Q(z/k^2)$ has all its zeros in $|z|\leq k.$ Applying Theorem \ref{t1} to $k^nQ(z/k^2)$ and noting that $|P(z)|=|k^nQ(z/k^2)|$ for $|z|=k,$ we have for all $ \alpha_j,\beta\in\mathbb{C} $ with $ |\alpha_j|\geq k,$ $j=1,2,\cdots,s,$ $|\beta|\leq 1 ,$ and $ |z|\geq 1 ,$ \begin{align}\label{t2e1} \left|z^sP_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\right|\leq k^n\left|z^sQ_s(z/k^2)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}Q(z/k^2)\right|. \end{align} Inequality \eqref{t2e1} in conjunction with Lemma \ref{l5} gives for all $ \alpha_j,\beta\in\mathbb{C} $ with $ |\alpha_j|\geq k,$ $j=1,2,\cdots,s,$ $|\beta|\leq 1, $ and $ |z|\geq 1 ,$ \begin{align*} 2\Bigg|&z^sP_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\Bigg|\\&\leq \left|z^sP_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}P(z)\right|+ k^n\left|z^sQ_s(z/k^2)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}Q(z/k^2)\right|\\&\leq n_s\left\{\dfrac{|z|^n }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|, \end{align*} which is equivalent to \eqref{te2}. \end{proof} \begin{proof}[\textnormal{\textbf{Proof of Theorem \ref{t3}}}] Let $m=\underset{|z|=k}{Min}|P(z)|.$ If $P(z)$ has a zero on $|z|=k,$ then $m=0$ and the result follows from Theorem \ref{t2}. Therefore, we assume that $ P(z) $ has all its zeros in $ |z|>k $ where $ k\leq 1 $, so that $ m > 0 $. Now for every $ \lambda $ with $ |\lambda|<1 $, it follows by Rouche's theorem that $ h(z)=P(z)-\lambda m $ does not vanish in $ |z|<k $.
Let $g(z)=z^n\overline{h(1/\overline{z})}=z^n\overline{P(1/\overline{z})}-\overline{\lambda}mz^n=Q(z)-\overline{\lambda}mz^n;$ then the polynomial $g(z/k^2)$ has all its zeros in $|z|\leq k.$ As $|k^ng(z/k^2)|=|h(z)|$ for $|z|=k,$ applying Theorem \ref{t1} to $k^ng(z/k^2),$ we get for $\alpha_1,\alpha_2,\cdots,\alpha_s,\beta\in\mathbb{C}$ with $|\alpha_j|\geq k,$ $j=1,2,\cdots,s,$ $|\beta|\leq 1$ and $|z|\geq 1,$ \begin{align}\label{t3e1} \left|z^sh_s(z)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}h(z)\right|\leq k^n\left|z^sg_s(z/k^2)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}g(z/k^2)\right|. \end{align} Equivalently, for $|z|\geq 1,$ we have \begin{align}\nonumber\label{t3e2} \Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)-\lambda n_s\left\{z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right\}m\Bigg|\\&\leq k^n\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)-\dfrac{\overline{\lambda}n_s}{k^{2n}}\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right\}mz^n\Bigg|. \end{align} Since $Q(z/k^2)$ has all its zeros in $|z|\leq k$ and $k^n\underset{|z|=k}{Min}|Q(z/k^2)|=\underset{|z|=k}{Min}|P(z)|,$ by Corollary \ref{c4} applied to $Q(z/k^2),$ we have for $|z|\geq 1,$ \begin{align}\nonumber\label{t3e3} \Bigg|z^sQ_s(z/k^2)+\beta\dfrac{n_s \Lambda_s}{(1+k)^s}Q(z/k^2)\Bigg|&\geq \dfrac{n_s}{k^n}\left|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right|\underset{|z|=k}{Min}|Q(z/k^2)|\\\nonumber&= \dfrac{n_s}{k^{2n}}\left|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right|\underset{|z|=k}{Min}|P(z)|\\&= \dfrac{n_s}{k^{2n}}\left|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right|m. \end{align} Now, choosing the argument of $\lambda$ on the right hand side of inequality \eqref{t3e2} such that \begin{align*} k^n&\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)-\dfrac{\overline{\lambda}n_s}{k^{2n}}\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right\}mz^n\Bigg|\\&=k^n\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)\Bigg|-\dfrac{|\lambda|n_s}{k^{n}}\Bigg|\left\{\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\right\}mz^n\Bigg|, \end{align*} which is possible by inequality \eqref{t3e3}, we get for $|z|\geq 1,$ \begin{align*} \Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|-|\lambda| n_s\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|m\\&\leq k^n\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)\Bigg|-\dfrac{|\lambda||z|^nn_s}{k^{n}}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|m. \end{align*} Letting $|\lambda|\rightarrow 1,$ we have for $|z|\geq 1,$ \begin{align}\nonumber\label{t3e4} \Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|-k^n\Bigg|z^sQ_s(z/k^2)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} Q(z/k^2)\Bigg|\\&\leq n_s\left\{\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|-\dfrac{|z|^n}{k^{n}}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\right\}m. \end{align} Adding \eqref{le5} and \eqref{t3e4}, we get for $|z|\geq 1,$ \begin{align*} 2\Bigg|z^s&P_s(z)+\dfrac{\beta n_s \Lambda_s}{(1+k)^s} P(z)\Bigg|\\&\leq n_s\Bigg[\left\{\dfrac{|z|^n }{k^n}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta \Lambda_s}{(1+k)^s} \Bigg|+\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|\right\}\underset{|z|=k}{Max}|P(z)|\\&+\left\{\Bigg|z^s+\dfrac{\beta \Lambda_s}{(1+k)^s}\Bigg|-\dfrac{|z|^n}{k^{n}}\Bigg|\alpha_1\alpha_2\cdots\alpha_s+\dfrac{\beta
\Lambda_s}{(1+k)^s}\Bigg|\right\}m\Bigg], \end{align*} which is equivalent to \eqref{te3}. This completes the proof of Theorem \ref{t3}. \end{proof}
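For completeness, we record a verification of the identity $|k^nG(z/k^2)|=|F(z)|$ on $|z|=k$, which was used repeatedly above; it is a direct consequence of the definition $G(z)=z^n\overline{F(1/\overline{z})}$. For $|z|=k$ we have $1/\overline{(z/k^2)}=k^2/\overline{z}=z$, and therefore \begin{equation*} \left|k^nG(z/k^2)\right| = k^n\left|\dfrac{z}{k^2}\right|^n\left|F\big(1/\overline{(z/k^2)}\big)\right| = k^n\cdot\dfrac{k^n}{k^{2n}}\,|F(z)| = |F(z)|. \end{equation*} The same computation gives $|P(z)|=|k^nQ(z/k^2)|$ and $k^n\underset{|z|=k}{Min}\,|Q(z/k^2)|=\underset{|z|=k}{Min}\,|P(z)|$ as used in the proofs of Theorems \ref{t2} and \ref{t3}.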
\section{Introduction} The average luminosities of quasars, and possibly many other properties, depend strongly on cosmological epoch. Shape, normalisation, and evolution of the quasar luminosity function (QLF) are among the most basic descriptions of the quasar population. A good knowledge of the {\em local\/} ($z\approx 0$) QLF is essential for at least two purposes: first, to serve as zero-point for quasar evolution studies; second, to provide a reference distribution law for unbiased subsamples constructed for statistical investigations (e.g., for spectral properties, host galaxies, etc). Unfortunately, current determinations of the QLF are limited mainly to $z\ga 0.3$, for the simple reason of lacking appropriate survey data. Most optical QSO surveys discriminate against objects with extended morphology, causing severe incompleteness (and bias) at low redshifts. At the low-luminosity end of the QSO--Seyfert family, dedicated galaxy surveys have prompted investigations of the Seyfert~1 luminosity function (e.g., Cheng et al.\ \cite{cheng:85}; Huchra \& Burg \cite{huchra:92}). However, these surveys are necessarily incomplete at {\em high\/} nuclear luminosities, where the active nuclei outshine the surrounding hosts, and the objects are not selected as galaxies anymore. In 1990 we started the `Hamburg/ESO survey' (HES), a new wide-angle survey for bright QSOs and Seyferts, designed to avoid the above selection biases as far as possible. A detailed description of the HES was given by Wisotzki et al.\ (\cite{wisotzki:96}; hereafter Paper~I), and a first list of 160 newly discovered QSOs was published by Reimers et al.\ (\cite{reimers:96}; Paper~II). Basically, QSO candidates are selected by automated procedures from digitised objective-prism plates taken with the ESO Schmidt telescope, with subsequent slit spectroscopy of all candidates above the magnitude limit of typically $B_{\subs{lim}} \simeq 17.0$--17.5, over an area of $\sim 5000$\,deg$^2$ at high galactic latitude. To prevent morphological bias against sources with resolved structure, such as Seyferts or gravitational lenses, objects with non-stellar appearance are {\em not\/} excluded from the candidate lists. Among other goals, we aim to construct a large, optically flux-limited sample of low-redshift `Type~1 AGN', including QSO/Seyfert borderline objects. In this paper we publish the results of an analysis of a first part of the HES, based on 33 Schmidt fields, with an effective area of 611\,deg$^2$. We present the first direct determination of the local quasar luminosity function from a single optical survey, and discuss some implications of our results. A comprehensive description of the analysis has been given by K\"ohler (\cite{koehler:96a}). We adopt a cosmological model with $H_0 = 50\,\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}$, $q_0 = 0.5$, and $\Lambda=0$. \begin{table}[tb] \caption[]{Properties of surveyed fields. Field designations are given as standard ESO/SERC field numbers. $N_H$ is the column density of Galactic neutral hydrogen in the field centre, in $10^{20}$\,cm$^{-2}$. $A_B$ is the corresponding extinction in the $B$ band. 
$\Omega_{\subs{eff}}$ is the effective survey area for the field, in deg$^2$.} \label{tab:fields} \begin{tabular}{lllll} Field & $B_{\subs{lim}}$ & $N_H$ & $A_B$ & $\Omega_{\subs{eff}}$ \\[0.3ex] \hline 501 & 17.38 & 5.5 & 0.39 & 16.38 \rule{0em}{2.7ex} \\ 503 & 17.25 & 4.8 & 0.34 & 17.75 \\ 505 & 16.95 & 7.4 & 0.53 & 17.05 \\ 506 & 17.66 & 7.5 & 0.54 & 17.17 \\ 507 & 17.11 & 7.2 & 0.51 & 17.10 \\ 509 & 16.95 & 5.3 & 0.38 & 17.04 \\ 568 & 16.95 & 5.8 & 0.41 & 18.20 \\ 570 & 16.78 & 4.2 & 0.30 & 18.92 \\ 578 & 16.64 & 6.2 & 0.44 & 17.59 \\ 637 & 16.90 & 5.5 & 0.39 & 17.94 \\ 638 & 16.95 & 7.5 & 0.53 & 16.14 \\ 639 & 17.47 & 5.1 & 0.36 & 19.35 \\ 640 & 17.50 & 4.9 & 0.35 & 16.23 \\ 643 & 17.41 & 4.4 & 0.31 & 21.47 \\ 644 & 17.31 & 3.9 & 0.28 & 18.46 \\ 645 & 17.42 & 3.8 & 0.27 & 20.72 \\ 646 & 17.23 & 4.0 & 0.28 & 17.68 \\ 647 & 17.46 & 5.2 & 0.37 & 20.02 \\ 648 & 17.15 & 6.2 & 0.44 & 17.95 \\ 649 & 17.05 & 7.3 & 0.52 & 16.68 \\ 650 & 17.15 & 7.4 & 0.53 & 11.86 \\ 708 & 17.21 & 4.2 & 0.30 & 20.12 \\ 709 & 17.46 & 5.7 & 0.41 & 20.07 \\ 710 & 17.62 & 4.8 & 0.34 & 22.91 \\ 711 & 17.11 & 3.5 & 0.25 & 19.74 \\ 715 & 17.35 & 4.1 & 0.29 & 21.52 \\ 716 & 16.78 & 3.2 & 0.23 & 18.71 \\ 717 & 17.14 & 2.5 & 0.17 & 17.64 \\ 718 & 17.35 & 3.4 & 0.24 & 20.39 \\ 719 & 17.25 & 2.5 & 0.18 & 20.20 \\ 720 & 17.30 & 3.7 & 0.26 & 19.85 \\ 721 & 17.14 & 3.6 & 0.25 & 19.39 \\ 722 & 16.86 & 5.8 & 0.41 & 18.95 \\ \hline \end{tabular} \end{table} \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[72 86 341 257]{fig_effarea.ps} \caption[]{Effective area of the surveyed region as a function of $B$ magnitude (without correction for Galactic extinction).} \label{fig:effarea} \end{figure} \begin{table}[tb] \caption[]{The flux-limited sample of QSOs with $z > 0.07$. The entries in column $B$ marked by colons are photographic, all other magnitudes are CCD measurements corrected to the zero-points of the photographic plates. 
Column $B_0$ lists the extinction-corrected magnitudes.} \label{tab:qsos} \begin{tabular}{llllll} Name & $z$ & $B$ & Field & $B_0$ & $M_B$ \\[0.3ex] \hline HE 0952$-$1552 & 0.108 & 16.56 & 637 & 16.17 & $-$22.87 \rule{0em}{2.7ex}\\ HE 1006$-$1211 & 0.693 & 16.37 & 709 & 15.96 & $-$27.03 \\ HE 1007$-$1405 & 0.583 & 16.05 & 637 & 15.66 & $-$26.94 \\ HE 1012$-$1637 & 0.433 & 16.22 & 638 & 15.69 & $-$26.25 \\ HE 1015$-$1618 & 0.247 & 15.91:& 638 & 15.38 & $-$25.40 \\ HE 1019$-$1413 & 0.077 & 16.75 & 638 & 16.22 & $-$22.09 \\ PKS 1020$-$103 & 0.197 & 17.39 & 710 & 17.05 & $-$23.26 \\ HE 1021$-$0738 & 1.800 & 17.47 & 710 & 17.12 & $-$28.00 \\ PKS 1022$-$102 & 2.000 & 17.58 & 710 & 17.24 & $-$28.14 \\ HE 1025$-$1915 & 0.323 & 16.91 & 568 & 16.50 & $-$24.84 \\ HE 1029$-$1401 & 0.086 & 14.07 & 638 & 13.54 & $-$25.01 \\ HE 1031$-$1457 & 0.652 & 17.39 & 639 & 17.03 & $-$25.83 \\ HE 1041$-$1447 & 1.569 & 17.26:& 639 & 16.90 & $-$27.91 \\ HE 1043$-$1443 & 0.599 & 17.06 & 639 & 16.70 & $-$25.96 \\ HE 1045$-$2322 & 0.407 & 16.65 & 501 & 16.26 & $-$25.55 \\ PKS 1048$-$090 & 0.345 & 16.51 & 711 & 16.27 & $-$25.21 \\ HE 1104$-$1805 & 2.319 & 16.30 & 570 & 16.00 & $-$29.62 \\ HE 1109$-$1255 & 0.596 & 17.19 & 640 & 16.84 & $-$25.80 \\ HE 1110$-$1910 & 0.111 & 16.77 & 570 & 16.47 & $-$22.63 \\ HE 1115$-$1735 & 0.217 & 16.25 & 570 & 15.95 & $-$24.56 \\ HE 1120$-$2713 & 0.389 & 16.94 & 503 & 16.60 & $-$25.12 \\ HE 1159$-$1338 & 0.506 & 16.68 & 643 & 16.36 & $-$25.91 \\ HE 1200$-$1234 & 0.553 & 17.22 & 643 & 16.91 & $-$25.56 \\ HE 1201$-$2409 & 0.137 & 16.74 & 505 & 16.21 & $-$23.33 \\ HE 1211$-$1322 & 1.125 & 16.15 & 644 & 15.87 & $-$28.25 \\ HE 1223$-$1543 & 1.735 & 17.14 & 644 & 16.86 & $-$28.17 \\ HE 1228$-$1637 & 0.102 & 16.78 & 644 & 16.50 & $-$22.43 \\ HE 1233$-$2313 & 0.238 & 17.06 & 506 & 16.52 & $-$24.18 \\ HE 1237$-$2252 & 0.096 & 17.29 & 506 & 16.75 & $-$22.05 \\ HE 1239$-$2426 & 0.082 & 16.73 & 507 & 16.22 & $-$22.25 \\ HE 1254$-$0934 & 0.139 & 15.72 & 718 & 15.48 & $-$24.11 \\ HE 1255$-$2231 & 0.492 & 17.02 & 507 & 16.51 & $-$25.71 \\ HE 1258$-$0823 & 1.153 & 16.44 & 718 & 16.20 & $-$27.97 \\ HE 1258$-$1627 & 1.709 & 17.22 & 646 & 16.94 & $-$28.06 \\ PKS 1302$-$102 & 0.278 & 15.15 & 718 & 14.91 & $-$26.12 \\ HE 1304$-$1157 & 0.294 & 17.14 & 718 & 16.90 & $-$24.25 \\ HE 1312$-$1200 & 0.327 & 16.07 & 719 & 15.89 & $-$25.48 \\ HE 1315$-$1028 & 0.099 & 16.75 & 719 & 16.57 & $-$22.28 \\ Q 1316$-$0734 & 0.538 & 16.70 & 719 & 16.52 & $-$25.89 \\ HE 1335$-$0847 & 0.080 & 16.80 & 720 & 16.54 & $-$21.86 \\ HE 1341$-$1020 & 2.134 & 17.28 & 720 & 17.02 & $-$28.48 \\ HE 1345$-$0756 & 0.777 & 17.13 & 720 & 16.87 & $-$26.40 \\ HE 1358$-$1157 & 0.408 & 16.64:& 721 & 16.39 & $-$25.43 \\ HE 1403$-$1137 & 0.589 & 16.86 & 721 & 16.61 & $-$26.01 \\ HE 1405$-$1545 & 0.194 & 16.51 & 649 & 15.99 & $-$24.29 \\ HE 1405$-$1722 & 0.661 & 15.85 & 649 & 15.33 & $-$27.55 \\ PG 1416$-$1256 & 0.129 & 16.88 & 650 & 16.35 & $-$23.07 \\ HE 1419$-$1048 & 0.265 & 16.77 & 722 & 16.36 & $-$24.58 \\[0.3ex] \hline \end{tabular} \end{table} \section{The quasar sample} \subsection{Survey area} The investigated area was defined by 33 ESO Schmidt fields located in the region $-28\degr <\delta < -7\degr$ and $9^{\subs{h}}\,40^{\subs{m}}<\alpha < 14^{\subs{h}}\,40^{\subs{m}}$, thus all in the North Galactic hemisphere. A list of the survey fields and their properties is given in Table~\ref{tab:fields}. In each field, one direct plate from the ESO (B) atlas, and one HES spectral (objective prism) plate were available. 
The formal total area subtended by 33 ESO plates is $\sim 25\,\mathrm{deg}^2\times 33 = 825\,\mathrm{deg}^2$. Losses of usable area occurred because (1) plates overlapped in adjacent fields, especially if not precisely centred; (2) occasionally direct and spectral plates were positionally mismatched, and only the common area could be used; (3) most importantly, overlapping spectra (and to a lesser degree, direct images) rendered a certain percentage of spectra unprocessable. These losses have been quantified and incorporated into `effective areas' $\Omega_{\subs{eff}}$ for each field, given in Table~\ref{tab:fields}. Because of the field-to-field variations in limiting magnitude, the total effective survey area is a function of apparent magnitude, as plotted in Fig.~\ref{fig:effarea}. The maximum area is 611\,deg$^2$ for $B<16.64$ and decreases gradually to zero for sources fainter than $B=17.66$. While many quasar surveys were carried out at very high Galactic latitudes where foreground extinction corrections are small (although systematically non-zero), this is not the case for our present set of fields. We have used the data from Stark et al.\ (\cite{stark:92}) to obtain the neutral hydrogen column density in the centre of each field, and converted this into a $B$ band extinction using the formula $A_B = 4.2 \times N_H / 59$, where $N_H$ is given in units of $10^{20}\,\mathrm{cm}^{-2}$ (cf.\ Spitzer \cite{spitzer:78}); Table~\ref{tab:fields} lists $N_H$ and $A_B$ for each field. The average extinction is 0.36\,mag. \subsection{Sample selection} The limiting brightness of QSO candidates during the automated selection procedure was initially defined by a minimum S/N ratio of $\sim 3$ in the digital spectra. We later found, as expected (cf.\ Paper~I), that close to the selection limit the object lists became incomplete. Several experiments showed that systematic incompleteness occurred only for the faintest spectra, and that the effect disappeared for a limiting S/N $\ge$ 5. We translated this S/N cutoff into a magnitude limit on the direct plate, using the calibration of photographic magnitudes described below. For the 33 fields, the values of $B_{\subs{lim}}$ range between 17.66 and 16.64, depending largely on the quality of the spectral plates. Altogether, 115 QSOs and Seyfert~1 galaxies satisfying the HES selection criteria (primarily different measures of UV excess; cf.\ Paper~I) were detected in these fields; these were either confirmed by our own follow-up spectroscopy (Paper~II), or previously catalogued as QSO or Seyfert~1 in the compilation of V\'{e}ron-Cetty \& V\'{e}ron (\cite{veron:93}). Note that in this paper we deliberately ignore the traditional subdivision into Seyferts vs.\ `real' QSOs based on some arbitrary luminosity threshold. A lower redshift limit of 0.07 was applied to avoid possible incompleteness due to host galaxy contamination (but see the $z<0.07$ Seyfert sample below). Several of the lowest redshift objects would nevertheless be called Seyferts by many others. There remained 48 QSOs with $z>0.07$ located within the effective survey area and above the completeness limiting magnitude. These form the flux-limited sample listed in Table~\ref{tab:qsos}. A comparison with the V\'{e}ron-Cetty \& V\'{e}ron (\cite{veron:93}) catalogue revealed no additional QSOs that should have been included in the flux-limited sample.
This test for `completeness' is, however, not very strong, as few other surveys have covered this region of the sky, and $\sim 90$\,\% of the sample objects are new discoveries made by the HES. \begin{table*}[tb] \caption[]{Seyfert~1 galaxies with $z < 0.07$. The CCD magnitudes are for the small aperture (see text). Absolute magnitudes are given as observed ($M_{B,\subs{obs}}$), and with host galaxy contribution $M_{\subs{gal}}$ subtracted ($M_{\subs{nuc}}$). } \label{tab:seyferts} \begin{tabular}{llllllllll} Name & $z$ & $B$ & Field & $B_0$ & $z_{\subs{max}}$ & $V_{\subs{e}}/V_{\subs{a}}$ & $M_{B,\subs{obs}}$ & $M_{\subs{gal}}$ & $M_{\subs{nuc}}$ \\[0.3ex] \hline HE 1043$-$1346 & 0.0669 & 16.83 & 639 & 16.47 & 0.0692 & 0.902 & $-$21.55 & $-$21.06 & $-$20.44 \rule{0em}{2.7ex}\\ HE 1248$-$1357 & 0.0144 & 15.28 & 645 & 15.01 & 0.0152 & 0.855 & $-$19.67 & $-$19.39 & $-$18.06 \\ IRAS 1249$-$13 & 0.0129 & 14.91 & 645 & 14.64 & 0.0254 & 0.132 & $-$19.80 & $-$19.10 & $-$18.99 \\ R 12.01 & 0.0463 & 15.50 & 646 & 15.21 & 0.0700 & 0.289 & $-$22.00 & $-$20.90 & $-$21.52 \\ PG 1310$-$1051 & 0.0337 & 15.64 & 719 & 15.46 & 0.0544 & 0.238 & $-$21.07 & $-$19.54 & $-$20.77 \\ HE 1330$-$1013 & 0.0221 & 15.82 & 719 & 15.65 & 0.0293 & 0.428 & $-$19.96 & $-$18.96 & $-$19.41 \\ R 14.01 & 0.0408 & 14.68 & 648 & 14.24 & 0.0700 & 0.196 & $-$22.70 & $-$20.72 & $-$22.51 \\[0.3ex] \hline \end{tabular} \end{table*} \subsection{Photometry} Obtaining consistent and unbiased photometry of low-redshift quasars with detectable host galaxies is not trivial. During the survey phase, normally only photographic data, in our case from the digitised direct $B$ plates, are available. These plates usually reach much fainter magnitudes than objective-prism plates, but they also saturate earlier, especially if glass copies rather than the original plates are used (as was the case for several of our fields). The conventionally employed isophotal or large aperture magnitudes, although well suited for point sources, tend to give much larger total luminosities for QSOs with detectable hosts than desired for an investigation of the {\em nuclear\/} luminosity function. Additional complications can result from non-linear filtering effects inherent in the digitisation procedure. For the present sample we have developed a two-step procedure to obtain unbiased and survey-consistent magnitudes. The photographic plates were calibrated by CCD sequences obtained with the ESO 90\,cm telescope in 1993 and 1994. These sequences will be published in a separate paper. The sample selection was based on simulated diaphragm photometry with an aperture of the size of the seeing disk. For point sources, an accurate calibration relation with very little intrinsic scatter, $\sim 0.15$ mag, could be derived from the CCD sequences (which contained only stars). These small-aperture magnitudes also correlated well with the corresponding S/N ratios of the digital spectra, which is important for an accurate definition of the limiting magnitudes. In a second step, we obtained CCD photometry for most of the sample QSOs. Exposures in the $B$ band were available from the mandatory acquisition images at the ESO 3.6\,m and 2.2\,m telescopes (cf.\ Paper~II). After standard reduction and colour term corrections, the photometric zero points of these images were shifted to the system of the corresponding photographic plate by using surrounding unsaturated field stars as references.
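In schematic form (with notation introduced here only for illustration), this zero-point adjustment amounts to \begin{displaymath} B = B_{\subs{CCD}} + \Delta\,,\qquad \Delta = \big\langle B_{\subs{plate},\star}-B_{\subs{CCD},\star}\big\rangle\,, \end{displaymath} where the mean offset $\Delta$ is determined from the unsaturated field stars ($\star$) surrounding each target, so that the corrected magnitude $B$ is on the system of the corresponding photographic plate.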
The resulting corrected CCD magnitudes, again measured in an aperture of diameter approximately equal to that of the seeing disk, should be much more accurate than the photographic data, at least for extended objects, while magnitudes of point sources should be more or less unchanged. We tested this assertion by comparing photographic and zero-point corrected CCD photometry of the $z > 0.3$ QSOs (which should be true point sources) and found a mean difference of only 0.01\,mag. For the further analysis we used the zero-point corrected $B$ band CCD magnitudes. For three objects no CCD data were available, and for those we substituted the photographic measurements (cf.\ Table~\ref{tab:qsos}). \subsection{The additional Seyfert sample} To obtain further insight into the continuity of properties between high-luminosity QSOs and low-luminosity Seyfert nuclei, we conducted a dedicated subsurvey for Seyfert galaxies, targeting both type~1 and type~2 objects. The criteria were different from those used in the main survey, adapted to discriminate between active and inactive galaxies rather than between stars and QSOs. More details about this subsurvey will be given in a later paper (K\"ohler et al., in prep.) where we shall also present our results concerning Seyfert~2 galaxies, which we do not consider here. We found 7 Seyfert~1 galaxies with $0.01<z<0.07$, listed in Table~\ref{tab:seyferts}. The redshifts were remeasured in new spectra obtained with the ESO 1.52~m telescope, and corrected for heliocentric motion. Not all the fields from Table~\ref{tab:fields} were involved in this subsurvey, and the effective area was only 477 deg$^2$ (but see Sect.\ \ref{sec:sy1lf} for the computation of space densities). For these sources, proper photometry is at least as important as for QSOs. The photometric systems of many earlier investigations of the luminosity function of Seyferts suffer from two severe drawbacks: (i) They were often based on heterogeneous collections of published (aperture) photometry, generally inconsistent with the magnitudes used for defining the survey flux limits. (ii) The contributions of host galaxies were either not corrected for at all, or some assumptions about universal intrinsic properties of hosts and nuclei had to be made, such as colours (Sandage \cite{sandage:73}) or host luminosities (Cheng et al.\ \cite{cheng:85}). On the assumption that Seyfert~1 galaxies are the low-luminosity equivalent of quasars, the quantity of interest is the {\em nuclear\/} brightness, with the host galaxy contribution removed. Although our small-aperture magnitudes were already much less affected by galaxy contamination than large-aperture or total brightness measurements, a major contribution could still be expected especially for the lowest nuclear luminosities. We therefore estimated these corrections individually, based on $B$ and $V$ images of the seven sample objects taken with the ESO 90\,cm telescope. We first constructed empirical two-dimensional point-spread functions from nearby stars. These were then subtracted from the Seyfert images, scaled by a factor chosen such that the residual galaxy surface brightness distribution did not decrease inwards. The resulting host galaxy brightness was finally integrated over the aperture and subtracted from the total aperture flux. As can be seen in Table~\ref{tab:seyferts}, the corrections were not negligible, and -- expectedly -- largest for the lowest redshift sources. 
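To illustrate the size of these corrections, the nuclear magnitudes in Table~\ref{tab:seyferts} follow from a straightforward subtraction of the host flux, \begin{displaymath} M_{\subs{nuc}} = -2.5\,\log_{10}\!\left(10^{-0.4\,M_{B,\subs{obs}}}-10^{-0.4\,M_{\subs{gal}}}\right): \end{displaymath} for HE~1043$-$1346, for instance, $M_{B,\subs{obs}}=-21.55$ and $M_{\subs{gal}}=-21.06$ give $M_{\subs{nuc}}\simeq -20.4$, i.e.\ the host supplies roughly 60\,\% of the small-aperture flux.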
\section{The surface density of bright QSOs} An important diagnostic tool for QSO samples is the empirical relation between source fluxes and the number of sources per unit solid angle. Once the field properties and the sample photometry are established, the cumulative surface density of QSOs brighter than magnitude $B$, $N(<B)$, can be easily computed by summing over $1/\Omega_{\subs{eff}}$ for all relevant sample objects. From our QSO sample with $z<2.2$ (47 objects) we have derived the $N(<B)$ relation shown as the thick line in Fig.~\ref{fig:surfdens}. Note that the abscissa values in this diagram are extinction-corrected magnitudes. The bright end of the curve is dominated by the extremely luminous source HE~1029$-$1401, but the main part can be well approximated by a simple power law of slope $\beta = 0.67\pm 0.04$ ($\log N = \beta B + \mathrm{const}$). If only the 33 sources with $z>0.2$ are considered, thereby excluding all QSO/Seyfert borderline cases (thin line in Fig.~\ref{fig:surfdens}), the normalisation drops slightly but the slope is essentially unchanged with $\beta = 0.69\pm 0.05$. For those readers who prefer differential surface densities, we have computed the figures in half magnitude bins between $B=14.75$ and $B=17.25$ (Table~\ref{tab:surfdens}). The counts were multiplied by a factor of 2 to formally express number counts per unit magnitude interval. Comparison relations from other surveys are numerous for fainter magnitudes, but rare for $B\la 16.5$. The Palomar Bright Quasar Survey (BQS; Schmidt \& Green \cite{schmidt:83}, hereafter quoted as SG83) provides the only available optically selected QSO sample with a well-defined flux limit subtending over a similar region in the Hubble diagram. SG83 listed differential surface densities computed from the full set of 114 QSOs with $z<2.2$ found over an area of 10\,714 deg$^2$. To make their numbers comparable to ours, we recomputed the BQS surface densities by removing all $z<0.07$ entries from their sample, and converted the result into a cumulative relation. This relation is shown in Fig.~\ref{fig:surfdens} by the open squares. A large discrepancy at all magnitudes is apparent. At $B=16$, the surface density found in the HES is higher by a factor of 3.6 than in the BQS, certainly more than can be permitted by statistical fluctuations. \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[68 86 341 257]{fig_surfdens.ps} \caption[]{Cumulative surface densities of bright QSOs with $z<2.2$. Thick line: this work, for $z>0.07$, thin line: the same for $z>0.2$. Open squares show the relation from Schmidt \& Green (\cite{schmidt:83}), modified for $z>0.07$ (see text). Triangles give the LBQS (Hewett et al.\ \cite{hewett:95}) relation, valid for $z>0.2$.} \label{fig:surfdens} \end{figure} \begin{table}[tb] \caption[]{Differential surface density $A(B)$ of 33 HES QSOs with redshifts $0.2 < z < 2.2$, expressed as number of QSOs per deg$^2$ per unit magnitude interval.
$n$ gives the actually observed number per half magnitude bin ($B\pm 0.25$).} \label{tab:surfdens} \begin{tabular}{llll} $B$ & $n$ & $A(B)$ & $\sigma_A$ \\[0.3ex] \hline 15.0 & 1 & 0.003 & 0.003 \rule{0em}{2.7ex} \\ 15.5 & 4 & 0.013 & 0.007 \\ 16.0 & 5 & 0.016 & 0.008 \\ 16.5 & 12 & 0.046 & 0.014 \\ 17.0 & 11 & 0.15 & 0.08 \\[0.3ex] \hline \end{tabular} \end{table} A similar result was already reported by Goldschmidt et al.\ (\cite{goldschmidt:92}) from an analysis of the Edinburgh survey, who found 5 QSOs above the BQS flux limit in an area of 330\,deg$^2$ (non-overlapping with the present HES area) while the BQS contained only one. Their estimate of $N(B<16.5)=0.024$\,deg$^{-2}$ for $0.3<z<2.2$ is completely consistent with our corresponding value of $0.021\pm 0.05$. Furthermore, Goldschmidt et al.\ found a zero-point offset of 0.28\,mag between the BQS photographic magnitudes and theirs, in the sense that the Palomar measurements were systematically {\em too bright\/}, thus increasing rather than relaxing the discrepancy. It should be noted that neither SG83 nor Goldschmidt et al.\ corrected their magnitudes for Galactic extinction, whereas we have done so. However, our survey area is {\em predominantly\/} located in regions of rather high $A_B$, while SG83 estimated that the extinction for most BQS quasars was `not much more than 0.1\,mag'; this might even be, at least partly, compensated by the above-mentioned zero-point offset. Because of these uncertainties we have not attempted to correct the BQS results for extinction. One can estimate globally that an average $A_B$(BQS) of 0.61\,mag would be necessary to bring BQS and HES surface densities to a match -- a totally unrealistic value. At fainter magnitudes the agreement of HES number counts with those of other surveys is excellent. As an example, we have plotted the values from the LBQS (Hewett et al.\ \cite{hewett:95}) into Fig.~\ref{fig:surfdens}, with the abscissa shifted by +0.1\,mag to transform their $B_J$ photometric system approximately into $B$ magnitudes. While the counts in the brightest bins are again lower than ours, this can be understood as a result of poor statistics in the LBQS for $B\la 16.5$, possibly arising from saturation effects. Around $B\simeq 17$, HES and LBQS surface densities join smoothly, without any detectable significant offset. Similarly good agreement is reached with other surveys, cf.\ the recent compilation by Cristiani et al.\ (\cite{cristiani:95}). We are thus confident that the photometric scale used in the present investigation is essentially unbiased and adequate for luminosity function work. \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[68 86 341 257]{fig_lqlf.ps} \caption[]{Cumulative luminosity function of QSOs with $0.07 < z < 0.3$ (thick line). For comparison: Cumulative luminosity function of BQS quasars with $0.07 < z < 0.3$, constructed as described in the text (dotted line and small symbols).} \label{fig:lqlf} \end{figure} \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[68 86 341 257]{fig_sy1lf.ps} \caption[]{Cumulative luminosity function of the additional Seyfert~1 sample ($z<0.07$). Thick line: LF for nuclear magnitudes corrected for the host galaxy contributions; thin line: uncorrected LF.
For comparison: Seyfert~1 nuclear luminosity function of Cheng et al.\ (\cite{cheng:85}; dotted line and small symbols).} \label{fig:sy1lf} \end{figure} \section{Luminosity functions} \subsection{The luminosity function of low-redshift QSOs} For all objects in Table~\ref{tab:qsos}, we first estimated absolute magnitudes for the rest-frame $B$ band by computing their luminosity distances in an expanding Friedmann universe with the formula of Terrell (\cite{terrell:77}), and applying $K$ corrections taken from Cristiani \& Vio (\cite{cristiani:90}). As the low-redshift subsample we selected the 20 QSOs with $z<0.3$. Space densities were derived using the generalised $1/V_{\subs{max}}$ estimator (Felten \cite{felten:76}; Avni \& Bahcall \cite{avni:80}), thus incorporating the full information about field-dependent flux limits, including fields with zero detections (the `coherent' analysis in the terminology of Avni \& Bahcall). The resulting cumulative luminosity function is shown in Fig.~\ref{fig:lqlf}. Locations of individual objects are indicated by the error bars. We have chosen the cumulative representation $\psi(<M_B)$ as it allows one to avoid binning (particularly problematic for smaller samples) and to make the contribution of each object in the sample apparent. The errors were estimated from Poisson statistics in the sample, combined with the uncertainties in volume determination resulting from the photometric errors. Restricting the HES luminosity function to $z<0.2$ (14 objects) results in slightly larger error bars, but the space densities change only insignificantly, by less than 0.05\,dex; if overplotted in Fig.~\ref{fig:lqlf}, they could not even be distinguished from the $z<0.3$ relation. We thus feel justified in neglecting differential evolution within our redshift shell; we shall further investigate and qualify this assumption below. Up to now, the BQS provides the only comparison sample in the region $M_B < -23$ and $z<0.3$, containing 45 objects that also fulfil the condition $z>0.07$. Surprisingly, there is as yet -- to our knowledge -- no {\em direct\/} determination of the local QLF based on BQS objects alone. We have therefore computed a QLF from this sample with the same methods used for our own data, taking magnitudes, survey limits, and areas from SG83. We show the results of these computations as a cumulative distribution in Fig.~\ref{fig:lqlf}. (Since all other literature data are available only in binned form, we have also binned the BQS LF in $M_B$, to achieve a more uniform presentation.) As mentioned above, Galactic extinction is neglected for these data; at any rate, the effect should be very small. The space densities found from the HES data are much higher than those derived from the BQS, with the discrepancy increasing with luminosity. For the most luminous sources, we find that there are almost an order of magnitude more low-redshift quasars per unit volume than discovered by the BQS. Possible reasons for this discrepancy are discussed in Sect.\ \ref{sec:discussion}. \subsection{The space density of Seyfert~1 nuclei \label{sec:sy1lf}} To extend the local luminosity function towards fainter levels of nuclear activity, we have used the additional Seyfert~1 sample with $z<0.07$. The absolute magnitudes were computed in the approximation of Euclidean static geometry and neglecting $K$ corrections.
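In schematic form, the space densities in this and the preceding subsection follow from the generalised estimator (our shorthand notation; the field-by-field bookkeeping is that of Avni \& Bahcall \cite{avni:80}) \begin{displaymath} \psi(<M_B)=\sum_{i}\frac{1}{V_{\subs{a},i}}\,,\qquad V_{\subs{a},i}=\sum_{k}\frac{\Omega_{\subs{eff},k}}{41\,253\ \mathrm{deg}^2}\;V(z_{\subs{max},i,k})\,, \end{displaymath} where the sum over $i$ runs over all sample objects brighter than $M_B$, the sum over $k$ over all survey fields, $V(z)$ is the volume enclosed by redshift $z$ (Friedmann for the QSO sample, Euclidean here), and $z_{\subs{max},i,k}$ is the maximum redshift at which object $i$ would still be included in field $k$.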
To obtain reliable and unbiased estimates for the accessible volumes, the maximum redshift $z_{\subs{max,$i$}}$ at which object $i$ would still have been included in the survey had to be determined for each object. Because of the superposition of a host galaxy with constant intrinsic scale length and a point-like nucleus, the small-aperture magnitudes depended on redshift in a complicated way, and $z_{\subs{max,$i$}}$ was therefore not only a function of $B$ and $B_{\subs{lim}}$ but also of intrinsic properties. We used the available CCD images to simulate the mixing of AGN\,/\,host contributions into the aperture as a function of redshift $z$, resulting in an array of $z_{\subs{max,$k$}}$ values, one for each field $k$. For simplicity, Table~\ref{tab:seyferts} supplies `effective' values $z_{\subs{max}}$ that permit a direct conversion into space densities. A $V/V_{\subs{max}}$ test (Schmidt \cite{schmidt:68}), implemented in the generalisation proposed by Avni \& Bahcall (\cite{avni:80}), gives a mean value of $V/V_{\subs{max}} = 0.43\pm 0.11$ (individual values are also listed in Table~\ref{tab:seyferts}). Compared with the expectation value of 0.5 for a complete, non-evolving, homogeneously distributed sample, there is no evidence for major incompleteness. The resulting luminosity function is displayed in Fig.~\ref{fig:sy1lf}. It can be seen that a host galaxy correction is important particularly for the intrinsically faintest objects in the sample: Without correction, the luminosity function shows a steep upturn at $M_B \simeq -20$; for these objects, even the small-aperture magnitude is dominated by the host rather than by the AGN. Although the sample is not large, the derived space densities are fully consistent with previous estimates (Cheng et al.\ \cite{cheng:85}; Huchra \& Burg \cite{huchra:92}). However, none of these previous investigations has paid as much attention to an appropriate host galaxy correction, either for the source luminosities or for the survey volume, as we have. In particular, the analysis of the CfA sample by Huchra \& Burg was based on uncorrected Zwicky magnitudes (essentially measuring the {\em total\/} galaxy brightness), making a straightforward comparison impossible. More similar to our approach was the work of Cheng et al.\ (\cite{cheng:85}) with Markarian galaxies, but their sample, although much larger than ours, had to be manipulated with substantial incompleteness corrections, in addition to the aforementioned problems of a heterogeneous photometric data base. \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[68 86 341 257]{fig_syqlf.ps} \caption[]{Combined local luminosity function of quasars and Seyfert~1 nuclei with $z<0.3$, based on magnitudes corrected for the host galaxies. } \label{fig:syqlf} \end{figure} \begin{table}[tb] \caption[]{Binned differential local luminosity function, computed from 27 QSOs and Seyferts with $z < 0.3$, in Mpc$^{-3}$ per unit absolute magnitude interval centred on the value given by $M_B$.
Column $n$ gives the number of objects per bin.} \label{tab:lqlf} \begin{tabular}{llll} $M_B$ & $n$ & $\phi(M_B)$ & $\log\phi$ \\[0.3ex] \hline $-$18.5 & 2 & $(5.5 \pm 4.7 ) \times 10^{-5}$ & $-4.3$ \rule{0em}{2.7ex} \\ $-$19.5 & 1 & $(4.1 \pm 4.1 ) \times 10^{-6}$ & $-5.4$ \\ $-$20.5 & 2 & $(9.0 \pm 6.7 ) \times 10^{-7}$ & $-6.0$ \\ $-$21.5 & 2 & $(1.1 \pm 0.9 ) \times 10^{-7}$ & $-7.0$ \\ $-$22.5 & 8 & $(4.5 \pm 1.8 ) \times 10^{-7}$ & $-6.4$ \\ $-$23.5 & 3 & $(6.2 \pm 3.6 ) \times 10^{-8}$ & $-7.2$ \\ $-$24.5 & 6 & $(3.6 \pm 1.5 ) \times 10^{-8}$ & $-7.5$ \\ $-$25.5 & 2 & $(1.0 \pm 0.7 ) \times 10^{-8}$ & $-8.0$ \\ $-$26.5 & 1 & $(5.0 \pm 5.0 ) \times 10^{-9}$ & $-8.3$ \\[0.3ex] \hline \end{tabular} \end{table} \subsection{The combined local luminosity function} The local universe is the only domain where it is possible to construct the entire QLF, including the low-luminosity tail, directly from a single survey. The local luminosity function of QSOs and Seyfert~1 nuclei, obtained by combining the two independently derived luminosity functions presented above, is valid between absolute magnitudes $-18$ and $-26$, thus spanning three decades in luminosity. When the cumulative relations of Figs.\ \ref{fig:lqlf} and \ref{fig:sy1lf} are overlaid, the match in the common luminosity region is remarkably good (cf.\ Fig.~\ref{fig:syqlf}). To arrive at a consistent system of {\em nuclear\/} luminosities, the magnitudes in the QSO sample were corrected by subtracting a template host galaxy of $M_B = -21$. Since the magnitudes had already been measured through a small aperture, this host subtraction was a minor correction for all objects in the QSO sample, and the results are not sensitive to the assumed host luminosity. The combined local QLF can be fitted well by a simple linear relation between $M_B$ and $\log \psi (M)$, corresponding to a single power-law for the {\em differential\/} QLF ($\phi (L) dL \propto L^\alpha$). A slope parameter $\alpha = -2.17\pm 0.06$ provides a statistically acceptable fit over the full range of 8 magnitudes. Fitting only the QSO sample ($z>0.07$) does not alter the slope significantly ($\alpha = -2.10\pm 0.08$); fitting only the $z<0.07$ Seyferts yields $\alpha = -2.38\pm 0.23$, again consistent with a constant slope of $\alpha \simeq -2.2$. The slope of the BQS LF is much steeper: Apart from flattening at both high and low luminosities, the data demand $\alpha \la -3$ over most of the range. From the combined dataset, we have also produced a binned differential luminosity function $\phi (M)$, tabulated in steps of one in absolute magnitude (Table~\ref{tab:lqlf}). As was to be expected, this representation is quite sensitive to the actual choice of binning intervals, most obvious in the succession of a `high' and a `low' bin at $M_B = -22.5$ and $-21.5$, respectively. There is, however, full consistency of the results obtained; in particular, the QLF slope parameter $\alpha \simeq -2.2$ is reproduced within the expected uncertainties. \begin{figure}[tb] \epsfxsize=\hsize \epsfclipon \epsfbox[68 86 341 257]{fig_evolqlf.ps} \caption[]{The bright end of the quasar luminosity function up to $z=0.5$, in comparison with the local QLF.
Shown are the cumulative relations in the redshift shells $z<0.2$ (thin line), and $0.2<z<0.5$ (thick line).} \label{fig:evolqlf} \end{figure} \subsection{The quasar luminosity function at higher redshifts} The sample of Table~\ref{tab:qsos} contains eight QSOs with redshifts $0.3<z<0.5$ that we used to further constrain the potential error introduced by neglecting differential evolution. When the full $z<0.5$ sample is binned into the two redshift regimes $0.07<z<0.2$ and $z>0.2$, there are 14 objects in each shell. The luminosity functions are compared in Fig.\ \ref{fig:evolqlf}: For $-25\la M_B \la -24$, the local and the adjacent $0.2<z<0.5$ shell have essentially identical space densities, while for $M_B<-25$, the QLF in $z>0.2$ seems to be merely a continuation of the local QLF to higher luminosities, {\em with the same slope and normalisation\/}. Dividing the sample at $z=0.3$ instead of 0.2 does not affect the results. Outside the local universe, a single survey can yield no more than a small segment of the QLF, and only the combination of many different surveys can provide a coverage of source luminosities similar to that of the local QLF. The HES has the potential to provide new samples covering the brightest parts of the known QSO population, and it will be a promising task for the future to combine our results with those of fainter surveys. Such an analysis is, however, beyond the scope of the present paper. We only remark that even at redshifts higher than $z=0.5$, the QLF pieces sampled by the HES do not show significant deviations from an extrapolation of the local relation. A more quantitative discussion of this issue is deferred to a later paper. \section{Discussion \label{sec:discussion}} \subsection{Incompleteness of the BQS} Our results imply that the Palomar Bright Quasar Survey is not only incomplete, but also heavily biased. The general incompleteness may be estimated from the surface densities to be around a factor of 2--3. This effect has been noted by others before (Wampler \& Ponz \cite{wampler:85}; Goldschmidt et al.\ \cite{goldschmidt:92}), and we have at present no easy explanation for it. Some mild adjustment of the BQS data to higher values of space densities and luminosities may be required because of neglected Galactic extinction, but this cannot account for the observed discrepancy. One possibility is that the rather large photometric errors in the BQS, reflected in the $(U-B)$ colours, scattered many QSOs out of the UV excess domain. There seems to be an additional deficit of luminous low-redshift QSOs, by another factor of $\ga 2$ for $M_B<-24$. This discrepancy is already apparent from simply counting the number of $z<0.3$ QSOs more luminous than $M_B = -24.5$: There are 5 objects in the present HES sample, while the entire BQS contains only 15 such sources, in a more than 15 times larger area. As this is the first detection of such an effect, one has to carefully consider systematic errors. Note, however, that the HES is the first optical QSO survey since completion of the BQS that is sensitive to a similar range of redshifts and luminosities, and it is therefore not altogether inconceivable that substantial selection biases have remained unnoticed over a long time.
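Expressed in numbers: at the BQS surface density, the effective HES area would be expected to contain only $15\times(611\,\mathrm{deg}^2/10\,714\,\mathrm{deg}^2)\simeq 0.9$ QSOs with $z<0.3$ and $M_B<-24.5$, whereas 5 are found. The resulting excess factor of $\simeq 6$ is consistent with the product of the overall factor 2--3 incompleteness estimated above and the additional deficit of $\ga 2$ at high luminosities.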
The good agreement of the HES $z>0.2$ or $z>0.3$ surface densities with other major surveys, e.g., the LBQS (Hewett et al.\ \cite{hewett:95}), the Edinburgh survey (Goldschmidt et al.\ \cite{goldschmidt:92}), or the HBQS (Cristiani et al.\ \cite{cristiani:95}), makes it implausible that the HES photometric zero-point could be seriously in error. We also tested our method of zero-point corrected CCD photometry. All relevant quantities were recomputed based on the standard approach of using only photographic magnitudes measured over large apertures, without any systematic change of results, except that the luminosities of low-redshift QSOs with bright host galaxies were often {\em over\/}estimated in the photographic data, creating a bias towards even higher QLF values. The CCD approach thus seems justified and appropriate. The reconstruction of the $0.07<z<0.3$ luminosity function for the BQS was based on the same procedures as applied to our own data. We neglected differential evolution within the $z<0.3$ shell, but the {\em observed\/} discrepancy is independent of any assumed evolution law, and therefore most likely caused by real incompleteness in the BQS sample. The exclusion of objects of non-stellar appearance in the BQS is an intuitively appealing candidate mechanism for this additional incompleteness. It is obvious at first glance (and has in fact never been disputed) that the BQS sample is systematically lacking low-luminosity Seyferts; it has usually been assumed that this selection effect ceased to be valid for $M_B \la -23$. Our results suggest that this is not the case. However, it is then somewhat surprising that the incompleteness should be smallest, almost disappearing, in the luminosity range around $M_B \simeq -23$, as seems to follow from Fig.\ \ref{fig:lqlf}. In fact, this would then be the {\em only\/} region of the entire Hubble diagram where the BQS counts were not significantly below those of the HES. Increased incompleteness for the most luminous QSOs seems a possible, but rather unlikely explanation. If, instead, the BQS magnitudes particularly for the slightly resolved $M_B\simeq -23$ objects were {\em systematically too bright\/}, overcompleteness would occur, scattering objects into the sample from below the flux limit. We know from our own experience with photometry of resolved objects on digitised photographic plates that such effects exist, especially if a large digitisation aperture is used (this is why we decided to incorporate the CCD measurements). The BQS was based on PDS scans with an aperture of $4\farcs 4$ (Green \& Morrill \cite{green:78}), and magnitudes were basically isophotal in photographic density. We {\em suspect\/} that these technical details provide a clue to the above contradiction, and that incompleteness due to the rejection of extended QSO hosts could be partly counteracted by photometric bias and overcompleteness for only marginally resolved objects. Further analysis of this point is certainly required. If our hypothesis is correct, two important conjectures can be made: (1) The slope of the QLF for $M_B<-23$ is much flatter than previous investigators estimated, weakening the evidence for a `break' in the Seyfert/QSO transition regime (e.g., Marshall \cite{marshall:87}) -- the local QLF presented here showing essentially {\em no\/} break at all. Note that also in the Seyfert LF of Cheng et al.\ (\cite{cheng:85}), the presence of such a break is not really demanded by the data.
(2) The BQS sample, up to now the prime source for luminous radio-quiet low-redshift QSOs, might suffer from severe selection biases with respect to the morphological properties of the QSO host galaxies. \subsection{Constraints on quasar evolution} The local luminosity function derived in this paper is independent of samples collected at higher redshifts, and therefore well suited to test and constrain possible evolution laws. Existing estimates of the QLF at redshift $z\simeq 2$ (e.g., Boyle et al.\ \cite{boyle:91}) show a marked break, or change of slope, at intermediate luminosities, that shifts towards lower values with decreasing $z$. We have argued above that the local equivalent of this break is probably an artefact. The flat slope of the $z\approx 0$ QLF derived in this paper, and the absence of clear features, demand, on the assumption that the high $z$ QLF determinations are correct, that both shape and slope of the QLF change strongly with redshift. This clearly is in contradiction to the `standard picture' of pure luminosity evolution (PLE), where a double power-law luminosity function with invariant logarithmic shape is merely shifted along the luminosity axis. However, it should be noted that virtually {\em all\/} evolution modelling since 1983 had to rely on the BQS data of SG83, to cover an otherwise inaccessible part of the Hubble diagram. Hence, in these regions, the models merely reproduce the incompleteness and possible biases of the BQS contribution. The problem that QSO surveys were notoriously incomplete at low redshifts was of course realised by most researchers, and it has become customary to apply a lower redshift cutoff of $z_{\subs{min}}\simeq 0.3$ to all samples. The construction of a $z=0$ QLF from such models therefore always has the character of an {\em extrapolation\/} outside the validated range, and should be treated with appropriate caution. Even so, a {\em quantitative\/} comparison between evolutionary models and actual measurements of the local Seyfert~1/QSO luminosity function is a powerful test which deserves more attention than it has received in the past, compared to the careful analyses of higher redshift data. In the construction of the $z<0.3$ local QLF we have not corrected individual luminosities for statistical evolution. While it is true that such a neglect could artificially flatten the luminosity function, evolution must be fairly strong for the effect to be relevant. We now show that the no-evolution assumption was, in fact, the best approximation one could possibly make, over the range of luminosities covered by the HES. In the case of evolution, the LFs of adjacent, non-overlapping redshift shells should be different. For the simple case of PLE with the rate suggested, e.g., by Boyle et al.\ (\cite{boyle:91}), one would expect an abscissa offset of $\Delta M = -0.8$\,mag between $z<0.2$ and $0.2<z<0.5$. The necessity to invoke such an offset is not apparent from Fig.\ \ref{fig:evolqlf}. While it might be consistent with our data to allow for a somewhat steeper intrinsic slope of the QLF, combined with mild evolution, there is certainly no {\em evidence\/} for evolution in the data. These conclusions are supported by results reported recently from other surveys. Hewett et al.\ (\cite{hewett:93}), with a preliminary analysis of the LBQS, showed that the QLF flattens considerably at $z<1$; a similar conclusion was reached by Miller et al.\ (\cite{miller:93}) using the Edinburgh survey.
Although both surveys have the usual lower redshift limit excluding the local population ($z>0.3$ for the Edinburgh survey, $z>0.2$ for the LBQS), a prediction for the slope of the local QLF can be made by extrapolating the trend seen by Miller et al.; the result is consistent with our measured value of $\alpha = -2.2$. A flatter QLF implies a higher fraction of luminous quasars among the total population. Our results therefore indicate, in agreement with Hewett et al.\ and Miller et al., that the evolution of the most luminous quasars must proceed considerably more slowly than suggested by the notion of pure luminosity evolution. \section{Conclusions and outlook} We have analysed a new, flux-limited and well-defined sample of QSOs and Seyfert~1 galaxies. Host-galaxy dependent selection and photometric biases, if not absent, are greatly reduced in comparison to other optically selected samples. We find that the derived space density of luminous QSOs in the local universe is much higher than previous surveys indicated, and that consequently these objects show a much slower cosmological evolution. With the present sample of 55 QSOs and Seyferts in 33 fields, only a small fraction of the HES area is covered. We have already acquired, and partly processed, much additional plate material, enlarging the survey by a significant factor. For example, we now have several fields in common with the BQS, and it will soon be possible to pursue the question of incompleteness and biases in the BQS by a direct comparison of objects and photometry. To understand and model the process of quasar evolution requires strong constraints from observations. It now seems that we are farther away from a coherent empirical picture of the QSO population and its evolution than thought a few years ago. Quasars of different luminosities evolve differently, and quasars do not necessarily evolve at the same rate at different cosmological epochs. There are presumably more parameters needed to describe quasar evolution properly than in the simple model of `pure luminosity evolution'. The ultimate aim of QSO survey work is the deduction of intrinsic physical properties from statistical samples. Unfortunately the physical processes in active galaxy nuclei are poorly understood, and the relations between statistical and physical properties are therefore far from unique. The best study cases for nuclear activity in galaxies are nearby Seyferts; as shown above, the luminosity function of Seyfert~1 nuclei can be smoothly continued into the classical quasar regime, without a significant change of slope or break. While this by no means proves that Seyfert~1 nuclei are simply scaled-down versions of quasars, it suggests a continuity of properties rather than distinct classes. With the completion of the HES, a large and well-defined sample of highly luminous low-redshift QSOs will become available, to study the relationship between QSOs, their environments, and their evolution in detail. \begin{acknowledgements} T.K. acknowledges support from the Deutsche Forschungsgemeinschaft under grant Re 353/33. Substantial observing time at ESO was allotted to the project as ESO key programme 02-009-45K. \end{acknowledgements}
\section{Background} Understanding the power of quantum computation relative to classical computation is a fundamental question. When we look at which problems can be solved in quantum but not classical polynomial time, we get a wide range: quantum simulation, factoring, approximating the Jones polynomial, Pell's equation, estimating Gauss sums, period-finding, group order-finding and even detecting some mildly non-abelian symmetries~\cite{SICOMP::Shor1997,Hallgren2007,Watrous01,FriedlIMSS03,vDHI03}. However, when we look at what algorithmic tools exist on a quantum computer, the situation is not nearly as diverse. Apart from the BQP-complete problems~\cite{AJL06}, the main tool for solving most of these problems is a quantum Fourier transform (QFT) over some group. Moreover, the successes have been for cases where the group is abelian or close to abelian in some way. For sufficiently nonabelian groups, there has been no indication that the transforms are useful even though they can be computed exponentially faster than classically. For example, while an efficient QFT for the symmetric group has been intensively studied for over a decade because of its connection to graph isomorphism, it is still unknown whether it can be used to achieve any kind of speedup over classical computation~\cite{STOC::Beals1997}. The first separation between quantum computation and randomized computation came from the Recursive Fourier Sampling problem (RFS)~\cite{SICOMP::BernsteinV1997}. The quantum algorithm for this problem had two components, namely using a Fourier transform, and using recursion. Shortly after this, Simon's algorithm and then Shor's algorithm for factoring were discovered, and the techniques from these algorithms have been the focus of most quantum algorithmic research since~\cite{SICOMP::Simon1997:1474,SICOMP::Shor1997}. These developed into the hidden subgroup framework. The hidden subgroup problem is an oracle problem, but solving certain cases of it would result in solutions for factoring, graph isomorphism, and certain shortest lattice vector problems. Indeed, it was hoped that an algorithm for graph isomorphism could be found, but recent evidence suggests that this approach may not lead to one~\cite{HMRRS06}. As a way to understand new techniques, this oracle problem has been very important, and it is also one of the very few where super-polynomial speedups have been found~\cite{IMSantha01,BaconCvD05}. In comparison to factoring, the RFS problem has received much less attention. The problem is defined as a property of a tree with labeled nodes, and it was proven to be solvable with a quantum algorithm super-polynomially faster than the best randomized algorithm. This tree was defined in terms of the Fourier coefficients over ${\mathbb{Z}}_2^n$. The definition was rather technical, and it seemed that the simplicity of the Fourier coefficients for this group was necessary for the construction to work. Even the variants introduced by Aaronson~\cite{Aaronson2003} were still based on the same QFT over ${\mathbb{Z}}_2^n$, which seemed to indicate that this particular abelian QFT was a key part of the quantum advantage for RFS. The main result of this paper is to show that the RFS structure can be generalized far more broadly. In particular, we show that an RFS-style super-polynomial speedup is achievable using almost any quantum circuit, and more specifically, it is also true for any Fourier transform (even nonabelian), not just over ${\mathbb{Z}}_2^n$.
This illustrates a more general power that quantum computation has over classical computation when using recursion. The condition for a quantum circuit to be useful for an RFS-style speedup is that the circuit be {\em dispersing}, a concept we introduce to mean that it takes many different inputs to fairly even superpositions over most of the computational basis. Our algorithm should be contrasted with the original RFS algorithm. One of the main differences between classical and quantum computing is the so-called garbage that results from computation. It is important in certain cases, and crucial in recursion-based quantum algorithms because of quantum superpositions, that intermediate computations are uncomputed and that errors do not compound. The original RFS paper~\cite{SICOMP::BernsteinV1997} avoided the error issue by using an oracle problem where every quantum state created from it had exactly the property necessary, with no errors. Their algorithm could have tolerated polynomially small errors, but in this paper we relax this significantly. We show that even if we can only create states with constant accuracy at each level of recursion, we can still carry through a recursive algorithm which introduces new constant-sized errors a polynomial number of times. The main technical part of our paper shows that most quantum circuits can be used to construct separations relative to appropriate oracles. To understand the difficulty here, consider two problems that occur when one tries to define an oracle whose output is related to the amplitudes that result from running a circuit. First, it is not clear how to implement such an oracle since different amplitudes have different magnitudes, and only phases can be changed easily. Second, we need an oracle where we can prove that a classical algorithm requires many queries to solve the problem. If the oracle outputs many bits, this can be difficult or impossible to achieve. For example, the matrix entries of nonabelian group representations can quickly reveal which representation is being used. To overcome these two problems we show that there are binary-valued functions that can approximate the complex-valued output of quantum circuits in a certain way. One by-product of our algorithm is related to the Fourier transform of the symmetric group. Despite some initial promise for solving graph isomorphism, the symmetric group QFT has still not found any application in quantum algorithms. One instance of our result is the first example of a problem (albeit a rather artificial one) where the QFT over the symmetric group is used to achieve a super-polynomial speedup. \section{Statement of results} Our main contributions are to generalize the RFS algorithm of \cite{SICOMP::BernsteinV1997} in two stages. First, \cite{SICOMP::BernsteinV1997} described the problem of Fourier sampling over ${\mathbb{Z}}_2^n$, which has an $O(1)$ vs. $\Omega(n)$ separation between quantum and randomized complexities. We show here that the QFT over ${\mathbb{Z}}_2^n$ can be replaced with a QFT over any group, or for that matter with almost any quantum circuit. Next, \cite{SICOMP::BernsteinV1997} turned Fourier sampling into recursive Fourier sampling with a recursive technique. We will generalize this construction to cope with error and to amplify a larger class of quantum speedups. As a result, we can turn any of the linear speedups we have found into superpolynomial speedups. Let us now explain each of these steps in more detail.
We replace the $O(1)$ vs $\Omega(n)$ separation based on Fourier sampling with a similar separation based on a more general problem called {\em oracle identification}. In the oracle identification problem, we are given access to an oracle ${\mathcal{O}}_a:X\to \{0,1\}$ where $a\in A$, for some sets $A$ and $X$ with $\log |A|, \log |X|=\Theta(n)$. Our goal is to determine the identity of $a$. Further, assume that we have access to a testing oracle $T_a:A\to\{0,1\}$ defined by $T_a(a')=\delta_{a,a'}$, that will let us confirm that we have the right answer.\footnote{This will later allow us to turn two-sided into one-sided error; unfortunately it also means that a non-deterministic Turing machine can find $a$ with a single query to $T_a$. Thus, while the oracle defined in BV is a candidate for placing BQP outside PH, ours will not be able to place BQP outside of NP. This limitation appears not to be fundamental, but we will leave the problem of circumventing it to future work.} A quantum algorithm for identifying $a$ can be described as follows: first prepare a state $\ket{\varphi_a}$ using $q$ queries to ${\mathcal{O}}_a$, then perform a POVM $\{\Pi_{a'}\}_{a'\in A}$ (with $\sum_{a'} \Pi_{a'} \leq I$ to allow for the possibility of a ``failure'' outcome), using no further queries to ${\mathcal{O}}_a$. The success probability is $\bra{\varphi_a}\Pi_a \ket{\varphi_a}$. For our purposes, it will suffice to place a $\Omega(1)$ lower bound on this probability: say that for each $a$, $\bra{\varphi_a}\Pi_a\ket{\varphi_a} \geq \delta$ for some constant $\delta>0$. On the other hand, any classical algorithm trivially requires $\geq\log (|A|\delta)=\Omega(n)$ oracle calls to identify $a$ with success probability $\geq \delta$. This is because each query returns only one bit of information. In \thmref{lin-sep} we will describe how a large class of quantum circuits can achieve this $O(1)$ vs. $\Omega(n)$ separation, and in Theorems \ref{thm:QFT-dispersing} and \ref{thm:random-dispersing} we will show specifically that QFTs and most random circuits fall within this class. Now we describe the amplification step. This is a variant of the \cite{SICOMP::BernsteinV1997} procedure in which making an oracle call in the original problem requires solving a sub-problem from the same family as the original problem. Iterating this $\ell$ times turns query complexity $q$ into $q^{\Theta(\ell)}$, so choosing $\ell=\Theta(\log n)$ will yield the desired polynomial vs. super-polynomial separation. We will generalize this construction by defining an amplified version of oracle identification called {\em recursive oracle identification}. This is described in the next section, where we will see how it gives rise to superpolynomial speedups from a broad class of circuits. We conclude that quantum speedups---even superpolynomial speedups---are much more common than the conventional wisdom would suggest. Moreover, as useful as the QFT has been to quantum algorithms, it is far from the only source of quantum algorithmic advantage. \section{Recursive amplification} \label{sec:recur} In this section we show that once we are given a constant versus linear separation (for quantum versus classical oracle identification), we are able to amplify this to a super-polynomial speedup. We require a much looser definition than in \cite{SICOMP::BernsteinV1997} because the constant case can have a large error. \begin{definition}\label{def:single-level} For sets $A,X$, let $f: A\times X\to \{0,1\}$ be a function. 
To set the scale of the problem, let $|X|=2^n$ and $|A|=2^{\Omega(n)}$. Define the set of oracles $\{{\mathcal{O}}_a: a\in A\}$ by ${\mathcal{O}}_a(x)=f(a,x)$, and the states $\ket{\varphi_a}=\frac{1}{\sqrt{|X|}}\sum_{x\in X}(-1)^{f(a,x)}\ket{x}$. The single-level oracle identification problem is defined to be the task of determining $a$ given access to ${\mathcal{O}}_a$. Let $U$ be a family of quantum circuits, implicitly depending on $n$. We say that $U$ solves the single-level oracle identification problem if $$|\bra{a}U\ket{\varphi_a}|^2 \geq \Omega(1)$$ for all sufficiently large $n$ and all $a\in A$. In this case, we define the POVM $\{\Pi_a\}_{a\in A}$ by $\Pi_a = U^\dag\proj{a}U$. \end{definition} When this occurs, it means that $a$ can be identified from ${\mathcal{O}}_a$ with $\Omega(1)$ success probability and using a single query. In the next section, we will show how a broad class of unitaries $U$ (the so-called {\em dispersing} unitaries) allows us to construct $f$ for which $U$ solves the single-level oracle identification problem. There are natural generalizations to oracle identification problems requiring many queries, but we will not explore them here. \begin{theorem}\label{thm:superpoly-sep} Suppose we are given a single-level oracle problem with function $f$ and unitary $U$ running in time $\mathop{\mathrm{poly}}(n)$. Then we can construct a modified oracle problem from $f$ which can be solved by a quantum computer in polynomial time (and queries), but requires $n^{\Omega(\log n)}$ queries for any classical algorithm that succeeds with probability $\frac{1}{2}+n^{-o(\log n)}$. \end{theorem} We start by defining the modified version of the problem (\defref{recursive} below), and describing a quantum algorithm to solve it. Then in \thmref{recursive-correct} we will show that the quantum algorithm solves the problem correctly in polynomial time, and in \thmref{amplification}, we will show that randomized classical algorithms require superpolynomial time to have a nonnegligible probability of success. The recursive version of the problem simply requires that another instance of the problem be solved in order to access a value at a child. \fig{recursive} illustrates the structure of the problem. \vspace{-0.2cm} \begin{figure} \begin{center} \includegraphics[width=.4\textwidth]{fig-recur} \caption{A depth $k$ node at location $x=(x_1,\ldots,x_k)$ is labeled by its secret $s_x$ and a bit $b_x$. The secret $s_x$ can be computed from the bits $b_y$ of its children, and once it is known, the bit $b_x$ is computed from the oracle ${\mathcal{O}}(x,s_x)=b_x$. If $x$ is a leaf then it has no secret and we simply have $b_x={\mathcal{O}}(x)$. The goal is to compute the secret bit $b_\emptyset$ at the root. \label{fig:recursive}} \end{center} \end{figure} \vspace{-1cm} Using the notation from \fig{recursive}, the relation between a secret $s_x$ and the bits $b_y$ of its children $y=x\times x'$ is given by $b_y= f(s_x,x')$, where $f$ is the function from the single-level oracle identification problem. Thus by computing enough of the bits $b_{y_1}, b_{y_2},\ldots$ corresponding to children $y_1,y_2,\ldots$, we can solve the single-level oracle identification problem to find $s_x$. Of course computing the $b_y$ will require finding the secret strings $s_y$, which requires finding the bits of {\em their} children and so on, until we reach the bottom layer where queries return answer bits without the need to first produce secret strings.
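Before stating the recursive version, it may help to see a minimal single-level instance in code. The following sketch is our illustration, not part of the paper's formal development: it assumes the inner-product function $f(a,x)=a\cdot x \bmod 2$ and $U=H^{\otimes n}$ (the Bernstein--Vazirani setting, where $U$ happens to identify $a$ with probability $1$), and all variable names are ours.
\begin{verbatim}
import numpy as np
from itertools import product

n = 4
X = list(product([0, 1], repeat=n))   # computational basis strings

def f(a, x):
    # illustrative choice: f(a, x) = a . x mod 2
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

# U = H^{otimes n}, built as a Kronecker power of the 2x2 Hadamard
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
U = H
for _ in range(n - 1):
    U = np.kron(U, H)

a = (1, 0, 1, 1)   # the hidden identity of the oracle O_a
# |phi_a> = |X|^{-1/2} sum_x (-1)^{f(a,x)} |x>, one (phase) oracle query
phi = np.array([(-1.0) ** f(a, x) for x in X]) / np.sqrt(len(X))

# POVM {Pi_a'} with Pi_a' = U^dag |a'><a'| U: apply U, then measure
probs = np.abs(U @ phi) ** 2
print(X[int(np.argmax(probs))], probs.max())   # recovers a with prob. ~1
\end{verbatim}
Here $|\bra{a}U\ket{\varphi_a}|^2=1$, comfortably meeting the $\Omega(1)$ requirement, while a classical algorithm still needs $\Omega(n)$ queries since each query returns a single bit.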
\begin{definition}\label{def:recursive} A level-$\ell$ recursive oracle identification problem is specified by $X, A$ and $f$ from a single-level oracle identification problem (\defref{single-level}), any function $s:\emptyset \cup X \cup X\times X \cup \ldots \cup X^{\ell-1} \to A$, and any final answer $b_\emptyset\in\{0,1\}$. Given these ingredients, an oracle ${\mathcal{O}}$ is defined which takes inputs in $$\bigcup_{k=0}^{\ell-1} \left[X^k \times A\right] \cup X^\ell$$ and returns outputs in $\{0,1,\textsc{FAIL}\}$. On inputs $x_1,\ldots,x_k\in X, a\in A$ with $1\leq k<\ell$, ${\mathcal{O}}$ returns \ba {\mathcal{O}}(x_1,\ldots,x_k, a) &= f(s(x_1,\ldots,x_{k-1}), x_k) && \text{when } a = s(x_1,\ldots,x_k)\\ {\mathcal{O}}(x_1,\ldots,x_k, a) &= \textsc{FAIL} && \text{when } a\neq s(x_1,\ldots,x_k). \ea If $k=0$, then ${\mathcal{O}}(s(\emptyset))=b_\emptyset$ and ${\mathcal{O}}(a)=\textsc{FAIL}$ if $a\neq s(\emptyset)$. When $k=\ell$, $${\mathcal{O}}(x_1,\ldots,x_{\ell}) = f(s(x_1,\ldots,x_{\ell-1}), x_\ell).$$ The recursive oracle identification problem is to determine $b_\emptyset$ given access to ${\mathcal{O}}$. \end{definition} Note that the function $s$ gives the values $s_x$ in \fig{recursive}. These values are actually defined in the oracle and can be chosen arbitrarily at each node. Note also that the oracle defined here effectively includes a testing oracle, which can determine whether $a = s(x_1,\ldots,x_k)$ for any $a\in A, x_1,\ldots,x_k\in X$ with one query. (When $x=(x_1,\ldots,x_k)$, we use $s(x_1,\ldots,x_k)$ and $s_x$ interchangeably.) A significant difference between our construction and that of \cite{SICOMP::BernsteinV1997} is that the values of $s$ at different nodes can be set completely independently in our construction, whereas \cite{SICOMP::BernsteinV1997} had a complicated consistency requirement. {\bf The algorithm.} Now we turn to a quantum algorithm for the recursive oracle identification problem. If a quantum computer can identify $a$ with one-sided\footnote{One-sided error is a reasonable demand given our access to a testing oracle. Most of these results go through with two-sided error as well, but for notational simplicity, we will not explore them here.} error $1-\delta$ using time $T$ and $q$ queries in the non-recursive problem, then we will show that the recursive version can be solved in time $O((q\frac{\log 1/\delta}{\delta})^\ell T)$. For concreteness, suppose that $\ket{\varphi_a}=\frac{1}{\sqrt{|X|}}\sum_{x\in X}(-1)^{f(a,x)}\ket{x}$, so that $q=1$; the case when $q>1$ is an easy, but tedious, generalization. Suppose that our identifying quantum circuit is $U$, so $a$ can be identified by applying the POVM $\{\Pi_{a'}\}_{a'\in A}$ with $\Pi_{a'}=U^\dag\proj{a'}U$ to the state $\ket{\varphi_a}$. The intuitive idea behind our algorithm is as follows: At each level, we find $s(x_1,\ldots, x_k)$ by recursively computing $s(x_1,\ldots,x_{k+1})$ for each $x_{k+1}$ (in superposition) and using this information to create many copies of $\ket{\varphi_{s(x_1,\ldots,x_k)}}$, from which we can extract our answer. However, we need to account for the errors carefully so that they do not blow up as we iterate the recursion. In what follows, we will adopt the convention that Latin letters in kets (e.g. $\ket{a}, \ket{x}, \ldots$) denote computational basis states, while Greek letters (e.g. $\ket{\zeta},\ket{\varphi},\ldots$) are general states that are possibly superpositions over many computational basis states.
Also, we let the subscript $_{(k)}$ indicate a dependence on $(x_1,\ldots,x_k)$. The recursive oracle identification algorithm is as follows: \begin{tabbing} ~~~~~ \= ~~~~ \= ~~~~ \= ~~~~ \= ~~~~ \= ~~~~ \= \kill {\bf Algorithm: \textsc{FIND}}\\ {\bf Input:} $\ket{x_1,\ldots,x_k}\ket{0}$ for $k<\ell$\\ {\bf Output:} $a_{(k)}=s(x_1,\ldots,x_k)$ up to error $\varepsilon = (\delta/8)^2$, where $\delta$ is the constant from the oracle. This means\\ $\ket{x_1,\ldots,x_k}\left[\sqrt{1-\varepsilon_{(k)}}\ket{0}\ket{a_{(k)}}\ket{\zeta_{(k)}} + \sqrt{\varepsilon_{(k)}}\ket{1}\ket{\zeta_{(k)}'}\right]$, where $\varepsilon_{(k)}\leq \varepsilon$ and $\ket{\zeta_{(k)}}$ and $\ket{\zeta_{(k)}'}$ are arbitrary.\\ (We can assume this form without loss of generality by absorbing phases into $\ket{\zeta_{(k)}}$ and $\ket{\zeta_{(k)}'}$.) \\ {\bf 1.}\> Create the superposition $\frac{1}{\sqrt{|X|}}\sum_{x_{k+1}\in X}\ket{x_{k+1}}$.\\ {\bf 2.}\> If $k+1<\ell$ then let $a_{(k+1)}=\textsc{FIND}(x_1,\ldots,x_{k+1})$ (with error $\leq \varepsilon$), otherwise $a_{(k+1)}=\emptyset$.\\ {\bf 3.}\> Call the oracle ${\mathcal{O}}(x_1,\ldots,x_{k+1},a_{(k+1)})$ to apply the phase $(-1)^{f(s(x_1,\ldots,x_k), x_{k+1})}$ using the key $a_{(k+1)}$.\\ {\bf 4.}\> If $k+1<\ell$ then call \textsc{FIND}$^\dag$ to (approximately) uncompute $a_{(k+1)}$.\\ {\bf 5.}\> We are now left with $\ket{\tilde{\varphi}_{(k)}}$, which is close to $\ket{\varphi_{s(x_1,\ldots,x_k)}}$.\\ \>Repeat steps 1--4 $m = \frac{4}{\delta}\ln\frac{8}{\delta}$ times to obtain $\ket{\tilde{\varphi}_{(k)}}^{\otimes m}$\\ {\bf 6.}\> Coherently measure $\{\Pi_a\}$ on each copy and test the results (i.e.\ apply $U$, test the result, and apply $U^\dag$). \\ {\bf 7.}\> If any tests pass, copy the correct $a_{(k)}$ to an output register, along with $\ket{0}$ to indicate success.\\ \> Otherwise put a $\ket{1}$ in the output to indicate failure.\\ {\bf 8.}\> Let everything else comprise the junk register $\ket{\zeta_{(k)}}$.\\ \end{tabbing} \begin{theorem}\label{thm:recursive-correct} Calling $\textsc{FIND}$ on $|0\rangle$ solves the recursive oracle problem in quantum polynomial time. \end{theorem} \begin{proof} The proof is by backward induction on $k$; we assume that the algorithm returns with error $\leq \varepsilon$ for $k+1$ and prove it for $k$. The initial step when $k=\ell$ is trivial since there is no need to compute $a_{\ell +1}$, and thus no source of error. If $k<\ell$, then assume that correctness of the algorithm has already been proved for $k+1$. Therefore Step 2 leaves the state $$\frac{1}{\sqrt{|X|}}\sum_{x_{k+1}\in X} \ket{x_{k+1}} \left[\sqrt{1-\varepsilon_{(k+1)}}\ket{0}\ket{a_{(k+1)}}\ket{\zeta_{(k+1)}} + \sqrt{\varepsilon_{(k+1)}}\ket{1}\ket{\zeta_{(k+1)}'}\right].$$ In Step 3, we assume for simplicity that the oracle was called conditional on the success of Step 2. 
This yields $$\ket{\psi'_{(k)}}:= \frac{1}{\sqrt{|X|}}\sum_{x_{k+1}\in X} \ket{x_{k+1}} \left[(-1)^{f(a_{(k)}, x_{k+1})} \sqrt{1-\varepsilon_{(k+1)}}\ket{0}\ket{a_{(k+1)}}\ket{\zeta_{(k+1)}} + \sqrt{\varepsilon_{(k+1)}}\ket{1}\ket{\zeta_{(k+1)}'}\right].$$ Now define the state $\ket{\psi_{(k)}}$ by $$ \ket{\psi_{(k)}} := \frac{1}{\sqrt{|X|}}\sum_{x_{k+1}\in X} (-1)^{f(a_{(k)}, x_{k+1})} \ket{x_{k+1}} \left[\sqrt{1-\varepsilon_{(k+1)}}\ket{0}\ket{a_{(k+1)}}\ket{\zeta_{(k+1)}} + \sqrt{\varepsilon_{(k+1)}}\ket{1}\ket{\zeta_{(k+1)}'}\right].$$ Note that $$\braket{\psi'_{(k)}}{\psi_{(k)}} = \frac{1}{|X|} \sum_{x_{k+1}\in X} \left(1-\varepsilon_{(k+1)} + (-1)^{f(a_{(k)},x_{k+1})} \varepsilon_{(k+1)}\right).$$ This quantity is real and always $\geq 1-2\varepsilon_{(k+1)}\geq\sqrt{1-4\varepsilon}$ by the induction hypothesis. Let $$|\phi_{(k)}\rangle := \frac{1}{\sqrt{|X|}} \sum_{x_{k+1}\in X} (-1)^{f(a_{(k)},x_{k+1})} |x_{k+1}\rangle|0\rangle.$$ Note that $\textsc{FIND}^\dag |x_1,\ldots,x_k,\psi_{(k)}\rangle = |x_1,\ldots,x_k,\phi_{(k)}\rangle$. Thus there exists $\varepsilon_{(k)}$ such that applying $\textsc{FIND}^\dag$ to $\ket{x_1,\ldots,x_k}\ket{\psi'_{(k)}}$ yields $$\ket{x_1,\ldots,x_k}\otimes \left[\sqrt{1-4\varepsilon_{(k)}}\ket{\phi_{(k)}}+ \sqrt{4\varepsilon_{(k)}}\ket{\phi'_{(k)}}\right],$$ where $\braket{\phi_{(k)}}{\phi_{(k)}'}=0$ and $\varepsilon_{(k)}\leq \varepsilon$. We now want to analyze the effects of measuring $\{\Pi_{a}\}$ when we are given the state $$\ket{\varphi_{(k)}} := \sqrt{1-4\varepsilon_{(k)}}\ket{\phi_{(k)}}+ \sqrt{4\varepsilon_{(k)}}\ket{\phi'_{(k)}}$$ instead of $\ket{\phi_{(k)}}$. If we define $\|M\|_1=\mathop{\mathrm{tr}}\sqrt{M^\dag M}$ for a matrix $M$, then $\| \proj{\varphi_{(k)}}-\proj{\phi_{(k)}}\|_1 = 4 \sqrt{\varepsilon_{(k)}}$ \cite{FG97}. Thus $$\bra{\varphi_{(k)}}\Pi_{a_{(k)}}\ket{\varphi_{(k)}} \geq \bra{\phi_{(k)}}\Pi_{a_{(k)}}\ket{\phi_{(k)}} - 4\sqrt{\varepsilon_{(k)}} \geq \delta - 4\sqrt{\varepsilon_{(k)}} \geq \delta/2.$$ In the last step we have chosen $\varepsilon = (\delta/8)^2$. Finally, we need to guarantee that with probability $\geq 1-\varepsilon$ at least one of the tests in Step 6 passes. After applying $U$ and the test oracle to $\ket{\varphi_{(k)}}$, we have $\geq \sqrt{\delta/2}$ overlap with a successful test and $\leq \sqrt{1-\delta/2}$ overlap with an unsuccessful test. When we repeat this $m$ times, the amplitude in the subspace corresponding to all tests failing is $\leq (1-\delta/2)^{m/2}\leq e^{-m\delta/4}$. If we choose $m=(2/\delta)\ln(1/\varepsilon)=(4/\delta)\ln(8/\delta)$ then the failure amplitude will be $\leq \sqrt{\varepsilon}$, as desired. To analyze the time complexity, first note that the run-time is $O(T)$ times the number of queries made by the algorithm, and we have assumed that $T$ is polynomial in $n$. Suppose the algorithm at level $k$ requires $Q(k)$ queries. Then steps 2 and 4 require $mQ(k+1)$ queries each, steps 3 and 6 require $m$ queries each and together $Q(k)=2mQ(k+1)+2m$. The base case is $k=\ell$, for which $Q(\ell)=0$, since there are no secret strings to calculate for the leaves. The total number of queries required for the algorithm is then $Q(0)\approx (2m)^{2\ell}$. If we choose $\ell = \log n$ the quantum query complexity will thus be $n^{2\log 2m} = n^{O(1)}$ and the quantum complexity will be polynomial in $n$ compared with the $n^{\Omega(\log n)}$ lower bound. \end{proof} This concludes the demonstration of the polynomial-time quantum algorithm.
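The query recursion in the proof above is simple enough to check numerically. The sketch below is our illustration only, with the constant $m=(4/\delta)\ln(8/\delta)$ and the recursion $Q(k)=2mQ(k+1)+2m$ taken from the proof; it confirms that the total query count stays polynomial in $n$ when $\ell=\log n$.
\begin{verbatim}
import math

def query_count(delta, n):
    m = (4 / delta) * math.log(8 / delta)   # repetitions per level
    ell = max(1, int(math.log2(n)))         # recursion depth ell = log n
    Q = 0.0                                 # base case Q(ell) = 0
    for _ in range(ell):
        Q = 2 * m * Q + 2 * m               # Q(k) = 2m Q(k+1) + 2m
    return Q

for n in (16, 64, 256, 1024):
    # stays within the (2m)^{2 ell} = n^{2 log(2m)} bound quoted above
    print(n, f"{query_count(0.5, n):.3g}")
\end{verbatim}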
Now we turn to the classical $n^{\Omega(\log n)}$ lower bound. Our key technical result is the following lemma: \begin{lemma}\label{lem:classical-lower-bound} Define the recursive oracle identification problem as above, with a function $f:A \times X\to \{0,1\}$ and a secret $s:\emptyset \cup X \cup X\times X \cup \ldots \cup X^{\ell-1}\to A$ encoded in an oracle ${\mathcal{O}}$. Fix a deterministic classical algorithm that makes $\leq Q$ queries to ${\mathcal{O}}$. Then if $s$ and $\ANS$ are chosen uniformly at random, the probability that $\ANS$ is output by the algorithm is $$\leq \frac{1}{2} + \max\left( \frac{Q}{|A|^{1/3} - Q}, Q\left(\frac{\log|A|}{3}\right)^{-\ell} \right).$$ \end{lemma} Using Yao's minimax principle and plugging in $|A|=2^{\alpha n}$, $\ell=\log n$ and $Q=n^{o(\log n)}$ readily yields \begin{theorem}\label{thm:amplification} If $\log |A| = n^{\Omega(1)}$ and $\ell=\Omega(\log n)$, then any randomized classical algorithm using $Q = n^{o(\log n)}$ queries will have $\frac{1}{2}+n^{-\Omega(\log n)}$ probability of successfully outputting $\ANS$. \end{theorem} \begin{proof}[of \lemref{classical-lower-bound}] Let $T = \emptyset \cup X \cup \ldots \cup X^{\ell}$ denote the tree on which the oracle is defined. We say that a node $x\in T$ has been {\em hit} by the algorithm if position $x$ has been queried together with the correct secret, i.e.\ ${\mathcal{O}}(x,s(x))$ has been queried. The only way to obtain information about $\ANS$ is for the algorithm to query $\emptyset$ with the appropriate secret; in other words, to hit $\emptyset$. For $x,y\in T$ we say that $x$ is an {\em ancestor} of $y$, and that $y$ is a {\em descendant} of $x$, if $y=x \times z $ for some $z\in T$. If $z\in X$ then we say that $y$ is a {\em child} of $x$ and that $x$ is a {\em parent} of $y$. Now define $S\subset T$ to be the set of all $x\in T$ such that $x$ has been hit but none of $x$'s ancestors have been. Also define a function $d(x)$ to be the depth of a node $x$; i.e. for all $x\in X^k$, $d(x)=k$. We combine these definitions to declare an invariant $$Z = \sum_{x\in S} \left(\frac{\log |A|}{3}\right)^{-d(x)}.$$ The key properties of $Z$ we need are that: \begin{enumerate} \item Initially $Z=0$. \item If the algorithm is successful then it terminates with $Z=1$. \item Only oracle queries change the value of $Z$. \item Querying a leaf can add at most $(\log |A|/3)^{-\ell}$ to $Z$. \item Querying an internal node (i.e.\ not a leaf) can add at most $2 / (|A|^{1/3} - Q)$ to $\mathop{\mbox{$\mathbf{E}$}} Z$, where $\mathop{\mbox{$\mathbf{E}$}}$ indicates the expectation over random choices of $s$. \end{enumerate} Combining these facts yields the desired bound. Properties 1--4 follow directly from the definition (with the inequality in property 4 because it is possible to query a node that has already been hit). To establish property 5, suppose that the algorithm queries node $x\in T$ and that it has previously hit $k$ of $x$'s children. This gives us some partial information about $s(x)$. We can model this information as a partition of $A$ into $2^k$ disjoint sets $A_1,\ldots,A_{2^k}$ (of which some could be empty). From the $k$ bits returned by the oracle on the $k$ children of $x$ we have successfully queried, we know not only that $s(x)\in A$, but that $s(x)\in A_i$ for some $i\in \{1,\ldots,2^k\}$. We will now divide the analysis into two cases. Either $k\leq \frac{1}{3}\log|A|$ or $k>\frac{1}{3}\log|A|$.
We will argue that in the former case, $|A_i|$ is likely to be large, and so we are unlikely to successfully guess $s(x)$, while in the latter case even a successful guess will not increase $Z$. The latter case ($k>\frac{1}{3}\log|A|$) is easier, so we consider it first. In this case, $Z$ only changes if $x$ is hit in this step and neither $x$ nor any of its ancestors have been previously hit. Then even though hitting $x$ will contribute $(\log |A|/3)^{-d(x)}$ to $Z$, it will also remove the $k$ children from $S$ (as well as any other descendants of $x$), which will decrease $Z$ by at least $k(\log |A|/3)^{-d(x)-1} > (\log |A|/3)^{-d(x)}$, resulting in a net decrease of $Z$. Now suppose that $k \leq \frac{1}{3}\log |A|$. Recall that our information about $s(x)$ can be expressed by the fact that $s(x)\in A_i$ for some $i\in \{1,\ldots,2^k\}$. Since the values of $s$ were chosen uniformly at random, we have $\Pr(A_i)=|A_i|/|A|$. Say that a set $A_i$ is {\em bad} if $|A_i| \leq |A|^{2/3}/2^k$. Then for a particular bad set $A_i$, $\Pr(A_i) \leq |A|^{-1/3}2^{-k}$. From the union bound, we see that the probability that {\em any} bad set is chosen is $\leq |A|^{-1/3}$. Assume then that we have chosen a good set $A_i$, meaning that conditioned on the values of the children there are $|A_i|\geq |A|^{2/3}/2^k \geq |A|^{1/3}$ possible values of $s(x)$. However, previous failed queries at $x$ may also have ruled out specific possible values of $s(x)$. There have been at most $Q$ queries at $x$, so there are $\geq |A|^{1/3}-Q$ possible values of $s(x)$ remaining. (Queries to any other nodes in the graph yield no information on $s(x)$.) Thus the probability of hitting $x$ is $\leq 1 / (|A|^{1/3}-Q)$ if we have chosen a good set. We also have a $\leq |A|^{-1/3}$ probability of choosing a bad set, so the total probability of hitting $x$ (in the $k \leq \frac{1}{3}\log |A|$ case) is $\leq |A|^{-1/3} + 1/(|A|^{1/3}-Q) \leq 2 / (|A|^{1/3}-Q)$. Finally, hitting $x$ will increase $Z$ by at most one, so the largest possible increase of $\mathop{\mbox{$\mathbf{E}$}} Z$ when querying a non-leaf node is $\leq 2/(|A|^{1/3} - Q)$. This completes the proof of property 5 and thus the Lemma. \end{proof} \section{Dispersing Circuits} In this section we define {\em dispersing} circuits and show how to construct an oracle problem with a constant versus linear separation from any such circuit. In the next sections we will show how to find dispersing circuits. Our strategy for finding speedups will be to start with a unitary circuit $U$ which acts on $n$ qubits and has size polynomial in $n$. We will then try to find an oracle for which $U$ efficiently solves the corresponding oracle identification problem. Next we need to define a state $\ket{\varphi_a}$ that can be prepared with $O(1)$ oracle calls and has $\Omega(1)$ overlap with $U^\dag\ket{a}$. This is accomplished by letting $\ket{\varphi_a}$ be a state of the form $2^{-n/2}\sum_x \pm \ket{x}$. We can prepare $\ket{\varphi_a}$ with only two oracle calls (or one, depending on the model), but to guarantee that $|\bra{a}U\ket{\varphi_a}|$ can be made large, we will need an additional condition on $U$. For any $a\in A$, $U^\dag\ket{a}$ should have amplitude that is mostly spread out over the entire computational basis. When this is the case, we say that $U$ is {\em dispersing}. The precise definition is as follows: \begin{definition} Let $U$ be a quantum circuit on $n$ qubits.
For $0<\alpha,\beta\leq 1$, we say that $U$ is $(\alpha,\beta)$-dispersing if there exists a set $A\subseteq \{0,1\}^n$ with $|A|\geq 2^{\alpha n}$ and \begin{equation} \sum_{x\in\{0,1\}^n}|\bra{a}U\ket{x}| \geq \beta 2^{\frac{n}{2}}. \label{eq:disp-def}\end{equation} for all $a\in A$. \end{definition} Note that the LHS of \eq{disp-def} can also be interpreted as the $L_1$ norm of $U^\dag\ket{a}$. The speedup in \cite{SICOMP::BernsteinV1997} uses $U=H^{\otimes n}$, which is (1,1)-dispersing since $\sum_x |\bra{a}H^{\otimes n}\ket{x}|=2^{n/2}$ for all $a$. Similarly the QFT over the cyclic group is (1,1)-dispersing.\footnote{Another possible way to generalize \cite{SICOMP::BernsteinV1997} is to consider other unitaries of the form $U=A^{\otimes n}$, for $A\in{\mathcal{U}}_2$. However, it is not hard to show that the only way for such a $U$ to be $(\Omega(1), \Omega(1))$-dispersing is for $A$ to be of the form $e^{i\phi_1\sigma_z} H e^{i \phi_2\sigma_z}$.} Nonabelian QFTs do not necessarily have the same strong dispersing properties, but they satisfy a weaker definition that is still sufficient for a quantum speedup. Suppose that the measurement operator is instead defined as $\Pi_a = U(\proj{a} \otimes I)U^\dag$, where $a$ is a string on $m$ bits and $I$ denotes the identity operator on $n-m$ bits. Then $U$ still permits oracle identification, but our requirements that $U$ be dispersing are now relaxed. Here, we give a definition that is loose enough for our purposes, although further weakening would still be possible. \begin{definition} Let $U$ be a quantum circuit on $n$ qubits. For $0<\alpha,\beta\leq 1$ and $0<m\leq n$, we say that $U$ is $(\alpha,\beta)$-pseudo-dispersing if there exists a set $A\subseteq \{0,1\}^m$ with $|A|\geq 2^{\alpha n}$ such that for all $a\in A$ there exists a unit vector $\ket{\psi}\in{\mathbb{C}}^{2^{n-m}}$ such that \begin{equation} \sum_{x\in\{0,1\}^n}|\bra{a}\bra{\psi}U\ket{x}| \geq \beta 2^{\frac{n}{2}}. \label{eq:ps-disp-def}\end{equation} \end{definition} This is a weaker property than being dispersing, meaning that any $(\alpha,\beta)$-dispersing circuit is also $(\alpha,\beta)$-pseudo-dispersing. We can now state our basic constant vs. linear query separation. \begin{theorem}\label{thm:lin-sep} If $U$ is $(\alpha,\beta)$-pseudo-dispersing, then there exists an oracle problem which can be solved with one query, one use of $U$ and success probability $(2\beta/\pi)^2$. However, any classical randomized algorithm that succeeds with probability $\geq \delta$ must use $\geq \alpha n + \log \delta$ queries. \end{theorem} Before we prove this Theorem, we state a Lemma about how well states of the form $2^{-n/2}\sum_x e^{i\phi_x} \ket{x}$ can be approximated by states of the form $2^{-n/2}\sum_x \pm \ket{x}$. \begin{lemma}\label{lem:complex-approx} For any vector $(x_1,\ldots,x_d)\in{\mathbb{C}}^{d}$ there exists $(\theta_1,\ldots,\theta_d)\in\{\pm 1\}^d$ such that $$\left|\sum_{k=1}^d x_k\theta_k\right|\geq \frac{2}{\pi}\sum_{k=1}^d \left|x_k\right|.$$ \end{lemma} The proof is in the full version of the paper\cite{HH08}. {\em Proof of \thmref{lin-sep}:} Since $U$ is $(\alpha,\beta)$-pseudo-dispersing, there exists a set $A\subset \{0,1\}^m$ with $|A|\geq 2^{\alpha n}$ and satisfying \eq{ps-disp-def} for each $a\in A$. The problem will be to determine $a$ by querying an oracle ${\mathcal{O}}_a(x)$. 
No matter how we define the oracle, as long as it returns only one bit per call, any classical randomized algorithm making $q$ queries can have success probability no greater than $2^{q-\alpha n}$ (or else guessing could succeed with probability $>2^{-\alpha n}$ without making any queries). This implies the classical lower bound. Given $a\in A$, to define the oracle ${\mathcal{O}}_a$, first use the definition to choose a state $|\psi\rangle$ satisfying \eq{ps-disp-def}. Then by \lemref{complex-approx}, choose a vector $\vec{\theta}$ that (when normalized to $\ket{\theta}$) will approximate the state $U^\dag|a\rangle|\psi\rangle$. Define ${\mathcal{O}}_a(x)$ so that $(-1)^{{\mathcal{O}}_a(x)}=\theta_x=2^{n/2}\braket{x}{\theta}$. By construction, \begin{equation} 2^{-n/2}|\bra{a}\bra{\psi}U\ket{\theta}| \geq \frac{2}{\pi}\beta \label{eq:pm-good-approx}\end{equation} which implies that creating $\ket{\theta}$, applying $U$, and measuring the first register has probability $\geq (2\beta/\pi)^2$ of yielding the correct answer $a$. \qed \section{Any quantum Fourier transform is pseudo-dispersing} In this section we start with some special cases of dispersing circuits by showing that any Fourier transform is pseudo-dispersing. In the next section we show that most circuits are dispersing. The original RFS paper~\cite{SICOMP::BernsteinV1997} used the fact that $H^{\otimes n}$ is (1,1)-dispersing to obtain their starting $O(1)$ vs $\Omega(n)$ separation. The QFT on the cyclic group (or any abelian group, in fact) is also (1,1)-dispersing. In fact, if we accept a pseudo-dispersing circuit, then any QFT will work: \begin{theorem}\label{thm:QFT-dispersing} Let $G$ be a group with irreps $\hat{G}$ and $d_\lambda$ denoting the dimension of irrep $\lambda$. Then the Fourier transform over $G$ is $(\alpha,1/\sqrt{2})$-pseudo-dispersing, where $\alpha=(\log\sum_\lambda d_\lambda)/\log|G| \geq 1/2$. \end{theorem} Via \thmref{lin-sep} and \thmref{superpoly-sep}, this implies that any QFT can be used to obtain a superpolynomial quantum speedup. For most nonabelian QFTs, this is the first example of a problem which they can solve more quickly than a classical computer. \begin{proof}[Proof of \thmref{QFT-dispersing}] Let $A=\{(\lambda,i): \lambda \in \hat{G}, i \in \{1,\ldots,d_\lambda\}\}$. Let $V_\lambda$ denote the representation space corresponding to an irrep $\lambda\in\hat{G}$. The Fourier transform on $G$ maps vectors in ${\mathbb{C}}[G]$ to superpositions of vectors of the form $\ket{\lambda}\ket{v_1}\ket{v_2}$ for $\ket{v_1},\ket{v_2}\in V_\lambda$. Fix a particular choice of $\lambda$ and $\ket{i}\in V_\lambda$. If $U$ denotes the QFT on $G$ then let $$\rho = U^\dag\left(\proj{\lambda}\otimes \proj{i} \otimes \frac{I_{V_\lambda}}{d_\lambda}\right) U.$$ Define $V := \supp \rho$, and let $\mathop{\mbox{$\mathbf{E}$}}_{\ket{\psi}\in V}$ denote an expectation over $\ket{\psi}$ chosen uniformly at random from unit vectors in $V$.\footnote{We can think of $\ket{\psi}$ either as the result of applying a Haar uniform unitary to a fixed unit vector, or by choosing $\ket{\psi'}$ from any rotationally invariant ensemble (e.g. choosing the real and imaginary part of each component to be an i.i.d. Gaussian with mean zero) and setting $\ket{\psi}=\ket{\psi'}/\sqrt{\braket{\psi'}{\psi'}}$.} Finally, let $\Pi$ be the projector onto $V$. Note that $\rho = \Pi/d_\lambda = \mathop{\mbox{$\mathbf{E}$}} \proj{\psi}$.
Because of the invariance of $\rho$ under right-multiplication by group elements (i.e.\ $\bra{g_1}\rho\ket{g_2}=\bra{g_1h}\rho\ket{g_2h}$ for all $g_1,g_2,h\in G$), we have for any $g$ that \begin{equation} \bra{g}\rho\ket{g} = \frac{1}{|G|} \sum_h \bra{gh}\rho\ket{gh} = \frac{1}{|G|} \mathop{\mathrm{tr}}(\rho) = \frac{1}{|G|}. \label{eq:rho-isotropic}\end{equation} Since $\mathop{\mbox{$\mathbf{E}$}}\proj{\psi}=\rho$, \eq{rho-isotropic} implies that $$\mathop{\mbox{$\mathbf{E}$}}_{\ket{\psi}\in V} |\braket{g}{\psi}|^2 = \bra{g}\rho\ket{g} = \frac{1}{|G|}.$$ Next, we would like to analyze $\mathop{\mbox{$\mathbf{E}$}} |\braket{g}{\psi}|^4$. \ba \mathop{\mbox{$\mathbf{E}$}}_\ket{\psi} |\braket{g}{\psi}|^4 & = \mathop{\mbox{$\mathbf{E}$}}_\ket{\psi} \mathop{\mathrm{tr}} \left(\proj{g}\otimes\proj{g}\right) \cdot \left(\proj{\psi}\otimes\proj{\psi}\right) \\ & = \mathop{\mathrm{tr}} \left(\proj{g}\otimes\proj{g}\right) \frac{I + \textsc{swap}}{d_\lambda(d_\lambda+1)} \left(\Pi \otimes \Pi\right)\label{eq:sym-subspace}\\ &\leq \mathop{\mathrm{tr}} \left(\proj{g}\otimes\proj{g}\right) \cdot (I + \textsc{swap}) (\rho \otimes \rho)\\ & = 2(\bra{g}\rho\ket{g})^2 = \frac{2}{|G|^2} \ea To prove the equality on the second line, we use a standard representation-theoretic trick (cf. section V.B of \cite{PSW05}). First note that $\ket{\psi}^{\otimes 2}$ belongs to the symmetric subspace of $V\otimes V$, which is a $\frac{d_\lambda(d_\lambda+1)}{2}$-dimensional irrep of ${\mathcal{U}}_{d_\lambda}$. Since $\mathop{\mbox{$\mathbf{E}$}}_{\ket{\psi}}\proj{\psi}^{\otimes 2}$ is invariant under conjugation by $u \otimes u$ for any $u\in {\mathcal{U}}_{d_\lambda}$, it follows that $\mathop{\mbox{$\mathbf{E}$}}_\ket{\psi} \proj{\psi}^{\otimes 2}$ is proportional to a projector onto the symmetric subspace of $V^{\otimes 2}$. Finally, $\textsc{swap}\Pi^{\otimes 2}$ has eigenvalue $1$ on the symmetric subspace of $V^{\otimes 2}$ and eigenvalue $-1$ on its orthogonal complement, the antisymmetric subspace of $V^{\otimes 2}$. Thus, $\frac{I+\textsc{swap}}{2}\Pi^{\otimes 2}$ projects onto the symmetric subspace and we conclude that $$\mathop{\mbox{$\mathbf{E}$}}_\ket{\psi} \proj{\psi}^{\otimes 2} = \frac{(I+\textsc{swap})(\Pi \otimes \Pi)}{d_\lambda(d_\lambda+1)}.$$ Now we note the inequality \begin{equation} \mathop{\mbox{$\mathbf{E}$}} |Y| \geq (\mathop{\mbox{$\mathbf{E}$}} Y^2)^{\frac{3}{2}} / (\mathop{\mbox{$\mathbf{E}$}} Y^4)^{\frac{1}{2}}, \label{eq:fourth-moment}\end{equation} which holds for any random variable $Y$ and can be proved using H\"{o}lder's inequality~\cite{SICOMP::Berger1997}. Setting $Y=|\braket{g}{\psi}|$, we can bound $\mathop{\mbox{$\mathbf{E}$}}_{\ket{\psi}} |\braket{g}{\psi}| \geq 1/\sqrt{2|G|}$. Summing over $G$, we find $$\mathop{\mbox{$\mathbf{E}$}}_\ket{\psi} \sum_{g\in G}|\braket{g}{\psi}| \geq \frac{1}{\sqrt{2}} \sqrt{|G|}.$$ Finally, because this last inequality holds in expectation, it must also hold for at least some choice of $\ket{\psi}$. Thus there exists $\ket{\psi}\in V$ such that $$\sum_{g\in G}|\braket{g}{\psi}| \geq \frac{1}{\sqrt{2}} \sqrt{|G|}.$$ Then $U$ satisfies the pseudo-dispersing condition in \eq{ps-disp-def} for the state $\ket{\psi}$ with $\beta=1/\sqrt{2}$. This construction works for each $\lambda\in\hat{G}$ and for $\ket{v_1}$ running over any choice of basis of $V_\lambda$. Together, this comprises $\sum_{\lambda\in\hat{G}} d_\lambda$ vectors in the set $A$. 
\end{proof} \section{Most circuits are dispersing} \label{sec:random-circuits} Our final, and most general, method of constructing dispersing circuits is simply to choose a polynomial-size random circuit. We define a length-$t$ random circuit to consist of performing the following steps $t$ times. \begin{enumerate} \item Choose two distinct qubits $i,j$ at random from $[n]$. \item Choose a Haar-distributed random $U\in{\mathcal{U}}_4$. \item Apply $U$ to qubits $i$ and $j$. \end{enumerate} A similar model of random circuits was considered in \cite{DOP07}. Our main result about these random circuits is the following Theorem. \begin{theorem}\label{thm:random-dispersing} For any $\alpha,\beta>0$, there exists a constant $C$ such that if $U$ is a random circuit on $n$ qubits of length $t=Cn^3$ then $U$ is $(\alpha,\beta)$-dispersing with probability $$\geq 1 - \frac{2\beta^2 }{1-2^{-n(1-\alpha)}}.$$ \end{theorem} \thmref{random-dispersing} is proved in the extended version of this paper\cite{HH08}. The idea of the proof is to reduce the evolution of the fourth moments of the random circuit (i.e. quantities of the form $\mathop{\mbox{$\mathbf{E}$}}_U \mathop{\mathrm{tr}} UM_1U^\dag M_2UM_3U^\dag M_4$) to a classical Markov chain, using the approach of \cite{DOP07}. Then we show that this Markov chain has a gap of $\Omega(1/n^2)$, so that circuits of length $O(n^3)$ have fourth moments nearly identical to those of Haar-uniform unitaries from ${\mathcal{U}}_{2^n}$. Finally, we use \eq{fourth-moment}, just as we did for quantum Fourier transforms, to show that a large fraction of inputs are likely to be mapped to states with large $L_1$-norm. This will prove \thmref{random-dispersing} and show that superpolynomial quantum speedups can be built by plugging almost any circuit into the recursive framework we describe in \secref{recur}. \section*{Acknowledgments} AWH was funded by the U.S. Army Research Office under grant W9111NF-05-1-0294, the European Commission under Marie Curie grants ASTQIT (FP6-022194) and QAP (IST-2005-15848), and the U.K. Engineering and Physical Science Research Council through ``QIP IRC.'' \bibliographystyle{alpha}
\section{Introduction} Given a discrete space $X$, we take the points of $\beta X$, the Stone--\v{C}ech compactification of $X$, to be the ultrafilters on $X$, with the points of $X$ identified with the principal ultrafilters, so $X^*=\beta X\setminus X$ is the set of all free ultrafilters on $X$. The topology on $\beta X$ can be defined by stating that the sets of the form $\overline{A}=\{p\in\beta X: A\in p\}$, where $A$ is a subset of $X$, form a base for the open sets. We note that the sets of this form are clopen and that, for any $p\in\beta X$ and $A\subseteq X$, $A\in p$ if and only if $p\in\overline{A}$. For any $A\subseteq X$, we denote $A^*=\overline{A}\cap X^*$. The universal property of $\beta X$ states that every mapping $f: X\to Y$, where $Y$ is a compact Hausdorff space, can be extended to the continuous mapping $f^\beta:\beta X\to Y$. Now let $G$ be a discrete group. Using the universal property of $\beta G$, we can extend the group multiplication from $G$ to $\beta G$ in two steps. Given $g\in G$, the mapping $$x\mapsto gx: \text{ } G\to \beta G$$ extends to the continuous mapping $$q\mapsto gq: \text{ } \beta G\to \beta G.$$ Then, for each $q\in\beta G$, we extend the mapping $g\mapsto gq$ defined from $G$ into $\beta G$ to the continuous mapping $$p\mapsto pq:\text{ }\beta G\to\beta G.$$ The product $pq$ of the ultrafilters $p$, $q$ can also be defined by the rule: given a subset $A\subseteq G$, $$A\in pq\leftrightarrow\{g\in G:g^{-1}A\in q\}\in p.$$ To describe a base for $pq$, we take any element $P\in p$ and, for every $x\in P$, choose some element $Q_x\in q$. Then $\bigcup_{x\in P}xQ_x\in pq$, and the family of subsets of this form is a base for the ultrafilter $pq$. By the construction, the binary operation $(p,q)\mapsto pq$ is associative, so $\beta G$ is a semigroup, and $G^*$ is a subsemigroup of $\beta G$. For each $q\in \beta G$, the right shift $x\mapsto xq$ is continuous, and the left shift $x\mapsto gx$ is continuous for each $g\in G$. For the structure of the compact right topological semigroup $\beta G$ and many of its applications to combinatorics, topological algebra and functional analysis see ~\cite{b2}, ~\cite{b4}, ~\cite{b5}, ~\cite{b18}, ~\cite{b21}. Given a subset $A$ of a group $G$ and an ultrafilter $p\in G^*$, we define a {\em $p$-companion} of $A$ by $$\vt_p(A)=A^*\cap Gp=\{gp: g\in G, A\in gp\},$$ and say that a subset $S$ of $G^*$ is an {\em ultracompanion} of $A$ if $S=\vt_p(A)$ for some $p\in G^*$. Clearly, $A$ is finite if and only if $\vt_p(A)=\varnothing$ for every $p\in G^*$, and $\vt_p(G)=Gp$ for each $p\in G^*$. We say that a subset $A$ of a group $G$ is \\$\bullet$ {\em sparse} if each ultracompanion of $A$ is finite; \\$\bullet$ {\em disparse} if each ultracompanion of $A$ is discrete. In fact, the sparse subsets were introduced in ~\cite{b3} with a rather technical definition (see Proposition 5) in order to characterize the strongly prime ultrafilters in $G^*$, the ultrafilters from $G^*\setminus\overline{G^*G^*}$. In this paper we study the families of sparse and disparse subsets of a group, and characterize in terms of ultracompanions the subsets from the following basic classification.
A subset $A$ of $G$ is called \\$\bullet$ {\em large} if $G=FA$ for some finite subset $F$ of $G$; \\$\bullet$ {\em thick} if, for every finite subset $F$ of $G$, there exists $a\in A$ such that $Fa\subseteq A$; \\$\bullet$ {\em prethick} if $FA$ is thick for some finite subset $F$ of $G$; \\$\bullet$ {\em small} if $L\setminus A$ is large for every large subset $L$; \\$\bullet$ {\em thin} if $gA\cap A$ is finite for each $g\in G\setminus\{e\}$, where $e$ is the identity of $G$. In the dynamical terminology ~\cite{b5}, the large and prethick subsets are called syndetic and piecewise syndetic respectively. For references on the subset combinatorics of groups see the survey ~\cite{b12}. We conclude the paper with discussions of some modifications of sparse subsets and a couple of open questions. \section{Characterizations}~\label{s2} \begin{Ps}~\label{p1} For a subset $A$ of a group $G$ and an ultrafilter $p\in G^*$, the following statements hold $(i)$ $\vt_p(FA)=F\vt_p(A)$ for every finite subset $F$ of $G$; $(ii)$ $\vt_p(Ah)=\vt_{ph^{-1}}(A)$ for every $h\in G$; $(iii)$ $\vt_p(A\cup B)=\vt_p(A)\cup\vt_p(B)$.\end{Ps} \begin{Ps} For an infinite subset $A$ of a group $G$, the following statements are equivalent $(i)$ $A$ is large; $(ii)$ there exists a finite subset $F$ of $G$ such that, for each $p\in G^*$, we have $Gp=\vt_p(FA)$; $(iii)$ for every $p\in G^*$, there exists a finite subset $F_p$ of $G$ such that $Gp=\vt_p(F_pA)$.\end{Ps} \begin{proof} The implications $(i)\Rightarrow (ii)\Rightarrow(iii)$ are evident. To prove $(iii)\Rightarrow(i)$, we note that the family $\{(F_pA)^*:p\in G^*\}$ is a covering of $G^*$, choose a finite subcovering $(F_{p_1}A)^*,...,(F_{p_n}A)^*$ and put $F=F_{p_1}\cup...\cup F_{p_n}$. Then $G^*=(FA)^*$, so $G\setminus FA$ is finite and $G=HFA$ for some finite subset $H$ of $G$. Hence, $A$ is large. \end{proof} \begin{Ps}~\label{p3} For an infinite subset $A$ of a group $G$ the following statements hold $(i)$ $A$ is thick if and only if there exists $p\in G^*$ such that $\vt_p(A)=Gp$; $(ii)$ $A$ is prethick if and only if there exist $p\in G^*$ and a finite subset $F$ of $G$ such that $\vt_p(FA)=Gp$; $(iii)$ $A$ is small if and only if, for every $p\in G^*$ and each finite subset $F$ of $G$, we have $\vt_p(FA)\neq Gp$; $(iv)$ $A$ is thin if and only if $|\vt_p(A)|\leq1$ for each $p\in G^*$.\end{Ps} \begin{proof} $(i)$ Assume that $A$ is thick. For each finite subset $F$ of $G$, we put $P_F=\{x\in A:Fx\subset A\}$ and form the family $\PP=\{P_F:F\text{ is a finite subset of }G\}$. Since $A$ is thick, each subset $P_F$ is infinite. Clearly, $P_F\cap P_H=P_{F\cup H}$. Therefore, $\PP$ is contained in some ultrafilter $p\in G^*$. By the choice of $\PP$, we have $\vt_p(A)=Gp$. On the other hand, let $\vt_p(A)=Gp$. We take an arbitrary finite subset $F$ of $G$. Then $(F\cup\{e\})p\subset A^*$ so $(F\cup\{e\})P\subset A$ for some $P\in p$. Hence, $P\subseteq A$ and $Fx\subset A$ for each $x\in P$. $(ii)$ follows from $(i)$. $(iii)$ We note that $A$ is small if and only if $A$ is not prethick and apply $(ii)$. $(iv)$ follows directly from the definitions of thin subsets and $\vt_p(A)$. \end{proof} For $n\in\NN$, a subset $A$ of a group $G$ is called $n$-thin if, for every finite subset $F$ of $G$, there is a finite subset $H$ of $G$ such that $|Fg\cap A|\le n$ for every $g\in G\setminus H$.
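To make the classification concrete, here is a standard worked example (ours, not taken from the text) in the additive group $\ZZ$: the subgroup $2\ZZ$ is large, since $\ZZ=\{0,1\}+2\ZZ$; the set $\bigcup_{n<\w}[2^n,2^n+n]$ is thick, since it contains arbitrarily long intervals; and the set $A=\{2^n:n<\w\}$ is thin, since for $g\neq 0$ the equation $2^n=g+2^m$ has only finitely many solutions, so $(g+A)\cap A$ is finite. By Proposition~\ref{p3}$(iv)$, the latter means $|\vt_p(A)|\le 1$ for every $p\in\ZZ^*$.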
\begin{Ps} For a subset $A$ of a group $G$, the following statements are equivalent $(i)$ $|\vt_p(A)|\le n$ for each $p\in G^*$; $(ii)$ for every distinct $x_1,..,x_{n+1}\in G$, the set $x_1A\cap...\cap x_{n+1}A$ is finite; $(iii)$ $A$ is $n$-thin. \end{Ps} \begin{proof} We note that $x_1A\cap...\cap x_{n+1}A$ is infinite if and only if there exists $p\in G^*$ such that $x_1^{-1}p,...,x_{n+1}^{-1}p\in A^*$. This observation proves the equivalence $(i)\Leftrightarrow(ii)$. $(ii)\Rightarrow(iii)$ Assume that $A$ is not $n$-thin. Then there are a finite subset $F$ of $G$ and an injective sequence $(g_m)_{m<\w}$ in $G$ such that $|Fg_m\cap A|>n$ for each $m<\w$. Passing to a subsequence of $(g_m)_{m<\w}$, we may suppose that there exist distinct $x_1,...,x_{n+1}\in F$ such that $\{x_1,...,x_{n+1}\}g_m\subseteq A$ for each $m<\w$, so $x_1^{-1}A\cap...\cap x_{n+1}^{-1}A$ is infinite. $(iii)\Rightarrow(i)$ Assume that $x_1A\cap...\cap x_{n+1}A$ is infinite for some distinct $x_1,...,x_{n+1}\in G$. Then there is an injective sequence $(g_m)_{m<\w}$ in $x_1A\cap...\cap x_{n+1}A$ such that $\{x_1^{-1},...,x_{n+1}^{-1}\}g_m\subset A$, so $A$ is not $n$-thin. \end{proof} By ~\cite{b7}, a subset $A$ of a countable group $G$ is $n$-thin if and only if $A$ can be partitioned into $\le n$ thin subsets. The following statements are from ~\cite{b15}. Every $n$-thin subset of an Abelian group of cardinality $\aleph_m$ can be partitioned into $\le n^{m+1}$ thin subsets. For each $m\ge2$ there exist a group $G$ of cardinality $\aleph_n$, $n=\frac{m(m+1)}{2}$, and a $2$-thin subset $A$ of $G$ which cannot be partitioned into $m$ thin subsets. Moreover, there is a group $G$ of cardinality $\aleph_\w$ and a $2$-thin subset $A$ of $G$ which cannot be finitely partitioned into thin subsets. Recall that an ultrafilter $p\in G^*$ is strongly prime if $p\in G^*\setminus\overline{G^*G^*}$. \begin{Ps} ~\label{p5} For a subset $A$ of a group $G$, the following statements are equivalent $(i)$ $A$ is sparse; $(ii)$ every ultrafilter $p\in A^*$ is strongly prime; $(iii)$ for every infinite subset $X$ of $G$, there exists a finite subset $F\subset X$ such that $\bigcap_{g\in F}gA$ is finite. \end{Ps} \begin{proof} The equivalence $(ii)\Leftrightarrow(iii)$ was proved in ~\cite[Theorem 9]{b3}. To prove $(i)\Leftrightarrow(ii)$, it suffices to note that $\vt_p(A)$ is infinite if and only if $\vt_p(A)$ has a limit point $qp$, $q\in G^*$, in $A^*$. \end{proof} \begin{Ps}~\label{p6} A subset $A$ of a group $G$ is sparse if and only if, for every countable subgroup $H$ of $G$, $A\cap H$ is sparse in $H$. \end{Ps} \begin{proof} Assume that $A$ is not sparse. By Proposition~\ref{p5} $(iii)$, there is a countable subset $X=\{x_n:n<\w\}$ of $G$ such that, for any $n<\w$, $x_0A\cap...\cap x_nA$ is infinite. For any $n<\w$, we pick $a_n\in x_0A\cap...\cap x_nA$, put $S=\{x_0^{-1}a_n,...,x_n^{-1}a_n:n<\w\}$ and denote by $H$ the subgroup of $G$ generated by $S\cup X$. By Proposition~\ref{p5}$(iii)$, $A\cap H$ is not sparse in $H$. \end{proof} A family $\II$ of subsets of a group $G$ is called an ideal in the Boolean algebra $\PP_G$ of all subsets of $G$ if $A,B\in\II$ implies $A\cup B\in\II$, and $A\in\II$, $A'\subset A$ implies $A'\in\II$. An ideal $\II$ is left (right) translation invariant if $gA\in\II$ ($Ag\in\II$) for each $A\in\II$ and each $g\in G$. \begin{Ps} The family $Sp_G$ of all sparse subsets of a group $G$ is a left and right translation invariant ideal in $\PP_G$.\end{Ps} \begin{proof} Apply Proposition~\ref{p1}.
\end{proof} \begin{Ps}~\label{p8} For a subset $A$ of a group $G$, the following statements are equivalent $(i)$ $A$ is disparse; $(ii)$ if $p\in A^*$ then $p\notin G^*p$. \end{Ps} Recall that an element $s$ of a semigroup $S$ is right cancelable if, for any $x,y\in S$, $xs=ys$ implies $x=y$. \begin{Ps}~\label{p9} A subset $A$ of a countable group $G$ is disparse if and only if each ultrafilter $p\in A^*$ is right cancelable in $\beta G$.\end{Ps} \begin{proof} By ~\cite[Theorem 8.18]{b5}, for a countable group $G$, an ultrafilter $p\in G^*$ is right cancelable in $\beta G$ if and only if $p\notin G^*p$. Apply Proposition~\ref{p8}.\end{proof} \begin{Ps}~\label{p10} The family $dSp_G$ of all disparse subsets of a group $G$ is a left and right translation invariant ideal in $\PP_G$.\end{Ps} \begin{proof} Assume that $A\cup B$ is not disparse and pick $p\in G^*$ such that $\vt_p(A\cup B)$ has a non-isolated point $gp$. Then either $gp\in A^*$ or $gp\in B^*$, so $gp$ is non-isolated either in $\vt_p(A)$ or in $\vt_p(B)$. To see that $dSp_G$ is translation invariant, we apply Proposition~\ref{p1}. \end{proof} For an injective sequence $(a_n)_{n<\w}$ in a group $G$, we denote $$FP(a_n)_{n<\w}=\{a_{i_1}a_{i_2}...a_{i_n}:i_1<...<i_n<\w\}.$$ \begin{Ps}~\label{p11} For every disparse subset $A$ of a group $G$, the following two equivalent statements hold $(i)$ if $q$ is an idempotent from $G^*$ and $g\in G$ then $qg\notin A^*$; $(ii)$ for each injective sequence $(a_n)_{n<\w}$ in $G$ and each $g\in G$, $FP(a_n)_{n<\w}g\setminus A$ is infinite. \end{Ps} \begin{proof} The equivalence $(i)\Leftrightarrow(ii)$ follows from two well-known facts. By ~\cite[Theorem 5.8]{b5}, for every idempotent $q\in G^*$ and every $Q\in q$, there is an injective sequence $(a_n)_{n<\w}$ in $Q$ such that $FP(a_n)_{n<\w}\subseteq Q$. By ~\cite[Theorem 5.11]{b5}, for every injective sequence $(a_n)_{n<\w}$ in $G$, there is an idempotent $q\in G^*$ such that $FP(a_n)_{n<\w}\in q$. Assume that $qg\in A^*$. Then $q(qg)=qg$ so $qg\in G^*qg$ and, by Proposition~\ref{p8}, $A$ is not disparse. \end{proof} \begin{Ps}~\label{p12} For every infinite group $G$, we have the following strict inclusions $$Sp_G\subset dSp_G\subset Sm_G,$$ where $Sm_G$ is the ideal of all small subsets of $G$. \end{Ps} \begin{proof} Clearly, $Sp_G\subseteq dSp_G$. To verify $dSp_G\subseteq Sm_G$, we assume that a subset $A$ of $G$ is not small. Then $A$ is prethick and, by Proposition~\ref{p3}$(ii)$, there exist $p\in G^*$ and a finite subset $F$ of $G$ such that $\vt_p(FA)=Gp$. Hence, $G^*p\subseteq (FA)^*$. We take an arbitrary idempotent $q\in G^*$ and choose $g\in F$ such that $qp\in (gA)^*$. Since $q(qp)=qp$, we have $qp\in G^*qp$ and, by Proposition~\ref{p8}$(ii)$, $gA$ is not disparse. By Proposition~\ref{p10}, $A$ is not disparse. To prove that $dSp_G\setminus Sp_G\neq\varnothing$ and $Sm_G\setminus dSp_G\neq\varnothing$, we may suppose that $G$ is countable. We put $F_0=\{e\}$ and write $G$ as the union of an increasing chain $\{F_n:n<\w\}$ of finite subsets. $1.$ To find a subset $A\in dSp_G\setminus Sp_G$, we choose inductively two sequences $(a_n)_{n<\w}$, $(b_n)_{n<\w}$ in $G$ such that $(1)$ $F_nb_n\cap F_{n+1}b_{n+1}=\varnothing,\text{ } n<\w$; $(2)$ $F_ia_ib_j\cap F_ka_kb_m=\varnothing$, $0\le i\le j<\w$, $0\le k\le m<\w$, $(i,j)\neq(k,m)$.\\ We put $a_0=b_0=e$ and assume that $a_0,...,a_n$, $b_0,...,b_n$ have been chosen.
We choose $b_{n+1}$ to satisfy $F_{n+1}b_{n+1}\cap F_ib_i=\varnothing$, $i\le n$, and $$\Big(\bigcup_{0\le i\le j\le n}F_ia_ib_j\Big)\cap\Big(\bigcup_{0\le i\le n}F_ia_i\Big)b_{n+1}=\varnothing.$$ Then we pick $a_{n+1}$ so that $$F_{n+1}a_{n+1}b_{n+1}\cap\Big(\bigcup_{0\le i\le j\le n} F_ia_ib_j\Big)=\varnothing, \qquad F_{n+1}a_{n+1}b_{n+1}\cap\Big(\bigcup_{0\le i\le n} F_ia_ib_{n+1}\Big)=\varnothing.$$ After $\w$ steps, we put $A=\{a_ib_j:0\le i\le j<\w\}$, choose two free ultrafilters $p,q$ such that $\{a_i:i<\w\}\in p$, $\{b_i: i<\w\}\in q$ and note that $A\in pq$. By Proposition~\ref{p5}$(ii)$, $A\notin Sp_G$. To prove that $A\in dSp_G$, we fix $p\in G^*$ and take an arbitrary $q\in\vt_p(A)$. For $n<\w$, let $A_n=\{a_ib_j:0\le i\le n, i\le j<\w\}$. By $(1)$, the set $\{b_j:j<\w\}$ is thin. Applying Proposition~\ref{p3} and Proposition~\ref{p1}, we see that $A_n$ is sparse. Therefore, if $A_n\in q$ for some $n<\w$ then $q$ is isolated in $\vt_p(A)$. Assume that $A_n\notin q$ for each $n<\w$. We take an arbitrary $g\in G\setminus\{e\}$ and choose $m<\w$ such that $g\in F_m$. By $(2)$, $g(A\setminus A_m)\cap A=\varnothing$ so $gq\notin A^*$. Hence, $\vt_p(A)=\{q\}$. $2.$ To find a subset $A\in Sm_G\setminus dSp_G$, we choose inductively two sequences $(a_n)_{n<\w}$, $(b_n)_{n<\w}$ in $G$ such that, for each $m<\w$, the following statement holds $(3)$ $b_mFP(a_n)_{n<\w}\cap F_m(FP(a_n)_{n<\w})=\varnothing.$\\ We put $a_0=e$ and take an arbitrary $g\in G\setminus\{e\}$. Suppose that $a_0,...,a_m$ and $b_0,...,b_m$ have been chosen. We pick $b_{m+1}$ so that $$b_{m+1}FP(a_n)_{n\le m}\cap F_{m+1}(FP(a_n)_{n\le m})=\varnothing$$ and choose $a_{m+1}$ such that $$b_{m+1}(FP(a_n)_{n\le m})a_{m+1}\cap F_{m+1}(FP(a_n)_{n\le m})=\varnothing,$$ $$b_{m+1}(FP(a_n)_{n\le m})\cap F_{m+1}(FP(a_n)_{n\le m})a_{m+1}=\varnothing.$$ After $\w$ steps, we put $A=FP(a_n)_{n<\w}$. By Proposition~\ref{p11}, $A\notin dSp_G$. To see that $A\in Sm_G$, we use $(3)$ and the following observation. A subset $S$ of a group $G$ is small if and only if $G\setminus FS$ is large for each finite subset $F$ of $G$. \end{proof} \begin{Ps}~\label{p13} Let $G$ be a direct product of some family $\{G_\alpha:\alpha<\kappa\}$ of countable groups. Then $G$ can be partitioned into $\aleph_0$ disparse subsets. \end{Ps} \begin{proof} For each $\alpha<\kappa$, we fix some bijection $f_\alpha: G_\alpha\setminus\{e_\alpha\}\to\NN$, where $e_\alpha$ is the identity of $G_\alpha$. Each element $g\in G\setminus\{e\}$ has the unique representation $$g=g_{\alpha_1}g_{\alpha_2}...g_{\alpha_n},\text{ }\alpha_1<\alpha_2<...<\alpha_n<\kappa,\text{ }g_{\alpha_i}\in G_{\alpha_i}\setminus\{e_{\alpha_i}\}.$$ We put $supt g=\{\alpha_1,...,\alpha_n\}$ and let $Seq_\NN$ denote the set of all finite sequences in $\NN$. We define a mapping $f: G\setminus\{e\}\to Seq_\NN$ by $$f(g)=(n,f_{\alpha_1}(g_{\alpha_1}),...,f_{\alpha_n}(g_{\alpha_n}))$$ and put $D_s=f^{-1}(s)$, $s\in Seq_\NN$. We fix some $s\in Seq_{\NN}$ and take an arbitrary $p\in G^*$ such that $p\in D_s^*$. Let $s=(n,m_1,...,m_n)$, so that each $g\in D_s$ has $|supt\, g|=n$ and its coordinate at any $i\in supt\, g$ lies in the finite set $f_i^{-1}(\{m_1,...,m_n\})$. It follows that, for each $i<\kappa$, there exists $x_i\in G_i$ such that $x_iH_i\in p$, where $H_i=\otimes\{G_j:j<\kappa,\ j\neq i\}$. We choose $i_1,...,i_k$, $k<n$ such that $$\{i_1,...,i_k\}=\{i<\kappa:x_iH_i\in p,\text{ }x_i\neq e_i\},$$ put $P=x_{i_1}H_{i_1}\cap...\cap x_{i_k}H_{i_k}\cap D_s$ and assume that $gp\in P^*$ for some $g\in G\setminus\{e\}$. Then $supt g\cap\{i_1,...,i_k\}=\varnothing$. Let $supt g=\{j_1,...,j_t\}$, $H=H_{j_1}\cap...\cap H_{j_t}$.
Then $H\in p$ but $g(H\cap P)\cap D_s=\varnothing$ because $|supt\, gx|>n$ for each $x\in H\cap P$. In particular, $gp\notin P^*$, contradicting the assumption. Hence, $p$ is isolated in $\vt_p(D_s)$.\end{proof} By Proposition~\ref{p13}, every infinite group embeddable in a direct product of countable groups (in particular, every Abelian group) can be partitioned into $\aleph_0$ disparse subsets. \begin{Qs} Can every infinite group be partitioned into $\aleph_0$ disparse subsets? \end{Qs} By \cite{b9}, every infinite group can be partitioned into $\aleph_0$ small subsets. For an infinite group $G$, $\eta(G)$ denotes the minimal cardinality $\kappa$ such that $G$ can be partitioned into $\kappa$ sparse subsets. By \cite[Theorem 1]{b11}, if $|G|>(\kappa^+)^{\aleph_0}$ then $\eta(G)>\kappa$, so the analogue of Proposition~\ref{p13} for partitions of $G$ into sparse subsets does not hold. For partitions of groups into thin subsets see \cite{b10}. \section{Comments}~\label{s3} $1.$ A subset $A$ of an amenable group $G$ is called {\em absolute null} if $\mu(A)=0$ for each Banach measure $\mu$ on $G$, i.e.\ each finitely additive left invariant function $\mu:\PP_G\to[0,1]$ with $\mu(G)=1$. By \cite[Theorem 5.1]{b6} and Proposition~\ref{p5}, every sparse subset of an amenable group $G$ is absolute null. \begin{Qs} Is every disparse subset of an amenable group $G$ absolute null? \end{Qs} To answer this question in the affirmative, in view of Proposition~\ref{p8}, it would be enough to show that each ultrafilter $p\in G^*$ such that $p\notin G^*p$ has an absolute null member $P\in p$. But that is not true. We sketch the corresponding counterexample. We put $G=\ZZ$ and choose inductively an injective sequence $(a_n)_{n<\w}$ in $\NN$ such that, for each $m<\w$ and $i\in\{-(m+1),...,-1,1,...,m+1\}$, the following statement holds $(\ast)\text{ } (\bigcup_{n>m}(a_n+2^{a_n}\ZZ))\cap(i+\bigcap_{n>m}(a_n+2^{a_n}\ZZ))=\varnothing$ Then we fix an arbitrary Banach measure $\mu$ on $\ZZ$ and choose an ultrafilter $q\in\ZZ^*$ such that $2^n\ZZ\in q$, $n\in\NN$, and $\mu(Q)>0$ for each $Q\in q$. Let $p\in G^*$ be a limit point of the set $\{a_n+q:n<\w\}$. Clearly, $\mu(P)>0$ for each $P\in p$. On the other hand, by $(\ast)$, the set $\ZZ+p$ is discrete so $p\notin\ZZ^*+p$. In \cite{b17} S. Solecki, for a group $G$, defined two functions $\sigma^R,\sigma^L:\PP_G\to [0,1]$ by the formulas $$\sigma^R(A)=\inf_F\sup_{g\in G}\frac{|F\cap Ag|}{|F|},\text{ }\sigma^L(A)=\inf_F\sup_{g\in G}\frac{|F\cap gA|}{|F|},$$ where $\inf$ is taken over all finite subsets of $G$. By \cite{b1} and \cite{b20}, a subset $A$ of an amenable group is absolute null if and only if $\sigma^R(A)=0$. \begin{Qs} Is $\sigma^R(A)=0$ for every sparse subset $A$ of a group $G$? \end{Qs} To answer this question positively it suffices to prove that if $\sigma^R(A)>0$ then there is $g\in G\setminus\{e\}$ such that $\sigma^R(A\cap gA)>0.$ $2.$ The origin of the following definition is in asymptology (see \cite{b16}, \cite{b19}). A subset $A$ of a group $G$ is called {\em asymptotically scattered} if, for any infinite subset $X$ of $A$, there is a finite subset $H$ of $G$ such that, for any finite subset $F$ of $G$ satisfying $F\cap H=\varnothing$, we can find a point $x\in X$ such that $Fx\cap A=\varnothing$. By \cite[Theorem 13]{b13} and Propositions~\ref{p5} and~\ref{p6}, a subset $A$ is sparse if and only if $A$ is asymptotically scattered. We say that a subset $A$ of $G$ is {\em ultrascattered} if, for any $p\in G^*$, the space $\vt_p(A)$ is scattered, i.e.\ each non-empty subset of $\vt_p(A)$ has an isolated point.
Clearly, each disparse subset is ultrascattered. \begin{Qs} How can one detect whether a given subset $A$ of $G$ is ultrascattered? Is every ultrascattered subset small? \end{Qs} We say that a subset $A$ of $G$ is {\em weakly asymptotically scattered} if, for any subset $X$ of $A$, there is a finite subset $H$ of $G$ such that, for any finite subset $F$ of $G$ satisfying $F\cap H=\varnothing$, we can find a point $x\in X$ such that $Fx\cap X=\varnothing$. \begin{Qs} Are there any relationships between ultrascattered and weakly asymptotically scattered subsets? \end{Qs} $3.$ Let $A$ be a subset of a group $G$ such that each ultracompanion $\vt_p(A)$ is compact. We show that $A$ is sparse. In view of Proposition~\ref{p6}, we may suppose that $G$ is countable. Assume the contrary: $\vt_p(A)$ is infinite for some $p\in G^*$. On the one hand, the countable compact space $\vt_p(A)$ has an injective convergent sequence. On the other hand, $G^*$ has no such sequence. $4.$ Let $X$ be a subset of a group $G$, $p\in G^*$. We say that the set $Xp$ is uniformly discrete if there is $P\in p$ such that $xP^*\cap yP^*=\varnothing$ for all distinct $x,y\in X$. \begin{Qs} Let $A$ be a disparse subset of a group $G$. Is $\vt_p(A)$ uniformly discrete for each $p\in G^*$? \end{Qs} $5.$ Let $\FF$ be a family of subsets of a group $G$ and let $A$ be a subset of $G$. We denote $\delta_{\FF}(A)=\{g\in G:gA\cap A\in \FF\}$. If $\FF$ is the family of all infinite subsets of $G$, then $\delta_{\FF}(A)$ was introduced in \cite{b14} under the name of the combinatorial derivation of $A$. For $q\in G^*$, we write $\delta_q(A)$ for $\delta_{\FF}(A)$ with $\FF=q$. Now suppose that $\vt_p(A)\neq\varnothing$, pick $q\in A^*\cap Gp$ and note that $\vt_p(A)=\vt_q(A)$. Then $\vt_q(A)=(\delta_q(A))^{-1}q$.
\section{Introduction} The two-dimensional Radon transform maps a function to its line integrals, i.e., it takes functions on $\mathbb{R}^2$ to functions on $S^1\times \mathbb{R}$, where $S^1$ denotes the unit circle. It is defined by \begin{align} \mathcal{R} f(\theta,s)=\int f(x)\delta(x\cdot\theta-s)d x. \label{radon_def} \end{align} The parameter $\theta$ represents the (normal) direction of the lines, and the parameter $s$ denotes the (signed) distance of the line to the origin. It is customary to use $\theta$ both as a point on the unit circle and as an angle, i.e., the notation $x\cdot\theta$ is used to parameterize lines in $\mathbb{R}^2$ by the relation $x\cdot\theta=x_1\cos(\theta)+x_2\sin(\theta)$. Note that each line is represented twice in this definition, since $s$ takes both positive and negative values and $\theta \in S^1$; the pairs $(\theta,s)$ and $(-\theta,-s)$ parameterize the same line. A schematic illustration of the Radon transform is depicted in Figure \ref{fig:Rexpl}, where beams propagate through an object and, after absorption, are measured by receivers. The Radon transform appears for instance in computed tomography (CT). The tomographic inversion problem lies in recovering an unknown function $f$ given knowledge of $\mathcal{R} f$. For more details about CT, see \cite{faridani2003introduction,frikel2013characterization,kak1988principles,natterer1986computerized}. \begin{figure} \centering \subfloat{\includegraphics[clip=true,trim=45mm 50mm 40mm 0mm,width=0.48\textwidth]{Rexpl.png}} \caption{Scheme of computing projections for a given angle $\theta$.} \label{fig:Rexpl} \end{figure} One of the most popular methods for inverting the Radon transform is the filtered back-projection (FBP) method \cite{natterer1986computerized}. It uses the inversion formula \begin{equation}\label{FBP} f = \mathcal{R}^\# \mathcal{W} \mathcal{R} f, \end{equation} where $\mathcal{W}$ is a convolution operator acting only on the $s$-variable, and where $\mathcal{R}^\#$ denotes the back-projection operator, which takes functions on $S^1\times \mathbb{R}$ to functions on $\mathbb{R}^2$ by integrating over all lines through a point, i.e., \begin{equation} \mathcal{R}^\# g(x) = \int_{S^1} g(\theta,x\cdot \theta)\, d\theta. \end{equation} The back-projection operator is the adjoint of the Radon transform. The convolution operator $\mathcal{W}$ can either be described as a Hilbert transform followed by a differentiation, both with respect to the variable $s$, or as a convolution operator whose transfer function is a suitably scaled version of $|\sigma|$, where $\sigma$ denotes the conjugate variable of $s$. The filtered back-projection algorithm has a time complexity of $\mathcal{O}(N^3)$, if we assume that reconstructions are made on an $N\times N$ lattice and that the numbers of samples in $s$ and $\theta$ are both $\mathcal{O}(N)$. This is because for each point $x$, integration has to be performed over all lines ($N$ directions) passing through that point. However, there are methods for fast $\mathcal{O}(N^2\log N)$ back-projection, see for instance \cite{basu2000n, danielsson1997backprojection, george2007fast}. Another class of fast methods for inversion of Radon data goes via the Fourier-slice theorem. This is a result by which the Radon data can be mapped to a polar sampling of the unknown function in the frequency domain. To recover the unknown function, interpolation-like operations (for instance the ones presented in \cite{USFFT,brandt2000fast,kalamkar2012high}) have to be applied in the frequency domain.
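To make the complexity count concrete, the following is a minimal \texttt{numpy} sketch of the direct $\mathcal{O}(N^3)$ back-projection; the function name, the $[-1,1]$ normalizations and the boundary handling are our own illustrative choices and not part of any of the cited packages.
\begin{verbatim}
import numpy as np

def back_projection_naive(sino, thetas, N):
    # sino[i, :] samples g(theta_i, s) with s equally spaced on [-1, 1];
    # returns an N x N image on [-1, 1]^2 (direct O(N^3) summation).
    Ns = sino.shape[1]
    grid = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(grid, grid)
    img = np.zeros((N, N))
    for g_row, th in zip(sino, thetas):
        s = X * np.cos(th) + Y * np.sin(th)        # s = x . theta
        idx = (s + 1.0) / 2.0 * (Ns - 1)           # fractional index in s
        inside = (idx >= 0) & (idx <= Ns - 1)      # lines hitting the grid
        i0 = np.clip(np.floor(idx).astype(int), 0, Ns - 2)
        w = np.clip(idx - i0, 0.0, 1.0)
        img += inside * ((1 - w) * g_row[i0] + w * g_row[i0 + 1])
    return img * (np.pi / len(thetas))             # quadrature weight
\end{verbatim}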
The data in the frequency domain will typically be oscillatory, as seen in Figure \ref{fig:phantom}d) and (at increased resolution) in the lower right panel of Figure \ref{fig:phantom}c). Hence, in order to interpolate data of this type, high interpolation order is required. In comparison, the data in the Radon domain is not particularly oscillatory. This is illustrated in Figure \ref{fig:phantom}b) and the upper left panel of Figure \ref{fig:phantom}c). This fact implies that interpolation methods of moderate order can be expected to produce reasonable results, which in turn means that less time has to be spent on conducting interpolation. It also gives more control over the interpolation errors, as local errors will be kept local in the Radon domain (and hence be more easily distinguishable as artifacts in the reconstruction). In this paper, we discuss how to design fast algorithms for the application of the Radon transform and the back-projection operator by using the fact that they can be expressed in terms of convolutions when represented in log-polar coordinates \cite{andersson2005fast,eggermont1983tomographic,hansen1981theory,verly1981circular}. In particular, we follow the approach suggested in \cite{andersson2005fast}. This formulation turns out to be particularly well-suited for implementation on GPUs. A major advantage of using GPUs is that the routines for linear interpolation are fast. In fact, the cost of computing a linear interpolation is the same as that of reading directly from memory \cite{govindaraju2006memory}. This feature can be utilized for constructing fast interpolation schemes. In particular, in this paper we will work with cubic interpolation on the GPU \cite{ruijters2008efficient,sigg2005fast}. \begin{figure}[t!] \begin{minipage}{.48\linewidth} \subfloat[][]{\includegraphics[width=0.49\linewidth,height=0.49\linewidth]{ph_f.png}} \subfloat[][]{\includegraphics[width=0.49\linewidth,height=0.49\linewidth]{ph_R.png}} \end{minipage} \subfloat[][]{ \begin{minipage}{.24\linewidth} \centering \hspace{-1.5cm} \includegraphics[width=0.4\linewidth,height=0.4\linewidth]{ph_Rz.png} \\\vspace{0.5cm}\hspace{1.5cm} \includegraphics[width=0.4\linewidth,height=0.4\linewidth]{ph_FRz.png} \end{minipage} } \subfloat[][]{ \begin{minipage}{.24\linewidth} \includegraphics[width=\linewidth,height=\linewidth]{ph_FR.png} \end{minipage} } \caption{\label{fig:phantom} The (modified) Shepp-Logan phantom (a) and its Radon transform (b). The panel (d) shows the one-dimensional Fourier transform of the Radon transform with respect to $s$. The regions indicated by the black squares in (b) and (d) are shown in higher resolution in (c).} \end{figure} For the sake of comparison, we provide performance and accuracy tests of the proposed method against other software packages for tomographic computations. We also conduct a performance comparison between the different methods when utilized in iterative reconstruction techniques. The iterative methods rely on applying the Radon transform and the back-projection operator several times. An advantage of keeping all computations on the GPU is that the time needed for CPU-GPU memory transfers can be reduced. \section{The Radon transform and the back-projection expressed as convolutions}\label{secrad} We recapitulate some of the main ideas of the method described in \cite{andersson2005fast}.
A key part there is the usage of log-polar coordinates, i.e., \begin{align} \begin{cases} &x_1=e^\rho\cos(\theta),\\ &x_2=e^\rho\sin(\theta), \end{cases} \label{param} \end{align} where $-\pi<\theta<\pi$. To simplify the presentation, we identify $f(\theta,\rho)$ with $f(x_1,x_2)$ if $(\theta,\rho)$ in the log-polar coordinate system corresponds to the point $x=(x_1,x_2)$ in the Cartesian coordinate system, and similarly for other coordinate transformations. By representing the distance between a line and the origin as $s=e^\rho$, and by a change of variables in \eqref{radon_def} from Cartesian to log-polar coordinates, the log-polar Radon transform can be expressed as \begin{align} \mathcal{R}_{\mathrm{lp}} f(\theta,\rho)&=\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}f(\theta',\rho')e^{\rho'}\delta\left(\cos(\theta-\theta')-e^{\rho-\rho'}\right) d\rho'd \theta' \nonumber \\ &=\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}f(\theta',\rho')e^{\rho'}\zeta \left( \theta-\theta',\rho-\rho'\right) d\rho' d \theta' . \label{logdef} \end{align} In particular, $\mathcal{R} f(\theta,s) = \mathcal{R}_{\mathrm{lp}} f(\theta,\log(s))$ for $s>0$ (which is sufficient since the information for $s<0$ is redundant). We briefly mention how to put this formula in a theoretical framework. Set $S=(-\pi,\pi)\times \mathbb{R}$ and note that, for a compactly supported smooth function $h$ on $S$, we have $$\int_{-\pi}^{\pi}\int_{-\infty}^{\infty}h(\theta,\rho)\zeta \left( \theta,\rho\right) d\rho d \theta = \int_{-\pi/2}^{\pi/2}\frac{h(\theta,\log\left( \cos(\theta)\right))}{\cos\theta} d \theta,$$ which can be written as $\int_S h d\mu$ where $\mu$ is an infinite measure on $S$. Hence, the formula extends by continuity to, e.g., all continuous compactly supported functions $h$ on $S$. It follows that \eqref{logdef} is well-defined whenever $f$ is a continuous function which vanishes in a neighborhood of the origin. As the Radon transform in the coordinate system $(\theta,\rho)$ is essentially a convolution between $f$ and the distribution $\zeta(\theta,\rho) = \delta( \cos(\theta)-e^{\rho})$, it can be rapidly computed by means of Fourier transforms. Special care has to be taken with the distribution $\zeta$, an issue we will return to in what follows. Ignoring possible difficulties with the distribution $\zeta$ for the moment, let us discuss how \eqref{logdef} can be realized by using fast Fourier transforms. It is natural to assume that the function $f$ has compact support (the object that is measured has to fit in the device that is measuring it). The compact support also implies that the Radon transform of $f$ will have compact support in the $s$-variable. However, this is not true in the log-polar setting, since $\rho \rightarrow -\infty$ as $s \rightarrow 0$. Note also that every point $x$ in the plane lies on a line through the origin. This implies that it is not possible to approximate the values of the Radon transform by using a finite convolution in log-polar coordinates if it is to be computed for all possible line directions. However, by restricting the values of $\theta$, and by making a translation so that the support of $f$ is moved away from the origin, it is in fact possible to describe a \emph{partial Radon transform} as a finite convolution, and then recover the full Radon transform by adding the contributions from various partial Radon transforms. The setup is illustrated in Figure \ref{fig:spans}.
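For concreteness, the resampling of a Cartesian image onto a log-polar grid can be sketched as follows; this is our own illustrative helper, with bilinear lookups standing in for the cubic B-spline interpolation that will be used later on.
\begin{verbatim}
import numpy as np

def to_log_polar(f, n_theta, n_rho, rho_min):
    # Sample an N x N image f on [-1,1]^2 at the log-polar points
    # x = (exp(rho) cos(theta), exp(rho) sin(theta)), bilinearly.
    N = f.shape[0]
    thetas = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(rho_min, 0.0, n_rho)
    T, R = np.meshgrid(thetas, rhos, indexing="ij")
    x1 = np.exp(R) * np.cos(T)
    x2 = np.exp(R) * np.sin(T)
    ix = (x1 + 1.0) / 2.0 * (N - 1)     # fractional column index
    iy = (x2 + 1.0) / 2.0 * (N - 1)     # fractional row index
    i0 = np.clip(np.floor(ix).astype(int), 0, N - 2)
    j0 = np.clip(np.floor(iy).astype(int), 0, N - 2)
    wx, wy = ix - i0, iy - j0
    return ((1 - wx) * (1 - wy) * f[j0, i0]
            + wx * (1 - wy) * f[j0, i0 + 1]
            + (1 - wx) * wy * f[j0 + 1, i0]
            + wx * wy * f[j0 + 1, i0 + 1])
\end{verbatim}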
\begin{figure} \centering \subfloat[][]{\includegraphics[trim = 45mm 35mm 25mm 35mm,clip=true,width=0.48\textwidth]{Rlpexpl1.png}} \subfloat[][]{\includegraphics[trim = 25mm 25mm 25mm 25mm,clip=true,width=0.48\textwidth]{Rlpexpl.png}} \vspace{0.01mm} \caption{(a) Tangent lines to the circle to determine the support of the log-polar Radon transform function; (b) three angle spans to compute partial Radon transforms.} \label{fig:spans} \end{figure} More precisely, let $\beta$ be a fixed angle and let $a_R$, $a_r,$ $L_1,~L_2$ and $L_3$ be as in Figure \ref{fig:spans}a). From the geometry we see that \begin{align} &a_R=\frac {\sin\left(\frac{\beta}{2}\right)}{1+\sin\left(\frac{\beta}{2}\right)}, \quad \mbox{ and } \quad a_r=\frac{\cos\left(\frac{\beta}{2}\right)-\sin\left(\frac{\beta}{2}\right)}{1+\sin\left(\frac{\beta}{2}\right)}. \label{aconst} \end{align} Assume for the moment that $f$ has support in the gray circle indicated in Figure \ref{fig:spans}a). In log-polar coordinates $(\theta,\rho)$ it then has support inside $\left[-\frac{\beta}{2},\frac{\beta}{2}\right]\times\left[\log (1-2a_R),0\right]$. If we restrict our attention to values of $\mathcal{R}_{\mathrm{lp}}(f)$ in the same sector $\theta\in \left[-\frac{\beta}{2},\frac{\beta}{2}\right]$, then the only nonzero values of $\mathcal{R}_{\mathrm{lp}}(f)$ will be for $\rho$ in the interval $\left[\log a_r,0\right]$. This means that $\mathcal{R}_{\mathrm{lp}}(f)(\theta,\rho)$ for these values can be computed by the finite convolution \begin{align} &\int_{-\beta/2}^{\beta/2}\int_{\log (1-2a_R) }^{0} f(\theta',\rho') e^{\rho'}\zeta \left( \theta-\theta',\rho-\rho'\right) d\rho' d \theta', \label{logdefsimple} \end{align} where $(\theta,\rho)\in\left[-\frac{\beta}{2},\frac{\beta}{2}\right]\times \left[\log a_r,0 \right].$ We now replace the integral \eqref{logdefsimple} by the periodic convolution \begin{align} &{\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} f(\theta,\rho)=\int_{-\beta}^{\beta}\int_{\log (a_r) }^{0} f(\theta',\rho') e^{\rho'} \zeta_{\mathrm{per}} \left( \theta-\theta',\rho-\rho'\right) d\rho' d \theta', \label{lograd_periodic} \end{align} where $\zeta_{\mathrm{per}}$ is the periodic extension of $\zeta$ defined on $[\log(a_r),0]\times [-\beta,\beta]$. It is readily verified that for $(\theta,\rho)\in\left[-\frac{\beta}{2},\frac{\beta}{2}\right]\times \left[\log a_r,0 \right]$ it thus holds that \begin{align*} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} f(\theta,\rho)&=\mathcal{R} f(\theta,e^\rho). \end{align*} We refer to ${\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}$ as the \emph{partial log-polar Radon transform}. Note that, in analogy with the argument following \eqref{logdef}, the formula \eqref{logdefsimple} can be written as a convolution between $f(\theta',\rho') e^{\rho'}$ and a finite measure, whereas \eqref{lograd_periodic} can be written as a convolution with a locally finite periodic measure. The above formulas are thus well defined as long as $f$ is continuous (or even piecewise continuous) in the log-polar coordinates. We refer to \cite[Chapter 11]{zemanian} for basic results about convolutions between functions and periodic distributions.
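The geometry constants \eqref{aconst} are straightforward to compute; the following small helper (with our own naming) will be reused in the sketches below.
\begin{verbatim}
import numpy as np

def span_constants(M=3):
    # Constants of eq. (aconst) for the angle span beta = pi / M.
    beta = np.pi / M
    sb, cb = np.sin(beta / 2), np.cos(beta / 2)
    aR = sb / (1 + sb)
    ar = (cb - sb) / (1 + sb)
    return beta, aR, ar
\end{verbatim}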
\begin{figure} \centering \subfloat[][]{\includegraphics[trim = 2cm 2cm 2cm 2cm,width=0.48\textwidth]{figs/Convolution_support.png}} \subfloat[][]{\includegraphics[trim = 2cm 2cm 2cm 2cm,width=0.48\textwidth]{figs/Convolution_support_bck.png}} \caption{\label{fig:conv_support} The effect on the support due to convolutions: a) Radon transform; b) Back-projection.} \end{figure} The convolution setup for the Radon transform is depicted in Figure \ref{fig:conv_support}a). The rightmost black solid curve $C$ shows $\rho=\log(\cos(\theta))$, ($-\beta<\theta<\beta$), which is the support of $\zeta$ in $\left[-\beta,\beta\right]\times \left[\log a_r,0 \right]$. Let $D$ denote the circle \begin{equation}\label{circle_D} D=\{ (x_1,x_2): (x_1-1+a_R)^2+x_2^2<a_R^2\} \end{equation} in log-polar coordinates. The black dots show the perimeter of $D$, and the gray curves indicate translations of the curve $C$ associated with the black dots on the circle, within the interval $[-\beta,\beta]$. There is a difference in grayscale for the points with $\theta$ inside the range $[-\frac{\beta}{2},\frac{\beta}{2}]$, as only these values are of interest to us. Note that the smallest $\rho$-value of the contributing part in this interval is $\log(a_r)$. Moreover, the red lines indicate parts of the translations of $C$ outside $[\log(a_r),0]\times[-\beta,\beta]$, and the blue lines show how these curves are wrapped back into the domain $[\log(a_r),0]\times [-\beta,\beta]$ by the periodic extension of $\zeta$. We see that these alias effects do not have any influence on the domain $[\log(a_r),0]\times [-\frac{\beta}{2},\frac{\beta}{2}]$. We now describe how ${\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}$ can be used to recover $\mathcal{R} f$ for a function $f$ with support in the unit circle. We split the angular variable into $M$ different parts and set $\beta=\frac{\pi}{M}$. For $m=0,1,\ldots,M-1$ let $\mathsf{T}_m : \mathbb{R}^2\rightarrow \mathbb{R}^2$ denote the change of coordinates \begin{equation} \label{Tdef} \mathsf{T}_m(x) = a_R\begin{pmatrix} \cos(m\beta) & \sin(m\beta)\\ -\sin(m\beta) & \cos(m\beta) \end{pmatrix} \begin{pmatrix} x_1\\ x_2 \end{pmatrix}+ \begin{pmatrix} 1-a_R\\ 0 \end{pmatrix} , \end{equation} and let $T_m f(x) = f(\mathsf{T}_m^{-1} x)$. Note that \begin{equation}\label{Radon_tm}\mathcal{R} f (\theta,s)=a_R^{-1}\mathcal{R} \left(T_{m} f\right) \Big(\theta-m\beta,a_Rs+(1-a_R)\cos(\theta-m\beta)\Big).\end{equation} The Radon transform can thus be computed for arbitrary $\theta$ and $0<s<1$ by using the relation \begin{equation}\label{Radon_all_pb} \mathcal{R} f(\theta,s) = a_R^{-1} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} \left(T_{m} f\right)\Big(\theta-m\beta,\log\left(a_R s +(1-a_R)\cos\left(\theta-m\beta\right)\right)\Big), \end{equation} where $m= [\theta/\beta (\bmod M)]$, $[x]$ denotes rounding to the integer closest to $x$, and $\bmod$ denotes the modulus operator. We denote the change of coordinates above by \begin{equation}\label{Sm_def} \mathsf{S}_m(\theta,s) = \Big( \theta-m\beta,\log\left( a_Rs+(1-a_R)\cos(\theta-m\beta) \right) \Big), \end{equation} so that \begin{equation} \mathcal{R} f(\theta,s)=a_R^{-1} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} \left(T_{m} f\right)(\mathsf{S}_m(\theta,s)) . \end{equation} We remark that for fixed $\theta$, the connection \eqref{Radon_tm} between the Radon data in the different domains has an affine dependence on $s$.
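In code, the coordinate changes $\mathsf{T}_m$ and $\mathsf{S}_m$ take the following form (a direct transcription of \eqref{Tdef} and \eqref{Sm_def}; vectorized variants would of course be used in practice).
\begin{verbatim}
import numpy as np

def T_m(x, m, beta, aR):
    # Eq. (Tdef): rotate by m*beta, scale by aR, translate by (1-aR, 0).
    c, s = np.cos(m * beta), np.sin(m * beta)
    Rm = np.array([[c, s], [-s, c]])
    return aR * (Rm @ x) + np.array([1.0 - aR, 0.0])

def S_m(theta, s, m, beta, aR):
    # Eq. (Sm_def): the induced change on the line parameters (theta, s).
    t = theta - m * beta
    return t, np.log(aR * s + (1.0 - aR) * np.cos(t))
\end{verbatim}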
Since the filter operator $\mathcal{W}$ acts as a convolution operator with respect to $s$, its action will in principle be the same regardless of whether the coordinate transformation $T_m$ is used or not. We use the notation $\mathcal{W}_{\mathrm{lp}}$ to denote the action of the filter operator in log-polar coordinates. The adjoint operator (back-projection) associated with the Radon transform \eqref{radon_def} can be written as \begin{align} \mathcal{R}^\# g(x) = \int_{-\infty}^{\infty}\int_{S^1} g(\theta,s)\delta(x\cdot \theta - s) \, d\theta ds, \label{def_Radast} \end{align} cf. \cite{natterer1986computerized}. It is a weighted integral of $g$ over lines passing through the point $x$, and just as for the Radon transform it can be expressed as a convolution in log-polar coordinates. We define \begin{align*} \mathcal{R}_{\mathrm{lp}}^\# g(\theta,\rho) &= \int_{-\infty}^{\infty} \int_{-\pi}^\pi g(\theta',\rho') \delta\left(e^{\rho-\rho'} \cos(\theta-\theta')-1\right) \, d\theta' d\rho' \\ &= \int_{-\infty}^{\infty} \int_{-\pi}^\pi g(\theta',\rho') \zeta^\# \left(\rho-\rho',\theta-\theta'\right) \, d\theta' d\rho'. \end{align*} It is shown in \cite{andersson2005fast} that $\mathcal{R}^\# g(x) = 2\mathcal{R}_{\mathrm{lp}}^\# g(\theta,\rho)$, where the factor 2 comes from the fact that the corresponding integration in the polar representation $(\theta,s)$ is only done in the half-plane $s>0$. The log-polar back-projection operator has the same problem as the log-polar Radon transform in dealing with $s=0$, and in a similar fashion we make use of \emph{partial back-projections} in order to avoid this problem. Because of the relation \eqref{Radon_all_pb} and the fact that the filter operator $\mathcal{W}$ can be applied to each part of the Radon data individually, it will be enough to consider partial back-projections for Radon data with $\theta\in\left[-\frac{\beta}{2}+m\beta,\frac{\beta}{2}+m\beta\right]$ according to the setup of Figure \ref{fig:spans}. By applying $T_m^{-1}$ to each of the partial back-projections and summing up the results, we recover the original function. For detailed calculations, we refer to \cite{andersson2005fast}. The idea is thus to split the Radon data into $M$ parts, where each part is transformed according to Figure \ref{fig:spans}a). For each part, the filtered data is back-projected according to \begin{equation}\label{partial_bck_1} \int_{-\infty}^{\infty} \int_{-\frac{\beta}{2}}^\frac{\beta}{2} g(\theta',\rho') \zeta^\# \left(\rho-\rho',\theta-\theta'\right) \, d\theta' d\rho'. \end{equation} Since we assumed that our original function had support inside the unit circle, we are only interested in the contributions inside the disc $D$. Since only lines with $\rho \in [\log(a_r),0]$ go through this disc, the integration in the $\rho$ variable above can be limited to $\rho \in [\log(a_r),0]$. Similarly as for the Radon transform, we now want to write this (finite) convolution as a periodic convolution. Figure \ref{fig:conv_support}b) illustrates how this can be achieved. The black solid lines show translations of the curve $\rho=-\log(\cos(\theta))$ representing the back-projection integral in the log-polar coordinates. The black dots now show the perimeter of the support of the Radon data, indicated by dark gray in the left illustration of Figure \ref{fig:conv_support}. The dark gray curves in the back-projection illustration now show the translations of $\rho=-\log(\cos(\theta))$ that will give a contribution inside the disc $D$.
The light gray curves illustrate contributions that fall outside the support of $D$. The red curves show contributions that will fall outside the range $[-\beta,\beta]\times [\log(a_r),0]$, and the blue curves show the effect when these lines are wrapped back into the domain $[-\beta,\beta]\times [\log(a_r),0]$. We note that the blue curves do not intersect the circle. Hence, we define the partial log-polar back-projection operator as \begin{equation}\label{backproj_lpp} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}^\# g(\theta,\rho) = \int_{\log(a_r)}^{0} \int_{-\frac{\beta}{2}}^\frac{\beta}{2} g(\theta',\rho') \zeta_{\mathrm{per}}^\# \left(\rho-\rho',\theta-\theta'\right) \, d\theta' d\rho', \end{equation} where $\zeta^\#_{\mathrm{per}}$ is the periodic extension of $\zeta^\#$ defined on $[\log(a_r),0]\times [-\beta,\beta]$, and note that for $(\theta,\rho)$ corresponding to points inside the domain $D$, it holds that \begin{align*} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}^\# g(\theta,\rho) = \mathcal{R}_{\mathrm{lp}}^\# g(\theta,\rho) . \end{align*} We then have that $$ 2\sum_{m=0}^{M-1} T_m^{-1} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}^\# \mathcal{W}_{\mathrm{lp}} {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} T_m f(x)= f(x) $$ for all $x$ in the unit disc. \section{Fast evaluation of the log-polar Radon transform and the log-polar back-projection} Let $h$ be a continuous function on some rectangle $R$ in $\mathbb{R}^2$, let $\mu$ be a finite measure on $R$, and let $\mu_{\mathrm{per}}$ be its periodic extension. Along the same lines as \cite[Theorem 11.6-3]{zemanian}, it is easy to see that the Fourier coefficients of their periodic convolution satisfy $$\widehat{h*\mu_{\mathrm{per}}}=\hat{h}\hat{\mu},$$ where $\hat{h}\hat{\mu}$ is the pointwise multiplication of the respective Fourier coefficients with respect to $R$. We will use this formula and the FFT to rapidly evaluate ${\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}$ \eqref{lograd_periodic} and ${\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}^\#$ \eqref{backproj_lpp}.
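In discrete form, each of the two periodic convolutions thus boils down to one pointwise multiplication between FFT coefficients, as in the following sketch (our own wrapper; the precomputation of the kernel coefficients is discussed below, and the constant cell-area factor of the quadrature is left out).
\begin{verbatim}
import numpy as np

def apply_periodic_convolution(f_weighted, zeta_hat):
    # f_weighted: samples of e^{rho'} f(theta', rho') on the periodic
    # lattice; zeta_hat: matching Fourier coefficients of the
    # periodized kernel. Returns the periodic convolution (up to a
    # constant normalization factor).
    return np.fft.ifft2(np.fft.fft2(f_weighted) * zeta_hat).real
\end{verbatim}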
We use the notations \begin{align} f(\theta,\rho) &= \sum_{k_\theta,k_\rho} \widehat{f}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) }, \label{fFTrep}\\ g(\theta,\rho) &= \sum_{k_\theta,k_\rho} \widehat{g}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) },\label{gFTrep}\\ \zeta(\theta,\rho) &= \sum_{k_\theta,k_\rho} \widehat{\zeta}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) },\label{zetaFTrep}\\ \zeta^\#(\theta,\rho) &= \sum_{k_\theta,k_\rho} \widehat{\zeta^\#}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) }.\label{zetabFTrep} \end{align} The Fourier coefficients of the two distributions $\zeta$ and $\zeta^\#$ are given by \begin{align} \widehat{\zeta}_{k_\theta,k_\rho}&=\int_{-\beta}^{\beta}\int_{\log(a_r)}^{0}\delta(\cos(\theta)-e^{\rho})e^{-2\pi i\frac{\theta k_\theta}{2\beta}}e^{-2\pi i \frac{\rho k_\rho}{-\log(a_r)}}d\rho d\theta=\nonumber\\ &=\int_{-\beta}^{\beta}\int_{\log(a_r)}^{0}\delta(\cos(\theta)-e^{\rho})e^{-2\pi i\frac{\theta k_\theta}{2\beta}} \left( e^{\rho} \right)^{-2\pi i \frac{k_\rho}{-\log(a_r)}}d\rho d\theta=\nonumber\\ &=\label{exprcoef1}\int_{-\beta}^{\beta}e^{-2\pi i\frac{\theta k_\theta}{2\beta}}(\cos(\theta))^{-2\pi i \frac{ k_\rho}{-\log(a_r)}-1}d\theta,\\ \widehat{\zeta^\#}_{k_\theta,k_\rho}&=\int_{-\beta}^{\beta}\int_{0}^{-\log(a_r)}\delta(e^{\rho}\cos(\theta) - 1)e^{-2\pi i\frac{\theta k_\theta}{2\beta}}e^{-2\pi i \frac{\rho k_\rho}{-\log(a_r)}}d\rho d\theta=\nonumber\\ &=\int_{-\beta}^{\beta}\int_{0}^{-\log(a_r)}\delta(e^{\rho}\cos(\theta) - 1)e^{-2\pi i\frac{\theta k_\theta}{2\beta}} \left( e^{\rho} \right)^{-2\pi i \frac{k_\rho}{-\log(a_r)}}d\rho d\theta=\nonumber\\ &=\label{exprcoef2}\int_{-\beta}^{\beta}e^{-2\pi i\frac{\theta k_\theta}{2\beta}}(\cos(\theta))^{-2\pi i \frac{ k_\rho}{\log(a_r)}}d\theta. \end{align} Both integrals on the right hand sides of \eqref{exprcoef1} and \eqref{exprcoef2} are of the form \begin{align}\label{Pfor} P(\mu,\alpha,\beta)=\int_{-\beta}^{\beta}e^{i\mu\theta}\cos(\theta)^\alpha d\theta, \end{align} where $\mu={-2\pi \frac{k_\theta}{2\beta}}$, with $\alpha={-2\pi i \frac{ k_\rho}{-\log(a_r)}-1}$ for \eqref{exprcoef1} and $\alpha={-2\pi i \frac{ k_\rho}{\log(a_r)}}$ for \eqref{exprcoef2}, respectively. It turns out that there is a closed form expression for the integral \eqref{Pfor}, namely \begin{align}\label{P_closed_form} &P(\mu,\alpha,\beta)= \frac{\Gamma(\frac{\alpha+1}{2})\Gamma(\frac{1}{2})\Gamma(\frac{\alpha+2}{2})}{\Gamma(\frac{\alpha+\mu}{2}+1)\Gamma(\frac{\alpha-\mu}{2}+1)}+\\ &\frac{2\mu\cos(\beta)^{\alpha+2}\sin(\mu\beta)}{(\alpha+1)(\alpha+2)}\,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+2;\cos(\beta)^2\right)-\\ &\frac{2\cos(\beta)^{\alpha+1}\cos(\beta \mu)\sin(\beta)}{(\alpha+1)} \,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+1;\cos(\beta)^2\right). \end{align} In the appendix we derive this expression. However, as it includes gamma and hypergeometric functions with complex arguments, special care has to be taken when evaluating \eqref{P_closed_form} numerically, as cancellation effects easily can cause large numerical errors.
In case suitable numerical routines for the evaluation of these special functions are not readily available, we briefly describe an alternative way to evaluate \eqref{Pfor} numerically. Note that for $\mu={-2\pi \frac{k_\theta}{2\beta}}$, the integral is well suited for evaluation by FFT, since for a fixed value of $k_\rho$ (fixed $\alpha$) we can obtain $\widehat{\zeta}_{k_\theta,k_\rho}$ and $\widehat{\zeta^\#}_{k_\theta,k_\rho}$ for all integers $k_\theta$ in a given range by evaluating the integral \eqref{Pfor} with the trapezoidal rule by means of the FFT. However, for this procedure to be accurate, we need to oversample the integral and make use of end-point corrections. The function $\cos(\theta)^\alpha$ will be oscillatory, but the oscillation is determined by the fixed parameter $\alpha$. Neglecting boundary effects, we can therefore expect the trapezoidal rule to be efficient, provided that sufficient oversampling is used. For the boundary effects, we can make use of end-point correction schemes \cite{alpert1995high}. For the computations used in this paper, we have used an oversampling factor of $8$ and an eighth-order end-point correction with weights \begin{align} 1+\frac{1}{120960} \Big[-23681, 55688, -66109, 57024, -31523, 9976, -1375\Big] \label{weightsa}. \end{align} Suppose next that we only know values of $f$ and $g$ in \eqref{lograd_periodic} and \eqref{backproj_lpp}, respectively, on an equally spaced sampling covering $[-\beta,\beta]\times[\log(a_r),0]$ in $(\theta,\rho)$, i.e., that $f$ and $g$ are known on the lattice \begin{equation}\label{rhothetalattice} \left\{ \frac{2\beta j_\theta}{N_\theta}, \frac{\log(a_r) j_\rho}{N_\rho} \right\},\quad -\frac{N_\theta}{2} \le j_\theta < \frac{N_\theta}{2}, \quad 0 \le j_\rho < N_\rho, \end{equation} with $N_\theta=2\left\lceil \frac{N_\theta}{2M} \right\rceil$. We denote these values by $f_{j_\theta,j_\rho}$ and $g_{j_\theta,j_\rho}$, respectively. In order for \eqref{lograd_periodic} and \eqref{backproj_lpp} to be meaningful, we need to have continuous representations of $f$ and $g$. This is particularly important as $\zeta$ and $\zeta^\#$ are distributions. A natural way to achieve this is to define $\widehat{f}_{k_\theta,k_\rho}$ and $\widehat{g}_{k_\theta,k_\rho}$ by the discrete Fourier transform of $f_{j_\theta,j_\rho}$ and $g_{j_\theta,j_\rho}$, i.e., let \begin{equation} \label{fhatdef} \widehat{f}_{k_\theta,k_\rho} = \begin{dcases} \sum_ { j_\theta =-\frac{N_\theta}{2} }^{\frac{N_\theta}{2}-1} \sum_{j_\rho=0}^{N_\rho-1} f_{j_\theta,j_\rho} e^{-2\pi i \left( \frac{j_\theta k_\theta}{N_\theta} + \frac{j_\rho k_\rho}{N_\rho} \right) },& \mbox{if} \quad -\frac{N_\theta}{2} \le k_\theta < \frac{N_\theta}{2}, \quad 0 \le k_\rho < N_\rho,\\ 0 & \mbox{otherwise,}\\ \end{dcases} \end{equation} and \begin{equation} \label{ghatdef} \widehat{g}_{k_\theta,k_\rho} = \begin{dcases} \sum_ { j_\theta =-\frac{N_\theta}{2} }^{\frac{N_\theta}{2}-1} \sum_{j_\rho=0}^{N_\rho-1} g_{j_\theta,j_\rho} e^{-2\pi i \left( \frac{j_\theta k_\theta}{N_\theta} + \frac{j_\rho k_\rho}{N_\rho} \right) },& \mbox{if} \quad -\frac{N_\theta}{2} \le k_\theta < \frac{N_\theta}{2}, \quad 0 \le k_\rho < N_\rho,\\ 0 & \mbox{otherwise.}\\ \end{dcases} \end{equation} We can then use \eqref{fFTrep} and \eqref{gFTrep} \textit{to define} continuous representations of $f$ and $g$.
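Returning to the kernel coefficients: the FFT-based evaluation of \eqref{Pfor} described above can be sketched as follows. This is a bare-bones version under our own naming; the end-point corrections \eqref{weightsa} are omitted, a plain (uncorrected) equispaced sum is used, and only the coefficients for non-negative $k_\theta$ are returned.
\begin{verbatim}
import numpy as np

def zeta_hat_row(k_rho, beta, a_r, n_ktheta, oversamp=8):
    # Approximate P(mu, alpha, beta) of eq. (Pfor) for one fixed k_rho
    # (alpha = -2 pi i k_rho / (-log a_r) - 1, i.e. the zeta case) and
    # all k_theta = 0, ..., n_ktheta - 1 at once by an oversampled FFT.
    alpha = -2j * np.pi * k_rho / (-np.log(a_r)) - 1.0
    n = oversamp * n_ktheta
    t = -beta + 2.0 * beta * np.arange(n) / n      # grid on [-beta, beta)
    vals = np.cos(t).astype(complex) ** alpha
    F = np.fft.fft(vals)
    k = np.arange(n)
    # With t_j = -beta + 2 beta j / n the phase e^{-i pi k t / beta}
    # splits into (-1)^k times the standard FFT phase e^{-2 pi i jk/n}.
    coeffs = (2.0 * beta / n) * np.exp(1j * np.pi * k) * F
    return coeffs[:n_ktheta]
\end{verbatim}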
Using the representations \eqref{fFTrep} and \eqref{gFTrep}, the values of \eqref{lograd_periodic} and \eqref{backproj_lpp} are also well defined, namely $$ {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}} f(\theta,\rho) = \sum_{k_\theta,k_\rho} \widehat{f}_{k_\theta,k_\rho} \widehat{\zeta}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) }, \label{Rad_lp_discrete}\\ $$ and $$ {\mathcal{R}_\mathrm{lp}^{\mathrm{p}}}^\# g(\theta,\rho) =\sum_{k_\theta,k_\rho} \widehat{g}_{k_\theta,k_\rho} \widehat{\zeta^\#}_{k_\theta,k_\rho} e^{2\pi i \left( \frac{\theta k_\theta}{2\beta} + \frac{\rho k_\rho}{-\log{a_r}}\right) }\label{Rad_bck_lp_discrete}. $$ In the case where the two transforms are to be evaluated at $(\theta,\rho)$ on the lattice \eqref{rhothetalattice}, the corresponding sums can be rapidly evaluated by using the FFT. \section{Sampling rates} There are three different cases for which it is natural to use equally spaced discretization: for the representation of $f$ in Cartesian coordinates $(x_1,x_2)$ covering the support of $f$; for the polar representation $(\theta,s)$ of the sinograms $\mathcal{R}(f)$; and for the log-polar coordinates $(\theta,\rho)$ for the evaluation of $\mathcal{R}_{\mathrm{lp}}$ and $\mathcal{R}_{\mathrm{lp}}^{\#}$. We will use a rectangular grid in all three coordinate systems and interpolate data between them as we work with the different domains. In this section we derive guidelines for how to choose the discretization parameters. These will be based on the assumption that $\hat{f}$ is ``essentially supported'' in a disc with radius $N/2$, in the sense that contributions from the complement of this disc can be ignored without affecting the accuracy of our calculations. Due to the uncertainty principle, $\hat {f}$ cannot have its support included in this disc, since $f$ itself is supported in a disc with radius $1/2$, but the assumption may work quite well in practice and is therefore still convenient to use for deriving sampling rates.\footnote{More strict results can be obtained by introducing notions such as numerical support. Recall that any compactly supported $L^1$ function has compact numerical support in both $x$ and $\xi$ by the Riemann-Lebesgue lemma. For instance, Gaussians have (small) compact numerical support in both the spatial and the frequency domain. The presentation would quickly become substantially more technical, and we therefore follow the signal processing practice and use Nyquist sample rates despite not having infinitely long samples.} We also base our arguments on refinements of the Nyquist sampling rate, more precisely on the Paley-Wiener-Levinson theorem. In the spatial variables $(x_1,x_2)$, the Nyquist sampling rate (corresponding to the assumption on the support of $\hat{f}$) is $1/N$. This leads us to cover the support of $f$ with a grid of size $N\times N$. We use \begin{equation}\label{Xgrid} X= \left\{ \frac{j_1}{N}, \frac{j_2}{N}\right\}, \quad -\frac{N}{2} \le j_1,j_2 < \frac{N}{2}.
\end{equation} To derive recommendations for the sampling in the polar coordinates $(\theta,s)$ (for the entire $\mathcal{R} f$), recall the Fourier slice theorem, which states that \begin{align*} \int_{-\infty}^\infty \mathcal{R} f (\theta,s) e^{-2\pi i s \sigma} \, ds = \int_{-\infty}^\infty \int_{-\infty}^\infty f (s\theta+t\theta^\perp) e^{-2\pi i s \sigma} \, dt \, ds = \int_{\mathbb{R}^2} f (x) e^{-2\pi i (\sigma \theta) \cdot x } \, dx =\widehat{f}(\sigma \theta). \end{align*} Thus $\mathcal{R} f(\theta,s)=\mathcal{F}^{-1}_{\sigma\rightarrow s}(\hat{f}(\sigma \theta))$, and hence the Nyquist sampling rate for $s$ is $\triangle s=\frac{1}{N}$, yielding that $N_s = N$. Let $\triangle {\theta_\mathrm{p}}$ denote the angular sampling rate in the polar coordinate system. Since $f$ is supported in the unit disc, the Nyquist sampling rate in the frequency domain $\xi$ is 1 (on a rectangular infinite grid). However, the multidimensional version of the Paley-Wiener-Levinson theorem roughly says that it is sufficient to consider an irregular set of sample points whose maximal internal distance between neighbors is 1. Since we have assumed that the values of $\widehat{f}(\xi)$ for $|\xi| >\frac{N}{2}$ are negligible, this leads us to choose $\triangle {\theta_\mathrm{p}} $ and $\triangle \sigma$ (the latter will not be used) so that the polar grid-points inside this circle have a maximum distance of 1. It follows that \begin{equation}\label{delta_theta_org} \triangle {\theta_\mathrm{p}} = \frac{2}{N}. \end{equation} Since we want to cover an angle span of $[0,\pi]$, this leads to $N_\theta \approx \frac{\pi}{2} N$. We denote the polar grid by \begin{equation}\label{Polargrid} \Sigma= \left\{ j_\theta \triangle {\theta_\mathrm{p}}, j_s \triangle s \right\}, \quad 0 \le j_\theta < N_\theta, \quad -\frac{N}{2} \le j_s < \frac{N}{2}. \end{equation} Practical tomographic measurements therefore typically have a ratio $\frac{N_\theta}{N_s} \approx 1.5$ between the sampling rates in the $\theta$- and $s$-variables. We refer to \cite{kak1988principles, natterer1986computerized} for more details, and proceed to discuss the sampling in the log-polar coordinates. \begin{figure} \centering \subfloat[][]{\includegraphics[trim = 0mm 0mm 0mm 0mm,clip=true,width=0.47\textwidth]{osrho.png}}\hspace{2mm} \subfloat[][]{\includegraphics[trim = 34mm 20mm 4mm 0mm,clip=true,width=0.5\textwidth]{ostheta.png}} \caption{(a) Log-polar grid. Samples in the $\rho$ variable are chosen in order not to lose the accuracy of the measured data. (b) The bandwidth of a partial back-projection function. } \label{fig:gridslp} \end{figure} To distinguish between the coordinate sampling parameters, we use $\triangle {\theta_{\mathrm{lp}}}$ for the angular sampling rate in the log-polar coordinates. Recall that we wish to accurately represent functions of the form $T_m f$, which is a rotation, dilation (with factor $2a_R$) and translation of $f$. The support of $T_m f$ lies in the grey circle of Figure \ref{fig:spans} and its essential frequency support is then inside the disc of radius $\frac{N}{4a_R}$. Figure \ref{fig:gridslp} illustrates the setup: the black lines indicate equally spaced samples in ${\theta_{\mathrm{lp}}}$; the blue curves indicate equally spaced sampling in ${s_{\mathrm{lp}}}$; and the red dashed curves indicate equally spaced sampling in $\rho$.
Note that the maximum distance between points in the $x_1$ direction occurs when $s=1$ or $\rho=0$. It is thus clear that $\triangle {s_{\mathrm{lp}}}$ should equal $\triangle x_1=\frac{2 a_R}{N}$, whereas $\triangle \rho$ is determined by \begin{align*} \max_{\rho \in [\log(a_r),0]} e^{\rho}-e^{\rho-\triangle \rho} = \max_{\rho \in [\log(a_r),0]} e^{\rho}\left(1- e^{-\triangle \rho}\right) \le\triangle x_1. \end{align*} As the largest distance occurs when $\rho=0$, it follows that $$ \triangle \rho \le -\log\left(1- \frac{2a_R}{N}\right), $$ and consequently, since the total distance that is to be covered is $-\log(a_r)$, that \begin{align}\label{vrho} &N_\rho\ge\left\lceil\frac{\log(a_r)}{\log(1-\frac{2a_R}{N})}\right\rceil, \end{align} where the notation $\lceil x \rceil$ denotes the nearest integer greater than or equal to $x$. For the determination of the sampling rate in the angular variable for the representation of $T_m f$ in the disc $D$, we have that $$ \triangle {\theta_{\mathrm{lp}}}\approx \sin(\triangle {\theta_{\mathrm{lp}}}) \le \frac{2 a_R}{N}. $$ Hence, we introduce \begin{equation}\label{Lopolargrid_f} \Omega_{\mathrm{lp}}= \left\{ j_\theta \triangle {\theta_{\mathrm{lp}}} , j_\rho \triangle \rho \right\}, \quad -\frac{\beta}{2 \triangle {\theta_{\mathrm{lp}}}} \le j_\theta < \frac{\beta}{2 \triangle {\theta_{\mathrm{lp}}}}, \quad -N_\rho < j_\rho \le 0, \end{equation} for the representation of $T_m f$ in $D$. The representation of the partial Radon transform of $T_m f$ can make do with a reduced sampling rate compared to \eqref{Lopolargrid_f}. As the Radon transform is applied as a convolution on a log-polar grid (i.e., by a Fourier multiplier), the higher frequencies in the $\theta$-direction will not be needed. Hence, we can apply a low-pass filter in the $\theta$-direction, and apply the FFT operation on the grid \begin{equation}\label{Lopolargrid_polar} \Omega_\mathrm{p}= \left\{ j_\theta \triangle {\theta_\mathrm{p}}, j_\rho \triangle \rho \right\}, \quad -\left\lfloor \frac{N_\theta}{2 M} \right\rfloor \le j_\theta < \left\lfloor \frac{N_\theta}{2 M} \right\rfloor, \quad -N_\rho < j_\rho \le 0, \end{equation} for the computation of the partial Radon transform of $T_m f$. The grid $\Omega_\mathrm{p}$ will also be useful when resampling Radon data in log-polar coordinates. \section{Interpolation} As mentioned in the previous section, it is natural to use equally spaced sampling in the Cartesian, polar and log-polar coordinate systems, respectively. We typically want to reconstruct data on an equally spaced Cartesian grid; the tomographic data is sampled in equally spaced polar coordinates; and both the Radon transform and the back-projection can be rapidly evaluated by FFT when sampled on an equally spaced log-polar grid. There are several ways to interpolate between these coordinate systems. In this work we will make use of cubic (cardinal) B-spline interpolation. We will discuss how to incorporate some of the interpolation steps in the FFT operations, and also discuss how the B-spline interpolation can be efficiently implemented on GPUs. The cubic cardinal B-spline is defined as \begin{align*} B(x)= \begin{cases} (3|x|^3-6|x|^2+4)/6, & 0\le|x|<1,\\ (-|x|^3+6|x|^2-12|x|+8)/6, & 1\le|x|<2,\\ 0, & |x| \ge 2. \end{cases} \end{align*} This function is designed so that it is non-negative, piecewise polynomial, and $C^2$ at the knots $|x|=0,1,2$.
Note that at integer points it holds that \begin{equation}\label{Bcoeff} B(j) = \begin{cases} \frac{2}{3} & \mbox{if } j=0, \\ \frac{1}{6} & \mbox{if } |j|=1, \\ 0 & \mbox{otherwise.} \end{cases} \end{equation} When used as a filter, it will smooth out information and is thus acting as a low-pass filter. Consequently, it cannot be used directly for interpolation. The Fourier series given by the coefficients in \eqref{Bcoeff}, which we denote by $\widehat{B}$, is given by \begin{equation}\label{Bhat} \widehat{B}(\xi) = \sum_{j} B(j) e^{-2\pi i j \xi} = \frac{1}{6} \left( e^{2\pi i \xi} + 4 + e^{-2\pi i \xi}\right) = \frac{2}{3} + \frac{1}{3} \cos(2\pi \xi). \end{equation} Suppose that equally spaced samples $f_k$ of a function $f$ (in one variable) are available. We want to recover values of $f$ at arbitrary points $x$ using \begin{equation}\label{Bsplineinterp} f(x)=\sum_{k} (Q f)_k B\left(Nx-k\right). \end{equation} Since $B$ has short support, only values of $(Q f)_k$ for $k\approx N x$ will contribute in this sum. The operator $Q$ is a pre-filter compensating for the fact that convolution with $B$ suppresses high frequencies; it boosts the high frequencies in the samples $f_k$. It can be computed in different ways. Perhaps the most direct way is to define $Q$ in the Fourier domain (by the discrete Fourier transform), where it essentially becomes division by $\widehat{B}$ (upon scaling and sampling). In this case, it is easy to see that the convolution with $B$ and the pre-filter operation will cancel each other at the points $x=\frac{j}{N}$, i.e., the original function is recovered at the sample points, which is a requisite for any interpolation scheme. As we will compute the Radon transform and the back-projection by means of FFT in log-polar coordinates, we can in some steps incorporate the pre-filter step $1/\widehat{B}$ in the Fourier domain, at virtually no additional cost. However, not all of the pre-filter operations can be incorporated in this way. While the pre-filter easily can be applied by separate FFT operations, we want to limit the total number of FFT operations, as these will be the most time-consuming part of the implementations we propose. As an alternative to applying the pre-filter in the Fourier domain, it can be applied by recursive filters. In \cite{ruijters2010gpu} these operations are derived by using the $Z$-transform. It turns out that if we define $$ (Qf^+)_k=6 f_k+(\sqrt{3}-2) (Qf^+)_{k-1}, $$ then \begin{equation}\label{Qfrecursive} (Qf)_k=(\sqrt{3}-2)((Qf)_{k+1}-(Qf^+)_k), \end{equation} cf. \cite[equations (12,13)]{ruijters2010gpu}, where the boundary conditions on $Qf$ and $Qf^+$ are given in \cite[equations (14,15)]{ruijters2010gpu}. These filters can be efficiently implemented on GPUs. Two-dimensional prefiltration can be done in two steps, one in each dimension. We will use the notation $Q f$ for the pre-filtering also in this case. On a GPU this implies doing operations on rows and columns separately. However, there are highly optimized routines for transposing data, which means that the prefiltration can be made to act only on column data. This is done to improve the so-called memory coalescing and GPU cache performance \cite{wilt2013cuda}. Memory coalescing refers to combining multiple memory accesses into a single transaction. In this way the GPU threads run simultaneously, and substantially increased cache hit ratios are obtained.
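A minimal one-dimensional version of the recursive pre-filter reads as follows; the boundary initializations below are simple approximations of those in \cite[equations (14,15)]{ruijters2010gpu}, and the code is meant as a sketch rather than as a faithful reproduction of the GPU implementation.
\begin{verbatim}
import numpy as np

def bspline_prefilter_1d(f):
    # Recursive cubic B-spline pre-filter Q (one dimension).
    p = np.sqrt(3.0) - 2.0                  # filter pole, |p| < 1
    n = len(f)
    cp = np.empty(n)
    cp[0] = 6.0 * f[0] / (1.0 - p)          # approximate causal init
    for k in range(1, n):                   # (Qf+)_k = 6 f_k + p (Qf+)_{k-1}
        cp[k] = 6.0 * f[k] + p * cp[k - 1]
    c = np.empty(n)
    c[-1] = p / (p - 1.0) * cp[-1]          # approximate anticausal init
    for k in range(n - 2, -1, -1):          # (Qf)_k = p((Qf)_{k+1}-(Qf+)_k)
        c[k] = p * (c[k + 1] - cp[k])
    return c
\end{verbatim}
Two-dimensional pre-filtering is obtained by applying this routine along one dimension at a time (or, as described above, along columns only, with a transpose in between).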
Let us now turn our focus to the convolution step in \eqref{Bsplineinterp}. Let \begin{align}\label{floorx} &k=\lfloor N x \rfloor,\\ &\nonumber\alpha=Nx-\lfloor Nx \rfloor. \end{align} The sum \eqref{Bsplineinterp} then reduces to \begin{align} &f(x)=w_0(\alpha) (Q f)_{k-1}+w_1(\alpha)(Q f)_{k}+w_2(\alpha)(Q f)_{k+1}+w_3(\alpha)(Q f)_{k+2},\text{ where }\label{wsumn}\\ &w_0(\alpha)=B(\alpha+1),\quad w_1(\alpha)=B(\alpha),\quad w_2(\alpha)=B(\alpha-1),\quad w_3(\alpha)=B(\alpha-2).\nonumber \end{align} We now discuss how this sum can be evaluated using linear interpolators. As mentioned previously, linear interpolation is executed fast on GPUs. In \cite{sigg2005fast} it is shown how this can be utilized to conduct efficient cubic interpolation. The cubic interpolation is expressed as two weighted linear interpolations, instead of four weighted nearest neighbor look-ups, yielding $2^d$ operations instead of $4^d$ for conducting cubic interpolation in $d$ dimensions. We briefly recapitulate the approach taken in \cite{ruijters2008efficient,sigg2005fast}. Given coefficients $(Qf)_k$, let $ Qf_{\mathrm{lin}}$ be the linear interpolator \begin{align*} Qf_{\mathrm{lin}}(t) &=(1-(t-\lfloor t \rfloor))(Qf)_{\lfloor t \rfloor}+(t-\lfloor t \rfloor) (Qf)_{\lfloor t \rfloor+1}. \end{align*} The sum \eqref{wsumn} can then be written as \begin{align} f(x)&=(w_0(\alpha)+w_1(\alpha)) Qf_{\mathrm{lin}}\left(k-1+\frac{w_1(\alpha)}{w_0(\alpha)+w_1(\alpha)}\right) \nonumber\\ &+(w_2(\alpha)+w_3(\alpha)) Qf_{\mathrm{lin}}\left(k+1+\frac{w_3(\alpha)}{w_2(\alpha)+w_3(\alpha)}\right). \label{cubic_interp_reduced} \end{align} The evaluation of the function $Qf_{\mathrm{lin}}$ can be performed by hard-wired linear interpolation on the GPU. In modern GPU architectures the so-called texture memory (cached on a chip) provides effective bandwidth by reducing memory requests to the off-chip DRAM. The two most useful features of this kind of memory with regard to conducting B-spline interpolation are \begin{enumerate} \item The texture cache is optimized for 2D spatial locality, giving best performance to GPU threads that read texture addresses that are close together. \item Linear interpolation of neighboring values can be performed directly in the GPU's texture hardware, meaning that the cost of computing the interpolation is the same as that of reading data directly from memory. \end{enumerate} This implies that the cost for memory access in \eqref{cubic_interp_reduced} will be two reads instead of four, as only two calls of $Qf_{\mathrm{lin}}$ are made; in two dimensions the corresponding reduction from 16 memory access operations to 4 gives a significant improvement in computational speed. \section{Algorithms} We now have the necessary ingredients to present detailed descriptions of how to rapidly evaluate the Radon transform and the back-projection operator by FFT in log-polar coordinates. In the algorithms below, we let $\widehat{B}_{k_\theta,k_\rho}$ denote the values of the two-dimensional counterpart of \eqref{Bhat}, scaled to represent the sampling on $(\theta,\rho)\in\left[-\frac{\beta}{2},\frac{\beta}{2}\right]\times \left[\log a_r,0 \right]$.
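Before stating the algorithms, we illustrate \eqref{cubic_interp_reduced} in code. This is a scalar CPU sketch under our own naming; on the GPU, the two calls to the linear interpolator correspond to two hard-wired texture fetches.
\begin{verbatim}
import math

def B(t):
    # Cubic cardinal B-spline.
    t = abs(t)
    if t < 1.0:
        return (3.0 * t**3 - 6.0 * t**2 + 4.0) / 6.0
    if t < 2.0:
        return (2.0 - t)**3 / 6.0
    return 0.0

def cubic_eval(c, t):
    # Evaluate sum_k c[k] B(t - k) via two linear lookups,
    # eq. (cubic_interp_reduced); c holds pre-filtered coefficients
    # (Qf)_k and t is the position in index units (t = N x).
    def lin(u):                        # mimics the GPU texture fetch
        k = math.floor(u)
        a = u - k
        return (1.0 - a) * c[k] + a * c[k + 1]
    k = math.floor(t)
    a = t - k
    w0, w1, w2, w3 = B(a + 1.0), B(a), B(a - 1.0), B(a - 2.0)
    return ((w0 + w1) * lin(k - 1 + w1 / (w0 + w1))
            + (w2 + w3) * lin(k + 1 + w3 / (w2 + w3)))
\end{verbatim}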
\begin{algorithm} \caption{Fast Radon transform} \label{Radon_alg} \begin{algorithmic}[1] \State Given $f$ sampled at $X$ \eqref{Xgrid}, compute $Qf$ by \eqref{Qfrecursive} \For{$m=0, \dots, M-1$} \State Resample $T_m f$ at $\Omega_\mathrm{lp}$ \eqref{Lopolargrid_f} by \eqref{Bsplineinterp} \State Downsample from $\Omega_\mathrm{lp}$ \eqref{Lopolargrid_f} to $\Omega_\mathrm{p}$ \eqref{Lopolargrid_polar} \State Multiply the result by $e^\rho$ \State Apply the log-polar Radon transform with pre-filtering incorporated, i.e., compute $\widehat{f}_{k_\theta,k_\rho}$ from \eqref{fhatdef}, and evaluate $$ \sum_{k_\theta,k_\rho} \widehat{f}_{k_\theta,k_\rho} \frac{\widehat{\zeta}_{k_\theta,k_\rho}}{\widehat{B}_{k_\theta,k_\rho}} e^{2\pi i \left( j_\theta k_\theta \frac{\triangle {\theta_\mathrm{p}}}{2\beta} + \frac{j_\rho k_\rho}{N_\rho}\right) } $$ by using FFT. \State Resample from $\mathsf{S}_m^{-1} \Omega_\mathrm{p}$ \eqref{Lopolargrid_polar} to $\Sigma$ \eqref{Polargrid} by \eqref{Bsplineinterp}. \EndFor \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Fast back-projection} \label{BP_alg} \begin{algorithmic}[1] \State Given $g$ sampled at $\Sigma$ \eqref{Polargrid}, compute $Qg$ by \eqref{Qfrecursive} \For{$m=0, \dots, M-1$} \State Resample $g\circ\mathsf{S}_m^{-1}$ at $\Omega_\mathrm{p}$ \eqref{Lopolargrid_polar} by \eqref{Bsplineinterp} \State Apply the log-polar back-projection with pre-filtering incorporated, i.e., compute $\widehat{g}_{k_\theta,k_\rho}$ from \eqref{ghatdef} and evaluate $$ \sum_{k_\theta,k_\rho} \widehat{g}_{k_\theta,k_\rho} \frac{\widehat{\zeta^\#}_{k_\theta,k_\rho}}{\widehat{B}_{k_\theta,k_\rho}} e^{2\pi i \left( j_\theta k_\theta \frac{\triangle {\theta_\mathrm{p}}}{2\beta} + \frac{j_\rho k_\rho}{N_\rho}\right) } $$ by using FFT. \State Resample from $\mathsf{T}_m^{-1} \Omega_\mathrm{p}$ \eqref{Lopolargrid_polar} to $X$ \eqref{Xgrid} by \eqref{Bsplineinterp} to obtain partially back-projected data. \EndFor \State Sum up the $M$ partial back-projections. \end{algorithmic} \end{algorithm} Let us end with a few remarks on time complexity. The most time-consuming parts of both algorithms are the convolutions that are implemented by FFT. In total, $2 M$ FFT operations need to be computed (counting both forward and backward FFTs), and each operation is done on a grid of size $ \frac{2 N_\theta}{M} \times N_\rho$. For $M=3$, it follows from \eqref{vrho} that $$ N_\rho \approx -\frac{N \log(a_r)}{2a_R} \approx 2.1 N, $$ and $N_\rho$ is monotonically decreasing for increasing $M$. We thus use approximately twice as many samples in $\rho$ as were originally used for the sampling of $s$ ($N_s=N$), and in the angular variable we also need twice as many samples in order to avoid aliasing effects. In total we need to use an oversampling factor of about 4 in the FFT operations. Note that this is the same oversampling that is generally needed for computing the convolution of two functions by FFT if aliasing is to be avoided. The total cost for applying the Radon transform and the back-projection operator is thus the same as would be expected for a generic convolution. In addition to the FFT operations, interpolation and one-dimensional filter operations are needed in the implementations. In our simulations these operations typically take about 25\% of the total time. \section{Performance and accuracy tests} Let us start by briefly discussing the filters used in the filtered back-projection \eqref{FBP}.
This discussion is included in order to better interpret the errors obtained when comparing different methods. For theoretically perfect reconstruction (with infinitely dense sampling), the filter $\mathcal{W}$ in \eqref{FBP} is given by \begin{figure} \centering \subfloat{\includegraphics[clip=1,trim = 12mm 2mm 12mm 25mm,clip=true,width=0.5\textwidth]{filtersall.png}} \caption{Filters for Radon transform inversion.} \label{fig:filters} \end{figure} \begin{equation} \label{ramp} \widehat{w}_{\mathrm{ramp}}(\sigma) = |\sigma|. \end{equation} This filter is sometimes referred to as the \emph{ramp filter}. If the sampling rate is insufficient in relation to the frequency content of $f$ (or of the object on which the measurements are conducted), it can be desirable to suppress the highest frequencies in order to localize the effects of the insufficient sampling. There is a relation between one-dimensional filtering in the $s$-direction of Radon data and a two-dimensional convolution in the spatial domain \cite[p.102]{natterer1986computerized}. It can be explained by the Fourier slice theorem, which describes how Radon data can be converted to a polar sampling in the frequency domain by taking one-dimensional Fourier transforms in the $s$-direction. The application of the one-dimensional filter will then correspond to a change in amplitude (and possibly phase) along the lines (indicated by dots with the same angles) in Figure \ref{fig:gridslp}b). As there is no information outside the circle with radius $N/2$, the action of the ramp filter will be equivalent to applying a two-dimensional convolution to the original function $f$ using a two-dimensional filter with Fourier transform \begin{equation}\label{Framp} \widehat{W}_{\mathrm{ramp}}(\xi) = \begin{cases} 1 & \mbox{if } |\xi| \le \frac{N}{2}, \\ 0 & \mbox{otherwise}. \end{cases} \end{equation} We see that if $f$ contains more high-frequency information than that prescribed by the sampling rate $N$, then the sharp cutoff in \eqref{Framp} can yield artifacts, and in the presence of noise in the Radon sampling, the high-frequency boosting of \eqref{ramp} will amplify the noise. Replacing the ramp filter \eqref{ramp} with a filter that goes smoothly to zero at the highest (discrete) frequencies will thus yield an image that is slightly smoother, but on the other hand an image with suppressed high-frequency noise and artifacts due to incomplete sampling. Sometimes, the ramp filter is modified so that it does not reach zero but only reduces the high-frequency amplitudes. Two common choices of filters are the cosine and the Shepp-Logan filters, defined by \begin{align} &\label{cosfilt}\widehat{w}_{\cos}(\sigma) = \begin{cases}|\sigma|\cos\left(\frac{2 \pi \sigma}{N}\right) & \mbox{if } |\sigma| \le \frac{N}{2},\\ 0 & \mbox{otherwise}. \end{cases} \\ &\widehat{w}_{\mathrm{SL}}(\sigma) = \begin{cases}|\sigma| \sinc\left(\frac{\sigma}{N}\right) & \mbox{if } |\sigma| \le \frac{N}{2},\\ 0 & \mbox{otherwise}. \end{cases} \label{SLfilt} \end{align} The cutoff is made above the sampling bandwidth, as this illustrates the practical effect that the filters have on measured data. The three filters are illustrated in the frequency domain in Figure \ref{fig:filters}. The two-dimensional filters associated with these choices have the Fourier representations \begin{equation*} \widehat{W}_{\cos}(\xi)= \begin{cases} \cos\left(\frac{2 \pi|\xi|}{N}\right) & \mbox{if } |\xi| \le \frac{N}{2}, \\ 0 & \mbox{otherwise}.
\end{cases}\label{filters2d} \end{equation*} and \begin{equation*} \widehat{W}_{\mathrm{SL}}(\xi)= \begin{cases} \sinc \left(\frac {|\xi|}{N} \right) & \mbox{if } |\xi| \le \frac{N}{2}, \\ 0 & \mbox{otherwise}, \end{cases} \end{equation*} respectively. As $\widehat{w}_{\cos}$ goes to zero towards the highest frequencies, we can expect smaller errors when using this filter compared to the others. On the other hand, the highest frequencies are suppressed, and the resulting reconstructions will look slightly less sharp. To illustrate the accuracy of the suggested implementation, we conduct some examples on the \emph{Shepp-Logan} phantom \cite{shepp1974fourier}. We use the modified version introduced in \cite[Appendix B.2]{toft1996radon}. The function used ($f$) is illustrated in Figure \ref{fig:phantom}a). The phantom consists of linear combinations of characteristic functions of ellipses, and its support is inscribed in the unit circle. Since the Radon transform is a linear operator, $\mathcal{R} f$ is a linear combination of Radon transforms of characteristic functions of ellipses. Since the Radon transform of the characteristic function of a circle can be computed analytically, analytic expressions are available for the Radon transform of the Shepp-Logan phantom by applying transform properties of shifting, scaling and rotation. This Radon transform is depicted in Figure \ref{fig:phantom}b). Not only the high-frequency discontinuity caused by the filter can cause artifacts, but also the discontinuity of the derivative at $\sigma=0$. To avoid artifacts from this part, we apply end-point trapezoidal corrections; the effect of this correction can be seen around $\sigma=0$ in Figure \ref{fig:filters}. Our aim is to eliminate as much of the errors as possible, in order to isolate the errors introduced by the resampling between the different coordinate systems used in the proposed method. In Figure \ref{fig:errors} we show some reconstruction results from the filtered back-projection using different methods and different filters. For the sake of quality comparison, we use the results from the ASTRA Tomography Toolbox \cite{palenstijn2013astra}, the NiftyRec Tomography Toolbox \cite{pedemonte2012niftyrec} and the MATLAB image processing toolbox\textsuperscript{TM}. The ASTRA Toolbox uses GPU implementations, and the implementation is described in \cite{palenstijn2011performance,xu2010high}. The NiftyRec Toolbox comes with both GPU and CPU implementations. The toolbox is described in \cite{pedemonte2010gpu,pedemonte20144}. For the comparisons we have used ASTRA v1.5, NiftyRec v2.0.1, and MATLAB v2014b. Both ASTRA and NiftyRec provide MATLAB scripts demonstrating how to call their routines. For the MATLAB tests, we have used the functions \verb!radon! and \verb!iradon!. Both these functions call compiled routines. To test the back-projection algorithm from the ASTRA toolbox, we based our test on the provided routine \verb!s014_FBP!. The iterative tests in the next section are based on the script \verb!s007_3d_reconstruction!. The comparison against the NiftyRec toolbox was done by using the back-projection part of the provided demo \verb!tt_demo_mlem_parallel! for parallel beam transmission tomography. The package provides the option to use either GPU or CPU routines. We provide timings for both cases. We use the setup of \verb!tt_demo_mlem_parallel! for the iterative tests described in the next section.
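For reference, the three frequency responses above can be assembled on a discrete FFT grid as follows. This is our own small helper: the identification of the bandwidth with the number of samples is an illustrative choice, and \texttt{numpy}'s \texttt{sinc} is the normalized $\sin(\pi x)/(\pi x)$.
\begin{verbatim}
import numpy as np

def fbp_filter(Ns, kind="ramp"):
    # Frequency responses following eqs. (ramp), (cosfilt) and
    # (SLfilt) on the integer FFT frequency grid; N = Ns here.
    sigma = np.fft.fftfreq(Ns) * Ns
    N = float(Ns)
    w = np.abs(sigma)
    if kind == "cosine":
        w = w * np.cos(2.0 * np.pi * sigma / N)
    elif kind == "shepp-logan":
        w = w * np.sinc(sigma / N)
    w[np.abs(sigma) > N / 2.0] = 0.0
    return w

# usage: filtered = np.fft.ifft(np.fft.fft(g, axis=-1)
#                               * fbp_filter(g.shape[-1], "cosine")).real
\end{verbatim}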
\begin{figure} \hspace{0.12\linewidth} Ramp \hspace{0.19\linewidth} Shepp-Logan \hspace{0.18\linewidth} Cosine \vspace{2.5mm} \subfloat{\begin{turn}{90}\,\quad\quad\quad Log-polar \end{turn}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errlpram-lak3n.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errlpshepp-logan3n.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errlpcosine3n.png}} \vspace{2.5mm} \subfloat{\begin{turn}{90}\quad\quad\quad\quad ASTRA\end{turn}}\hspace{1.8mm} \subfloat{\includegraphics[width=0.296\linewidth]{errastraram-lak.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errastrashepp-logan.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errastracosine.png}} \vspace{2.5mm} \subfloat{\begin{turn}{90}\quad\quad\quad NiftyRec\end{turn}}\hspace{1mm} \subfloat{\includegraphics[width=0.296\linewidth]{errniftyram-lak.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errniftyshepp-logan.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errniftycosine.png}} \vspace{2.5mm} \subfloat{\begin{turn}{90}\quad\quad\quad\quad MATLAB\end{turn}}\hspace{1.7mm} \subfloat{\includegraphics[width=0.296\linewidth]{errmatlabram-lak.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errmatlabshepp-logan.png}}\hspace{0.5mm} \subfloat{\includegraphics[width=0.296\linewidth]{errmatlabcosine.png}} \caption{Computational errors of filtered back-projection for the ramp, the Shepp-Logan, and the cosine filters, for the different methods.} \label{fig:errors} \end{figure} The reconstructions in the left column of Figure \ref{fig:errors} use the ramp filter \eqref{ramp}; the reconstructions in the middle column use the Shepp-Logan filter \eqref{SLfilt}; and the reconstructions in the right column use the cosine filter \eqref{cosfilt}. The reconstructions were made on a $512\times 512$ grid using 768 samples in the angular direction. For the log-polar reconstruction, we used $M=3$ partial reconstructions and $N_\rho=2N$. As the cosine filter goes to zero at the boundary, the errors in the reconstructions in the right column should best represent the errors caused by the sampling parameters. The reconstruction panels have inscribed $\ell^2$-errors, measured against the filtered versions of the phantom. We can see that ASTRA and NiftyRec produce almost identical errors. We note that the proposed method seems to give the smallest error out of the four methods. The NiftyRec reconstructions refer to the GPU implementation. \begin{table} \caption{Computational time (in seconds) of the back-projection for sizes $(N_\theta \times N_s)=(\frac{3}{2}N\times N)$ (excluding initialization time).} \centering \label{table:time_back} \begin{tabular}{ | c | c | c | c | c | c |} \hline $N$ & Log-polar & ASTRA & NiftyRec (GPU) & NiftyRec (CPU) & MATLAB \\ \hline 256 & 1.6e-03 & 1.0e-02 & 7.1e-02 & 1.3e00 & 3.1e-01 \\ 512 & 6.1e-03 & 3.5e-02 & 5.6e-01 & 1.3e01 & 2.4e00 \\ 1024 & 2.5e-02 & 1.8e-01 & 4.4e00 & 1.2e02 & 1.9e01 \\ 2048 & 9.9e-02 & 1.2e00 & 3.5e+01 & 1.0e03 & 1.5e02 \\ \hline \end{tabular} \end{table} In Table \ref{table:time_back} we show the time for computing one back-projection with each of the methods mentioned above. To exclude initialization effects and to ensure high GPU load, the times in Table \ref{table:time_back} were obtained by executing batches of reconstructions (filling up the GPU memory) in one function call.
We note that the proposed method is about 5--10 times faster than ASTRA, substantially faster than both the GPU and the CPU implementations of NiftyRec, and up to 1500 times faster than the routines available in the MATLAB toolbox. For the tests, we have used a standard desktop computer with an Intel Core i7-3820 processor and an NVIDIA GeForce GTX 770 graphics card. We have used the NVIDIA cuFFT library for the FFT operations, together with version 7 of the CUDA Toolkit \cite{documentation2015v70}. The computations were performed in single precision. The GPU program was optimized using the NVIDIA Nsight profiler; GPU memory usage, kernel occupancy, instruction fetch and other performance counters were analyzed and improved using strategies described in \cite{kirk2012programming,sanders2010cuda}. \section{Iterative methods for tomographic reconstruction}\label{itmethh} In some situations it is preferable to use an iterative method for reconstruction from tomographic data. This could for instance be because of incomplete data, e.g., data missing for some angles; because of the need to suppress artifacts \cite{Miqueles:pp5053}; or because additional information about the noise contamination can be used to improve the reconstruction results compared to direct filtered back-projection \eqref{FBP}. Iterative reconstruction methods rely on applying the forward and back-projection operators several times. Iterative algorithms can be computationally expensive when a large number of iterations is required for the algorithm to converge. For that reason, fast algorithms for computing the Radon transform and the associated back-projection can play an important role. Iterative algebraic reconstruction techniques (ART) are popular tools for reconstruction from incomplete measurements \cite{gubareni2009algebraic}. They aim at solving the set of linear equations determined by the projection data. Transmission-based tomographic measurements measure the absorption of a medium along a line. This puts a sign constraint on the function we wish to recover. It is then not ideal to assume that the data is contaminated by normally distributed noise (for which the best estimate is given by the least-squares estimate). A more reasonable assumption is that the added noise has a Poisson distribution \cite{barrett1994noise,yan2011expectation}. The simplest iterative method for solving the estimation/reconstruction problem under this noise assumption is the Expectation-Maximization (EM) algorithm. The EM-algorithm is well-suited to reconstruct tomography data with non-Gaussian noise \cite{dempster1977maximum,miqueles2014generalized,shepp1982maximum}. Alternative techniques include for instance the Row Action Maximum Likelihood Algorithm (RAMLA). More details about the EM-algorithm can be found for instance in \cite{champley2004spect,yan2011expectation}. In our notation, it can (given tomographic data $g$) be expressed as the iterative computation \begin{align*} f^{k+1}=f^{k}\frac{\mathcal{R}^\#\left(\frac{g}{\mathcal{R} f^k}\right)}{\mathcal{R}^\# \chi_C}, \end{align*} where the function $\chi_C(\theta,s)$ is one if the line parameterized by $(\theta,s)$ passes through the unit disc (the support of $f$) and zero otherwise. In each step, a Radon transform $\mathcal{R} f^k$ and a back-projection $\mathcal{R}^\#\left(\frac{g}{\mathcal{R} f^k}\right)$ are computed. It is well-known that a crucial part of GPU computations is host-device memory transfers.
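For reference, the EM iteration above can be sketched in a few lines. This is our own minimal CPU illustration; the callables \verb!radon! and \verb!backproject!, the indicator \verb!chi_C!, and the guard \verb!eps! are placeholder assumptions, and any implementation of $\mathcal{R}$ and $\mathcal{R}^\#$ (such as the log-polar-based one) can be plugged in.
\begin{verbatim}
import numpy as np

def em_reconstruct(g, radon, backproject, chi_C, n_iter=50, eps=1e-12):
    # radon and backproject are callables implementing R and R^#;
    # chi_C is the indicator of lines meeting the support of f.
    norm = backproject(chi_C)                  # R^# chi_C, computed once
    f = np.ones_like(norm)                     # positive initial guess f^0
    for _ in range(n_iter):
        ratio = g / np.maximum(radon(f), eps)  # guard against division by zero
        f = f * backproject(ratio) / np.maximum(norm, eps)
    return f
\end{verbatim}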
For an iterative method such as the one described above, it is possible to keep all the necessary data in the GPU memory, and thereby limit the data copying between host and device memory to an initial guess $f^0$, the measured data $g$, and the final result. Like most methods, the proposed method requires some initialization steps (e.g., setting up geometry parameters and the convolution kernels $\zeta,\zeta^\#$). The obtained GPU program was tested on Radon data with Poisson noise, cf. Figure \ref{fig:EM}a). Figure \ref{fig:EM}b) shows the result of applying the filtered back-projection formula, whereas Figure \ref{fig:EM}c) shows the de-noised reconstruction after 50 iterations of the EM-algorithm. \begin{figure} \centering \includegraphics[width=0.312\linewidth]{R.png} \includegraphics[width=0.312\linewidth]{frec.png} \includegraphics[width=0.312\linewidth]{frec_em.png} \caption{Reconstruction of Radon data with noise. Radon data with Poisson noise is shown in the left panel, the result of applying filtered back-projection is shown in the middle panel, and the right panel shows the result after 50 iterations of the EM-algorithm.} \label{fig:EM} \end{figure} In Table \ref{table:timeEM} we present performance results for the EM-algorithm for the proposed method and the methods mentioned in the previous section, namely the computational times for conducting 100 iterations of the EM-algorithm at different resolution parameters. The Radon data was sampled with parameters $N_\theta = \frac{3}{2} N$ and $N_s=N$. We used $M=3$ partial back-projections for the proposed log-polar-based algorithm. The speedup from the previous section is confirmed also in this case. \begin{table} \centering \caption{Time (in seconds) for 100 iterations of the EM-algorithm for reconstructing 3D data of size $(N\times N \times N)$.} \label{table:timeEM} \begin{tabular}{ | c | c | c | c | c|} \hline $N$& Log-polar & ASTRA & NiftyRec (GPU) & MATLAB \\ \hline 256 & 8.8e+01 & 3.1e+02 & 3.6e+03 & 1.6e+04 \\ 512 & 6.9e+02 & 2.3e+03 & 5.8e+04 & 2.6e+05 (*) \\ 1024 & 5.4e+03 & 2.5e+04 & 9.3e+05 & 4.2e+06 (*)\\ 2048 & 4.2e+04 & 3.5e+05 & - & - \\ \hline \end{tabular} * - estimated by using a reduced number of slices. \end{table} \section{Conclusions} We have described how to implement the Radon transform and the back-projection as convolution operators in log-polar coordinates efficiently on GPUs. We present sampling conditions; provide formulas and numerical guidelines for how to compute the kernels associated with the Radon transform and the back-projection operator; and discuss how the convolutions can be rapidly evaluated by using FFT. The procedure involves several steps of interpolation between data in the Radon domain, the spatial domain, and the log-polar domain. It is favorable to conduct interpolation in these domains rather than in the Fourier domain, as the functions tend to be less oscillatory there. We use cubic spline interpolation, which can be efficiently implemented on GPUs, together with optimized routines for FFT on the GPU. We conduct numerical tests and see that we obtain results at least as accurate as those produced by other software packages, but at a substantially lower computational cost. \section*{Acknowledgments} This work was supported by the Crafoord Foundation (20140633) and the Swedish Research Council (2011-5589). \section*{Appendix} \label{app:Fzeta} In this appendix, we present the exact result of the integration in \eqref{Pfor}.
By applying two partial integrations, it follows that \begin{align*} &-\mu^2 P(\mu,\alpha,\beta)=\\& \int_{-\beta}^{\beta}\left(\frac{d^2}{d\varphi^2}e^{i\mu\varphi}\right)\cos(\varphi)^\alpha d\varphi=\left(\frac{d}{d\varphi}e^{i\mu\varphi}\right)\cos(\varphi)^\alpha |_{-\beta}^{\beta}-\int_{-\beta}^{\beta}\left(\frac{d}{d\varphi}e^{i\mu\varphi}\right)\left(\frac{d}{d\varphi}\cos(\varphi)^\alpha\right) d\varphi=\\ &\left(\frac{d}{d\varphi}e^{i\mu\varphi}\right)\cos(\varphi)^\alpha |_{-\beta}^{\beta}-e^{i\mu\varphi}\left(\frac{d}{d\varphi}\cos(\varphi)^\alpha\right) |_{-\beta}^{\beta}+\int_{-\beta}^{\beta}e^{i\mu\varphi}\left(\frac{d^2}{d\varphi^2}\cos(\varphi)^\alpha\right) d\varphi=\\ &\Big(e^{i\mu\varphi}\cos(\varphi)^{\alpha-1} (i\mu \cos(\varphi)+\alpha \sin(\varphi))\Big) \Big|_{-\beta}^{\beta}+\int_{-\beta}^{\beta}e^{i\mu\varphi}\left(\frac{d^2}{d\varphi^2}\cos(\varphi)^\alpha\right) d\varphi=\\ &\Big(e^{i\mu\varphi}\cos(\varphi)^{\alpha-1} (i\mu \cos(\varphi)+\alpha \sin(\varphi))\Big) \Big|_{-\beta}^{\beta}+\int_{-\beta}^{\beta}e^{i\mu\varphi}\Big(\alpha(\alpha-1)\cos(\varphi)^{\alpha-2}-\alpha^2\cos(\varphi)^\alpha\Big)d\varphi=\\ & \Big(e^{i\mu\varphi}\cos(\varphi)^{\alpha-1} (i\mu \cos(\varphi)+\alpha \sin(\varphi))\Big) \Big|_{-\beta}^{\beta}+\alpha(\alpha-1)P(\mu,\alpha-2,\beta)-\alpha^2P(\mu,\alpha,\beta). \end{align*} Hence, we obtain the following recursive relation for $P$: \begin{align*} &P(\mu,\alpha-2,\beta)=\frac{\alpha}{\alpha-1}\left(1-\frac{\mu^2}{\alpha^2}\right)P(\mu,\alpha,\beta)+h(\mu,\alpha,\beta), \end{align*} where \begin{align}\label{hdefin} &h(\mu,\alpha,\beta)= \frac{2}{\alpha(\alpha-1)}(\mu\cos(\beta)^\alpha\sin(\mu\beta)-\alpha \cos^{\alpha-1}(\beta)\cos(\mu\beta)\sin(\beta)). \end{align} For all positive $n$ it thus holds that \begin{align} \nonumber &P(\mu,\alpha-2,\beta)=\prod_{k=0}^{n}\left( \frac{\alpha+2k}{\alpha+2k-1} \right) \prod_{k=0}^{n}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)P(\mu,\alpha+2n,\beta)+\\ \nonumber&\sum_{j=0}^{n}\prod_{k=0}^{j-1}\left( \frac{\alpha+2k}{\alpha+2k-1} \right) \prod_{k=0}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta)=\\ \nonumber&\frac{\Gamma(\frac{\alpha-1}{2})}{\Gamma(\frac{\alpha-1}{2}+n)} \frac{\Gamma(\frac{\alpha}{2}+n)}{\Gamma(\frac{\alpha}{2})} \prod_{k=0}^{n}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)P(\mu,\alpha+2n,\beta)+\\ \nonumber&\sum_{j=0}^{n}\frac{\Gamma(\frac{\alpha-1}{2})}{\Gamma(\frac{\alpha-1}{2}+j-1)} \frac{\Gamma(\frac{\alpha}{2}+j-1)}{\Gamma(\frac{\alpha}{2})} \prod_{k=0}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta)=\\ \nonumber&\frac{\mathcal{B}\left(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2}\right)}{\mathcal{B}(n+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} \prod_{k=0}^{n}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)P(\mu,\alpha+2n,\beta)+\\ &\label{expr1P}\sum_{j=0}^{n}\frac{\mathcal{B}(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})}{\mathcal{B}(j+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} \prod_{k=0}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta), \end{align} where $\mathcal{B}$ is the beta function, defined by \begin{equation}\label{betadef} \mathcal{B}(m,n)=2\int_0^{\pi/2}(\cos \varphi)^{2m-1}(\sin \varphi)^{2n-1}d\varphi. \end{equation} Above we have used some of the properties of the gamma function \cite[p. 256]{abramowitz1972handbook} and its relation to the beta function $\mathcal{B}$ \cite[p. 258]{abramowitz1972handbook}, \begin{align}\label{betaprop} \mathcal{B}(m,n)=\frac{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}.
\end{align} In the limit case $n\to\infty$, it holds that \begin{align} \nonumber&P(\mu,\alpha-2,\beta)=\lim_{n\to\infty}\Bigg[ \frac{\mathcal{B}(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})}{\mathcal{B}(n+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} \prod_{k=0}^{n}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)P(\mu,\alpha+2n,\beta)+\\ \nonumber&\sum_{j=0}^{n}\frac{\mathcal{B}(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})}{\mathcal{B}(j+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} \prod_{k=0}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta)\Bigg]=\\ \nonumber&\lim_{n\to\infty}\Bigg[ \mathcal{B}\left(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2}\right) \prod_{k=0}^{n}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)\int_{-\beta}^{\beta}e^{i\mu\varphi}\frac{\cos(\varphi)^{\alpha+2n}}{\mathcal{B}(n+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} d\varphi+\\ \label{exprlim}&\sum_{j=0}^{n}\frac{\mathcal{B}(\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})}{\mathcal{B}(j+\frac{\alpha}{2}-\frac{1}{2},\frac{1}{2})} \prod_{k=0}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta)\Bigg]. \end{align} Using \eqref{betadef}, it is easy to see that the sequence of functions \begin{align*} h_n(t)=\frac{(\cos t)^n}{\mathcal{B}(\frac{n}{2}-\frac{1}{2},\frac{1}{2})} \end{align*} tends to $\delta(t)$ when $n\to\infty$. Using this fact and \eqref{exprlim}, and relabeling $\alpha-2$ as $\alpha$, we can represent $P(\mu,\alpha,\beta)$ in the form $P(\mu,\alpha,\beta)=P_0(\mu,\alpha,\beta)+P_1(\mu,\alpha,\beta)$, where \begin{align*} &P_0(\mu,\alpha,\beta)=\mathcal{B}\left(\frac{\alpha}{2}+\frac{1}{2},\frac{1}{2}\right) \prod_{k=1}^{\infty}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right),\\ &P_1(\mu,\alpha,\beta)=\sum_{j=1}^{\infty}\frac{\mathcal{B}(\frac{\alpha}{2}+\frac{1}{2},\frac{1}{2})}{\mathcal{B}(j+\frac{\alpha}{2}+\frac{1}{2},\frac{1}{2})} \prod_{k=1}^{j-1}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)h(\mu,\alpha+2j,\beta). \end{align*} From the identity \cite[p. 336]{ramanujan1994P4} \begin{align} \label{identity} \frac{\Gamma^2(n+1)}{\Gamma(n+x i+1)\Gamma(n-x i+1)}=\prod_{k=1}^{\infty}\left(1+\frac{x^2}{(n+k)^2}\right), \end{align} it follows that \begin{align} P_0(\mu,\alpha,\beta) &= \mathcal{B}\left(\frac{\alpha+1}{2},\frac{1}{2}\right) \prod_{k=1}^{\infty}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right) \label{exprgamma}\\ &=\mathcal{B}\left(\frac{\alpha+1}{2},\frac{1}{2}\right)\frac{\Gamma^2(\frac{\alpha}{2}+1)}{\Gamma(\frac{\alpha}{2}+\frac{\mu}{2}+1)\Gamma(\frac{\alpha}{2}-\frac{\mu}{2}+1)}=\frac{\Gamma(\frac{\alpha+1}{2})\Gamma(\frac{1}{2})\Gamma(\frac{\alpha+2}{2})}{\Gamma(\frac{\alpha+\mu}{2}+1)\Gamma(\frac{\alpha-\mu}{2}+1)}. \nonumber \end{align} Concerning the second term, we have that \begin{align*} &P_1(\mu,\alpha,\beta)= \lim_{n\to\infty}\left[ \sum_{j=0}^{n}\frac{\mathcal{B}(\frac{\alpha+1}{2},\frac{1}{2})}{\mathcal{B}(j+\frac{\alpha+1}{2},\frac{1}{2})}h(\mu,\alpha+2+2j) \prod_{k=1}^{j}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)\right], \end{align*} where $h(\mu,\alpha )= \frac{2}{\alpha(\alpha-1)}(\mu\cos(\beta)^\alpha\sin(\mu\beta)-\alpha \cos^{\alpha-1}(\beta)\cos(\mu\beta)\sin(\beta))$, suppressing the fixed argument $\beta$.
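As a quick numerical sanity check (ours, not part of the original derivation): for $\beta=\pi/2$ we have $\cos\beta=0$, so every $h$-term vanishes and $P(\mu,\alpha,\pi/2)=P_0(\mu,\alpha,\pi/2)$; the closed form \eqref{exprgamma} can then be compared against direct quadrature of $P(\mu,\alpha,\beta)=\int_{-\beta}^{\beta}e^{i\mu\varphi}\cos(\varphi)^{\alpha}\,d\varphi$, for instance with \verb!mpmath!.
\begin{verbatim}
import mpmath as mp

def P_direct(mu, alpha, beta):
    # P(mu, alpha, beta) = int_{-beta}^{beta} e^{i mu phi} cos(phi)^alpha dphi
    return mp.quad(lambda t: mp.exp(1j * mu * t) * mp.cos(t)**alpha,
                   [-beta, beta])

def P0(mu, alpha):
    # Closed form (exprgamma); for beta = pi/2 it equals P itself.
    return (mp.gamma((alpha + 1) / 2) * mp.gamma(0.5) * mp.gamma(alpha / 2 + 1)
            / (mp.gamma((alpha + mu) / 2 + 1) * mp.gamma((alpha - mu) / 2 + 1)))

mu, alpha = 3.0, 4.5
print(P_direct(mu, alpha, mp.pi / 2))  # essentially real
print(P0(mu, alpha))                   # should agree with the real part above
\end{verbatim}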
By using the identity \eqref{identity} again, we can rewrite \begin{align*} &\prod_{k=1}^{j}\left( 1-\frac{\mu^2}{(\alpha+2k)^2} \right)= \frac{\Gamma^2(\frac{\alpha}{2}+1)\Gamma(\frac{\alpha}{2}+\frac{\mu}{2}+j+1)\Gamma(\frac{\alpha}{2}-\frac{\mu}{2}+j+1)}{\Gamma(\frac{\alpha}{2}+\frac{\mu}{2}+1)\Gamma(\frac{\alpha}{2}-\frac{\mu}{2}+1)\Gamma^2(\frac{\alpha}{2}+j+1)}. \end{align*} An expression for $P_1(\mu,\alpha,\beta)$ thus reads \begin{align} \nonumber P_1(\mu,\alpha,\beta)&= \sum_{j=0}^{\infty} h(\mu,\alpha+2+2j) \frac{\Gamma\left(\frac{\alpha+1}{2}\right)\Gamma(\frac{\alpha}{2}+1)\Gamma(\frac{\alpha}{2}+\frac{\mu}{2}+j+1)\Gamma(\frac{\alpha}{2}-\frac{\mu}{2}+j+1)}{\Gamma(j+\frac{\alpha+1}{2})\Gamma(\frac{\alpha}{2}+\frac{\mu}{2}+1)\Gamma(\frac{\alpha}{2}-\frac{\mu}{2}+1)\Gamma(\frac{\alpha}{2}+j+1)}\\ &=\sum_{j=0}^{\infty} h(\mu,\alpha+2+2j) \frac{(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+1}{2})_j(\frac{\alpha}{2}+1)_j}, \end{align} using the Pochhammer symbol \cite[p. 256]{abramowitz1972handbook} \begin{align*} &(x)_j=\frac{\Gamma(x+j)}{\Gamma(x)}=x(x+1)\dots(x+j-1). \end{align*} By using the definition of $h$ \eqref{hdefin}, we split the sum into two parts, \begin{align*} &\nonumber P_1(\mu,\alpha,\beta)=\sum_{j=0}^{\infty} \frac{2\mu\cos(\beta)^{\alpha+2+2j}\sin(\mu\beta)}{(\alpha+2+2j)(\alpha+1+2j)} \frac{(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+1}{2})_j(\frac{\alpha}{2}+1)_j}-\\ &\sum_{j=0}^{\infty} \frac{2 \cos^{\alpha+1+2j}(\beta)\cos(\mu\beta)\sin(\beta)}{(\alpha+1+2j)} \frac{(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+1}{2})_j(\frac{\alpha}{2}+1)_j}=\\ &2\mu\cos(\beta)^{\alpha+2}\sin(\mu\beta)\sum_{j=0}^{\infty} \frac{\cos(\beta)^{2j}}{(\alpha+2+2j)(\alpha+1+2j)} \frac{(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+1}{2})_j(\frac{\alpha}{2}+1)_j}-\\ &2\cos(\beta)^{\alpha+1}\cos(\beta \mu)\sin(\beta)\sum_{j=0}^{\infty} \frac{\cos(\beta)^{2j}}{(\alpha+1+2j)} \frac{(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+1}{2})_j(\frac{\alpha}{2}+1)_j}. \end{align*} Now we can use the facts that $(1)_j=j!$ and that \begin{align*} (z+j)(z)_j=(z)_{j+1}=\frac{\Gamma(z+j+1)}{\Gamma(z)}=z\,\frac{\Gamma(z+j+1)}{\Gamma(z+1)}=z\,(z+1)_j, \end{align*} to simplify the above expression as \begin{align*} &P_1(\mu,\alpha,\beta)= 2\mu\cos(\beta)^{\alpha+2}\sin(\mu\beta)\frac{1}{(\alpha+1)(\alpha+2)}\sum_{j=0}^{\infty} \frac{\cos(\beta)^{2j}}{j!} \frac{(1)_j(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+3}{2})_j(\frac{\alpha}{2}+2)_j}-\\ &2\cos(\beta)^{\alpha+1}\cos(\beta \mu)\sin(\beta)\frac{1}{(\alpha+1)}\sum_{j=0}^{\infty} \frac{\cos(\beta)^{2j}}{j!} \frac{(1)_j(\frac{\alpha}{2}+\frac{\mu}{2}+1)_j(\frac{\alpha}{2}-\frac{\mu}{2}+1)_j}{(\frac{\alpha+3}{2})_j(\frac{\alpha}{2}+1)_j}. \end{align*} The sums above take the form of $_3F_2$ hypergeometric functions \cite[p.
403]{olver2010nist}, and hence it holds that \begin{align*} P_1(\mu,\alpha,\beta) &= \frac{2\mu\cos(\beta)^{\alpha+2}\sin(\mu\beta)}{(\alpha+1)(\alpha+2)}\,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+2;\cos(\beta)^2\right)\\ &-\frac{2\cos(\beta)^{\alpha+1}\cos(\beta \mu)\sin(\beta)}{(\alpha+1)} \,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+1;\cos(\beta)^2\right). \end{align*} We then finally obtain that \begin{align*} &P(\mu,\alpha,\beta)= \frac{\Gamma(\frac{\alpha+1}{2})\Gamma(\frac{1}{2})\Gamma(\frac{\alpha+2}{2})}{\Gamma(\frac{\alpha+\mu}{2}+1)\Gamma(\frac{\alpha-\mu}{2}+1)}+\\ &\frac{2\mu\cos(\beta)^{\alpha+2}\sin(\mu\beta)}{(\alpha+1)(\alpha+2)}\,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+2;\cos(\beta)^2\right)-\\ &\frac{2\cos(\beta)^{\alpha+1}\cos(\beta \mu)\sin(\beta)}{(\alpha+1)} \,_3F_2\left(1,\frac{\alpha}{2}+\frac{\mu}{2}+1,\frac{\alpha}{2}-\frac{\mu}{2}+1;\frac{\alpha+3}{2},\frac{\alpha}{2}+1;\cos(\beta)^2\right). \end{align*} \bibliographystyle{siam}
\section{Introduction} Kashiwara's crystals \cite{kascbq} encode the structure of certain bases (called crystal bases) for highest weight representations of quantum groups $U_q({\mathfrak g})$ as $q$ goes to zero. The first author and Postnikov \cite{lapawg,lapcmc} defined the so-called alcove model for highest weight crystals associated to a semisimple Lie algebra $\mathfrak g$ (in fact, the model was defined more generally, for symmetrizable Kac-Moody algebras $\mathfrak g$). A related model is the one of Gaussent-Littelmann, based on LS-galleries \cite{gallsg}. Both models are discrete counterparts of the celebrated Littelmann path model. In this paper we define a generalization of the alcove model, which we call the \emph{quantum alcove model}, as it is based on enumerating paths in the so-called quantum Bruhat graph of the corresponding finite Weyl group. This graph first appeared in connection with the quantum cohomology of flag varieties \cite{fawqps}. The path enumeration is determined by the choice of a certain sequence of alcoves (an alcove path), as in the classical alcove model. If we restrict to paths in the usual Bruhat graph, we recover the classical alcove model. The mentioned paths in the quantum Bruhat graph first appeared in \cite{Lenart}, where they index the terms in the specialization $t=0$ of the Ram-Yip formula \cite{raycfm} for Macdonald polynomials $P_{\lambda}(X;q,t)$. The main application \citep{unialcmod,unialcmod2} is that the new model uniformly describes tensor products of column shape Kirillov-Reshetikhin (KR) crystals \citep{karrym}, for all untwisted affine types. We demonstrate this in types $A$ and $C$, by showing that the bijections constructed in \citep{Lenart}, from the objects of the quantum alcove model to tensor products of Kashiwara-Nakashima (KN) columns \citep{kancgr}, are affine crystal isomorphisms (indeed, a column shape KR crystal is realized by a KN column in these cases). Another application is to the {\em energy function} on a tensor product of column shape KR crystals, which endows it with an affine grading. The authors also plan to realize the {\em combinatorial $R$-matrix} (i.e., the affine crystal isomorphism commuting two factors in a tensor product) by extending to the quantum alcove model the alcove model version of Sch\"utzenberger's {\em jeu de taquin} on Young tableaux in \cite{lenccg}; the latter is based on so-called {\em Yang-Baxter moves}. \section{Background} \subsection{Root systems} Let $\mathfrak{g}$ be a complex semisimple Lie algebra, and $\mathfrak{h}$ a Cartan subalgebra, whose rank is $r$. Let $\Phi \subset \mathfrak{h}^*$ be the corresponding irreducible \emph{root system}, $\mathfrak{h}^*_{\mathbb{R}}\subset \mathfrak{h}^*$ the real span of the roots, and $\Phi^{+} \subset \Phi$ the set of positive roots. Let $\Phi^{-} = \Phi \backslash \Phi^{+}$. For $ \alpha \in \Phi$ we will say that $ \alpha > 0 $ if $ \alpha \in \Phi^{+}$, and $ \alpha < 0 $ if $ \alpha \in \Phi^{-}$. The sign of the root $\alpha$, denoted $\mathrm{sgn}(\alpha)$, is defined to be $1$ if $\alpha \in \Phi^{+}$, and $-1$ otherwise. Let $| \alpha | = \mathrm{sgn}( \alpha ) \alpha $. Let $\rho := \frac{1}{2}(\sum_{\alpha \in \Phi^{+}}\alpha)$. Let $\alpha_1, \ldots , \alpha_r \in \Phi^{+}$ be the corresponding \emph{simple roots}. We denote by $\inner{\cdot}{\cdot}$ the nondegenerate scalar product on $\mathfrak{h}^{*}_{\mathbb{R}}$ induced by the Killing form.
Given a root $\alpha$, we consider the corresponding \emph{coroot} $\alpha^{\vee} := 2\alpha/ \inner{\alpha}{\alpha}$ and reflection $s_{\alpha}$. If $\alpha= \sum_i c_i \alpha_i$, then the \emph{height} of $\alpha$, denoted by $\mathrm{height} (\alpha)$, is given by $\mathrm{height} (\alpha)=\sum_i c_i$. We will denote by $\widetilde{\alpha}$ the highest root in $\Phi^{+}$, and we let $\theta = \alpha_0 = -\widetilde{\alpha} $. Let $W$ be the corresponding \emph{Weyl group}, whose Coxeter generators are denoted, as usual, by $s_i:=s_{\alpha_i}$. The length function on $W$ is denoted by $\ell(\cdot)$. The \emph{Bruhat order} on $W$ is defined by its covers $w \lessdot ws_{\alpha}$, for $\alpha \in \Phi^{+}$, if $\ell(ws_{\alpha}) = \ell(w) + 1$. The mentioned covers correspond to the labeled directed edges of the \emph{Bruhat graph} on $W$: \begin{equation} w \stackrel{\alpha}{\longrightarrow} ws_{\alpha} \quad \text{ for } w \lessdot ws_{\alpha}. \label{eqn:bruhat_edge} \end{equation} The \emph{weight lattice} $\Lambda$ is given by \begin{equation} \Lambda = \left\{ \lambda \in \mathfrak{h}_{\mathbb{R}}^{*} \, : \, \inner{\lambda}{\alpha^{\vee}} \in \mathbb{Z} \text{ for any } \alpha \in \Phi \right\}. \label{eqn:weight_lattice} \end{equation} The weight lattice $\Lambda$ is generated by the \emph{fundamental weights} $\omega_1, \ldots, \omega_r$, which form the dual basis to the basis of simple coroots, i.e., $\inner{\omega_i}{\alpha_j^{\vee}}= \delta_{ij}$. The set $\Lambda^{+}$ of \emph{dominant weights} is given by \begin{equation} \Lambda^{+} := \left\{ \lambda \in \Lambda \, : \, \inner{\lambda}{\alpha^{\vee}} \geq 0 \text{ for any } \alpha \in \Phi^{+} \right\}. \label{eqn:dominant_weights} \end{equation} Let $ \mathbb{Z}[\Lambda]$ be the group algebra of the weight lattice $\Lambda$, which has the $\mathbb{Z}$-basis of formal exponents $x^{\lambda}$, for $\lambda \in \Lambda$, with multiplication $x^{\lambda}\cdot x^{\mu} = x^{\lambda+\mu}$. Given $\alpha \in \Phi$ and $k \in \mathbb{Z}$, we denote by $s_{\alpha,k}$ the reflection in the affine hyperplane \begin{equation} H_{\alpha,k}:= \left\{ \lambda \in \mathfrak{h}^{*}_{\mathbb{R}} \, : \, \inner{\lambda}{\alpha^{\vee}} = k \right\} \label{eqn:affine_hyperplane}. \end{equation} These reflections generate the \emph{affine Weyl group} $W_{\textrm{aff}}$ for the \emph{dual root system} $\Phi^{\vee}:= \left\{ \alpha^{\vee} \, |\, \alpha \in \Phi \right\}$. The hyperplanes $H_{\alpha,k}$ divide the real vector space $\mathfrak{h}^{*}_\mathbb{R}$ into open regions, called \emph{alcoves}. The \emph{fundamental alcove} $A_{\circ}$ is given by \begin{equation} A_{\circ} := \left\{ \lambda \in \mathfrak{h}_{\mathbb{R}}^{*} \, | \, 0 < \inner{\lambda}{\alpha^{\vee}} < 1 \text{ for all } \alpha \in \Phi^{+} \right\}. \label{eqn:fundamental_alcove} \end{equation} Define $w \qstep ws_{\alpha}$, for $\alpha \in \Phi^{+}$, if $\ell(ws_{\alpha}) = \ell(w) - 2\inner{\rho}{\alpha^{\vee}} + 1$. The \emph{quantum Bruhat graph} \cite{fawqps} is defined by adding to the Bruhat graph (\ref{eqn:bruhat_edge}) the following edges labeled by positive roots $\alpha$: \begin{equation} w \stackrel{\alpha}{\longrightarrow} ws_{\alpha} \quad \text{ for } w \qstep ws_{\alpha}. \label{eqn:qbruhat_edge} \end{equation} We will need the following properties of the quantum Bruhat graph \cite{unialcmod}. \begin{lemma} Let $w \in W$. We have $w^{-1}(\theta)>0$ if and only if $w \qstep s_{\theta}w$.
We also have $w^{-1}(\theta) < 0$ if and only if $s_{\theta}w \qstep w$. \label{lemma:theta} \end{lemma} \begin{proposition} \label{prop:deodhar} Let $w \in W$, let $\alpha$ be a simple root, $\beta \in \Phi^+$, and assume $s_{\alpha}w \ne ws_{\beta}$. Then $w \lessdot s_{\alpha}w$ and $ w \longrightarrow ws_{\beta}$ if and only if $ws_{\beta} \lessdot s_{\alpha}ws_{\beta}$ and $s_{\alpha}w \longrightarrow s_{\alpha}ws_{\beta}.$ Furthermore, in this context we have $ w \lessdot ws_{\beta}$ if and only if $s_{\alpha}w \lessdot s_{\alpha}ws_{\beta}$. The diagram below provides a picture. \begin{displaymath} \xymatrix{ & s_{\alpha} ws_{\beta} & \\ s_{\alpha}w \ar[ru] & & ws_{\beta} \ar[lu] \\ & w \ar[ru] \ar[lu] & } \end{displaymath} \end{proposition} \begin{proposition} \label{prop:deodhar0} Let $w \in W$, $\beta \in \Phi^+$, and assume $s_{\theta}w \ne ws_{\beta}$. Then $w \qstep s_{\theta}w$ and $ w \longrightarrow ws_{\beta}$ if and only if $ws_{\beta} \qstep s_{\theta}ws_{\beta}$ and $s_{\theta}w \longrightarrow s_{\theta}ws_{\beta}.$ \end{proposition} \subsection{Kirillov-Reshetikhin (KR) crystals} \label{subsection:KR-crystals} A $\mathfrak{g}$-{\em crystal} is a nonempty set $B$ together with maps $e_i,f_i:B\to B\cup \{ \mathbf{0} \}$ for $i\in I$ ($I$ indexes the simple roots, as usual, and $\mathbf{0} \not \in B$), and $\mathrm{wt}:B \to \Lambda$. We require that $b'=f_i(b)$ if and only if $b=e_i(b')$. The maps $e_i$ and $f_i$ are called crystal operators and are represented as arrows $b \to b'=f_i(b)$; thus they endow $B$ with the structure of a colored directed graph. For $b\in B$, we set $\varepsilon_i(b) = \max\{k \mid e_i^k(b) \neq \mathbf{0} \}$ and $\varphi_i(b) = \max\{k \mid f_i^k(b) \neq \mathbf{0} \}$. Given two $\mathfrak{g}$-crystals $B_1$ and $B_2$, we define their tensor product $B_1 \otimes B_2$ as follows. As a set, $B_1\otimes B_2$ is the Cartesian product of the two sets. For $b=b_1 \otimes b_2\in B_1 \otimes B_2$, the weight function is simply $\mathrm{wt}(b) = \mathrm{wt}(b_1) + \mathrm{wt}(b_2)$. The crystal operators $f_i$ and $e_i$ are given by \begin{equation*} f_i (b_1 \otimes b_2)= \begin{cases} f_i (b_1) \otimes b_2 & \text{if $\varepsilon_i(b_1) \geq \varphi_i(b_2)$,}\\ b_1 \otimes f_i (b_2) & \text{otherwise,} \end{cases} \end{equation*} \begin{equation*} e_i (b_1 \otimes b_2)= \begin{cases} e_i (b_1) \otimes b_2 & \text{if $\varepsilon_i(b_1) > \varphi_i(b_2)$,}\\ b_1 \otimes e_i (b_2) & \text{otherwise.} \end{cases} \end{equation*} The {\em highest weight crystal} $B(\lambda)$ of highest weight $\lambda\in \Lambda^+$ is a certain crystal with a unique element $u_\lambda$ such that $e_i(u_\lambda)=\mathbf{0}$ for all $i\in I$ and $\mathrm{wt}(u_\lambda)=\lambda$. It encodes the structure of the crystal basis of the $U_q(\mathfrak{g})$-irreducible representation with highest weight $\lambda$ as $q$ goes to 0. A {\em Kirillov-Reshetikhin (KR) crystal} \cite{karrym} is a finite crystal $B^{r,s}$ for an affine algebra, associated to a rectangle of height $r$ and length $s$. We now describe tableaux models for the KR crystals $B^{r,1}$ in types $A_{n-1}^{(1)}$ and $C_n^{(1)}$, where $r\in \{1,2,\ldots,n-1\}$ and $r\in \{1,2,\ldots,n\}$, respectively. As a classical type $A^{(1)}_{n-1}$ (resp. $C^{(1)}_n$) crystal, the KR crystal $B^{r,1}$ is isomorphic to the corresponding $B(\omega_r)$. In type $A^{(1)}_{n-1}$, an element $b\in B(\omega_r)$ is represented by a strictly increasing filling of a height-$r$ column, with entries in $[n]:=\{1, \dots , n \}$.
Let \[ \mathrm{w}(b) := x_r \cdots x_1 \;\; \text{ for } b = \ytableausetup{boxsize=normal} \begin{ytableau}[x_] 1 \\ 2 \\ \none[\vdots] \\ r \end{ytableau} \ytableausetup{boxsize=2.4ex} \;\; \text{ with } x_1 < \cdots < x_r. \] We will now describe the crystal operators in terms of the so-called signature rule, in the more general setting of tensor products. For $b = b_1 \otimes \cdots \otimes b_k \in B(\omega_{i_1}) \otimes \cdots \otimes B(\omega_{i_k})$, the word $\mathrm{w}(b)$ is obtained by concatenating the words for each filling $b_i$, i.e., $\mathrm{w}(b) = \mathrm{w}(b_1) \cdots \mathrm{w}(b_k)$. To apply $f_i$ (or $e_i$) to $b$, consider the subword $\omega_i$ of $\mathrm{w}(b)$ containing only the letters $i$ and $i+1$, if $1\leq i \leq n-1$ (the letters $n$ and $1$ if $i=0$). Then, we encode in $\omega_i$ the letter $i$ by the symbol $+$, and the letter $i+1$ by the symbol $-$ (if $i=0$ we encode $n$ by $+$ and $1$ by $-$). Next, we remove adjacent factors $-+$ in $\omega_i$ to obtain a new subword, $\omega_i^{(1)}$, in which we can again remove adjacent factors $-+$ to obtain a new subword $\omega_i^{(2)}$. This process is repeated until no factors $-+$ remain, and we are left with a reduced word \begin{equation} \rho_i(\mathrm{w}(b)) = \underbrace{++ \cdots +}_x\underbrace{-- \cdots -}_y. \label{eqn:reduced_word} \end{equation} The word $\rho_i(\mathrm{w}(b))$ is called the \emph{$i$-signature} of $b$. \begin{definition} ~ \begin{itemize} \item If $y > 0$, then $e_i(b)$ is obtained by replacing the letter $i+1$, which corresponds to the leftmost $-$ in $\rho_i(\mathrm{w}(b))$, with the letter $i$ in $b$ (if $i=0$ we replace the letter $1$ with $n$, and then sort the column if necessary). If $y=0$, then $e_i(b) = \mathbf{0}$. \item If $x > 0$, then $f_i(b)$ is obtained by replacing the letter $i$, which corresponds to the rightmost $+$ in $\rho_i(\mathrm{w}(b))$, with the letter $i+1$ in $b$ (if $i=0$ we change the letter $n$ to $1$, and then sort the column if necessary). If $x=0$, then $f_i(b) =\mathbf{0}$. \end{itemize} \label{definition:crystal_operators} \end{definition} We can extend the above definition of crystal operators to column-strict fillings of shape $\lambda$ via the canonical embedding of column-strict fillings into the tensor product of their columns. \begin{example} \label{example:root0} Let $n=3$ and $b = \tableau{2 & 1 & 1 \\ 3 & 2 }$, which maps to $\ytableaushort{2,3} \otimes \ytableaushort{1,2} \otimes \ytableaushort{1}$ and has $+--$ as its $0$-signature, which is already reduced. So we have $f_0\left(\ytableaushort{211,32}\right) = \ytableaushort{111,22}$. \end{example} In type $C_{n}^{(1)}$, $B(\omega_r)$ is represented by {\em Kashiwara-Nakashima (KN) columns} \citep{kancgr} of height $r$, with entries in the set $[\overline{n}]= \left\{ 1< \dots< n<\overline{n}< \dots <\overline{1} \right\}$, which we will now describe. \begin{definition} A column-strict filling $C = x_1 \ldots x_r$ with entries in $[\overline{n}]$ is a KN column if there is no pair $(z,\overline{z})$ of letters in $C$ such that: \[ z=x_p,\; \overline{z} = x_q,\; q-p \leq r-z. \] \label{definition:KN_columns} \end{definition} The crystal operators $f_i$ and $e_i$ are defined on tensor products of KN columns in a similar way as in type $A_{n-1}^{(1)}$. To apply $f_i$ or $e_i$ to $b = b_1 \otimes \cdots \otimes b_k \in B(\omega_{i_1}) \otimes \cdots \otimes B(\omega_{i_k})$, first construct $\mathrm{w}(b)=\mathrm{w}(b_1)\cdots\mathrm{w}(b_k)$.
Next, consider the subword $\omega_i$ of $\mathrm{w}(b)$ containing only the letters $i, i+1, \overline{i}, \overline{i+1}$, if $1 \leq i \leq n$ (the letters $1$ and $\overline{1}$ if $i=0$). Encode in $\omega_i$ each letter $i,\overline{i+1}$ by the symbol $+$ and each letter $i+1, \overline{i}$ by the symbol $-$ (if $i=0$ encode the letter $\overline{1}$ by $+$ and the letter $1$ by $-$). As before, remove factors $-+$ until what remains is a reduced word $\rho_i(\mathrm{w}(b))$, which we call the $i$-signature of $b$; cf. \eqref{eqn:reduced_word}. The crystal operators $f_i$ and $e_i$ are again given in terms of $\rho_i(\mathrm{w}(b))$ by Definition \ref{definition:crystal_operators}. In this case, changing $+$ to $-$ means changing $i$ to $i+1$, if $+$ corresponds to $i$, and changing $\overline{i+1}$ to $\overline{i}$, if $+$ corresponds to $\overline{i+1}$. Similarly, changing $-$ to $+$ means changing $i+1$ to $i$ or $\overline{i}$ to $\overline{i+1}$. As before, we can extend the above definition of crystal operators to column-strict fillings of shape $\lambda$ via the canonical embedding of column-strict fillings into the tensor product of their columns. We will need a different definition of KN columns, which was proved to be equivalent to the one above in \cite{shsjdt}. \begin{definition} Let $C$ be a column and $I = \left\{ z_1 > \ldots > z_k \right\}$ the set of unbarred letters $z$ such that the pair $(z,\overline{z})$ occurs in $C$. The column $C$ can be \emph{split} when there exists a set of $k$ unbarred letters $J = \left\{ t_1 > \ldots > t_k \right\} \subset [n]$ such that: \begin{itemize} \item $t_1$ is the greatest letter in $[n]$ satisfying: $t_1 < z_1$, $t_1\not \in C$, and $\overline{t_1} \not \in C$, \item for $i = 2, \ldots, k$, the letter $t_i$ is the greatest one in $[n]$ satisfying $t_i < \min(t_{i-1}, z_i)$, $t_i \not \in C$, and $\overline{t_i} \not \in C$. \end{itemize} In this case we write: \begin{itemize} \item $rC$ for the column obtained by changing $\overline{z_i}$ into $\overline{t_i}$ in $C$ for each letter $z_i \in I$, and by reordering if necessary, \item $lC$ for the column obtained by changing $z_i$ into $t_i$ in $C$ for each letter $z_i \in I$, and by reordering if necessary. \end{itemize} The pair $(lC,rC)$ will be called a \emph{split column}, which we will sometimes denote by $lCrC$. \label{definition:KN_doubled_columns} \end{definition} \begin{example} The following is a KN column of height 5 in type $C_n$ for $n \ge 5$, together with the corresponding split column: \[ C= \tableau{4 \\ 5\\ \overline{ 5 }\\ \overline{ 4 }\\ \overline{ 3 } }\,, \quad (lC,rC) = \tableau{1 \\ 2\\ \overline{ 5 }\\ \overline{ 4 }\\ \overline{ 3 } } \tableau{4 \\ 5\\ \overline{ 3 }\\ \overline{ 2 }\\ \overline{ 1 } } \] We used the fact that $ \left\{ z_1 > z_2 \right\}= \left\{ 5>4 \right\} $, so $ \left\{ t_1 > t_2 \right\}= \left\{ 2 > 1 \right\} $. \end{example} A column is a KN column if and only if it can be split. If $C$ is a KN column with splitting $lCrC$, then $f_i(C) = f_i^2(lCrC)$ by \citep[Theorem 5.1]{kasscb}. In what follows we will use Definition \ref{definition:KN_doubled_columns} as the definition of KN columns. Certain Demazure crystals for affine Lie algebras are isomorphic as classical crystals to tensor products of KR crystals. \begin{definition}[\cite{demazure_arrows, KRcrystals_energy}] An arrow $b\rightarrow f_i(b)$ is called a \emph{Demazure arrow} if $i \ne 0$, or $i=0$ and $\varepsilon_0(b)\ge 1$.
\end{definition} Demazure arrows exclude the 0-arrows at the beginning of a string of 0-arrows. We are interested in excluding the 0-arrows at the end of a string of 0-arrows; we call these arrows dual Demazure. \begin{definition} \label{definition:pdemazure_arrows} An arrow $b\rightarrow f_i(b)$ is called a \emph{dual Demazure arrow} if $i \ne 0$, or $i=0$ and $\varphi_i(b)\geq 2$. \end{definition} Let $\lambda=(\lambda_1 \geq \lambda_2 \geq \dots)$ be a partition, which is interpreted as a dominant weight in classical types; let $\lambda'$ be the conjugate partition. Let $B^{\otimes \lambda} =\bigotimes_{i=1}^{\lambda_1} B^{\lambda_i',1}$. The \emph{energy function} $D$ is a statistic on $B^{\otimes \lambda}$. It is defined by summing the local energies of column pairs. We will only need the following property of the energy function, which defines it as an affine grading on the crystal $B^{\otimes \lambda}$. \begin{theorem}[\cite{KRcrystals_energy}] \label{theorem:energy_recursion} The energy is preserved by the classical crystal operators $f_i$. If $b\rightarrow f_0(b)$ is a dual Demazure arrow, then $D(f_0(b))=D(b)-1$. \end{theorem} It follows that the energy is determined up to a constant on the connected components of the subgraph of the affine crystal $B^{\otimes \lambda}$ containing only the dual Demazure arrows. In the case where all of the tensor factors are \emph{perfect} crystals \citep{hkqgcb}, there is exactly one such connected component. For instance, $B^{k,1}$ is perfect in type $A^{(1)}_{n-1}$, but not in type $C^{(1)}_n$. In types $A$ and $C$, and conjecturally in types $B$ and $D$, there is another statistic on $B^{\otimes \lambda} $, called the \emph{charge}, which is obtained by translating a certain statistic in the Ram-Yip formula for Macdonald polynomials (i.e., the height statistic in \eqref{eqn:height_statistic}) to the model based on KN columns \cite{Lenart}; this is done by using certain bijections from Section \ref{modelac}. The charge statistic is related to the energy function by the following theorem. \begin{theorem}[\cite{energy_charge}] \label{theorem:charge} Let $B^{\otimes \lambda} $ be a tensor product of KR crystals in type $A_{n-1}^{(1)}$ or type $C_{n}^{(1)}$. For all $b\in B^{\otimes \lambda} $, we have $D(b) = - \mathrm{charge} (b)\,.$ \end{theorem} The charge gives a much easier method to compute the energy than the recursive one based on Theorem \ref{theorem:energy_recursion}. \section{The quantum alcove model} \subsection{\texorpdfstring{$\lambda$-chains and admissible subsets}{lambda-chains and admissible subsets}} \label{subsection:chains_and_admissible_sequences} We say that two alcoves are \emph{adjacent} if they are distinct and have a common wall. Given a pair of adjacent alcoves $A$ and $B$, we write $A \stackrel{\beta}{\longrightarrow} B$ if the common wall is of the form $H_{\beta,k}$ and the root $\beta \in \Phi$ points in the direction from $A$ to $B$. \begin{definition}[\cite{lapawg}] An \emph{alcove path} is a {sequence of alcoves} $(A_0, A_1, \ldots, A_m)$ such that $A_{j-1}$ and $A_j$ are adjacent, for $j=1,\ldots, m$. We say that an alcove path is \emph{reduced} if it has minimal length among all alcove paths from $A_0$ to $A_m$. \end{definition} Let $A_{\lambda}=A_{\circ}+\lambda$ be the translation of the fundamental alcove $A_{\circ}$ by the weight $\lambda$.
\begin{definition}[\cite{lapawg}] The sequence of roots $(\beta_1, \beta_2, \dots, \beta_m)$ is called a \emph{$\lambda$-chain} if \[ A_0=A_{\circ} \stackrel{-\beta_1}{\longrightarrow} A_1 \stackrel{-\beta_2}{\longrightarrow}\dots \stackrel{-\beta_m}{\longrightarrow} A_m=A_{-\lambda}\] is a reduced alcove path. \end{definition} We now fix a dominant weight $\lambda$ and an alcove path $\Pi=(A_0, \dots , A_m)$ from $A_0 = A_{\circ}$ to $A_m = A_{-\lambda}$. Note that $\Pi$ is determined by the corresponding $\lambda$-chain of {positive} roots $\Gamma:=(\beta_1, \dots, \beta_m)$. We let $r_i:=s_{\beta_i}$, and let $\widehat{r}_i$ be the affine reflection in the hyperplane containing the common face of $A_{i-1}$ and $A_i$, for $i=1, \ldots, m$; in other words, $\widehat{r}_i:= s_{\beta_i,-l_i}$, where $l_i:=|\left\{ j<i \, ; \, \beta_j = \beta_i \right\} |$. We define $\widetilde{l}_i:= \inner{\lambda}{\beta_i^{\vee}} -l_i = |\left\{ j \geq i \, ; \, \beta_j = \beta_i \right\} |$. \begin{example} \label{example:lambda_chain} Consider the dominant weight $\lambda=3\varepsilon_1+2\varepsilon_2$ in the root system $A_2$ (cf. Section \ref{subsection:TypeA} and the notation therein). The corresponding $\lambda$-chain is $(\alpha_{23},\alpha_{13},\alpha_{23},\alpha_{13},\alpha_{12},\alpha_{13})$. The corresponding heights $l_i$ are $(0,0,1,1,0,2)$, and the $\widetilde{l}_i$ are $(2,3,1,2,1,1)$. The alcove path is shown in Figure \ref{fig:unfolded_chain}; here $A_0$ is shaded, and $A_0-\lambda$ is the alcove at the end of the path. \end{example} \begin{figure}[h] \centering \subfloat[$\Gamma$ for $\lambda=3\varepsilon_1+2\varepsilon_2$ \label{fig:unfolded_chain}] {\includegraphics[scale=.41]{lchain}} \hspace{10pt} \subfloat[$\Gamma(J)$ for $J=\left\{ 1,2,3,5 \right\}$ \label{fig:folded_chain}] {\includegraphics[scale=.41]{lchain_folded}} \caption{Unfolded and folded $\lambda$-chain.} \label{fig:gamma_and_delta} \end{figure} Let $J=\left\{ j_1 < j_2 < \cdots < j_s \right\} \subseteq [m]$ be a subset of $[m]$. The elements of $J$ are called \emph{folding positions}. We fold $\Pi$ in the hyperplanes corresponding to these positions and obtain a folded path; see Example \ref{example:folded_lambda_chain} and Figure \ref{fig:folded_chain}. Like $\Pi$, the folded path can be recorded by a sequence of roots, namely $\Delta = \Gamma(J)=\left( \gamma_1,\gamma_2, \dots, \gamma_m \right)$; here $\gamma_k=r_{j_1}r_{j_2}\dots r_{j_p}(\beta_k)$, with $j_p$ the largest folding position less than $k$. We define $\gamma_{\infty} := r_{j_1}r_{j_2}\dots r_{j_s}(\rho)$. Upon folding, the hyperplane separating the alcoves $A_{k-1}$ and $A_k$ in $\Pi$ is mapped to \begin{equation}\label{deflev} H_{|\gamma_k|,-\l{k}}=\widehat{r}_{j_1}\widehat{r}_{j_2}\dots \widehat{r}_{j_p}(H_{\beta_k,-l_k})\,, \end{equation} for some $\l{k}$, which is defined by this relation. Given $i \in J$, we say that $i$ is a \emph{positive folding position} if $\gamma_i>0$, and a \emph{negative folding position} if $\gamma_i<0$. We denote the positive folding positions by $J^{+}$, and the negative ones by $J^{-}$. We call $\mu=\mu(J):=-\widehat{r}_{j_1}\widehat{r}_{j_2}\ldots \widehat{r}_{j_s}(-\lambda)$ the \emph{weight} of $J$. We define \begin{equation} \label{eqn:height_statistic} \mathrm{height} (J):= \sum_{j \in J^{-}} \widetilde{l}_j.
\end{equation} \begin{definition} A subset $J=\left\{ j_1 < j_2 < \cdots < j_s \right\} \subseteq [m]$ (possibly empty) is an \emph{admissible subset} if we have the following path in the quantum Bruhat graph on $W$: \begin{equation} \label{eqn:admissible} 1 \stackrel{\beta_{j_1}}{\longrightarrow} r_{j_1} \stackrel{\beta_{j_2}}{\longrightarrow} r_{j_1}r_{j_2} \stackrel{\beta_{j_3}}{\longrightarrow} \cdots \stackrel{\beta_{j_s}}{\longrightarrow} r_{j_1}r_{j_2}\cdots r_{j_s}\,. \end{equation} We call $\Delta=\Gamma(J)$ an \emph{admissible folding}. We let $\mathcal{A}=\mathcal{A}(\lambda)$ be the collection of admissible subsets. \end{definition} \begin{example} \label{example:folded_lambda_chain} We continue Example \ref{example:lambda_chain}. Let $J = \left\{ 1,2,3,5 \right\}$; then $\Delta=\Gamma(J)=\{ \alpha_{23},\alpha_{12}, \alpha_{31}, \alpha_{23},$ \linebreak $\alpha_{21},\alpha_{13}\}$. The folded path is shown in Figure \ref{fig:folded_chain}. We have $J^{+}=\left\{ 1,2\right\}$, $J^{-}=\left\{ 3,5 \right\}$, $\mu(J)=-\varepsilon_3$, and $\mathrm{height} (J)=2$. In Section \ref{subsection:TypeA} we will describe an easy way to verify that $J$ is admissible. \end{example} \subsection{Crystal operators} \label{subsection:crystalop} In this section we define the crystal operators $f_i$ and $e_i$. Given $J\subseteq [m]$ and $\alpha\in \Phi$, we will use the following notation: \begin{align*} I_\alpha &= I_{\alpha}(\Delta):= \left\{ i \in [m] \, | \, \gamma_i = \pm \alpha \right\}, \quad L_{\alpha}=L_{\alpha}(\Delta) := \left\{ \l{i} \, | \, i \in I_{\alpha} \right\}, \\ \widehat{I}_\alpha &= \widehat{I}_{\alpha}(\Delta):= I_{\alpha} \cup \{\infty\}, \quad \widehat{L}_{\alpha} = \widehat{L}_{\alpha}(\Delta) := L_{\alpha} \cup \{l_\alpha^{\infty} \}, \end{align*} where $l_{\alpha}^{\infty}:=\inner{\mu(J)}{\mathrm{sgn}(\alpha)\alpha^{\vee}}$. We will use $\widehat{L}_{\alpha}$ to define the crystal operators on admissible subsets. The following graphical representation of $\widehat{L}_{\alpha}$ is useful for such purposes. Let \[\widehat{I}_{\alpha}= \left\{ i_1 < i_2 < \dots < i_n \leq m<i_{n+1}=\infty \right\}\, \text{ and } \varepsilon_i := \begin{cases} \,\,\,\, 1 &\text{ if } i \not \in J\\ -1 & \text { if } i \in J \end{cases}.\, \] If $\alpha > 0$, we define the continuous piecewise-linear function $g_{\alpha}:[0,n+\frac{1}{2}] \to \mathbb{R}$ by \begin{equation} \label{eqn:piecewise-linear_graph} g_\alpha(0)= -\frac{1}{2}, \;\;\; g'_{\alpha}(x)= \begin{cases} \mathrm{sgn}(\gamma_{i_k}) & \text{ if } x \in (k-1,k-\frac{1}{2}),\, k = 1, \ldots, n\\ \varepsilon_{i_k}\mathrm{sgn}(\gamma_{i_k}) & \text{ if } x \in (k-\frac{1}{2},k),\, k=1,\ldots,n \\ \mathrm{sgn}(\inner{\gamma_{\infty}}{\alpha^{\vee}}) & \text{ if } x \in (n,n+\frac{1}{2}). \end{cases} \end{equation} If $\alpha<0$, we define $g_{\alpha}$ to be the graph obtained by reflecting $g_{-\alpha}$ in the $x$-axis. By \cite{lapcmc}, we have \begin{equation} \label{eqn:graph_height} \mathrm{sgn}(\alpha)\l{i_k}=g_\alpha\left(k-\frac{1}{2}\right),\; k=1, \dots, n, \, \text{ and }\, \mathrm{sgn}(\alpha)l_{\alpha}^{\infty} = \inner{\mu(J)}{\alpha^{\vee}} = g_{\alpha}\left(n+\frac{1}{2}\right). \end{equation} \begin{example} \label{example:graph} We continue Example \ref{example:folded_lambda_chain}. The graphs of $g_{\alpha_2}$ and $g_{\theta}$ are given in Figure \ref{fig:graph}. \end{example} \begin{figure}[h] \centering \includegraphics[scale=.45]{chain} \caption{The graphs of $g_{\alpha_2}$ and $g_{\theta}$.} \label{fig:graph} \end{figure} Let $J$ be an admissible subset.
Fix $p$, so that $\alpha_p$ is a simple root if $p>0$, or $\theta$ if $p=0$. Let $M$ be the maximum of $g_{\alpha_p}$, and let $m$ be the minimum index $i$ in $\widehat{I}_{\alpha_p}(\Delta)$ for which we have $\mathrm{sgn}(\alpha_p)\l{i}=M$. If $M>\delta_{p,0}$, then by part \eqref{rootFb} of Proposition \ref{prop:rootF} $m$ has a predecessor $k$ in $\widehat{I}_{\alpha_p}$, and we define \begin{equation} \label{eqn:rootF} f_p(J):= \begin{cases} (J \backslash \left\{ m \right\}) \cup \{ k \} & \text{ if $M>\delta_{p,0} $ } \\ \mathbf{0} & \text{ otherwise }. \end{cases} \end{equation} Now we define $e_p$. Again let $M:= \max g_{\alpha_p}$. Let $k$ be the maximum index $i$ in $I_{\alpha_p}$ for which we have $\mathrm{sgn}(\alpha_p)\l{i}=M$, and let $m$ be the successor of $k$ in $\widehat{I}_{\alpha_p}$. Define \begin{equation} \label{eqn:rootE} e_p(J):= \begin{cases} (J \backslash \left\{ k \right\}) \cup \{ m \} & \text{ if } M>\inner{\mu(J)}{\alpha_p^{\vee}} \text{ and } M \geq \delta_{p,0} \\ \mathbf{0} & \text{ otherwise. } \end{cases} \end{equation} Note that $f_p(J)=J'$ if and only if $ e_p(J')=J$. In the above definitions, when $ m= \infty $, we use the convention that $J\backslash \left\{ \infty \right\}= J \cup \left\{ \infty \right\} = J$. \begin{example} \label{example:root_op} We continue Example \ref{example:graph}. We find $f_2(J)$ by noting that $\widehat{I}_{\alpha_2}=\left\{1,4,\infty \right\}$. From $g_{\alpha_2}$ in Figure \ref{fig:graph} we can see that $\widehat{L}_{\alpha_2}=\left\{0,0,1 \right\}$, so $k=4$, $m=\infty$, and $f_2(J)=J \cup \{ 4 \} = \left\{ 1,2,3,4,5 \right\}$. We can also see from Figure \ref{fig:graph} that the maximum of $g_{\theta}$ is $1$, hence $f_0(J)=\mathbf{0}$. To compute $e_0(J)$, observe that $I_{\theta}=\left\{3,6\right\}$, with $k = 3$ and $m = 6$ its successor in $\widehat{I}_{\theta}$. So $e_0(J) = ( J \backslash \{k\} ) \cup \{ m \} = \left\{ 1,2,5,6 \right\} $. \end{example} We will prove the following theorem in Section \ref{subsection:propositions}. \begin{theorem} \label{theorem:admissible} If $J$ is an admissible subset and $f_p(J) \ne \mathbf{0}$, then $f_p(J)$ is also an admissible subset. Similarly for $e_p(J)$. \end{theorem} \subsection{Propositions and Lemmas} \label{subsection:propositions} In this section we collect the results needed for the proof of Theorem \ref{theorem:admissible}. \begin{lemma} \label{lemma:admissible} Let $w\in W$, let $\alpha$ be a simple root or $\theta$, and let $\beta$ be a positive root. Assume that $w \longrightarrow ws_{\beta}$, that $w^{-1}(\alpha)>0$, and that $s_{\beta}w^{-1}(\alpha)<0$. Then $w^{-1}(\alpha)=\beta$. \end{lemma} \begin{proof} If $s_{\alpha}w=ws_{\beta}$, then $w^{-1}(\alpha)=\pm \beta$, and $w^{-1}(\alpha)>0$ implies $w^{-1}(\alpha)=\beta$. Suppose by way of contradiction that $s_{\alpha}w\ne ws_{\beta}$. First suppose that $\alpha$ is a simple root. Since $w^{-1}(\alpha)>0 $, we have $w \lessdot s_{\alpha}w$. By assumption we have $w \longrightarrow ws_{\beta}$, hence by Proposition \ref{prop:deodhar} we have $ws_{\beta} \lessdot s_{\alpha}ws_{\beta}$. But $s_{\beta}w^{-1}(\alpha)<0$ implies $s_{\alpha}ws_{\beta}\lessdot ws_{\beta}$, which is a contradiction. Now suppose that $\alpha=\theta$. Since $w^{-1}(\theta)>0$, by Lemma \ref{lemma:theta} we have $w \qstep s_{\theta}w$, and by Proposition \ref{prop:deodhar0} $ws_{\beta} \qstep s_{\theta}ws_{\beta}$. Since $s_{\beta}w^{-1}(\theta)<0$, by Lemma \ref{lemma:theta} we have $s_{\theta}ws_{\beta } \qstep ws_{\beta} $, which is a contradiction.
\end{proof} \begin{lemma} \label{lemma:admissible2} Let $J = \left\{ j_1 < j_2 < \cdots < j_s \right\} $ be an admissible subset. Assume that $r_{j_a}\dots r_{j_1}(\alpha)>0$ and $r_{j_{b}}\dots r_{j_1}(\alpha)<0$, where $\alpha$ is a simple root or $\theta$, and $0\leq a < b$ (if $a=0$, then the first condition is void). Then there exists $i$ with $a \leq i < b$ such that $\gamma_{j_{i+1}}=\alpha$. \end{lemma} \begin{proof} We can find $i$ with $a \leq i < b$ such that $r_{j_i}\dots r_{j_1}(\alpha)>0$ and $r_{j_{i+1}}\dots r_{j_1}(\alpha)<0$. By Lemma \ref{lemma:admissible}, we have $\beta_{j_{i+1}}= r_{j_i}\dots r_{j_1}(\alpha)$. This means that $\gamma_{j_{i+1}}=r_{j_1}\dots r_{j_i}(\beta_{j_{i+1}})=\alpha$. \end{proof} \begin{proposition} \label{prop:A} Let $J = \left\{ j_1 < j_2 < \cdots < j_s \right\} $ be an admissible subset. Assume that $\alpha$ is a simple root or $\theta$, with $I_{\alpha}\ne \emptyset$. Let $m \in I_{\alpha}$ be an element whose predecessor $k$ (in $I_{\alpha}$) satisfies $(\gamma_k,\varepsilon_k)\in \{(\alpha,1),(-\alpha,-1)\}.$ Then we have $\gamma_m=\alpha$. \end{proposition} \begin{proof} First suppose that $(\gamma_k,\varepsilon_k)=(\alpha,1)$. Note that $\gamma_{i}=\beta_{i}>0$ for $i \leq j_1$. Assume that $\gamma_m=-\alpha$. Let us define the index $b$ by the condition $j_b < m \leq j_{b+1}$ (possibly $b=s$, in which case the second inequality is dropped). We define the index $a$ by the condition $j_{a} < k < j_{a+1}$ (possibly $a=0$, in which case the first inequality is dropped). We clearly have $r_{j_1}\dots r_{j_b}(\beta_m)=-\alpha$, which implies $r_{j_b}\dots r_{j_1}(\alpha)<0$. We also have $r_{j_1}\dots r_{j_a}(\beta_k)=\alpha$, so $r_{j_a}\dots r_{j_1}(\alpha)>0$ (hence $b \ne a$). Note that if $\alpha=\theta$, then $a>0$. We can now apply Lemma \ref{lemma:admissible2} to conclude that $\gamma_{j_i} =\alpha$ for some $i \in [a+1,b]$. Since $k<j_{a+1} \leq j_{b} < m$, this contradicts the assumption that $k$ is the predecessor of $m$ in $I_{\alpha}$. Now suppose that $(\gamma_k,\varepsilon_k)=(-\alpha,-1)$. Assume that $\gamma_m = -\alpha$, and define $b$ as in the previous case. Again we have $r_{j_b}\dots r_{j_1}(\alpha)<0$. Define $a$ by the condition $j_a = k < j_{a+1}$. Hence $r_{j_1}\dots r_{j_{a-1}}(\beta_{j_a})=-\alpha$, so $r_{j_1}\dots r_{j_{a}}(\beta_{j_a})=\alpha$, and $r_{j_a}\dots r_{j_1}(\alpha)>0$; again this leads to a contradiction. \end{proof} \begin{proposition} \label{prop:A1} Let $J$ be an admissible subset. Assume that $\alpha$ is a simple root for which $I_{\alpha}\ne \emptyset$, and let $m$ be the minimum of $I_{\alpha}$. Then we have $\gamma_m=\alpha$. \end{proposition} \begin{proof} The proof of Proposition \ref{prop:A} carries through with $a=0$. \end{proof} \begin{proposition} \label{prop:B} Let $J = \left\{ j_1 < j_2 < \cdots < j_s \right\} $ be an admissible subset. Assume that $\alpha$ is a simple root or $\theta$. Suppose that $I_{\alpha}\ne \emptyset$, and that $(\gamma_m,\varepsilon_m) \in\{ (\alpha,1), (-\alpha,-1) \}$ for $m=\max I_{\alpha}.$ Then we have $\langle \gamma_{\infty},\alpha^{\vee} \rangle >0$. \end{proposition} \begin{proof} Assume that the conclusion fails, which means that $r_{j_s}\dots r_{j_1}(\alpha)<0$. First suppose that $(\gamma_m,\varepsilon_m)=(\alpha,1)$. Define the index $a$ by the condition $j_a < m < j_{a+1}$ (if $a=0$ or $a=s$, one of the two inequalities is dropped).
We have $r_{j_1}\dots r_{j_{a}}(\beta_m)=\alpha$, so $r_{j_a}\dots r_{j_1}(\alpha)>0$ (hence $a \ne s$). Note that if $\alpha=\theta$, then $a>0$. We now apply Lemma \ref{lemma:admissible2} to conclude that $\gamma_{j_i}=\alpha$ for some $i \in [a+1,s]$. Since $m<j_{a+1}\leq j_i$, this contradicts the fact that $m=\max I_{\alpha}$. Now suppose that $(\gamma_m,\varepsilon_m)=(-\alpha,-1)$; in this case we define the index $a$ by $j_a = m <j_{a+1}$. We have $r_{j_1}\dots r_{j_{a-1}}(\beta_{j_a})=-\alpha$, so $r_{j_1}\dots r_{j_{a}}(\beta_{j_a})=\alpha$, and $r_{j_a}\dots r_{j_1}(\alpha)>0$. Again we arrive at a contradiction by Lemma \ref{lemma:admissible2}. \end{proof} \begin{proposition} \label{prop:B1} Let $J$ be an admissible subset. Assume that, for some simple root $\alpha$, we have $I_{\alpha}= \emptyset$. Then $\langle \gamma_{\infty},\alpha^{\vee} \rangle >0$. \end{proposition} \begin{proof} The proof of Proposition \ref{prop:B} carries through with $a=0$. \end{proof} Let us now fix a simple root $\alpha$. We will rephrase some of the above results in a simple way in terms of $g_{\alpha}$, and we will deduce some consequences. Assume that $I_{\alpha}=\left\{ i_1 < i_2 < \dots < i_n \right\}$, so that $g_{\alpha}$ is defined on $[0,n+\frac{1}{2}]$, and let $M$ be the maximum of $g_{\alpha}$. Note first that the function $g_{\alpha}$ is determined by the sequence $(\sigma_1, \dots, \sigma_{n+1})$, where $\sigma_j = (\sigma_{j,1},\sigma_{j,2}):= (\mathrm{sgn}(\gamma_{i_j}), \varepsilon_{i_j}\mathrm{sgn} (\gamma_{i_j}))$ for $1\leq j\leq n$, and $\sigma_{n+1}= \sigma_{n+1,1}:=\mathrm{sgn} (\langle \gamma_{\infty}, \alpha^{\vee} \rangle)$. From Propositions \ref{prop:A}, \ref{prop:A1}, \ref{prop:B} and \ref{prop:B1} we have the following restrictions: \begin{enumerate}[(C1)] \item $\sigma_{1,1}=1$; \item $\sigma_{j,2}=1 \Rightarrow \sigma_{j+1,1}=1$. \end{enumerate} \begin{proposition} \label{prop:main1} If $g_{\alpha}(x)=M$, then $x=m+\frac{1}{2}$ for some $0 \leq m \leq n$, $\sigma_{m+1} \in \left\{ (1,-1),1 \right\}$, and $M \in \mathbb{Z}_{\geq 0}$. \end{proposition} \begin{proof} By (C1) we have $M\geq 0$, and therefore $g_{\alpha}(0) = - \frac{1}{2} \ne M$. For $m \in \left\{ 1, \dots, n \right\}$, $ g_{\alpha}(m)=M$ implies $\sigma_{m,2}=1$, and then (C2) leads to a contradiction. The last statement is obvious. \end{proof} We use Proposition \ref{prop:main1} implicitly in the proofs of Propositions \ref{prop:main2} and \ref{prop:main3}. \begin{proposition} \label{prop:main2} Assume that $M>0$, and let $m$ be such that $m+ \frac{1}{2} = \min g_{\alpha}^{-1}(M)$. We have $m>0$, $\sigma_m = (1,1)$, and $g_{\alpha}(m-\frac{1}{2})= M-1$. Moreover, we have $g_{\alpha}(x) \leq M-1$ for $0 \leq x \leq m- \frac{1}{2}$. \end{proposition} \begin{proof} By construction we have $g_{\alpha}(\frac{1}{2})\leq 0$, so $m>0$. If $\sigma_m \in \{(-1,-1), (1,-1)\}$, then we have $g_{\alpha}(m-\frac{1}{2})=M$, which contradicts the definition of $m$. If $\sigma_m=(-1,1)$, then $g_{\alpha}(m-1)=M-\frac{1}{2}$; by (C1) we have $m \geq 2 $, and by (C2) $\sigma_{m-1,2}=-1$, which implies that $g_{\alpha}(m-\frac{3}{2})=M$, contradicting the definition of $m$. Hence $\sigma_m=(1,1)$. Suppose by way of contradiction that the last statement of the proposition fails. Then there exists a $k$ with $1 \leq k \leq m-1$ such that $g_{\alpha}(k-1)=M-\frac{1}{2}$ and $\sigma_{k,1}=-1$. Condition (C1) implies that $k \geq 2$, and condition (C2) implies $\sigma_{k-1,2}=-1$. This implies $g_{\alpha}(k-\frac{3}{2})=M$, contradicting the definition of $m$.
\end{proof} \begin{proposition} \label{prop:main3} Assume that $M> g_{\alpha}(n+\frac{1}{2})$, and let $k$ be such that $k-\frac{1}{2}= \max g_{\alpha}^{-1}(M)$. We have $k \leq n, \sigma_{k+1} \in \{ (-1, -1), -1\}$, and $g_{\alpha}(k+\frac{1}{2}) =M-1$. Moreover, we have $g_{\alpha}(x) \leq M-1$ for $k + \frac{1}{2} \leq x \leq n+ \frac{1}{2}$. \end{proposition} \begin{proof} Since $ M> g_{\alpha}(n+\frac{1}{2})$, it follows that $k\leq n$. If $\sigma_{k+1} \in \{ (1,1),(1,-1),1 \}$ then $g_{\alpha}(k+\frac{1}{2})=M$, contradicting the choice of $k$. If $\sigma_{k+1}=(-1,1)$ then by (C2) $\sigma_{k+2,1}=1$, and $g_{\alpha}(k+\frac{3}{2})=M$, contradicting the choice of $k$. Hence $\sigma_{k+1} \in \{(-1,-1),-1 \}$. Suppose by way of contradiction the last statement in the corollary fails. Then there exists an $m$ with $k+1 \leq m \leq n-1$ such that $g_{\alpha}(m+1)=M-\frac{1}{2}$ and $\sigma_{m+1,2}=1$. Condition (C2) implies that $\sigma_{m+2,1}=1$, so $g_{\alpha}(m+\frac{3}{2})=M$, contradicting the choice of $k$. \end{proof} We now consider $g_{\theta}$. Since $\theta<0$, we have $\sigma_j = (\sigma_{j,1},\sigma_{j,2}):= (-\mathrm{sgn}(\gamma_{i_j}),-\varepsilon_{i_j}\mathrm{sgn} (\gamma_{i_j}))$ for $1 \leq j \leq n$, and $\sigma_{n+1}=\sigma_{n+1,1}:=\mathrm{sgn} (\langle \gamma_{\infty}, \theta^{\vee} \rangle)$. From Propositions \ref{prop:A} and \ref{prop:B} we conclude that condition (C2) holds for $g_{\theta}$. We can replace condition (C1) by restricting to admissible subsets $J$ where $M$ is large enough, as we will now explain. In the proof of Proposition \ref{prop:main1} (C1) is needed to conclude that $g_{\alpha}(0) \ne M$. It is possible that $g_{\theta}(0)=M$, but if we restrict to $g_{\theta}$ where $M\geq 1$ we can conclude that $g_{\theta}(0)=\frac{1}{2} \ne M$, and the rest of the proof follows through. In the proof of Proposition \ref{prop:main2} (C1) allows us to conclude that if $g_{\alpha}(m-1)=M-\frac{1}{2}$, then $m\geq 2$. We can make this conclusion about $g_{\theta}$ if we assume that $M \geq 2$. In this case $M- \frac{1}{2} \geq \frac{3}{2}>\frac{1}{2}=g_{\theta}(0)$. So $m-1 \geq 1$ and $m \geq 2$. Note that Proposition \ref{prop:main3} depends on Proposition \ref{prop:main1} so we need to assume $M \geq 1$ here too. We have therefore proved the following propositions. \begin{proposition} \label{prop:main1_theta} Suppose $M\geq 1$. If $g_{\theta}(x)=M$, then $x=m+\frac{1}{2}$ for $0 \leq m \leq n$, $\sigma_{m+1} \in \left\{ (1,-1),1 \right\}$, and $M \in \mathbb{Z}_{\geq 1}$. \end{proposition} \begin{proposition} \label{prop:main2_theta} Assume that $M\geq2$, and let $m$ be such that $m+ \frac{1}{2} = \min g_{\theta}^{-1}(M)$. We have $m>0$, $\sigma_m = (1,1)$, and $g_{\theta}(m-\frac{1}{2})= M-1$. Moreover, we have $g_{\theta}(x) \leq M-1$ for $0 \leq x \leq m- \frac{1}{2}$. \end{proposition} \begin{proposition} \label{prop:main3_theta} Assume $M\geq 1$. Assume that $M> g_{\theta}(n+\frac{1}{2})$, and let $k$ be such that $k-\frac{1}{2}= \max g_{\theta}^{-1}(M)$. We have $k \leq n, \sigma_{k+1} \in \{ (-1, -1), -1\}$, and $g_{\theta}(k+\frac{1}{2}) =M-1$. Moreover, we have $g_{\theta}(x) \leq M-1$ for $k + \frac{1}{2} \leq x \leq n+ \frac{1}{2}$. \end{proposition} Recall from from Section \ref{subsection:crystalop} the definitions of the finite sequences $I_{\alpha}(\Delta)$, $\widehat{I}_{\alpha}(\Delta)$, $L_{\alpha}(\Delta)$, and $\widehat{L}_{\alpha}(\Delta)$, $g_{\alpha}$, where $\alpha$ is a root, as well as the related notation. 
Fix $p$, then $\alpha_p$, is a simple root if $p>0$, or $\theta$ if $p=0$. Recall the convention $J\backslash \left\{ \infty \right\}= J \cup \left\{ \infty \right\} = J$. Let $M$ be the maximum of $g_{\alpha_p}$, and suppose that $M \geq \delta_{p,0}$. Note this is always true for $p \ne 0$ by Proposition \ref{prop:main1}. Let $m$ be the minimum index $i$ in $\widehat{I}_{\alpha_p}(\Delta)$ for which we have $\mathrm{sgn}(\alpha_p)\l{i}=M$. \begin{proposition} Given the above setup, the following hold. \label{prop:rootF} \begin{enumerate} \item If $m \ne \infty$, then $\gamma_m=\alpha_p$ and $m \in J$. \label{rootFa} \item If $M>\delta_{p,0}$ then $m$ has a predecessor $k$ in $\widehat{I}_{\alpha_p}(\Delta)$ such that \[ \gamma_k=\alpha_p,\, k \not \in J, \, \mbox{and }\, \mathrm{sgn}(\alpha_p)\l{k} = M-1. \]\label{rootFb} \item We have $ \mu(f_p(J)) = \mu(J) - \alpha_p $. \label{rootFc} \end{enumerate} \end{proposition} \begin{proof} Parts \eqref{rootFa} and \eqref{rootFb} are immediate consequences of Propositions \ref{prop:main1} - \ref{prop:main3_theta}. For part \eqref{rootFc}, the proof from \citep{lapcmc} can be applied in our context. We repeat it here. Let $ \widehat{ t }_j:=s_{|\gamma_j|, -\l{j}}$, recall \[ H_{|\gamma_k|,-\l{k}}=\widehat{r}_{j_1}\widehat{r}_{j_2}\dots \widehat{r}_{j_p}(H_{\beta_k,-l_k})\,; \] where $j_p$ is the largest folding position less than $k$, and $ \widehat{ r }_j = s_{ \beta_j, -\l{j}} $. Then \[ \widehat{t}_{j_1} = \widehat{r}_{j_1},\, \widehat{t}_{j_2} = \widehat{r}_{j_1}\widehat{r}_{j_2}\widehat{r}_{j_1},\, \widehat{t}_{j_3} = \widehat{r}_{j_1}\widehat{r}_{j_2}\widehat{r}_{j_3}\widehat{r}_{j_2}\widehat{r}_{j_1}\, \ldots\;. \] This follows from the following basic fact \citep[Corollary 4.2]{humrgc}; If $w$ is an element of the affine Weyl group and $wH_{ \alpha, k} = H_{ \beta, l}$, then $ws_{ \alpha, k}w^{-1} = s_{ \beta, l}$. Let $J = \left\{ j_1 < j_2 < \cdots < j_s \right\}$ and let $\mu=\mu(J)= - \widehat{ r }_{j_1} \widehat{ r }_{j_2} \cdots \widehat{ r }_{j_s}(- \lambda ) $. It follows that the weight of $F_p(J)$ is $-\widehat{ t}_k \widehat{ t}_m (-\mu)$ if $ m \ne \infty $ and $-\widehat{ t}_k (-\mu) $ otherwise. Using the formula $ s_{ \alpha ,k} (\nu) = s_{ \alpha }(\nu) + k \alpha_p $ we compute (in both cases) \[ \mu(f_p(J)) = \mu + (\l{k} - M) \alpha_p = \mu - \alpha_p\;. \] \end{proof} Let $k$ be the maximum index $i$ in $I_{\alpha_p}(\Delta)$ for which we have $\mathrm{sgn}(\alpha_p)\l{i}=M$, and let $m$ be the successor of $k$ in $\widehat{I}_{\alpha_p}(\Delta)$. The following analog of Proposition \ref{prop:rootF} is proved in a similar way \begin{proposition} Given the above setup, the following hold. \label{prop:rootE} \begin{enumerate} \item We have $\gamma_k=\alpha_p$ and $k \in J$. \item If $m\ne \infty$ then \[\gamma_m=-\alpha_p,\, m \not \in J,\, \mbox{ and } \mathrm{sgn}(\alpha_p)\l{m} = M-1.\] \item We have $ \mu(e_p(J)) = \mu(J) + \alpha_p $. \end{enumerate} \end{proposition} \begin{proof}[Proof of Theorem \ref{theorem:admissible}] Suppose $p \ne 0$. We consider $f_p$ first. The cases corresponding to $m\ne \infty$ and $m=\infty$ can be proved in similar ways, so we only consider the first case. Let $J=\left\{ j_1 < j_2< \ldots < j_s \right\}$, and let $w_i = r_{j_1}r_{j_2} \dots r_{j_i}$. Based on Proposition \ref{prop:rootF}, let $a<b$ be such that \[ j_a < k < j_{a+1} < \dots < j_b = m < j_{b+1} ; \] if $a=0$ or $b+1>s$, then the corresponding indices $j_a$, respectively $j_{b+1}$, are missing. 
To show that $(J \backslash \left\{ m \right\}) \cup \left\{ k \right\}$ is an admissible subset, it is enough to prove \begin{equation} w_a \longrightarrow w_ar_{k} \longrightarrow w_ar_k r_{j_{a+1}} \longrightarrow \dots \longrightarrow w_ar_k r_{j_{a+1}}\dots r_{j_{b-1}} = w_b. \label{newadmissible} \end{equation} By our choice of $k$, we have \begin{align} w_a(\beta_k)=\alpha_p \iff w_a^{-1}(\alpha_p)=\beta_k >0 \iff w_a \lessdot s_pw_a = w_ar_k. \label{admissiblebasecase} \end{align} So we can rewrite (\ref{newadmissible}) as \begin{equation} w_a \longrightarrow s_pw_a \longrightarrow s_pw_{a+1} \longrightarrow \dots \longrightarrow s_pw_{b-1} = w_b. \label{newadmissibleb} \end{equation} We will now prove that (\ref{newadmissibleb}) is a path in the quantum Bruhat graph. Observe \begin{align*} s_p w_{i-1}=w_i \Longleftrightarrow w_{i-1}(\beta_{j_i})=\pm \alpha_p \Longleftrightarrow j_i \in I_{\alpha}. \end{align*} Our choice of $k$ and $b$ implies that we have \begin{equation} s_p w_{i-1} \ne w_i \; \text{ for } a<i<b \label{equation:tempinductionstep} \end{equation} (otherwise $j_i \in I_{\alpha}$ for $k<j_i<j_b$), and $s_p w_{b-1}=w_b$ since $j_b \in I_{\alpha}$. Since $J$ is admissible, we have \begin{equation} w_{i-1} \longrightarrow w_i. \label{eqn:diamond_lower} \end{equation} With (\ref{admissiblebasecase}) as the base case, assume by induction that $w_{i-1}\lessdot s_p w_{i-1}$. We can apply Proposition \ref{prop:deodhar} to conclude that $w_{i} \lessdot s_pw_{i}$ and \begin{equation} \label{eqn:diamond_upper} s_pw_{i-1} \longrightarrow s_pw_{i} \; \mbox{ for } a<i<b. \end{equation} The proof for $e_p(J)$ is similar. We let $a<b$ such that \[ j_a < k = j_{a+1} < \dots < j_b < m < j_{b+1}. \] First suppose that $m\ne \infty$. In this case we need to prove that \begin{equation} w_a \longrightarrow w_ar_{j_{a+2}} \longrightarrow \dots \longrightarrow w_a r_{j_{a+2}}\dots r_{j_{b}} \longrightarrow w_a r_{j_{a+2}}\dots r_{j_{b}}r_{m} = w_b. \label{Eopnewadmissible} \end{equation} By choice of $k$, (\ref{admissiblebasecase}) still holds, and by choice of $m$, $w_b(\beta_m)=-\alpha$. From these two observations we have $s_pw_b=w_br_m$, and the equality on the right hand side of (\ref{Eopnewadmissible}). We also have $w_b^{-1}(\alpha)=-\beta_m < 0$ so $s_pw_b \lessdot w_b$, which we use as our base case, and assume by induction that $s_pw_i \lessdot w_i$, for $a+1 < i < b+1$. Using (\ref{eqn:diamond_lower}) and (\ref{equation:tempinductionstep}) for $a+1<i<b+1$, we can apply Proposition \ref{prop:deodhar} to conclude that $s_pw_{i-1} \lessdot w_{i-1}$ and $s_pw_{i-1} \longrightarrow s_pw_{i}$ for $a+1<i<b+1$. Finally $s_pw_{a+1}=w_a$ by (\ref{admissiblebasecase}), and we showed (\ref{Eopnewadmissible}). If $m=\infty$, then we only need to show \[ w_a \longrightarrow w_ar_{j_{a+2}} \longrightarrow \dots \longrightarrow w_a r_{j_{a+2}}\dots r_{j_{b}}. \] By choice of $k,m$ and \citep[Proposition 5.5]{lapcmc}, we have $w_b^{-1}(\alpha) <0$, hence $s_pw_b \lessdot w_b$, which we use as our base case, and assume by induction that $s_pw_i \lessdot w_i$. The rest of the proof is similar to the case $m \ne \infty$. The above proof follows through for $p=0$ with $\lessdot$ replaced by $\qstep$ with the help of Lemma \ref{lemma:theta} and Proposition \ref{prop:deodhar0}. \end{proof} \subsection{Main application} The setup is that of untwisted affine root systems. Part \eqref{enumerate:conjecture_iso} of the theorem below is proved for a particular choice of a $ \lambda $-chain. 
\begin{theorem}[\cite{unialcmod_ea,unialcmod, KRcrystals_energy}] \label{mainconj} ~ \begin{enumerate} \item $\mathcal{A}(\lambda)$ is isomorphic to the subgraph of $B^{\otimes \lambda} $ containing only the dual Demazure arrows (c.f. Definition \ref{definition:pdemazure_arrows}). \label{enumerate:conjecture_iso} \item If $b$ corresponds to $J$ under the isomorphism in part (\ref{enumerate:conjecture_iso}), then the energy is given by $D(b)=-\mathrm{height} (J)$. \end{enumerate} \end{theorem} \section{The quantum alcove model in types \texorpdfstring{$A$ and $C$}{A and C}} \label{modelac} \subsection{Type \texorpdfstring{$A$}{A}} \label{subsection:TypeA} We start with the basic facts about the root system of type $A_{n-1}$. We can identify the space $\mathfrak{h}_\mathbb{R}^*$ with the quotient $V:=\mathbb{R}^n/\mathbb{R}(1,\ldots,1)$, where $\mathbb{R}(1,\ldots,1)$ denotes the subspace in $\mathbb{R}^n$ spanned by the vector $(1,\ldots,1)$. Let $\varepsilon_1,\ldots,\varepsilon_n\in V$ be the images of the coordinate vectors in $\mathbb{R}^n$. The root system is $\Phi=\{\alpha_{ij}:=\varepsilon_i-\varepsilon_j \::\: i\ne j,\ 1\leq i,j\leq n\}$. The simple roots are $\alpha_i=\alpha_{i,i+1}$, for $i=1,\ldots,n-1$. The highest root $\widetilde{\alpha}=\alpha_{1n}$. We let $\alpha_0=\theta=\alpha_{n1}$. The weight lattice is $\Lambda=\mathbb{Z}^n/\mathbb{Z}(1,\ldots,1)$. The fundamental weights are $\omega_i = \varepsilon_1+\ldots +\varepsilon_i$, for $i=1,\ldots,n-1$. A dominant weight $\lambda=\lambda_1\varepsilon_1+\ldots+\lambda_{n-1}\varepsilon_{n-1}$ is identified with the partition $(\lambda_{1}\geq \lambda_{2}\geq \ldots \geq \lambda_{n-1}\geq\lambda_n=0)$ having at most $n-1$ parts. Note that $\rho=(n-1,n-2,\ldots,0)$. Considering the Young diagram of the dominant weight $\lambda$ as a concatenation of columns, whose heights are $\lambda_1',\lambda_2',\ldots$, corresponds to expressing $\lambda$ as $\omega_{\lambda_1'}+\omega_{\lambda_2'}+\ldots$ (as usual, $\lambda'$ is the conjugate partition to $\lambda$). The Weyl group $W$ is the symmetric group $S_n$, which acts on $V$ by permuting the coordinates $\varepsilon_1,\ldots,\varepsilon_n$. Permutations $w\in S_n$ are written in one-line notation $w=w(1)\ldots w(n)$. For simplicity, we use the same notation $(i,j)$ with $1\le i<j\le n$ for the root $\alpha_{ij}$ and the reflection $s_{\alpha_{ij}}$, which is the transposition $t_{ij}$ of $i$ and $j$. We now consider the specialization of the alcove model to type $A$. For any $k=1, \ldots , n-1$, we have the following $\omega_k$-chain, from $A_{\circ}$ to $A_{-\omega_k}$, denoted by $\Gamma(k)$ \cite{lapcmc}: \begin{equation} \begin{matrix*}[l] ( (k,k+1),& (k,k+2)&, \ldots,& (k,n), \\ \phantom{(} (k-1,k+1),& (k-1,k+2)&, \ldots,& (k-1,n), \\ \phantom{(,}\vdots & \phantom{,}\vdots & & \phantom{,}\vdots \\ \phantom{(}(1,k+1),& (1, k+2)&, \ldots,& (1,n)) \,. \end{matrix*} \label{eqn:lambdachainA} \end{equation} \begin{example} \label{example:broken_column_type_A} For $n=4, \Gamma(2)$ can be visualized as obtained from the following broken column, by pairing row numbers in the top and bottom parts in the prescribed order. \[ \tableau{ 1 \\ 2 \\ \\ 3\\ 4 }\quad, \quad \Gamma(2)=\{ (2,3),(2,4), (1,3),(1,4) \}\,. \] Note the top part of the above broken column corresponds to $\omega_2$. \end{example} We construct a $\lambda$-chain $\Gamma=\left( \beta_1, \beta_2, \dots , \beta_m \right)$ as the concatenation $\Gamma:= \Gamma^{1}\dots\Gamma^{\lambda_1}$, where $\Gamma^{j}=\Gamma(\lambda'_j)$. 
Let $J=\left\{ j_1 < \dots < j_s \right\}$ be a set of folding positions in $\Gamma$, not necessarily admissible, and let $T$ be the corresponding list of roots of $\Gamma$. The factorization of $\Gamma$ induces a factorization on $T$ as $T=T^{1}T^2 \dots T^{\lambda_1}$, and on $\Delta=\Gamma(J)$ as $\Delta=\Delta^1 \dots \Delta^{\lambda_1}$. We denote by $T^{1} \dots T^{j}$ the permutation obtained via multiplication by the transpositions in $T^{1}, \dots , T^{j}$ considered from left to right. For $w\in W$, written $w=w_1w_2\dots w_n$, let $w[i,j]=w_i\dots w_j$. To each $J$ we can associate a filling of a Young diagram $\lambda$. \begin{definition} \label{definition:fill} Let $\pi_{j}=\pi_{j}(T)=T^1 \dots T^j$. We define the \emph{filling map}, which produces a filling of the Young diagram $\lambda$, by \begin{equation} \label{eqn:filling_map} \mathrm{fill}(J)=\mathrm{fill}(T)=C_{1}\dots C_{\lambda_1};\; \mbox{ here } C_{i}=\pi_i[1,\lambda'_i]. \end{equation} \end{definition} We need the circular order $\prec_i$ on $[n]$ starting at $i$, namely $i\prec_i i+1\prec_i\ldots \prec_i n\prec_i 1\prec_i\ldots\prec_i i-1$. It is convenient to think of this order in terms of the numbers $1,\ldots,n$ arranged on a circle clockwise. We make the convention that, whenever we write $a\prec b\prec c\prec\ldots$, we refer to the circular order $\prec=\prec_a$. We have the following description of the edges of the quantum Bruhat graph in type $A$. \begin{proposition}[\cite{Lenart}] \label{prop:quantum_bruhat_order_type_A} For $1\leq i<j\leq n$, we have an edge $w \stackrel{(i,j)}{\longrightarrow} w(i,j)$ if and only if there is no $k$ such that $i<k<j$ and $w(i) \prec w(k) \prec w(j)$. \end{proposition} \begin{example} \label{example:filling_map} Let $n=3$ and $\lambda=(4,3,0)$, which is identified with $4\varepsilon_1 + 3\varepsilon_2 = 3\omega_2 +\omega_1$, and corresponds to the Young diagram $\, \tableau{ { } & { } & { } & { } \\ { } & { } & {} }$. We have \[\Gamma =\Gamma^1 \Gamma^2 \Gamma^3\Gamma^4 = \Gamma(2)\Gamma(2)\Gamma(2)\Gamma(1) = \{ \underline{(2,3)},\underline{(1,3)} \,|\, \underline{(2,3)}, (1,3) \,|\, \underline{(2,3)},(1,3)\,|\,\underline{(1,2)},(1,3) \}, \] where we underlined the roots in positions $J=\{1,2,3,5,7 \}$. Then \[T= \{(2,3),(1,3)\,|\,(2,3)\,|\,(2,3)\,|\,(1,2) \},\, \mbox{ and } \] \begin{equation} \label{eqn:delta_factorization} \Gamma(J)=\Delta=\Delta^1\Delta^2\Delta^3\Delta^4= \{\underline{(2,3)}, \underline{(1,2)} \, | \, \underline{(3,1)}, (2,3) \, | \, \underline{(1,3)}, (2,1)\,|\, \underline{(2,3)},(3,1) \}, \end{equation} where we again underlined the folding positions. We write permutations in (\ref{eqn:admissible}) as broken columns. Based on Proposition \ref{prop:quantum_bruhat_order_type_A}, $J$ is admissible since \begin{equation} \tableau{1 \\ \mathbf{2} \\ \\ \mathbf{3}} \lessdot \tableau{\mathbf{1} \\ 3 \\ \\ \mathbf{2}} \lessdot \tableau{2 \\ 3 \\ \\ 1} \,|\, \tableau{2 \\ \mathbf{3} \\ \\ \mathbf{1}} \qstep \tableau{2 \\ 1 \\ \\ 3} \,|\, \tableau{2 \\ \mathbf{1} \\ \\ \mathbf{3}} \lessdot \tableau{2 \\ 3 \\ \\ 1} \,|\, \tableau{\mathbf{2} \\ \\ \mathbf{3} \\ 1} \lessdot \tableau{3 \\ \\ 2 \\ 1} \,|. \label{eqn:admissible_chain} \end{equation} By considering the top part of the last column in each segment and by concatenating these columns left to right, we obtain $\mathrm{fill}(J)$, i.e., $ \mathrm{fill}(J) = \tableau{2 & 2 &2 & 3 \\ 3 & 1& 3 }$. 
\end{example} \begin{definition} \label{definition:sfill} We define the \emph{sorted filling map} $\mathrm{sfill}(J)$ by sorting ascendingly the columns of $\mathrm{fill}(J)$. \end{definition} \begin{theorem}[\cite{Lenart}] \label{theorem:bijection_type_A} The map $\mathrm{sfill}$ is a bijection between $\mathcal{A}(\lambda)$ and $B^{\otimes \lambda} $. \end{theorem} \begin{theorem} \label{theorem:crystal_isomorphism} The map $\mathrm{sfill}$ preserves the affine crystal structures, with respect to dual Demazure arrows. In other words, given $\mathrm{sfill}(J)=b$, there is a dual Demazure arrow $b\rightarrow f_i(b)$ if and only if $f_i(J)\ne \mathbf{0}$, and we have $f_i(b)=\mathrm{sfill}(f_i(J))$. \end{theorem} \begin{remark} In type $A_2$, consider $\lambda=(3,2,0)$ and $J=\{1,2,3,5\}$ (cf. Examples \ref{example:lambda_chain} - \ref{example:graph}). One can check that $J$ is an admissible subset, $b=\mathrm{sfill}(J)= \tableau{ 2 & 1 & 1 \\ 3 & 2}$, and $\mathrm{sfill}(\emptyset)= \tableau{1 & 1 & 1 \\ 2 & 2}$. Since $\varphi_0(b)=1$, $b \to f_0(b)$ is not a dual Demazure arrow, and from Example \ref{example:graph} $f_0(J)=\mathbf{0}$. From Example \ref{example:root0}, $f_0(b)= \mathrm{sfill}(\emptyset)$, so it would be desirable to have $f_0(J)=\emptyset$. In general, there may be many changes to an admissible subset for arrows that are not dual Demazure, and these changes are hard to control. \end{remark} The main idea of the proof of Theorem \ref{theorem:crystal_isomorphism} is the following. The signature of a filling, used to define the crystal operator $f_i$, can be interpreted as a graph similar to the graph of $g_{\alpha_i}$, which is used to define the crystal operator on the corresponding admissible subsequence. The link between the two graphs is given by Lemma \ref{lemma:height_counting} below, called the height counting lemma, which we now explain. Let $N_c(\sigma)$ denote the number of entries $c$ in a filling $\sigma$. Let $\mathrm{ct}(\sigma)=(N_1(\sigma), \dots, N_n(\sigma))$ be the content of $\sigma$. Let $\sigma[q]$ be the filling consisting of the columns $1,2, \dots , q$ of $\sigma$. Recall the factorization of $\Delta$ illustrated in \eqref{eqn:delta_factorization} and the heights $l_k^\Delta$ defined in (\ref{deflev}). \begin{lemma}[\cite{LenartHHL}, Proposition 3.6] \label{lemma:weight} Let $J \subseteq [m]$, and $\sigma=\mathrm{fill}(J)$. Then we have $\mu(J)=\mathrm{ct}(\sigma)$. \end{lemma} \begin{corollary} \label{corollary:linfinity} Let $J \subseteq [m]$, $\sigma=\mathrm{fill}(J)$, and $\alpha \in \Phi$. Then $\mathrm{sgn}(\alpha)l_{\alpha}^{\infty} = \inner{\mathrm{ct}(\sigma)}{\alpha^{\vee}}$. \end{corollary} \begin{lemma}[\cite{LenartHHL}, Proposition 4.1] \label{lemma:height_counting} Let $J \subseteq [m]$, and $\sigma=\mathrm{fill}(J)$. For a fixed $k$, let $\gamma_k=(c,d)$ be a root in $\Delta^{q+1}$. We have \[ \mathrm{sgn}(\gamma_k)\,\l{k} = \langle \mathrm{ct}(\sigma[q]),\gamma_k^{\vee}\rangle = N_c(\sigma[q]) - N_d(\sigma[q]). \] \end{lemma} We now introduce notation to be used for the remainder of this section. Let $p \in \{1, \dots, n-1 \}$. Let $J$ be an admissible sequence and let $\sigma=\mathrm{sfill}(J)$. Let $a_i=\inner{\mathrm{ct}(C_i)}{\alpha_p^{\vee}}$ and note that $a_i\in \left\{1,-1,0\right\}$. Where $a_i =1,-1$ corresponds to $C_i$ containing $p$, $p+1$ respectively, and $a_i = 0$ corresponds to $C_i$ containing both $p$ and $p+1$ or neither of them. 
The sequence $a_i$ corresponds, in an obvious way, to the $p$-signature from section \ref{subsection:KR-crystals}. Let $h_j= \inner{\mathrm{ct}(\sigma[j])}{\alpha_p^{\vee}}=\sum_{i=0}^j a_i$, with $a_0=h_0:=0$. Let $M'$ be the maximum of $h_j$, and let $m'$ be minimal with the property $h_{m'}=M'$. $M' \geq 0$, if $M'>0$, then $a_{m'}=1$ which corresponds to the rightmost $p$ in the reduced $p$-signature. It follows that $f_p$ will change the $p$ in column $m'$ of $\sigma$ to a $p+1$. The previous observations hold if we replace $\alpha_p,f_p,p$-signature with $\alpha_0,f_0,0$-signature respectively, and replace $p, p+1$ with $n,1$ respectively, hence we will choose $p \in \left\{ 0,1, \ldots, n-1 \right\}$. \begin{example} We continue with Example \ref{example:filling_map}. Let $\sigma=\mathrm{sfill}(J)=\tableau{ 2 & 1 & 2 & 3 \\ 3 & 2 & 3}$, then $f_2(\sigma)=\tableau{2 & 1 & 2 & 3 \\ 3 & 3 & 3}$. Let $p=2$ and refer to Figure \ref{fig:exampleA}. \begin{figure}[h] \centering \includegraphics[scale=.5]{exampleA} \caption{} \label{fig:exampleA} \end{figure} From the graph $g_{\alpha_2}$ for $J$, we can see that $M=1$. We note that $m=7$, with $\gamma_7 \in \Delta^4$, and $k=4$ with $\gamma_k \in \Delta^{2}$. So $f_2(J)=(J \backslash \{ 7 \}) \cup \{ 4 \} = \{1,2,3,4,5 \}$, and \[\Gamma(f_2(J))= \{ \underline{(2,3)},\underline{(1,2)} \,|\, \underline{(3,1)}, \underline{(2,3)} \,|\, \underline{(1,2)},(3,1)\,|\,(3,2),(3,1 \}, \] where we underlined roots in positions $f_2(J)$. From the graph $h_j$ for $J$, we can see that $m'=2$. \end{example} \begin{lemma} \label{lemma:chain_filling} If $\alpha_p=\gamma_k \in \Delta ^q$ with $k \not \in J$ then $a_q=1$. \end{lemma} \begin{proof} Recall $\alpha_p=\gamma_k=w(\beta_k)$, and let $\beta_k = (a,b)$. The result follows from the claim that $w(a)=\pi_q(a)$ and $w(b)=\pi_q(b)$ (c.f. Definition \ref{definition:fill}), which is a consequence of the structure of $\Gamma^q$ (c.f. \eqref{eqn:lambdachainA}) as we now explain. The only reflections in $\Gamma^q$ to the right of $\beta_k$ that affect values in positions $a$ or $b$, are $(a,b')$ for $b<b'$ and $(a',b)$ for $a'<a$. Applying these reflections (on the right) to any $w'$ satisfying $w'(a)=w(a)$ and $w'(b)=w(b)$ is not an edge in the quantum Bruhat graph; if $p \ne 0$ then the length clearly goes up by more than 1, if $p = 0$ it doesn't go down by as much as possible. \end{proof} Recall Proposition $\ref{prop:rootF}$, and the notation therein. $M$ is the maximum of $g_{\alpha_p}$, and suppose $M> \delta_{p,0}$, then $\gamma_k=\alpha_p$ with $k \not \in J$, $\mathrm{sgn}(\alpha_p)\l{k}=M-1$, and if $m \ne \infty$ then $\gamma_m=\alpha_p$ with $m \in J$. We will implicitly use the following observation when applying Lemma \ref{lemma:admissible2} in the next two proofs: if $a_i \ne 0$ then $\mathrm{sgn}(a_i) = \mathrm{sgn}(\pi_i^{-1}(\alpha_p))$. \begin{proposition} \label{proposition:max-correspondence} Let $J$ be an admissible subset, $\sigma=\mathrm{sfill}(J)$, and let $\delta_{p,q}$ be the Kronecker delta function. We have $M\geq M'$. If $M \geq \delta_{p,0}$ then $M=M'$. \end{proposition} \begin{proof} From Corollary \ref{corollary:linfinity} we have $h_{\lambda_1}=\mathrm{sgn}(\alpha)l_{\alpha_p}^{\infty}$. The case $M'=0$ is trivial, since $M\geq 0$, so suppose $M'>0$. If $M' > h_{\lambda_1}$ we can find $i<j$, with $h_i=M'$ such that $a_i>0, a_j<0$, and $a_t=0$ for $t \in (i,j)$. 
By Lemma \ref{lemma:admissible2} there exits $\gamma_{k'} = \alpha_p \in \Delta^{q+1}$ with $q \in [i,j)$, and $\mathrm{sgn}(\alpha_p)\l{k'}=h_q=h_i=M'$. Hence $M\geq M'$. If $M\geq \delta_{p,0}$, then by Proposition \ref{prop:main1}, Proposition \ref{prop:main1_theta}, \eqref{eqn:graph_height}, Lemma \ref{lemma:height_counting} and Corollary \ref{corollary:linfinity} it follows that $M \leq M'$, hence $M=M'$. \end{proof} The previous proposition states that $M=M'$ except in a few corner cases that occur when $p=0$. We will sometimes use one symbol in favor of the other to allude to the corresponding graph. \begin{proposition} \label{proposition:root_matching} Let $J$ be an admissible subset, $\sigma=\mathrm{sfill}(J)$, and suppose $\delta_{p,0}< M$, so $M=M'$ and $f_p(J)\ne \mathbf{0}$. Then $\gamma_k\in \Delta^{m'}$. If $m \ne \infty$, so $\gamma_m \in \Delta^{m''}$, then $a_i=0$ for $i \in (m',m'')$. If $m= \infty$ then $a_i=0$ for $i>m'$. \end{proposition} \begin{proof} Suppose $\gamma_k \in \Delta^j$, by Lemma \ref{lemma:chain_filling} $a_j=1$. Since $\mathrm{sgn}(\alpha_p)\l{k}=M-1=h_{j-1}$, it follows that $h_j=M$. Recall $m'$ is minimal with such property, and $a_{m'}>0$ since $M'>0$. By way of contradiction suppose that $m'<j$. It follows that the set $\left\{ i \in (m',j] \,|\, a_i \ne 0 \right\}$ is not empty. Let $t$ be its minimal element. If $a_t>0$ then $h_t > M'$ contradicting the maximality of $M'$. If $a_t<0$ then we can apply Lemma \ref{lemma:admissible2} to contradict the minimality of $m$. We conclude that $j=m'$. If $m\ne \infty$, we can use a similar proof to conclude that the set $\left\{i \in (m',m'') \,|\, a_i \ne 0 \right\}$ is empty. The case $m=\infty$ is done similarly. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:crystal_isomorphism}] We continue to use notation from the above setup. Recall that $b=\mathrm{sfill}(J)$. The statement, that there is a dual Demazure arrow $b \to f_p(b) $ if and only if $f_p(J)\ne \mathbf{0}$, follows from Proposition \ref{proposition:max-correspondence}. We next show that $f_p(b) = \mathrm{sfill}(f_p(J))$, when $f_p(J)\ne \mathbf{0}$. Since $f_p(J) \ne \mathbf{0}$, we have $M'>0$, and $f_p$ will change the $p$ in column $m'$ to $p+1$ ($f_0$ will change $n$ to $1$ and sort the column). Let $J = \{j_1 < \dots < j_s \}$ be an admissible subset, and let $w_i=r_{j_1} \dots r_{j_i}$ be the corresponding sequence. The filling $\mathrm{fill}(J)$ is constructed from a subsequence of $w_i$ (cf. Definition \ref{definition:fill}). Suppose $m\ne \infty$, the case $m=\infty$ being proved similarly. There exist $a<b$ such that \[ j_a < k < j_{a+1} < \cdots < j_b = m < j_{b+1}\,; \] if $a=0$ or $b+1 > s$, then the corresponding indices $j_a$, respectively $j_{b+1}$ are missing. The sequence associated to $f_p(J)$ is \[w_1, \ldots, w_a, s_pw_a, s_pw_{a+1}, \ldots , s_pw_{b-1}=w_b, w_{b+1}, \ldots, w_s\] (see \eqref{newadmissibleb}). It follows that $\mathrm{fill}(f_p(J))$ is obtained from $\mathrm{fill}(J)$ by interchanging $p$ and $p+1$ in columns $i$ for $i \in [m',m'')$ (interchange $n$ with $1$ if $p=0$). By Proposition \ref{proposition:root_matching} this amounts to changing the $p$ in column $m'$ to $p+1$ ($n$ to $1$ if $p=0$). \end{proof} \subsection{Type \texorpdfstring{$C$}{C}} We start with the basic facts about the root system of type $C_n$. We can identify the space $\mathfrak{h}^*_{\mathbb{R}}$ with $V:= \mathbb{R}^n$, the coordinate vectors being $\varepsilon_1, \dots , \varepsilon_n$. 
The root system is $\Phi = \left\{ \pm \varepsilon_i \pm \varepsilon_j \, :\, 1 \leq i < j \leq n \right\} \cup \left\{ \pm 2 \varepsilon_i \, : \, 1 \leq i \leq n \right\}$. The simple roots are $\alpha_i = \varepsilon_i - \varepsilon_{i+1}$, for $i= 1, \dots , n-1, $ and $\alpha_n=2\varepsilon_n$. The highest root $\widetilde{\alpha}=2\varepsilon_1$. We let $\alpha_0=\theta=-2\varepsilon_1$. The weight lattice is $\Lambda = \mathbb{Z}^n$. The fundamental weights are $\omega_i = \varepsilon_1 + \dots + \varepsilon_i$, for $i=1, \dots , n$. A dominant weight $\lambda = \lambda_1\varepsilon_1 + \dots + \lambda_n\varepsilon_n$ is identified with the partition $(\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_{n-1}\geq \lambda_n \geq 0)$ of length at most $n$. Note that $\rho = (n, n-1, \dots , 1)$. Like in type $A$, writing the dominant weight $\lambda$ as a sum of fundamental weights corresponds to considering the Young diagram of $\lambda$ as a concatenation of columns. We fix a dominant weight $\lambda$ throughout this section. The Weyl group $W$ is the group of signed permutations $B_n$, which acts on $V$ by permuting the coordinates and changing their signs. A signed permutation is a bijection $w$ from $[\overline{n} ]:= \{ 1 < 2 < \dots < n < \overline{n} < \overline{n-1} < \dots < \overline{1} \}$ to $[\overline{n}]$ satisfying $w(\imath) = \overline{w(i)}$. Here $\overline{\imath}$ is viewed as $-i$, so $\overline{\overline{\imath}}= i$, $|\overline{\imath}| = i$, and $\mathrm{sign}(\overline{\imath})=-1$. We use both the window notation $w=w_1 \dots w_n$ and the full one-line notation $W=w(1)\dots w(n)w(\overline{n})\dots w(\overline{1})$ for signed permutations. For simplicity, given $1 \leq i < j \leq n$, we denote by $(i,j)$ the root $\varepsilon_i - \varepsilon_j$ and the corresponding reflection, which is identified with the composition of transpositions $t_{ij}t_{\overline{\jmath \imath}}$. Similarly, we denote by $(i,\overline{\jmath})$, for $1\leq i<j \leq n$, the root $\varepsilon_i + \varepsilon_j$ and the corresponding reflection, which is identified with the composition of transpositions $t_{i\overline{\jmath}} t_{j\overline{\imath}}$. Finally, we denote by $(i,\overline{\imath})$ the root $2\varepsilon_i$ and the corresponding reflection, which is identified with the transposition $t_{i\overline{\imath}}$. The length of an element $w$ in $B_n$ is given by \[ \ell(w):= \# \left\{ (k,l) \in [n] \times [\overline{n}] \, : \, k \leq |l|, w(k) > w(l) \right\}. \] We now consider the specialization of the alcove model to type $C$. For any $k=1, \ldots , n$, we have the following $\omega_k$-chain, from $A_{\circ}$ to $A_{-\omega_k}$, denoted by $\Gamma(k)$ \cite{Lenart}: \begin{align*} \Gamma(k):=\, & \Gamma_l(k)\Gamma_r(k) \text{ where }\\ \Gamma_l(k):=\, & \Gamma_{kk} \dots \Gamma_{k1},\, \Gamma_r(k):=\, \Gamma_k \dots \Gamma_2,\, \\ \Gamma_i:=\, & \left( (i,\overline{i-1}), (i, \overline{i-2}), \dots , (i, \overline{1}) \right),\,\\ \begin{matrix} \Gamma_{ki} :=\, \\\mbox{} \\\mbox{} \\\mbox{} \end{matrix} & \begin{matrix*}[l] ( (i,k+1),& (i,k+2),& \ldots,& (i,n), \\ \phantom{(} (i,\overline{\imath}), & & & \\ \phantom{(} (i,\overline{n}),& (i, \overline{n-1}),& \dots,& (i, \overline{k+1}),\\ \phantom{(} (i,\overline{i-1}),& (i,\overline{i-2}),& \ldots,& (i,\overline{1})). 
\end{matrix*} \label{eqn:lambdachainC} \end{align*} We construct a $\lambda$-chain $\Gamma=\left\{ \beta_1, \beta_2, \dots, \beta_m \right\}$ as a concatenation $\Gamma:= \Gamma^1 \dots \Gamma^{\lambda_1}$, where $\Gamma^j= \Gamma(\lambda'_j)$; we also let $\Gamma_l^j:=\Gamma_l(\lambda'_j)$ and $\Gamma_r^j:=\Gamma_r(\lambda'_j)$. Like in type $A$ we let $J=\left\{ j_1 < \dots < j_s \right\}$ be the set of folding positions in $\Gamma$, not necessarily admissible, and let $T$ be the corresponding list of roots of $\Gamma$. We factor $\Gamma$ as $\Gamma=\widetilde{\Gamma}^1\dots \widetilde{\Gamma}^{2\lambda_1}$, where $\widetilde{\Gamma}^{2i-1}=\Gamma^i_l$ and $\widetilde{\Gamma}^{2i}=\Gamma^i_r$, for $1\leq i \leq \lambda_1$. This factorization of $\Gamma$ induces a factorization on $T$ as $T^1T^2 \dots T^{2\lambda_1}$, and on $\Delta=\Gamma(J)$ as $\Delta=\Delta^1 \dots \Delta^{2\lambda_1}$. We denote by $T^1T^2 \dots T^j$ the permutation obtained via multiplication by the transpositions in $T^1, \dots , T^j$. For $w \in W$, $w=w_1w_2 \dots w_n$, let $w[i,j]=w_i \dots w_j$. To each $J$ we can associate a filling of a Young diagram shape $\lambda$. \begin{definition} \label{definition:fillTypeC} Let $\pi_{j}=\pi_{j}(T)=T^1 \dots T^j$. We define the \emph{filling map}, which produces a filling of the Young diagram $2\lambda$, by \begin{equation} \label{eqn:filling_mapTypeC} \mathrm{fill}(J)=\mathrm{fill}(T)=C_{1}\dots C_{2\lambda_1};\; \mbox{ here } C_{i}=\pi_i[1,\lambda'_{\lceil \frac{i}{2}\rceil}]. \end{equation} \end{definition} Here we need the circular order $\prec_i$ on $[\overline{n}]$ starting at $i$, which is defined in the obvious way, cf. Section \ref{subsection:TypeA}. It is convenient to think of this order in terms of the numbers $1, \dots, n, \overline{n}, \dots, \overline{1}$ arranged on a circle clockwise. We make the same convention as in Section \ref{subsection:TypeA} that, whenever we write $a \prec b \prec c \prec \dots,$ we refer to the circular order $\prec = \prec_a$. We have the following description of the edges of the quantum Bruhat graph in type $C$. \begin{proposition}[\cite{Lenart}] ~ \begin{enumerate} \item Given $1 \leq i < j \leq n$, we have an edge $w \stackrel{(i,j)}{\longrightarrow} w(i,j)$ if and only if there is no $k$ such that $i < k < j$ and $w(i) \prec w(k) \prec w(j)$. \item Given $1 \leq i < j \leq n$, we have an edge $w\stackrel{(i,\overline{\jmath})}{\longrightarrow}$ if and only if $w(i) < w(\overline{\jmath}), \mathrm{sign}(w(i)) = \mathrm{sign}(w(\overline{\jmath}))$, and there is no $k$ such that $i<k<\overline{\jmath}$ and $w(i) < w(k) < w(\overline{\jmath})$. \item Given $1 \leq i \leq n$, we have an edge $w \stackrel{(i,\overline{\imath})}{\longrightarrow} w(i,\overline{\imath})$ if and only if there is no $k$ such that $i < k < \overline{\imath}$ (or, equivalently, $i<k\leq n$) and $w(i) \prec w(k) \prec w(\overline{\imath})$. \end{enumerate} \end{proposition} \begin{definition} We define the \emph{sorted filling map} $\mathrm{sfill}(J)$ by sorting ascendingly the columns of the filling $\mathrm{fill}(J)$. \end{definition} \begin{theorem}[\cite{Lenart} Theorem 6.1] The map $\mathrm{sfill}$ is a bijection between $\mathcal{A}(\lambda)$ and $B^{\otimes \lambda} $. \label{theorem:bijectionTypeC} \end{theorem} \begin{theorem} \label{theorem:crystal_isomorphismTypeC} The map $\mathrm{sfill}$ preserves the affine crystal structures with respect to dual Demazure arrows (cf. Theorem \ref{theorem:crystal_isomorphism}). 
\end{theorem} The proof is parallel to the proof of Theorem \ref{theorem:crystal_isomorphism}. In this case using \ref{lemma:height_countingTypeC}, the height counting lemma in type $C_n^{(1)}$. As before let $N_i(\sigma)$ denote the number of entries $i$ in a filling $\sigma$. Let $c_i=c_i(\sigma) := \frac{1}{2} ( N_i(\sigma) - N_{\overline{\imath}}(\sigma)$) and define the content of a filling $\sigma$ as $\mathrm{ct}(\sigma):=(c_1,c_2,\dots,c_n)$. Let $\sigma[q]$ be the filling consisting of the columns $1,2, \dots , q$ of $\sigma$. Recall the factorization of $\Delta$ and the heights $l_k^\Delta$ defined in (\ref{deflev}). \begin{lemma}[\cite{LenartHHL_height_counting}, Proposition 4.6(2)] \label{lemma:weightTypeC} Let $J \subseteq [m]$, and $\sigma=\mathrm{fill}(J)$. Then we have $\mu(J)=\mathrm{ct}(\sigma)$. \end{lemma} Note that Proposition 4.6(2) in \cite{LenartHHL_height_counting} is proved in more generality than it is stated. This more general statement is what we need here. \begin{corollary} \label{corollary:linfinityTypeC} Let $J \subseteq [m]$, $\sigma=\mathrm{fill}(J)$, and $\alpha \in \Phi$. Then $\mathrm{sgn}(\alpha)l_{\alpha}^{\infty} = \inner{\mathrm{ct}(\sigma)}{\alpha^{\vee}}$. \end{corollary} \begin{lemma}[\cite{LenartHHL_height_counting}, Proposition 6.1] \label{lemma:height_countingTypeC} Let $J \subseteq [m]$, and $\sigma=\mathrm{fill}(J)$. For a fixed $k$, let $\gamma_k$ be a root in $\Delta^{q+1}$. We have \[ \mathrm{sgn}(\gamma_k)\,\l{k} = \langle \mathrm{ct}(\sigma[q]),\gamma_k^{\vee}\rangle. \] \end{lemma} As before, we now introduce notation to be used for the remainder of this section. Let $p \in \{0,1, \dots, n \}$. Let $J$ be an admissible sequence and let $\sigma=\mathrm{sfill}(J)$. Let $a_i=\inner{\mathrm{ct}(C_i)}{\alpha_p^{\vee}}$ then $a_i\in \left\{-1,-\frac{1}{2},0,\frac{1}{2},1\right\}$, for $1\leq p \leq n-1$, and $a_i \in \left\{ -\frac{1}{2},0,\frac{1}{2} \right\}$ for $p\in \left\{ 0,n \right\}$ as we now explain. If $1 \leq p \leq n-1$ let \[\mathcal{P} = \{p, p+1, \overline{p}, \overline{p+1}\}, \mathcal{P}^+ = \{ p, \overline{p+1} \}, \mathcal{P}^- = \{ p+1, \overline{p} \}, \mathcal{P}^0 = \{ p, p+1 \}, \overline{\mathcal{P}^0} = \{ \overline{p}, \overline{p+1} \}. \] Then $a_i = 1, -1$ corresponds to $C_i$ containing both elements of $\mathcal{P}^+, \mathcal{P}^-$ respectively, $a_i = \frac{1}{2}, -\frac{1}{2}$ corresponds to $C_i$ containing exactly one element from $\mathcal{P}^+, \mathcal{P}^-$ respectively, and $a_i = 0$ corresponds to $C_i$ containing both elements of $\mathcal{P}^0$, or both elements of $\overline{\mathcal{P}^0}$ or none of the elements of $\mathcal{P}$. If $p = n$ let $\mathcal{P} = \{ n, \overline{n} \}$, $\mathcal{P}^+ = \{ n \}, \mathcal{P}^- = \{\overline{n}\}$, and $\mathcal{P}^0 = \overline{\mathcal{P}^{0}}=\mathcal{P}$. If $p=0$ let $\mathcal{P}=\{ 1,\overline{1}\}, \mathcal{P}^+ = \{\overline{1}\}, \mathcal{P}^- = \{1 \} $, and $\mathcal{P}^0=\overline{\mathcal{P}^0} = \mathcal{P}$. Similar observations hold for $p=n$ and $p=0$. Let $h_j= \inner{\mathrm{ct}(\sigma[j])}{\alpha_p^{\vee}}=\sum_{i=0}^j a_i$, with $a_0=h_0:=0$. Let $M'$ be the maximum of $h_j$, and let $m'$ be minimal with the property $h_{m'}=M'$. Recall Definition \ref{definition:KN_doubled_columns} of split columns. \begin{proposition} Suppose we have a splitting of a KN column $C$ as $(lC,rC)$, let $x \in [n]$, and let $S = \left\{ x, \overline{x} \right\}$, then $lC$ contains an element of $S$ if and only if $rC$ contains an element of $S$. 
\label{proposition:KN_splitting_one} \end{proposition} The proof follows directly from Definition \ref{definition:KN_doubled_columns}. The following Proposition is a consequence of Proposition \ref{proposition:KN_splitting_one} and Definition \ref{definition:KN_doubled_columns}. \begin{proposition} If $a_{m'}= \frac{1}{2}$ then $m' = 2i$ for $ 1 \leq i \leq \lambda_1$. In this case, both columns $C_{m'-1}$ and $C_{m'}$ contain a single element of $\mathcal{P}^+$, which is the same for both columns, and no elements of $\mathcal{P}^-$. \label{proposition:KN_splitting_two} \end{proposition} \begin{proof} Since $a_{m'} = \frac{1}{2}$, $C_{m'}$ contains exactly one element from $\mathcal{P}^+$ and no elements from $\mathcal{P}^-$. We first show that $m' = 2i$, in the case $1 \leq p \leq n-1$. Suppose by way of contradiction $m' = 2i-1$. We suppose that $C_{2i-1}$ contains $p$, the other case being proved similarly. Then $C_{2i-1}$ contains no elements from the set $S := \left\{ p+1, \overline{p+1} \right\}$. From Proposition \ref{proposition:KN_splitting_one}, $C_{2i}$ contains either a $p$ or a $\overline{p}$. \begin{enumerate}[(a)] \item Suppose $C_{2i}$ contains $p$. By minimality of $M'$, $C_{2i}$ must also contain $\overline{p+1}$. Then by Proposition \ref{proposition:KN_splitting_one}, $C_{2i-1}$ contains an element of $S$, which is a contradiction. \label{proof:signature_a} \item Suppose $C_{2i}$ contains $\overline{p}$. It follows that $p = t_i$ (c.f. notation from Definition \ref{definition:KN_doubled_columns}). By choice of $t_i$, it follows that $C_{2i-1}$ contains an element of $S$, again leading to a contradiction. \label{proof:signature_b} \end{enumerate} If $p \in \{0,n\}$. We let $+$ indicate an element from $\mathcal{P}^+$ and $-$ indicate an element from $\mathcal{P}^-$. For example, if $p=n$, $+-$ means that $n$ is in $C_{2i-1}$ and $\overline{n}$ is in $C_{2i}$. Since $a_{m'}=\frac{1}{2}$, and (by way of contradiction) $m'=2i-1$, there are 2 possibilities $+-,++$. The latter case contradicts the minimality of $M'$. The first one contradicts Definition \ref{definition:KN_doubled_columns}. We have $m'= 2i$, and $C_{2i}$ contains exactly one element from $\mathcal{P}^{+}$ and no elements from $\mathcal{P}^{-}$. The fact that both $C_{2i-1}$ and $C_{2i}$ contain the same element from $\mathcal{P}^{+}$ and no elements from $\mathcal{P}^{-}$ follows from the minimality of $m'$ and Proposition \ref{proposition:KN_splitting_one}. \end{proof} By construction $M' \geq 0$. If $M'>0$, then $a_{m'}>0$. If $a_{m'}=1$, then $1\leq p \leq n-1$ and column $m'$ contains both elements of $\mathcal{P}^{+}$. In this case, applying $f_p$ twice will exchange both elements of $\mathcal{P}^+$ in column $m'$ for corresponding elements of $\mathcal{P}^-$, i.e. $p$ is exchanged for $p+1$, and $\overline{p+1}$ is exchanged for $\overline{p}$. If $a_{m'}=\frac{1}{2}$, then by Proposition \ref{proposition:KN_splitting_two} $m'= 2i$ for $i \in \left\{ 1, \ldots, \lambda_1' \right\}$, and applying $f_p$ twice will exchange the element of $\mathcal{P}^+$ in columns $C_{m'-1}$ and $C_{m'}$ with the corresponding element of $\mathcal{P}^-$, i.e. $p$ with $p+1$ or $\overline{p+1}$ with $\overline{p}$ (if $p=0$, the symbol $\overline{1}$ is exchanged with 1). The following is the analogue of Lemma \ref{lemma:chain_filling}, the proof of which is similar to the proof of Lemma \ref{lemma:chain_filling}. 
\begin{lemma} \label{lemma:chain_fillingTypeC} If $\alpha_p=\gamma_k \in \Delta^{2i-1}$ with $k \not \in J$ then either $a_{2i-1}=\frac{1}{2}$ and $a_{2i}=\frac{1}{2}$, or we have $a_{2i-1}=1$. If $\alpha_p=\gamma_k \in \Delta^{2i}$ with $k \not \in J$ then $a_{2i}=1$. \end{lemma} Recall Proposition $\ref{prop:rootF}$ and the notation therein. $M$ is the maximum of $g_{\alpha_p}$, and suppose $M> \delta_{p,0}$, then $\gamma_k=\alpha_p$ with $k \not \in J$, $\mathrm{sgn}(\alpha_p)\l{k}=M-1$, and if $m \ne \infty$ then $\gamma_m=\alpha_p$ with $m \in J$. The following analogues of Propositions \ref{proposition:max-correspondence}, \ref{proposition:root_matching} are proved similarly to their type $A$ counterparts. \begin{proposition} \label{proposition:max-correspondenceTypeC} Let $J$ be an admissible subset, $\sigma=\mathrm{sfill}(J)$, and let $\delta_{p,q}$ be the Kronecker delta function. We have $M\geq M'$. If $M \geq \delta_{p,0}$, then $M=M'$. \end{proposition} \begin{proposition} \label{proposition:root_matchingTypeC} Let $J$ be an admissible subset, $\sigma=\mathrm{sfill}(J)$, and suppose $\delta_{p,0}< M$, so $M=M'$ and $f_p(J)\ne \mathbf{0}$. If $a_{m'}=1$ then $\gamma_k\in \Delta^{m'}$, otherwise $\gamma_k \in \Delta^{m'-1}$. If $m \ne \infty$, so $\gamma_m \in \Delta^{m''}$, then $a_i=0$ for $i \in (m',m'')$. If $m= \infty$, then $a_i=0$ for $i>m'$. \end{proposition} The proof of Proposition \ref{theorem:crystal_isomorphismTypeC}, is analogous to the proof of \ref{theorem:crystal_isomorphism} using Propositions \ref{proposition:max-correspondenceTypeC} and \ref{proposition:root_matchingTypeC}. \bibliographystyle{alpha} \newcommand{\etalchar}[1]{$^{#1}$}
1,108,101,563,557
arxiv
\section{Introduction} In this paper we apply the Painlev\'{e} test for integrability of partial differential equations \cite{WTC,T} to the class of sixth-order nonlinear wave equations \begin{multline} \label{e1} u_{xxxxxx} + a u_x u_{xxxx} + b u_{xx} u_{xxx} + c u_x^2 u_{xx} \\ + d u_{tt} + e u_{xxxt} + f u_x u_{xt} + g u_t u_{xx} = 0 , \end{multline} where $a,b,c,d,e,f$ and $g$ are arbitrary parameters. We show that there are four distinct cases of relations between the parameters when equation \eqref{e1} passes the Painlev\'{e} test well. Three of those cases correspond to known integrable equations, whereas the fourth one turns out to be new. This new integrable case of equation \eqref{e1} is equivalent to the Korteweg--de~Vries equation with a source of a new type, and we find its Lax pair, B\"{a}cklund self-transformation, travelling wave solutions and third-order generalized symmetries. There are the following reasons to explore the class of equations \eqref{e1} for integrability. Recently Dye and Parker \cite{DP} constructed and studied two integrable nonlinear integro-differential equations, \begin{multline} \label{e2} 5 \partial_x^{-1} v_{tt} + 5 v_{xxt} - 15 v v_t - 15 v_x \partial_x^{-1} v_t \\ - 45 v^2 v_x + 15 v_x v_{xx} + 15 v v_{xxx} - v_{xxxxx} = 0 \end{multline} and \begin{multline} \label{e3} 5 \partial_x^{-1} v_{tt} + 5 v_{xxt} - 15 v v_t - 15 v_x \partial_x^{-1} v_t \\ - 45 v^2 v_x + \tfrac{75}{2} v_x v_{xx} + 15 v v_{xxx} - v_{xxxxx} = 0 , \end{multline} which describe the propagation of waves in two opposite directions and represent bidirectional versions of the Sawada--Kotera--Caudrey--Dodd--Gibbon equation \cite{SK,CDG} and Kaup--Kupershmidt equation \cite{K,FG}, respectively. Equations \eqref{e2} and \eqref{e3} possess Lax pairs due to their construction \cite{DP} and fall into class \eqref{e1} after the potential transformation $v = u_x$. There is one more well-known integrable equation in class \eqref{e1}, namely \begin{equation} \label{e4} u_{tt} - u_{xxxt} - 2 u_{xxxxxx} + 18 u_x u_{xxxx} + 36 u_{xx} u_{xxx} - 36 u_x^2 u_{xx} =0 . \end{equation} This equation is equivalent to the Drinfel'd--Sokolov--Satsuma--Hirota system of coupled Korteweg--de~Vries equations \cite{DS,SH}, of which a fourth-order recursion operator was found in \cite{GK}. A B\"{a}cklund self-transformation of equation \eqref{e4} was derived in \cite{KS} by the method of truncated singular expansion \cite{WTC,W1}. Multisoliton solutions of equations \eqref{e3} and \eqref{e4} were studied in \cite{VM1,VM2,V}. Thus we have already known three interesting integrable equations of class \eqref{e1}, and it is natural to ask what are other integrable equations in this class and, if there are any, what are their properties. Solving problems of this kind is important for testing the reliability of integrability criteria and for discovering new interesting objects of soliton theory. The paper is organized as follows. In section~\ref{s2} we perform the singularity analysis of equation \eqref{e1} and find four distinct cases which possess the Painlev\'{e} property and correspond, up to scale transformations of variables, to equations \eqref{e2}--\eqref{e4} and to the new integrable equation \begin{equation} \label{e5} \left( \partial_x^3 + 8 u_x \partial_x + 4 u_{xx} \right) \left( u_t + u_{xxx} + 6 u_x^2 \right) = 0 . 
\end{equation} In this part the present study is similar to the recent Painlev\'{e} classifications done in \cite{KK,S1,S2,ST}, where new integrable nonlinear wave equations were discovered as well. The method of truncated singular expansion is successfully used in section~\ref{s3}, where we derive a Lax pair for the new equation \eqref{e5} and also obtain and study its B\"{a}cklund self-transformation. The contents of section~\ref{s4} is not related to the Painlev\'{e} property: there we find and discuss travelling wave solutions and third-order generalized symmetries of equation \eqref{e5}. Section~\ref{s5} contains concluding remarks. \section{Singularity analysis} \label{s2} In order to select integrable cases of equation \eqref{e1} we use the so-called Weiss--Kruskal algorithm for singularity analysis of partial differential equations \cite{RGB}, which is based on the Weiss--Tabor--Carnevale expansions of solutions near movable singularity manifolds \cite{WTC}, Ward's requirement not to examine singularities of solutions at characteristics of equations \cite{W2,W3} and Kruskal's simplifying representation for singularity manifolds \cite{JKM}, and which follows step by step the Ablowitz--Ramani--Segur algorithm for ordinary differential equations \cite{ARS}. Computations are made using the Mathematica computer algebra system \cite{W4}, and we omit inessential details. Equation \eqref{e1} is a sixth-order normal system, and its general solution must contain six arbitrary functions of one variable \cite{O}. A hypersurface $\phi (x,t) = 0$ is noncharacteristic for equation \eqref{e1} if $\phi_x \neq 0$, and we set $\phi_x = 1$ without loss of generality. Substitution of the expansion $u = u_0 (t) \phi^{\delta} + \dotsb + u_r (t) \phi^{r + \delta} + \dotsb$ to equation \eqref{e1} determines branches of the dominant behavior of solutions near $\phi = 0$, i.e.\ admissible choices of $\delta$ and $u_0$, and corresponding positions $r$ of the resonances, where arbitrary functions of $t$ can enter the expansion. There are two singular branches, both with $\delta = -1$, values of $u_0$ being the roots of a quadratic equation with constant coefficients. Without loss of generality we make $u_0 = 1$ for one of the two branches by a scale transformation of $u$, thus fixing the coefficient $c$ of equation \eqref{e1} as \begin{equation} \label{e6} c = 12 a + 6 b - 360 , \end{equation} and then we get $u_0 = 60 / (2 a + b - 60)$ for the other branch. We require that at least one of singular branches is a generic one, representing the general solution of equation \eqref{e1}, and without loss of generality we assume that we have set $u_0 = 1$ for the generic branch, whereas the branch with $u_0 = 60 / (2 a + b - 60)$ may be nongeneric. Positions $r$ of the resonances are determined by the equation \begin{equation} \label{e7} (r+1)(r-1)(r-6) \left( r^3 - 15r^2 + (86-a)r + (4a+2b-240) \right) = 0 \end{equation} for the branch with $u_0 = 1$, and by the equation \begin{multline} \label{e8} (r+1)(r-1)(r-6) \Bigl( r^3 - 15r^2 + \bigl( 86 - 60a / (2a+b-60) \bigr) r \\ + \bigl( -120 + 7200 / (2a+b-60) \bigr) \Bigr) = 0 \end{multline} for the branch with $u_0 = 60 / (2 a + b - 60)$. Let us consider the generic branch with $u_0 = 1$ first. 
From equation \eqref{e7} we find that positions of three resonances are $r = -1, 1, 6$, where $r = -1$ corresponds to the arbitrary dependence of $\phi$ on $t$, and that the positions $r_1$, $r_2$ and $r_3$ of other three resonances satisfy the relations \begin{gather} r_3 = 15 - r_1 - r_2 , \label{e9} \\ a = r_1^2 + r_1 r_2 + r_2^2 - 15 r_1 - 15 r_2 + 86 , \label{e10} \\ b = \tfrac{1}{2} r_1^2 r_2 + \tfrac{1}{2} r_1 r_2^2 - 2 r_1^2 - \tfrac{19}{2} r_1 r_2 - 2 r_2^2 + 30 r_1 + 30 r_2 - 52 . \label{e11} \end{gather} In order to have the resonances of this generic branch in admissible positions, we must set the numbers $r_1$, $r_2$ and $r_3$ to be positive, integer, distinct and not equal to $1$ or $6$. Assuming without loss of generality that $r_1 < r_2 < r_3$ and taking relation \eqref{e9} into account, we get five distinct cases to be studied further: (i) $r_1 = 2$, $r_2 = 3$; (ii) $r_1 = 2$, $r_2 = 4$; (iii) $r_1 = 2$, $r_2 = 5$; (iv) $r_1 = 3$, $r_2 = 4$; (v) $r_1 = 3$, $r_2 = 5$. {\sc Case (i):} $r_1 = 2$, $r_2 = 3$. From relations \eqref{e9}--\eqref{e11} and \eqref{e6} we find that $r_3 = 10$, $a = 30$, $b = 30$ and $c = 180$. In the generic branch we have $u_0 = 1$ and $r = -1, 1, 2, 3, 6, 10$. Substitution of the expansion $u = \sum_{i=0}^{\infty} u_i (t) \phi^{i-1}$ with $\phi_x (x,t) = 1$ to equation \eqref{e1} determines recursion relations for $u_i$, and we check whether those recursion relations are compatible at the resonances. We must set $e = \tfrac{1}{12} ( f + g )$ for the compatibility condition at $r = 2$ to be satisfied identically. In the same way we get the relations $f = g$ and $d = - \tfrac{1}{180} g^2$ at the resonances $r = 6$ and $r = 10$, respectively. With the obtained constraints on the coefficients of equation \eqref{e1} we proceed to the nongeneric branch, where $u_0 = 2$ and $r = -2, -1, 1, 5, 6, 12$, and find that the recursion relations are compatible there, which completes the Painlev\'{e} test. Finally, it is easy to prove that, under the obtained constraints on the coefficients, a scale transformation of variables relates equation \eqref{e1} to the known integrable equation \eqref{e2} with $v = u_x$. {\sc Case (ii):} $r_1 = 2$, $r_2 = 4$. This case of equation \eqref{e1} with $a = 24$, $b = 36$, $c = 144$ and $r_3 = 9$ does not pass the Painlev\'{e} test for integrability. We find that in the nongeneric branch, where $u_0 = \tfrac{5}{2}$, three of the resonances lie in the noninteger positions $r \approx -2.54656, \, 6.26589, \, 11.2807$ due to equation \eqref{e8}, these positions being the irrational roots of $r^3 - 15 r^2 + 26 r + 180 = 0$. Moreover, in the generic branch, where $u_0 = 1$ and $r = -1, 1, 2, 4, 6, 9$, the recursion relations fail to be compatible at the resonance $r = 4$, and this means that the general solution of equation \eqref{e1} contains nondominant movable logarithmic singularities. {\sc Case (iii):} $r_1 = 2$, $r_2 = 5$. Here we have $r_3 = 8$, $a = 20$, $b = 40$ and $c = 120$ due to relations \eqref{e9}--\eqref{e11} and \eqref{e6}. In the generic branch, where $u_0 = 1$ and $r = -1, 1, 2, 5, 6, 8$, we get the constraints $e = \tfrac{1}{12} ( f + g )$, $f = 2 g$ and $d = 0$ at the resonances $r = 2$, $r = 5$ and $r = 8$, respectively. Under these constraints on the coefficients of equation \eqref{e1}, the recursion relations are compatible in the nongeneric branch as well, where $u_0 = 3$ and $r = -3, -1, 1, 6, 8, 10$. 
Finally, a scale transformation of variables leads us to the new equation \eqref{e5}, which must be integrable according to the obtained result of the Painlev\'{e} test. {\sc Case (iv):} $r_1 = 3$, $r_2 = 4$. We find that $r_3 = 8$, $a = 18$, $b = 36$ and $c = 72$. In the generic branch, where $u_0 = 1$ and $r = -1, 1, 3, 4, 6, 8$, we get $g = 0$ and $d = -2 e^2 + \tfrac{1}{2} e f - \tfrac{1}{36} f^2$ at the resonance $r = 4$, and $f = 0$ at the resonance $r = 6$. No extra constraints on the coefficients of equation \eqref{e1} arise in the nongeneric branch, where $u_0 = 5$ and $r = -5, -1, 1, 6, 8, 12$, and we obtain the known integrable equation \eqref{e4} after a scale transformation of variables. {\sc Case (v):} $r_1 = 3$, $r_2 = 5$. We find that $r_3 = 7$, $a = 15$, $b = \tfrac{75}{2}$ and $c = 45$. In the generic branch, where $u_0 = 1$ and $r = -1, 1, 3, 5, 6, 7$, we get $d = \tfrac{1}{90} ( f^2 - f g - 2 g^2 )$ and $e = \tfrac{1}{6} ( f + g )$ at $r = 5$, and $f = g$ at $r = 6$. No extra constraints on the coefficients of equation \eqref{e1} arise in the nongeneric branch, where $u_0 = 8$ and $r = -7, -1, 1, 6, 10, 12$, and a scale transformation of variables leads us to the known integrable equation \eqref{e3} with $v = u_x$. We have completed the Painlev\'{e} analysis of equation \eqref{e1}. It is noteworthy that the Painlev\'{e} test not only detected all the previously known integrable cases \eqref{e2}--\eqref{e4} of equation \eqref{e1}, but also discovered the new case \eqref{e5} which is integrable due to the results of the next section. In what follows we refer to equation \eqref{e5} as the KdV6, in order to emphasize that this new integrable sixth-order nonlinear wave equation is associated with the same spectral problem as of the potential Korteweg--de~Vries equation. Let us also note that the KdV6, i.e.\ equation \eqref{e5}, is equivalent to the following Korteweg--de~Vries equation with a source satisfying a third-order ordinary differential equation: \begin{equation} \label{e12} v_t + v_{xxx} + 12 v v_x - w_x = 0 , \qquad w_{xxx} + 8 v w_x + 4 w v_x = 0 , \end{equation} where the new dependent variables $v$ and $w$ are related to $u$ as \begin{equation} \label{e13} v = u_x , \qquad w = u_t + u_{xxx} + 6 u_x^2 . \end{equation} The system of equations \eqref{e12} is different from the so-called Korteweg--de~Vries equation with a self-consistent source which was extensively studied during last decade (see \cite{LYZ} and references therein). \section{Truncated singular expansion} \label{s3} Let us try to find a Lax pair and a B\"{a}cklund self-transformation for the KdV6, using the method of truncated singular expansion \cite{WTC,W1}. We substitute the truncated singular expansion \begin{equation} \label{e14} u = \dfrac{y(x,t)}{\phi (x,t)} + z(x,t) \end{equation} to equation \eqref{e5} (note that the Kruskal simplifying representation $\phi_x = 1$ is not used now), collect terms with equal degrees of $\phi$, and in this way get the following. At $\phi^{-7}$ we have three possibilities: $y = 0$, $y = \phi_x$ or $y = 3 \phi_x$. We choose \begin{equation} \label{e15} y = \phi_x \end{equation} which corresponds to the generic branch, i.e.\ we truncate the singular expansion representing the general solution of the KdV6. 
Then we get identities at $\phi^{-6}$ and $\phi^{-5}$, whereas the terms with $\phi^{-4}$ give us the equation \begin{equation} \label{e16} z_{xx} = - \dfrac{\phi_{xxxx}}{4 \phi_x} + \dfrac{\phi_{xx} \phi_{xxx}}{2 \phi_x^2} - \dfrac{\phi_{xx}^3}{4 \phi_x^3} \end{equation} which is equivalent to \begin{equation} \label{e17} z_{x} = - \dfrac{\phi_{xxx}}{4 \phi_x} + \dfrac{\phi_{xx}^2}{8 \phi_x^2} + \sigma , \end{equation} where $\sigma = \sigma (t)$ is an arbitrary function that appeared as the `constant' of integration of equation \eqref{e16} over $x$. Next, using the obtained relations \eqref{e15} and \eqref{e17}, we get at $\phi^{-3}$ the equation \begin{multline} \label{e18} z_t = - \dfrac{\phi_{xxxxx}}{4 \phi_x} + \dfrac{5 \phi_{xx} \phi_{xxxx}}{4 \phi_x^2} + \dfrac{5 \phi_{xxx}^2}{8 \phi_x^2} - \dfrac{25 \phi_{xx}^2 \phi_{xxx}}{8 \phi_x^3} + \dfrac{45 \phi_{xx}^4}{32 \phi_x^4} \\ - \dfrac{\phi_{xxt}}{2 \phi_x} + \dfrac{\phi_{xx} \phi_{xt}}{2 \phi_x^2} + \sigma \left( - \dfrac{5 \phi_{xxx}}{\phi_x} + \dfrac{15 \phi_{xx}^2}{2 \phi_x^2} - \dfrac{2 \phi_t}{\phi_x} \right) - 30 \sigma^2 . \end{multline} The terms with $\phi^{-2}$ and $\phi^{-1}$ add nothing to the already obtained relations \eqref{e15}, \eqref{e17} and \eqref{e18}. Finally, at $\phi^0$ we find that $\sigma '(t) = 0$, i.e.\ $\sigma$ is an arbitrary constant in relations \eqref{e17} and \eqref{e18}. We have found that the truncation procedure is consistent for the KdV6. The function $u$ determined by relations \eqref{e14}, \eqref{e15}, \eqref{e17} and \eqref{e18} is a solution of equation \eqref{e5}, as is the function $z$ determined by relations \eqref{e17} and \eqref{e18}. Moreover, these expressions for $u$ and $z$ correspond to the general solution of the KdV6, because the function $\phi$ is determined by a sixth-order equation---the compatibility condition $z_{xt} = z_{tx}$ for equations \eqref{e17} and \eqref{e18}---and the general solution for $\phi$ contains six arbitrary functions of one variable. We can derive a Lax pair for the KdV6 from the system of equations \eqref{e17} and \eqref{e18} in the same way as was done for the Korteweg--de~Vries equation in \cite{WTC}. Introducing the function $\psi$ related to $\phi$ as \begin{equation} \label{e19} \phi_x = \psi^2 , \end{equation} we immediately get equation \eqref{e17} linearized: \begin{equation} \label{e20} \psi_{xx} + 2 ( z_x - \sigma) \psi = 0 . \end{equation} Then we multiply equation \eqref{e18} by $\phi_x$, apply $\partial_x$ to the result, eliminate all derivatives of $\phi$ using relation \eqref{e19}, eliminate derivatives of $\psi$ of order higher than one using equation \eqref{e20}, and in this way obtain the equation \begin{multline} \label{e21} \psi_t = \Bigl( - 8 \sigma - 4 z_x - \tfrac{1}{2} \sigma^{-1} \left( z_t + z_{xxx} + 6 z_x^2 \right) \Bigr) \psi_x \\ + \Bigl( 2 z_{xx} + \tfrac{1}{4} \sigma^{-1} \left( z_{xt} + z_{xxxx} + 12 z_x z_{xx} \right) \Bigr) \psi , \end{multline} also linear with respect to $\psi$. Equations \eqref{e20} and \eqref{e21} are compatible for $\psi$ if and only if $z$ is a solution of equation \eqref{e5}. Consequently, equations \eqref{e20} and \eqref{e21}, where $z$ should be re-denoted as $u$, constitute a Lax pair for equation \eqref{e5}, the arbitrary constant $\sigma$ being a spectral parameter. Note that the spectral problem \eqref{e20}, with $z$ re-denoted as $u$, is the same for equation \eqref{e5} and for the potential Korteweg--de~Vries equation \begin{equation} \label{e22} u_t + u_{xxx} + 6 u_x^2 = 0 . 
\end{equation} Note also that in variables \eqref{e13} the obtained Lax pair of the KdV6 turns into the Lax pair of the Korteweg--de~Vries equation with a source \eqref{e12}: \begin{gather} \psi_{xx} + 2 ( v - \sigma) \psi = 0, \label{e23} \\ \psi_t = \left( - 8 \sigma - 4 v - \tfrac{1}{2} \sigma^{-1} w \right) \psi_x + \left( 2 v_x + \tfrac{1}{4} \sigma^{-1} w_x \right) \psi . \label{e24} \end{gather} From the truncated singular expansion we can also derive a B\"{a}cklund self-transformation for the KdV6. We introduce the notations \begin{equation} \label{e25} p = u - z , \qquad q = u + z , \end{equation} eliminate $\phi$ from relations \eqref{e14}, \eqref{e15}, \eqref{e17} and \eqref{e18}, and in this way obtain the following two equations for $p$ and $q$: \begin{equation} \label{e26} p_{xx} - \tfrac{1}{2} p^{-1} p_x^2 + \tfrac{1}{2} p \left( 4 q_x + p^2 - 8 \sigma \right) =0 \end{equation} and \begin{multline} \label{e27} q_{xxxx} - p^{-1} p_x \left( q_{xxx} + 3 q_x^2 + q_t + 8 \sigma q_x + 32 \sigma^2 \right) + 2 \left( 3 q_x - p^2 + 4 \sigma \right) q_{xx} \\ + q_{xt} - 5 p p_x \left( 2 q_x + p^2 - 8 \sigma \right) + p p_t - 4 \sigma p^{-1} p_t = 0 , \end{multline} where $\sigma$ is an arbitrary constant. These equations \eqref{e26} and \eqref{e27} constitute a B\"{a}cklund transformation according to the definition used in \cite{AS}. Namely, when we eliminate $z$ from equations \eqref{e26} and \eqref{e27}, we get exactly equation \eqref{e5} for $u$; and vice versa, we get equation \eqref{e5} for $z$ if we eliminate $u$ from system \eqref{e26}--\eqref{e27}. However, one definitely needs to use computer algebra tools to prove this. A few words are in order concerning the following interesting property of the obtained B\"{a}cklund self-transformation of the KdV6. If an `old' solution $z$ of equation \eqref{e5} satisfies the potential Korteweg--de~Vries equation \eqref{e22}, then the `new' solution $u$ of equation \eqref{e5}, related to $z$ by transformation \eqref{e26}--\eqref{e27} with notations \eqref{e25}, is also a solution of equation \eqref{e22} or satisfies the condition $u_x = \tfrac{1}{2} \sigma$. One can prove this statement (where, of course, $u$ and $z$ are interchangeable) by direct elimination of $z$ from the system of equations \eqref{e26}, \eqref{e27} and $z_t + z_{xxx} + 6 z_x^2 = 0$, the result being $( 2 u_x - \sigma ) \left( u_t + u_{xxx} + 6 u_x^2 \right) = 0$. For example, if we apply the B\"{a}cklund transformation \eqref{e26}--\eqref{e27} to the solution $z = 0$ of equation \eqref{e5}, which is also a solution of equation \eqref{e22}, we get \begin{multline} \label{e28} u = 2 \rho \Bigl( c_1 \exp ( 2 \rho x ) + c_2 \exp \left( 8 \rho^3 t \right) \Bigr)^2 \biggl( c_1^2 \exp ( 4 \rho x ) - c_2^2 \exp \left( 16 \rho^3 t \right) \\ + 4 \rho \Bigl( c_1 c_2 \left( x - 12 \rho^2 t \right) + c_3 \Bigr) \exp \left( 2 \rho x + 8 \rho^3 t \right) \biggr)^{-1} , \end{multline} where $\rho$, $c_1$, $c_2$ and $c_3$ are arbitrary constants, $\rho^2 = 2 \sigma$. Of course, solution \eqref{e28} of equation \eqref{e5} turns out to be a solution of equation \eqref{e22} too. 
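This last statement is easy to confirm independently. The following short SymPy script is our illustrative check (not part of the original derivation), with arbitrarily chosen numerical values of the constants $\rho$, $c_1$, $c_2$, $c_3$; it evaluates the residual $u_t + u_{xxx} + 6 u_x^2$ of expression \eqref{e28} at several sample points:
\begin{verbatim}
# Numerical sanity check: expression (28) satisfies equation (22),
# i.e. u_t + u_xxx + 6 u_x^2 = 0, for arbitrarily chosen constants.
import sympy as sp

x, t = sp.symbols('x t')
rho, c1, c2, c3 = sp.Rational(1, 2), 1, 2, 3   # arbitrary test values

u = 2*rho*(c1*sp.exp(2*rho*x) + c2*sp.exp(8*rho**3*t))**2 / (
    c1**2*sp.exp(4*rho*x) - c2**2*sp.exp(16*rho**3*t)
    + 4*rho*(c1*c2*(x - 12*rho**2*t) + c3)*sp.exp(2*rho*x + 8*rho**3*t))

residual = sp.diff(u, t) + sp.diff(u, x, 3) + 6*sp.diff(u, x)**2
for xv, tv in [(0, 0), (1, 2), (-2, 1)]:
    # the residual is exactly zero, so its numerical value is negligible
    assert abs(residual.subs({x: xv, t: tv}).evalf(30)) < 1e-10
\end{verbatim}
(Replacing the numerical evaluation by \texttt{sp.simplify(residual)} yields the symbolic zero, at the cost of a longer computation.)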
Consequently, in order to obtain, using the B\"{a}cklund transformation \eqref{e26}--\eqref{e27}, a nontrivial solution of equation \eqref{e5} which is not a solution of equation \eqref{e22}, one must apply the transformation to a solution of equation \eqref{e5} which is not a solution of equation \eqref{e22} either, i.e.\ one must initially know some solution of the KdV6 which does not satisfy the potential Korteweg--de~Vries equation. \section{Solutions and symmetries} \label{s4} Let us try to find the `wave of translation' solution \begin{equation} \label{e29} u = U ( X ) , \qquad X = x - s t , \qquad s = \text{constant} \end{equation} of the KdV6, assuming that $U$ is asymptotically constant at $X \to \pm \infty$. From equation \eqref{e5} we get the sixth-order nonlinear ordinary differential equation \begin{equation} \label{e30} U^{(\text{vi})} + ( 20 U' - s ) U^{(\text{iv})} + 40 U'' U ''' + 12 ( 10 U' - s ) U' U'' = 0 . \end{equation} Then we use the substitution \begin{equation} \label{e31} Y ( X ) = - U' + \tfrac{1}{20} s , \end{equation} integrate equation \eqref{e30} over $X$, and get the fourth-order equation \begin{equation} \label{e32} Y^{(\text{iv})} = 20 Y Y'' + 10 {Y'}^2 - 40 Y^3 + \tfrac{3}{10} s^2 Y - \tfrac{1}{100} s^3 , \end{equation} where we have fixed the constant of integration in accordance with the required asymptotic condition \begin{equation} \label{e33} Y \to \tfrac{1}{20} s \quad \text{at} \quad X \to \pm \infty . \end{equation} Equation \eqref{e32} is a special case, with $\kappa = 0$, $\alpha = \tfrac{3}{10} s^2$ and $\beta = - \tfrac{1}{100} s^3$, of Cosgrove's F-V equation \begin{equation} \label{e34} Y^{(\text{iv})} = 20 Y Y'' + 10 {Y'}^2 - 40 Y^3 + \alpha Y + \kappa X +\beta \end{equation} possessing the Painlev\'{e} property \cite{C}. Let us recall that, by the method used in \cite{C}, the F-V equation \eqref{e34} with $\kappa = 0$ can be solved in terms of hyperelliptic functions as follows: \begin{equation} \label{e35} Y ( X ) = \tfrac{1}{4} \bigl( \mu ( X ) + \nu ( X ) \bigr) , \end{equation} where the functions $\mu$ and $\nu$ are determined by the equations \begin{equation} \label{e36} I_1 ( \mu ) + I_1 ( \nu ) = K_3 , \qquad I_2 ( \mu ) + I_2 ( \nu ) = X + K_4 , \end{equation} $K_3$ and $K_4$ are arbitrary constants, $I_1$ and $I_2$ are defined as \begin{equation} \label{e37} I_1 ( \mu ) = \int^\mu \! \! \dfrac{d \tau}{\sqrt{P ( \tau )}} \, , \qquad I_2 ( \mu ) = \int^\mu \! \! \dfrac{\tau d \tau}{\sqrt{P ( \tau )}} \, , \end{equation} and in the quintic polynomial \begin{equation} \label{e38} P ( \tau ) = \tau^5 - 2 \alpha \tau^3 + 8 \beta \tau^2 + 32 K_1 \tau + 16 K_2 \end{equation} the arbitrary constants $K_1$ and $K_2$ correspond to the integrals \begin{equation} \label{e39} Y' H' - 4 Y J - \tfrac{1}{2} H^2 - \left( 2 Y^2 - \tfrac{1}{4} \alpha \right) H = K_1 , \qquad {H'}^2 - 8 H J = K_2 \end{equation} of equation \eqref{e34} with $\kappa = 0$, the notations being \begin{equation} \label{e40} H = Y'' - 6 Y^2 + \tfrac{1}{4} \alpha , \qquad J = Y Y'' - \tfrac{1}{2} {Y'}^2 - 4 Y^3 + \tfrac{1}{4} \beta . \end{equation} In our case \eqref{e32} of the F-V equation we get $K_1 = \tfrac{3}{1000} s^4$ and $K_2 = \tfrac{9}{6250} s^5$ from the asymptotic condition \eqref{e33} and relations \eqref{e39}--\eqref{e40}, and the quintic polynomial \eqref{e38} takes the form \begin{equation} \label{e41} P ( \tau ) = \left( \tau - \tfrac{3}{5} s \right)^2 \left( \tau + \tfrac{2}{5} s \right)^3 . 
\end{equation} This allows us to compute integrals \eqref{e37} in terms of inverse hyperbolic functions and obtain from relations \eqref{e35}--\eqref{e36} the following solution of equation \eqref{e32}: \begin{equation} \label{e42} Y = \dfrac{\xi^2}{10} \left( 2 - \dfrac{5 \left( 1 + 2 \xi^2 ( X - \eta )^2 - \cosh ( 2 \xi X ) \right)}{\bigl( \sinh ( \xi X ) - \xi ( X - \eta ) \cosh ( \xi X ) \bigr)^2} \right) , \end{equation} where the constant $\xi$ is determined by $s = 4 \xi^2$, the arbitrary constant $\eta$ is related to $K_3$, whereas the arbitrary constant $K_4$ has been fixed by an appropriate shift of $x$ in $X$. Finally, relations \eqref{e29}, \eqref{e31} and \eqref{e42} lead us to the following quite simple travelling wave solution of equation \eqref{e5}: \begin{equation} \label{e43} u = \left( \dfrac{1}{\xi \tanh ( \xi X )} - \dfrac{1}{\xi^2 ( X - \eta )} \right)^{-1} , \end{equation} where $\xi$ and $\eta$ are arbitrary constants, $\xi \neq 0$, $X = x - 4 \xi^2 t$, and the additive constant of integration has been omitted in the right-hand side for convenience. One can generalize this solution \eqref{e43} by arbitrary constant shifts of $x$ and $u$, $x \mapsto x + \zeta_1$ and $u \mapsto u + \zeta_2$. \begin{figure} \includegraphics[width=6cm]{fig1-left.eps} \hfil \includegraphics[width=6cm]{fig1-right.eps} \caption{The travelling wave solution \eqref{e43} of equation \eqref{e5}, $u(X)$ with $\xi = 3$: $\eta = -3$ (left) and $\eta = 5$ (right).} \label{fig1} \end{figure} It is easy to guess from figure~\ref{fig1} that the obtained solution \eqref{e43} is a nonlinear superposition of two elementary travelling wave solutions of equation \eqref{e5}, \begin{equation} \label{e44} u = \xi \tanh \left( \xi x - 4 \xi^3 t \right) \end{equation} and \begin{equation} \label{e45} u = \xi \left( \pm 1 - \left( \xi x - 4 \xi^3 t \right)^{-1} \right)^{-1} , \end{equation} where $\xi$ is an arbitrary constant, $\xi \neq 0$ in expression \eqref{e45}, and that the arbitrary constant $\eta$ determines the `distance' between these two solutions in their superposition \eqref{e43}. We do not know, however, why solutions \eqref{e44} and \eqref{e45} do not appear separately when we solve equation \eqref{e32} by the method used in \cite{C}. Let us also note that solution \eqref{e44} of equation \eqref{e5} is the well-known one-soliton solution of equation \eqref{e22}, whereas solutions \eqref{e43} and \eqref{e45} of the KdV6 do not satisfy the potential Korteweg--de~Vries equation. Integrable nonlinear wave equations usually possess infinitely many generalized symmetries \cite{O}. Using the Dimsym program \cite{D} based on the standard prolongation technique for computing symmetries, we found the following three third-order generalized symmetries of equation \eqref{e5}: \begin{gather} S_1 = \left( u_{xxx} + 6 u_x^2 \right) \partial_u , \label{e46} \\ S_2 = \left( 3 t u_{xxx} + 18 t u_x^2 - x u_x - u \right) \partial_u , \label{e47} \\ S_3 = h (t) \left( u_t + u_{xxx} + 6 u_x^2 \right) \partial_u , \label{e48} \end{gather} where $h (t)$ is an arbitrary function. On available computers, however, we were unable to find any symmetry of order higher than three. 
In variables \eqref{e13} the obtained symmetries \eqref{e46}--\eqref{e48} correspond to the Lie point symmetries \begin{gather} \bar{S}_1 = ( w_x - v_t ) \partial_v + 4 ( v w_x - w v_x ) \partial_w , \label{e49} \\ \bar{S}_2 = \bigl( 3 t (w_x - v_t ) - x v_x -2 v \bigr) \partial_v + \bigl( 12 t ( v w_x - w v_x ) - x w_x - w \bigr) \partial_w , \label{e50} \\ \bar{S}_3 = h (t) w_x \partial_v + \bigl( h (t) ( 4 v w_x - 4 w v_x + w_t ) + h' (t) w \bigr) \partial_w \label{e51} \end{gather} of system \eqref{e12}, respectively. We notice that symmetry \eqref{e46} is also a generalized symmetry of the potential Korteweg--de~Vries equation \eqref{e22}. This allows us to guess, and then to prove by direct computation in accordance with the definition of recursion operators \cite{O}, that the well-known recursion operator \begin{equation} \label{e52} R = \partial_x^2 + 8 u_x - 4 \partial_x^{-1} \cdot u_{xx} \end{equation} of equation \eqref{e22} is also a recursion operator of equation \eqref{e5}. Consequently, the KdV6 possesses at least one infinite hierarchy of generalized symmetries, of the form $R^n u_x$ ($n = 1, 2, \dotsc$), where $R$ is the recursion operator \eqref{e52}. We believe, however, that the complete algebra of generalized symmetries of the KdV6 may be richer and more interesting. \section{Conclusion} \label{s5} In this paper we discovered the new integrable nonlinear wave equation \eqref{e5}, which we call the KdV6, and obtained the first results on its properties. We discovered the KdV6 by applying the Painlev\'{e} test for integrability to the multiparameter class of equations \eqref{e1}. This result provides additional empirical confirmation of the sufficiency of the Painlev\'{e} property for integrability. Then we derived the Lax pair \eqref{e20}--\eqref{e21} and B\"{a}cklund self-transformation \eqref{e26}--\eqref{e27} of equation \eqref{e5}, using the method of truncated singular expansion. We found that the KdV6 is associated with the same spectral problem as the potential Korteweg--de~Vries equation \eqref{e22}, and observed an interesting property of the obtained B\"{a}cklund transformation concerning solutions of equation \eqref{e5} which satisfy equation \eqref{e22}. Finally, we derived the `wave of translation' solution \eqref{e43} of the KdV6, which is more general than the soliton solution of the potential Korteweg--de~Vries equation, and also found the third-order generalized symmetries \eqref{e46}--\eqref{e48} of equation \eqref{e5} and its recursion operator \eqref{e52}. Taking into account the obtained results, we believe that the KdV6 deserves further investigation. The following problems seem to be interesting: \begin{itemize} \item What is the multisoliton solution of the KdV6? Do solutions of the rational type \eqref{e45} interact elastically with each other and with solutions of the soliton type \eqref{e44}? \item Is there any other B\"{a}cklund self-transformation of the KdV6, different from the one obtained in this paper, which can produce nontrivial solutions not satisfying equation \eqref{e22} when applied to solutions of equation \eqref{e22}? \item Is there any other recursion operator of the KdV6, different from the recursion operator \eqref{e52} of the potential Korteweg--de~Vries equation? \item What are the Hamiltonian structures and conservation laws of the KdV6? \end{itemize} And, of course, finding applications of the new integrable nonlinear wave equation \eqref{e5} in physics and technology is also an interesting problem. 
\section*{Acknowledgements} S.S. is grateful to the Scientific and Technical Research Council of Turkey (T\"{U}B\.{I}TAK) for support and to the Middle East Technical University (ODT\"{U}) for hospitality.
\section{\label{sec:intro}Introduction} Kochen-Specker (KS) sets have recently been studied extensively due to new theoretical results which have prompted new experimental and computational techniques. The theoretical results consider conditions under which such experiments are feasible \cite{cabell-02,barrett}, including single qubit KS setups \cite{cabell-03,cabello-moreno-07}. Such results and experiments are applicable to quantum computational rules for dealing with qubits and qutrits within a large number of quantum gates. The experiments were carried out for spin$-\frac{1}{2}\otimes\frac{1}{2}$ particles (correlated photons, or neutrons with spatial and spin degrees of freedom), and therefore in this paper we provide results only for 4-dim KS vector sets of yes-no questions (KS sets for short). Recent designs \cite{cabello-08} and experiments \cite{cabello-fillip-rauch-08,b-rauch-09,k-cabello-blatt-09,amselem-cabello-09,liu-09,moussa-laflamme-10} deal with state-independent vectors. Such experiments can tell us a great deal more about quantum formalism and the geometry involved in obtaining new functional KS setups and quantum gate setups in general. On the other hand, a recent result of A.~Cabello \cite{cabello-10} connects noncontextuality and therefore the KS theorem with quantum nonlocality and opens the possibility of using KS sets in quantum information experiments. For both of these applications, it is important to have many nonisomorphic critical (empirically distinguishable) KS sets. Before our work, only eight 4-dim ones were known (see below). In this work, we present thousands of new nonisomorphic 4-dim critical and millions of non-critical KS sets. This is also important for a better understanding of quantum systems. First, it seemed that quantum gates and system state configurations that would allow only quantum representations were very sparse. We show that they are actually abundant. Second, the configurations of Hilbert space vectors and subspaces in KS sets have interesting symmetries that have intrigued many authors since the discovery of the KS theorem.~\cite{zimba-penrose,penrose-02,peres-book,massad-arravind99,aravind-ajp,mermin93,topos2,conway-kochen-02,ruuge05,brunet07,blanchfield-10} We now get many new symmetries because we obtain a new disjoint class of KS sets, none of which were previously known and none of which can be built up from previously known ones. Recently, we found that all known KS sets with up to 24 vectors and component values from the set \{-1,0,1\} (including one with 18 vectors for which experiments have been carried out) can be obtained from a single KS set with 24 vectors and 24 tetrads (blocks) originally found by A.~Peres.~\cite{pmm-2-09} The following is a brief summary of the techniques we used for that discovery. When a KS set can be obtained from a larger one by stripping (removing) tetrads, the stripped tetrads correspond to redundant detections within a measurement of spin projections. ``Critical'' sets are the smallest empirically distinguishable KS sets in the sense that they cannot be reduced to each other by stripping tetrads. Instead, stripping any tetrad from a critical set will cause the set to cease to be a KS set. 
Using the ``stripping technique'' (which we explain below), we exhaustively generated all possible 24-24 McKay-Megill-Pavicic (MMP) hypergraphs\cite{bdm-ndm-mp-fresl-jmp-10} [each vertex in a hypergraph represents a vector (state) in a Hilbert space and each tetrad (block) corresponds to four mutually orthogonal vectors] and found (after several months of computation on our 500 CPU cluster {\em Isabela}) that among over $10^{10}$ nonlinear equations to which the hypergraphs correspond, only one has a solution.~\cite{mp-report-10} The solution is, of course, isomorphic to Peres' 24-24 KS set mentioned above. Therefore we named this set simply the ``24-24 KS set'' and the family of KS sets that can be obtained from it, the ``24-24 KS class.'' We obtained KS subsets of the 24-24 set by a {\em stripping technique}, which consisted of stripping blocks off of the initial 24-24 KS set then checking whether the resulting subset continued to be a KS set. There are altogether 1232 such KS subsets.~\cite{pmm-2-09} Looking at these sets as sets of vectors with component values from the set \{-1,0,1\} that do not allow a numerical evaluation (KS theorem!), we might not see any apparent reason why, among trillions of instances, there would not also be KS sets that are not subsets of the 24-24 set. But it turns out that there are none. We prove that by an exhaustive generation of KS sets with 18 to 23 vectors. They are all isomorphic to subsets of the 24-24 set.~\cite{mp-report-10} When we strip off blocks, we eventually reach the smallest KS sets in the sense that any of them ceases to be a KS set if we strip off an additional block. We call such smallest sets {\em critical sets}. This is the definition we used in Ref.~\onlinecite{pmm-2-09}, and it differs from the definition we used in Ref.~\onlinecite{aravind10}, which is based on deleting vectors (directions, rays). In Ref.~\onlinecite{pmm-2-09} we proved that there are altogether six critical KS subsets in the 24-24 KS class. These are: 18-9,\cite{cabell-est-96a} 20-11,\cite{kern} another 20-11,\cite{pmmm03a} two 22-13s,\cite{pmmm03a} and 24-15.~\cite{pmm-2-09} The main focus of this paper is to describe another isolated family of KS sets (the 60-75 KS class), which we discovered by means of our stripping technique applied to a KS set with 60 vectors and 75 blocks. This latter set was obtained from the 4-dim polytope called the 600-cell.\ \cite{aravind-600} We call this new family the ``60-75 KS class.'' It turns out that it contains millions of KS sets and thousands of critical sets, and this is what is novel in this paper.~\cite{mp-vienna-talk-10} Previously, we found only several isolated examples from this class using a different technique.~\cite{aravind-600,aravind10} Being based on the geometry of the 600-cell, the 60-75 set provides us with highly symmetrical configurations of 60 rays, and this symmetry can be traced down to its smallest (with an estimated confidence of over 95\%) 26-13 KS subset (described in Sec.~\ref{sec:critical}). In particular, the smallest maximal loop of any KS set from the 60-75 class forms an octagon, while all sets from the 24-24 class have a hexagon maximal loop. The fact that the smallest KS set from the 60-75 class has 26 vectors, together with the fact that we could not build up to any subset of the 60-75 class with 24 vectors, proves the disjointness of the two classes with a confidence of over 95\%. 
Below, we present some of the many (several thousand) new critical sets from the 60-75 KS class, give the algorithms that enabled us to find them, and investigate their symmetry and geometry, comparing them with those of the 24-24 KS class. \section{\label{sec:critical}Critical Sets} A discovery we made for the 18 through 24 vector sets was that the maximal loop of edges in all their MMP hypergraphs was a hexagon and that in all of them there was only one hexagon. (In an MMP hypergraph vertices correspond to vectors and edges to tetrads. So, {\tt 1} denotes the 1st vector, \dots , {\tt 9} the 9th, {\tt A}, the 10th, \dots , {\tt Z} the 35th, {\tt a} the 36th, \dots {\tt y} the 60th.) That gave us the idea of the stripping technique, because stripping the edges in a way that preserves the hexagon might give a comparatively small number of subsets and critical sets. That was confirmed in Ref.~\onlinecite{pmm-2-09}. Since the 24-24 KS set is the single largest set of its class, we expected that sets, and in particular critical ones, of our new 60-75 class would be based on loops larger than hexagons. The conjecture was correct. Also, since by exhaustive generation of all MMP hypergraphs up to 23 vertices we have not found a single KS set with a loop larger than a hexagon, we concluded that these classes of KS sets do not overlap, i.e., that the minimal critical sets of the 60-75 KS sets must have more than 24 vectors. See Fig.~\ref{fig:c-26-30}. \begin{figure}[htp] \begin{center} \includegraphics[width=0.248\textwidth]{ara-critical-26-13.eps} \includegraphics[width=0.263\textwidth]{ara-critical-30-15-symm2.eps} \includegraphics[width=0.257\textwidth]{ara-critical-30-15a-s.eps} \includegraphics[width=0.212\textwidth]{ara-critical-30-15b.eps} \end{center} \caption{[1st fig.] The smallest critical KS set, shown with the help of MMP hypergraphs. It has 26 vertices, 13 edges, and a maximal loop of 8 edges (octagon). Vertices correspond to vectors and edges to orthogonal tetrads. [2nd-4th fig.] Critical sets with 30 vertices, 15 edges, and a maximal loop of 10 edges (decagon) for 2nd and 3rd and an octagon for the 4th figure. The 2nd one is isomorphic to the 1st and the 4th one to the 2nd of the two 30-15 critical sets found in Ref.~\onlinecite{aravind10}, respectively.} \label{fig:c-26-30} \end{figure} The figures in Fig.~\ref{fig:c-26-30} show MMP hypergraph representations of the smallest critical subsets of the 60-75 KS set. The smallest one has 26 vectors and 13 tetrads. Each vector is represented by a vertex in the MMP hypergraph. Each tetrad, which consists of four mutually orthogonal vectors, is represented by an edge in the MMP hypergraph. The next smallest ones are three KS sets with 30 vectors and 15 tetrads. Their MMP hypergraphs have 30 vertices and 15 edges each, as shown in their figures. As we can see, the first three sets are highly symmetrical and might be suitable for setting up an experiment. In the MMP notation described above, the 26-13 set has the following representation: {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNO1,5CHO,3Q8I,6QKF,NP9E,2PLB.} As we can see, there is a one-to-one correspondence between this notation and the figure. This is possible because in constructing a set, we only deal with orthogonalities between vectors and not with the values of the vector components. The values can always be ascribed later on by means of our program {\tt vectorfind}. 
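Since we refer to such MMP strings repeatedly, we note that decoding them is straightforward. The following Python sketch is our illustration only---it is not one of the programs described in Sec.~\ref{sec:alg}---and parses an MMP string into blocks of vertex indices according to the labeling convention above:
\begin{verbatim}
# Decode MMP vertex labels: '1'-'9' are vertices 1-9, 'A'-'Z' are
# vertices 10-35, and 'a'-'y' are vertices 36-60.
def vertex_index(ch):
    if '1' <= ch <= '9':
        return ord(ch) - ord('0')
    if 'A' <= ch <= 'Z':
        return ord(ch) - ord('A') + 10
    if 'a' <= ch <= 'y':
        return ord(ch) - ord('a') + 36
    raise ValueError('unknown MMP vertex label: %r' % ch)

def parse_mmp(s):
    """Parse '1234,4567,...' into a list of blocks (tuples of ints)."""
    return [tuple(vertex_index(c) for c in block)
            for block in s.rstrip('.').split(',') if block]

mmp_26_13 = ('1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNO1,'
             '5CHO,3Q8I,6QKF,NP9E,2PLB')
blocks_26_13 = parse_mmp(mmp_26_13)
assert len(blocks_26_13) == 13                            # 13 tetrads
assert len({v for b in blocks_26_13 for v in b}) == 26    # 26 vectors
\end{verbatim}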
The 60-75 set \cite{aravind-600,aravind10} in the MMP notation reads: {\noindent {\tt 1234,1cKT,1Qtg,1Njo,1yYE,2Mmn,2vZD,2Pri,2bIV,3HWe,3kqO,3XGx,3shS,4Fwa,4UdJ,4fRu,4pLl,5678,5pSK,\break 5XiN,5buE,5Wwm,9ABC,9fxK,9sVN,9PlE,9qdZ,AUOt,AHiy,Abao,AGRm,BFSj,BXnc,BvJg,BWLr,CpeY,CkDQ,CMuT,\break ChwI,6Fet,6kVy,6PJo,6hLZ,7Uxj,7sDc,7Mag,7qRI,8fOY,8HnQ,8vlT,8Gdr,FGDE,FqiT,UhnE,UWVT,fhig,fWDo,\break pGVg,pqno,HIJK,HZuj,kraK,kmlj,XIlt,XZaY,srut,smJY,MLON,MdSy,vReN,vwxy,PRSQ,PwOc,bLxQ,bdec.}} To obtain these results, we used an interactive procedure where we stripped one block at a time and then decided (based e.g.\ on the size of the output) what steps to take next. The programs we used and their algorithms are described in Sec.\ \ref{sec:alg}. We used the program {\tt mmpstrip} to strip blocks from starting diagrams. In general, we attempted to set the parameters of {\tt mmpstrip} (in particular, its increment parameter) so that we ended up with a sample of about 10,000 hypergraphs after colorable hypergraphs and isomorphic hypergraphs were removed. The overall interactive procedure was as follows. We start with the MMP hypergraph for the 60-75 KS set. \begin{enumerate} \item From this set, new sets with fewer and fewer blocks were generated with {\tt mmpstrip}. We used its increment parameter to keep the number of hypergraphs manageable, and we enabled the suppression of non-connected hypergraphs. \item Duplicate hypergraphs will result when one block is removed at a time (rather than multiple blocks combinatorially). These duplicates were removed. \item Colorable hypergraphs were filtered out with {\tt states01}, leaving only non-colorable ones (i.e.\ KS sets). \item Isomorphic hypergraphs were removed with {\tt shortdL}. \end{enumerate} While it is not exhaustive, the advantage of the above technique is that the filtering quickly converges to give us a collection of non-isomorphic KS sets with the desired block count for further study. Typically, we first ran these steps on a small sample of the hypergraphs (a hundred or so) so that the increment parameter for {\tt mmpstrip} in the first step above could be estimated, in order to end up with around 10,000 hypergraphs in the last step. In order to determine, after the above process (i.e.\ after removing a single block then processing the output), which (non-colorable) hypergraphs were critical, we ran {\tt mmpstrip} on one hypergraph at a time, then determined with {\tt states01} whether all of the resulting one-block-stripped subsets became colorable. If any one of them was non-colorable, it meant that the hypergraph was non-critical. This procedure is somewhat CPU-intensive and for this study was done (in conjunction with the above process) only up to 19 blocks remaining (i.e.\ with 56 or more blocks removed). We also did a more limited sampling of critical KS sets with higher block counts. \begin{figure}[htp] \begin{center} \includegraphics[width=0.24\textwidth]{ara-critical-32-17.eps} \includegraphics[width=0.27\textwidth]{ara-critical-33-17a.eps} \includegraphics[width=0.23\textwidth]{ara-critical-33-17b.eps} \includegraphics[width=0.24\textwidth]{ara-critical-34-17.eps} \end{center} \caption{[1st fig.] 32-17 KS set; Maximal loop: nonagon; [2nd, 3rd fig.] 33-17a,b; decagon, nonagon; [4th fig.] 34-17a; decagon.} \label{fig:c-32-34} \end{figure} The following list summarizes the critical hypergraphs we found with up to 19 blocks, along with some sample MMP diagrams and figure references. 
``30-15$\times$3'' means that we found 3 non-isomorphic critical diagrams with 30 vectors and 15 blocks (tetrads). This list resulted from testing several million hypergraphs using the procedure described above. The reader should keep in mind that the list is not necessarily exhaustive but is based on this limited sample. \begin{itemize} \item $\le$ 12 blocks: None. \item 13 blocks: 26-13$\times$1. It is shown in Fig.\ \ref{fig:c-26-30}, with a maximal loop of 8 blocks (octagon). Its MMP representation is: {\noindent 26-13: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNO1,5CHO,3Q8I,6QKF,NP9E,2PLB}} \item 14 blocks: None. \item 15 blocks: 30-15$\times$3. The first two sets in Fig.\ \ref{fig:c-26-30} have a maximal loop of order 10 (decagon) and the third one of order 8 (octagon). They have the following MMP representations: {\noindent 30-15a: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,5FKU,2CHR,38IN,9EOT,6BLQ.}} {\noindent 30-15b: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,6ELT,8FKR,C5UN,O29H,B3QI.}} {\noindent 30-15c: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNO1,2PCL,3TSI,5PUT,6QRH,8KRF,9NSE,BQUO.}} 30-15c is isomorphic to the first and 30-15a to the second of the two 30-15 critical sets found in Ref.~\onlinecite{aravind10}. \item 16 blocks: None. \item 17 blocks: 32-17$\times$1, 33-17$\times$2, and 34-17$\times$5. The 32-17, 33-17b, and 34-17d sets have maximal loops of order 9 (nonagons), and the maximal loops of 33-17a, 34-17a--c, and 34-17e are decagons, as shown in Figs.~\ref{fig:c-32-34} and \ref{fig:c-34-34}. Their MMP representations are {\noindent 32-17: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQR1,2SCK,3VIO,3WE8,3LTB,5SGR,6WUH,9TGN,FUVQ.}} {\noindent 33-17a: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,K2VF,3BIN,R5CH,Q6WL,U8OH,9TXE,VWXH.}} {\noindent 33-17b: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQR1,O2XW,K3UV,IR5C,LQ6B,8SWF,9TVE,BTSN,BUXH.}} {\noindent 34-17a: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,2VIQ,3NYE,5RKF,6WOB,TW8L,9XVU,CXYH.}} {\noindent 34-17b: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,2XHO,3CYR,5VKT,6BIN,8FWU,9ELQ,VWXY.}} {\noindent 34-17c: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,2WIN,3XHC,5FKU,6RVB,8YTO,9ELQ,VWXY.}} {\noindent 34-17d: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQR1,O2T9,NS38,5UCH,6YBI,ELVQ,FKXR,XWUT,VSWY.}} {\noindent 34-17e: \ \ {\tt 1234,4567,789A,ABCD,DEFG,GHIJ,JKLM,MNOP,PQRS,STU1,2VXI,3WYH,5FKU,6BOT,8VWR,9ELQ,CXYN.}} \begin{figure}[htp] \begin{center} \includegraphics[width=0.23\textwidth]{ara-critical-34-17a.eps} \includegraphics[width=0.23\textwidth]{ara-critical-34-17b.eps} \includegraphics[width=0.25\textwidth]{ara-critical-34-17d.eps} \includegraphics[width=0.23\textwidth]{ara-critical-34-17e.eps} \end{center} \caption{[1st fig.] 34-17b (Max. loop: decagon); [2nd] 34-17c (decagon); [3rd] 34-17d (nonagon); [4th] 34-17e (decagon).} \label{fig:c-34-34} \end{figure} \item 18 blocks: None. \item 19 blocks: 36-19$\times$11, 37-19$\times$9, and 38-19$\times$6. Three of them have maximal loops of order 11. \item 20 blocks: None. \item 21 and higher: The number of critical KS sets per block count increases to a maximum of over 1700 sets with 38 blocks, then decreases. Even numbers of blocks in a KS set start at 24. Odd numbers of vectors with an odd number of blocks start to appear at 17 blocks (33 vectors) and with an even number of blocks at 24 blocks. 
The 33-17 set is given above, and an MMP representation of a 42-24 set (with 14 blocks in a maximal loop) reads: \\ 42-24:\phantom{xx}{\tt 1234,5678,9ABC,DEF8,GHIJ,KLMJ,NOPQ,RSTM,UVI4,WXYF,ZaE3,bcLC,dTHB,eaYQ,ecVD,fbPG,gUSO,\\ \phantom{xxxXxXx}gfeA,ZTN7,XU97,bWR2,dWO6,fK63,eJ72.}\\ An exhaustive generation proves that there are no critical sets with more than 63 blocks. We conjecture that the first critical sets will appear at 60 blocks, i.e., for 60-60 KS sets and smaller, in analogy to the 24-24 class, assuming that picking up new tetrads (61st-63rd) of the already recorded 60 vector/state values would not make a set empirically distinguishable from the 60-60 one we started with. The size of maximal loops steadily rises with the number of blocks. (A detailed presentation of additional KS sets we obtained by two other techniques will be given elsewhere, because the algorithms we use for higher numbers of blocks are too involved to be presented here. Besides, many massive computations that take months on our clusters are under way.) \end{itemize} Since our sampling was not exhaustive---for example, there are 290 quadrillion (2.9$\times 10^{17}$) hypergraphs with 19 blocks---it is likely that there are many more critical KS sets than suggested above, particularly at block counts higher than 19. When repeating the above procedure with independent random samples, some critical diagrams (up to isomorphism) recurred frequently, whereas others occurred only once or twice, indicating that their distribution and symmetries are far from uniform. An exhaustive generation of all sets from the 60-75 KS class, which we might be able to carry out in the future, would give us the complete list. \section{\label{sec:alg}Algorithms} Our study makes use of algorithms and computer programs from several disciplines (quantum mechanics, lattice theory, graph theory, and geometry). Each has its own terminology, which we will sometimes keep when discussing an algorithm from that discipline. To avoid confusion, the reader should keep in mind that in the context of the MMP diagrams used for this study, the terms ``vertex,'' ``atom,'' ``ray,'' ``1-dim subspace,'' and ``vector'' are synonymous, as are the terms ``edge,'' ``block,'' and ``tetrad (of mutually orthogonal vectors).'' Similarly, ``MMP hypergraph'' and ``MMP diagram'' mean the same thing. For the purpose of the KS theorem, the vertices of an MMP hypergraph are interpreted as rays, i.e.\ 1-dim subspaces of a Hilbert space, each specified by a representative (non-zero) vector in the subspace. The vertices on a single edge are assumed to be mutually orthogonal rays or vectors. In order for an MMP hypergraph to correspond to a KS set, first there must exist an assignment of vectors to the vertices such that the orthogonality conditions specified by the edges are satisfied. Second, there must not exist an assignment (sometimes called a ``coloring'') of 0/1 (non-dispersive or classical) probability states to the vertices such that each edge has exactly one vertex assigned to 1 and its others assigned to 0. For a given MMP hypergraph, we use two programs to confirm these two conditions. The first one, {\tt vectorfind}, attempts to find an assignment of vectors to the vertices that meets the above requirement. This program is described in Ref.~\onlinecite{pmm-2-09}. The second program, {\tt states01}, determines whether or not a 0/1 coloring is possible that meets the above requirement. 
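For small hypergraphs, such a coloring test---and with it a criticality test---can be sketched in a few lines. The following Python fragment is only an illustration of the definitions, not the actual algorithm of {\tt states01}, which must handle far larger inputs efficiently; it reuses \texttt{blocks\_26\_13} from the sketch in Sec.~\ref{sec:critical}.
\begin{verbatim}
# Backtracking test: does a 0/1 assignment exist in which every block
# has exactly one vertex assigned 1 and its other vertices assigned 0?
def colorable(blocks, assignment=None, i=0):
    if assignment is None:
        assignment = {}
    if i == len(blocks):
        return True
    for one in blocks[i]:     # candidate vertex to carry the value 1
        trial = {v: (1 if v == one else 0) for v in blocks[i]}
        if all(assignment.get(v, trial[v]) == trial[v]
               for v in blocks[i]):
            saved = {v: assignment.get(v) for v in blocks[i]}
            assignment.update(trial)
            if colorable(blocks, assignment, i + 1):
                return True
            for v, old in saved.items():    # undo the trial assignment
                if old is None:
                    del assignment[v]
                else:
                    assignment[v] = old
    return False

def is_critical_ks(blocks):
    """Noncolorable, but colorable after removing any single block."""
    return (not colorable(blocks) and
            all(colorable(blocks[:k] + blocks[k + 1:])
                for k in range(len(blocks))))

assert is_critical_ks(blocks_26_13)   # the 26-13 set is critical
\end{verbatim}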
The algorithm used by {\tt states01} is described in Ref.~\onlinecite{pmmm03a}. The 60-vertex, 75-edge MMP hypergraph based on the 600-cell described above (which we refer to as 60-75) has been shown to be a KS set.\ \cite{aravind-600} However, we can remove blocks from it and it will continue to be a KS set. The purpose of this study was to try to find subsets of the 60-75 hypergraph that are critical, i.e.\ that are minimal in the sense that if any one block is removed, the subset is no longer a KS set. While the program {\tt vectorfind} independently confirmed that 60-75 admits the necessary vector assignment, such an assignment remains valid when a block is removed. Thus it is not necessary to run {\tt vectorfind} on subsets of 60-75. However, a non-colorable (KS) set will eventually admit a coloring when enough blocks are removed, and the program {\tt states01} is used to test for this condition. The basic method in our study was to start with the 60-75 hypergraph and generate successive subsets, each with one or more blocks stripped off of the previous subset, then keep the ones that continued to admit no coloring and discard the rest. Of these, ones isomorphic to others were also discarded. The program {\tt mmpstrip} was used to generate subsets with blocks stripped off. The user provides the number of blocks $k$ to strip from an input MMP hypergraph with $n$ blocks, and the program will produce all ${{n}\choose{k}}$ subsets with a simple combinatorial algorithm. Partial output sets can be generated with start and end parameters, and if the full output is too large to be practical, an increment parameter $i$ will skip all but every $i$th output line in order to partially sample the output subsets. Given an input file with MMP hypergraphs, the program can calculate in advance how many output hypergraphs will result, so that the user can plan which parameter settings to use. The {\tt mmpstrip} program will also optionally suppress MMP hypergraphs that are not connected, such as those with isolated blocks or two unconnected sections, since these are of no interest. Finally, all output lines are by default renormalized (assigned a canonical atom naming), so that there are no gaps in the atom naming, as is required by some other MMP processing programs. In order to detect isomorphic hypergraphs, one of two programs was used. For testing small sets of hypergraphs, we used the program {\tt subgraph} described in Ref.~\onlinecite{pmm-2-09}, which has the advantage of displaying the isomorphism mapping for manual verification. For a large number of hypergraphs, we used Brendan McKay's program {\tt shortdL}, which has a much faster run time. \section{\label{sec:disc}Discussion} In this paper we describe the 60-75 class of Kochen-Specker sets with a focus on its smallest critical sets, as defined in the Introduction. The smallest critical sets we find are shown in Figs.~\ref{fig:c-26-30}-\ref{fig:c-34-34}. The order of their maximal loops of edges (blocks, tetrads) is 8 or more. Since the order of the maximal loop of all subsets that form the lower KS 24-24 class is 6, this is an additional aspect of the disjointness of the 24-24 and 60-75 classes, which we have shown to hold with a statistical confidence of over 95\%. More details on the latter results are given in the Introduction. The high symmetry of the smallest critical KS sets shown in the first three figures of Fig.~\ref{fig:c-26-30} and in the figures of Fig.~\ref{fig:c-34-34} suggests that spin KS experiments might be designed for them. 
Therefore we would like to discuss geometrical features of the sets we obtained in Sec.~\ref{sec:critical}. Each of the sets shown in Figs.~\ref{fig:c-26-30}-\ref{fig:c-34-34} involves an odd number of bases (blocks, edges, tetrads) (13, 15, or 17), with each ray (vertex, vector, direction) occurring exactly twice over these bases. This observation, by itself, gives an immediate ``parity proof'' of the BKS (Bell-Kochen-Specker) theorem without the need for any further calculation or analysis (and, in particular, without the need to use program {\tt states01} mentioned in the previous section). The reason is the following: on the one hand, because each tetrad must have exactly one ray assigned the value 1, there must be an odd number of occurrences of 1's over all the tetrads; but, on the other hand, because each ray is repeated twice, there must be an even number of 1's over all the tetrads. This contradiction shows that a 1/0 assignment is impossible and so proves that these are indeed KS sets. The argument, of course, does not go through for those sets where a ray appears in an odd number of tetrads as, e.g., ray 2 in the set 42-24 whose MMP representation is given at the end of Sec.~\ref{sec:critical}. An interesting difference between the 26-13 and 30-15 cases is that the latter are isogonal (or vertex transitive), whereas the former is not. A set of rays is said to be isogonal if there is a symmetry operation that will take any ray into any other one while keeping the structure as a whole invariant. The 60 rays of the 600-cell are isogonal as a whole, and this might encourage the belief that subsets yielding parity proofs must also be isogonal. The 26-13 set shows this supposition to be false in the case of the 600-cell, or the 60-75 set. In addition to the methods outlined in the previous sections, alternative methods were used to arrive at some of the possible critical sets. The idea, which followed the ``parity proof'' above, was to look for $N$ rays forming $T$ complete tetrads, with $T$ odd, in such a way that each ray occurred in exactly two of the tetrads. Such a set, which we will refer to as an $N$-$T$ set (e.g., 26-13, 30-15, etc.), is a KS set. An $N$-$T$ set that obeys the ``parity proof'' satisfies the numerical constraint $N=2T$, so it involves only one free parameter. (Of course, there are other critical sets such as 33-17 that will not be found by this method.) Starting from small values of $N$ and proceeding upwards, we looked for KS sets. It is easily seen that no set of this type can exist for $N$ less than 16. The reason has to do with the structure of the tetrads for the 600-cell. Inspection shows that each ray occurs in exactly five tetrads and that it occurs exactly once in these tetrads with each of the 15 rays it is orthogonal to. Suppose a particular tetrad is chosen as the ``seed'' for an $N$-$T$ set. Then each ray in that tetrad must occur in one other tetrad, and so there must be at least four other tetrads involved. However, each of those tetrads must involve three new rays, and so the total number of rays, including the four we began with, is at least 16. Starting with $N = 18$ and proceeding upwards (remembering that $T = N/2$ must be an odd integer) shows, through slightly more involved arguments (which differ from those for the smallest critical set (18-9) of the 24-24 class), that solutions with $N = 18$ and 22 are impossible. The first solution that is possible is for $N = 26$, and it explains the 26-13 set shown in Fig.\ \ref{fig:c-26-30}. 
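The parity structure just described is also trivial to check mechanically; the following sketch (again only an illustration, reusing \texttt{blocks\_26\_13} from Sec.~\ref{sec:critical}) verifies it for the 26-13 set:
\begin{verbatim}
# Parity proof check for the 26-13 set: the number of tetrads is odd
# and every ray occurs in exactly two tetrads, so no 0/1 assignment
# with exactly one 1 per tetrad can exist.
from collections import Counter

occurrences = Counter(v for b in blocks_26_13 for v in b)
assert len(blocks_26_13) % 2 == 1                  # odd number of tetrads
assert all(n == 2 for n in occurrences.values())   # each ray occurs twice
\end{verbatim}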
There are actually 1800 different sets of 26 rays that lead to such a solution, but they are all geometrically isomorphic to one another, in the sense that there is a four-dimensional orthogonal transformation that will take any one such set into any other. In the Introduction, we give a statistical argument showing, with over 95\%\ confidence, that there are no smaller sets than 26-13 in the 60-75 class that do not follow the parity proof. All the other results we obtained, together with the presentation of the algorithms and programs we used, are given in Ref.~\onlinecite{waeg-aravind-megill-pavicic-10}. There we give a detailed analysis of the results, complete statistics of the obtained sets, and a review of their features. All of that is outside the scope of the present paper. \begin{acknowledgments} One of us (M. P.) would like to thank his host Hossein Sadeghpour for support during his stay at ITAMP. Supported by the US National Science Foundation through a grant for the Institute for Theoretical Atomic, Molecular, and Optical Physics (ITAMP) at Harvard University and Smithsonian Astrophysical Observatory, and by the Ministry of Science, Education, and Sport of Croatia through the project No.\ 082-0982562-3160. Computational support was provided by the cluster Isabella of the University Computing Centre of the University of Zagreb and by the Croatian National Grid Infrastructure. \end{acknowledgments}
\section{Introduction}\label{sec:introduction} Resource Description Framework (RDF) \cite{Swick1998Resource} is the standard data model in the semantic web. RDF describes relationships between entities or resources using a directed labelled graph. RDF has a broad range of applications in the semantic web, social networks, bioinformatics, geographical data, etc.\ \cite{Arenas2011Querying}. The standard query language for RDF graphs is SPARQL \cite{Prud2006SPARQL}. Though SPARQL is powerful enough to express queries over RDF graphs \cite{Angles2008The}, the query evaluation of full SPARQL is, in general, PSPACE-complete \cite{P2009Semantics}. Currently, there are some popular query engines supporting full SPARQL, such as Jena \cite{Carroll2004Jena} and Sesame \cite{Broekstra2002Sesame}. However, they are not highly efficient when handling large RDF datasets \cite{Zou2011gStore,Zou2014gStore}. In contrast, gStore \cite{Zou2011gStore,Zou2014gStore} and RDF-3X \cite{Neumann2010The} can query large datasets highly efficiently, but they merely provide querying services for BGPs. Therefore, it is necessary to develop a query engine supporting more expressive queries over large datasets. Since the OPT operator is the least conventional operator among SPARQL operators \cite{Zhang2014ipl}, it is interesting to investigate those patterns extending BGP with the OPT operator. Let us take a look at the following example. The RDF example in \autoref{blogger.rdf} describes entities of bloggers and blogs. The relationship between a blogger and a blog is expressed by the property \emph{foaf:maker}. Both bloggers and blogs have properties describing themselves. Essentially, a set of triples can be modeled as a directed graph. \begin{table}[h] \centering \caption{bloggers.rdf \label{blogger.rdf}} \begin{tabular}{l|l|l} \hline Subject &Predicate &Object \\ \hline id1 &foaf:name &Jon Foobar \\ \hline id1 &rdf:type &foaf:Agent \\ \hline id1 &foaf:weblog &foobar.xx/blog \\ \hline foobar.xx/blog &dc:title &title \\ \hline foobar.xx/blog &rdfs:seeAlso &foobar.xx/blog.rdf \\ \hline foobar.xx/blog.rdf &foaf:maker &id1 \\ \hline foobar.xx/blog.rdf &rdf:type &rss:channel \\ \hline \end{tabular} \end{table} \begin{example}\label{ex:introduction} Consider the RDF dataset $G$ storing the information in \autoref{blogger.rdf}. Given a BGP $Q=((?x, \textit{foaf:maker}, ?y) \ \mathbin{\mathrm{AND}} \ (?z, \textit{foaf:name}, ?u))$, its evaluation over $G$ is as follows: \begin{center} $\semm {Q}{G} =$\;\;\;\;\;\;\;\begin{tabular}{|c|c|c|c|} \hline ?x &?y &?z &?u \\ \hline foobar.xx/blog.rdf &id1 &id1 &Jon Foobar \\ \hline \end{tabular} \end{center} Consider a new pattern $Q_1$ obtained from $Q$ by adding the OPT operator in the following way: \\$Q_1=(((?x, \textit{foaf:maker}, ?y) \ \mathbin{\mathrm{OPT}} \ (?y, \textit{rdf:type}, ?v)) \ \mathbin{\mathrm{AND}} \ (?z, \textit{foaf:name}, ?u))$. The evaluation of $Q_1$ over $G$ is as follows: \begin{center} $\semm {Q_1}{G} =$\;\;\;\;\;\;\;\begin{tabular}{|c|c|c|c|c|} \hline ?x &?y &?v &?z &?u \\ \hline foobar.xx/blog.rdf &id1 &foaf:Agent &id1 &Jon Foobar \\ \hline \end{tabular} \end{center} Consider another pattern $Q_2=(((?x, \textit{foaf:maker}, ?y) \ \mathbin{\mathrm{OPT}} \ (?y, \textit{rdf:type}, ?z)) \ \mathbin{\mathrm{AND}} \ (?z, \textit{foaf:name}, ?u))$. The evaluation of $Q_2$ over $G$ is the empty set, i.e., $\semm {Q_2}{G} = \emptyset$. 
\end{example} In the above example, $Q_1$ is a well-designed pattern while $Q_2$ is not \cite{P2009Semantics}. In fact, we observe that queries built on well-designed patterns are very common in the real world. For example, in LSQ \cite{saleem2015lsq}, a Linked Dataset describing SPARQL queries extracted from the logs of four prominent public SPARQL endpoints (shown in \autoref{lsq}) and containing more than one million available queries, over 70\% of the queries are built on well-designed patterns \cite{HAN2016On,Song2016Efficient}. \begin{table}[h] \centering \caption{SPARQL logs source in LSQ \label{lsq}} \begin{tabular}{l|l|r} \hline Dataset &Date &Triple Number \\ \hline DBpedia &30/04/2010 to 20/07/2010 &232,000,000 \\ \hline Linked Geo Data (LGD) &24/11/2010 to 06/07/2011 &1,000,000,000 \\ \hline Semantic Web Dog Food (SWDF) &16/05/2014 to 12/11/2014 &300,000 \\ \hline British Museum (BM) &08/11/2014 to 01/12/2014 &1,400,000 \\ \hline \end{tabular} \end{table} Furthermore, queries with well-designed AND-OPT patterns (for short, WDAO-patterns) account for over 99\% of all queries with well-designed patterns in LSQ \cite{HAN2016On,Song2016Efficient}. In short, the fragment of WDAO-patterns is a natural extension of BGP in the real world. Therefore, we mainly discuss WDAO-patterns in this paper. In this paper, we present a plugin-based framework for all SELECT queries built on WDAO-patterns, named PIWD. Within this framework, we can conveniently employ any query engine evaluating BGPs to evaluate queries built on WDAO-patterns. The main contributions of this paper can be summarized as follows: \begin{compactitem} \item We present a parse tree named \emph{well-designed AND-OPT tree} (for short, WDAO-tree), whose leaves are BGPs and whose inner nodes are all labeled by the OPT operator, and then prove that any WDAO-pattern can be translated into a WDAO-tree. \item We propose a plugin-based framework named \emph{PIWD} for the evaluation of queries built on WDAO-patterns, based on WDAO-trees. Within this framework, a query can be evaluated in the following three steps: (1) translating the query into a WDAO-tree $T$; (2) evaluating all leaves of $T$ via query engines for BGPs; and (3) joining the solutions of children to obtain the solutions of their parent, up to the root. \item We implement the proposed framework PIWD by employing gStore and RDF-3X and evaluate it experimentally on LUBM. \end{compactitem} The rest of this paper is organized as follows: Section \ref{sec:pre} briefly introduces SPARQL, the notion of well-designed patterns, and the OPT normal form. Section \ref{sec:wdao} defines the well-designed AND-OPT tree to capture WDAO-patterns. Section \ref{sec:piwd} presents PIWD and Section \ref{sec:experiment} presents the experimental results. Section \ref{sec:related} discusses related work. Finally, Section \ref{sec:con} concludes this paper. \section{Preliminaries}\label{sec:pre} In this section, we introduce RDF and SPARQL patterns, well-designed patterns, and the OPT normal form \cite{P2009Semantics}. \subsection{RDF} Let ${I}$, ${B}$ and ${L}$ be infinite sets of \emph{IRIs}, \emph{blank nodes} and \emph{literals}, respectively. These three sets are pairwise disjoint. We denote the union $I \cup B \cup L$ by $U$, and elements of $I \cup L$ will be referred to as \emph{constants}. A triple $(s, p, o) \in ({I}\cup {B}) \times {I} \times ({I} \cup {B}\cup {L})$ is called an \emph{RDF triple}. Let $V$ be an infinite set of \emph{variables}, disjoint from $U$; a \emph{triple pattern} is a triple in which variables from $V$ may also occur in any position. A \emph{basic graph pattern} (BGP) is a set of triple patterns. 
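For concreteness, an RDF graph can be represented as a set of triples, and a BGP can be evaluated by matching its triple patterns one by one. The following minimal Python sketch is ours and is independent of the engines mentioned in Section~\ref{sec:introduction}; it illustrates this on the data of \autoref{blogger.rdf}, and the semantics it implements is made precise in the next subsection.
\begin{verbatim}
# Evaluate a BGP over an RDF graph given as a set of triples.
# Variables are strings beginning with '?'.
def is_var(term):
    return term.startswith('?')

def match_triple(pattern, triple):
    """Mapping induced by matching one triple pattern against one
    triple, or None if they do not match."""
    mu = {}
    for p, t in zip(pattern, triple):
        if is_var(p):
            if mu.get(p, t) != t:
                return None
            mu[p] = t
        elif p != t:
            return None
    return mu

def eval_bgp(bgp, graph):
    solutions = [{}]
    for pattern in bgp:
        new_solutions = []
        for mu in solutions:
            bound = tuple(mu.get(term, term) for term in pattern)
            for triple in graph:
                nu = match_triple(bound, triple)
                if nu is not None:
                    new_solutions.append({**mu, **nu})
        solutions = new_solutions
    return solutions

graph = {('id1', 'foaf:name', 'Jon Foobar'),
         ('foobar.xx/blog.rdf', 'foaf:maker', 'id1')}
bgp = [('?x', 'foaf:maker', '?y'), ('?z', 'foaf:name', '?u')]
print(eval_bgp(bgp, graph))
# [{'?x': 'foobar.xx/blog.rdf', '?y': 'id1',
#   '?z': 'id1', '?u': 'Jon Foobar'}]
\end{verbatim}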
\subsection{Semantics of SPARQL patterns} The semantics of patterns is defined in terms of sets of so-called \emph{mappings}, which are simply total functions $\mu \colon S \to U$ on some finite set $S$ of variables. We denote the domain $S$ of $\mu$ by $\dom \mu$. Now given a graph $G$ and a pattern $P$, we define the semantics of $P$ on $G$, denoted by $\semm P G$, as a set of mappings, in the following manner. \begin{compactitem} \item If $P$ is a triple pattern $(u,v,w)$, then \begin{center} $ \semm P G := \{\mu \colon \{u,v,w\} \cap V \to U \mid (\mu(u),\mu(v),\mu(w)) \in G\}. $ \end{center} Here, for any mapping $\mu$ and any constant $c \in I \cup L$, we agree that $\mu(c)$ equals $c$ itself. In other words, mappings are extended to constants according to the identity mapping. \item If $P$ is of the form $P_1 \mathbin{\mathrm{UNION}} P_2$, then $\semm P G := \semm {P_1} G \cup \semm {P_2} G$. \item If $P$ is of the form $P_1 \mathbin{\mathrm{AND}} P_2$, then $\semm P G := \semm {P_1} G \Join \semm {P_2} G$, where, for any two sets of mappings $\Omega_1$ and $\Omega_2$, we define \begin{center} $ \Omega_1 \Join \Omega_2 = \{\mu_1 \cup \mu_2 \mid \mu_1 \in \Omega_1 \text{ and } \mu_2 \in \Omega_2 \text{ and }\mu_1 \sim \mu_2\}. $ \end{center} Here, two mappings $\mu_1$ and $\mu_2$ are called \emph{compatible}, denoted by $\mu_1 \sim \mu_2$, if they agree on the intersection of their domains, i.e., if for every variable $?x \in \dom {\mu_1} \cap \dom {\mu_2}$, we have $\mu_1(?x) = \mu_2(?x)$. Note that when $\mu_1$ and $\mu_2$ are compatible, their union $\mu_1 \cup \mu_2$ is a well-defined mapping; this property is used in the formal definition above. \item If $P$ is of the form $P_1 \mathbin{\mathrm{OPT}} P_2$, then \begin{center} $ \semm P G := (\semm {P_1} G \Join \semm {P_2} G) \cup (\semm {P_1} G \smallsetminus \semm {P_2} G), $ \end{center} where, for any two sets of mappings $\Omega_1$ and $\Omega_2$, we define \begin{center} $\Omega_1 \smallsetminus \Omega_2$ $ = \{ \mu_1 \in \Omega_1 \mid \neg \exists \mu_2 \in \Omega_2 : \mu_1 \sim \mu_2\}$. \end{center} \item If $P$ is of the form $\mathop{\mathrm{SELECT}}_{S}(P_{1})$, then $\semm {P}{G} = \{\mu|_{S \cap \dom \mu} \mid \mu \in \semm{P_1}G\}$, where $f|_X$ denotes the standard mathematical notion of restriction of a function $f$ to a subset $X$ of its domain. \item Finally, if $P$ is of the form $P_1 \mathbin{\mathrm{FILTER}} C$, then $\semm P G := \{\mu \in \semm {P_1} G \mid \mu(C) = \mathit{true} \}$. Here, for any mapping $\mu$ and constraint $C$, the evaluation of $C$ on $\mu$, denoted by $\mu(C)$, is defined in terms of a three-valued logic with truth values $\mathit{true}$, $\mathit{false}$, and $\mathit{error}$. Recall that $C$ is a boolean combination of atomic constraints. For a bound constraint $\mathbin{\mathrm{bound}}(?x)$, we define: \begin{center} $ \mu(\mathbin{\mathrm{bound}}(?x)) = \begin{cases} \mathit{true} & \text{if $?x \in \dom \mu$;} \\ \mathit{false} & \text{otherwise.} \end{cases} $ \end{center} For an equality constraint $?x=?y$, we define: \begin{center} $ \mu(?x=?y) = \begin{cases} \mathit{true} & \text{if $?x,?y \in \dom \mu$ and $\mu(?x)=\mu(?y)$;} \\ \mathit{false} & \text{if $?x,?y \in \dom \mu$ and $\mu(?x)\neq\mu(?y)$;} \\ \mathit{error} & \text{otherwise.} \end{cases} $ \end{center} Thus, when $?x$ and $?y$ do not both belong to $\dom \mu$, the equality constraint evaluates to $\mathit{error}$. 
Similarly, for a constant-equality constraint $?x=c$, we define: \begin{center} $ \mu(?x=c) = \begin{cases} \mathit{true} & \text{if $?x \in \dom \mu$ and $\mu(?x)=c$;} \\ \mathit{false} & \text{if $?x \in \dom \mu$ and $\mu(?x)\neq c$;} \\ \mathit{error} & \text{otherwise.} \end{cases} $ \end{center} A boolean combination is then evaluated using the truth tables given in Table~\ref{truthtable}. \end{compactitem} \begin{table} \caption{Truth tables for the three-valued semantics.} \label{truthtable} $$ \begin{array}[t]{cc|cc} p & q & p \land q & p \lor q \\ \hline \mathit{true} & \mathit{true} & \mathit{true} & \mathit{true}\\ \mathit{true} & \mathit{false} & \mathit{false} & \mathit{true}\\ \mathit{true} & \mathit{error} & \mathit{error} & \mathit{true}\\ \mathit{false} & \mathit{true} & \mathit{false} & \mathit{true}\\ \mathit{false} & \mathit{false} & \mathit{false} & \mathit{false}\\ \mathit{false} & \mathit{error} & \mathit{false} & \mathit{error}\\ \mathit{error} & \mathit{true} & \mathit{error} & \mathit{true}\\ \mathit{error} & \mathit{false} & \mathit{false} & \mathit{error}\\ \mathit{error} & \mathit{error} & \mathit{error} & \mathit{error} \end{array} \qquad \begin{array}[t]{c|c} p & \neg p \\ \hline \mathit{true} & \mathit{false} \\ \mathit{false} & \mathit{true} \\ \mathit{error} & \mathit{error} \end{array} $$ \end{table} \subsection{Well-Designed Pattern}\label{wd} A $\mathbin{\mathrm{UNION}}$-\emph{free} pattern $P$ is \emph{well-designed} if the following conditions hold: \begin{itemize} \item $P$ is safe; \item for every subpattern $Q$ of form ($Q_{1}$ OPT $Q_{2}$) of $P$ and for every variable $?x$ occurring in $P$, the following condition holds: \begin{center} If $?x$ occurs both inside $Q_{2}$ and outside $Q$, then it also occurs in $Q_{1}$. \end{center} \end{itemize} Considering the definition of well-designed patterns, some concepts can be explained as follows: \begin{remark} The fragment of AND-OPT patterns excludes the $\mathbin{\mathrm{FILTER}}$ and $\mathbin{\mathrm{UNION}}$ operators and contains only the $\mathbin{\mathrm{AND}}$ and $\mathbin{\mathrm{OPT}}$ operators. It is obvious that an \emph{and-opt} pattern must be $\mathbin{\mathrm{UNION}}$-\emph{free} and safe. \end{remark} We can conclude that whether a pattern is a WDAO-pattern is determined by the variables occurring in its subpatterns. \begin{itemize} \item \textbf {UNION-\emph{free} Pattern}: $P$ is $\mathbin{\mathrm{UNION}}$-free if $P$ is constructed by using only the operators $\mathbin{\mathrm{AND}}$, $\mathbin{\mathrm{OPT}}$, and $\mathbin{\mathrm{FILTER}}$. Every graph pattern $P$ is equivalent to a pattern of the form ( $P_1 \mathbin{\mathrm{UNION}} P_2 \mathbin{\mathrm{UNION}} \cdots \mathbin{\mathrm{UNION}} P_n$ ), where each $P_i$ ( $1 \leq i \leq n$ ) is $\mathbin{\mathrm{UNION}}$-free. \item \textbf {Safe} : A pattern of the form ( P FILTER R ) is safe if it satisfies the condition $\mathit{var}(R) \subseteq \mathit{var}(P)$. \end{itemize} Note that the OPT operator provides a truly optional left-outer join due to weak monotonicity \cite{P2009Semantics}. A SPARQL pattern $P$ is said to be weakly monotone if for every pair of RDF graphs $G_{1}$, $G_{2}$ such that $G_{1}\subseteq G_{2}$, it holds that $\semm P {G_1} \sqsubseteq \semm P {G_2}$. In other words, for every mapping $\mu_1 \in \semm P {G_1}$ there exist a mapping $\mu_2 \in \semm P {G_2}$ and a mapping $\mu'$ such that $\mu_2=\mu_1\cup\mu'$. Weak monotonicity is an important property for characterizing the satisfiability of SPARQL \cite{Zhang2015sat}. 
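To make the AND and OPT semantics concrete, the following Python sketch (ours, in the same style as the BGP sketch of Section~\ref{sec:pre}) implements the mapping algebra defined above; the last two lines illustrate that an OPT extension is truly optional.
\begin{verbatim}
# Mapping algebra of the SPARQL semantics: mappings are dicts.
def compatible(m1, m2):
    return all(m2.get(k, v) == v for k, v in m1.items())

def join(o1, o2):                        # [[P1 AND P2]]
    return [{**m1, **m2}
            for m1 in o1 for m2 in o2 if compatible(m1, m2)]

def minus(o1, o2):                       # Omega1 \ Omega2
    return [m1 for m1 in o1
            if not any(compatible(m1, m2) for m2 in o2)]

def left_outer_join(o1, o2):             # [[P1 OPT P2]]
    return join(o1, o2) + minus(o1, o2)

o1 = [{'?x': 'foobar.xx/blog.rdf', '?y': 'id1'}]
print(left_outer_join(o1, [{'?y': 'id1', '?v': 'foaf:Agent'}]))
print(left_outer_join(o1, []))           # o1 survives unextended
\end{verbatim}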
As an example of this optionality, in the pattern $Q_1$ from Section \ref{sec:introduction}, the triple pattern $(?y, \textit{rdf:type}, ?v)$ is truly optional. \subsection{OPT Normal Form} A UNION-free pattern $P$ is in \emph{OPT normal form} \cite{P2009Semantics} if $P$ meets one of the following two conditions: \begin{compactitem} \item $P$ is constructed by using only the $\mathbin{\mathrm{AND}}$ and $\mathbin{\mathrm{FILTER}}$ operators; \item $P = (P_1\ \mathbin{\mathrm{OPT}}\ P_2)$ where the patterns $P_1$ and $P_2$ are in OPT normal form. \end{compactitem} For instance, the pattern $Q$ mentioned in Section \ref{sec:introduction} is in OPT normal form, whereas the pattern $(((?x, \textit{p}, ?y)\ \mathbin{\mathrm{OPT}}\ (?x, \textit{q}, ?z))\mathbin{\mathrm{AND}} (?x, \textit{r}, ?z))$ is not. \section{Well-Designed And-Opt Tree} \label{sec:wdao} In this section, we propose the notion of the well-designed and-opt tree (WDAO-tree); any WDAO-pattern can be represented as a WDAO-tree. \subsection{WDAO-tree Structure} \begin{definition}[WDAO-tree]\label{well-designed tree} Let $P$ be a well-designed pattern in OPT normal form. A WDAO-tree $T$ based on $P$ is a restructured parse tree, defined as follows: \begin{compactitem} \item All inner nodes in $T$ are labeled by the $\mathbin{\mathrm{OPT}}$ operator and all leaves are labeled by BGPs. \item For each subpattern $(P_1\ \mathbin{\mathrm{OPT}} \ P_2)$ of $P$, the WDAO-tree $T_1$ of $P_1$ and the WDAO-tree $T_2$ of $P_2$ have the same parent node. \end{compactitem} \end{definition} For instance, consider the WDAO-pattern $P$\footnote{We attach a subscript to each OPT operator in order to distinguish its occurrences.} \begin{multline*} P=(((p_1 \ \mathbin{\mathrm{AND}} \ p_3) \ \mathbin{\mathrm{OPT}}_2 \ p_2) \ \mathbin{\mathrm{OPT}}_1 \\ ((p_4 \ \mathbin{\mathrm{OPT}}_4 \ p_5) \ \mathbin{\mathrm{OPT}}_3 (p_6 \ \mathbin{\mathrm{OPT}}_5 \ p_7))). \end{multline*} The WDAO-tree $T$ is shown in Figure \ref{fig:wd}. As this example shows, the BGP $(p_1 \mathbin{\mathrm{AND}} p_3)$ is the exact-matching (non-optional) part of $P$; in the WDAO-tree it is the leftmost leaf of $T$. In general, the leftmost leaf of a WDAO-tree corresponds to the exact-matching part of a well-designed SPARQL query pattern. \begin{figure}[h]\centering \begin{tikzpicture}[scale=0.7][nodes={draw}, -] \node{$\mathbin{\mathrm{OPT}}_1$} child { node {$\mathbin{\mathrm{OPT}}_2$} child { node {$p_1 \mathbin{\mathrm{AND}} p_3$}} child { node {$p_2$} } } child [missing] child [missing] child { node {$\mathbin{\mathrm{OPT}}_3$} child { node {$\mathbin{\mathrm{OPT}}_4$} child { node {$p_4$}} child { node {$p_5$}} } child [missing] child { node {$\mathbin{\mathrm{OPT}}_5$} child { node {$p_6$}} child { node {$p_7$}} } }; \end{tikzpicture} \caption{WDAO-tree\label{fig:wd}} \end{figure} \subsection{Rewriting Rules over WDAO-trees} As described in Definition \ref{well-designed tree}, a WDAO-tree does not contain any OPT operator in its leaves. In this sense, a pattern of the form of $Q_1$ in Section \ref{sec:introduction} cannot be directly transformed into a WDAO-tree, since it is not in OPT normal form. \begin{proposition}\cite[Theorem 4.11]{P2009Semantics}\label{prop:ONF} For every UNION-free well-designed pattern $P$, there exists a pattern $Q$ in OPT normal form such that $P$ and $Q$ are equivalent.
\end{proposition} The proof of Proposition \ref{prop:ONF} relies on three rewriting rules based on the following equivalences, where $P, Q, R$ are patterns and $C$ is a constraint: \begin{compactitem} \item $(P \mathbin{\mathrm{OPT}} R) \mathbin{\mathrm{AND}} Q \equiv (P \mathbin{\mathrm{AND}} Q) \mathbin{\mathrm{OPT}} R$; \item $P \mathbin{\mathrm{AND}} (Q \mathbin{\mathrm{OPT}} R) \equiv (P \mathbin{\mathrm{AND}} Q) \mathbin{\mathrm{OPT}} R$; \item $(P \mathbin{\mathrm{OPT}} R) \mathbin{\mathrm{FILTER}} C \equiv (P \mathbin{\mathrm{FILTER}} C) \mathbin{\mathrm{OPT}} R$. \end{compactitem} Intuitively, these rules state that in a well-designed pattern the AND operator can be pushed downwards and the OPT operator pulled upwards while preserving the semantics. The three rules can be applied directly to trees: for each grammar tree $T$ there exists a tree $T'$ obtained from $T$ by applying the rewriting rules. Figure \ref{rewrite-1} and Figure \ref{rewrite-2} show how the rewriting rules transform the grammar tree, from which the WDAO-tree is finally obtained. Clearly, the WDAO-tree has a smaller height than the grammar tree. \begin{figure}[h] \center \subfigure{ \begin{tikzpicture}[scale=0.6][nodes={draw}, -] \node{$\mathbin{\mathrm{AND}}$} child { node {$\mathbin{\mathrm{OPT}}$} child { node {$P$}} child { node {$R$} } } child { node {$Q$} }; \end{tikzpicture}} $\quad \quad \Leftrightarrow \quad \quad$ \subfigure{ \begin{tikzpicture}[scale=0.6][nodes={draw}, -] \node{$\mathbin{\mathrm{OPT}}$} child { node {$\mathbin{\mathrm{AND}}$} child { node {$P$}} child { node {$Q$} } } child { node {$R$} }; \end{tikzpicture}} $\quad \quad \Leftrightarrow \quad \quad$ \subfigure{ \begin{tikzpicture}[scale=0.8][nodes={draw}, -] \node{$\mathbin{\mathrm{OPT}}$} child { node {$P \mathbin{\mathrm{AND}} Q$} } child { node {$R$} }; \end{tikzpicture}} \caption{Rewriting rule 1 \label{rewrite-1}} \end{figure} \begin{figure}[h] \center \subfigure{ \begin{tikzpicture}[scale=0.6][nodes={draw}, -] \node{$\mathbin{\mathrm{AND}}$} child { node {$P$} } child { node {$\mathbin{\mathrm{OPT}}$} child { node {$Q$}} child { node {$R$} } }; \end{tikzpicture}} $\quad \quad \Leftrightarrow \quad \quad$ \subfigure{ \begin{tikzpicture}[scale=0.6][nodes={draw}, -] \node{$\mathbin{\mathrm{OPT}}$} child { node {$\mathbin{\mathrm{AND}}$} child { node {$P$}} child { node {$Q$} } } child { node {$R$} }; \end{tikzpicture}} $\quad \quad \Leftrightarrow \quad \quad$ \subfigure{ \begin{tikzpicture}[scale=0.8][nodes={draw}, -] \node{$\mathbin{\mathrm{OPT}}$} child { node {$P \mathbin{\mathrm{AND}} Q$} } child { node {$R$} }; \end{tikzpicture}} \caption{Rewriting rule 2 \label{rewrite-2}} \end{figure} \subsection{WDAO-tree Construction}\label{ctree} Before constructing a WDAO-tree, we first recognize the query patterns and their attachments. Then we rewrite the query patterns by the rewriting rules, which yields a new pattern. Based on this new pattern, we construct the WDAO-tree according to Definition \ref{well-designed tree}. In the construction process, we first build the grammar tree of the SPARQL pattern, whose inner nodes are either $\mathbin{\mathrm{AND}}$ or $\mathbin{\mathrm{OPT}}$ operators. This is done by recursively putting the left and right subpatterns of each operator into the left and right child nodes, respectively, until a pattern contains no operator. Then we apply the rewriting rules of Algorithm~\ref{rewriterules} to the grammar tree to build the rewriting tree, whose leaves are single triple patterns.
Different rewriting rules are applied depending on whether the $\mathbin{\mathrm{OPT}}$ operator is the left or the right child of the $\mathbin{\mathrm{AND}}$ operator. Since the inner nodes of a WDAO-tree contain only $\mathbin{\mathrm{OPT}}$ operators, after obtaining the rewriting tree we merge each $\mathbin{\mathrm{AND}}$ operator whose children are all leaves, together with those children, into a single new node; this yields the WDAO-tree. The WDAO-tree construction can be executed in PTIME: given a pattern containing $n$ $\mathbin{\mathrm{AND}}$s and $m$ $\mathbin{\mathrm{OPT}}$s, the constructions of the grammar tree and of the rewriting tree take $O(n+m)$ and $O(nm)$ time, respectively, and merging the nodes whose parent is an $\mathbin{\mathrm{AND}}$ takes $O(n)$ time. \begin{center} \begin{algorithm}[h] \caption{Rewriting rules\label{rewriterules}} \vspace{.1cm} \begin{algorithmic}[1] \Require Grammar tree with root $Root$; \Ensure Rewriting tree with root $Root$; \While{some $\mathbin{\mathrm{AND}}$ node in the tree has an $\mathbin{\mathrm{OPT}}$ child} \State $Root \gets$ \Call{ReWriteRules}{$Root$}; \EndWhile \State \Return $Root$; \Statex \Procedure{ReWriteRules}{$node$} \If{$node$ is a leaf} \State \Return $node$; \EndIf \If{$node$ is $\mathbin{\mathrm{AND}}$} \If{$node.left$ is $\mathbin{\mathrm{OPT}}$} \Comment{$(P \mathbin{\mathrm{OPT}} R) \mathbin{\mathrm{AND}} Q \Rightarrow (P \mathbin{\mathrm{AND}} Q) \mathbin{\mathrm{OPT}} R$} \State $opt \gets node.left$; \State $node.left \gets opt.left$; \State $opt.left \gets node$; \State $node \gets opt$; \ElsIf{$node.right$ is $\mathbin{\mathrm{OPT}}$} \Comment{$P \mathbin{\mathrm{AND}} (Q \mathbin{\mathrm{OPT}} R) \Rightarrow (P \mathbin{\mathrm{AND}} Q) \mathbin{\mathrm{OPT}} R$} \State $opt \gets node.right$; \State $node.right \gets opt.left$; \State $opt.left \gets node$; \State $node \gets opt$; \EndIf \EndIf \State $node.left \gets$ \Call{ReWriteRules}{$node.left$}; \State $node.right \gets$ \Call{ReWriteRules}{$node.right$}; \State \Return $node$; \EndProcedure \end{algorithmic} \end{algorithm} \end{center} \section{PIWD Demonstration} \label{sec:piwd} In this section, we introduce PIWD, a plugin-based framework for well-designed SPARQL. \subsection{PIWD Overview} PIWD is written in Java in a 2-tier design shown in Figure \ref{fig:piwd}. The bottom layer consists of an arbitrary BGP query framework, which is used as a black box for evaluating BGPs. On top of it, the second layer provides the rewriting process and the left-outer join evaluation, which together produce the solutions. \begin{figure}[h] \centering \includegraphics[scale=0.6]{architecture.pdf} \caption{PIWD architecture \label{fig:piwd}} \end{figure} The BGP query framework (e.g., gStore or RDF-3X) supports both querying and RDF data management, and solves the problem of subgraph isomorphism. PIWD provides the left-outer join between the BGPs. That is, the problem of answering well-designed SPARQL is transformed into the problems of subgraph isomorphism and of left-outer joins between triple patterns (the latter operation is sketched below). \subsection{Answering Queries over PIWD} The query process over PIWD can be described as follows. First, the WDAO-tree is built by applying the rewriting rules to the grammar tree. Second, a post-order traversal is applied to the WDAO-tree, with the following rule: if the node is a leaf (and thus carries no OPT operator), the BGP query framework is invoked on it and the returned solutions are pushed onto a stack; if the node is an inner node labeled by the OPT operator, the top two elements of the stack are popped and left-outer joined, and the result is pushed back. This process is repeated until all nodes of the WDAO-tree have been visited. Finally, the single element remaining on the stack is the final solution set.
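The left-outer join that PIWD adds on top of the black-box BGP engine follows directly from the pattern semantics given earlier. The following minimal Python sketch (ours; PIWD itself is implemented in Java) illustrates the operation over sets of mappings represented as dicts:
\begin{verbatim}
def compatible(mu1, mu2):
    # mu1 ~ mu2: the mappings agree on all shared variables
    return all(mu1[v] == mu2[v] for v in mu1.keys() & mu2.keys())

def join(om1, om2):
    # Omega1 JOIN Omega2: unions of all compatible pairs
    return [{**m1, **m2} for m1 in om1 for m2 in om2
            if compatible(m1, m2)]

def diff(om1, om2):
    # Omega1 \ Omega2: mappings compatible with nothing in Omega2
    return [m1 for m1 in om1
            if not any(compatible(m1, m2) for m2 in om2)]

def left_outer_join(om1, om2):
    # Omega1 OPT Omega2 = (Omega1 JOIN Omega2) u (Omega1 \ Omega2)
    return join(om1, om2) + diff(om1, om2)

# e.g. left_outer_join([{"?x": "a"}],
#                      [{"?x": "a", "?y": "b"}, {"?x": "c"}])
# yields [{"?x": "a", "?y": "b"}]
\end{verbatim}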
In the query processing, the BGP query framework serves as a query engine that answers the queries issued from the leaves of WDAO-trees. OPT operators play an essential role in this process: they provide users with optional solutions, which enriches the semantics of the answers, but they can also lead to explosive growth of the solution scale. The query process is described in Algorithm~\ref{queryprocess}. \begin{center} \begin{algorithm}[h] \caption{Query Processing over PIWD \label{queryprocess}} \begin{algorithmic}[1] \Require WDAO-tree with root $Root$; prefix $prefix$; stack $Stack$ to store intermediate results; \Ensure Query result list $list$; \Procedure{TraverseTree}{$Root$} \If{$Root$ is not null} \State \Call{TraverseTree}{$Root.Lnode$}; \State \Call{TraverseTree}{$Root.Rnode$}; \If{$Root$ is not labeled by $\mathbin{\mathrm{OPT}}$} \Comment{leaf: a BGP} \State $subquery \gets$ AssembleQuery($prefix$, $Root$); \State $subresult \gets$ EvaluateBGP($subquery$); \Comment{black-box BGP engine} \State Push($Stack$, $subresult$); \Else \State $r \gets$ Pop($Stack$); \State $l \gets$ Pop($Stack$); \State $result \gets$ LeftOuterJoin($l$, $r$); \State Push($Stack$, $result$); \EndIf \EndIf \EndProcedure \State \Call{TraverseTree}{$Root$}; \State $list \gets$ ConvertToList($Stack$);\\ \Return $list$; \end{algorithmic} \end{algorithm} \end{center} \section{Experiments and Evaluations} \label{sec:experiment} This section presents our experiments, whose purpose is to evaluate the performance of PIWD on different WDAO-patterns. \subsection{Experiments} \paragraph{Implementations and running environment} All experiments were carried out on a Linux machine with one four-core 2.40GHz CPU, 32GB memory and 500GB disk storage. All of the algorithms were implemented in Java. gStore\cite{Zou2011gStore,Zou2014gStore} and RDF-3X\cite{Neumann2010The} are used as the underlying query engines to handle BGPs. In our experiments, no optimization is applied to the OPT operation. \paragraph{gStore and RDF-3X} Both gStore and RDF-3X are SPARQL query engines based on subgraph matching. gStore stores RDF data in disk-based adjacency lists of the format \textit{[vID,vLabel,adjList]}, where \textit{vID} is the vertex ID, \textit{vLabel} is the corresponding URI, and \textit{adjList} is the list of its outgoing edges and the corresponding neighbor vertices. gStore converts an RDF graph into a data signature graph by encoding each entity and class vertex. Different hash functions, such as the BKDR and AP hash functions, are employed to generate signatures, which compose a novel index (called VS$^\ast$-tree). A filtering rule and efficient search algorithms are developed for subgraph queries over the data signature graph in order to speed up query processing. gStore can answer exact SPARQL queries and queries with wildcards in a uniform manner. RDF-3X is a RISC-style architecture for executing SPARQL queries over large repositories of RDF triples. Its physical design is workload-independent, creating appropriate indexes over a single giant triples table, and its query processor is RISC-style, relying mostly on merge joins over sorted index lists. Both gStore and RDF-3X perform well on BGPs, since their query methods are based on subgraph matching.
\paragraph{Dataset} We used LUBM\footnote{http://swat.cse.lehigh.edu/projects/lubm/} as the dataset in our experiments to investigate the relationship between query response time and dataset scale. LUBM, which features an ontology for the university domain, is a standard benchmark for evaluating the performance of Semantic Web repositories. In our experiments, we used LUBM1, LUBM50, LUBM100, LUBM150 and LUBM200 as our query datasets; their details are shown in Table \ref{lubm}. \begin{table} \centering \caption{LUBM Dataset Details\label{lubm}} \begin{tabular}{c|r|r} \hline Dataset &Number of triples &RDF NT file size (bytes)\\ \hline LUBM1 &103,104 &14,497,954\\ LUBM50 &6,890,640 &979,093,554\\ LUBM100 &13,879,971 &1,974,277,612\\ LUBM150 &20,659,276 &2,949,441,119\\ LUBM200 &27,643,644 &3,954,351,227\\ \hline \end{tabular} \end{table} \paragraph{SPARQL queries} The queries over LUBM were designed in four different forms, corresponding to different WDAO-trees; the details are described in Table \ref{queries}. Clearly, the OPT nesting in $Q_2$ is the most complex among the four forms. Furthermore, each query contains $\mathbin{\mathrm{AND}}$ operators. \begin{table} \centering \caption{Details of the SPARQL queries\label{queries}} \begin{tabular}{c|c|c} \hline Query ID &Pattern &\# OPTs\\ \hline $Q_1$ &$(P_1 \ \mathbin{\mathrm{AND}} \ P_2 \ \mathbin{\mathrm{AND}} P_3)\ \mathbin{\mathrm{OPT}} \ P_4$ &1\\ $Q_2$ &$((P_1 \ \mathbin{\mathrm{AND}} \ P_2 \ \mathbin{\mathrm{AND}} P_3)\ \mathbin{\mathrm{OPT}} \ P_4) \ \mathbin{\mathrm{OPT}} \ (P_5 \ \mathbin{\mathrm{OPT}} \ P_6)$ &3\\ $Q_3$ &$((P_1 \ \mathbin{\mathrm{AND}} \ P_2 \ \mathbin{\mathrm{AND}} P_3)\ \mathbin{\mathrm{OPT}} \ P_4) \ \mathbin{\mathrm{OPT}} \ P_5$ &2\\ $Q_4$ &$P_1 \ \mathbin{\mathrm{OPT}} \ ((P_2 \mathbin{\mathrm{AND}} P_3 \mathbin{\mathrm{AND}} P_4) \ \mathbin{\mathrm{OPT}} \ P_5)$&2\\ \hline \end{tabular} \end{table} \subsection{Evaluation on PIWD} The query response times are shown in Table \ref{tab:time-1}, Table \ref{tab:time-2} and Figure \ref{fig:time}. Query response time increases as the OPT nesting becomes more complex, and it also grows significantly with the dataset scale. For instance, consider $Q_2$, the most complex pattern among our four experimental SPARQL patterns: when the dataset grows from LUBM100 to LUBM200, its query response time more than triples even though the dataset scale only doubles. In this sense, the OPT nesting complexity of WDAO-patterns strongly influences the query response time, especially at large dataset scales.
\begin{table} \centering \caption{Query Response Time[ms] on gStore\label{tab:time-1}} \begin{tabular}{c|r|r|r|r|r} \hline &LUBM1 &LUBM50 &LUBM100 &LUBM150 &LUBM200\\ \hline $Q_1$ &1,101 &617,642 &1,329,365 &2,126,383 &2,978,237 \\ $Q_2$ &1,870 &1,010,965 &2,901,295 &6,623,806 &10,041,836 \\ $Q_3$ &1,478 &637,128 &1,359,315 &2,191,356 &3,068,692 \\ $Q_4$ &1,242 &644,155 &1,456,232 &2,151,811 &3,129,246\\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Query Response Time[ms] on RDF-3X\label{tab:time-2}} \begin{tabular}{c|r|r|r|r|r} \hline &LUBM1 &LUBM50 &LUBM100 &LUBM150 &LUBM200\\ \hline $Q_1$ &1,231 &625,703 &1,401,782 &2,683,461 &3,496,156 \\ $Q_2$ &1,900 &1,245,241 &2,983,394 &7,286,812 &10,852,761 \\ $Q_3$ &1,499 &640,392 &1,427,392 &2,703,981 &3,672,970 \\ $Q_4$ &1,316 &648,825 &1,531,547 &2,791,152 &3,714,042\\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \centering \subfigure[Performance on gStore] { \begin{tikzpicture}[scale=0.70] \begin{axis}[ title={}, xlabel={Dataset scale}, ylabel={Time[ms]}, symbolic x coords={LUBM1,LUBM50,LUBM100,LUBM150,LUBM200}, legend pos=north west, ymajorgrids=true, grid style=dashed, ] \addplot[ color=red, mark=square*, mark options={fill=black} ] coordinates { (LUBM1,1101)(LUBM50,617642)(LUBM100,1329365 )(LUBM150,2126383)(LUBM200,2978237) }; \addplot[ color=blue, mark=square*, mark options={fill=red} ] coordinates { (LUBM1,1870)(LUBM50,1010965)(LUBM100,2901295)(LUBM150,6623806)(LUBM200,10041836) }; \addplot[ color=black, mark=triangle*, mark options={fill=white} ] coordinates { (LUBM1,1478)(LUBM50,637128)(LUBM100,1359315)(LUBM150,2191356)(LUBM200,3068692) }; \addplot[ color=orange, mark=*, mark options={fill=blue} ] coordinates { (LUBM1,1242)(LUBM50,644155)(LUBM100,1456232)(LUBM150,2151811)(LUBM200,3129246) }; \legend{$Q_1$,$Q_2$,$Q_3$,$Q_4$} \end{axis} \end{tikzpicture}}\subfigure[Performance on RDF-3X] { \begin{tikzpicture}[scale=0.70] \begin{axis}[ title={}, xlabel={Dataset scale}, ylabel={Time[ms]}, symbolic x coords={LUBM1,LUBM50,LUBM100,LUBM150,LUBM200}, legend pos=north west, ymajorgrids=true, grid style=dashed, ] \addplot[ color=red, mark=square*, mark options={fill=black} ] coordinates { (LUBM1,1231)(LUBM50,625703)(LUBM100,1401782)(LUBM150,2683461)(LUBM200,3496156) }; \addplot[ color=blue, mark=square*, mark options={fill=red} ] coordinates { (LUBM1,1900)(LUBM50,1245241)(LUBM100,2983394)(LUBM150,7286812)(LUBM200,10852761) }; \addplot[ color=black, mark=triangle*, mark options={fill=white} ] coordinates { (LUBM1,1499)(LUBM50,640392)(LUBM100,1427392)(LUBM150,2703981)(LUBM200,3672970) }; \addplot[ color=orange, mark=*, mark options={fill=blue} ] coordinates { (LUBM1,1316)(LUBM50,648825)(LUBM100,1531547)(LUBM150,2791152)(LUBM200,3714042) }; \legend{$Q_1$,$Q_2$,$Q_3$,$Q_4$} \end{axis} \end{tikzpicture} } \caption{Query Response Time over LUBM\label{fig:time}} \end{figure} \section{Related works}\label{sec:related} In this section, we survey related works in the following three areas: BGP query evaluation algorithms, well-designed SPARQL and BGP query evaluation frameworks. BGP query algorithms have been developed for many years. Existing algorithms mainly focus on finding all embedding in a single large graph, such as ULLmann\cite{Ullmann1976An}, VF2\cite{Luigi2004A}, QUICKSI\cite{Shang2008Taming}, GraphQL\cite{He2010Query}, SPath\cite{Zhao2010On}, STW\cite{Hongzhi2012Efficient} and TurboIso\cite{Han2013Turbo}. 
Some optimization methods have been adopted in these techniques, such as adjusting the matching order and pruning candidate vertices. However, the evaluation of well-designed SPARQL is not equivalent to the BGP query evaluation problem, since it involves inexact matching. It has been shown that the evaluation problem for the well-designed fragment is coNP-complete\cite{P2009Semantics}. Quasi well-designed pattern trees (QWDPTs), which are undirected and ordered, have been proposed in \cite{Letelier2012Static}; this work aims at the analysis of containment and equivalence of well-designed patterns. Efficient evaluation and semantic optimization of WDPTs have been proposed in \cite{barcelo2015efficient}. Sparm, a tool for SPARQL analysis and manipulation, is presented in \cite{letelier2012spam}. All of the above works aim at the static analysis of well-designed patterns or at complexity results, without actually evaluating well-designed patterns. Our WDAO-tree differs from QWDPTs in structure, and it emphasizes reconstructing query plans. An optimization of the OPT operation has been proposed in \cite{Atre2015Left}; it differs from our work, which provides a plugin for any BGP query engine to deal with WDAO-patterns in SPARQL queries. RDF-3X\cite{Neumann2010The}, TripleBit\cite{Yuan2013TripleBit}, SW-Store\cite{abadi2009sw}, Hexastore\cite{Weiss2008Hexastore} and gStore\cite{Zou2011gStore,Zou2014gStore} show high performance on BGPs. RDF-3X creates indexes in the form of B$^{+}$ trees, while TripleBit uses ID-Chunk indexes. All of them perform efficiently because they concentrate on the design of indexing and storage. However, they can only support exact SPARQL queries, since they replace all literals (in RDF triples) by ids using a mapping dictionary; in other words, they cannot support WDAO-patterns well. Virtuoso\cite{Erling2009RDF} and MonetDB\cite{Boncz2005MonetDB} provide both open-source and commercial services. Jena\cite{Carroll2004Jena} and Sesame\cite{Broekstra2002Sesame} are free open-source Java frameworks for building Semantic Web and Linked Data applications; they focus on SPARQL parsing without supporting large-scale data. Our work is independent of these BGP query frameworks, and any BGP query engine is adaptable to our plugin. \section{Conclusion}\label{sec:con} In this paper, we have presented PIWD, a plugin adaptable to any BGP query framework to handle WDAO-patterns. Conceptually, PIWD rebuilds the query evaluation plan based on WDAO-trees. After employing the BGP query framework on the leaves of WDAO-trees, PIWD performs the left-outer join operation between triple patterns. Our experiments show that PIWD can deal with complex and multi-level nested WDAO-patterns. In the future, we will further handle non-well-designed patterns and support more operations such as UNION. Besides, we will consider optimizing the OPT operation to improve the efficiency of PIWD, and we will implement our framework on distributed {RDF} graphs by applying the distributed gStore \cite{DBLP:journals/vldb/PengZO0Z16}. \section*{Acknowledgments} This work is supported by the programs of the National Key Research and Development Program of China (2016YFB1000603), the National Natural Science Foundation of China (NSFC) (61502336), and the open funding project of the Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education (K93-9-2016-05).
Xiaowang Zhang is supported by the Tianjin Thousand Young Talents Program and by a project sponsored by the School of Computer Science and Technology at Tianjin University.
\section{Mathematical definition of the decay cascade model~\cite{kaestner2010a}} \nocite{kaestner2010a} The probability distribution $P_n(t)$ is governed by the equation \begin{align} \label{eq:mastereqs} \frac{d P_n(t)}{d t} & = -\Gamma_n(t) P_n(t) + \Gamma_{n+1}(t) P_{n+1}(t) \, \tag{KK-1} \end{align} for $n \ge 0$ and with $\Gamma_0 \equiv 0$. Normalization and initial conditions are \begin{align} & \sum_{n=0}^{N} P_n(t) =1 \, ,\\ & P_n(t_0) = \begin{cases} 1 , & n=N \\ 0, & n \not = N \end{cases} \, . \label{KK-2} \tag{KK-2} \end{align} Equation \eqref{eq:mastereqs} is a general kinetic equation for a birth-death Markov process \cite{Gardiner} for time- and population-size-dependent rates. Here we will be interested in the asymptotic values of $P_n$ as the transition rates $\Gamma_n$ gradually decrease to zero as a function of time. The original publication \cite{kaestner2010a} discussing solutions to the above model in the context of dynamic quantum dot initialization is denoted KK; equations marked here as KK-1, KK-2 etc.\ match the equations in KK with the corresponding numbers. \section{Implicit exact solution} In KK, the following exact iterative solution, valid for $t>t_0$, is presented \begin{align} \label{eq:ingralform} \tag{KK-3} P_n(t) & = \int_{t_0}^{t} \! e^{ - \int_{t'}^t \Gamma_{n}(\tau) \, d \tau } \Gamma_{n+1}(t') \, P_{n+1}(t') \, dt' \, , \\ P_{N+1}(t) & = \delta(t-t_0)/ \Gamma_{N+1}(t_0) \, . \label{eq:formal} \end{align} Condition \eqref{eq:formal} is introduced formally, in order for the general formula \eqref{eq:ingralform} to accommodate the initial condition \eqref{KK-2}; the delta functions are regularised as $\int_{t_0}^{t} \delta (t'-t_0)\, dt'=1$ for $t>t_0$. Equation \eqref{eq:ingralform} is just the standard solution of a single linear first-order differential equation \eqref{eq:mastereqs} for an unknown function $P_n(t)$ with $P_{n+1}(t)$ treated as known. One can verify that \eqref{eq:ingralform} solves \eqref{eq:mastereqs} by direct substitution: \begin{align} \frac{d P_n(t)}{d t} & = \nonumber \underbrace{e^{ - \int_{t}^t \Gamma_{n}(\tau) \, d \tau }}_{=1} \Gamma_{n+1}(t) \, P_{n+1}(t) + \int_{t_0}^{t} \! \frac{d}{d t} \left [ e^{ - \int_{t'}^t \Gamma_{n}(\tau) \, d \tau } \right ]\Gamma_{n+1}(t') \, P_{n+1}(t') \, dt' \\ \nonumber & = \Gamma_{n+1}(t) \, P_{n+1}(t) + \int_{t_0}^{t} \! e^{ - \int_{t'}^t \Gamma_{n}(\tau) \, d \tau } \underbrace{\frac{d}{d t} \left [- \int_{t'}^t \Gamma_{n}(\tau) \, d \tau \right ]}_{=-\Gamma_{n}(t)} \Gamma_{n+1}(t') \, P_{n+1}(t') \, dt' \\ \nonumber & = \Gamma_{n+1}(t) \, P_{n+1}(t) -\Gamma_{n}(t) P_n(t) \, . \end{align} \section{Explicit solution for time-independent rate ratio $\Gamma_n(t)/\Gamma_{n-1}(t)=\text{const}$} The first solution described in KK corresponds to the case of \begin{align} \label{eq:constantratio} \Gamma_n(t) = \frac{X_n}{X_1} \Gamma_1(t) \, , \end{align} with $X_n \equiv \exp \sum_{k=1}^{n} \delta_k$ being time-independent constants. For the rates obeying the condition \eqref{eq:constantratio}, the general solution can be constructed in the following form~\footnote{This form was inspired by studying explicit solutions for $\Gamma_n(t) \sim e^{-t}$ and $N=1,2,3$ by means of the computer algebra system \texttt{Mathematica}.} \begin{align} P_n(t) & = \sum_{k=n}^{N} R_{nk} \, e^{-\int_{t_0}^{t} \Gamma_k(t') dt'} \, , \label{eq:fullgeneral} \end{align} with constant coefficients $R_{nk}$ that need to be determined.
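Before determining the coefficients, it is convenient to set up a direct numerical check of \eqref{eq:mastereqs}. The following Python sketch (ours; the exponentially decaying rates $\Gamma_n(t) = g_n e^{-t}$ are only an illustrative choice, consistent with \eqref{eq:constantratio}) integrates the cascade for $N=3$:
\begin{verbatim}
# Numerical integration of the kinetic equation (KK-1)
import numpy as np
from scipy.integrate import solve_ivp

N = 3                                  # initial population size
g = np.array([0.0, 1.0, 5.0, 25.0])    # g_0 = 0 enforces Gamma_0 = 0

def rhs(t, P):
    # dP_n/dt = -Gamma_n P_n + Gamma_{n+1} P_{n+1}, Gamma_{N+1} = 0
    Gamma = g * np.exp(-t)
    dP = -Gamma * P
    dP[:-1] += Gamma[1:] * P[1:]
    return dP

P0 = np.zeros(N + 1); P0[N] = 1.0      # initial condition (KK-2)
sol = solve_ivp(rhs, (0.0, 40.0), P0, rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])                    # asymptotic distribution P_n
\end{verbatim}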
\subsection{Derivation of $R_{nk}$} For $n=N$, the initial conditions $P_N(t_0)=1$, $P_{N+1}(t_0)=0$ and equation \eqref{eq:fullgeneral} for $P_N(t)$ give \begin{align} \label{eq:RNN} P_N(t) & = e^{-\int_{t_0}^{t} \Gamma_N(t') dt'} \Longrightarrow R_{NN}=1 \, . \end{align} The initial condition $P_n(t_0)=0$ for $n<N$, applied to \eqref{eq:fullgeneral}, implies: \begin{align} \sum_{k=n}^{N} R_{nk} = 0 \, , \quad n < N . \label{eq:total} \end{align} For $n<N$, substituting \eqref{eq:fullgeneral} into the differential equation \eqref{eq:mastereqs} gives \begin{align} \frac{d P_n(t)}{d t} & = - \sum_{k=n}^{N} R_{nk} \, \Gamma_k(t) \, e^{-\int_{t_0}^{t} \Gamma_k(t') dt'} \label{eq:lhs} \\ -\Gamma_n(t) P_n(t) + \Gamma_{n+1}(t) P_{n+1}(t) & = -\Gamma_n(t) \sum_{k=n}^{N} R_{nk} \, e^{-\int_{t_0}^{t} \Gamma_k(t') dt'} + \Gamma_{n+1}(t) \sum_{k=n+1}^{N} R_{n+1,k} \, e^{-\int_{t_0}^{t} \Gamma_{k}(t') dt'} \label{eq:rhs} \end{align} Now we invoke the condition \eqref{eq:constantratio}, which allows us to equate the coefficients of $e^{-\int_{t_0}^{t} \Gamma_k(t') dt'}$ between \eqref{eq:lhs} and \eqref{eq:rhs}: \begin{align} \label{eq:reucrrence} X_k R_{nk} & = X_n R_{nk} - X_{n+1} R_{n+1,k} \, , \quad {n< k \leq N } \\ R_{n-1,k} & =\frac{X_{n}}{X_{n-1} - X_k} R_{n,k} , \quad {n \leq k \leq N } \tag{\ref{eq:reucrrence}'} \end{align} Equations \eqref{eq:total} and \eqref{eq:reucrrence} give the sought-after recurrence relations: \begin{align} R_{nk} & = R_{kk} \prod_{m=n+1}^{k} \frac{X_{m}}{X_{m-1} - X_k} \, , \quad {n< k \leq N } \label{eq:DC-5} \tag{KK-5'} \\ R_{kk} & = - \sum_{m=k+1}^N R_{km} \, . \quad {k < N } \label{eq:DC-6} \tag{KK-6'} \end{align} Equations \eqref{eq:DC-5} and \eqref{eq:DC-6} together with \eqref{eq:RNN} are equivalent to Eqs.~(5) and (6) of KK with $C_k \equiv R_{kk}$ and $Q_{kn} \equiv R_{kn}/R_{kk}$. \subsection{Explicit solution of recurrence relations} The recurrence relations \eqref{eq:DC-5} and \eqref{eq:DC-6} admit the following explicit solution: \begin{align} \label{eq:expsol} R_{nk} = \prod\limits_{m=n+1}^{N} X_m {\prod\limits_{\substack{m=n\\m\not=k}}^{N}} \frac{1}{X_m-X_k} \, . \end{align} The solution \eqref{eq:expsol} can be obtained for finite $n$ and $k$ by means of computer algebra, and proven in general form by induction. Since integration over time preserves the condition \eqref{eq:constantratio}, we can choose \begin{align} X_n = \int_{t_0}^{t} \Gamma_n(t')\, dt' \end{align} and write down the solution explicitly: \begin{align} \label{eq:explicit} P_n(t) & = \sum_{k=n}^{N} e^{-X_k} \prod\limits_{m=n+1}^{N} X_m {\prod\limits_{\substack{m=n\\m\not=k}}^{N}} \frac{1}{X_m-X_k} \, . \end{align} The solution \eqref{eq:explicit} agrees precisely with the solution for $N=3$ and $\Gamma_n = \text{const}$ obtained by Miyamoto~\emph{et al.}~\cite{miyamoto2008}.
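As a quick numerical sanity check (ours, not part of KK), the closed form \eqref{eq:explicit} can be evaluated directly and compared with the output of the integration sketch above; note that $X_0=0$ since $\Gamma_0 \equiv 0$:
\begin{verbatim}
import numpy as np

def P_explicit(X, n):
    # P_n from eq. (explicit); X = [X_0, ..., X_N], with X[0] = 0
    N = len(X) - 1
    total = 0.0
    for k in range(n, N + 1):
        term = np.exp(-X[k])
        term *= np.prod([X[m] for m in range(n + 1, N + 1)])
        term *= np.prod([1.0 / (X[m] - X[k])
                         for m in range(n, N + 1) if m != k])
        total += term
    return total

# For Gamma_n(t) = g_n exp(-t): X_n(t) = g_n (1 - exp(-t)) -> g_n
g = [0.0, 1.0, 5.0, 25.0]
print([P_explicit(g, n) for n in range(4)])  # asymptotic P_0..P_3
\end{verbatim}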
\section{Solution in the limit of time-scale separation between cascade steps} A more general solution that does not rely on condition \eqref{eq:constantratio} is derived in KK in the limit of decay time-scale separation between consecutive steps of the cascade. This is motivated as follows: $P_{n+1}(t)$ stops changing appreciably over a time scale on the order of $\Gamma_{n+1}^{-1}(t)$, during which $P_{n}(t)$ changes only due to the probability flux from state $(n+1)$, with negligible decay down the cascade to state $(n-1)$. This condition corresponds to $\Gamma_{n+1}(t) \gg \Gamma_n(t)$ during the relevant time interval. The mathematical part of the derivation proceeds as follows: \begin{enumerate} \item Summing equations \eqref{eq:mastereqs} for all $dP_m/dt$ with $N \ge m>n$ gives \begin{equation} \Gamma_{n+1}(t) P_{n+1}(t) = -\frac{d}{dt} \sum_{m>n}\!P_{m}(t). \end{equation} \item On the ``slow'' time-scale controlled by $\Gamma_n(t)$, the function $\sum_{m>n}\!P_{m}(t)$ is changing rapidly from $1$ to its asymptotic value $\sum_{m>n}\!P_{m}(t\to \infty)$ and hence can be approximated by a step function in time. ($t \to \infty$ corresponds to the time when the decay transitions no longer take place.) \item The derivative of a step function is proportional to a delta function, and thus the exact integral \eqref{eq:ingralform} can be approximated as follows: \begin{align}\label{eq:sequential} P_n(t) & = \int_{t_0}^{t} \! e^{ - \int_{t'}^t \Gamma_{n}(\tau) \, d \tau } \underbrace{\Gamma_{n+1}(t') \, P_{n+1}(t')}_{\approx \delta(t_0\!-\!t') [1-\sum_{m>n}\!P_{m}(t\to \infty)] }\, dt' \approx \underbrace{e^{ - \int_{t_0}^t \Gamma_{n}(\tau) \, d \tau }}_{e^{-X_n}}[1- \sum_{m>n}\!P_{m}(t\to \infty) ] \, . \end{align} Equation \eqref{eq:sequential} essentially states that the decay of all previous states (higher than $n$) provides an initial condition for the decay of the $n$-th state. Based on \eqref{eq:sequential}, the condition on the final probabilities $P_n(t\to \infty) \equiv P_n$ stated in KK is formulated: \begin{align} \label{eq:condition} P_{n}\!=\!e^{-X_n}\left(1\!-\!\sum\nolimits_{m=n+1}^N P_m\right) \, . \end{align} \item Equation \eqref{eq:condition} is solved by expressing $P_{n-1}$ in terms of $P_n$. Using \eqref{eq:condition} for $P_n$ we can express $\sum_{m>n} P_m = 1- e^{X_n} P_n$ and \begin{align} \sum_{m>n\!-\!1} P_m & = 1+(1-e^{X_n}) P_n \, .\label{eq:sum} \end{align} Substituting the sum \eqref{eq:sum} into \eqref{eq:condition} for $P_{n-1}$, we get the desired recurrence relation \begin{align} P_{n-1} & = e^{-X_{n-1}} (e^{X_n} -1) P_n \, . \label{eq:Pnrecurrence} \end{align} Starting from $P_N = e^{-X_N}$ and iterating \eqref{eq:Pnrecurrence} gives \begin{align} \underbrace{ \underbrace{ \underbrace{e^{-X_N}}_{P_N} \times \left (e^{X_{N}}-1 \right) e^{-X_{N-1}}}_{P_{N-1}} \times \left (e^{X_{N-1}}-1 \right) e^{-X_{N-2}}}_{P_{N-2}} \times \ldots \, , \end{align} wherefrom the general form is easy to infer \begin{align} \label{eq:DCcanonical} P_n = e^{-X_n} \prod_{m=n+1}^{N} \left ( 1- e^{-X_m} \right ) \, .\tag{KK-11} \end{align} \end{enumerate} Equation \eqref{eq:DCcanonical} has also been applied to a generalised decay cascade scenario by Fricke~\emph{et al.} \cite{Fricke2013}. They consider the onset of decay steps at different times, $t_0< \ldots < t^b_{n+1} < t^b_{n} < \ldots$, which in the notation of KK corresponds to \begin{align} \Gamma_n(t) & = \tilde{\Gamma}_n(t) \Theta(t-t^b_n) \, , \end{align} where $\Theta(x)$ is the unit step function and $\tilde{\Gamma}_n(t)$ are smooth functions that decay to zero as $t\to \infty$. The condition $X_n = \int_{t_0}^{\infty} \Gamma_n(t)\, dt = \int_{t^b_n}^{\infty} \tilde{\Gamma}_n(t)\, dt \gg X_{n-1}$ justifies the sequential cascade approximation \eqref{eq:sequential} and hence the probability distribution \eqref{eq:DCcanonical}.
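The distribution \eqref{eq:DCcanonical} is equally simple to evaluate numerically; comparing it with the exact results above for the same rates shows how the approximation improves as the separation $X_n \gg X_{n-1}$ becomes stronger (a sketch, ours):
\begin{verbatim}
import numpy as np

def P_cascade(X):
    # Asymptotic P_n from (KK-11): e^{-X_n} prod_{m>n} (1 - e^{-X_m})
    X = np.asarray(X, dtype=float)
    P = np.exp(-X)
    for n in range(len(X)):
        P[n] *= np.prod(1.0 - np.exp(-X[n + 1:]))
    return P

print(P_cascade([0.0, 1.0, 5.0, 25.0]))
# compare with the exact asymptotic values from the sketch above;
# agreement improves as the ratios X_n / X_{n-1} grow
\end{verbatim}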
\section{Introduction} \label{sec:introduction} Abstractive summarization methods have been under intensive study, yet they often suffer from inferior performance compared to extractive methods~\cite{Allahyari:2017:arXiv, Nallapati:2017:AAAI, See:2017:ACL}. Admittedly, by task definition, abstractive summarization is more challenging than extractive summarization. However, we argue that such inferior performance is partly due to some biases of existing summarization datasets. The source text of most datasets~\cite{Over:2007:IPM, Hermann:2015:NIPS, Cohan:2018:NAACL-HLT, Grusky:2018:NAACL-HLT, Narayan:2018:EMNLP} originates from formal documents such as news articles, which exhibit structural patterns that extractive methods can better exploit. In formal documents, key sentences tend to be located at the beginning of the text, and favorable summary candidates already appear inside the text in similar forms. Hence, summarization methods could generate good summaries by simply memorizing keywords or phrases from particular locations of the text. Moreover, if abstractive methods are trained on these datasets, they may not show much abstraction \cite{See:2017:ACL}, because they are implicitly forced to learn structural patterns \cite{Kedzie:2018:EMNLP}. \citet{Grusky:2018:NAACL-HLT} and \citet{Narayan:2018:EMNLP} recently report similar extractive bias in existing datasets. They alleviate this bias by collecting articles from diverse news publications or by regarding the intro sentences as the gold summary. Different from previous approaches, we propose to alleviate this bias issue by changing the source of the summarization dataset. We exploit user-generated posts from the online discussion forum \texttt{Reddit}, especially the \texttt{TIFU} subreddit, which are more casual and conversational than news articles. We observe that the source text in \texttt{Reddit} does not follow strict formatting, which prevents models from simply relying on locational biases for summarization. Moreover, the passages rarely contain sentences that are nearly identical to the gold summary. Our new large-scale dataset for abstractive summarization, named \textit{Reddit TIFU}, contains 122,933 pairs of an online post as source text and its corresponding long or short summary sentence. These posts are written by many different users, but each pair of post and summary is created by the same user. Another key contribution of this work is a novel memory network model named \textit{multi-level memory networks} (MMN). Our model is equipped with multi-level memory networks, storing the information of the source text at different levels of abstraction (i.e., word-level, sentence-level, paragraph-level and document-level). This design is motivated by the fact that abstractive summarization is highly challenging; it requires not only understanding the whole document, but also finding salient words, phrases and sentences. Our model can sequentially read such multiple levels of information to generate a good summary sentence. Most abstractive summarization methods \cite{See:2017:ACL,Li:2017:EMNLP,Zhou:2017:ACL,Liu:2018:ICLR,Cohan:2018:NAACL-HLT,Paulus:2018:ICLR} employ sequence-to-sequence (seq2seq) models \cite{Sutskever:2014:NIPS} where an RNN encoder embeds an input document and another RNN decodes a summary sentence.
First, RNNs accumulate information in a few fixed-length memories at every step regardless of the length of an input sequence, and thus may fail to utilize far-distant information due to vanishing gradients. This is more critical in summarization tasks, since the input text is usually very long ($>$300 words). On the other hand, our convolutional memory explicitly captures long-term information. Second, RNNs cannot build representations of different ranges, since hidden states are sequentially connected over the whole sequence. This still holds even with hierarchical RNNs that can learn multiple levels of representation. In contrast, our model exploits a set of convolution operations with different receptive fields; hence, it can build representations of not only multiple levels but also multiple ranges (e.g., sentences, paragraphs, and the whole document). Our experimental results show that the proposed MMN model improves abstractive summarization performance on our new Reddit TIFU dataset as well as on the existing Newsroom-Abs~\citep{Grusky:2018:NAACL-HLT} and XSum~\citep{Narayan:2018:EMNLP} datasets. It outperforms several state-of-the-art abstractive models with seq2seq architecture, such as \cite{See:2017:ACL, Zhou:2017:ACL, Li:2017:EMNLP}. We evaluate with quantitative language metrics (e.g., perplexity and ROUGE \cite{Lin:2004:TSBO}) and user studies via Amazon Mechanical Turk (AMT). The contributions of this work are as follows. \begin{enumerate} \vspace{-4pt}\item { We newly collect a large-scale abstractive summarization dataset named \textit{Reddit TIFU}. As far as we know, our work is the first to use non-formal text for abstractive summarization. } \vspace{-4pt}\item We propose a novel model named \textit{multi-level memory networks} (MMN). To the best of our knowledge, our model is the first attempt to leverage memory networks for abstractive summarization. We discuss the unique updates of the MMN over existing memory networks in Section \ref{sec:related_work}. \vspace{-4pt}\item With quantitative evaluation and user studies via AMT, we show that our model outperforms state-of-the-art abstractive summarization methods on the Reddit TIFU, Newsroom-Abs, and XSum datasets. \end{enumerate} \section{Related Work} \label{sec:related_work} Our work can be uniquely positioned in the context of the following three topics. \textbf{Neural Abstractive Summarization}. Many deep neural network models have been proposed for abstractive summarization. One of the most dominant architectures is to employ RNN-based seq2seq models with an attention mechanism, as in \citep{Rush:2015:EMNLP, Chopra:2016:NAACL-HLT, Nallapati:2016:CoNLL, Cohan:2018:NAACL-HLT, Hsu:2018:ACL, Gehrmann:2018:EMNLP}. In addition, recent advances in deep network research have been promptly adopted for improving abstractive summarization.
Some notable examples include the use of variational autoencoders (VAEs)~\citep{Miao:2016:EMNLP,Li:2017:EMNLP}, graph-based attention~\citep{Tan:2017:ACL}, pointer-generator models \citep{See:2017:ACL}, self-attention networks~\citep{Liu:2018:ICLR}, reinforcement learning~\citep{Paulus:2018:ICLR, Pasunuru:2018:NAACL-HLT}, contextual agent attention~\citep{Celikyilmaz:2018:NAACL-HLT} and integration with extractive models~\citep{Hsu:2018:ACL, Gehrmann:2018:EMNLP}. Compared to existing neural methods of abstractive summarization, our approach is novel in replacing the RNN-based encoder with an explicit multi-level convolutional memory. While RNN-based encoders always consider the whole sequence to represent each hidden state, our multi-level memory network exploits convolutions to control the extent of representation at multiple levels of sentences, paragraphs, and the whole text. \textbf{Summarization Datasets}. Most existing summarization datasets use formal documents as source text. News articles are exploited the most, including in the DUC~\cite{Over:2007:IPM}, Gigaword~\citep{Napoles:2012:NAACL-HLT}, CNN/DailyMail~\citep{Nallapati:2016:CoNLL,Hermann:2015:NIPS}, Newsroom~\citep{Grusky:2018:NAACL-HLT} and XSum~\citep{Narayan:2018:EMNLP} datasets. \citet{Cohan:2018:NAACL-HLT} introduce datasets of academic papers from arXiv and PubMed. \citet{Hu:2015:EMNLP} propose the LCSTS dataset, a collection of Chinese microblogs' short texts, each paired with a summary. However, it selects only formal text posted by verified organizations such as news agencies or government institutions. Compared to previous summarization datasets, our dataset is novel in that it consists of posts from the online forum Reddit. The Rotten Tomatoes and Idebate datasets \citep{Wang:2016:NAACL-HLT} use online text as source, but they are relatively small in scale: 3.7K posts in Rotten Tomatoes compared to 80K posts in TIFU-short, as shown in Table \ref{tab:dataset}. Moreover, Rotten Tomatoes uses multiple movie reviews written by different users as a single source text, and a one-sentence consensus written by a professional editor as the summary. Thus, each pair in this dataset could be less coherent than a pair in our TIFU, which is written by the same user. The Idebate dataset is collected from short arguments of debates on controversial topics, and thus the text is rather formal. On the other hand, our dataset contains posts of interesting stories that happened in daily life, and thus the text is more unstructured and informal. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{./pictures/dataset/post_example_cropped.pdf} \caption{An example post of the \texttt{TIFU} subreddit.} \label{fig:post_example} \vspace{-5pt} \end{center} \end{figure} \textbf{Neural Memory Networks}. Many memory network models have been proposed to improve the memorization capability of neural networks \cite{Kaiser:2017:ICLR, Na:2017:ICCV, Yoo:2019:CVPR}. \citet{Weston:2014:ICLR} propose one of the early memory networks for language question answering (QA); since then, many memory networks have been proposed for QA tasks \cite{Sukhbaatar:2015:NIPS,Kumar:2016:ICML,Miller:2016:EMNLP}. \citet{Park:2017:CVPR} propose a convolutional read memory network for personalized image captioning. One of the closest works to ours may be \citet{Singh:2017:CIKM}, which uses a memory network for text summarization. However, it only deals with extractive summarization by storing embeddings of individual sentences into memory.
Compared to previous memory networks, our MMN has four novel features: (i) building a multi-level memory network that better abstracts a multi-level representation of a long document, (ii) employing a dilated convolutional memory write mechanism to correlate adjacent memory cells, (iii) proposing normalized gated tanh units to avoid covariate shift within the network, and (iv) generating an output sequence without RNNs. \section{Reddit TIFU Dataset} \label{sec:reddit_datasets} We introduce the Reddit TIFU dataset, whose key statistics are outlined in Table \ref{tab:dataset}. We collect data from Reddit, which is a discussion forum platform with a large number of subreddits on diverse topics and interests. Specifically, we crawl all the posts from 2013-Jan to 2018-Mar in the \texttt{TIFU} subreddit, where every post must strictly follow the posting rules; otherwise it is removed. Thanks to the following rules\footnote{\url{https://reddit.com/r/tifu/wiki/rules}.}, the posts in this subreddit form an excellent corpus for abstractive summarization: \textit{Rule 3: Posts and titles without context will be removed. Your title must make an attempt to encapsulate the nature of your f***up. Rule 11: All posts must end with a TL;DR summary that is descriptive of your f***up and its consequences}. Thus, we regard the body text as the source, the title as the short summary, and the TL;DR summary as the long summary. As a result, we build two datasets: \textit{TIFU-short} and \textit{TIFU-long}. Figure \ref{fig:post_example} shows an example post of the \texttt{TIFU} subreddit. \begin{table}[t] \centering \small \setlength{\tabcolsep}{2.2pt} \begin{tabular}{|c|c|c|c|} \hline Dataset & \# posts & \# words/post & \# words/summ \\ \hline \texttt{RottenTomatoes} & 3,731 & 2124.7 (1747) & 22.2 (22) \\ \texttt{Idebate} & 2,259 & 178.3 (160) & 11.4 (10) \\ \hline {\tt TIFU-short} & 79,949 & 342.4 (269) & 9.33 (8) \\ {\tt TIFU-long } & 42,984 & 432.6 (351) & 23.0 (21) \\ \hline \end{tabular} \caption{ Statistics of the Reddit TIFU dataset compared to existing opinion summarization corpora, \textit{RottenTomatoes} and \textit{Idebate} \cite{Wang:2016:NAACL-HLT}. We show average and median (in parentheses) values. } \vspace{-5pt} \label{tab:dataset} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{./pictures/dataset/bigram_all.pdf} \caption{Relative locations of bigrams of gold summary in the source text across different datasets.} \label{fig:dataset_comparison} \vspace{-5pt} \end{center} \end{figure*} \begin{savenotes} \begin{table*}[t!]
\centering \small \setlength{\tabcolsep}{3.5pt} \begin{tabular}{|c|ccc|ccc|ccc|c|c|} \hline & \multicolumn{3}{c|}{\texttt{PG}} & \multicolumn{3}{c|}{\texttt{Lead}} & \multicolumn{3}{c|}{\texttt{Ext-Oracle}} & \texttt{PG/Lead} & \texttt{PG/Oracle} \\ \hline Dataset & R-1 & R-2 & R-L & R-1 & R-2 & R-L & R-1 & R-2 & R-L & Ratio (R-L) & Ratio (R-L) \\ \hline {\tt CNN/DM}~\cite{Nallapati:2016:CoNLL} & 36.4 & 15.7 & 33.4 & 39.6 & 17.7 & 36.2 & 54.7 & 30.4 & 50.8 & 0.92x & 0.66x \\ {\tt NY Times}~\cite{Sandhaus:2008:LDC} & 44.3 & 27.4 & 40.4 & 31.9 & 15.9 & 23.8 & 52.1 & 31.6 & 46.7 & 1.70x & 0.87x \\ {\tt Newsroom}~\cite{Grusky:2018:NAACL-HLT} & 26.0 & 13.3 & 22.4 & 30.5 & 21.3 & 28.4 & 41.4 & 24.2 & 39.4 & 0.79x & 0.57x \\ \hline {\tt Newsroom-Abs}~\cite{Grusky:2018:NAACL-HLT} & \textbf{14.7} & \textbf{2.2} & \textbf{10.3} & 13.7 & 2.4 & 11.2 & 29.7 & 10.5 & 27.2 & 0.92x & 0.38x \\ {\tt XSum}~\cite{Narayan:2018:EMNLP} & 29.7 & 9.2 & 23.2 & 16.3 & 1.6 & 12.0 & 29.8 & 8.8 & 22.7 & 1.93x & 1.02x \\ \hline {\tt TIFU-short} & 18.3 & 6.5 & 17.9 & \textbf{3.4} & \textbf{0.0} & \textbf{3.3} & \textbf{8.0} & \textbf{0.0} & \textbf{7.7} & \textbf{5.42x} & \textbf{2.32x} \\ {\tt TIFU-long } & 19.0 & 3.7 & 15.1 & \textbf{2.8} & \textbf{0.0} & \textbf{2.7} & \textbf{6.8} & \textbf{0.0} & \textbf{6.6} & \textbf{5.59x} & \textbf{2.29x} \\ \hline \end{tabular} \caption{ Comparison of F1 ROUGE scores between different datasets (row) and methods (column). \texttt{PG} is a state-of-the-art abstractive summarization method, and \texttt{Lead} and \texttt{Ext-Oracle} are extractive ones. \texttt{PG/Lead} and \texttt{PG/Oracle} are the ROUGE-L ratios of \texttt{PG} with \texttt{Lead} and \texttt{Ext-Oracle}, respectively. We report the numbers for each dataset (row) from the corresponding cited papers. } \vspace{-5pt} \label{tab:dataset-ext} \end{table*} \end{savenotes} \subsection{Preprocessing} \label{sec:preprocessing} We build a vocabulary dictionary $\mathcal{V}$ by choosing the most frequent $V$(=15K) words in the dataset. \new{We exclude any urls, unicode and special characters. We lowercase words, and normalize digits to 0. Subreddit names and user ids are replaced with the @subreddit and @userid tokens, respectively. We use the \texttt{markdown}\footnote{\url{https://python-markdown.github.io/}.} package to strip markdown format, and \texttt{spacy}\footnote{\url{https://spacy.io}.} to tokenize words. Common prefixes of summary sentences (e.g., tifu by, tifu-, tl;dr) are trimmed. We do not take OOV words into consideration, since our vocabulary of size 15K covers about 98\% of word frequencies in our dataset.} We set the maximum length of a document as 500. We exclude the gold summaries whose lengths are more than 20 and 50 for \textit{TIFU-short} and \textit{TIFU-long}, respectively. They amount to about 0.6K posts in both datasets (i.e., less than 1\% and 3\%). We use these maximum lengths based on previous datasets (e.g., 8, 31, and 56 words on average per summary in the Gigaword, DUC, and CNN/DailyMail datasets, respectively). We randomly split the dataset into 95\% for training and 5\% for test.
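A rough Python sketch of these cleanup steps is given below; the regular expressions are our own guesses at the described behavior, and the markdown stripping and \texttt{spacy} tokenization mentioned above are omitted:
\begin{verbatim}
import re

def clean_text(text):
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)      # drop urls
    text = re.sub(r"\br/\w+", "@subreddit", text)  # subreddit names
    text = re.sub(r"\bu/\w+", "@userid", text)     # user ids
    text = re.sub(r"[^\x00-\x7f]", " ", text)      # drop non-ascii
    text = re.sub(r"\d", "0", text)                # normalize digits
    return text

def trim_summary(summary):
    # trim common prefixes such as "tifu by", "tifu-", "tl;dr"
    return re.sub(r"^(tifu(\s+by|-)?|tl;dr[:,]?)\s*", "",
                  summary.strip())
\end{verbatim}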
\subsection{Abstractive Properties of Reddit TIFU} \label{sec:how_abstractive} We discuss some abstractive characteristics of the Reddit TIFU dataset, compared to existing summarization datasets based on news articles. \textbf{Weak Lead Bias}. Formal documents including news articles tend to be structured to emphasize key information at the beginning of the text. On the other hand, key information in informal online text is spread more evenly across the text. Figure \ref{fig:dataset_comparison} plots the density histogram of the relative locations of bigrams of the gold summary in the source text. In CNN/DailyMail and Newsroom, the bigrams are highly concentrated in the front parts of documents. In contrast, our Reddit TIFU dataset shows a rather uniform distribution across the text. This characteristic can also be seen from the ROUGE score comparison in Table \ref{tab:dataset-ext}. The \texttt{Lead} baseline simply creates a summary by selecting the first few sentences or words in the document; thus, a high score of the \texttt{Lead} baseline indicates a strong lead bias. The \texttt{Lead} scores are the lowest in our TIFU dataset, in which it is more difficult for models to simply take advantage of locational bias for the summary. \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{./pictures/model/model_resized.pdf} \caption{Illustration of the proposed \textit{multi-level memory network} (MMN) model.} \vspace{-5pt} \label{fig:model} \end{center} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{./pictures/model/gated_units.pdf} \caption{Comparison between (a) the gated linear unit \cite{Gehring:2017:ICML} and (b) the proposed normalized gated tanh unit.} \label{fig:gated_tanh} \end{center} \vspace{-5pt} \end{figure} \textbf{Strong Abstractness}. Besides the locational bias, news articles tend to contain wrap-up sentences that cover the whole article, and these often resemble the gold summary. Their existence can be measured by the score of the \texttt{Ext-Oracle} baseline, which creates a summary by selecting the sentences with the highest average score of \new{F1} ROUGE-1/2/L. Thus, it can be viewed as an upper bound for extractive models \cite{Narayan:2018:EMNLP, Narayan:2018:NAACL-HLT, Nallapati:2017:AAAI}. In Table \ref{tab:dataset-ext}, the ROUGE scores of the \texttt{Ext-Oracle} are the lowest in our TIFU dataset. This means that sentences similar to the gold summary scarcely exist in the source text of our dataset. This property forces models to comprehend the entire text instead of simply finding wrap-up sentences. Finally, \texttt{PG/Lead} and \texttt{PG/Oracle} in Table \ref{tab:dataset-ext} are the ROUGE-L ratios of \texttt{PG} with \texttt{Lead} and \texttt{Ext-Oracle}, respectively. These metrics quantify the dataset according to the degree of difficulty for extractive methods and the suitability for abstractive methods, respectively. The high scores of the TIFU dataset on both metrics show that it is potentially an excellent benchmark for the evaluation of abstractive summarization systems. \section{Multi-level Memory Networks (MMN)} \label{sec:model} Figure \ref{fig:model} shows the proposed \textit{multi-level memory network} (MMN) model. The MMN memorizes the source text with a proper representation in the memory and generates a summary sentence one word at a time by extracting relevant information from memory cells in response to previously generated words. The input of the model is a source text $\{x_i\} = x_1, ..., x_N$, and the output is a sequence of summary words $\{y_t\} = y_1, ..., y_T$, each of which is a symbol from the dictionary $\mathcal V$.
\subsection{Text Embedding} \label{sec:embedding_layers} Online posts include many morphologically similar words, which should be embedded close to one another. Thus, we use \texttt{fastText} \cite{Bojanowski:2016:TACL}, trained on the Common Crawl corpus, to initialize the word embedding matrix $\mathbf W_{emb}$. We use the same embedding matrix $\mathbf W_{emb}$ for both source text and output sentences. That is, we represent a source text $\{x_i\}_{i=1}^N$ in a distributional space as $\{\mathbf d_i^0\}_{i=1}^N $ by $\mathbf d_i^0 = \mathbf W_{emb} \mathbf x_i$, where $\mathbf x_i$ is a one-hot vector for the $i$-th word in the source text. Likewise, the output words $\{y_t\}_{t=1}^T$ are embedded as $\{\mathbf o_t^0\}_{t=1}^T$, with $\mathbf d_i^0, \mathbf o_t^0 \in \mathbb R^{300}$. \subsection{Construction of Multi-level Memory} \label{sec:construct_memory} As shown in Figure \ref{fig:model}(a), the multi-level memory network takes the source text embedding $\{\mathbf d_i^0\}_{i=1}^N$ as an input, and generates $S$ memory tensors $\{\mathbf M_s^{a/c}\}_{s=1}^S$ as output, where the superscripts $a$ and $c$ denote the input and output memory representations, respectively. The multi-level memory network is motivated by the observation that when humans understand a document, they do not remember it as a single whole document but tie together several levels of abstraction (i.e., word-level, sentence-level, paragraph-level and document-level). That is, we generate $S$ sets of memory tensors, each of which associates each cell with a different number of neighboring word embeddings, based on the level of abstraction. To build the memory slots of such multi-level memory, we exploit a multi-layer CNN as the write network, where each layer is chosen based on the size of its receptive field. However, one issue of convolution is that large receptive fields require many layers or large filter sizes. For example, stacking 6 layers with a filter size of 3 results in a receptive field size of 13, i.e., each output depends on 13 input words. In order to grow the receptive field without increasing the computational cost, we exploit the \textit{dilated} convolution~\cite{Yu:2016:ICLR,Oord:2016:SSW} for the write network. \textbf{Memory Writing with Dilated Convolution}. In dilated convolution, the filter is applied over an area larger than its length by skipping input values with a certain gap. Formally, for a 1-D $n$-length input $\mathbf x \in \mathbb{R}^{n \times 300}$ and a filter $\mathbf w : \{1, ..., k\} \rightarrow \mathbb{R}^{300}$, the dilated convolution operation $\mathcal{F}$ on element $s$ of a sequence is defined as \begin{align} \label{eq:dilated_cnn_op} \mathcal{F} (\mathbf x, s) = \sum_{i=1}^{k} \mathbf w (i) * \mathbf x_{s + d \cdot (i - \lfloor k/2 \rfloor)} + \mathbf b, \end{align} where $d$ is the dilation rate, $k$ is the filter size, $s + d \cdot (i - \lfloor k/2 \rfloor)$ indexes the dilated input positions, and $\mathbf w \in \mathbb R^{k \times 300 \times 300}$ and $\mathbf b \in \mathbb R^{300}$ are the parameters of the filter. With $d = 1$, the dilated convolution reduces to a regular convolution. Using a larger dilation enables a single output at the top level to represent a wider range of input, thus effectively expanding the receptive field. To the embedding of a source text $\{\mathbf d_i^0\}_{i=1}^N$, we recursively apply a series of dilated convolutions $F(\mathbf d^0) \in \mathbb R^{N \times 300}$.
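To make Eq.~(\ref{eq:dilated_cnn_op}) concrete, below is a minimal NumPy sketch of a single dilated convolution layer (ours; the weights are placeholders for learned parameters, and zero padding at the text boundaries is our assumption):
\begin{verbatim}
import numpy as np

def dilated_conv(x, w, b, d):
    # y[s] = sum_i  x[s + d*(i - k//2)] @ w[i] + b   (zero-padded)
    n, _ = x.shape                  # x: (N, 300) word embeddings
    k, _, c_out = w.shape           # w: (k, 300, 300) filter
    y = np.tile(b, (n, 1))
    for s in range(n):
        for i in range(k):
            j = s + d * (i - k // 2)
            if 0 <= j < n:          # positions outside the text -> 0
                y[s] += x[j] @ w[i]
    return y
\end{verbatim}
For a stack of $L$ such layers with filter size $k$ and dilations $d_1,\dots,d_L$, the receptive field is $1+(k-1)\sum_l d_l$; with $k=3$ and dilations doubling per layer, this reproduces the receptive fields of 31 (4 layers) and 511 (8 layers) quoted below.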
We denote the output of the $l$-th convolution layer as $\{\mathbf d_i^l\}_{i=1}^N$.

\textbf{Normalized Gated Tanh Units}. Each convolution is followed by our new activation, the \textit{normalized gated tanh unit} (NGTU), illustrated in Figure \ref{fig:gated_tanh}(b): \begin{align} \label{eq:gtu} \mbox{GTU}(\mathbf d^l) &= \mbox{tanh} (\mathcal{F}_f^l (\mathbf d^l)) \circ \sigma (\mathcal{F}_g^l (\mathbf d^l)), \\ \label{eq:residual} \mathbf d^{l+1} &= \mbox{LayerNorm} (\mathbf d^l + \mbox{GTU}(\mathbf d^l)), \end{align} where $\sigma$ is a sigmoid, $\circ$ is the element-wise multiplication, and $\mathcal F_f^l$ and $\mathcal F_g^l$ denote the filter and gate of the $l$-th dilated convolution layer, respectively. The NGTU is an extension of the existing gated tanh unit (GTU) \cite{Oord:2016:SSW, Oord:2016:NIPS} obtained by applying weight normalization \cite{Salimans:2016:NIPS} and layer normalization \cite{Ba:2016:Stat}. This mixed normalization improves on the earlier work of \citet{Gehring:2017:ICML}, where only weight normalization is applied to the GLU. As shown in Figure \ref{fig:gated_tanh}(a), the GLU tries to preserve the variance of activations throughout the whole network by scaling the output of residual blocks by $\sqrt{0.5}$. However, we observe that this heuristic does not always preserve the variance and does not empirically work well on our dataset. In contrast, the proposed NGTU not only guarantees preservation of activation variances but also significantly improves the performance.

\textbf{Multi-level Memory}. Instead of using only the last layer output of the CNN, we exploit the outputs of multiple layers to construct $S$ sets of memories. For example, memory constructed from the 4-th layer, whose receptive field is 31, may hold sentence-level embeddings, while memory from the 8-th layer, whose receptive field is 511, may hold document-level embeddings. We obtain each $s$-th level memory $\mathbf M_s^{a/c}$ following key-value memory networks \cite{Miller:2016:EMNLP}: \begin{align} \mathbf M_s^a &= \mathbf d^{\mathbf m(s)}, ~ \mathbf M_s^c = \mathbf d^{\mathbf m(s)} + \mathbf d^0. \end{align} Recall that $\mathbf M_s^a$ and $\mathbf M_s^c \in \mathbb R^{N \times 300}$ are the input and output memory matrices, respectively. $\mathbf {m} (s)$ indicates the index of the convolutional layer used for the $s$-th level memory. For example, if we set $S=3$ and $\mathbf m = \{3,6,9 \}$, we build three-level memories that use the outputs of the 3-rd, 6-th, and 9-th convolution layers, respectively. To the output memory representation $\mathbf M_s^c$, we add the word embedding $\mathbf d^{0}$ as a skip connection.

\subsection{State-Based Sequence Generation} \label{sec:query_network} We now discuss how to predict the next word $y_{t+1}$ at time step $t$ based on the memory state and the previously generated words $y_{1:t}$. Figure \ref{fig:model}(b) visualizes the overall procedure of decoding. We first apply max-pooling to the output of the last layer of the encoder network to build a whole-document embedding $\mathbf d^{whole} \in \mathbb R^{300}$: \begin{align} \label{eq:document_max_pooling} \mathbf d^{whole} = \mbox{maxpool}([\mathbf d^L_1; ...; \mathbf d^L_N]). \end{align} The decoder is designed based on WaveNet \cite{Oord:2016:SSW}, using a series of causal dilated convolutions, denoted by $\hat{\mathcal{F}}(\mathbf o_{1:t}^l) \in \mathbb R^{t \times 300}$.
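A hedged sketch of one such causal decoder layer, again assuming PyTorch (the global conditioning terms $\mathbf W^l_{f/g}\mathbf d^{whole}$ are spelled out in the equations that follow): left-padding the input by $d(k-1)$ ensures that position $t$ depends only on positions $\leq t$.

\begin{verbatim}
# Hedged sketch of one causal decoder layer with global conditioning.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb, k, d = 300, 3, 2                          # channels, filter, dilation

class CausalDecoderLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_f = nn.Conv1d(emb, emb, k, dilation=d)  # filter conv
        self.conv_g = nn.Conv1d(emb, emb, k, dilation=d)  # gate conv
        self.W_f = nn.Linear(emb, emb, bias=False)        # W_f^l
        self.W_g = nn.Linear(emb, emb, bias=False)        # W_g^l
        self.norm = nn.LayerNorm(emb)

    def forward(self, o, d_whole):             # o: [B, t, emb]
        pad = d * (k - 1)                      # left-pad only => causal
        z_f = (o + self.W_f(d_whole)).transpose(1, 2)
        z_g = (o + self.W_g(d_whole)).transpose(1, 2)
        h_f = self.conv_f(F.pad(z_f, (pad, 0))).transpose(1, 2)
        h_g = self.conv_g(F.pad(z_g, (pad, 0))).transpose(1, 2)
        h_a = torch.tanh(h_f) * torch.sigmoid(h_g)        # gated tanh
        return self.norm(o + h_a)              # residual + LayerNorm

layer = CausalDecoderLayer()
out = layer(torch.randn(2, 5, emb), torch.randn(2, 1, emb))
assert out.shape == (2, 5, emb)
\end{verbatim}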
We globally condition on $\mathbf d^{whole}$ when computing the embeddings of previously generated words $\mathbf o_{1:t}^l$: \begin{align} &\mathbf h_{f/g}^l = \hat{\mathcal{F}}_{f/g}^l ( \mathbf o^l_{1:t} + \mathbf W_{f/g}^l \mathbf d^{whole}), \\ &\mathbf h_a^l = \mbox{tanh} (\mathbf h_f^l) \circ \sigma (\mathbf h_g^l), \\ \label{eq:output_conv} &\mathbf o^{l+1}_{1:t} = \mbox{LayerNorm} (\mathbf o^l_{1:t} + \mathbf h_a^l), \end{align} where $\mathbf h_{f/g}^l$ are the filter and gate hidden states, respectively, and the learnable parameters are $\mathbf W_f^l$ and $\mathbf W_g^l \in \mathbb R^{300 \times 300}$. We initialize $\mathbf o^0_t = \mathbf W_{emb} \mathbf y_t$. We set the depth of the decoder network to $L = 3$ for TIFU-short and $L = 5$ for TIFU-long.

Next, we generate $S$ query vectors $\{\mathbf q_t^s\}_{s=1}^S$ at time $t$ for our memory network as \begin{align} \label{eq:query} \mathbf q_t^s &= \mbox{tanh}(\mathbf W_q^s \mathbf o^L_t + \mathbf b_q^s), \end{align} where $\mathbf W_q^s \in \mathbb R^{300 \times 300}$ and $\mathbf b_q^s \in \mathbb R^{300}$. Each query vector $\mathbf q_t^s$ is fed into the attention function of the corresponding level of memory. As in \citep{Vaswani:2017:NIPS}, the attention function is \begin{align} \label{eq:attention} \mathbf M_{o_t}^s = \mbox{softmax}(\frac{\mathbf q_t^s (\mathbf M_s^a)^T}{\sqrt{d^{emb}}}) \mathbf M_s^c, \end{align} where we set $d^{emb}=300$ for the embedding dimension and $\mathbf M_{o_t}^s \in \mathbb R^{300}$. Next, we obtain the output word probability: \begin{align} \label{eq:output_act} \mathbf s_t = \mbox{softmax}(\mathbf W_o [\mathbf M_{o_t}^1;...;\mathbf M_{o_t}^S;\mathbf o^L_t]), \end{align} where $\mathbf W_o \in \mathbb R^{(300 \times (S + 1)) \times V}$. Finally, we select the word with the highest probability, $y_{t+1} = \mbox{argmax}_{\mathbf s \in \mathcal{V}} (\mathbf s_t)$. Unless $y_{t+1}$ is an EOS token, we repeat generating the next word by feeding $y_{t+1}$ into the output convolution layer of Eq.(\ref{eq:output_conv}).

\subsection{Training} \label{sec:training} We use the softmax cross-entropy loss from the estimated $y_t$ to its target $y_{GT,t}$. However, this loss forces the model to predict extremes (zero or one) to distinguish between the ground truth and the alternatives. Label smoothing alleviates this issue by acting as a regularizer that makes the model less confident in its predictions. We smooth the target distribution with a uniform prior distribution $u$ \cite{Pereyra:2017:ICLR, Edunov:2017:NAACL-HLT, Vaswani:2017:NIPS}. Thus, the loss over the training set $\mathcal{D}$ is \begin{align} \mathcal{L} = - \sum \log p_{\theta} (\mathbf{y} | \mathbf{x}) - D_{KL}(u||p_{\theta}(\mathbf{y} | \mathbf{x})). \nonumber \end{align} We implement label smoothing by modifying the ground-truth distribution to be $p(y_{GT,t}) = 1 - \epsilon$ and $p(y') = \epsilon/ |\mathcal{V}|$ for $y' \neq y_{GT,t}$, where $\epsilon$ is a smoothing parameter set to 0.1. Further details can be found in the Appendix.

\section{Experiments} \label{sec:experiments} \subsection{Experimental Setting} \label{sec:experimental_setting} \textbf{Evaluation Metrics}. We evaluate the summarization performance with two language metrics: perplexity and standard F1 ROUGE scores \cite{Lin:2004:TSBO}. Recall that lower perplexity and higher ROUGE scores indicate better performance.
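As a reference for how these metrics can be computed, the following is a minimal sketch; the ROUGE part assumes the third-party \texttt{rouge-score} package (any F1 ROUGE implementation would do), and perplexity is the exponentiated mean word-level cross-entropy.

\begin{verbatim}
# Hedged sketch of the two reported metrics (not part of the model).
import math
from rouge_score import rouge_scorer

def perplexity(nll_per_word):
    return math.exp(sum(nll_per_word) / len(nll_per_word))

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
scores = scorer.score("gold summary text", "generated summary text")
print({k: round(v.fmeasure, 3) for k, v in scores.items()})
print(perplexity([3.2, 2.8, 3.5]))
\end{verbatim}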
\textbf{Datasets}. In addition to Reddit TIFU, we also evaluate on two existing datasets: the abstractive subset of \texttt{Newsroom} \cite{Grusky:2018:NAACL-HLT} and \texttt{XSum} \cite{Narayan:2018:EMNLP}. These are suitable benchmarks for evaluating our model in two aspects. First, they are specialized for abstractive summarization, which matches the goal of this work well. Second, they have larger vocabulary sizes (40K, 50K) than Reddit TIFU (15K), so we can evaluate the learning capability of our model.

\textbf{Baselines}. We compare with three abstractive summarization methods, one basic seq2seq model, two heuristic extractive methods, and variants of our model. We choose \texttt{PG} \cite{See:2017:ACL}, \texttt{SEASS} \cite{Zhou:2017:ACL}, and \texttt{DRGD} \cite{Li:2017:EMNLP} as the state-of-the-art methods of abstractive summarization. We test the attention-based seq2seq model denoted as \texttt{s2s-att} \cite{Chopra:2016:NAACL-HLT}. As heuristic extractive methods, \texttt{Lead-1} uses the first sentence of the text as the summary, and \texttt{Ext-Oracle} takes the sentence with the highest average score of F1 ROUGE-1/2/L with respect to the gold summary. Thus, \texttt{Ext-Oracle} can be viewed as an upper bound for extractive methods. We also test variants of our method, \texttt{MMN-*}. To validate the contribution of each component, we exclude one key component from our model at a time as follows: (i) \texttt{-NoDilated} with conventional convolutions instead, (ii) \texttt{-NoMulti} with no multi-level memory, and (iii) \texttt{-NoNGTU} with existing gated linear units \cite{Gehring:2017:ICML}. That is, \texttt{-NoDilated} quantifies the improvement from the dilated convolution, \texttt{-NoMulti} assesses the effect of multi-level memory, and \texttt{-NoNGTU} validates the normalized gated tanh unit. Please refer to the Appendix for implementation details of our method.

\begin{table}[t] \centering \small \setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|c|ccc|} \hline
\multicolumn{5}{|c|}{\textbf{TIFU-short}} \\ \hline
Methods & PPL & R-1 & R-2 & R-L \\ \hline
{\tt Lead-1} & n/a & 3.4 & 0.0 & 3.3 \\
{\tt Ext-Oracle} & n/a & 8.0 & 0.0 & 7.7 \\ \hline
{\tt s2s-att} \cite{Chopra:2016:NAACL-HLT} & 46.2 & 18.3 & 6.4 & 17.8 \\
{\tt PG} \cite{See:2017:ACL} & 40.9 & 18.3 & 6.5 & 17.9 \\
{\tt SEASS} \cite{Zhou:2017:ACL} & 62.6 & 18.5 & 6.4 & 18.0 \\
{\tt DRGD} \cite{Li:2017:EMNLP} & 69.2 & 14.6 & 3.3 & 14.2 \\ \hline
{\tt MMN} & 32.1 & \textbf{20.2} & \textbf{7.4} & \textbf{19.8} \\
{\tt MMN-NoDilated} & \textbf{31.8} & 19.5 & 6.8 & 19.1 \\
{\tt MMN-NoMulti} & 34.4 & 19.0 & 6.1 & 18.5 \\
{\tt MMN-NoNGTU} & 40.8 & 18.6 & 5.6 & 18.1 \\ \hline
\multicolumn{5}{|c|}{\textbf{TIFU-long}} \\ \hline
{\tt Lead-1} & n/a & 2.8 & 0.0 & 2.7 \\
{\tt Ext-Oracle} & n/a & 6.8 & 0.0 & 6.6 \\ \hline
{\tt s2s-att} \cite{Chopra:2016:NAACL-HLT} & 180.6 & 17.3 & 3.1 & 14.0 \\
{\tt PG} \cite{See:2017:ACL} & 175.3 & 16.4 & 3.0 & 13.5 \\
{\tt SEASS} \cite{Zhou:2017:ACL} & 387.0 & 17.5 & 2.9 & 13.9 \\
{\tt DRGD} \cite{Li:2017:EMNLP} & 176.6 & 16.8 & 2.0 & 13.6 \\ \hline
{\tt MMN} & \textbf{114.1} & \textbf{19.0} & \textbf{3.7} & \textbf{15.1} \\
{\tt MMN-NoDilated} & 124.2 & 17.6 & 3.4 & 14.1 \\
{\tt MMN-NoMulti} & 124.5 & 14.0 & 1.5 & 11.8 \\
{\tt MMN-NoNGTU} & 235.4 & 14.0 & 2.6 & 12.1 \\ \hline
\end{tabular}
\caption{ Summarization results measured by perplexity and ROUGE-1/2/L on the TIFU-short/long dataset.
} \label{tab:results} \end{table}

\begin{table}[t] \centering \small \setlength{\tabcolsep}{4pt}
\begin{tabular}{|c|ccc|ccc|} \hline
& \multicolumn{3}{c|}{\textbf{Newsroom-Abs} } & \multicolumn{3}{c|}{\textbf{XSum} } \\ \hline
Methods & R-1 & R-2 & R-L & R-1 & R-2 & R-L \\ \hline
{\tt s2s-att} & 6.2 & 1.1 & 5.7 & 28.4 & 8.8 & 22.5 \\
{\tt PG} & 14.7 & 2.2 & 11.4 & 29.7 & 9.2 & 23.2 \\
{\tt ConvS2S} & - & - & - & 31.3 & 11.1 & 25.2 \\
{\tt T-ConvS2S} & - & - & - & 31.9 & 11.5 & 25.8 \\
{\tt MMN} (Ours) & \textbf{17.5} & \textbf{4.7} & \textbf{14.2} & \textbf{32.0} & \textbf{12.1} & \textbf{26.0} \\ \hline
\end{tabular}
\caption{ Summarization results in terms of ROUGE-1/2/L on Newsroom-Abs \cite{Grusky:2018:NAACL-HLT} and XSum \cite{Narayan:2018:EMNLP}. Except for \texttt{MMN}, all scores are quoted from the original papers. {\tt T-ConvS2S} is the topic-aware convolutional seq2seq model. } \vspace{-5pt} \label{tab:results_newsroom} \end{table}

\subsection{Quantitative Results} \label{sec:results_word_overlap} Table \ref{tab:results} compares the summarization performance of different methods on the TIFU-short/long dataset. Our model outperforms the state-of-the-art abstractive methods in both ROUGE and perplexity scores. \texttt{PG} utilizes a pointer network to copy words from the source text, but this may not be a good strategy on our dataset, which is more abstractive, as discussed in Table \ref{tab:dataset-ext}. \texttt{SEASS} shows strong performance on the DUC and Gigaword datasets, where the source text is a single long sentence and the gold summary is its shorter version. Yet, it may not be sufficient for summarizing the much longer posts of our dataset, even with its second-level representation. \texttt{DRGD} is based on a variational autoencoder with latent variables that capture the structural patterns of gold summaries. This idea can be useful for similarly structured formal documents but may not work well with the diverse online text in the TIFU dataset.

\begin{table}[t] \small \centering \setlength{\tabcolsep}{4.5pt}
\begin{tabular}{|c|cc|c|cc|c|} \hline
& \multicolumn{3}{c|}{\textbf{TIFU-short}} & \multicolumn{3}{c|}{\textbf{TIFU-long}}\\ \hline
vs. Baselines & Win & Lose & Tie & Win & Lose & Tie \\ \hline
\texttt{s2s-att} & \textbf{43.0} & 28.3 & 28.7 & \textbf{32.0} & 24.0 & 44.0 \\
\texttt{PG} & \textbf{38.7} & 28.0 & 33.3 & \textbf{42.3} & 33.3 & 24.3 \\
\texttt{SEASS} & \textbf{35.7} & 28.0 & 36.3 & \textbf{47.0} & 37.3 & 15.7 \\
\texttt{DRGD} & \textbf{46.7} & 17.3 & 15.0 & \textbf{61.0} & 23.0 & 16.0 \\ \hline
Gold & 27.0 & \textbf{58.0} & 15.0 & 22.3 & \textbf{73.7} & 4.0 \\ \hline
\end{tabular}
\caption{ AMT results on TIFU-short/long between our \texttt{MMN}, four baselines, and the gold summary. We show the percentages of responses in which turkers vote for our approach over the baselines. } \vspace{-5pt} \label{tab:results_amt} \end{table}

These state-of-the-art abstractive methods are not as good as our model, but they still perform better than the extractive methods. Although the \texttt{Ext-Oracle} heuristic is an upper bound for extractive methods, it is not successful on our highly abstractive dataset; it is not effective to simply retrieve existing sentences from the source text. Moreover, the performance gaps between abstractive and extractive methods are much larger on our dataset than on other datasets \cite{See:2017:ACL, Paulus:2018:ICLR, Cohan:2018:NAACL-HLT}, which again indicates that our dataset is highly abstractive.
Table \ref{tab:results_newsroom} compares the performance of our MMN on the Newsroom-Abs and XSum datasets. We report the numbers from the original papers. Our model outperforms not only the RNN-based abstractive methods but also the convolution-based methods in all ROUGE scores. Notably, even with a single end-to-end training procedure, our model outperforms \texttt{T-ConvS2S}, which requires two training stages (LDA and \texttt{ConvS2S}). These results confirm that, even on formal documents with large vocabulary sizes, our multi-level memory is effective for abstractive datasets.

\subsection{Qualitative Results} \label{sec:qualitative_results}

\begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{./pictures/examples/post_example.pdf} \caption{Examples of abstractive summaries generated by our model and the baselines. In each set, we also show the source text and the gold summary.} \label{fig:example_short} \vspace{-5pt} \end{center} \end{figure}

We perform two types of qualitative evaluation to complement the limitations of automatic language metrics for summarization evaluation.

\textbf{User Preferences}. We perform Amazon Mechanical Turk (AMT) tests to observe general users' preferences among the summaries of different algorithms. We randomly sample 100 test examples. In each test, we show a source text and two summaries, generated by our method and one baseline, in random order. We ask turkers to choose the one that is more relevant to the source text. We obtain answers from three different turkers for each test example. We compare with four abstractive baselines (\texttt{s2s-att}, \texttt{PG}, \texttt{SEASS} and \texttt{DRGD}) and the gold summary (Gold). Table \ref{tab:results_amt} summarizes the results of the AMT tests, which validate that human annotators significantly prefer our results to those of the baselines. As expected, the gold summary is voted for the most.

\textbf{Summary Examples}. Figure \ref{fig:example_short} shows selected examples of abstractive summarization. The baselines often generate the summary by mostly focusing on some keywords in the text, while our model produces the summary considering both keywords and the whole context, thanks to the multi-level memory. We present more examples in the Appendix.

\section{Conclusions} \label{sec:conclusion} We introduced a new dataset, \textit{Reddit TIFU}, for abstractive summarization of informal online text. We also proposed a novel summarization model named \textit{multi-level memory networks} (MMN). Experiments showed that the Reddit TIFU dataset is uniquely abstractive and the MMN model is highly effective. There are several promising future directions. First, ROUGE metrics are limited in correctly capturing paraphrased summaries, for which a new automatic metric for abstractive summarization may be required. Second, we can explore the data in other online forums such as Quora, Stackoverflow and other subreddits.

\section*{Acknowledgments} \label{sec:acknowledgments} We thank Chris Dongjoo Kim, Yunseok Jang and the anonymous reviewers for their helpful comments. This work was supported by Kakao and Kakao Brain corporations and IITP grant funded by the Korea government (MSIT) (No. 2017-0-01772, Development of QA systems for Video Story Understanding to pass the Video Turing Test). Gunhee Kim is the corresponding author.
\section{Introduction} While machine learning (ML) plays an increasing role in the technological development of our society (for instance, by providing forecasting systems, search engines, recommendation systems, and logistic planning tools), the ability of ML systems to learn {\it causal} structures as opposed to mere statistical associations is still at an early stage\footnote{Cf. for instance, \cite{Pearl2018}, page 30: ``Some readers may be surprised that I placed present-day learning machines squarely on rung one of the Ladder of Causation...''}. A particularly hard task in causal machine learning is so-called {\it causal discovery}, the task of learning the causal graph (including causal directions) from passive observations \cite{Spirtes1993,Pearl:00,causality_book}. Given how many problems modern deep learning (DL) has solved by providing computers with massive data \cite{d2l}, one may wonder whether appropriate DL architectures could {\it learn how to learn} causal structure from data after being fed with an abundance of datasets with known ground truth. This approach, however, fails already due to the scarcity of such datasets.\footnote{Even benchmarking the elementary problem of cause-effect inference from bivariate data is often done via the Tübingen dataset \cite{cepairs}, currently containing only 106 pairs \cite{Guyon2019}. A lot of studies are therefore heavily based on simulated data \cite{Ke2020}.} This, in turn, raises the question: {\it why is there so little benchmarking data with commonly agreed causal structure?} The common answer is that in many real-world systems interventions are impossible, costly, or unethical (e.g., moving the moon to show that its position causes the solar eclipse would be costly, to say the least). Although this is a valid explanation, it blurs the question of whether the required interventions are {\it well-defined} in the first place. While {\it defining} interventions on the moon seems unproblematic, it will be significantly harder to agree on a definition of interventions on the Gross National Product (GNP), for instance, as a basis for discussing the impact of GNP on employment: which of all the hypothetical political instruments (whether feasible or not) influencing GNP should be considered {\it interventions on GNP}? Before going into this discussion, we first recall the description of interventions in the framework of graphical models.

\paragraph{Notation and terminology} Random variables will be denoted by capital letters like $X,Y$ and their values by lower-case letters $x,y$. Further, calligraphic letters like ${\cal X},{\cal Y}$ will denote the ranges of the variables $X,Y$. Causal Bayesian Networks (CBNs) and Functional Causal Models (FCMs) \cite{Spirtes1993,Pearl:00} will be our key frameworks for the discussion below. Both concepts describe causal relations between random variables $X_1,\dots,X_n$ via directed acyclic graphs (DAGs). According to the Causal Markov Condition, the causal DAG $G$ is compatible with any joint density that factorizes\footnote{Here we have implicitly assumed that the joint distribution has a density with respect to a product measure \cite{Lauritzen}.} according to \begin{equation}\label{eq:fac} p(x_1,\dots,x_n) = \prod_{j=1}^n p(x_j|pa_j), \end{equation} where $pa_j$ denotes the values of the parents $PA_j$ of $X_j$ in $G$. A CBN is then given by a DAG $G$ together with a compatible joint distribution.
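For instance, for the chain $X_1 \rightarrow X_2 \rightarrow X_3$, where $PA_1=\emptyset$, $PA_2 = \{X_1\}$ and $PA_3=\{X_2\}$, the factorization \eqref{eq:fac} reads
\[
p(x_1,x_2,x_3) = p(x_1)\, p(x_2|x_1)\, p(x_3|x_2).
\]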
FCMs, in contrast, provide a deterministic model of the joint distribution, in which every node is a function of its parents and an unobserved noise variable $N_j$: \begin{equation}\label{eq:fcm} X_j = f_j(PA_j,N_j), \end{equation} where $N_1,\dots,N_n$ are jointly statistically independent. Both frameworks, CBNs and FCMs, admit the derivation of interventional probabilities, e.g., the change of the joint distribution after setting variables to fixed values. While CBNs only provide statements on how probabilities change under an intervention, FCMs, in addition, tell us how the intervention affects each individual statistical unit and support counterfactual causal statements \cite{Pearl:00}.

\paragraph{Interventions} To briefly review different types of interventions, note that the point intervention $do(X_j=x_j)$ \cite{Pearl:00}, also called a `hard intervention', adjusts the variable to $x_j$, while generalized (also called `soft') interventions on $X_j$ replace $p(x_j|pa_j)$ with a different conditional $\tilde{p}(x_j|pa_j)$, or the FCM \eqref{eq:fcm} with a modification $X_j = \tilde{f}_j(PA_j,\tilde{N}_j)$; see \cite{causality_book}, page 89, and references therein. Structure-preserving interventions \cite{CIC2020} preserve all dependences on the parents by either operating on the noise $N_j$ or adjusting $X_j$ to the parent-dependent value $f_j(pa_j,N'_j)$, where $N_j'$ is an independent copy of $N_j$ generated by the experimenter. The simple observation that any of these interventions on $X_j$ affects only $X_j$ and a subset of its descendants defines a consistency condition between a hypothetical $G$ and the hypothesis that an action is an intervention on $X_j$.

While the above framework has provided a powerful language for a wide range of causal problems, it implicitly requires the following two questions to be clarified: \begin{Question}[coordinatization]\label{q:vardef} How do we define variables $X_1,\dots,X_n$ for a given system that are not only meaningful in their own right but also allow for well-defined causal relations between them? \end{Question} The second question reads: \begin{Question}[defining interventions] \label{q:intervention} Let $A$ be an intervention on a system $S$ whose state is described by the variables $\{X_1,\dots,X_n\}$ (`coordinates'). What qualifies $A$ to be an intervention on variable $X_j$ only? \end{Question} Note that the word `only' in Question \ref{q:intervention} means that the action intervenes on none of the other variables under consideration {\it directly}; it affects them only in their role as descendants of $X_j$, if they are descendants. Question 1 captures what is often called `causal representation learning' \cite{Chalupka2015,Schoelkopf2021}, while this paper will focus mainly on Question 2, although we believe that future research may not consider them separate questions, for several reasons. First, some definitions of variables may seem more meaningful than others because they admit more natural definitions of interventions. Second, different parameterizations of the state space result in different {\it consistency conditions} between variables, which may or may not be interpreted as causal interactions between them. Following, e.g.,
\cite{Woodward2003}, page 98, one may define interventions on $X_j$ as actions that affect only $X_j$ and its descendants, but this way one runs into the circularity of referring to the causal structure for defining interventions, although one would like to define {\it causality via interventions}; see e.g. \cite{Baumgartner2009} for a discussion. The idea of this paper is to define causal directions between a set of variables by first defining a set of {\it elementary transformations} acting on the system, which are later thought of as interventions on one of the variables only. While these transformations can be concatenated into more complex ones, the choice of which transformations are {\it elementary} defines the causal direction. While the notion of an intervention is a {\it primary} concept in the graphical-model-based framework of causal inference, this paper will tentatively describe a notion of intervention as a {\it secondary} concept derived from an implicit or explicit notion of the {\it complexity of actions}.

\paragraph{Structure of the paper} Section \ref{sec:hardware} argues that there are domains (mostly technical devices) in which an action can be identified as an intervention on a certain variable via analyzing the `hardware' of the respective system. In contrast, Section \ref{sec:ill} describes a few scenarios with ill-defined causal relations to highlight the limitations of the `hardware analysis' approach, without claiming that our proposal offers a solution to all of them. Section \ref{sec:im} argues that the idea that some operations are more elementary than others is already needed to make sense of one way of reading the Principle of Independent Mechanisms. Based on this insight, Section \ref{sec:element}, which is the main part of the paper, describes how to define causal directions via declaring a set of transformations elementary, and illustrates this idea with examples of `phenomenological' causality where the causal directions are debatable and paradoxical. Section \ref{sec:coupling} shows that phenomenological causality appears more natural when the system under consideration is not viewed in isolation, but in the context of further variables in the world. Then, the joint system can be described by a DAG that is consistent with phenomenological causality (Subsection \ref{subsec:MarkovExt}). Further, the notion of `elementary' also satisfies a certain consistency condition with respect to such an extension (Subsection \ref{subsec:boundary}).

\section{Defining interventions via `hardware analysis' \label{sec:hardware}} We want to motivate Questions 1 and 2 from the previous section by two thought experiments, starting with Question \ref{q:intervention}. Consider the apparatus shown in Figure \ref{fig:apparatus}. \begin{figure} \includegraphics[width=0.5\textwidth]{black_box} \caption{\label{fig:apparatus} Apparatus whose front side contains $n$ measurement devices and $n$ knobs. The measuring devices measure unknown quantities $X_1,\dots,X_n$. How do we `find out' (or `define'?) whether knob $j$ intervenes on $X_j$?} \end{figure} It shows $n$ measuring devices that display real numbers $X_1,X_2,\dots,X_n$. Further, it contains $n$ knobs, whose positions are denoted by $K_1,\dots,K_n$. Assume we know that the box contains an electrical device and that the $X_j$ are $n$ different voltages whose mutual influence is described by some unknown DAG.
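To make the thought experiment concrete, the following hedged toy simulation (all mechanisms and coefficients are illustrative assumptions, not claims about any real device) wires three voltages in a chain $X_1 \to X_2 \to X_3$ and lets knob $j$, when it really is an intervention on $X_j$, override only the $j$-th structural assignment:

\begin{verbatim}
# Hedged toy simulation of the apparatus: three voltages with the
# (to the observer unknown) DAG X1 -> X2 -> X3; a knob that truly
# intervenes on X_j overrides only the j-th structural assignment.
import numpy as np

rng = np.random.default_rng(0)

def device(knobs=None):
    knobs = knobs or {}
    x1 = knobs.get(1, rng.normal())
    x2 = knobs.get(2, 0.8 * x1 + 0.1 * rng.normal())
    x3 = knobs.get(3, -0.5 * x2 + 0.1 * rng.normal())
    return x1, x2, x3

print(device())                 # passive observation
print(device(knobs={2: 5.0}))   # turning knob 2: X1 untouched, X3 responds
\end{verbatim}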
In the best case, our knowledge of how the device is internally wired tells us that turning knob $j$ amounts to intervening on $X_j$. In the worst case (the black-box scenario), our judgement of whether $K_j$ intervenes on $X_j$ is based only on observing the impact on all $X_i$, where we run into the above-mentioned circularity of checking whether $K_j$ affects only $X_j$ and a subset of its descendants. However, depending on `hardware analysis' for defining interventions in a non-circular way is worrisome. First, the power of the causal inference framework relies on the fact that it describes causal relations on a more abstract level without referring to the underlying `hardware'. Second, it is questionable why analyzing the causal relation between the action at hand and a variable $X_j$ should be easier than analyzing the causal relations between the different $X_i$ (e.g., following the wires between the knobs and the voltages $X_j$ as well as the wires between the different $X_i$ both require opening the box). After all, both causal questions refer to the same domain. A large number of relevant causal relations refer to domains with an inherent fuzziness, e.g., macro-economic questions, and it is likely that the same fuzziness applies to the causal relation between an action and the variables it is supposed to intervene or not to intervene on. Variables for technical devices, like the {\it voltage at a specific component}, refer to measurements that are local in space-time and thus require propagating signals to interact, which admits interpreting the edges of a causal DAG as those signals. Coarse-grained variables like GNP are highly non-local, which renders causal edges an abstract concept.

Part of the fuzziness of causality for `high-level variables' in real-life applications can be captured by the following metaphorical toy example. To motivate Question \ref{q:vardef}, consider the apparatus shown in Figure \ref{fig:apparatusmech}. Instead of showing $n$ measuring instruments on its front side, it contains a window through which we see a mechanical device with a few arms and hinges, whose positions and angles turn out to be controlled by the positions of the knobs. The angles and positions jointly satisfy geometric constraints by construction; they cannot be changed independently.\footnote{For causal semantics in physical dynamical systems see e.g. \cite{Bongers2018}.} Accordingly, parameterizing the remaining $4$ degrees of freedom via variables $X_1,\dots,X_4$ is ambiguous, e.g., the horizontal and vertical positions of one of the hinges and $2$ angles (describing the system by more than $4$ variables would be an over-parameterization, which results in constraints, as described in Subsection \ref{subsec:coord}). Now, the question of whether or not turning knob $j$ can be seen as an intervention on one particular mechanical degree of freedom $X_j$ depends on the parameterization. In this case, even hardware analysis does not necessarily clarify which degrees of freedom the knob intervenes on. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{black_box_mechanic} \caption{\label{fig:apparatusmech} Apparatus whose front side contains a window showing a mechanical device, and $n$ knobs controlling its degrees of freedom.
The knobs in the toy model symbolize actions in the real world for which we want to define which variables they intervene on.} \end{figure} The example also shows that there is an obvious `cheap' solution to the ill-definedness of interventions: we declare the positions of the knobs to be the only variables we are able to intervene on and only explore the effects of those, while avoiding questions about causal relations between the {\it internal} variables (accordingly, one would deny talking about interactions {\it between genes} in gene expression experiments and consider changing experimental conditions (the `knobs') as the actual interventions). While this perspective sounds clean and circumvents hard conceptual problems of causality, it dismisses the idea of a causal understanding of the processes {\it inside the box}.

\section{Ill-defined causal relations \label{sec:ill}} We now attempt an incomplete taxonomy of reasons that render causality ill-defined, even after understanding the underlying processes. Using the apparatus in Figure \ref{fig:apparatus} as a metaphor, we sometimes still do not understand causal relations even {\it after opening the box}. This is because causal relations between variables are not always like `wires that link devices'. We emphasize that some of the ill-definedness of causal relations in real life results from the ill-definedness of the variables they refer to, thus leading us back to Question 1. Consider, for instance, the question of to what extent the air temperature outside today influences its value tomorrow: if `temperature today' is supposed to refer only to the temperature of a cubic decimeter around the temperature sensor, the impact is negligible. If, however, it refers to the mean temperature of the {\it whole region}, the impact is significant. While these temperatures largely coincide for passive observations, interventions destroy this equality, and thus intervening on `the temperature' is an ill-defined concept. Let us now discuss a few reasons for the ill-definedness of causal relations that appear even when the variables are well-defined. The purpose of providing the incomplete list of reasons below is to argue that discussions of causality often fail to provide {\it enough context} to arrive at a well-defined causal question. We will later see in what sense this context can be given by specifying those elementary actions that we want to consider {\it interventions} on one variable.

\subsection{Coupling induced by `coordinatization' of the world \label{subsec:coord}} For some person, let us define the variables $W,E,S,O$ describing the hours he/she spends on work, exercise, sleep, and other activities on some day. These variables satisfy the equation \begin{equation}\label{eq:consist} W+E+S +O = 24, \end{equation} a constraint which results in statistical dependences when observing the variables over several days. According to Reichenbach's principle of common cause \cite{Reichenbach1956}, each dependence is due to some {\it causal} relation: either the variables influence each other or they are influenced by common causes. Since one may be tempted to explain the statistical dependence by a common cause, let us introduce a hidden variable, {\it the person's decision} $D$, influencing all $4$ variables, as shown in Figure \ref{fig:hours}.
\begin{figure} \begin{center} \begin{tikzpicture} \node[obs] at (-1.5,0) (W) {$W$} ; \node[obs] at (-0.5,0) (E) {$E$} ; \node[obs] at (0.5,0) (S) {$S$} ; \node[obs] at (1.5,0) (O) {$O$} ; \node[var] at (0,2) (D) {$D$} edge[->] (W) edge[->] (E) edge[->] (O) edge[->] (S); \end{tikzpicture} \end{center} \caption{\label{fig:hours} The hours spent on certain `activities' like work, exercise, sleep, and others, are determined by the person's decision $D$.} \end{figure}

However, this DAG suggests that independent interventions are possible. Obviously, the intervention of setting all $4$ variables to $7$ hours is prohibited by \eqref{eq:consist}. Likewise, it is not possible to intervene on $W$ alone without affecting the other variables. While these remarks seem obvious, they raise serious problems for defining the downstream impact of the above variables, since it is impossible to isolate, for instance, the health impact of increasing $W$ from the health impact of the implied reduction of $E,S$, or $O$. Constraints that prohibit variable combinations a priori (by definition), rather than being the result of a mechanism (which could be replaced by others), are not part of the usual causal framework. The fact that independent changes of the variables are impossible also entails the problem of interventions being an ill-defined concept: what qualifies a change of lifestyle which increased $E$ and decreased $W$ as {\it an intervention on} $E$? Is it a question of intention? If the intention to do more exercise entailed the reduction of work, we tend to talk about an intervention on $E$, but given that people do not always understand even their own motivations behind their actions, it seems problematic that the definition of an intervention then depends on these speculations.

\subsection{Dynamical systems} Constraints on variables like the ones above result naturally if we do not think of the state of the world as a priori given {\it in terms of variables}. Let us, instead, consider a model where the admissible states of the world are given as points in a topological space (in analogy to the {\it phase space} of a classical dynamical system \cite{Goldstein2002}), in which variables arise from introducing coordinates, as visualized in Figure \ref{fig:coord}. \begin{figure} \begin{center} \begin{tikzpicture} \draw[<-] (-2.5,-0.5) -- (-2.5,2); \node[rotate=90] at (-2.8,1) {time evolution}; \draw[dashed] (0,0) ellipse (2cm and 1cm); \node at (0,2.5) {$\overset {X^1} \longrightarrow $}; \node at (2.7,1) {$\uparrow X^2$}; \node at (1.25,1.25) {$p_t$}; \filldraw (1,1.5) circle (3pt); \node at (1,-0.4) {$p_{t+\Delta t}$}; \filldraw (0.3,-0.4) circle (3pt); \draw[clip] (0,1) ellipse (2cm and 1cm); \draw[step=0.5] (-5,-5) grid (5,5); \end{tikzpicture} \end{center} \caption{\label{fig:coord} A model of the world where the current state is a point in a topological space that moves from point $p_t$ to $p_{t+\Delta t}$.} \end{figure} Here, the ellipse defines the admissible states of the world, and we have introduced a coordinate system whose coordinates define the two random variables $X^1$ and $X^2$. The states of our toy world are formally given by the set of all those pairs $(x^1,x^2)$ that belong to the ellipse. We assume that the dynamics of our toy world is given by a topological dynamics on the ellipse, that is, a continuous map that maps each state $p_t\in {\mathbb R}^2$ to its time-evolved state $p_{t+\Delta t}$.
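As a concrete illustration of the constraint (anticipating the discussion of `setting $X^1$' below), the following hedged sketch uses an arbitrarily chosen ellipse of our own:

\begin{verbatim}
# Toy world: admissible states satisfy x1^2/a^2 + x2^2/b^2 <= 1.
# 'Setting X1' keeps x2 whenever the result is still admissible,
# and otherwise is forced to move x2 as well.
import numpy as np

a, b = 2.0, 1.0

def admissible(x1, x2):
    return x1**2 / a**2 + x2**2 / b**2 <= 1.0

def set_x1(state, new_x1):
    x1, x2 = state
    if admissible(new_x1, x2):
        return (new_x1, x2)          # clean operation on X1 only
    x2_max = b * np.sqrt(max(0.0, 1.0 - new_x1**2 / a**2))
    return (new_x1, float(np.clip(x2, -x2_max, x2_max)))

print(set_x1((0.0, 0.5), 1.0))   # x2 can be kept
print(set_x1((0.0, 0.5), 1.95))  # x2 must change, too
\end{verbatim}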
The fact that the state cannot leave the ellipse entails a relation between the variables $X^1$ and $X^2$ for which it is unclear whether we should consider it a {\it causal} relation. We will therefore discuss how to define the notion of `interventions' on $X^1$ and $X^2$. It is natural to interpret the intervention `set $X^1$ to $\tilde{x}^1$' as an operation that maps a state $(x^1,x^2)$ to $(\tilde{x}^1,x^2)$. This is, however, only possible if $(\tilde{x}^1,x^2)$ is still a point in the ellipse. Otherwise, one is forced to change $x^2$ to some other value $\tilde{x}^2$ to still end up in an admissible state of the world. Could we then say that $X^2$ changed {\it because} $X^1$ changed? This interpretation may be valid if someone's intention was to change $X^1$ and he/she was then forced to also change $X^2$ by the existing constraints. However, without this knowledge about the intention of the actor, we just observe that both variables have been changed. Interpreting the action as an intervention on $X^1$ becomes questionable. Our simple model of the world already suggests two different notions of causality: \begin{enumerate} \item {\bf Causality between time-like measurements.} Here we consider the variables $X^1_t,X^2_t$ as causes of $X^1_{t+\Delta t},X^2_{t+\Delta t}$. \item {\bf Causality without clear time-separation.} Here, causal relations between the coordinates appear as a phenomenon emerging from the consistency conditions that define the admissible states of the world. \end{enumerate} Note that a more sophisticated version of constraints for dynamical systems can arise from equilibrium conditions \cite{DGL}, where constraints may be active for some interventions while being inactive for others \cite{Blom2019}, which motivates so-called ``Causal Constraints Models (CCMs)''. The fact that $X^1_t$ and $X^2_t$ refer to the same point in time suggests attributing their observed statistical dependences (which result from most distributions on the ellipse) to their common history and drawing the causal DAG \[ X^1 \leftarrow H \rightarrow X^2, \] where $H$ encodes the relevant features of the state $(X^1_{t-1},X^2_{t-1})$. However, this DAG suggests the existence of independent interventions on $X^1_t$ and $X^2_t$, although the constraints given by the ellipse need to be respected by any action (similar to our remarks on Figure \ref{fig:hours}). This fact would rather be covered by a causal chain graph containing an undirected link $X^1_t - X^2_t$, as discussed in \cite{Lauritzen2002} for mutual interactions in equilibrium states. If we think of a force in the direction of the $x^1$-axis and observe that the point also moves in the $x^2$-direction once it reaches the boundary, we would certainly consider $X^1$ the cause and $X^2$ the effect. However, once the boundary is reached, it is no longer visible that it is a force in the $x^1$-direction that drives the curved motion. Accordingly, the causal direction becomes opaque. The second case is more interesting for this paper since it refers to a notion of causality that is less well understood. At the same time, it challenges the interpretation of causal direction as a concept that is necessarily coupled to time order.
Instead, this kind of {\it phenomenological} causality can also emerge from relations that are not given by the standard physical view of causality, which deals with {\it signals} that propagate through {\it space-time} from a sender to a receiver.\footnote{This view prohibits instantaneous influence between remote objects since no signal can propagate faster than light \cite{Einstein1920}.} (Such a physically local concept of causality admits defining interventions on a variable $X_j$ as actions for which a signal from the actor reaches the location of $X_j$.) There is no analogous approach for phenomenological causality, since it may be too abstract to be directly materialized in space-time. For these reasons, Question \ref{q:intervention} should be raised particularly for causality in domains outside physics when referring to sufficiently abstract variables. However, even the variables $X^1$ and $X^2$, referring to different coordinates of the same system, can have an abstract causal relation.\footnote{Note that \cite{Bongers2022} also discusses interventions that change some coordinates in dynamical systems, and asks whether position and momentum of a physical particle allow for separate interventions.}

\subsection{Undetectable confounding: distinction between a cause and its witness} Distinguishing between the scenarios $X\rightarrow Y$ and $X \leftarrow \tilde{X} \rightarrow Y$, where $\tilde{X}$ is latent, is one of the most relevant and challenging problems. Methods have been proposed that address this task from passively observing $P(X,Y)$ subject to strong assumptions, e.g., \cite{HoyerLatent08,UAI_CAN}, or in scenarios where $X,Y$ are embedded in a larger network of variables, e.g., the Fast Causal Inference algorithm \cite{Spirtes1993}, instrumental-variable-based techniques \cite{Bowden}, and related approaches \cite{Atalanti}. Certainly, the task gets arbitrarily hard when the effect of $\tilde{X}$ on $X$ gets less and less noisy, see Figure \ref{fig:noisyM}, left. \begin{figure} \begin{center} \begin{tikzpicture} \node[obs] at (-1.5,0) (X) {$X$} ; \node[obs] at (1,0) (Y) {$Y$} ; \node[var] at (-1,1) (tX) {$\tilde{X}$} edge[->] (X) edge[->] (Y); \node[] at (-0.7,0.05) {$\approx \tilde{X}$}; \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture} \node[obs] at (1,0) (Y) {$Y$} ; \node[obs] at (-1.5,0) (X) {$X$} edge[->] (Y); \end{tikzpicture} \end{center} \caption{\label{fig:noisyM} If $X$ is an arbitrarily perfect copy of the confounder $\tilde{X}$ (e.g., if $X$ is a reasonably good measurement of the true quantity $\tilde{X}$), the distinction between the left and the right scenario gets arbitrarily hard.} \end{figure} In the limiting case where $X$ is an exact copy of $\tilde{X}$, no algorithm can ever tell the difference between the two scenarios in Figure \ref{fig:noisyM} from passive observations. Note that this scenario is quite common when $\tilde{X}$ is some physical quantity and $X$ the value of $\tilde{X}$ shown by a precise measurement device. In this case, we would usually not even verbally distinguish between $X$ and $\tilde{X}$, although it certainly matters whether an intervention acts on $X$ or $\tilde{X}$. The distinction becomes relevant only when we act on the system, and even then, only if actions are available that decouple $X$ from $\tilde{X}$. Accordingly, it is the set of available actions that defines causality, which is the crucial idea in Section \ref{sec:element}.
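A hedged simulation of this limiting case (with linear-Gaussian mechanisms chosen arbitrarily by us): as the copy noise vanishes, the observational distribution of the confounded model converges to that of the direct model.

\begin{verbatim}
# As the copy noise eps -> 0, the confounded model X <- X~ -> Y becomes
# observationally indistinguishable from the direct model X -> Y.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def confounded(eps):
    xt = rng.normal(size=n)                  # latent confounder X~
    x = xt + eps * rng.normal(size=n)        # X: noisy copy of X~
    y = xt + rng.normal(size=n)              # Y driven by X~
    return np.corrcoef(x, y)[0, 1]

def direct():
    x = rng.normal(size=n)
    y = x + rng.normal(size=n)               # Y driven by X
    return np.corrcoef(x, y)[0, 1]

for eps in [1.0, 0.1, 0.0]:
    print(eps, confounded(eps))              # -> 1/sqrt(2) as eps -> 0
print('direct', direct())                    # also 1/sqrt(2)
\end{verbatim}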
A similar ill-definedness occurs in scene understanding in computer vision. Imagine a picture of a dog snapping at a sausage. Most people would agree that the sausage is the {\it cause} of the presence of the dog (unless the scene is taken from a location where the dog commonly stays anyway, e.g., its doghouse). However, the presence of the sausage {\it on the image} is certainly not the cause of the presence of the dog on the image -- changing the respective pixels in the image will not affect the pixels of the dog. Similar to the case of the measurement device, we can only meaningfully talk about causal relations if we do not distinguish between the cause `presence of the sausage' and its `witness', e.g., its occurrence on the image. Ignoring actions that decouple the presence of an object in the real scene from its appearance on the image (retouching), we may also consider the presence of the sausage on the image the cause of the presence of the dog on the image and call this causal relation `phenomenological'.

\subsection{Coarse-grained variables \label{subsec:coarse}} Whenever coarse-grained (`macroscopic') variables are defined, for instance, by averaging over `microscopic' variables, interventions on the former are no longer well-defined since different changes on the micro-level amount to the same change of the macro-variable. Thus, causal relations between macro-variables may be ill-defined. Accordingly, \cite{Rubensteinetal17} state a consistency condition for coarse-grainings of causal structures according to which interventions on micro-variables that result in the same intervention on the macro-variables also entail the same downstream impact on other macro-variables. \cite{Beckers2019} argue that consistency conditions of different strengths are needed for different levels of abstraction. We believe that discussions of causal relations in real life often refer to macro-variables for which the impact of interventions depends strongly on how the intervention is implemented, and it is thus hard to identify {\it any} valid consistency condition, however weak it may be. Let us consider the following toy model with the variables $X_1,X_2,Y_1,Y_2$ and the causal DAG shown in \eqref{eq:coarse}, where $X_1$ influences $Y_1$ and $Y_2$ influences $X_2$: \begin{eqnarray}\label{eq:coarse} \bar{X} \left\{ \quad \begin{array}{ccc} X_1 & \longrightarrow & Y_1 \\ X_2 &\longleftarrow & Y_2 \end{array} \quad \right\} \bar{Y}. \end{eqnarray} We assume the FCMs \begin{eqnarray*} Y_1 &=& X_1\\ X_2 &=& Y_2, \end{eqnarray*} and define the macro-variables $\bar{X} := (X_1+X_2)/2$ and $\bar{Y}:= (Y_1 +Y_2)/2$, whose causal relations we want to discuss. Obviously, neither an intervention on $\bar{X}$ nor one on $\bar{Y}$ is well-defined. Increasing $\bar{X}$ by $\Delta$ can be done by adding any vector of the form $(\Delta + c, \Delta - c)$ with $c\in {\mathbb R}$ to $(X_1,X_2)$, and likewise for interventions on $\bar{Y}$. Someone who changes $\bar{X}$ and $\bar{Y}$ by changing $X_1$ and $Y_1$, respectively, will claim that $\bar{X}$ influences $\bar{Y}$, while someone changing the macro-variables by acting on $X_2$ and $Y_2$ only considers $\bar{Y}$ the cause of $\bar{X}$. We conclude that not only the quantitative effect but even the causal direction is not a property of the system alone, but a result of which actions are available.

\subsection{Diversity of non-equivalent interventions} In the framework of FCMs and graphical causal models, the impact of interventions (e.g.,
point interventions $do(X_j=x_j)$) does not depend on {\it how} the intervention has been performed. Here, the word `how' is meant in the sense of {\it which mechanism has been used to change $X_j$}. The mechanism implementing the intervention is not part of the model description, also because the framework implies that different ways of setting $X_j$ to $x_j$ have the same impact. In real-world problems, however, we talk about the impact of one variable on another without specifying the model we refer to, which renders the impact of interventions ill-defined. Let us elaborate on this in the context of stochastic processes. Let $(X_t)_t$ be a time series of the electricity consumption of a household, where each value is the integral over one hour. A simple action to reduce $X_t$ at some $t$ could be to convince the resident not to start his/her dishwasher at this point in time. Whether or not this action causes a change of $X_s$ at some later time $s>t$ depends on whether the resident decides to clean the dishes by hand or to just delay the start of the machine. Likewise, the impact of changing the traffic on some road depends on {\it how} the change is performed. In a scenario where the road is made less attractive by a strong speed limit, drivers may take a different route and thus increase the traffic on other roads. Reducing the traffic by offering additional public transport would not have the same effect. Again, it is the nature of the action that defines the causal impact, instead of some causal truth that holds without this additional specification.

\subsection{Abstract causal mechanisms} While the previous subsection described difficulties with the concept of causality that already arise in clearly defined physical systems, we now discuss an example where the causal mechanisms lie in a more abstract domain. Examples of causal relations for which we believe the simple mechanistic view of causality to be problematic are widespread in the literature. \cite{Schoelkopf2021}, for instance, describe a scenario of online shopping where a laptop is recommended to a customer who orders a laptop rucksack. \cite{Schoelkopf2021} argue that this would be odd because the customer probably has a laptop already. Further, they add the causal interpretation that buying the laptop is the {\it cause} of buying the laptop rucksack. We do agree with the causal order but do not believe the {\it time order} of the purchases to be the right argument. For someone who buys a laptop and a laptop rucksack in different shops, it can be reasonable to buy the rucksack first in order to safely carry the laptop home (given that he/she has already decided on the size of the laptop). We believe, instead, that the recommendation is odd because the {\it decision} to purchase the laptop is the cause of the {\it decision} to buy the rucksack. Whether or not this necessarily implies that the decision has been made earlier is a difficult question of brain research. If the decisions were made at two well-localized, slightly different positions in the brain, one could, again, argue that causal influence can only be propagated via a physical signal (of finite speed). We do not want to elaborate further on this question. These remarks were only intended to show that causal problems in everyday business processes refer to rather abstract notions of causality -- for instance, to causality between {\it mental} states. Particularly in these domains, causality appears to be a strongly context-dependent concept.
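Returning to the dishwasher example of the previous subsection, a hedged numerical sketch (the load profile is invented purely for illustration): both implementations reduce $X_t$ identically, yet differ in their effect on $X_{t+1}$.

\begin{verbatim}
# Two implementations of the 'same' intervention (reduce X_t) with
# different downstream effects; numbers are purely illustrative.
import numpy as np

base = np.array([0.3, 0.3, 2.0, 0.3, 0.3])   # kWh/hour, dishwasher at t=2

def delay_dishwasher(x, t=2):
    y = x.copy()
    y[t + 1] += y[t] - 0.3                    # load moved to hour t+1
    y[t] = 0.3
    return y

def wash_by_hand(x, t=2):
    y = x.copy()
    y[t] = 0.3                                # load disappears entirely
    return y

print(delay_dishwasher(base))   # X_{t+1} changes
print(wash_by_hand(base))       # X_s unchanged for all s > t
\end{verbatim}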
Economic variables often show several of the above aspects of ill-definedness, stemming from aggregation or psychological factors or both. For example, the price of a particular good is not understood as the price at which one firm sells that good (unless we are in a monopoly market) but rather as an aggregation of prices. Likewise, Consumer Confidence Indices (CCI) are an aggregation of the beliefs of individual agents. Economic indices tend to be more abstract than their names might suggest.

\section{\label{sec:im} Complexity aspect of Independence of Mechanisms (IM)} The idea that the conditionals $p(x_j|pa_j)$ in \eqref{eq:fac} correspond to `independent' mechanisms of the world has a long tradition in the causality community; see e.g. \cite{causality_book}, Section 2.2, for different aspects of `independence' and their history. \cite{Algorithmic,LemeireJ2012} conclude that the different conditionals contain no algorithmic information about each other, while \cite{anticausal} conclude that they change independently across environments and describe implications for transfer learning scenarios. The `sparse mechanism shift hypothesis' \cite{Schoelkopf2021} assumes that changing the setup of an experiment often results in changes of $p(x_j|pa_j)$ for a small number of nodes $X_j$. Here we want to discuss this independent change from a slightly different perspective, namely that of {\it elementary} versus {\it complex} actions. To this end, we restrict attention to a bivariate causal relation $X\to Y$. According to the interpretation of IM in \cite{anticausal}, the causal structure entails that $P(X)$ and $P(Y|X)$ change {\it independently} across environments. More explicitly, knowing that $P(X)$ changed to $P'(X)$ between training and test data does not provide any information on how $P(Y|X)$ changed. In the absence of any further evidence, it will thus often be reasonable to assume that $P(Y|X)$ remained the same (which is the so-called covariate shift scenario \cite{Masashi2012}). Likewise, it can also be the case that $P(Y|X)$ changed to $P'(Y|X)$ while $P(X)$ remained the same. However, the scenario where only $P(Y)$ changed and $P(X|Y)$ remained the same, or vice versa, is rather unlikely. The reason is that this would require contrived {\it tuning} of the changes of the mechanisms $P(X)$ and $P(Y|X)$. Let us illustrate this idea with a simple example.

\begin{Example}[ball track]\label{ex:ball_track} Figure \ref{fig:ball_track} is an abstraction of a real experiment (which is one of the cause-effect pairs in \cite{cepairs}) with a ball track. A child puts the ball on the track at some position $X$, where it accelerates and reaches a point where its velocity $Y$ is measured by two light barriers. \begin{figure} \includegraphics[width=0.5\textwidth]{ball_track_short} \caption{\label{fig:ball_track} Cause-effect pair with a ball track, taken from \cite{cepairs}: the cause $X$ is the start position of the ball along the inclined plane and the effect $Y$ is the speed at which the ball passes the light barriers on the horizontal track. The example illustrates that $P(X)$ and $P(Y|X)$ correspond to independent mechanisms.} \end{figure} One can easily think of a scenario where $P(X)$ changes from one dataset to another without affecting $P(Y|X)$: an older child will tend to choose positions $X$ that are higher.
On the other hand, changing $P(Y|X)$ without affecting $P(X)$ can be done, for instance, by mounting the light barriers at a different position and thus measuring the velocity at a later point where the ball has already lost some speed. It requires, however, contrived actions to change $P(Y)$ without changing $P(X|Y)$. This would involve both changes of the child's behaviour {\it and} changes at the speed measuring unit. \end{Example}

Example \ref{ex:ball_track} shows a complexity aspect of IM that we want to build on throughout the paper: changing $P(X)$ or $P(Y|X)$ without affecting the other is easy and requires only {\it one} action. In contrast, changing $P(Y)$ or $P(X|Y)$ without changing the other one of these two objects is difficult for two reasons: first, it requires changes of both the distribution $P(X)$ of start positions and the conditional $P(Y|X)$ via shifts of the speed measurement. Second, these two actions need to be tuned against each other. After all, those actions on $P(X)$ and $P(Y|X)$ that are easy to implement (e.g., replace the child, shift the mounting of the light barrier) will probably not match together in a way that affects {\it only} $P(Y)$ but not $P(X|Y)$. In general, if we assume that not all operations on $P({\rm Cause})$ and $P({\rm Effect}|{\rm Cause})$ are elementary, it may thus even take a {\it large number} of operations to change only $P({\rm Effect})$ without affecting $P({\rm Cause}|{\rm Effect})$. Accordingly, for causal DAGs with $n$ nodes, we assume that all elementary operations change at most one conditional $P(X_j|PA_j)$, but we do not assume that every change of a single conditional is elementary. Further, we do not even assume that {\it any arbitrary} change of $P(X_j|PA_j)$ can be achieved by concatenations of elementary actions. To relate this view to known perspectives on causal counterfactuals, note that Lewis \cite{Lewis1979} defined the impact of an event $E$ via a hypothetical world that is most similar to the true one except for the fact that $E$ did not happen, as opposed to a world in which $E$ happened but several subsequent actions were taken so that the world gets back to the path it would have followed without $E$. In the spirit of our paper, we could think of $E$ as generated by one elementary action and read Lewis' view as the statement that after one elementary action the world is closer, in Lewis' sense, to the original one than after several interventions that undo the downstream impact of the first one.

\section{\label{sec:element} Defining causal directions via elementary actions} Here we describe the main idea of the paper, which uses the notion of an `elementary action' as a first principle, and then discuss quite diverse toy examples. Some of them are directly motivated by practically relevant real-life applications, but we also discuss strongly hypothetical scenarios, constructed only with the purpose of challenging our intuition about causality.

\subsection{The bivariate case} To avoid the above circularity of defining interventions in a way that relies on the concept of causality and vice versa, we suggest the following approach: \begin{Idea}[phenomenological cause-effect pair]\label{idea:Acausality} Let $X,Y$ be two variables describing properties of some system $S$ and ${\cal A}$ be a set of elementary actions on $S$.
We say that $X$ causes $Y$ whenever ${\cal A}$ contains only the following two types of actions:\\ \noindent ${\cal A}_1:$ actions that change $X$, but preserve the relation between $X$ and $Y$\\ \noindent ${\cal A}_2:$ actions that preserve $X$, but change the relation between $X$ and $Y$. \end{Idea} Since Idea \ref{idea:Acausality} is quite informal, it leaves some room for different interpretations. We will work with two different ways of spelling it out: \begin{Definition}[statistical phenomenological causality] \label{def:phcestat} Let $X,Y$ be two variables describing properties of some system $S$ and ${\cal A}$ be a set of elementary actions on $S$. We say that $X$ causes $Y$ whenever ${\cal A}$ contains only the following two types of actions:\\ ${\cal A}_1:$ actions that change $P(X)$, but preserve $P(Y|X)$ and \\ ${\cal A}_2:$ actions that preserve $P(X)$, but change $P(Y|X)$. \end{Definition} \begin{Definition}[Unit level phenomenological causality]\label{def:pheceunit} We say that $X$ causes $Y$ whenever ${\cal A}$ contains only the following two types of actions:\\ ${\cal A}_1:$ a set of actions (containing the identity) such that every pair $(x',y')$ obtained from the observed pair $(x,y)$ by an action in ${\cal A}_1$ satisfies the same law $y'= m(x')$ for some (non-constant) function $m$.\\ ${\cal A}_2:$ actions that keep $x$ fixed. \end{Definition} Note that the same $m$ in Definition \ref{def:pheceunit} holds for all actions in ${\cal A}_1$, but different functions $m$ hold for different statistical units. If we think of $X$ and $Y$ as related by the FCM $Y= f(X,N)$, we should think of $m$ as the map $f(.,n)$ with fixed noise value $n$. The statement that actions in ${\cal A}_1$ do not change the mapping from $X$ to $Y$ thus refers to the counterfactual knowledge encoded by the FCM, which we assume to be given from domain knowledge about the system\footnote{We think this is justified because the scope of this paper is to discuss how to define causality, not how to infer it.}. Further note that changing the map $m$ can be done by either changing $f$ or $n$. Although the condition for ${\cal A}_1$ is asymmetric with respect to swapping $X$ and $Y$ since $m$ maps from $X$ to $Y$, this asymmetry does not necessarily imply that the mapping $m$ is {\it causal}. Assume, for instance, that $X$ and $Y$ are related by the FCM $Y= X+N$. Then, an observed pair $(x,y)$ for which $N=3$ will obey the rule $y= x+ 3$, and $y' = x' +3$ holds for all pairs generated by actions in ${\cal A}_1$. However, all these pairs will also obey the rule $x' = y' -3$, and thus there exists also a map $\tilde{m}$ from $Y$ to $X$. In our examples below the crucial asymmetry between cause and effect will not be induced by the existence of $m$, but by the existence of actions ${\cal A}_2$, which only act on the effect. In other words, interventions on the cause do not reveal the asymmetry because they change cause and effect, while actions on the effect only change the effect. As an aside, note that for the scenario where $X$ and $Y$ are only connected by a confounder we would postulate actions that affect only $X$ and those that affect only $Y$. The idea of identifying causal structure by observing which conditionals in \eqref{eq:fac} change independently across datasets can already be found in the literature, e.g., \cite{Zhang2017CausalDF} and references therein. In the same spirit, Definitions \ref{def:phcestat} and \ref{def:pheceunit} raise the question whether they define the causal direction uniquely.
This is easier to discuss for Definition \ref{def:phcestat}. Generically, a change of $P(X)$ results in simultaneous changes of both $P(Y)$ and $P(X|Y)$, which ensures that the available actions of class ${\cal A}_1$ fall into neither the category corresponding to ${\cal A}_1$ nor the one corresponding to ${\cal A}_2$ for the backwards direction $Y\to X$. The following simple result states a genericity assumption under which this can be proven: \begin{Proposition}[identifiability via changes] Let $X$ and $Y$ be finite with $|{\cal X}| = |{\cal Y}|$ and the square matrix $p(x,y)_{x,y}$ have full rank with $p(x,y)$ strictly positive. Then changing $p(x)$ changes $p(y)$ and $p(x|y)$. \end{Proposition} \noindent \begin{Proof} Define $\tilde{p}(x,y):=\tilde{p}(x) p(y|x)$. Assume $ \tilde{p}(x|y) = p(x|y). $ Hence, \[ \tilde{p}(x) p(y|x) \tilde{p}^{-1} (y) = p(x) p(y|x) p^{-1} (y), \] which is equivalent to $ \tilde{p}(x) p (y) = p(x) \tilde{p} (y). $ Summing over $y$ yields $\tilde{p}(x) =p(x)$, hence $p(x)$ did not change. We conclude that changing $p(x)$ changes $p(x|y)$. That changing $p(x)$ also changes $p(y)$ follows from the full rank assumption. \end{Proof} \paragraph{Abstract toy example} We now describe an example for unit level phenomenological causality according to Definition \ref{def:pheceunit} where the causal direction is a priori undefined, but may be defined after specifying the set of actions, if one is willing to follow our approach. On purpose, we have chosen an example whose causal interpretation seems a bit artificial. \begin{Example}[urn model] \label{ex:urn} Assume we are given an urn containing blue and red balls, as well as a reservoir containing also blue and red balls. The game allows four basic operations: ($A_1^+$) replacing a red ball in the urn with a blue one, ($A_1^-$) replacing a blue ball with a red one, ($A_2^+$) adding a red ball to the urn, and ($A_2^-$) removing a red ball from the urn (and adding it to the reservoir), see Figure \ref{fig:urns}. \begin{figure} \includegraphics[width=0.24\textwidth]{urn_model3} \includegraphics[width=0.24\textwidth]{urn_model4}\\ \includegraphics[width=0.24\textwidth]{urn_model2} \includegraphics[width=0.24\textwidth]{urn_model1} \caption{\label{fig:urns} Urn model in Example \ref{ex:urn} with $4$ different operations.} \end{figure} Define the random variables $K_b$ and $K_r$, describing the number of blue and red balls in the urn, respectively. Obviously, the $4$ different operations correspond to the following changes of $K_b,K_r$: \begin{eqnarray*} (A^+_1) \quad K_b &\to& K_b+1; \quad K_r \to K_r-1\\ (A^-_1) \quad K_b &\to& K_b-1; \quad K_r \to K_r+1\\ (A^+_2) \quad K_r &\to & K_r+1 \\ (A^-_2) \quad K_r &\to& K_r-1. \end{eqnarray*} Note that action $A_2^+$ is always possible, but the other three operations are only possible if the quantity to be reduced is greater than zero. According to Definition \ref{def:pheceunit}, we have the causal relation $K_b\rightarrow K_r$ because the actions $A_1^\pm$ belong to the category ${\cal A}_1$, since they preserve the relation $K_b = c - K_r$ for some state-dependent constant $c$. Further, $A_2^\pm$ belong to the category ${\cal A}_2$. We also observe that changing $K_b$ without changing $K_r$ requires the concatenation of at least two operations: for instance, add a red ball, and then `convert' it into a blue one. \end{Example} The example may be considered as representing a chemical process where molecules of type $K_r$ can be converted into molecules of type $K_b$ and vice versa.
Then, the `red' molecules are the resource for a reaction that converts `red' into `blue'. Therefore, one may be surprised that not the resource $K_r$, but the product $K_b$ of the reaction, is the cause. We now rephrase Example \ref{ex:urn} into an example for the statistical version in Definition \ref{def:phcestat}. To this end, we consider a random experiment for which the system is initially in the state $K_r=k_r$ and $K_b=k_b$ with $k_r,k_b\gg 0$. Then, in each round we flip a coin for each of the $4$ actions to decide whether they are applied or not. After $\ell < k_r,k_b$ many rounds, let $N_1$ denote the number of times $A_1^+$ minus the number of times $A^-_1$ has been applied. Likewise, $N_2$ counts the number of $A^+_2$ minus the number of $A^-_2$ actions. We then obtain \begin{eqnarray} K_b &=& k_b + N_1 \label{eq:bica} \\ K_r &=& k_r - N_1 + N_2. \label{eq:rica} \end{eqnarray} One easily checks that \eqref{eq:bica} and \eqref{eq:rica} are equivalent to \begin{eqnarray} K_b &=& k_b + N_1 \label{eq:bse} \\ K_r &=& - K_b + k_r + k_b + N_2. \label{eq:rse} \end{eqnarray} Since the actions are controlled by independent coin flips, we have $N_1\independent N_2$. Following our interpretation that $K_b$ causes $K_r$ and that $A^\pm_1$ and $A^\pm_2$ are interventions on $K_b$ and $K_r$, respectively, we thus consider \eqref{eq:bse} and \eqref{eq:rse} as the corresponding FCM. By controlling actions $A^\pm_1$ and $A_2^\pm$ via coins with different bias, we may change the distributions $P(K_b)$ and $P(K_r|K_b)$ independently, and thus have an example of statistical phenomenological causality in the sense of Definition \ref{def:phcestat}. The following observation may seem paradoxical at first glance: the set ${\cal A}_1=\{A_1^+,A^-_1\}$ is a priori symmetric with respect to swapping the roles of blue and red. The justification for calling its elements interventions on $K_b$ is derived from properties of ${\cal A}_2=\{A_2^+,A_2^-\}$. In other words, whether an action is considered an intervention on a certain variable depends on the impact of other actions in the set of elementary actions. This context-dependence of the definition of interventions may be worrisome, but in scenarios where `hardware analysis' does not reveal (or define) whether an action is an intervention on a particular variable, we do not see a chance to circumvent this dependence on other actions. Given the abstractness of the underlying notion of causal direction, we are glad to observe that the well-known causal discovery method LiNGAM \cite{Kano2003} would also infer $K_b \rightarrow K_r$ because \eqref{eq:bse} and \eqref{eq:rse} define a linear model with non-Gaussian additive noise. The crucial assumption inducing the statistical asymmetry is that actions $A^\pm_1$ are implemented independently of $A^\pm_2$, resulting in independent noise variables $N_1,N_2$. There is also another aspect of this example that shows the abstractness of the causal interpretation of the above scenario. The fact that actions in ${\cal A}_1$ preserve the total number of balls has been interpreted as the structural equation \eqref{eq:rse} generating $K_r$ from $K_b$. Using the function $m$ from Definition \ref{def:pheceunit}, this structural equation reads $K_r= m(K_b)$ with $m(K_b) = - K_b + c$, where $c = k_r + k_b + n_2$ is the total number of balls for the given noise value $n_2$. Since operations in ${\cal A}_2$ change the total number of balls, they change $m$ to $m'$ by changing $c$. Hence, actions in ${\cal A}_2$ change the `mechanism' relating $K_b$ and $K_r$.
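To make the connection to LiNGAM concrete, the following small simulation (our illustration, not part of the formal argument; all variable names and the simple third-moment dependence proxy are our own choices, whereas LiNGAM itself relies on proper independence tests or ICA) generates data from \eqref{eq:bse} and \eqref{eq:rse} with action counts controlled by independent random generators, and checks in which direction a linear regression leaves residuals that look independent of the regressor:
\begin{verbatim}
# Sketch: simulate the urn FCM (eqs. bse/rse) and probe the
# regression-residual asymmetry exploited by LiNGAM.
import numpy as np

rng = np.random.default_rng(0)
n, k_b0, k_r0 = 100_000, 100, 100

# Net action counts, controlled independently; N1 is skewed
# (non-Gaussian), N2 is symmetric.
N1 = rng.poisson(3.0, n) - rng.poisson(0.3, n)
N2 = rng.poisson(2.0, n) - rng.poisson(2.0, n)

K_b = k_b0 + N1                    # eq. (bse)
K_r = -K_b + k_r0 + k_b0 + N2      # eq. (rse)

def dep(x, y):
    """Third-moment proxy for residual-regressor dependence:
    correlation of the regressor with the squared residual."""
    res = y - np.polyval(np.polyfit(x, y, 1), x)
    return abs(np.corrcoef(x, res**2)[0, 1])

print("K_b -> K_r:", dep(K_b, K_r))   # close to zero
print("K_r -> K_b:", dep(K_r, K_b))   # clearly nonzero
\end{verbatim}
In the causal direction the residual is just the independent noise $N_2$, so the statistic vanishes up to sampling error, while in the anticausal direction the residual mixes $N_1$ and $N_2$ and remains dependent on the regressor.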
In a mechanistic interpretation of causality, one would expect a change of a mechanism to be a change of some kind of machine whose input-output behaviour is altered. The `mechanism' from $K_b$ to $K_r$ is abstract, and so is its change. \paragraph{Context-dependent causal directions} Here we describe a system for which causal directions swap when the system moves from one regime to another one. Although the following example is hypothetical, we encourage the reader to think of similar examples in realistic business processes. \begin{Example}[food consumption of rabbits]\label{ex:rabbits} Given a hutch with $n$ rabbits, we define two variables: \begin{tabular}{ll} $X$:& total amount of food consumed by all rabbits at\\ &one day\\ $Y$:& food per rabbit consumed at one day. \end{tabular} By definition, we have $Y = X/n$. We allow the following three types of actions: \\ ${\cal A}_r$: change the number $n$ of rabbits\\ ${\cal A}_f$: change the amount of food provided\\ ${\cal A}_a$: give an appetizer to the rabbits.\\ We then consider two complementary scenarios, see Figure \ref{fig:rabbits}: \begin{figure} \includegraphics[width=0.24\textwidth]{rabbits} \includegraphics[width=0.24\textwidth]{rabbits2} \caption{\label{fig:rabbits} In scenario 1, the food consumption per rabbit is causing the total food consumption because changing the number of rabbits only changes the latter if there is enough food for each rabbit. In scenario 2 with food shortage, changing the number of rabbits does not affect the total food consumption, it only changes the food consumption per rabbit. Therefore, the total food consumption is the cause. Images by Tom Paolini (carrots), michealcopley03 (single carrot), Aswathy (rabbits) with unsplash license. } \end{figure} \vspace{0.1cm} \noindent {\bf Scenario 1: there is enough food for each rabbit} \\ Offering more food influences neither $X$ nor $Y$. Adding or removing rabbits changes $X$, but not $Y$, while the appetizer changes both $X$ and $Y$ and preserves the equality $X = n \cdot Y$. We thus have actions influencing both (while preserving their relation) and those influencing only $X$, but no action influencing only $Y$. We thus conclude $Y \to X$. \vspace{0.1cm} \noindent {\bf Scenario 2: shortage of food}\\ Now, the appetizer has no effect. Changing the number of rabbits changes the food per rabbit, but not the total consumption. Changing the amount of food changes consumption per rabbit and total consumption. Hence we have actions that influence $Y$, but not $X$, and actions that influence both, but preserve the relation $Y=X/n$. Further, we have no action influencing $X$ without affecting $Y$. We thus conclude $X \to Y$. \end{Example} \begin{comment} The examples show that the definition of causal directions defines a notion of {\it complexity} of actions: operations on the effect without changing the cause are {\it more elementary} than those that change the cause without changing the effect. The idea is that the latter requires an intervention on the cause and on the effect, while the first one is only an intervention on the effect. It is hard to say in general where this notion of complexity comes from. By default, complexity theory is the domain of computer science and restricted to the complexity of computation. Computation processes are called complex if they require a large number of basic logical operations, or a large logical depth.
One should note, however, that in modern physics, notions of complexity were developed that also refer to processes other than computation processes. We slightly elaborate on this in the appendix. For Example \ref{ex:rabbits}, people will certainly agree that the action `give a drug to the rabbits and increase the number of rabbits' is less elementary than `give a drug to the rabbits' and `increase the number of rabbits'. We are not claiming that this agreement on what is more elementary will hold for other examples as well. It is possible, however, that ill-definedness of what is elementary will result in ill-definedness of causal directions. In other words, we are not denying that our approach to defining causal directions can be rather vague in other real-life scenarios. We are afraid, however, that similar vagueness applies to alternative definitions of causality. \end{comment} The above dependence of the causal direction on the regime suggests that there exists a large grey zone without clearly defined causal direction. We want to discuss this for the following business-relevant example, where we will not offer a clear answer. \begin{Example}[revenue and sold units]\label{ex:ru} Let $R$ and $Q$ be random variables denoting the revenue and the number of sold units for a company, respectively. Their instantiations are $r_j,q_j$, denoting revenue and number of units for product $j$. If $p_j$ denotes the price of product $j$, we thus have \begin{equation} r_j = p_j \cdot q_j. \end{equation} If we consider $p_j$ and $q_j$ as instantiations of random variables $P$ and $Q$, we thus write \begin{equation}\label{eq:ru} R = P \cdot Q. \end{equation} One may want to read \eqref{eq:ru} as a structural equation, which suggests the causal structure $Q\to R$. However, this requires $P$ to be independent of $Q$ as a minimal requirement (unless we are talking about a confounded relation). This independence is often violated when more items are sold for {\it cheap} products. On the other hand, we cannot expect $P$ to be independent of $R$ either, hence neither \eqref{eq:ru} nor $Q= R/P$ should be considered a structural equation. To argue that $Q$ causes $R$ one may state that a marketing campaign can increase the revenue by increasing the number of sold units. However, why is a marketing campaign an intervention on $Q$ rather than on $R$ if it increases both by the same factor? While our intuition may consider $Q\to R$ as the `true' causal direction, the following scenario challenges this. Assume there are two farmers, farmer $P$ producing potatoes, and farmer $E$ producing eggs. They have implemented a countertrade, exchanging $K_P$ potatoes for $K_E$ eggs according to the negotiated exchange factor $F$. We then have \begin{equation}\label{eq:farmers} K_E = K_P \cdot F. \end{equation} For farmer $P$, $K_P$ is the number of units sold and $K_E$ is the revenue, whereas farmer $E$ considers $K_E$ the number of units sold and $K_P$ the revenue. If the number of units is always the cause of the revenue, then the causal direction depends on the perspective. Preference for one causal direction versus the other could come from insights about which quantity reacts more to changes of $F$: if, for instance, the number of potatoes exchanged is more robust to changes of $F$ (i.e., in economic terms, the demand for potatoes has small price elasticity), we could consider changes of $F$ as interventions on $K_E$ and thus conclude $K_P \to K_E$.
\end{Example} Example \ref{ex:ru} shows that causal directions can also be in grey zones because {\it actions} can be in a grey zone of being an intervention on one versus the other quantity. Acting on the factor $F$ will in general change both variables $R$ and $Q$. However, in a regime where one of them is relatively robust to changes of $F$, we can consider that one as the cause and consider changing $F$ as an intervention on the effect, because it affects the mechanism relating cause and effect. It is likely that many causal relations in real life leave equally much room for interpretation. Let us now revisit the example from the domain of product recommendation algorithms described in \cite{Schoelkopf2021}: \begin{Example}[Laptop and its rucksack]\label{ex:laptop} Let $X,Y$ be binaries that describe the decisions of a person to buy or not to buy a laptop and a laptop rucksack, respectively. Let $P(X,Y)$ be the prior joint distribution without any marketing campaign. Let actions in ${\cal A}_l$ be marketing campaigns that try to sell more laptops (without explicitly mentioning laptop rucksacks). Then it is likely that these actions influence $P(X)$, but not $P(Y|X)$. Let actions in ${\cal A}_r$ be marketing campaigns that target selling laptop rucksacks. Let us assume that these change $P(Y|X)$, but not $P(X)$. \end{Example} Here we have neglected that seeing laptop rucksacks may remind some customers that they had been planning to buy a laptop for a while already, which could induce additional demand for laptops. Further, a marketing campaign changing both $P(X)$ and $P(Y|X)$ could certainly exist, e.g., one that explicitly advertises laptop and rucksack as an economically priced pair. We are thus aware of the fact that any causal statement in this vague domain of customer psychology is a good approximation at best. When previously mentioning the scenario of Example \ref{ex:laptop} in the introduction, we emphasized that the {\it time order} of a customer's purchases does not determine the {\it causal order} of the underlying decisions for the purchases. This already suggests that analyzing the causal order of the underlying materialized processes does not reveal the causal structure of their psychological origin. The following example elaborates on this discrepancy between the phenomenological causal direction and the causal direction of underlying micro-processes. \begin{Example}[vending machine]\label{ex:vending} In contrast to the usual purchasing process, a vending machine outputs the article clearly {\em after} and {\em because} the money has been inserted. Accordingly, inserting the money is the cause of obtaining the product. We will call this causal relation `microscopic'. For some cigarette vending machine, let $X$ be the number of packages sold at a day and $Y$ be the total amount of money inserted at the same day. Our microscopic causal relation suggests to consider $Y$ the cause of $X$, but the previous remarks on the relation between revenue and number of sold units suggest the opposite. Let us therefore ask for `natural actions' on the system. Assume we stop some of the smokers on their way to the machine and convince them not to buy cigarettes. This clearly impacts both $X$ and $Y$. Another action would be to slightly change the price of the packages by manipulating the vending machine. If the change is small enough, it will only affect $Y$ but not $X$.
We thus have a natural action influencing both and one influencing only $Y$, which suggests that $X$ influences $Y$, in agreement with what we said about revenue and sold units, but in contrast to the {\em microscopic} causal structure. \end{Example} \subsection{The multivariate case} We first generalize Definition \ref{def:phcestat} to multiple variables. Although these generalizations are straightforward, we will see that our multivariate extensions of the urn example reveal the abstractness of phenomenological causality even more. \begin{Definition}[multivariate causality, statistical]\label{def:phenDAGstat} Let ${\cal A}$ be elementary actions on a system described by the variables $X_1,\dots,X_n$. Then we say that $G$ is a valid causal graph if ${\cal A}$ consists of classes ${\cal A}_1,\dots,{\cal A}_n$ such that actions in ${\cal A}_j$ change no conditional other than $P(X_j|PA_j)$. \end{Definition} Likewise, we generalize Definition \ref{def:pheceunit}: \begin{Definition}[multivariate causality, unit level]\label{def:phenDAGunit} Adopting the setting from Definition \ref{def:phenDAGstat}, we say that $G$ is a valid causal graph if ${\cal A}$ decomposes into classes ${\cal A}_j$ such that for every statistical instantiation $(x_1,\dots,x_n)$ there are maps $m_1,\dots,m_n$ with \begin{equation}\label{eq:fcmunit} x_i = m_i(pa_i), \end{equation} such that actions in ${\cal A}_j$ preserve all equations \eqref{eq:fcmunit} valid for $i\neq j$. \end{Definition} We now generalize Example \ref{ex:urn} to $n$ different types of balls, where the causal structure suggested by our definition gets even less obvious: \begin{Example}\label{ex:nballs} Given $n$ types of balls with labels $j=1,\dots,n$. For $j=2,\dots,n$, the actions $A^+_j$ and $A^-_j$ replace one ball of type $j-1$ with one of type $j$, or vice versa, respectively. Further, $A_1^\pm$ are defined as adding or removing balls of type $1$. If $k^0_j$ denotes the initial number of balls of type $j$, and $N_j$ denotes the number of actions $A^+_{j}$ minus the number of $A^-_j$, the number $K_j$ of balls of type $j$ is given by \begin{eqnarray} \label{eq:urns} K_j &=& k^0_j + N_{j} - N_{j+1} \quad \hbox{ for } j \leq n-1\\ K_n &=& k_n^0 + N_{n}. \label{eq:urns0} \end{eqnarray} Let us first recalibrate $K_j$ to $\tilde{K}_j:= K_j - k^0_j$. We then introduce the vector ${\bf k}^0:=(k^0_n,\dots,k^0_1)$ and the vector-valued variables $\tilde{{\bf K}}:=(\tilde{K}_n,\dots,\tilde{K}_1)^T$, ${\bf N}:=(N_n,\dots,N_1)^T$, with components ordered from $n$ down to $1$. Using the Toeplitz matrix $S$ with diagonal $1$ and first subdiagonal $-1$ (and zero elsewhere), we can rewrite \eqref{eq:urns} and \eqref{eq:urns0} as \begin{equation}\label{eq:urnica} \tilde{{\bf K}} = S {\bf N}. \end{equation} This, in turn, can be rewritten as \begin{equation}\label{eq:urnAdef} \tilde{{\bf K}} = A \tilde{{\bf K}} +{\bf N}, \end{equation} with the lower triangular matrix \begin{equation}\label{eq:urnAder} A:=I - S^{-1} = \left(\begin{array}{ccccc} 0 & &\cdots & & 0 \\ -1 & 0 & & & \\ \vdots & -1 & \ddots & & \vdots \\ & & & & \\ -1& & \cdots & -1 & 0\end{array}\right). \end{equation} Equivalently, we can then rephrase \eqref{eq:urnica} by the structural equations \begin{eqnarray} \tilde{K}_j &=& -\sum_{i > j} \tilde{K}_i + N_j. \label{eq:Kseq} \end{eqnarray} The causal structure for the $K_j$, which is the same as for the $\tilde{K}_j$, is shown in Figure \ref{fig:doublechain}.
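The algebra leading from \eqref{eq:urnica} to \eqref{eq:Kseq} is easily verified numerically. The following sketch (our illustration; the choice $n=5$ and all names are ours) builds $S$ in the node ordering $(K_n,\dots,K_1)$, computes $A = I - S^{-1}$, and confirms the structural equations:
\begin{verbatim}
# Sketch: verify A = I - S^{-1} for the n-balls example, n = 5,
# with vector components ordered (K_n, ..., K_1) as in the text.
import numpy as np

n = 5
S = np.eye(n) - np.diag(np.ones(n - 1), k=-1)  # 1 on diag, -1 below
A = np.eye(n) - np.linalg.inv(S)

print(A)  # lower triangular, -1 everywhere below the diagonal

# sanity check of tilde-K = A tilde-K + N for random action counts
N = np.random.default_rng(1).integers(-3, 4, size=n)
K = S @ N                                      # tilde-K = S N
assert np.allclose(K, A @ K + N)
\end{verbatim}
The printed matrix reproduces \eqref{eq:urnAder}, i.e., each $\tilde{K}_j$ is a linear function of all $\tilde{K}_i$ with $i>j$ plus its own noise term $N_j$.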
\begin{figure} \begin{center} \resizebox{8cm}{!}{% \begin{tikzpicture} \node[obs] at (-5,0) (K5) {$K_5$} ; \node[obs] at (-3,0) (K4) {$K_4$} edge[<-] (K5) ; \node[obs] at (-1,0) (K3) {$K_3$} edge[<-] (K4) edge[<-, bend left] (K5); \node[obs] at (1,0) (K2) {$K_2$} edge[<-] (K3) edge[<-, bend left] (K4) edge[<-, bend left] (K5) ; \node[obs] at (3,0) (K1) {$K_1$} edge[<-] (K2) edge[<-, bend left] (K3) edge[<-, bend left] (K4) edge[<-, bend left] (K5) ; \end{tikzpicture} } \end{center} \caption{\label{fig:doublechain} Causal relation between the variables $K_j$, which count the number of balls in the urn with label $j$ (according to our definition of phenomenological causal structure).} \end{figure} Note that there is exactly one structure matrix $A$ that admits writing each $K_j$ as a linear expression of some $K_i$ and $N_j$ such that $A$ is lower triangular for some ordering of nodes. This is because $S$ uniquely determines $A$. Assuming linear structural equations, we thus obtain Figure \ref{fig:doublechain} as the unique DAG corresponding to the defined set of elementary actions. \end{Example} Note that the algebraic transformations between \eqref{eq:urnica} and \eqref{eq:urnAder} resemble the algebra in Independent Component Analysis (ICA)-based multivariate causal discovery \cite{Moneta} (following the idea of LiNGAM \cite{Kano2003} mentioned for the bivariate case above). This analogy is not a coincidence: ICA decomposes the vector ${\bf K}$ into independent noise variables ${\bf N}$. Accordingly, since \eqref{eq:urnAdef} is a linear acyclic causal model with {\it independent non-Gaussian} noise variables $N_j$, multivariate LiNGAM would also identify the same causal structure and FCMs that we derived as the phenomenological causal model. In other words, if we ensure that the choice of the actions is controlled by random generators, independently across different ${\cal A}_j$, we obtain a joint distribution $P(K_1,\dots,K_n)$ for which the causal discovery algorithm LiNGAM infers the DAG in Figure \ref{fig:doublechain}. It is instructive to discuss Example \ref{ex:nballs} from the perspective of the complexity of some actions that are {\it not} elementary. Increasing $K_j$ without affecting the others requires $j$ operations: for instance, one can first increase $K_1$ and propagate this increase up to $K_j$. From the causal perspective, these additional actions are necessary to compensate the impact of $K_j$ on its children. We add a further remark on causal faithfulness \cite{Spirtes1993}. The fact that an intervention only propagates to the child, but not to the grandchild, shows that the structural equations are non-generic; the direct and indirect influences of $K_j$ on $K_{j-2}$ compensate each other. Accordingly, if we control each action by independent coin flips as in the remarks after Example \ref{ex:urn}, the induced joint distribution will not be faithful to the causal DAG. The idea of `nature choosing each mechanism $p(x_j|pa_j)$ in \eqref{eq:fac} independently' seems to have its limitation here. The reason is that the actions $A_j^\pm$ are the building blocks of the system, rather than the Markov kernels $p(x_j|pa_j)$, which are constructed from the former. There is also another `paradox' of our causal interpretation that becomes apparent for $n>2$, while it seems less paradoxical in Example \ref{ex:urn}: imagine what would happen if we were to redefine $A_1^\pm$ as adding or removing balls of type $n$ instead of type $1$. We would then reverse all the arrows in Figure \ref{fig:doublechain}.
In other words, the direction of the arrows in a long chain of variables depends on what happens {\it at the end points}. This idea is in stark contradiction to the spirit of modularity \cite{HauWoo99}, which assumes that each $p(x_j|pa_j)$ is an independent mechanism of nature. The reader may see this as an indicator against interpreting the equations \eqref{eq:Kseq} as FCMs, but we think that causal directions on the phenomenological level may well depend on this kind of {\it context}. In Example \ref{ex:nballs}, the locality of the impact of each action $A^\pm_j$ (affecting only $2$ adjacent variables) entailed long-range causal influence between the variables. Now we describe the opposite case, where actions affecting a large number of variables are induced by only {\it local} causal connections (in other words: in the first example $S$ has entries only on the diagonal and the first off-diagonal, while in the example following now this is true for $A$). \begin{Example}[$n$ different balls in bundles]\label{ex:bundles} We now modify Example \ref{ex:nballs} such that the $n$ types of balls come in the following bundles: there are $n$ different types of packages, and type $P_j$ contains the balls $1,\dots,j$ (one per package). Then there are $2n$ different actions $A^+_1,A^-_1,\dots,A^+_n,A^-_n$ of the following form: $A_j^+$ puts one package $P_j$ from the stack into the urn, while $A_j^-$ wraps balls with labels $1,\dots,j$ into one package and puts it back on the stack. We then introduce $n$ random variables, $K_1,\dots,K_n$, where $K_j$ is the number of balls with label $j$ in the urn. Obviously, the transformation $A^+_j$ simultaneously increases all the variables $K_1,\dots,K_j$ by $1$, while $A^-_j$ decreases all of them by $1$, as depicted in Figure \ref{fig:ballchain} for $n=4$. \begin{figure} \centerline{ \includegraphics[width=0.48\textwidth]{ball_packages} } \caption{\label{fig:ballchain} Urn containing packages of $n$ different types, where type $P_j$ contains balls with labels $1,\dots,j$. The variable $K_j$ counts the number of balls with symbol $j$ in the urn. The elementary operations of the system are adding one package from the stack to the urn or putting one back. Changing $K_j$ thus entails the same change for $K_{j-1},\dots,K_1$.} \end{figure} Using the same derivation and notation as in Example \ref{ex:nballs}, we define $N_j$ as the net count of actions $A_j^\pm$ and obtain \begin{equation} \tilde{K}_j = \sum_{i \geq j} N_i, \end{equation} which yields $\tilde{{\bf K}}= S {\bf N}$ with the lower triangular all-ones matrix \[ S:= \left(\begin{array}{ccccc} 1 & 0 &\cdots & & 0 \\ 1 & 1 & 0 & & \\ \vdots & & \ddots & & \vdots \\ & & & \ddots & 0 \\ 1& 1 & \cdots & 1 & 1\end{array}\right). \] For the structure matrix, we thus obtain the lower triangular matrix \[ A = I - S^{-1} = \left(\begin{array}{ccccc} 0 & &\cdots & & 0 \\ 1 & 0 & & & \\ 0 & 1 & \ddots & & \vdots \\ \vdots & & \ddots & & \\ 0& \cdots &0 & 1 & 0\end{array}\right), \] which amounts to the structural equations \begin{eqnarray} \tilde{K}_n &=& N_n, \label{eq:seballchain1}\\ \tilde{K}_j &=& \tilde{K}_{j+1} + N_j \quad \forall j \leq n-1. \label{eq:seballchain2} \end{eqnarray} These equations correspond to the causal DAG in Figure \ref{fig:chain}.
\begin{figure} \begin{center} \begin{tikzpicture} \node[obs] at (-3,0) (K4) {$K_4$} ; \node[obs] at (-1,0) (K3) {$K_3$} edge[<-] (K4); \node[obs] at (1,0) (K2) {$K_2$} edge[<-] (K3); \node[obs] at (3,0) (K1) {$K_1$} edge[<-] (K2); \end{tikzpicture} \end{center} \caption{\label{fig:chain} Causal relation between the variables $K_j$, which count the number of balls in the urn with label $j$ (according to our definition of phenomenological causal structure).} \end{figure} An intervention that changes $K_j$ necessarily changes all $K_s$ with $s< j$ by the same amount, as a downstream impact, according to \eqref{eq:seballchain2}. While the transformations $A^\pm_j$ change all $K_s$ with $s\leq j$ by definition, it is a priori not obvious which of these changes should be considered direct and which ones indirect. However, the causal interpretation \eqref{eq:seballchain2} clearly entails such a distinction. \end{Example} \paragraph{What's the purpose of the causal interpretation?} The balls in the urn show an extreme case where the causal interpretation is far away from any `mechanistic view' of causality in which the functions $m_j$ from Definition \ref{def:phenDAGunit} refer to tangible mechanisms (recall, for instance, that the actions ${\cal A}_j$ in Example \ref{ex:nballs} were symmetric with respect to swapping $j$ and $j-1$, yet we have identified them as interventions on $K_j$, not on $K_{j-1}$). To argue that our causal interpretation is not just a weird artifact of our concept, we need to show its benefit. The following section will argue that this extension of causality allows us to consistently talk about the overall causal DAG when systems with `phenomenological causality' are embedded in systems with more tangible causal structure. \paragraph{Related ideas in the literature} While the idea that causal conditionals $p(x_j|pa_j)$ and structural equations define `modules' in a causal DAG which can be manipulated independently has a long tradition, we have somewhat turned it around by defining these manipulations as the primary concept and the causal DAG (in case the set of elementary actions corresponds to a DAG, see below) as a derived concept. The closest work to this idea seems to be \cite{Blom2021}, where DAGs also appear as derived concepts rather than being primary. The idea is to start with a set of equations, each of which can contain endogenous as well as exogenous variables. Subject to certain conditions (phrased in terms of matchings in bipartite graphs), one can uniquely solve the equations to express the endogenous variables in terms of the exogenous ones using Simon's ordering algorithm \cite{Simon1953}. Remarkably, general interventions in \cite{Blom2021} are thought to act on {\it equations} rather than {\it variables}. Note that in the usual view of causality, an action that changes the structural equation $X_j = f_j(PA_j,N_j)$ to some different equation $X_j = \tilde{f}_j(PA_j,N_j)$ is considered an intervention {\it on} $X_j$ only because the equation is read as a ``structural equation'' (or ``assignment'') for $X_j$, rather than for any of the parents or the noise $N_j$. In \cite{Blom2021,Simon1953}, an equation is not {\it a priori} considered an assignment for a certain variable, but only later, after analyzing the direction in which the system of equations is solved. This way, causal direction also emerges from the context of the entire set of equations, while ours emerges from the context of other actions.
However, even this difference is less substantial than it appears at first glance. After all, a set of equations can be turned into an equivalent set of equations, but then a change of one equation in the original set may translate into changes of many equations in the equivalent set. Therefore we assume that in \cite{Blom2021} the preference for any of these equivalent sets of equations comes from an implicit notion of {\it which changes of the system are more elementary than others}. The question where this notion of complexity of actions comes from goes beyond the scope of this paper. We hope that examples like Example \ref{ex:ball_track} showed that in real-life scenarios there are reasons to consider some actions as obviously more elementary than others. Further, we refer to the appendix, where we argue that the complexity of actions can be a subject of scientific research, and mention some approaches from modern physics. \section{Phenomenological causality couples to tangible causality \label{sec:coupling}} One can argue that a crucial property of causality is to describe the way a system with some variables couples to other variables in the world. In \cite{Tsamardinos,janzing2018merging,Gresele2022}, causality is used to predict statistical relations of variables that have not been observed together. This section shows in which sense a causal DAG defined via phenomenological causality can be consistently embedded into the context of further variables. The mathematical content of the observations below is fairly obvious and mostly known. Yet, we consider them crucial as justification of phenomenological causality. \subsection{Markov property of phenomenological causality \label{subsec:MarkovExt}} Let us first consider the mechanisms described by the functions $m_j$ in Definition \ref{def:phenDAGunit}. Since they represent structural equations $f_j(.,n_j)$ with fixed noise value $n_j$, we will denote them with a superscript and write $m_j^{n_j}$. Whenever the noise values $n_j$ are statistically independent across different statistical units, they induce a joint distribution that is Markovian with respect to $G$, see \cite{Pearl:00}, Theorem 1.4.1. We conclude that we obtain a Markovian joint distribution $P(X_1,\dots,X_n)$ whenever we control actions in ${\cal A}_j$ by independent random variables. The same holds true when we control the actions in Definition \ref{def:phenDAGstat} by independent random variables and introduce formal random variables $\Theta_j$ controlling the causal conditionals $p^{\theta_j} (x_j|pa_j)$. Then the joint distribution \begin{eqnarray*} && P(X_1,\dots,X_n) \\ &=& \int \prod_{j=1}^n p^{\theta_j} (x_j|pa_j) p(\theta_1)\cdots p(\theta_n) d\theta_1 \cdots d\theta_n \\ &=& \prod_{j=1}^n \int p^{\theta_j} (x_j|pa_j) p(\theta_j) d\theta_j, \end{eqnarray*} still factorizes with respect to $G$. Note that the assumption of independent $\Theta_j$ is in agreement with how \cite{Zhang2017CausalDF} interpret the postulate of independent mechanisms (see e.g. \cite{causality_book}, Section 2.1), namely as {\it statistically independent changes} of the causal conditionals $p(x_j|pa_j)$ across environments. While \cite{Zhang2017CausalDF} use this property for identification of the causal DAG, we use it to show that the distribution obtained by averaging over different environments is then still Markovian. In other words, $G$ is true both with respect to each environment and with respect to the aggregated distribution.
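The role of the independence of the $\Theta_j$ can be illustrated numerically. The following sketch (our illustration; the chain $X\to Y\to Z$ with binary variables and all function names are our own choices) compares mixing environments whose mechanisms change independently with mixing environments whose mechanisms change in a coupled way, and checks the Markov condition $X \independent Z \,|\, Y$ in the aggregated distribution:
\begin{verbatim}
# Sketch: independent mechanism changes keep the aggregate
# distribution Markovian w.r.t. X -> Y -> Z; coupled changes don't.
import numpy as np
rng = np.random.default_rng(2)

def rand_cond(k):            # random binary conditional p(.|parent)
    p = rng.uniform(0.1, 0.9, size=k)
    return np.stack([1 - p, p], axis=-1)

def joint(px, pyx, pzy):     # p(x,y,z) for the chain
    return px[:, None, None] * pyx[:, :, None] * pzy[None, :, :]

def ci_violation(P):         # max_y |p(x,z|y) - p(x|y) p(z|y)|
    v = 0.0
    for y in (0, 1):
        pxz = P[:, y, :] / P[:, y, :].sum()
        v = max(v, np.abs(pxz - np.outer(pxz.sum(1), pxz.sum(0))).max())
    return v

envs = [(rng.dirichlet([1, 1]), rand_cond(2), rand_cond(2))
        for _ in range(4)]
# independent Theta_j: average each mechanism separately, then compose
indep = joint(*(np.mean([e[i] for e in envs], axis=0) for i in range(3)))
# coupled changes: all mechanisms switch together with the environment
coupled = np.mean([joint(*e) for e in envs], axis=0)

print("independent changes:", ci_violation(indep))    # ~ 0
print("coupled changes:    ", ci_violation(coupled))  # generically > 0
\end{verbatim}
Composing the averaged mechanisms corresponds exactly to the factorized integral above, while averaging the joint distributions amounts to a common $\Theta$ for all nodes, which generically destroys the Markov condition.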
Moreover, for linear structural equations such as \eqref{eq:urnAdef}, the causal discovery method LiNGAM would infer a causal structure that aligns with phenomenological causality, as mentioned earlier. A more interesting scenario, however, is obtained when elementary actions are controlled by further random variables $Y_1,\dots,Y_m$ which are connected by a non-trivial causal structure. We argue that we then obtain a joint distribution on $X_1,\dots,X_n,Y_1,\dots,Y_m$ whose DAG is consistent with phenomenological causality. Assume, for instance, that some actions are not only controlled by independent noise variables $N_j$ or $\Theta_j$, respectively, but by one of the variables $Y_i$, which are related by a DAG themselves. We then model the influence of $Y_i$ on actions in ${\cal A}_j$ by introducing a second superscript to the mechanisms $m_j$ and $p(x_j|pa_j)$, respectively, and obtain $m^{y_i,n_j}_j$ or $p^{y_i,\theta_j}(x_j|pa_j)$. Obviously, this way $Y_i$ can be read as an additional parent of $X_j$. Further, let some $Y_l$ be influenced by some $X_j$ by modifying the structural equations for $Y_l$ such that they receive $X_j$ as additional input. We now define a directed graph with nodes $X_1,\dots,X_n,Y_1,\dots,Y_m$ by drawing an edge from $Y_i$ to $X_j$ whenever $Y_i$ controls actions in the set ${\cal A}_j$, and an edge from $X_j$ to $Y_l$ whenever the latter is influenced by the former. Whenever this graph is a DAG $\tilde{G}$, $P(X_1,\dots,X_n,Y_1,\dots,Y_m)$ will clearly be Markovian relative to $\tilde{G}$. This is because the generating process, by construction, follows structural equations according to $\tilde{G}$, and the joint distribution admits a corresponding Markov factorization. In a scenario where the causal relations among the $Y_i$ and between the $Y_i$ and $X_j$ are justified by tangible interventions, the abstract notion of causality between different $X_j$ thus gets justified because it is consistent with the causal Markov condition also after embedding our abstract system into the tangible world. Getting back to our metaphor of a box with $n$ knobs and $n$ displays, our phenomenological definition of the causal relations inside the box is consistent with the DAG that describes causal relations between the box and the more tangible world, see Figure \ref{fig:boxtangible} for a visualization. \begin{figure} \includegraphics[width=0.5\textwidth]{black_box_extended} \caption{\label{fig:boxtangible} Visualization of a scenario where phenomenological causality couples to variables with tangible interventions. Then our construction of abstract causal relations between the variables $X_j$ is justified by consistency in the sense of a Markov condition for the causal DAG of the joint system. } \end{figure} While the causal structures for our Examples \ref{ex:urn}, \ref{ex:nballs}, and \ref{ex:bundles} seemed particularly artificial, they arguably get less artificial once such a system is embedded into an environment with further variables. \subsection{Boundary consistency of the notion `elementary' \label{subsec:boundary}} We emphasized that determining whether a variable $X_j$ is `directly' affected by an action (and thus deciding whether the action is an intervention on $X_j$) is a causal question that may be equally hard to decide as the causal relations between the variables $X_1,\dots,X_n$.
Formally, the question can be phrased within a meta-DAG containing an additional variable $A$ describing the actor, in which the relation between $A$ and $X_j$ appears as a usual link if and only if $A$ is an intervention on $X_j$. Due to this arbitrariness of the boundary between system and actor, we expect a framework for causality to be consistent with respect to shifting this boundary (by extending or reducing the set of variables).\footnote{In the early days of quantum physics, Heisenberg described a similar consistency of the theory with respect to shifting the boundary between {\it measurement apparatus} and {\it quantum system to be measured} (the `Heisenberg cut'). Ref. \cite{PhysUniversal} is even more similar in spirit to our boundary consistency because it describes the arbitrariness of the boundary between {\it controlling device} and {\it system to be controlled} for interventions on microscopic physical systems and constructs a framework for physical controllers in which this boundary can be shifted in a consistent way.} Here we want to back up our notion of `elementary' by the argument that it is consistent with respect to a certain class of marginalizations to subsets of variables. To explain this idea, we first introduce a rather strong notion of {\it causal sufficiency}:\footnote{Note that this notion has been introduced as `causal sufficiency' in \cite{causality_book}, but the sentence in brackets was omitted there, as noted in the errata of the book.} \begin{Definition}[graphical causal sufficiency] Let ${\bf X}:=(X_1,\dots,X_n)$ be nodes of a DAG $G$. A subset ${\bf X}_S$ is called graphically causally sufficient if there is no hidden common cause $C \notin {\bf X}_S$ that is causing at least two variables in ${\bf X}_S$ (where the causal paths pass only through nodes that are not in ${\bf X}_S$). \end{Definition} In general, the model class of causal DAGs is not closed under marginalization, but requires the model class of Maximal Ancestral Graphs (MAGs) \cite{Richardson2002}. Here we restrict the attention to the simple case of graphical causal sufficiency, where the causal model remains in the class of DAGs after marginalization: \begin{Definition}[marginal DAG] Let ${\bf X}$ be the nodes of a DAG $G$ and ${\bf X}_S$ a graphically causally sufficient set. Then the marginalization $G_S$ of $G$ to the nodes ${\bf X}_S$ is the DAG with nodes ${\bf X}_S$ and an edge $X_i\to X_j$ whenever there exists a directed path from $X_i$ to $X_j$ in $G$ containing no node from ${\bf X}_S$ (except $X_i,X_j$). \end{Definition} To justify the definition, we need to show that the distribution of ${\bf X}_S$ is Markov relative to $G_S$ and that $G_S$ correctly describes interventional probabilities. It is easy to check the Markov condition: let ${\bf X}_A,{\bf X}_B,{\bf X}_C$ be subsets of ${\bf X}_S$ such that ${\bf X}_A$ is $d$-separated from ${\bf X}_B$ by ${\bf X}_C$ in $G$, hence every path in $G$ connecting a node in ${\bf X}_A$ with one in ${\bf X}_B$ contains either (i) a chain or a fork with middle node in ${\bf X}_C$ or (ii) an inverted fork whose middle node is not in ${\bf X}_C$ and whose descendants are not in ${\bf X}_C$ either. It is easy to see that conditions (i) and (ii) are preserved when directed paths are collapsed to single arrows, and thus the same conditions hold in $G_S$.
To see that interventions on arbitrary nodes in ${\bf X}_S$ can equivalently be computed from $G_S$, we recall that interventional probabilities can be computed from backdoor adjustments, see \cite{Pearl:00}, Equation (3.19). We can easily verify that if $Z\subset {\bf X}_S$ satisfies the backdoor criterion in $G_S$ relative to an ordered pair $(X_i,X_j)$ of variables in ${\bf X}_S$, it also satisfies it in $G$, because every backdoor path in $G$ collapses to a backdoor path in $G_S$, so the property of blocking backdoor paths carries over from $G_S$ to $G$. The following result shows that our notion of `elementary' is preserved under marginalization to causally sufficient subsets: \begin{Theorem}[boundary consistency] Let $G$ be a DAG with nodes ${\bf X}:=\{X_1,\dots,X_n\}$ and $P,\tilde{P}$ be joint distributions of ${\bf X}$ that are Markov relative to $G$ and differ only by one term in the factorization \eqref{eq:fac}. For some subset $S$ of nodes satisfying graphical causal sufficiency, let $G_S$ with ${\bf X}_S\subset {\bf X}$ be the marginalization of $G$, and $P_S,\tilde{P}_S$ be the marginalizations of $P,\tilde{P}$, respectively. Then $P_S$ and $\tilde{P}_S$ also differ by at most one conditional. \end{Theorem} \begin{Proof} Let $P$ and $\tilde{P}$ differ by the conditional corresponding to $X_j$. Introduce a binary variable $I$ pointing at $X_j$ which controls switching between $P(X_j|PA_j)$ and $\tilde{P}(X_j|PA_j)$. Formally, we thus define a distribution $\hat{P}$ on $({\bf X},I)$ such that $\hat{P}(x_j|pa_j, I=0) = P(x_j|pa_j)$ and $\hat{P}(x_j|pa_j, I=1) = \tilde{P}(x_j|pa_j)$. Let $G^I$ be the augmented DAG containing the nodes of $G$ and $I$, with an arrow from $I$ to $X_j$. For the case where $X_j\in {\bf X}_S$, it is sufficient to show that the marginalization of $G^I$ to ${\bf X}_S\cup \{I\}$ does not connect $I$ with any node $X_i$ other than $X_j$, which follows already from the fact that any directed path from $I$ to $X_i$ passes through $X_j$. Now assume that $X_j$ is not in ${\bf X}_S$. By causal sufficiency of ${\bf X}_S$, there is a unique node $X_{\tilde{j}}\in {\bf X}_S$ among the descendants of $X_j$ that blocks all paths to other nodes in ${\bf X}_S$ (otherwise ${\bf X}_S$ would not be causally sufficient). Hence, the DAG $G_S^I$ contains only an edge from $I$ to $X_{\tilde{j}}$ and to no other node in ${\bf X}_S$. \end{Proof} \begin{figure} \centerline{ \resizebox{3cm}{!}{% \begin{tikzpicture} \node[obs] at (0,-3) (X3) {$X_3$} ; \node[obs] at (0,-1.5) (X2) {$X_2$} edge[->] (X3) ; \node[] at (2,-1.5) (I) {} edge[->] (X2); \node[obs] at (0,0) (X1) {$X_1$} edge[->] (X2) ; \end{tikzpicture} } \hspace{1cm} \resizebox{3cm}{!}{% \begin{tikzpicture} \node[obs] at (0,-3) (X3) {$X_3$} ; \node[] at (2,-1.5) (I) {} edge[->] (X3); \node[obs] at (0,0) (X1) {$X_1$} edge[->] (X3) ; \end{tikzpicture} } } \caption{\label{fig:marginalization} Left: action on node $X_2$, which results in an action on $X_3$ after dropping node $X_2$ (right).} \end{figure} It seems that every framework that is supposed to be general enough to describe significant aspects of the world should not only be able to describe the system under consideration, but also its interaction with agents. Understanding why and in what sense certain actions are more elementary than others is still a question to be answered outside the framework. However, demanding consistency across different boundaries between system and intervening agents seems a more modest and feasible version of `understanding' how to define `elementary'.
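The graphical part of the marginalization is straightforward to implement. The following sketch (our illustration, with hypothetical node names) computes the marginal DAG of a graphically causally sufficient subset by collapsing directed paths whose interior avoids the subset, and reproduces the situation of Figure \ref{fig:marginalization}:
\begin{verbatim}
# Sketch: marginal DAG, with an edge X_i -> X_j iff a directed path
# runs from X_i to X_j through nodes outside the subset only.
from itertools import product

def marginal_dag(nodes, edges, subset):
    adj = {v: [w for (u, w) in edges if u == v] for v in nodes}

    def reaches(i, j):       # DFS; interior nodes must avoid subset
        stack, seen = [i], set()
        while stack:
            for w in adj[stack.pop()]:
                if w == j:
                    return True
                if w not in subset and w not in seen:
                    seen.add(w); stack.append(w)
        return False

    return [(i, j) for i, j in product(subset, repeat=2)
            if i != j and reaches(i, j)]

# chain X1 -> X2 -> X3, marginalized over X2 (cf. the figure):
print(marginal_dag(['X1', 'X2', 'X3'],
                   [('X1', 'X2'), ('X2', 'X3')],
                   ['X1', 'X3']))        # [('X1', 'X3')]
\end{verbatim}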
\section{Conclusions} We have described several scenarios -- some of them admittedly artificial, but some of them closer to real-life problems -- where causal relations between observed quantities are not defined {\it a priori}, but only become well-defined after specifying the `elementary actions' that are considered interventions on the respective variables. We have argued that this specification admits the definition of an abstract notion of causality in domains where the mechanistic view of tangible causal interactions fails. We believe that this approach renders the context-dependence of causality more transparent, since there may be different {\it elementary actions} in different contexts. It is possible that at least some part of the fuzziness of some relevant causal questions (e.g. `does income influence life expectancy?') comes from the missing specification of actions. From this point of view, one could argue to accept only causal questions that directly refer to a treatment effect for which {\it the treatment itself} is obviously a feasible action (e.g. taking a drug or not) and to reject questions about the causal effect of variables like `income'. However, our approach is different in the sense that -- after having defined the elementary actions -- it does talk about causal relations between variables `inside the box of abstract variables', that is, variables for which interventions are not defined a priori. This is because we believe that analyzing causal relations `inside the box' is crucial for understanding complex systems.
\section{Introduction} Magnetic field generation in astrophysical scenarios, such as AGN and GRBs, is not fully understood \citep{colgate01}. While such phenomena are of fundamental interest, they are also closely related to open questions such as non-thermal radiation emission and cosmic ray acceleration \citep{bhattacharjee00}. Recently, collisionless plasma effects have been proposed as candidate mechanisms for magnetic field generation \citep{gruzinovwaxman99, medvedev99}, mediating the formation of relativistic shocks via the Weibel instability \citep{weibel59,silva03}. The Kelvin-Helmholtz instability (KHI) \citep{dangelo65,gruzinov08,macfadyen09} should also be considered, since it is capable of generating large-scale magnetic fields in the presence of strong velocity shears, which naturally originate in energetic matter outbursts in AGN and GRBs, and which are also present whenever conditions for the formation of relativistic shocks exist. These large-scale fields may also be further amplified by the magnetic-dynamo effect \citep{gruzinov08,macfadyen09}. Recent kinetic simulations have focused on magnetic field generation via electromagnetic plasma instabilities in unmagnetized flows without velocity shears. Three-dimensional (3D) particle-in-cell (PIC) simulations of Weibel turbulence \citep{silva03,fonseca03,frederiksen04,nishikawa05} have demonstrated subequipartition magnetic field generation. The Weibel instability has been shown to be critical in mediating relativistic shocks \citep{spitkovsky08,martins09}, where a Fermi-like particle acceleration process has also been identified. These works have neglected the role of velocity shear in the flow, which is an alternative mechanism to generate sub-equipartition magnetic fields in relativistic outflows \citep{gruzinov08}. Furthermore, a shear flow upstream of a shock can lead to density inhomogeneities via the KHI, which may constitute important scattering sites for particle acceleration. In \cite{macfadyen09}, 3D magnetohydrodynamic (MHD) simulations of KH turbulence are discussed, observing magnetic field amplification due to the KHI. In fact, the KHI setup is routinely used to test MHD models \citep{mignone09, beckwith11}, but the connection between the MHD description and the fully kinetic picture is still missing \citep{gruzinov11}. However, the KHI contains intrinsically kinetic features, and so far 3D fully kinetic ab initio simulations have not been reported. In this Letter, we present the first self-consistent 3D PIC simulations of the KHI for both subrelativistic and relativistic scenarios of shearing unmagnetized electron-proton plasma clouds. We show that the KHI contains important kinetic features which are not captured in previous 3D MHD simulations \citep{keppens99,macfadyen09,mignone09,beckwith11}, namely the transverse KHI dynamics and the generation of a large-scale DC magnetic field at the shear region. This large-scale field can reach mG levels for typical parameters of the interaction of a relativistic flow with the interstellar medium (ISM). Furthermore, our generalization of the KHI linear theory \citep{gruzinov08} to include arbitrary density jumps between the shearing flows allows us to conclude that the onset of the instability is robust to this asymmetry. The KHI can therefore operate in shears within the ejecta (similar density flows) and also between the ejecta and the surrounding ISM (relative density ratios of $1-10$), and it will likely operate at the same level as (or stronger than) the Weibel instability, even for moderate velocity shears.
In Section 2, we explore the effect of density jumps on the behavior and features of the instability. The 3D PIC simulation results are presented and discussed in detail in Section 3. In Section 4, we discuss the saturation levels of the magnetic field, and conclusions are drawn in Section 5. \section{Theoretical analysis} \label{sec:theory} \begin{figure*} [t] \centering \includegraphics[width = 1.\columnwidth]{fig1.eps} \caption{Magnetic field structures generated by shearing subrelativistic $e^-p^+$ flows with $\gamma_0 = 1.02$ taken at time $t = 49/\omega_p$. The 3D visualizations (a), (d) and (g) correspond to the magnetic field components $B_1$, $B_2$ and $B_3$, respectively. The 2D slices of the magnetic field intensity (b), (e) and (h) are taken at the centre of the box $x_1 = 10 ~ c/\omega_p$, and slices (c), (f) and (i) are taken at $x_2 = 10 ~ c/\omega_p$.} \label{fig:3dclas} \end{figure*} The KHI linear theory, outlined in \cite{gruzinov08}, is based on the relativistic fluid formalism of plasmas coupled with Maxwell's equations, and was analyzed for the particular case where the two shearing flows have equal densities. Realistic shears, however, are more likely to occur between flows of different densities \citep{hardee92,krause03}. We have extended the analysis presented in \cite{gruzinov08} to shearing electron-proton plasma flows with uniform densities $n_+$ and $n_-$, and counter-velocities $+\vec v_0$ and $-\vec v_0$, respectively. Here, the protons are considered free-streaming, whereas the electron fluid quantities and fields are linearly perturbed. The dispersion relation for electromagnetic waves is thus: \begin{eqnarray} \label{eq1} \sqrt{\frac{n_-}{n_+}+\frac{k'^2}{\beta_0^2}-\omega'^2} \left[ \left (\omega'+k' \right)^2 - \left(\omega'^2-k'^2\right)^2 \right] + \nonumber \\ \sqrt{1+\frac{k'^2}{\beta_0^2}-\omega'^2} \left[ \frac{n_-}{n_+} \left (\omega'-k' \right)^2 - \left(\omega'^2-k'^2\right)^2 \right] = 0, \end{eqnarray} where $\omega ' \equiv \omega/\omega_{p+}$ is the wave frequency normalized to the plasma frequency associated with the density $n_+$, $\omega_{p+} = (4\pi n_+e^2/\gamma_0^3 m_e) ^{1/2}$, $k' \equiv k ~ c / \omega_{p+}$ is the normalized wave number parallel to the flow direction, $\beta_0 = v_0/c$, and $\gamma_0 = ( 1-\beta_0^2) ^{-1/2}$ is the relativistic Lorentz factor; $e$ and $m_e$ are the electron charge and mass, respectively, and $c$ is the speed of light in vacuum. The unstable modes are surface waves: their transverse wave number is imaginary, i.e., the perturbations are evanescent away from the shear surfaces. We now consider two scenarios: the similar-density ($n_+/n_- \simeq 1$) scenario, relevant for instance in internal shocks within the relativistic ejecta, where the densities of the shearing regions are comparable, and the $n_+/n_- > 1$ scenario, relevant for external shocks associated with the interaction of the relativistic ejecta with the ISM. In the first ($n_+/n_-=1$), the modes with $| k' |<1$ are unstable, with a maximum growth rate $\Gamma_\mathrm{max}'=\Gamma_\mathrm{max}/\omega_{p+}=1/\sqrt{8}$ for the fastest growing mode $k'_\mathrm{max}=k_\mathrm{max} ~ c / \omega_{p+} = \sqrt{3/8}/\beta_0$. The real part of $\omega'$ vanishes over the range of unstable modes, meaning that the unstable modes are purely growing waves. In the density-contrast ($n_+/n_- > 1$) scenario, the shape of the spectrum is conserved for different density ratios, indicating that the general characteristics of the instability are unchanged.
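Equation \eqref{eq1} can be solved numerically for the complex mode frequency. The following sketch (our illustration; the root-finding strategy, initial guesses, and parameter choices are our own assumptions and may need tuning, in particular for large density contrasts) scans $k'$ and reports the fastest growing mode:
\begin{verbatim}
# Sketch: growth rate Gamma' = Im(omega') from Eq. (1) as written;
# ratio = n_-/n_+, omega' and k' normalized to omega_p+ as in the text.
import numpy as np
from scipy.optimize import fsolve

def disp(omega, k, beta0, ratio):
    w2, k2 = omega**2, k**2
    s1 = np.sqrt(ratio + k2 / beta0**2 - w2 + 0j)
    s2 = np.sqrt(1.0 + k2 / beta0**2 - w2 + 0j)
    return (s1 * ((omega + k)**2 - (w2 - k2)**2)
            + s2 * (ratio * (omega - k)**2 - (w2 - k2)**2))

def mode(k, beta0, ratio, guess=(0.0, 0.3)):
    def f(x):                 # split complex root into two real eqs
        val = disp(x[0] + 1j * x[1], k, beta0, ratio)
        return [val.real, val.imag]
    re, im = fsolve(f, guess)
    return re + 1j * im

beta0 = 0.2                   # subrelativistic shear, gamma_0 ~ 1.02
for ratio in (1.0, 10.0):
    ks = np.linspace(0.05, 0.95, 150)
    g = np.array([mode(k, beta0, ratio).imag for k in ks])
    i = g.argmax()
    print(f"n-/n+ = {ratio}: k'_max ~ {ks[i]:.2f}, "
          f"Gamma'_max ~ {g[i]:.3f}")
\end{verbatim}
For $n_+/n_- = 1$, the scan recovers the purely growing branch with the maximum growth rate $\Gamma'_\mathrm{max} = 1/\sqrt{8} \simeq 0.354$ quoted above.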
As $n_+/n_-$ increases, the bandwidth of unstable modes decreases, $k'_\mathrm{max}$ drifts to larger scales, and the growth rate $\Gamma'_\mathrm{max}$ decreases. Furthermore, the frequency $\omega'$ acquires a real part over the range of the unstable modes, meaning that the growing perturbations propagate. The density contrast unbalances the interaction between the flows, such that the growing perturbations are more strongly manifested in the less dense flow, and drift in the direction of this less dense flow. In the same-density case, the interaction between the flows is balanced and the perturbations are purely growing. \begin{figure*} [t] \centering \includegraphics[width = 1.\columnwidth]{fig2.eps} \caption{Magnetic field structures generated by shearing relativistic $e^-p^+$ flows with $\gamma_0 = 3$ taken at time $t = 69/\omega_p$. The 3D visualizations (a), (d) and (g) correspond to the magnetic field components $B_1$, $B_2$ and $B_3$, respectively. The 2D slices of the magnetic field intensity (b), (e) and (h) are taken at the centre of the box $x_1 = 125 ~ c/\omega_p$, and slices (c), (f) and (i) are taken at $x_2 = 40 ~ c/\omega_p$.} \label{fig:3drel} \end{figure*} The normalized growth rate of the fastest growing mode, $\Gamma'_\mathrm{max}$, decreases with the density contrast between the flows. For $n_+/n_- \approx 1$, the growth rate scales approximately as $\Gamma'_\mathrm{max} \propto (n_+/n_-)^{-1/4}$ for both relativistic and non-relativistic shears, whereas for high density contrasts ($n_+/n_- \gg 1$), the growth rate scales as $\Gamma'_\mathrm{max} \propto (n_+/n_-)^{-1/3}$ for non-relativistic shears (similar to the cold beam-plasma instability for a dilute beam \citep{oneil71}), and as $\Gamma'_\mathrm{max} \propto (n_+/n_-)^{-1/2}$ for highly relativistic shears. These scalings show that the KHI will compete with, and operate on the same time-scales as, the Weibel instability, and should thus be considered in realistic relativistic outflows where velocity shears are likely to be present. \begin{figure*} [t] \centering \includegraphics[width = 1.\columnwidth]{fig3.eps} \caption{Magnetic field lines generated in (a) the subrelativistic scenario, and (b) the relativistic scenario, at time $t=100/\omega_p$.} \label{fig:3dlines} \end{figure*} \section{3D PIC simulations} \label{sec:sim} Numerical simulations were performed with OSIRIS, a fully relativistic, electromagnetic, and massively parallel PIC code \citep{fonseca03,fonseca08}, which has been widely used in other astrophysical problems \citep{silva03,medvedev05,medvedev06,martins09}. We simulate 3D systems of shearing slabs of cold ($v_{0} \gg v_{th}$, where $v_{th}$ is the thermal velocity) unmagnetized electron-proton plasmas with a realistic mass ratio $m_p/m_e=1836$ ($m_p$ is the proton mass), and evolve them until the electromagnetic energy saturates on the electron time-scale. In this Letter, we explore a subrelativistic ($\gamma_0 = 1.02$) and a relativistic ($\gamma_0 = 3$) KHI scenario in the regime of two plasma slabs of equal density, which is directly relevant to internal shocks, where shells of comparable density collide with $1 < \gamma_0 < 10$ \citep{Piran05}. The numerical simulations are prepared as follows. The shear flow initial condition is set by a velocity field with $v_0$ pointing in the positive $x_1$ direction in the upper and lower quarters of the simulation box, and a symmetric velocity field with $-v_0$ pointing in the negative $x_1$ direction in the middle half of the box. Initially, the systems are charge and current neutral.
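For concreteness, the initial flow profile can be written down as follows (an illustrative sketch in our own notation, not the actual OSIRIS input deck; the shear-normal direction is labeled $x_3$ here, consistent with the shear surfaces and the $x_1x_3$ dynamics discussed below):
\begin{verbatim}
# Sketch: periodic counter-streaming shear profile (flow along x1).
import numpy as np

def shear_profile(x3, L3, v0):
    """+v0 in the upper and lower quarters of the box, -v0 in the
    middle half: two shear surfaces, compatible with periodicity."""
    return np.where((x3 >= 0.25 * L3) & (x3 < 0.75 * L3), -v0, v0)

L3 = 20.0                                # box side in c/omega_p
x3 = np.linspace(0.0, L3, 400, endpoint=False)
v1 = shear_profile(x3, L3, v0=0.2)       # beta_0 ~ 0.2, gamma_0 ~ 1.02
\end{verbatim}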
In the subrelativistic case, the simulation box dimensions are $20 \times 20 \times 20 ~ ( c/\omega_p )^3$, where $\omega_p = (4\pi n e^2/m_e)^{1/2}$ is the plasma frequency, and we use $20$ cells per electron skin depth ($c/\omega_p$). The simulation box dimensions for the relativistic scenario are $250 \times 80 \times 80 ~ ( c/\omega_p )^3$, with a resolution of $4$ cells per $c/\omega_p$. Periodic boundary conditions are imposed in every direction. The magnetic field structures generated in the subrelativistic and relativistic scenarios are displayed in Figures \ref{fig:3dclas} and \ref{fig:3drel}, respectively, and the magnetic field line topology is illustrated in Figure \ref{fig:3dlines}. In the linear regime, the onset of the fluid KHI, discussed in Section \ref{sec:theory}, occurs in the $x_1x_3$ plane around the shear surfaces, generating a magnetic field normal to this plane, $B_2$. This growing magnetic field component $B_2$ is responsible for rolling up the electrons to form the signature KH vortices (insets a2 and b2 of Figure \ref{fig:bevol}). The typical length-scale of the KHI modulations, observed in Figures \ref{fig:3dclas}f and \ref{fig:3drel}f, agrees with the wavelength of the most unstable mode predicted by the fluid theory ($\lambda_\mathrm{max} = 2~c/\omega_p$ and $\lambda_\mathrm{max} \simeq 50~c/\omega_p$ in the subrelativistic and relativistic cases, respectively). The KHI modulations, however, are less noticeable in the relativistic regime because they are masked by a strong DC component whose magnitude exceeds that of the AC component; this DC component is negligible in the subrelativistic regime. This DC magnetic field ($k'=0$ mode) is not unstable according to the fluid model, as can be seen from equation (\ref{eq1}). As the amplitude of the KHI modulations grows, electrons from one flow cross the shear surfaces and enter the counter-streaming flow. Since the protons remain unperturbed due to their inertia, current neutrality around the shear surfaces is broken, forming DC current sheets that point in the direction of the proton velocity. These DC current sheets induce a DC component in the magnetic field $B_2$. The DC magnetic field is therefore dominant in the relativistic scenario, since a higher DC current is set up by the crossing of electrons with a larger initial flow velocity, and also since the growth rate of the AC dynamics is lower by a factor of $\gamma_0^{3/2}$ compared with the subrelativistic case. We stress that this DC field is not captured in MHD (e.g. \cite{macfadyen09}) or fluid theories, because it results from intrinsically kinetic phenomena. Furthermore, since the DC field is stronger than the AC field, a kinetic treatment is clearly required in order to fully capture the field structure generated in unmagnetized relativistic flows with velocity shear. This characteristic field structure will also lead to a distinct radiation signature \citep{jlmartins10}. Electron density structures, which to our knowledge have not been reported in MHD simulations \citep{keppens99,macfadyen09,mignone09,beckwith11}, emerge in the plane transverse to the flow direction (insets a1 and b1 of Figure \ref{fig:bevol}) and extend along the $x_1$ direction, forming electron current filaments. A harmonic perturbation in the $B_3$ component of the magnetic field at the shear surfaces forces the electrons to bunch at the shear planes, forming current filaments which amplify the initial magnetic perturbation $B_3$.
This process is identical to the one underlying the Weibel instability \citep{medvedev99} and leads to the formation of the observed transverse current filaments, along with the exponential amplification of $B_3$ observed in Figure \ref{fig:bevol}. Figures \ref{fig:3dclas}g and \ref{fig:3drel}g further show that the $B_3$ magnetic field component also exhibits a filamentary structure, underlining its connection to this process. The electrons undergoing this bunching process slow down along their initial flow direction. Again, since the protons are unperturbed on these time-scales, DC ($k_{x_2}=0$ mode) current sheets are set up around the shear surfaces, in a similar fashion to the longitudinal dynamics previously discussed. These current sheets induce a DC magnetic field in $B_2$ (Figures \ref{fig:3dclas} and \ref{fig:3drel} e), which is responsible for accelerating the evolving filaments across the shear surface, into the counter-propagating flow. In the relativistic shear scenario, these filaments are strongly rotated into the opposing flow due to the high intensity of $B_2$, leading to the formation of well-defined finger-like density structures, as seen in inset b1 of Figure \ref{fig:bevol}. These structures are less pronounced in the subrelativistic scenario due to the lower intensity of the DC component of $B_2$ and to the slower Weibel-like electron bunching process (inset a1 of Figure \ref{fig:bevol}). Meanwhile, the current component $J_3$, associated with the crossing motion of the electron current filaments along the $x_3$ direction, induces the magnetic field component $B_1$ (inset b of Figures \ref{fig:3dclas} and \ref{fig:3drel}). The growth rate of the magnetic field in the subrelativistic regime agrees with the theoretical $\Gamma_\mathrm{max}$ (Figure \ref{fig:bevol} a), whereas a significant mismatch is found in the relativistic regime (Figure \ref{fig:bevol} b). This deviation is due to the transverse dynamics of the KHI, which is not taken into account in the 2D theory. We performed 2D simulations matching the longitudinal ($x_1x_3$) and transverse ($x_2x_3$) planes of the 3D setups, in order to assess the independent evolution of the longitudinal and transverse dynamics of the KHI. The results of the 2D simulations of the longitudinal planes were in excellent agreement with the theoretical growth rates for both the subrelativistic and relativistic cases, measuring $0.35 ~ \omega_p$ and $0.07 ~ \omega_p$, respectively. In the 2D simulations of the transverse dynamics, we measured $\Gamma = 0.1 ~ \omega_p$ and $\Gamma = 0.3 ~ \omega_p$ for the subrelativistic and relativistic scenarios, respectively. Therefore, the full 3D evolution of the KHI is mainly determined by the transverse dynamics in the relativistic regime, in contrast to the subrelativistic regime, where the longitudinal dynamics is dominant. We stress that these growth rates are faster than, or comparable to, those of other collisionless plasma processes that would occur in interpenetrating flows \citep{kazimura98,silva03}. \section{Electron KHI saturation} At later times, the growing KH perturbations begin to interact nonlinearly, ultimately driving the system into a turbulent state. Eventually, all large-scale shear surfaces in the electron flow structure disappear, and the instability saturates. This stage is reached at roughly $t \simeq 100/\omega_p$ in both the subrelativistic and relativistic scenarios (Figure \ref{fig:bevol}).
In this turbulent state, the drift velocity of the electrons vanishes in favor of heating, and the magnetic field is mainly sustained by the proton current sheets close to the shear surfaces. Here, most of the magnetic field energy is deposited in $B_2$, which has a uniform DC structure that extends along the entire shear surface (which can be extremely large in realistic shears), with a characteristic transverse thickness $L_\mathrm{sat} = \alpha ~ c/\omega_p$, where $\alpha$ typically ranges from $5$ to $20$, as measured in the simulations. The AC modulations in the magnetic field structure are at this time negligible compared to the DC component. Using Amp\`ere's law and neglecting the displacement current term, we may estimate the maximum amplitude of the magnetic field as $B_\mathrm{DC}\sim 2\pi e n_0 L_\mathrm{sat} v_0/c$. In the case of relativistic shears ($v_0 \simeq c$), the estimate yields \begin{equation} B_\mathrm{DC}\sim 1.6~\alpha~\sqrt{n_0\mathrm{[cm^{-3}]}}~\mathrm{[mG]}, \end{equation} where $n_0$ is the plasma density. In the case of a relativistic plasma shear of ISM density, $n_0 \simeq 1~ \mathrm{cm^{-3}}$, and assuming $\langle\alpha\rangle \sim 10$, the maximum amplitude of the DC magnetic field is on the order of 10 mG, spreading over the entire shear surface with a thickness of $5\times 10^{9}$ cm. Our simulations, which are performed in normalized units (time is normalized to $\omega_p^{-1}$, space to $c/\omega_p$, and the magnetic field to $m_e c~\omega_p/e$), show similar $B_\mathrm{DC}$ levels, confirming this estimate. The $B_\mathrm{DC}$ estimate can also be used to determine the average equipartition value $\epsilon_B/\epsilon_p$ (the ratio of magnetic to initial particle kinetic energy) of the system as \begin{equation*} \frac{\epsilon_B}{\epsilon_p}\sim \frac{1}{8}\frac{m_e}{m_p}\frac{\gamma_0+1}{\gamma_0^2} \alpha^2 r, \end{equation*} where $\epsilon_p = n_0 (m_p+m_e)c^2(\gamma_0-1) L_{x_1} L_{x_2} L_{x_3}$ ($L_{x_i}$ being the simulation box dimension in the $x_i$ direction), and $r = L_\mathrm{sat}/L_{x_3}$. At saturation, we measure $\alpha \simeq 4$ ($r \simeq 0.2$) for the subrelativistic case, yielding $\epsilon_B/\epsilon_p = 4 \times 10^{-4}$, and $\alpha \simeq 15$ ($r \simeq 0.19$) for the relativistic case, yielding $\epsilon_B/\epsilon_p = 1.2 \times 10^{-3}$; both values are comparable with the simulation results (Figure \ref{fig:bevol}). We may also estimate the maximum equipartition value, found at the shear surfaces, by averaging over the interaction region, which has a typical thickness on the order of $L_\mathrm{sat}$: \begin{equation*} \left(\frac{\epsilon_B}{\epsilon_p}\right)_{max}\sim \frac{1}{8}\frac{m_e}{m_p}\frac{\gamma_0+1}{\gamma_0^2} \alpha^2, \end{equation*} yielding $2 \times 10^{-3}$ for the subrelativistic case and $7 \times 10^{-3}$ for the relativistic case. A higher efficiency of conversion of particle kinetic energy into magnetic fields is observed in the relativistic case, since the thickness of the proton current sheets ($L_\mathrm{sat}$) is much larger than in the subrelativistic case. Most of the energy in the system, however, is still contained in the ions, which remain unperturbed on these time-scales. We expect the system to reach higher levels of equipartition once the protons undergo the proton-scale KHI, which would occur at roughly $t_\mathrm{proton-KHI} \simeq 100 ~ (m_p/m_e)^{1/2} /\omega_p \simeq 4000 /\omega_p$.
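
As a numerical sanity check of these saturation estimates (our own sketch, in Gaussian cgs units, using the parameter values quoted above):

\begin{verbatim}
import numpy as np

m_e, m_p = 9.109e-28, 1.673e-24   # electron/proton mass [g]
e, c = 4.803e-10, 2.998e10        # charge [statC], speed of light [cm/s]

# B_DC ~ 2*pi*e*n0*L_sat*v0/c, with L_sat = alpha*c/omega_p and v0 ~ c:
n0 = 1.0                          # ISM-like density [cm^-3]
omega_p = np.sqrt(4.0*np.pi*n0*e**2/m_e)
print(2.0*np.pi*e*n0*(c/omega_p)*1e3)     # ~1.6 mG per unit alpha

def equip(gamma0, alpha, r=1.0):
    # epsilon_B/epsilon_p ~ (1/8)(m_e/m_p)((gamma0+1)/gamma0^2) alpha^2 r
    return m_e/m_p * (gamma0 + 1.0)/gamma0**2 * alpha**2 * r / 8.0

print(equip(1.02,  4, r=0.20))    # subrelativistic average: ~4e-4
print(equip(3.00, 15, r=0.19))    # relativistic average:    ~1.2e-3
print(equip(1.02,  4))            # subrelativistic maximum: ~2e-3
print(equip(3.00, 15))            # relativistic maximum:    ~7e-3
\end{verbatim}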
The long-lived, large-scale DC magnetic field can therefore be sustained up to the proton time-scale, which is roughly long enough to be relevant for the prompt GRB emission and the early afterglow \citep{Piran99}. \begin{figure*}[!t] \centering \includegraphics[width = 1.\columnwidth]{fig4.eps} \caption{Evolution of the equipartition energy $\epsilon_B/\epsilon_p$ for the (a) subrelativistic and (b) relativistic shear scenarios. The contribution of each magnetic field component is also depicted. The insets in each frame represent two-dimensional slices of the electron density at $t=49/\omega_p$ and $t=69/\omega_p$ for the respective case. The red (blue) color represents the electron density of the plasma that flows in the positive (negative) $x_1$ direction. Darker regions in the colormap indicate high electron density, whereas lighter regions indicate low electron density. Slices for insets (a1), (a2), (b1) and (b2) were taken at the center of the simulation box; (a1) and (b1) are transverse to the flow direction, and slices (a2) and (b2) are longitudinal to the flow direction.} \label{fig:bevol} \end{figure*} \section{Conclusions} In this Letter, we present the first self-consistent 3D PIC simulations of the KHI in unmagnetized electron-proton plasmas and analyze their evolution on the electron time-scale. Our results show that the multidimensional physics of the KHI is extremely rich and that kinetic effects play an important role, in particular in the transverse dynamics of the KHI (which is dominant over the longitudinal KHI dynamics in relativistic shears) and in the generation of a strong large-scale DC magnetic field. The transverse dynamics of the KHI consists of a Weibel-like electron bunching process, leading to the formation of electron current filaments which are then accelerated across the shear surface, forming finger-like structures. At the electron saturation time-scale, the magnetic field has evolved into a large-scale DC field structure that extends over the entire shear surface, reaching thicknesses of a few tens of electron skin depths, and persisting on time-scales much longer than the electron time-scale. This field structure is not captured by MHD or other fluid models, and will lead to a distinct radiation signature. We measure maximum equipartition values of $\epsilon_B/\epsilon_p \simeq 2 \times 10^{-3}$ for the subrelativistic scenario, and $\epsilon_B/\epsilon_p \simeq 7 \times 10^{-3}$ for the relativistic scenario. These equipartition values, which are typically treated as a free parameter in radiation modeling, match the values inferred from GRB afterglows in \citep{panaitescu02}. Moreover, the onset of the KHI is robust to density asymmetries, making it ubiquitous in astrophysical settings. The KHI may operate in $n_+/n_- \approx 1$ regimes, relevant to GRB internal shocks, and also in $n_+/n_- \gg 1$ regimes, which are important in external shocks. Future work will address the impact of the KHI on the formation of relativistic shocks in the presence of velocity shears. \vspace{0.2cm} \small This work was partially supported by the European Research Council ($\mathrm{ERC-2010-AdG}$ Grant 267841) and FCT (Portugal) grants SFRH/BD/75558/2010, SFRH/BPD/75462/2010, and PTDC/FIS/111720/2009. We acknowledge the high performance computing resources (Tier-0) provided by PRACE on the Jugene supercomputer, based in Germany. Simulations were performed at the IST cluster (Lisbon, Portugal) and on the Jugene supercomputer.
\section{Introduction} Search engine results pages (SERPs) have become sophisticated user interfaces (UIs) that include heterogeneous modules, or \emph{direct displays}, such as image carousels, videos, cards, and advertisements of various kinds. Since users are no longer faced with a text-based linear listing of search results, research demands more sophisticated ways of understanding how users interact with and examine SERPs. With multiple page elements competing for the user's attention, understanding which elements actually attract attention is key to search engines, and has applications for ranking, search page optimization, and UI evaluation. Researchers have shown that mouse cursor movements can be used to infer user attention~\cite{Arapakis:2016:PUE:2911451.2911505} and information flow patterns~\cite{Navalpakkam:2013:MME:2488388.2488471} on SERPs. While mouse tracking cannot substitute for eye tracking technology, it is nevertheless much more scalable and requires no special equipment. Further, most queries do not result in a click if the user can satisfy their information needs directly on the SERP~\cite{Fishkin19}; therefore, search engines must rely on other behavioral signals to understand the underlying search intent. Mouse tracking data can thus be gathered ``for free'' and can provide search engines with an implicit feedback signal for re-ranking and evaluation. For example, a search engine can predict user attention to individual SERP components such as the knowledge module~\cite{Arapakis:2015:KYO:2806416.2806591} and re-design it accordingly. Similarly, predicting attention to advertisements~\cite{Arapakis20_ppaa} can improve current auction schemes and make them more transparent to bidders. These are particularly important use cases, considering that previous research has assumed uniform engagement with a web page and does not distinguish well enough between attended and ignored layout components~\cite{8010344, Liu:2015:DUD:2766462.2767721}. Previous work has relied on handcrafted features to model user interaction data on SERPs. For example, Guo and Agichtein were able to classify different query types~\cite{Guo:2008:EMM:1390334.1390462} and infer search intent from search results~\cite{Guo:2010:RBJ:1835449.1835473} by examining, e.g., within-distances of cursor movements, hovers, and scrolling. Similarly, \citet{Arapakis:2016:PUE:2911451.2911505} derived 638 features from mouse cursor data to predict user attention to direct displays, and \citet{Lagun:2014:DCM:2556195.2556265} discovered frequent subsequences, or motifs, in mouse movements that were used to improve search result relevance. While these are very valuable works that have contributed to our current understanding of search behavior analysis, finding the right feature set for the task at hand is time-consuming and requires domain expertise. To address this, we rely on artificial neural networks (ANNs) that are trained on different representations of mouse cursor movements. We build and contrast both recurrent and convolutional ANNs to predict user attention to SERP advertisements. We thus tackle the problem of mouse movement classification using both sequential and pixel-based representations, the latter using different visual encodings of the temporal information embedded in mouse cursor movements, to be described later.
Importantly, our models are trained on \emph{raw} mouse cursor movements, which do not depend on a particular page structure, and achieve competitive performance when predicting user attention to different ad formats. Taken together, our results suggest that ANN-based models should be adopted for downstream tasks involving mouse cursor movements, as these models remove the need for handcrafted features and can better capture non-linearities within the data. \subsection{Preliminaries} \label{ssec:prelim} Arguably, ANNs are universal function approximators~\cite{Hornik91, Csaji01}, since a feedforward network (i.e. one with no loops) having a single hidden layer with a sufficiently large number of neurons can approximate \emph{any} continuous function of $n$-dimensional input variables~\cite{Lu17}. For most Deep Learning practitioners, sequence modeling is synonymous with Recurrent Neural Networks (RNNs). RNNs are an extension of regular ANNs that have connections feeding the hidden layers of the network back into themselves, also known as recurrent connections or feedback loops. It might therefore seem natural to use RNNs to process mouse tracking data, since this kind of data can be straightforwardly modeled as multivariate time series of spatial (or spatiotemporal) cursor coordinates, where each coordinate can be assumed to depend on the previous one. Yet recent research has suggested that convolutional neural networks (CNNs) may outperform RNNs on tasks dealing with sequential data, such as audio synthesis and machine translation~\cite{Bai18}. CNNs are also an extension of regular ANNs, but they use feedforward connections instead of recurrent connections and assemble complex hierarchical patterns from smaller and simpler patterns through convolutional operations. A known issue when training Deep Learning models is that gradients may either vanish or explode as they backpropagate through the network. This problem is particularly exacerbated in RNNs, due to the long-term dependencies within the data. However, mouse cursor trajectories are sometimes very short, e.g. a few seconds' worth of interaction; therefore, in this paper we explore both simple RNNs and more sophisticated versions thereof: Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. Both networks have similar performance~\cite{Jozefowicz15} and were designed to learn long-term information. We also investigate the bidirectional LSTM network, which allows RNNs to learn from past \emph{and} future timesteps, to better understand the sequence context. Another important limitation of training RNNs is that fine-tuning their many model parameters (network weights) usually requires substantial computational resources, because every timestep depends on the previous one. Indeed, the temporal dependencies between sequence elements prevent parallelizing the training of RNNs~\cite{Martin18}. Therefore, in this paper we also explore alternative pixel-based representations of mouse cursor data that can be handled with CNNs. Because CNNs usually require a large amount of training data, a common technique is \emph{transfer learning}: using a network pre-trained on a larger dataset and calibrating the model architecture to the nature and characteristics of the smaller dataset.
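
As a minimal illustration of this idea, consider the following generic Keras sketch (ours, for illustration only; it is not the exact pipeline used in this work, whose protocol is described in \autoref{ssec:cnn_model}):

\begin{verbatim}
import tensorflow as tf

# Generic transfer learning: reuse an ImageNet-pre-trained backbone,
# freeze it, and train a new binary classification head on top.
base = tf.keras.applications.ResNet50(weights="imagenet",
                                      include_top=False, pooling="avg")
base.trainable = False            # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # "ad noticed?"
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(...); optionally unfreeze `base` afterwards and fine-tune
# all layers with a much smaller learning rate.
\end{verbatim}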
Concretely, we used transfer learning with popular CNN architectures, namely AlexNet~\cite{AlexNet}, SqueezeNet~\cite{SqueezeNet}, ResNet~\cite{ResNet}, and VGGNet~\cite{VGGNet}, all of which are state-of-the-art CNNs widely used in downstream tasks such as image classification and video analysis. \section{Related Work} \label{sec:related_work} The construct of attention has become a common currency on the Web. Objective measurements of attentional processes~\cite{Wright2008-WRIOOA} are increasingly sought after by both the media industry and scholarly communities to explain or predict user behavior. Along those lines, the connection between mouse cursor movements and the underlying psychological states has been a topic of research since the early 90s~\cite{Accot1997, Accot:1999, Card:1987, MacKenzie:2001}. Some studies have investigated the utility of mouse cursor data for predicting the user's emotional state~\cite{Zimmermann2003, Kaklauskas2009, Azcarraga:2012, Yamauchi:2013, Kapoor:2007}, but also the extent to which such data can help identify demographic attributes like gender~\cite{Yamauchi2014, Kratky:2016, Pentel2017} and age~\cite{Kratky:2016, Pentel2017}. The above works demonstrate that certain cognitive and motor control mechanisms are embodied and reflected, to some extent, in our mouse cursor movements and online interactions. Recently, a large body of research~\cite{Shapira:2006:SUK:1141277.1141542, Guo:2008:EMM:1390334.1390462, Guo:2010:RBJ:1835449.1835473, Guo:2012:PWS:2396761.2398570, Huang:2012:USU:2207676.2208591, Navalpakkam:2013:MME:2488388.2488471, Lagun:2014:DCM:2556195.2556265, Liu:2015:DUD:2766462.2767721, MARTINALBO2016989, 8010344} further established the cognitive grounding of the hand-eye relationship and demonstrated the utility of mouse cursor analysis as a low-cost and scalable proxy of visual attention, especially on SERPs. In line with this evidence, several works have closely investigated the user interactions that stem from mouse cursor data for various use cases, such as web search~\cite{Guo:2008:EMM:1390334.1390462, Guo:2010:RBJ:1835449.1835473, Guo:2012:PWS:2396761.2398570, Lagun:2014:DCM:2556195.2556265, Liu:2015:DUD:2766462.2767721, Arapakis:2016:PUE:2911451.2911505, 8010344} or web page usability evaluation~\cite{Atterer:2006:KUM:1135777.1135811, Arroyo:2006:CPE:1125451.1125529, Leiva:2011:RWD:2037373.2037467}. In what follows, we review previous research efforts that have focused on mouse cursor analysis to predict user interest and attention. \subsection{User Interest in Web Search Tasks} \label{ssec:interest} User models of scanning behaviour in SERPs have been assumed to be linear, as users tend to explore the list of search results from top to bottom. However, today's SERPs include several direct displays such as image and video search results, featured snippets, or advertisements. To account for this SERP heterogeneity, \citet{Diaz13} incorporated ancillary page modules into the classic linear scanning model, which proved useful for improving SERP design by anticipating searchers' engagement patterns for a given SERP arrangement. However, this model was not designed to effectively measure user attention to specific direct displays, and it does not exploit the latent information encoded in mouse cursor movements.
Another line of research considered simple, coarse-grained features derived from mouse cursor data as surrogate measurements of user interest, such as the amount of mouse cursor movement~\cite{Shapira:2006:SUK:1141277.1141542} or the mouse cursor's ``travel time''~\cite{Claypool:2001:III:359784.359836}. Follow-up work adopted fine-grained mouse cursor features, which have been shown to be more effective. For example, Guo et al.~\cite{Guo:2008:EMM:1390334.1390462, Guo:2010:RBJ:1835449.1835473} computed within-distances of mouse cursor movements to disambiguate between informational and navigational queries, and could identify a user's research or purchase intent based on aggregated behavioral signals that include, among others, mouse hovering and scrolling activity. Approaches like these have been directed at predicting general-purpose web-based tasks like search success~\cite{Guo:2012:PWS:2396761.2398570} and satisfaction~\cite{Liu:2015:DUD:2766462.2767721}, user frustration~\cite{Feild:2010:PSF:1835449.1835458}, relevance judgements of search results~\cite{Huang:2012:ISM:2348283.2348313, Speicher:2013:TPR:2505515.2505703}, and query abandonment~\cite{Huang:2011:NCN:1978942.1979125, Diriye:2012:LSS:2396761.2398399}. However, these approaches lack the granularity in predicting attention to particular direct displays of a SERP, such as advertisements, that our proposed modelling approach achieves. \begin{figure*}[!tpb] \subfloat[Organic ad\label{fig:native-top-left}]{ \hspace{-1em}\shadowimage[width=0.31\linewidth]{screenshot-native-top-left-1b_highlighted-crop} } \subfloat[Direct display ad, left-aligned\label{fig:dd-top-left}]{ \hspace{-1em}\shadowimage[width=0.31\linewidth]{screenshot-dd-top-left_highlighted-crop} } \subfloat[Direct display ad, right-aligned\label{fig:dd-top-right}]{ \hspace{-1em}\shadowimage[width=0.31\linewidth]{screenshot-dd-top-right-1b_highlighted-crop} } \caption{ Examples of the ad formats, highlighted in red, and their positions on the Google SERP: Organic ad~\protect\subref{fig:native-top-left} vs. left-aligned~\protect\subref{fig:dd-top-left} and right-aligned~\protect\subref{fig:dd-top-right} direct display ads. In our experiments, only one ad format was visible at a time. } \label{fig:display_ads} \end{figure*} \subsection{User Attention in Web Search Tasks} \label{ssec:attention} Most research studies assume that eye fixation means examination~\cite{Brightfish18}. However, \citet{Liu:2014:SRT:2661829.2661907} report that about half of the search results fixated by users are not actually read, since there is often a preceding skimming step in which the user quickly scans the search results. Based on this observation, they propose a two-stage examination model: a first ``from skimming to reading'' stage and a second ``from reading to clicking'' stage. Interestingly, they showed that both stages can be predicted from mouse movement behaviour, which can be collected at large scale. Cursor movements can therefore be used to estimate user attention on SERP components, including traditional snippets, aggregated results, maps, and advertisements, among others. However, works that employ mouse cursor information to predict user attention to specific elements within a web page have been scarce.
Despite this scarcity, early work by~\citet{Arapakis:2014:UEO:3151365.3151368, Arapakis:2014:UWE:2661829.2661909} investigated the utility of mouse movement patterns to measure within-content engagement on news pages and to predict reading experiences. \citet{Lagun:2014:DCM:2556195.2556265} introduced the concept of \emph{motifs} for estimating result relevance. Similarly, \citet{Liu:2015:DUD:2766462.2767721} applied the motifs concept to SERPs to predict search result utility, searcher effort, and satisfaction at the search task level. Finally, \citet{Arapakis:2016:PUE:2911451.2911505} investigated user engagement with direct displays on SERPs, concretely with the Knowledge Graph module~\cite{Arapakis:2015:KYO:2806416.2806591}. Our work differs significantly from previous art in several ways. First, we implement a predictive modelling framework to measure user attention to SERP advertisements, which are probably the most relevant instance of direct displays for search engines from a business perspective. Second, previous work has used Machine Learning models that rely on ad-hoc and domain-specific features. As previously discussed, such feature engineering requires domain expertise to come up with the best discriminative features. In contrast, we investigate several ANN architectures that use \emph{raw} mouse cursor data, represented either as time series or as visual representations, and can predict user attention with competitive performance. Finally, we examine the performance of our predictive models w.r.t. sponsored ads served under different formats and at different positions within a SERP and, thus, significantly expand on previous research and findings in the community. \section{User Study} \label{sec:user_study} Online advertising comprises ads that are served under different formats (e.g. text, image, video, or rich media), each with its unique look and feel. Some formats appear to be more effective than traditional online ads in terms of user attention and purchase intent~\cite{sharethrough2013}, but may also cause ``ad blindness'' to a greater or lesser extent~\cite{Owens:2011:TAB:2007456.2007460}. Therefore, to understand how web search users engage with ads that appear under different formats and positions on SERPs, we conducted a user study through the \textsc{Figure Eight}\footnote{\url{https://www.figure-eight.com}} crowdsourcing platform. We collected feedback from participants who performed brief transactional search tasks using Google Search, and we aimed to predict whether users noticed the ads that appeared on the SERPs under different conditions. To mitigate low-quality responses, several preventive measures were put into practice, such as introducing gold-standard questions, selecting experienced contributors (Level 3) with high accuracy rates, and monitoring task completion times, thus ensuring the internal validity of our experiment. \subsection{Experiment Design} \label{ssec:design} We used a between-subjects design with two independent variables: (1)~ad format, with two levels (organic and direct display ads), and (2)~ad position, with two levels (top-left and top-right position). Note that organic ads are only shown in the left part of Google SERPs; see \autoref{fig:display_ads}. The dependent variable was ad attention. Our experiment consisted of a brief transactional search task in which participants were presented with a predefined search query and the corresponding SERP, and were asked to click on any element of the page that answered it best.
All search queries (\autoref{ssec:search_query_sample}) triggered both organic (\autoref{fig:native-top-left}) and direct display ads (\Cref{fig:dd-top-left,fig:dd-top-right}) on Google SERPs. Each participant was randomly assigned a search query and could perform the task only once, since inquiring post-task about the presence of an ad would make them aware of it and could introduce carry-over effects. In summary, each participant was exposed to only one combination of query, ad format, and ad position. \subsection{Search Query Sample} \label{ssec:search_query_sample} Starting from \textsc{Google Trends},\footnote{\url{https://trends.google.com/trends/}} we selected a subset of the Top Categories and Shopping Categories that were suitable candidates for the transactional character of our search tasks. From this subset of categories, we extracted the top search queries issued in the US during the last 12 months. Next, from the resulting collection of 375 search queries, we retained 150 for which the SERPs showed at least one direct display ad (50 search queries for each combination of direct display ad format and position). Using this final selection of search queries, we produced static versions of the corresponding Google SERPs and injected the JavaScript code (\autoref{ssec:mouse_cursor_tracking}) that allowed us to control the ad format and capture all client-side user interactions. \subsection{SERP Layout} \label{ssec:serp} All SERPs were in English and were scraped for later instrumentation (\autoref{ssec:mouse_cursor_tracking}). Participants accessed the instrumented SERPs through a dedicated server, which did not alter the look and feel of the original Google SERPs. This allowed us to capture fine-grained user interactions while ensuring that the content of the SERPs remained consistent and that each experimental condition was properly administered. All SERPs had both organic and direct display ads. Organic ads appeared both at the top-left and bottom-left positions of the SERP, whereas direct display ads could appear either at the top-right or the top-left position (but not both at the same time on the same SERP). We therefore ensured that only one ad was visible per condition and participant, since we focus on the single-slot auction case. This was achieved by instrumenting each downloaded SERP with custom JavaScript code that removed all ads except the one tested in each experimental condition (\autoref{fig:display_ads}). For example, the bottom-most organic ads were not shown, since (i)~users have to scroll all the way down to the bottom of the SERP to reveal them and (ii)~these ads have the same look and feel as the organic ads shown at the top-most position. \subsection{Mouse Cursor Tracking} \label{ssec:mouse_cursor_tracking} We inserted JavaScript code that captured mouse cursor movements and associated metadata while users browsed the SERPs. We used \textsc{EvTrack},\footnote{\url{https://github.com/luileito/evtrack}} a general-purpose open-source JavaScript event tracking library that allows event capturing either via event listeners (the event is captured as soon as it is fired) or via event polling (the event is captured at fixed-time intervals). We captured \texttt{\small{mousemove}} events via event polling, every 150\,ms to avoid unnecessary data overhead~\cite{LEIVA2015114}, and all other browser events (e.g., \texttt{\small{load}}, \texttt{\small{click}}, \texttt{\small{scroll}}) via event listeners.
Whenever an event was recorded, we logged the following information: mouse cursor position ($x$ and $y$ coordinates), timestamp, event name, and the XPath of the DOM element related to the event. \subsection{Self-Reported Ground-truth Labels} \label{ssec:self_reported_measures} In a similar vein to previous work~\cite{Feild:2010:PSF:1835449.1835458, Liu:2015:DUD:2766462.2767721, Lagun:2014:DCM:2556195.2556265, Arapakis:2016:PUE:2911451.2911505}, we collected ground-truth labels through an online questionnaire, administered post-task, which asked the user \emph{to what extent} they paid attention to the ad, using a 5-point Likert-type scale: ``Not at all''~(1), ``Not much''~(2), ``I can't decide''~(3), ``Somewhat''~(4), and ``Very much''~(5). These scores would later be collapsed to binary labels, but we felt it was necessary to begin with a 5-point Likert scale for several reasons. Unlike scales that offer limited options (e.g. 2- or 3-point scales) and can result in highly skewed or neutral data~\cite{Johnson82}, or scales with too many options (7-point scales) that are harder to understand, a 5-point Likert-type scale leaves room for ``soft responses'' while remaining fairly intuitive. Neutral scores were not considered for analysis. \subsection{Participants} \label{ssec:participants} We recruited $3,206$ participants, aged $18$--$66$ and of mixed nationality, through the \textsc{Figure Eight} platform. All participants were proficient in English and were experienced (Level 3) contributors, i.e. they had a track record of successfully completed tasks of varied types, and were thus considered very reliable contributors. \subsection{Procedure} \label{ssec:procedure} Participants were informed that they should perform the search task from a desktop or laptop computer, provided that they used a computer mouse. They were told to deactivate any ad-blocker before proceeding with the task; otherwise, our JavaScript code would prevent them from taking part in the study. Participants were asked to act naturally and click on anything that would best answer the search query, e.g. result links, images, etc. An example of the search task descriptions provided to the participants is the following: ``\emph{You want to buy a Rolex watch and you have submitted the search query `rolex watches' to Google Search. Please browse the search results page and click on the element that you would normally select under this scenario.}'' The search task had to be completed in a single session, and each query was performed on average by five different participants. The SERPs were randomly assigned to the participants, and each participant could take the study only once. Upon concluding the search task, participants were asked to complete the post-task questionnaire, which inquired about the presence of the ad (at that point, participants no longer had access to the webpage). The payment was \$0.20, and participants could opt out at any moment, in which case they would not be compensated. \subsection{Dataset} \label{ssec:data} After excluding logs with incomplete mouse cursor data (less than five mouse coordinates, i.e. roughly one second of user interaction data), we ended up with $2,289$ search sessions holding $45,082$ mouse cursor positions\footnote{The dataset is freely available here: \url{https://gitlab.com/iarapakis/the-attentive-cursor-dataset.git}}.
Of these search sessions, $763$ correspond to the organic ad condition, $793$ to the left-aligned direct display ad, and $733$ to the right-aligned direct display ad. Ground-truth labels were converted to a binary scale using the following mapping: ``Not at all'' and ``Not much'' were assigned to the negative class, whereas ``Somewhat'' and ``Very much'' were assigned to the positive class; neutral scores were not considered in the subsequent analysis. We note that the class distribution was fairly balanced (66\% positive cases) across the experimental conditions. We then used 60-10-30 (\%) disjoint stratified splits to assign the observations to the training, validation, and test sets, respectively. The stratified sampling process was performed once per ad format and preserved the original class distribution in each data partition.
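
This splitting step can be reproduced with standard tooling; the following is a minimal sketch (with dummy stand-ins for the real data, used only for illustration):

\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy stand-ins (hypothetical): X holds one row of cursor data per
# search session, y holds the binary "ad noticed?" labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(2289, 100)), rng.integers(0, 2, size=2289)

# 60-10-30 (%) disjoint stratified splits, performed once per ad format:
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.75, stratify=y_rest, random_state=42)
# 0.4*0.25 = 10% validation and 0.4*0.75 = 30% test.
\end{verbatim}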
\section{Data Representations} We framed the problem of ad attention prediction as a binary classification task: \textit{given a user's mouse cursor trajectory, did the user notice the ad?} To this end, as introduced in \autoref{ssec:prelim}, we implemented both recurrent and convolutional neural networks to handle different representations of mouse cursor movements on SERPs. In what follows, we describe these data representations. \bgroup \def\arraystretch{0.7} \begin{table*}[!ht] \caption{ Types of visual representations used to train the CNN models. The top row shows representations without the ad placeholder, whereas the bottom row shows representations with the ad placeholder. } \vspace{-5pt} \label{tbl:representations} \centering {\footnotesize \begin{tabular}{@{}lC{3.15cm}C{3.15cm}C{3.15cm}C{3.15cm}C{3.15cm}@{}} \toprule \addlinespace & \textbf{Heatmap} & \textbf{Trajectories} & \textbf{Colored trajectories} & \textbf{Trajectories with line thickness} & \textbf{Colored trajectories with line thickness} \\ \addlinespace {\rotatebox[origin=c]{90}{w/o ad placeholder}} & \fpiclarge{figures/hm} & \fpiclarge{figures/trajectory} & \fpiclarge{figures/trajectory_color} & \fpiclarge{figures/trajectory_thickness} & \fpiclarge{figures/trajectory_color_thickness} \\ \addlinespace & \textbf{(1)} & \textbf{(2)} & \textbf{(3)} & \textbf{(4)} & \textbf{(5)} \\[0.5em] {\rotatebox[origin=c]{90}{w/ ad placeholder}} & \fpiclarge{figures/hm_ad} & \fpiclarge{figures/trajectory_ad} & \fpiclarge{figures/trajectory_color_ad} & \fpiclarge{figures/trajectory_thickness_ad} & \fpiclarge{figures/trajectory_color_thickness_ad} \\ \addlinespace \bottomrule \end{tabular} } \end{table*} \egroup \subsection{Time Series Representation} \label{sssec:representation_ts} In our experiments, a mouse cursor trajectory is modeled as a multivariate time series of 2D cursor coordinates. The data are ordered by collection time, and there are no consecutively duplicated coordinates. A particular characteristic of this data representation is that events are \emph{asynchronous}, i.e. contrary to regular time series, the sampling rate of \texttt{\small{mousemove}} events is not constant. This is because web browser events are first placed in an event queue and then fired as soon as possible~\cite{Resig16}. An inherent limitation when training RNN models is that the ``memory'' of the network must be fixed, i.e. the maximum number of timesteps that the model can handle must be set to a fixed length. This impacts model training in two ways. First, if we choose a small memory footprint (short sequence length) for our models, we would be truncating longer sequences that could otherwise bear rich behavioral information about the user. Second, if we choose a large memory footprint, we would be wasting computational resources, as the model would require more weights to optimize and, consequently, training time would be unnecessarily longer. Therefore, to make the training of our RNNs tractable, we inspected our data and decided to set the maximum sequence length to 50 timesteps, which roughly corresponds to the mean sequence length observed in our dataset plus one standard deviation. Since mouse cursor trajectories are variable-length sequences, shorter sequences were padded to this fixed length of 50 timesteps and longer sequences were truncated. Finally, since each mouse cursor trajectory was performed on a different web browser with a different screen size, the horizontal coordinates were normalised by each user's viewport width. The vertical coordinates do not need to be normalised, since the SERP layout has a fixed width. \subsection{Visual Representation} \label{sssec:representation_visual} According to our data, 90\% of all mouse coordinates happened above the page fold, i.e. within the browser's visual viewport. This was somewhat expected given the nature of our task: in crowdsourcing studies, users often proceed as quickly as possible in order to maximize their profit~\cite{Eickhoff11, Ipeirotis10}. However, this suggests that we can expect a visual representation to perform well, since most of the user interactions would be adequately represented in a fixed-size image. Therefore, we created five visual encodings (\autoref{tbl:representations}): \begin{enumerate}[leftmargin=*] \item \textit{Heatmap.} The influence of each mouse cursor coordinate is determined with a 2D Gaussian kernel of 25\,px radius. When various kernels overlap, their values are added together. \item \textit{Trajectories.} Every two consecutive coordinates are joined with a straight line. The first and last coordinates are rendered as cursor-like images, in green and red color, respectively. \item \textit{Colored trajectories.} The trajectory line color is mapped to a temperature gradient, where green areas denote the beginning of the trajectory and red areas denote the end of the trajectory. \item \textit{Trajectories with variable line thickness.} The trajectory line thickness is proportional to the percentage of elapsed time, so that thick areas denote the beginning of the mouse trajectory and thin areas denote the end of the trajectory. \item \textit{Colored trajectories with variable line thickness.} A combination of the two previously described representations. \end{enumerate} The mouse cursor data were rendered according to each visual encoding using the Simple Mouse Tracking system~\cite{Leiva:2013:WBB:2540635.2529996}, which was operated via PhantomJS,\footnote{\url{https://phantomjs.org/}} a scriptable headless browser. Each mouse cursor trajectory was normalized according to the user's viewport. Finally, no data augmentation or transformation techniques were applied, as is often done in computer vision tasks, and the images were saved as $1280\times900$\,px PNG files.
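
For illustration, the heatmap encoding can be re-implemented in a few lines. The sketch below is our own (the kernel spread \texttt{sigma} is an assumption; the actual images were produced with the Simple Mouse Tracking system as described above):

\begin{verbatim}
import numpy as np

W, H, R = 1280, 900, 25        # canvas size and kernel radius [px]

def heatmap(points, sigma=R/2.0):
    # Accumulate one 2D Gaussian kernel per cursor coordinate;
    # overlapping kernels add up, as in encoding (1) above.
    yy, xx = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for x, y in points:
        img += np.exp(-((xx - x)**2 + (yy - y)**2) / (2.0*sigma**2))
    return img / img.max()     # normalize to [0, 1]

# Hypothetical trajectory (a short diagonal drift across the SERP):
traj = [(100 + 10*i, 80 + 6*i) for i in range(50)]
hm = heatmap(traj)             # save as PNG with, e.g., plt.imsave
\end{verbatim}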
\section{Deep Learning Models} \label{ssec:models} \subsection{Recurrent Neural Networks} \label{ssec:rnn_model} In what follows, we provide an overview of the RNN units, or cells, that we investigated for our sequence classification task. These units cover the most popular choices among Deep Learning practitioners. \begin{enumerate}[leftmargin=*] \item \textit{SimpleRNN.} A vanilla recurrent cell, i.e. a fully-connected unit whose output is fed back to its input at every timestep. \item \textit{LSTM.} This unit introduces an output gate and a forget gate~\cite{Hochreiter97} to remember long-term dependencies within the data. \item \textit{GRU.} A simplification of the LSTM cell~\cite{Cho14}, without an output gate. GRUs can outperform LSTM units in terms of convergence and generalization~\cite{Chung14}. \item \textit{BLSTM.} This unit uses both past and future contexts by concatenating the outputs of two RNNs~\cite{Graves13}: one processing the sequence from left to right (forward RNN), the other from right to left (backward RNN). We note that any RNN unit can become bidirectional; however, we used the LSTM variant because of its popularity and to keep our experiments consistent. \end{enumerate} All RNN models have an input layer with $50$ neurons (one neuron per timestep), followed by a hidden layer with $n \in [16, 24, \dots, 128]$ neurons and ReLU activation, a dropout layer with drop rate $q \in [0.5, 0.4, \dots, 0.1]$ to prevent overfitting, and a fully-connected layer with one output neuron using sigmoid activation. Each RNN model outputs a probability $p$ of the user's attention to an ad, where $p>.5$ indicates that the user has noticed the ad. We trained the models using binary cross-entropy as loss function and the popular Adam optimizer (stochastic gradient descent with adaptive moment estimation) with learning rate $\eta \in [10^{-3}, 10^{-4}, \dots, 10^{-7}]$ and decay rates $\beta_1=0.9$ and $\beta_2=0.999$. We set a maximum of 90 epochs, using early stopping with a patience of 20 epochs monitoring the validation loss, and tried different batch sizes $b \in [16, 32, 64]$. As noted, we explored different combinations of hyperparameters ($n, q, \eta, b$). Our design space is thus quite large, rendering the classic grid-search approach infeasible. Therefore, we optimized our RNN models via random search~\cite{Bergstra11}, which has a high probability of finding the most suitable configuration without having to explore all possible combinations~\cite{Zheng13}. The best configuration is determined via 3-fold cross-validation, i.e. a given hyperparameter combination is tested up to three times over the validation data partition and the final result is averaged. The best combination is the one with the minimum validation loss.
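
For concreteness, one such classifier can be sketched in Keras as follows. This is a minimal illustration of the architecture described above (assuming two input channels, the $x$ and $y$ coordinates, per timestep), not our exact training harness:

\begin{verbatim}
import tensorflow as tf

n, q, eta = 32, 0.2, 1e-3   # one sampled hyperparameter combination

model = tf.keras.Sequential([
    tf.keras.Input(shape=(50, 2)),       # padded/truncated trajectories
    tf.keras.layers.GRU(n, activation="relu"),
    tf.keras.layers.Dropout(q),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # p > .5: noticed
])
model.compile(optimizer=tf.keras.optimizers.Adam(
                  learning_rate=eta, beta_1=0.9, beta_2=0.999),
              loss="binary_crossentropy")
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=90, batch_size=32, callbacks=[stop])
\end{verbatim}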
\subsection{Convolutional Neural Networks} \label{ssec:cnn_model} As anticipated in \autoref{ssec:prelim}, we investigated four popular CNN architectures: \texttt{AlexNet}~\cite{AlexNet}, \texttt{SqueezeNet}~\cite{SqueezeNet}, \texttt{ResNet50}~\cite{ResNet}, and \texttt{VGG19}~\cite{VGGNet}, all of which were pre-trained on the ImageNet database (1M images with 1000 categories). Our choice of CNNs favoured diversity and was guided by the fact that the above architectures have been successfully tested in a wide range of computer vision applications, and have different designs, modules, and numbers of parameters. For example, \texttt{ResNet50} and \texttt{VGG19} employ a large number of layers, hence resulting in \textit{deep} networks (roughly twice as deep as \texttt{AlexNet}), use skip connections, and are among the early adopters of batch normalisation. On the other hand, \texttt{AlexNet}, one of the first architectures to adopt ReLU activations, and its simplified variant \texttt{SqueezeNet} implement a shallow architecture while attaining high accuracy and requiring less bandwidth to operate. Using the representations discussed in~\autoref{sssec:representation_visual}, we applied transfer learning to calibrate the CNNs to the particularities of our images, which are quite different from the natural scenery images found in ImageNet. This way, we can reuse an existing architecture and apply it to our own prediction task, because there are universal, low-level features shared among images. For each CNN model, we applied the following steps. First, we initialised the last layer of the CNN model with randomly assigned weights, while retaining the weights of the initial layers that were pre-trained on \texttt{ImageNet}. Next, we ran a learning rate finder~\cite{smith2015cyclical}, which lets $\eta$ vary cyclically by linearly increasing it for a few epochs. Training with cyclical learning rates instead of fixed values improves classification accuracy without the need for manual fine-tuning and often results in fewer iterations. We then unfroze and re-trained all the layers of the CNN model for 300 epochs, using a per-cycle maximal learning rate. In addition, we used early stopping, terminating training when the monitored AUC had stopped improving for 30 epochs. The batch size was adjusted according to each architecture, to optimise GPU memory usage. Finally, we trained our CNN models with the Adam optimizer, using the same decay rates as in the RNN models and binary cross-entropy as loss function. \bgroup \def\arraystretch{0.9} \begin{table*}[!ht] \caption{ Experiment results for organic advertisements. Gray cells indicate the top performer in each representation group. The overall best performance result (across all groups) is denoted in bold typeface. The positive:negative ratio is 447:222. } \vspace{-5pt} \label{tbl:results_top_left_native} \centering {\footnotesize \begin{tabular}[t]{m{2.3cm} m{1.35cm} l *8c} \toprule \textbf{Representation} & \textbf{Example} & \textbf{Architecture} & \textbf{Hyperparameters} & \textbf{Epoch} & \textbf{Adj. Precision} & \textbf{Adj. Recall} & \textbf{Adj.
F-measure} & \textbf{AUC} \\ \midrule \multirow{4}{2.3cm}{Time series} & \multirow{4}{*}{$\begin{matrix}(x_1,y_1),\\ \dots,\\ (x_N,y_N)\end{matrix}$} & \texttt{SimpleRNN} & $\eta=10^{-3}$, $q=0.4$, $n=64$, $b=64$ & 90 & 0.556 & 0.697 & 0.603 & 0.529 \\ && \texttt{LSTM} & $\eta=10^{-3}$, $q=0.3$, $n=64$, $b=32$ & 65 & 0.622 & 0.604 & 0.612 & 0.531 \\ && \texttt{BLSTM} & $\eta=10^{-4}$, $q=0.3$, $n=32$, $b=16$ & 20 & \cellcolor{gray!20} 0.695 & 0.637 & 0.654 & \cellcolor{gray!20} 0.631 \\ && \texttt{GRU} & $\eta=10^{-3}$, $q=0.2$, $n=32$, $b=32$ & 90 & 0.672 & \cellcolor{gray!20} 0.711 & \cellcolor{gray!20} 0.678 & 0.557 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap} & \multirow{4}{*}{\fpic{native-20161215053818-heatmap.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 38 & 0.572 & 0.667 & 0.583 & 0.547 \\ && \texttt{SqueezeNet} & $\eta=6.91\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 41 & 0.602 & 0.524 & 0.540 & 0.543 \\ && \texttt{ResNet50} & $\eta=9.12\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 46 & \cellcolor{gray!20} 0.652 & \cellcolor{gray!20} 0.679 & \cellcolor{gray!20} 0.659 & \cellcolor{gray!20} 0.638 \\ && \texttt{VGG19} & $\eta=1.90\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=16$ & 54 & 0.627 & 0.636 & 0.628 & 0.614 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap with ad placeholder} & \multirow{4}{*}{\fpic{native-20161215053818-heatmap-ad.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 59 & \cellcolor{gray!20} 0.639 & \cellcolor{gray!20} 0.677 & \cellcolor{gray!20} 0.650 & \cellcolor{gray!20} 0.656 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 34 & 0.617 & 0.657 & 0.621 & 0.587 \\ && \texttt{ResNet50} & $\eta=8.06\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 34 & 0.599 & 0.518 & 0.537 & 0.532 \\ && \texttt{VGG19} & $\eta=3.41\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 44 & 0.601 & 0.622 & 0.606 & 0.598 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory.png}} & \texttt{AlexNet} & $\eta=4.78\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 38 & 0.633 & 0.667 & 0.634 & 0.614 \\ && \texttt{SqueezeNet} & $\eta=3.31\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 32 & 0.654 & 0.654 & \cellcolor{gray!20} 0.654 & 0.589 \\ && \texttt{ResNet50} & $\eta=3.41\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 81 & 0.637 & 0.627 & 0.630 & \cellcolor{gray!20} 0.626 \\ && \texttt{VGG19} & $\eta=6.31\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 33 & \cellcolor{gray!20} 0.658 & \cellcolor{gray!20}\bf 0.693 & 0.645 & 0.600 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with ad placeholder} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-ad.png}} & \texttt{AlexNet} & $\eta=8.31\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 62 & 0.620 & \cellcolor{gray!20} 0.639 & \cellcolor{gray!20} 0.626 & 0.610 \\ && \texttt{SqueezeNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 59 & 0.614 & 0.619 & 0.615 & 0.578 \\ && \texttt{ResNet50} & $\eta=1.16\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 37 & \cellcolor{gray!20} 0.632 & 0.464 & 0.449 & \cellcolor{gray!20}\bf 0.690 \\ && \texttt{VGG19} & $\eta=1.16\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=16$ & 44 & 0.621 & 0.610 & 0.610 & 0.613 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories} & 
\multirow{4}{*}{\fpic{native-20161215053818-trajectory-color.png}} & \texttt{AlexNet} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 57 & 0.594 & 0.633 & 0.607 & 0.561 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 36 & \cellcolor{gray!20} 0.630 & \cellcolor{gray!20} 0.658 & \cellcolor{gray!20} 0.639 & 0.605 \\ && \texttt{ResNet50} & $\eta=5.58\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 50 & 0.619 & 0.534 & 0.550 & 0.601 \\ && \texttt{VGG19} & $\eta=3.41\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 31 & 0.610 & 0.647 & 0.617 & \cellcolor{gray!20} 0.629 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with ad placeholder} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-color-ad.png}} & \texttt{AlexNet} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 57 & 0.640 & \cellcolor{gray!20} 0.637 & \cellcolor{gray!20} 0.637 & 0.610 \\ && \texttt{SqueezeNet} & $\eta=3.98\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 34 & 0.612 & 0.527 & 0.543 & 0.540 \\ && \texttt{ResNet50} & $\eta=3.41\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 87 & \cellcolor{gray!20}\bf 0.694 & 0.505 & 0.506 & \cellcolor{gray!20} 0.654 \\ && \texttt{VGG19} & $\eta=3.86\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 50 & 0.563 & 0.623 & 0.580 & 0.582 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-thickness.png}} & \texttt{AlexNet} & $\eta=5.24\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & 0.599 & 0.484 & 0.495 & 0.539 \\ && \texttt{SqueezeNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 35 & 0.559 & 0.555 & 0.562 & 0.546 \\ && \texttt{ResNet50} & $\eta=3.86\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 49 & \cellcolor{gray!20} 0.657 & \cellcolor{gray!20} 0.660 & \cellcolor{gray!20} 0.654 & 0.612 \\ && \texttt{VGG19} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 104 & 0.627 & \cellcolor{gray!20} 0.660 & 0.637 & \cellcolor{gray!20} 0.617 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-thickness-ad.png}} & \texttt{AlexNet} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 43 & 0.630 & 0.642 & 0.631 & 0.616 \\ && \texttt{SqueezeNet} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 36 & \cellcolor{gray!20} 0.674 & \cellcolor{gray!20} 0.665 & \cellcolor{gray!20}\bf 0.700 & \cellcolor{gray!20} 0.657 \\ && \texttt{ResNet50} & $\eta=8.06\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 31 & 0.669 & 0.539 & 0.548 & 0.638 \\ && \texttt{VGG19} & $\eta=4.93\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 67 & 0.575 & 0.628 & 0.595 & 0.622 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-colorthickness.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 71 & 0.653 & 0.674 & 0.655 & 0.615 \\ && \texttt{SqueezeNet} & $\eta=5.24\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 55 & \cellcolor{gray!20} 0.676 & \cellcolor{gray!20} 0.688 & \cellcolor{gray!20} 0.680 & \cellcolor{gray!20}\bf 0.690 \\ && \texttt{ResNet50} & $\eta=4.93\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 56 & 0.616 & 0.493 & 0.508 & 0.606 \\ && \texttt{VGG19} & $\eta=8.06\ensuremath{
\mbox{\scriptsize{E}} }{-7}$, $b=16$ & 39 & 0.591 & 0.618 & 0.603 & 0.557 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{native-20161215053818-trajectory-colorthickness-ad.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 55 & 0.665 & \cellcolor{gray!20} 0.692 & \cellcolor{gray!20} 0.666 & 0.657 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 61 & 0.614 & 0.626 & 0.618 & 0.576 \\ && \texttt{ResNet50} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 37 & \cellcolor{gray!20} 0.673 & 0.557 & 0.572 & \cellcolor{gray!20} 0.659 \\ && \texttt{VGG19} & $\eta=1.16\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=16$ & 62 & 0.549 & 0.591 & 0.565 & 0.572 \\ \arrayrulecolor{black}\bottomrule \end{tabular}} \end{table*} \egroup \bgroup \def0.9{0.9} \begin{table*}[!ht] \caption{ Results for left-aligned direct display ads. Gray cells indicate the top performer in each representation group. The overall best performance result (across all groups) is denoted in bold typeface. The positive:negative ratio is 523:192. } \vspace{-5pt} \label{tbl:results_top_left_dd} \centering {\footnotesize \begin{tabular}[t]{m{2.3cm} m{1.35cm} l *8c} \toprule \textbf{Representation} & \textbf{Example} & \textbf{Architecture} & \textbf{Hyperparameters} & \textbf{Epoch} & \textbf{Adj. Precision} & \textbf{Adj. Recall} & \textbf{Adj. F-measure} & \textbf{AUC} \\ \midrule \multirow{4}{2.3cm}{Time series} & \multirow{4}{*}{$\begin{matrix}(x_1,y_1),\\ \dots,\\ (x_N,y_N)\end{matrix}$} & \texttt{SimpleRNN} & $\eta=10^{-3}$, $q=0.3$, $n=64$, $b=64$ & 73 & 0.575 & \cellcolor{gray!20} 0.758 & \cellcolor{gray!20} 0.654 & 0.508 \\ && \texttt{LSTM} & $\eta=10^{-3}$, $q=0.4$, $n=64$, $b=64$ & 74 & 0.607 & 0.656 & 0.628 & 0.542 \\ && \texttt{BLSTM} & $\eta=10^{-3}$, $q=0.5$, $n=64$, $b=16$ & 58 & \cellcolor{gray!20} 0.658 & 0.647 & 0.652 & 0.548 \\ && \texttt{GRU} & $\eta=10^{-3}$, $q=0.2$, $n=64$, $b=32$ & 90 & 0.598 & 0.727 & 0.646 & \cellcolor{gray!20} 0.560 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap} & \multirow{4}{*}{\fpic{dd-left-20161216033105-heatmap.png}} & \texttt{AlexNet} & $\eta=9.12\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 84 & 0.661 & 0.714 & 0.683 & 0.606 \\ && \texttt{SqueezeNet} & $\eta=3.31\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 31 & \cellcolor{gray!20} 0.732 & 0.692 & 0.706 & 0.613 \\ && \texttt{ResNet50} & $\eta=1.44\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 46 & 0.697 & \cellcolor{gray!20} 0.743 & \cellcolor{gray!20} 0.715 & \cellcolor{gray!20} 0.668 \\ && \texttt{VGG19} & $\eta=7.58\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 60 & 0.653 & 0.712 & 0.682 & 0.593 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap with ad placeholder} & \multirow{4}{*}{\fpic{dd-left-20161216033105-heatmap-ad.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 68 & 0.703 & \cellcolor{gray!20}\bf 0.787 & 0.719 & 0.602 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & 0.672 & 0.700 & 0.681 & 0.594 \\ && \texttt{ResNet50} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 31 & 0.732 & 0.607 & 0.643 & 0.628 \\ && \texttt{VGG19} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 66 & \cellcolor{gray!20} 0.749 & 0.712 & \cellcolor{gray!20} 0.728 & \cellcolor{gray!20} 0.694 \\ 
\arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory.png}} & \texttt{AlexNet} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 103 & \cellcolor{gray!20} 0.723 & \cellcolor{gray!20} 0.738 & \cellcolor{gray!20} 0.727 & 0.596 \\ && \texttt{SqueezeNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & 0.684 & 0.690 & 0.684 & 0.628 \\ && \texttt{ResNet50} & $\eta=7.35\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 76 & 0.695 & 0.736 & 0.713 & 0.662 \\ && \texttt{VGG19} & $\eta=5.24\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 36 & 0.692 & 0.732 & 0.709 & \cellcolor{gray!20} 0.687 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with ad placeholder} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-ad.png}} & \texttt{AlexNet} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 103 & \cellcolor{gray!20} 0.745 & 0.745 & \cellcolor{gray!20}\bf 0.745 & \cellcolor{gray!20}\bf 0.708 \\ && \texttt{SqueezeNet} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 31 & 0.712 & 0.690 & 0.695 & 0.632 \\ && \texttt{ResNet50} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 39 & 0.717 & \cellcolor{gray!20} 0.755 & 0.729 & 0.629 \\ && \texttt{VGG19} & $\eta=7.58\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 42 & 0.729 & 0.725 & 0.723 & 0.656 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-color.png}} & \texttt{AlexNet} & $\eta=5.24\ensuremath{ \mbox{\scriptsize{E}} }{-4}$, $b=64$ & 68 & \cellcolor{gray!20} 0.745 & \cellcolor{gray!20} 0.745 & \cellcolor{gray!20}\bf 0.745 & \cellcolor{gray!20} 0.677 \\ && \texttt{SqueezeNet} & $\eta=5.24\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & 0.715 & 0.528 & 0.568 & 0.597 \\ && \texttt{ResNet50} & $\eta=8.06\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 139 & 0.728 & 0.665 & 0.688 & 0.665 \\ && \texttt{VGG19} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 84 & 0.706 & 0.659 & 0.675 & 0.603 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with ad placeholder} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-color-ad.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-4}$, $b=64$ & 56 & 0.674 & 0.708 & 0.689 & 0.666 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & 0.712 & 0.707 & 0.707 & 0.651 \\ && \texttt{ResNet50} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 38 & \cellcolor{gray!20} 0.747 & 0.738 & \cellcolor{gray!20} 0.742 & \cellcolor{gray!20} 0.675 \\ && \texttt{VGG19} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 72 & 0.703 & \cellcolor{gray!20} 0.766 & 0.714 & 0.670 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-thickness.png}} & \texttt{AlexNet} & $\eta=1.31\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 62 & 0.690 & 0.720 & 0.703 & 0.606 \\ && \texttt{SqueezeNet} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 31 & \cellcolor{gray!20} 0.725 & \cellcolor{gray!20} 0.738 & \cellcolor{gray!20} 0.737 & \cellcolor{gray!20} 0.637 \\ && \texttt{ResNet50} & $\eta=8.06\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 39 & 0.692 & 0.732 & 0.709 & 0.604 \\ && \texttt{VGG19} & $\eta=2.51\ensuremath{ 
\mbox{\scriptsize{E}} }{-7}$, $b=16$ & 95 & 0.710 & 0.682 & 0.695 & 0.572 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-thickness-ad.png}} & \texttt{AlexNet} & $\eta=3.31\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 31 & 0.705 & 0.732 & 0.717 & \cellcolor{gray!20} 0.664 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 90 & 0.715 & \cellcolor{gray!20} 0.752 & 0.725 & 0.654 \\ && \texttt{ResNet50} & $\eta=9.12\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 72 & 0.709 & 0.740 & 0.720 & 0.653 \\ && \texttt{VGG19} & $\eta=1.31\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=16$ & 38 & \cellcolor{gray!20} 0.733 & 0.730 & \cellcolor{gray!20} 0.735 & 0.616 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-colorthickness.png}} & \texttt{AlexNet} & $\eta=1.20\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 31 & 0.682 & 0.659 & 0.670 & 0.604 \\ && \texttt{SqueezeNet} & $\eta=9.12\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 47 & 0.682 & \cellcolor{gray!20} 0.682 & \cellcolor{gray!20} 0.682 & 0.593 \\ && \texttt{ResNet50} & $\eta=3.41\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 116 & 0.667 & 0.593 & 0.623 & 0.588 \\ && \texttt{VGG19} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 50 & \cellcolor{gray!20}\bf 0.760 & 0.615 & 0.652 & \cellcolor{gray!20} 0.668 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{dd-left-20161216033105-trajectory-colorthickness-ad.png}} & \texttt{AlexNet} & $\eta=1.73\ensuremath{ \mbox{\scriptsize{E}} }{-4}$, $b=64$ & 38 & 0.672 & 0.704 & 0.687 & 0.586 \\ && \texttt{SqueezeNet} & $\eta=1.00\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 64 & \cellcolor{gray!20} 0.725 & 0.706 & 0.715 & \cellcolor{gray!20} 0.669 \\ && \texttt{ResNet50} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 54 & 0.712 & 0.591 & 0.629 & 0.638 \\ && \texttt{VGG19} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 46 & 0.717 & \cellcolor{gray!20} 0.726 & \cellcolor{gray!20} 0.722 & 0.652 \\ \arrayrulecolor{black}\bottomrule \end{tabular} } \end{table*} \egroup \bgroup \def0.9{0.9} \begin{table*}[!ht] \caption{ Results for right-aligned direct display ads. Gray cells indicate the top performer in each representation group. The overall best performance result (across all groups) is denoted in bold typeface. The positive:negative ratio is 462:178. } \vspace{-5pt} \label{tbl:results_top_right_dd} \centering {\footnotesize \begin{tabular}[t]{m{2.3cm} m{1.35cm} l *8c} \toprule \textbf{Representation} & \textbf{Example} & \textbf{Architecture} & \textbf{Hyperparameters} & \textbf{Epoch} & \textbf{Adj. Precision} & \textbf{Adj. Recall} & \textbf{Adj. 
F-measure} & \textbf{AUC} \\ \midrule \multirow{4}{2.3cm}{Time series} & \multirow{4}{*}{$\begin{matrix}(x_1,y_1),\\ \dots,\\ (x_N,y_N)\end{matrix}$} & \texttt{SimpleRNN} & $\eta=10^{-3}$, $q=0.3$, $n=32$, $b=64$ & 56 & 0.577 & 0.677 & 0.572 & 0.530 \\ && \texttt{LSTM} & $\eta=10^{-3}$, $q=0.4$, $n=32$, $b=32$ & 27 & 0.560 & 0.643 & 0.566 & 0.511 \\ && \texttt{BLSTM} & $\eta=10^{-3}$, $q=0.5$, $n=48$, $b=16$ & 82 & \cellcolor{gray!20} 0.608 & 0.658 & \cellcolor{gray!20} 0.615 & \cellcolor{gray!20} 0.614 \\ && \texttt{GRU} & $\eta=10^{-3}$, $q=0.4$, $n=32$, $b=64$ & 48 & 0.550 & \cellcolor{gray!20} 0.678 & 0.564 & 0.561 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap} & \multirow{4}{*}{\fpic{dd-right-20161224150314-heatmap.png}} & \texttt{AlexNet} & $\eta=4.78\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 60 & \cellcolor{gray!20}\bf 0.743 & \cellcolor{gray!20} 0.721 & 0.630 & 0.566 \\ && \texttt{SqueezeNet} & $\eta=3.63\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 59 & 0.647 & 0.708 & 0.636 & \cellcolor{gray!20} 0.599 \\ && \texttt{ResNet50} & $\eta=8.07\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 39 & 0.668 & 0.698 & \cellcolor{gray!20} 0.668 & \cellcolor{gray!20} 0.599 \\ && \texttt{VGG19} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 31 & 0.611 & 0.394 & 0.369 & 0.525 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Heatmap with ad placeholder} & \multirow{4}{*}{\fpic{dd-right-20161224150314-heatmap-ad.png}} & \texttt{AlexNet} & $\eta=4.78\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 60 & 0.587 & \cellcolor{gray!20} 0.697 & 0.598 & 0.566 \\ && \texttt{SqueezeNet} & $\eta=6.91\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 70 & 0.568 & 0.652 & 0.593 & 0.609 \\ && \texttt{ResNet50} & $\eta=1.44\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 57 & 0.642 & 0.633 & 0.638 & 0.607 \\ && \texttt{VGG19} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 32 & \cellcolor{gray!20} 0.679 & 0.685 & \cellcolor{gray!20} 0.681 & \cellcolor{gray!20} 0.680 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory.png}} & \texttt{AlexNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 79 & 0.623 & 0.687 & 0.626 & 0.578 \\ && \texttt{SqueezeNet} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 56 & 0.606 & 0.632 & 0.613 & 0.584 \\ && \texttt{ResNet50} & $\eta=1.49\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 57 & \cellcolor{gray!20} 0.726 & \cellcolor{gray!20}\bf 0.732 & \cellcolor{gray!20}\bf 0.731 & \cellcolor{gray!20}\bf 0.739 \\ && \texttt{VGG19} & $\eta=3.98\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 73 & 0.618 & 0.673 & 0.626 & 0.581 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with ad placeholder} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-ad.png}} & \texttt{AlexNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 32 & 0.612 & 0.662 & 0.629 & 0.561 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 39 & 0.616 & 0.612 & 0.609 & \cellcolor{gray!20} 0.607 \\ && \texttt{ResNet50} & $\eta=7.35\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 31 & \cellcolor{gray!20} 0.646 & 0.676 & \cellcolor{gray!20} 0.658 & 0.602 \\ && \texttt{VGG19} & $\eta=4.93\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 36 & 0.608 & \cellcolor{gray!20} 0.690 & 0.614 & 0.596 \\ \arrayrulecolor{gray!50!}\midrule 
\multirow{4}{2.3cm}{Colored trajectories} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-color.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 118 & 0.620 & 0.676 & 0.632 & 0.607 \\ && \texttt{SqueezeNet} & $\eta=6.91\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 39 & 0.634 & 0.527 & 0.550 & 0.564 \\ && \texttt{ResNet50} & $\eta=2.15\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 47 & \cellcolor{gray!20} 0.639 & 0.687 & \cellcolor{gray!20} 0.636 & \cellcolor{gray!20} 0.658 \\ && \texttt{VGG19} & $\eta=5.58\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 53 & 0.599 & \cellcolor{gray!20} 0.695 & 0.606 & 0.644 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with ad placeholder} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-color-ad.png}} & \texttt{AlexNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 69 & 0.606 & 0.645 & 0.621 & 0.570 \\ && \texttt{SqueezeNet} & $\eta=2.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 54 & 0.628 & 0.659 & 0.636 & 0.570 \\ && \texttt{ResNet50} & $\eta=1.49\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 59 & \cellcolor{gray!20} 0.697 & \cellcolor{gray!20} 0.725 & 0.652 & 0.640 \\ && \texttt{VGG19} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 59 & 0.683 & 0.712 & \cellcolor{gray!20} 0.688 & \cellcolor{gray!20} 0.679 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-thickness.png}} & \texttt{AlexNet} & $\eta=4.78\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 38 & 0.584 & 0.639 & 0.604 & 0.598 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 62 & \cellcolor{gray!20} 0.708 & 0.678 & \cellcolor{gray!20} 0.683 & \cellcolor{gray!20} 0.601 \\ && \texttt{ResNet50} & $\eta=4.93\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 57 & 0.626 & 0.690 & 0.632 & 0.582 \\ && \texttt{VGG19} & $\eta=5.58\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 67 & 0.655 & \cellcolor{gray!20} 0.691 & 0.669 & 0.567 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-thickness-ad.png}} & \texttt{AlexNet} & $\eta=6.31\ensuremath{ \mbox{\scriptsize{E}} }{-5}$, $b=64$ & 31 & 0.646 & 0.454 & 0.460 & 0.572 \\ && \texttt{SqueezeNet} & $\eta=3.02\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 47 & 0.669 & 0.688 & \cellcolor{gray!20} 0.676 & 0.576 \\ && \texttt{ResNet50} & $\eta=6.31\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 33 & 0.593 & 0.661 & 0.615 & \cellcolor{gray!20} 0.596 \\ && \texttt{VGG19} & $\eta=2.43\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=16$ & 34 & \cellcolor{gray!20} 0.670 & \cellcolor{gray!20} 0.716 & 0.637 & 0.573 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-colorthickness.png}} & \texttt{AlexNet} & $\eta=5.75\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 72 & 0.553 & 0.612 & 0.581 & 0.528 \\ && \texttt{SqueezeNet} & $\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 34 & 0.618 & 0.673 & 0.626 & 0.548 \\ && \texttt{ResNet50} & $\eta=1.03\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=32$ & 61 & \cellcolor{gray!20} 0.694 & \cellcolor{gray!20} 0.711 & \cellcolor{gray!20} 0.705 & \cellcolor{gray!20} 0.685 \\ && \texttt{VGG19} & 
$\eta=2.51\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 73 & 0.626 & 0.590 & 0.605 & 0.588 \\ \arrayrulecolor{gray!50!}\midrule \multirow{4}{2.3cm}{Colored trajectories with line thickness and ad placeholder} & \multirow{4}{*}{\fpic{dd-right-20161224150314-trajectory-colorthickness-ad.png}} & \texttt{AlexNet} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=64$ & 83 & 0.625 & 0.662 & 0.637 & 0.592 \\ && \texttt{SqueezeNet} & $\eta=1.90\ensuremath{ \mbox{\scriptsize{E}} }{-6}$, $b=64$ & 33 & 0.636 & 0.604 & 0.622 & 0.566 \\ && \texttt{ResNet50} & $\eta=3.86\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=32$ & 49 & 0.590 & 0.665 & 0.609 & 0.606 \\ && \texttt{VGG19} & $\eta=4.36\ensuremath{ \mbox{\scriptsize{E}} }{-7}$, $b=16$ & 63 & \cellcolor{gray!20} 0.646 & \cellcolor{gray!20} 0.675 & \cellcolor{gray!20} 0.654 & \cellcolor{gray!20} 0.633 \\ \arrayrulecolor{black}\bottomrule \end{tabular} } \end{table*} \egroup \section{Results} \label{ssec:resutls} We report the performance of our ANNs trained on the different data representations, and for the different ad formats. We use the standard IR metrics of Precision, Recall, and F-Measure (F1 score), weighted according to the target class distributions in each case. The F-Measure provides an aggregated view of a classifier's performance, but remains sensitive to class distributions. Therefore, we also report the Area Under the ROC curve (AUC), which is insensitive to class distribution and error costs~\cite{Hand2001}, and use it as our key metric to determine the top performing classifier for each setup. To further investigate the performance differences across models and conditions, we run Friedman's ANOVA as an omnibus test and, if the result is statistically significant, we use the Wilcoxon signed-rank test for pairwise comparisons, with correction for multiple testing. Tables~\ref{tbl:results_top_left_native} to~\ref{tbl:results_top_right_dd} show the results of our experiments, including the hyperparameter configuration used for each model. The \texttt{Epoch} column indicates the maximum number of epochs used for training each model, as we used early stopping to prevent overfitting. Gray table cells indicate the top performer in each data representation group, whereas the overall best performance result (across all representations) is denoted in bold typeface. \subsection{Effect of Model Type} We note that, under our experimental settings, CNN models outperform RNN models across all ad format conditions, sometimes by a large margin. When considering the best overall performing models for each type of ANN architecture, we can observe noticeable improvements in terms of the F1 and AUC metrics. More specifically, the best CNN model improves over the best RNN model by $3.24\%$ in terms of F1 and by $9.35\%$ in AUC for the organic ads (\autoref{tbl:results_top_left_native}). Similarly, we observe an improvement of $13.91\%$ (F1) and $26.42\%$ (AUC) for the left-aligned direct display advertisements (\autoref{tbl:results_top_left_dd}). Lastly, we note an improvement of $18.65\%$ (F1) and $20.35\%$ (AUC) for the right-aligned direct display advertisements (\autoref{tbl:results_top_right_dd}). \subsection{Effect of Ad Placeholder} We run statistical analysis to determine whether the presence of the ad placeholder had any effect on the models' performance.
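For concreteness, the testing protocol described above can be sketched as follows; the snippet is only an illustration (it assumes paired per-condition score arrays collected in a dictionary, it uses a Bonferroni correction as one possible choice for the multiple-testing correction, and the variable names are ours):

\begin{verbatim}
# Sketch of the omnibus + pairwise testing protocol (Python/SciPy).
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def compare_conditions(scores, alpha=0.05):
    # scores: dict mapping condition name -> paired list of scores;
    # the Friedman omnibus test requires at least three conditions.
    stat, p = friedmanchisquare(*scores.values())
    if p >= alpha:
        return []   # no significant overall difference: stop here
    pairs = list(combinations(scores, 2))
    results = []
    for a, b in pairs:
        W, p_w = wilcoxon(scores[a], scores[b])
        # Bonferroni correction: scale p-values by the number of pairs.
        results.append((a, b, W, min(1.0, p_w * len(pairs))))
    return results
\end{verbatim}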
For the organic ads and the right-aligned direct display advertisements, the Wilcoxon signed-rank test showed that the presence of the ad placeholder in the representations did not elicit a significant change in the AUC or F1 scores. However, in the left-aligned direct display condition, F1 scores were significantly higher for the models trained on the representations without the ad placeholder (Mdn=0.718), as opposed to those trained on the representations with the ad placeholder (Mdn=0.691): $W=50, p = 0.041, r = -0.27$. \subsection{Effect of Ad Format} In organic ads, we note that the top performers are \texttt{SqueezeNet} (AUC=0.690), trained on the colored trajectories with varied line thickness, and \texttt{ResNet50} (AUC=0.690), trained on the trajectories with ad placeholder. Considering the remaining performance metrics, \texttt{SqueezeNet}, despite its shallower architecture (3 hidden layers), seems to generalise better. In left-aligned direct display ads, the top performer is \texttt{AlexNet} (AUC=0.708), trained on the trajectories with ad placeholder, followed closely by \texttt{VGG19} (AUC=0.694), trained on the heatmap with ad placeholder representation. The Wilcoxon signed-rank test showed that the presence of the ad placeholder in the representations did not elicit a significant change in the AUC or F-Measure scores, for either group. Also, in right-aligned direct display ads, where the advertisement is clearly separated from the SERP results, the top performing model is the \texttt{ResNet50}, trained on trajectories without ad placeholder. This model holds the best AUC (0.739) and F-Measure (0.731) scores overall. The Wilcoxon signed-rank test on all pairs of ad formats revealed a significant difference in terms of AUC between the organic ad (Mdn=0.610) and the left-aligned direct display (Mdn=0.634): $W=185.5, p < .01, r = -0.62$. Similarly, we found a significant difference between the left-aligned (Mdn=0.634) and right-aligned direct displays (Mdn=0.594): $W=716, p < .0001, r = -0.88$. When examining the F-Measure, the Wilcoxon signed-rank test showed a significant difference between organic ads ($Mdn = 0.616$) and the left-aligned direct display (Mdn=0.708): $W=17, p < .0001, r = -1.15$. Furthermore, we observed a significant difference between the left-aligned (Mdn=0.708) and right-aligned direct displays (Mdn=0.629): $W=788, p < .0001, r = -1.10$. The observed effect sizes are rather large, thus suggesting practical importance of the results. \section{Discussion and Future Work} \label{sec:discussion} This work has served as a first exploration of the feasibility of using ANNs to predict user attention to ads on SERPs. We have shown that, using relatively few training data, it is possible to train RNN models from scratch and fine-tune existing CNNs via transfer learning. Our findings indicate that the mouse cursor representations and the tested model architectures achieve competitive performance in detecting user attention for all ad formats. Note that none of our models use handcrafted features, which require domain expertise, nor page-level information, since they are trained on raw sequences of mouse cursor movements. Taken together, our experiments raise the bar in the IR community and can inform researchers and practitioners when it comes to choosing one model or network configuration over another. Having explored multiple representations of the same mouse cursor data, we have obtained several new insights and perspectives.
For example, a time series representation of mouse movements is the obvious choice if we already know that user interactions consist of a small number of mouse movements. On the contrary, if we foresee that users are going to dwell for a relatively long time on a page, e.g. due to query difficulty or the nature of the search task, then an image-based representation would be a more apt choice. Interestingly, our CNN models outperformed RNN models in most cases. However, we note that this might be due to the fact that our RNN models had a limited sequence length, in order to make training tractable on a single GPU,\footnote{ Even if a computing cluster were used, a single GPU is still required to do a single forward/backward pass during training.} thereby limiting the learning capacity of these models. On the contrary, the CNN models had almost full coverage of the mouse cursor movements, since most user interactions happened above the fold and were rendered as a static image, which can be easily used for training on commodity hardware. Regarding the CNN models, our experimental results indicate that, in most cases, the presence of the ad placeholder in the visual representation seems to benefit the models' performance, although that finding was not always statistically significant. In addition, the visual representations based on \emph{trajectories} and \emph{colored trajectories with variable line thickness} are consistently found amongst the top-ranked performers. We presume that embedding the temporal dimension into the representations plays a role in accurate prediction of visual attention. Furthermore, we observe that the CNNs that implement shallow architectures (e.g. \texttt{AlexNet} and \texttt{SqueezeNet}) appear to perform as well as, if not better than, their \emph{deeper} counterparts. This suggests that such CNN implementations can attain high accuracy while requiring less bandwidth to operate. Also, the application of transfer learning proved to be useful, as reusing existing architectures allows for a quick and inexpensive solution to visual attention prediction with relatively few training data. Finally, we should mention that our diagnostic technology was tested in the desktop setting, whereas currently half of the web traffic is mobile. However, user engagement is still higher on desktop~\cite{Arapakis20_ppaa} and accounts for a profitable and sizeable percentage of web traffic. A potential extension is to account for touch-based interactions such as zoom/pinch gestures and scroll activity~\cite{Guo:2013:MTI:2484028.2484100}. Further ideas that could be explored in future work include: benchmarking custom CNN architectures (or even combining RNNs and CNNs), analyzing other color schemes (e.g. for the ad placeholder color), improving the prediction capabilities of our RNN models (e.g. stacking recurrent layers, using other activation functions, or implementing self-attention mechanisms), and training a general model to predict attention to any direct display. Ultimately, modeling user attention on SERPs has wide-ranging applications in web search ranking and UI design, and this work paves the way to many exciting future directions for research in this topic. \begin{acks} I. Arapakis acknowledges the support of NVIDIA Corporation with the donation of a Titan Xp GPU used for this research. \end{acks}
\section{Introduction} \begin{figure}[bbp] \centering \subfigure[]{\label{ladders} \includegraphics*[width=4.2cm]{LaddersII.eps} }\tabf \subfigure[]{\label{kagome} \includegraphics[width=4.6cm]{Kagome.eps} } \caption{(a) Quasi one-dimensional lattices (ladders) $L_1, L_2, L_3$. $L_3$ in a triangular lattice pattern reproduces the kagome lattice. (b) Kagome lattice with unit cell labelled A,B,C. } \label{lattice} \end{figure} Ising lattice models are realized in a variety of materials with magnetic interactions, for instance in rare earth compounds where the outer electron in the lanthanide ion acts as an Ising spin interacting with its neighbors. Depending on the lattice structure (frustrated or unfrustrated), dimensionality (one to three), and interaction type (ferromagnetic or antiferromagnetic), the properties and phases in the system display a wide range of possibilities \cite{Liebmann}. Coupling of the Ising spins to an external field can further introduce interesting behavior such as the presence of a magnetization plateau. Many such characteristics of the system may be captured by the model's partition function; accurate determination of the partition function poses a numerical challenge if there are competing interactions within the system, which is often the case for antiferromagnetic interactions on non-bipartite lattices. These competing interactions can result in large degeneracies in phase space.\\ Indeed, antiferromagnetic Ising models can harbor a macroscopic number of degenerate ground states at zero external field on frustrated lattices like the triangular lattice \cite{Wannier}, kagome lattice \cite{Kano}, pyrochlore \cite{Anderson, Liebmann} lattice, to name a few. An extensive entropy may survive, albeit with different values, even for infinitesimal fields \cite{Udagawa, Moessner, Isakov} on certain lattices; as the field is varied, a strongly enhanced peak in the entropy develops just before the field-induced spin-ordering sets in \cite{Domb, Metcalf, Isakov}; this substantial peak occurs because, at this field strength, a large number of non-neighboring spins may be flipped against the field without a cost in energy \cite{Metcalf}. In fact, such residual saturation entropies $S^{\textrm{sat.}}$ persist in quantum spin models (anisotropic Heisenberg models), although with different values from the Ising limits, and for different reasons \cite{Schulenberg, Derzhko} pertaining to the existence of localized magnons.\\ In section \ref{sec: ladders}, residual saturation entropies of related Ising quasi one-dimensional lattices or ladders (Fig. \ref{ladders}) are exactly computed. The statistical properties of the ladders are considered primarily to establish the results on the kagome lattice; $L_1$ will be used to construct the square lattice to compare with earlier results before we employ ladder $L_3$ to build up the kagome lattice. The bounds on the kagome lattice saturation entropy will be justified via the construction of ladder $L_2$ from $L_1$. Moreover, ladders with these geometries are realized in a range of materials \cite{DagottoRice, Ohwada, Heuvel}. The two-leg ladder $L_1$ with magnetic interactions is realized in compounds such as vanadyl pyrophosphate $\mathrm{(VO)}_2\mathrm{P}_2\mathrm{O}_7$ and the cuprate $\mathrm{SrCu}_2\mathrm{O}_7$ \cite{DagottoRice}; ladder $L_2$, which is equivalent to the axial-next-nearest-neighbor Ising (ANNI) model, describes phase transitions in the charge-lattice-spin coupled system $\mathrm{NaV}_2\mathrm{O}_5$ \cite{Ohwada}.
And ladder $L_3$ bears resemblance in geometric structure and interaction to the Ising-Heisenberg polymer chain $\mathrm{[DyCuMoCu]}_{\infty}$ \cite{Heuvel}. \\ In sections \ref{sec: kagome} and \ref{sec: Binders}, we consider the Ising kagome lattice, shown in Fig. \ref{kagome}, at saturation, which was argued to be realized in the spin-ice compound Dy$_{2}$Ti$_{2}$O$_{7}$ \cite{Isakov}. For the kagome lattice, approximate values of $S^{\textrm{sat.}}$ may be deduced from calculations for spin ice on the pyrochlore lattice in a [111] field \cite{Isakov}; results of Monte Carlo simulations, series expansion techniques \cite{Nagle} and the Bethe approximation were found to be comparable \cite{Isakov} for the saturation entropy. In section \ref{sec: kagome}, we elucidate a procedure for obtaining a more accurate estimate of this value through (a) transfer matrix methods, and equivalently (b) the solution of appropriate difference equations that generate the partition function. Finally, in section \ref{sec: Binders}, we provide a considerably improved estimate of $S^{\textrm{sat.}}$ for the kagome lattice using Binder's algorithm, with which we may exactly calculate the partition function of a system of (in our case) over 1300 Ising spins at the saturation field with modest computational resources. We point out that it is only for the Ising kagome lattice, among two dimensional lattices, that the zero field entropy exceeds the saturation field entropy.\\ The antiferromagnetic Ising models we investigate are described by the Hamiltonian \begin{equation} \label{eq: eq1} \mathcal{H} = \sum_{<i,j>}\sigma_{i}\sigma_{j} - h_c\sum_{i}\sigma_{i}, \end{equation} on an $N$ site lattice with $|h_c| = z$, the number of nearest neighbors. This is the saturation field beyond which ordering sets in. The variables $\sigma _i = \pm 1$ are represented by down and up spins, and the interaction between nearest neighbors is denoted by the angular brackets, setting the energy scale of the problem. The boundary conditions are chosen to be either free or periodic. Although the number of allowed states for a given finite system will differ depending on the boundary conditions, the dominant multiplicative degeneracy of the system as $N \rightarrow \infty$ will reflect the bulk property. It will then be a question of computational convenience whether free or periodic boundary conditions are chosen. \begin{table*}[th!] \caption{\label{Kag-chain}Partition function in configuration space $\mathcal{C}_m$ for the $m$-site Ising chain and the $m$-cell ladder $L_3$ with periodic and free boundary conditions; values for $m = 1, 2, 3, 4$ (which exclude the boundary spins/cells for free boundaries) are indicated.
} \begin{ruledtabular} \begin{tabular}{lrr} Boundary & Ising chain & Ladder $L_3$ \\ \hline Periodic & $(\frac{1+\sqrt{5}}{2})^m + (\frac{1 - \sqrt{5}}{2})^m = 1, 3, 4, 7, \ldots$ & $(2 + \sqrt{3})^m + (2 - \sqrt{3})^m = 4, 14, 52, 194, \ldots$\\ Free & $\frac{1}{\sqrt{5}}\left[(\frac{1+\sqrt{5}}{2})^{m+2} - (\frac{1 - \sqrt{5}}{2})^{m+2} \right] = 2, 3, 5, 8, \ldots$ & $\frac{1}{2\sqrt{3}}\left[(2 + \sqrt{3})^{m+1} - (2 - \sqrt{3})^{m+1}\right] = 4, 15, 56, 209, \ldots$\\ \end{tabular} \end{ruledtabular} \end{table*} \section{Ladders} \label{sec: ladders} \begin{figure}[bbp] \centering \includegraphics*[width=8.cm]{Orderings.eps} \caption{Spin orderings on the Ising ladders $L_1, L_2$ (with spins tilted for clarity), and $L_3$ below and above the saturation fields; ordering regimes are indicated by gray full ($L_2$) and dashed ($L_1, L_3$) arrows. Trivial degeneracies due to translations are not indicated.} \label{Orderings} \end{figure} In this section we describe the transfer matrix procedure to calculate the partition function $\Omega _m$ of a ladder with $m$ unit cells, such that no up spin may neighbor another; the space of such configurations is denoted as $\mathcal{C}_m$ which comprise the degenerate ground states at $h_c$ for $L_1$ and $L_2$; these two ladders are used primarily as (a) testbeds for our computations, and (b) for justifying the bounds in \eqref{eq: eq4} for the kagome lattice. We will demonstrate the idea with the case of ladder $L_3$ where a unit cell is taken to be a simple triangle; this will be relevant while building up the kagome lattice from this ladder. We emphasize that, for this particular ladder $L_3$, the configurations in $\mathcal{C}_m$ do not constitute the degenerate space of states at its critical field as justified in the succeeding paragraph. \\ Below the saturation field value, the various phases on the ladders may be constructed as follows. For ladder $L_1$, clearly the up-down-up-down configuration on each leg avoids any frustration at field strengths $|h| \leq z = 3$ and has energy per site $\epsilon^{L_1}_0 = -3/2$. At $h = 3$, this energy equals that of the ferromagnetically ordered phase with energy per site $\epsilon^{L_1}_1 = 3/2 - h$, signaling a first order transition. Similarly the down-down-up configuration for $L_3$, with the up spins along the base and energy $\epsilon^{L_3}_0 = (-2-h)/3$, is the stable configuration at $h=0$; this phase persists up to $h = 3$ where it becomes degenerate with and transits to the ferromagnetically aligned phase with energy $\epsilon^{L_3}_1 = 4/3 - h$. We point out that the down-down-up configuration with the up spins along the apices (where $z = 2$) will be degenerate with the ferromagnetic phase at $h = 2$ but $\epsilon^{L_3}_0 = (-2-h)/3$ is more stable at this phase point and therefore only a single transition exists for $L_3$. For ladder $L_2$, the three relevant phases and transition points may be readily evaluated \cite{Morita} from Fig. 1 in Ref. [16] along the $J' = J$ line in the phase diagram of the ANNI chain. These results of all three ladders are illustrated in Fig. \ref{Orderings}.\\ To illustrate the procedure, consider the configurations in $\mathcal{C}_m$ of ladder $L_3$; this calculation is relevant for sections \ref{sec: kagome} and \ref{sec: Binders} because it corresponds to an isolated ladder at the saturation field of the kagome lattice. 
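As a check, the transition fields quoted above follow from the energy balance between the competing phases; e.g. for $L_3$, \[ \epsilon^{L_3}_0 = \epsilon^{L_3}_1 \ \Longrightarrow\ \frac{-2-h}{3} = \frac{4}{3} - h \ \Longrightarrow\ h = 3, \] and likewise $\epsilon^{L_1}_0 = -3/2 = 3/2 - h = \epsilon^{L_1}_1$ gives $h = 3$ for $L_1$.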
At the saturation field of $L_3$, the configurations are different from $\mathcal{C}_m$ and fewer in number because, as explained above, spin flips along the apices are not degenerate at the $h = 3$ transition. However, the same procedure may be adopted for $S^{\textrm{sat.}}$ calculations for all three ladders.\\ The transfer matrix procedure for configuration space $\mathcal{C}_m$ in ladder $L_3$ is as follows: $L_3$ may be thought of as a simple linear chain, with the provisos that (a) now 4 states are permitted on each 'site' i.e. $\mathcal{C}_1 = $\{ \begin{tikzpicture}[scale=3] \draw(0,0) -- (1*0.2,0); \draw (0,0) -- (0.5*0.2,0.866*0.2); \draw (0.5*0.2,0.866*0.2) -- (1*0.2,0); \draw[<-](0.5*0.2,0.866*0.2 - 0.08) -- (0.5*0.2,0.866*0.2+1*0.05); \draw[<-](0.0,-0.05) -- (0,1*0.1); \draw[<-](1*0.2,-0.05) -- (1*0.2,0.1); \end{tikzpicture}\tabf, \tabf \begin{tikzpicture}[scale=3] \draw(0,0) -- (1*0.2,0); \draw (0,0) -- (0.5*0.2,0.866*0.2); \draw (0.5*0.2,0.866*0.2) -- (1*0.2,0); \draw[<-](0.5*0.2,0.866*0.2 - 0.08) -- (0.5*0.2,0.866*0.2+1*0.05); \draw[<-](0.0,-0.05) -- (0,1*0.1); \draw[->](1*0.2,-0.05) -- (1*0.2,0.1); \end{tikzpicture}\tabf, \tabf \begin{tikzpicture}[scale=3] \draw(0,0) -- (1*0.2,0); \draw (0,0) -- (0.5*0.2,0.866*0.2); \draw (0.5*0.2,0.866*0.2) -- (1*0.2,0); \draw[<-](0.5*0.2,0.866*0.2 - 0.08) -- (0.5*0.2,0.866*0.2+1*0.05); \draw[->](0.0,-0.05) -- (0,1*0.1); \draw[<-](1*0.2,-0.05) -- (1*0.2,0.1); \end{tikzpicture}\tabf, \tabf \begin{tikzpicture}[scale=3] \draw(0,0) -- (1*0.2,0); \draw (0,0) -- (0.5*0.2,0.866*0.2); \draw (0.5*0.2,0.866*0.2) -- (1*0.2,0); \draw[->](0.5*0.2,0.866*0.2 - 0.08) -- (0.5*0.2,0.866*0.2+1*0.05); \draw[<-](0.0,-0.05) -- (0,1*0.1); \draw[<-](1*0.2,-0.05) -- (1*0.2,0.1); \end{tikzpicture}\tabf \}, and (b) the third of these states may not follow the second of these states on the chain. Following Metcalf and Yang \cite{Metcalf}, the transfer matrix for the present case may be defined as \[ M_{L_3} = \left( \begin{array}{cccc} 1&1&1&1\\ 1&1&0&1\\ 1&1&1&1\\ 1&1&1&1 \end{array} \right), \] where a zero entry indicates the aforementioned disallowed sequence of states. Under periodic boundary conditions, the total number of states is given by the trace of the transfer matrix over the $m$ cells \cite{Metcalf}. That is \begin{equation} \label{eq: eq2} \Omega ^{\textrm{PBC}} _m = \textrm{Tr}[(M_{L_3})^m] = (2+\sqrt{3})^m + (2-\sqrt{3})^m. \end{equation} Note that the trace automatically disallows the reverse of condition (b) i.e. $\mathcal{C}_1(2)$ not following $\mathcal{C}_1(3)$ through the chain ends; thus the non-Hermiticity of $M_{L_3}$ poses no issues. \\ We treat the boundary conditions on a more general footing by solving for the characteristic polynomial of $M_{L_3}$ to give $\lambda ^2(\lambda ^2 -4\lambda + 1) = 0$, from which the difference equation relating the partition function $\Omega _m$ of $m$-cell ladders may be readily read off as \begin{equation} \label{eq: eq3} \Omega _m = 4\Omega _{m-1} - \Omega _{m-2}, \end{equation} for both periodic and free boundary conditions, for each of which we merely have to set different initial conditions in \eqref{eq: eq3}. The partition functions of $\mathcal{C}_m$ with both boundary conditions are compared with that of the Ising linear chain at saturation in table \ref{Kag-chain}.\\ The entropy per cell is then given by the logarithm of the dominant contribution to $\Omega _m$. We may follow this procedure to obtain the saturation entropies of all the illustrated ladders in Fig. \ref{ladders}.
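As a quick numerical check of \eqref{eq: eq2} and \eqref{eq: eq3} (an illustrative sketch in Python, not part of the derivation; the initial values $\Omega_0 = 2$ and $\Omega_1 = 4$ are read off from the closed form):

\begin{verbatim}
# Verify Tr[(M_L3)^m] against the closed form and the
# difference equation Omega_m = 4*Omega_{m-1} - Omega_{m-2}.
import numpy as np

M = np.array([[1, 1, 1, 1],
              [1, 1, 0, 1],   # third state may not follow the second
              [1, 1, 1, 1],
              [1, 1, 1, 1]])

omega = [2, 4]                # closed-form values at m = 0 and m = 1
for m in range(2, 8):
    omega.append(4*omega[-1] - omega[-2])

for m in range(1, 8):
    trace = int(np.trace(np.linalg.matrix_power(M, m)))
    closed = round((2 + np.sqrt(3))**m + (2 - np.sqrt(3))**m)
    assert trace == omega[m] == closed   # 4, 14, 52, 194, ...
\end{verbatim}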
The values and the generating difference equations are tabulated in table \ref{entropies}. The saturation field entropy of ladder $L_2$ agrees with an earlier calculation on the ANNI chain \cite{Sela}. The addition of diagonal bonds, in proceeding from $L_1$ to $L_2$, clearly reduces the residual entropy per lattice site. \\ \section{Kagome lattice} \label{sec: kagome} \begin{figure*}[t] \centering \subfigure[ ]{\label{PBCvsFBC} \includegraphics[scale=0.32]{PBCvsFBC.eps} } \subfigure[ ]{\label{EntropyScalingPap} \includegraphics[scale=0.32]{EntropyScalingPap.eps} } \caption{(a) Free and periodic boundary conditions for the kagome lattice using an $m \times n = 2 \times 2$ system. Upper figure shows FBC: gray (light) triangles indicate spins aligned with the field, red (circled) triangles have finite degeneracies, black (dark) lines indicate the bonds constituting the triangular lattice. Bottom figure shows PBC with gray (dashed) lines indicating the imposition of periodicity. The partition function for each case and the equivalent triangular lattice are also indicated. (b) Scaling of residual saturation entropy, in units of $k_B$, on the Ising kagome lattice as a function of the number of triangles for two different scaling and boundary conditions. $m$ denotes the number of unit cell triangles on each ladder, and the number of such ladders is $n = 100$.} \end{figure*} The kagome lattice, a section of which is illustrated in Fig. \ref{kagome}, may be thought of as ladder $L_3$ repeated in a two dimensional triangular lattice pattern, with a 'site' now being a simple triangle labelled $A, B, C$ in the figure. At zero field, the kagome Ising model is disordered \cite{Kano} while for fields below the onset of ferromagnetism, there is a finite net moment \cite{MoessnerSondhi}. In these two regimes, the residual entropies for the Ising kagome are known accurately \cite{Kano, Udagawa, MoessnerSondhi}. And above the saturation field, the phase is ordered ferromagnetically.\\ At the saturation field, before proceeding with the calculations, we can provide upper and lower bounds for the kagome lattice's entropy. For the lower bound, following the arguments in Ref. [9], there must be more entropy per site than in the triangular lattice because the increased connectivity of the latter serves to restrict the configuration space; we have already seen the reduction in entropy, while constructing $L_2$ from $L_1$, from the addition of diagonal bonds. As regards an upper bound, following similar reasoning, clearly the kagome lattice cannot support more configurations than the ladder $L_3$ from which it is built. Therefore we get the inequality \begin{equation} \label{eq: eq4} 0.3332427 \ldots < S_{\textrm{kag.}} < 0.4389859 \ldots \end{equation} where the lower bound, the saturation entropy for the Ising triangular lattice, is known exactly through the solution of the hard-hexagon model \cite{Baxter}. The upper bound is obtained from the leading contribution to \eqref{eq: eq2} i.e. $\frac{1}{3}\log{(2 + \sqrt{3})}$, with the result divided by $3$ because we consider the entropy per site in \eqref{eq: eq4}.\\ We adopt two approaches for estimating the convergence of the entropy as a function of system size. The first follows the transfer matrix and linear scaling method of Metcalf and Yang \cite{Metcalf}, for which we also provide an alternative reformulation; and the second is the ratios method of Milo\v{s}evi\'{c} et al. \cite{Milosevic}\\ In Fig.
\ref{PBCvsFBC}, we illustrate how free and periodic boundary conditions are effected for an $m \times n = 2 \times 2$ kagome system. The black (dark) bonds indicate the underlying equivalent triangular lattice; this transformation to a triangular lattice makes the remainder of the analysis tractable. \subsection{Transfer matrix: linear scaling} For a two dimensional lattice the transfer matrices are constructed as follows from the one dimensional building chains \cite{Metcalf}, which in our case are the $L_3$ ladders. The matrix element $M_{i,j}$ is set to $0$ if the state $j$ of an $m$-cell ladder cannot follow state $i$ on an adjacent $m$-cell ladder; otherwise the matrix element is $1$. Clearly the matrix is of size $\Omega _{m} \times \Omega _{m}$, which already for $m=6$ gives a little over 7 million matrix elements in $M$. The partition function is then given as before by $\Omega _{m,n} = \textrm{Tr}\left[M^n\right]$ for the $m \times n$ system; as we will see, typically $n = 100$ gives a good estimate up to three to four decimal places for the entropy. To obtain the entropy per $m$ cells, it is assumed that every new ladder added to the finite system multiplies the system's degeneracy by a constant factor of $\alpha$, so that \begin{equation} \label{eq: eq5} \log{\Omega _{m,n}} = n\log{\alpha} + C_{m,n}, \end{equation} gives the entropy per $m$ cells as $\log{\alpha}$, where the $C_{m,n}$ denote the correction terms. It is expected that these terms decrease for increasing $m, n$ values. Thus the procedure is to calculate $\Omega _{m,n}$ and use the linear fit against $n$ to extract the entropy. We show in Fig. \ref{EntropyScalingPap} with full and dashed red lines the convergence of the entropy as the number of triangles is varied for periodic and free boundary conditions. Note that the trace operation automatically imposes periodic boundary conditions along the $n$-direction. Moreover we have checked for a system with free boundary conditions along the $n$-direction as well (using Binder's algorithm in section \ref{sec: Binders}) that the values obtained, and hence the convergence trends, are essentially the same. And as observed in Ref. [14] for other lattices, free boundary conditions do not give rapid convergence using \eqref{eq: eq5}.\\ As noted in the previous section (see \eqref{eq: eq3} or table \ref{entropies} for instance), the degeneracies may also be generated by solving difference equations on the lattice subject to appropriate initial values. For the kagome lattice, a difference equation for each $m$ is obtained and solved to obtain identical results to those in Fig. \ref{EntropyScalingPap}. However this alternative and equivalent approach to Metcalf and Yang's procedure of matrix multiplication followed by the trace operation offers, at the present time, no computational gain because determining the characteristic polynomial of a matrix is about as hard as matrix multiplication with today's algorithms \cite{Keller-Gehrig}. \subsection{Transfer matrix: ratios} In the ratio method, the correction terms $C_{m,n}$ are substantially reduced by using a sequence of estimators for the entropy as \cite{Milosevic} \begin{equation} \label{eq: eq6} S_{m,n} = \log{\left[(\frac{\Omega _{m+1,n+1}}{\Omega _{m+1,n}})(\frac{\Omega _{m,n}}{\Omega _{m,n+1}})\right]}. \end{equation} For relatively large $m$ and $n$ values each added chain will multiply the system's degeneracy by a factor of $\alpha = \beta ^{3m}$, where $\beta$ is the factor associated with each site.
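To see the cancellation explicitly, write $\Omega_{m,n}\simeq A_m\,\alpha_m^{\,n}$ for large $n$, with $\alpha_m=\beta^{3m}$ and all sub-leading corrections collected in the prefactor $A_m$; then \[ \left(\frac{\Omega _{m+1,n+1}}{\Omega _{m+1,n}}\right)\left(\frac{\Omega _{m,n}}{\Omega _{m,n+1}}\right) = \frac{\alpha_{m+1}}{\alpha_m} = \beta^{3}, \] in which the prefactors $A_m$ drop out entirely.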
Thus \eqref{eq: eq6} is seen to give the residual entropy per cell with considerable diminution of the correction terms. \\ As plotted in Fig. \ref{EntropyScalingPap} with the dotted and dashed-dotted black lines, the use of \eqref{eq: eq6} provides faster convergence for the entropy compared to \eqref{eq: eq5}; in contrast to \eqref{eq: eq5}, \eqref{eq: eq6} seems better suited for free boundary conditions. Also shown in the figure is the value of the estimator $S_{5,100}/k_B = 0.39360$ obtained from \eqref{eq: eq6} with free boundary conditions (using Binder's algorithm in section \ref{sec: Binders}), which differs from the $(\log{\alpha})_{5,100}$ value obtained from \eqref{eq: eq5} with periodic boundary conditions by approximately $0.00001$, thus giving three certain decimal places with an uncertainty in the fourth.\\ \begin{table*}[th!] \caption{\label{entropies}Residual entropies per site, in units of $k_B$, at the saturation fields for the lattices in Fig. \ref{lattice}. The difference equations for the ladders are independent of the boundary conditions but the total number of states changes for each \textit{finite} segment.} \setlength{\tabcolsep}{0.5em} \begin{ruledtabular} \begin{tabular}{ccr} Lattice&Difference equation &Entropy\\ \hline $L_1$&$x_n = 7x_{n-1} - 7x_{n-2} + x_{n-3}$&$\frac{1}{4}\log{(3 + 2\sqrt{2})} = 0.440686 \ldots$\\ $L_2$&$x_n = 5x_{n-1} - 2x_{n-2} + x_{n-3}$&$\frac{1}{4}\log{\left[5+ (\frac{187 - 9\sqrt{93}}{2})^{1/3} + (\frac{187 + 9\sqrt{93}}{2})^{1/3}\right]} -\frac{1}{4}\log{3} = 0.382245 \ldots$\\ $L_3$&$x_n = 3x_{n-1} - x_{n-2}$&$\frac{1}{3}\log{\left [(3 + \sqrt{5})/2\right ]} = 0.320807 \ldots$\\ Kagome & - & 0.393589(6) \end{tabular} \end{ruledtabular} \end{table*} \section{Binder's algorithm} \label{sec: Binders} We have seen in the preceding section that free boundary conditions along with \eqref{eq: eq6} provide a rapidly convergent sequence for the entropy. The main limitation, however, was the calculation of $\Omega _{m,n}$ for large $\{m,n\}$ values. This may be achieved by employing Binder's algorithm towards an exact evaluation of the partition function of finite lattice systems \cite{Binder}. To briefly recapitulate, the partition function of a system of size $\{m, n\}$ is expressed in terms of the degeneracies $\gamma _{m,n}(i)$ of the $n^{\textrm{th}}$ ladder in its $i^{\textrm{th}}$ state. Then clearly \begin{equation} \label{eq: eq7} \Omega _{m,n} = \sum_i\gamma _{m,n}(i). \end{equation} Now the degeneracies of an added ladder for the $\{m, n+1\}$ system may be recursively computed by \begin{equation} \label{eq: eq8} \gamma _{m,n+1}(i) = \sum _{i'}\gamma _{m,n}(i'), \end{equation} with the summation running over only those values of $i'$ such that state $i$ may be adjacent to it. With this, we have computed the partition function of over 1300 spins with modest computational effort. For instance, we are able to reproduce up to 10 digits in the residual saturation field entropy value for the square lattice \cite{Milosevic} using twenty 10-rung $L_1$ ladders.\\ Using \eqref{eq: eq6}--\eqref{eq: eq8} we compute $S_{6,50}$, $S_{7,50}$ and $S_{8,50}$ to give six stable digits for the kagome lattice saturation field entropy \begin{equation} \label{eq: binder} S_{\textrm{kag}}/k_B = 0.393589(6). \end{equation}
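In practice, the recursion \eqref{eq: eq8} amounts to repeatedly applying the adjacency rule to the vector of degeneracies. The following minimal sketch (in Python; variable names are ours) reproduces the free-boundary counts of table \ref{Kag-chain} when fed the single-cell constraint matrix of ladder $L_3$; the kagome computation runs the same loop with the inter-ladder adjacency matrix instead:

\begin{verbatim}
# Binder-style recursion: gamma_new(i) = sum of gamma(i') over the
# states i' that state i may follow; exact integer arithmetic.
import numpy as np

M = np.array([[1, 1, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 1],
              [1, 1, 1, 1]], dtype=object)

gamma = np.ones(4, dtype=object)   # one cell: each state appears once
counts = [gamma.sum()]             # Omega_1 = 4
for _ in range(3):
    gamma = M.T @ gamma            # append one more cell
    counts.append(gamma.sum())
print(counts)                      # [4, 15, 56, 209] (free boundaries)
\end{verbatim}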
We compare this with low temperature Monte Carlo simulation results and the Bethe approximation for pyrochlore spin ice which, at the saturation field, may be described by a two-dimensional network of Ising pseudo-spins on the kagome lattice \cite{Isakov}. Scaling the saturation field results of Isakov et al. by a factor of 4/3 (because the corner spin in the pyrochlore tetrahedron is considered frozen, giving a high temperature entropy per site of only $\frac{3}{4}\log{2}$), we obtain the relevant results to be \begin{eqnarray} \label{eq: Isakov} S_{\textrm{kag.}}^{\textrm{MC}}/k_B (T/J = 0.15) \approx 0.397, \nonumber \\ S_{\textrm{kag.}}^{\textrm{Bethe}}/k_B \approx 0.38772. \end{eqnarray} This value is to be compared with the experimentally observed peak in the pyrochlore compound Dy$_{2}$Ti$_{2}$O$_{7}$ close to the high-field termination of the plateau \cite{Hiroi} where the physics was argued to be governed by decoupled kagome planes \cite{Isakov}. We point out, along with the authors of Ref. [7], that the computed values (in \eqref{eq: binder} and \eqref{eq: Isakov}) are slightly higher than the magnitude of the experimentally measured peak. For the spin ice compound at saturation, the corresponding value of approximately $2.4544$ Joules/deg./mole (obtained by multiplying \eqref{eq: binder} by $3R/4$, where $R$ is the gas constant) may be compared with the first prominent experimental peak in the saturation entropy at $T = 1K$ of about $2.1$ Joules/deg./mole \cite{Hiroi}. The difference between the two values suggests that either more precise measurements with error bars are required on this compound close to the transition field, or that the applicability of the Ising model on decoupled kagome planes close to this spin-ice material's saturation field might need to be slightly reconsidered. \section{Summary} We have considered the degenerate space of states of a few Ising ladders and the Ising kagome lattice at the saturation external field. For the ladders, by a simple redefinition of a site, the residual entropy may be exactly computed. We treat the generation of states for periodic and free boundary conditions on a general footing by use of difference equations.\\ For the kagome lattice, we are able to provide six stable digits for the residual entropy by calculating the exact partition function of over 1300 spins using Binder's algorithm implemented on a standard computer. Our accurate result compares reasonably with approximate results from low temperature Monte Carlo simulations and the Bethe approximation for an equivalent system. We believe that, by constructing appropriate ladders, the residual saturation field entropies of geometrically complex lattices, like the pyrochlore, may be computed more easily after their transformation to standard lattice structures. We also commented on the relation to the experimental situation in the spin-ice compound Dy$_{2}$Ti$_{2}$O$_{7}$.\\ The primary results are summarized in table \ref{entropies}. \acknowledgments The author thanks Oleg Derzhko, Sergei Isakov and Hartmut Monien for discussions, and the latter for comments on the manuscript. Support of the Bonn-Cologne Graduate School through the Deutsche Forschungsgemeinschaft is gratefully acknowledged. \input{BBL_file.bbl} \end{document}
\section{Introduction} The gauge/gravity duality is a conjectured relation between a classical gravity theory and a strongly coupled gauge theory \cite{Maldacena:1997re,Witten:1998qj}. This duality has been utilized to study various areas of physics such as condensed matter, quantum information theory and the quark-gluon plasma \cite{Natsuume:2014sfa,CasalderreySolana:2011us,Hartnoll:2009sz,Camilo:2016kxq}. A significant example of the gauge/gravity duality is the AdS/CFT correspondence, which states that type IIB string theory on ($d+1$)-dimensional AdS spacetimes is dual to $d$-dimensional superconformal field theories. The gauge/gravity duality provides a powerful tool to investigate various physical quantities in quantum information theory by utilizing their corresponding gravity duals. Some of these quantities are entanglement entropy, mutual information and entanglement of purification. Although it may be difficult to calculate these quantities in the field theory, they are relatively simple to calculate on the gravity side. Surprisingly, the gauge/gravity duality can be extended to more general cases, such as field theories which are not conformal. There are many different families of non-conformal theories, for which one can study the effect of non-conformality on their physical observables \cite{Attems:2016ugt,Pang:2015lka,Rahimi:2016bbv}. Entanglement entropy (EE) is a significant non-local quantity which measures the quantum entanglement between a subsystem $A$ and its complement $\bar{A}$. It has been shown that the EE in QFTs contains a short-distance divergence which satisfies an area law, and thus the EE is a scheme-dependent quantity \cite{Bombelli:1986rw,Srednicki:1993im}. Although the EE has a simple definition in the QFT, it is very difficult to compute using QFT techniques \cite{Holzhey:1994we,Calabrese:2004eu,Casini:2009sr,Hertzberg:2012mn,Rosenhaus:2014ula,Rosenhaus:2014zza,Iso:2021vrk,Iso:2021dlj,Sorkin:2012sn,Chen:2020ild}. Fortunately, the gauge/gravity duality has made this problem easier, and the EE has a simple holographic dual. For a spatial region $A$ in the boundary field theory, the EE of $A$ corresponds to the area of the minimal surface extended in the bulk whose boundary coincides with the boundary of $A$, called the Ryu-Takayanagi (RT) surface; this is known as the RT prescription \cite{Ryu:2006bv,Ryu:2006ef}. Using this prescription, many studies have been done in the literature to better understand the EE, for example, see \cite{Casini:2011kv,Myers:2012ed,Lokhande:2017jik,Rahimi:2018ica,Fischler:2012ca,Ben-Ami:2014gsa,Pang:2014tpa,Kundu:2016dyk,Ebrahim:2020qif,Buniy:2005au}. When the total system is described by a mixed state $\rho_{AB}$, the EE is not a suitable quantity to measure the full correlation between two subsystems $A$ and $B$. In this case, there are known quantities in quantum information theory which measure the total (classical and quantum) correlation between subsystems $A$ and $B$. One of the most famous and important of these quantities is the mutual information (MI), which measures the total correlation between subsystems $A$ and $B$ and is defined as $I(A,B)=S_A+S_B-S_{A\cup B}$, where $S_X$ is the EE of the region $X$ \cite{Casini:2004bw,Wolf:2007tdq,Fischler:2012uv,Allais:2011ys,Hayden:2011ag,MohammadiMozaffar:2015wnx,Asadi:2018ijf,Ali-Akbari:2019zkf}. The MI is a finite quantity since the divergent pieces in the EE cancel out, and the subadditivity property of the EE also guarantees that the MI is always non-negative.
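Both of these properties follow directly from the definition: subadditivity, $S_A+S_B\geq S_{A\cup B}$, immediately gives $I(A,B)\geq 0$, while for disjoint regions the boundary of $A\cup B$ is the union of the two boundaries, so the divergent area-law pieces cancel between the three terms; schematically, in a $(3+1)$-dimensional QFT, \[ S_X = \#\,\frac{\mathrm{Area}(\partial X)}{\epsilon^{2}} + \mathrm{finite}, \qquad \partial(A\cup B)=\partial A\cup \partial B, \] where $\epsilon$ is the UV cut-off and $\#$ is a scheme-dependent coefficient.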
When the subsystem $B$ is the complement of $A$, $B=\bar{A}$, the MI is proportional to the EE. Although the EE is dominated by the thermal entropy in the high temperature limit, it has been shown that the MI obeys an area law in the same limit \cite{Wolf:2007tdq,Fischler:2012uv}. The entanglement of purification (EoP) is another quantity which measures the total correlation between two disjoint subsystems $A$ and $B$ for a mixed state $\rho_{AB}$ \cite{arXiv:quant-ph/0202044v3,arXiv:quant-ph/1502.01272}. In general, by enlarging the Hilbert space we can purify a mixed state. In fact, purification refers to the fact that every mixed state acting on a Hilbert space can be viewed as the reduced state of some pure state. The EoP is defined by the minimum of the EE between the subsystems $AA'$ and $BB'$ over all possible purifications, where $A'$ and $B'$ are arbitrary. Obviously, when the two subsystems $A$ and $B$ are complementary to each other, the EoP between them reduces to the EE of subsystem $A$. It is very difficult to compute the EoP in QFT; nevertheless, it has been computed by numerical lattice calculations \cite{Bhattacharyya:2018sbw,Caputa:2018xuf,Camargo:2020yfv}. It has been conjectured that the EoP is holographically dual to the minimal cross-section of the entanglement wedge $E_w$ of $\rho_{AB}$, which satisfies the basic properties of the EoP \cite{Takayanagi:2017knl,Nguyen:2017yqw}. $E_w$ of $\rho_{AB}$ reduces to the holographic entanglement entropy (HEE) when the total system $A\cup B$ is pure. In the literature, other correlation measures such as reflected entropy, odd entropy and logarithmic negativity are discussed, and all of them are connected with the entanglement wedge cross-section \cite{Dutta:2019gen,Tamaoka:2018ned,Kudler-Flam:2018qjo}. Various aspects of the EoP are discussed in the literature; for example, see \cite{Yang:2018gfq,Espindola:2018ozt,Bao:2017nhh,Umemoto:2018jpc,Bao:2018gck,Bao:2018fso,Bhattacharyya:2019tsi,Liu:2019qje,Jokela:2019ebz,BabaeiVelni:2019pkw,BabaeiVelni:2020wfl,Amrahi:2020jqg,Amrahi:2021lgh, Sahraei:2021wqn,Guo:2019azy,Ghodrati:2019hnn,Du:2019emy,Guo:2019pfl,Bao:2019wcf,Harper:2019lff,Camargo:2021aiq,DiNunno:2021eyf,Saha:2021kwq,Lin:2020yzf,Liu:2020blk,Fu:2020oep, Huang:2019zph,Jain:2020rbb,Ghodrati:2021ozc}. In this paper we consider a four-dimensional QCD-like model, which is non-conformal, at zero and finite temperature. The zero temperature model is dual to a modified AdS$_5$ (MAdS) background and the finite temperature one is dual to a modified AdS$_5$ black hole (MBH) \cite{Andreev:2006ct,Andreev:2006eh}. There is a thermal phase transition in the MBH background, which occurs at the point where the horizon approaches the lower bound on the radial coordinate. The rest of the paper is organized as follows. Section \ref{section2} includes a brief review of the MAdS and MBH backgrounds. In section \ref{section3}, after a short review of the HEE, we compute and study this quantity in our model for a strip entangling subsystem at both zero and finite temperature. In order to obtain analytical results we use a systematic expansion up to a given order in the expansion parameters and consider specific limits of the QFT, in particular what we call the high energy limit. Since we cannot obtain analytical results in the low energy limit, we do not study this limit. Section \ref{section4} will be devoted to the study of the MI at zero and finite temperature.
Using the analytical results obtained for the HEE, we derive analytical expressions for the MI corresponding to two strip entangling subsystems in some specific limits. Section \ref{section5} is reserved for the study of the EoP at both zero and finite temperature. At zero temperature, we consider two favorable limits, called high and intermediate energy, and obtain analytical results in these limits. At finite temperature, we can obtain analytical expressions for the EoP in four regimes, namely the low and intermediate temperature limits at high and at intermediate energy. We conclude in section \ref{section6} with a discussion of our results. We present the full details of our calculations for the HEE in appendix \ref{Appendix1}. Appendices \ref{Appendix2} and \ref{Appendix3} include the numerical constants and some of the calculations for the MI and EoP, respectively. \section{Backgrounds}\label{section2} We are interested in studying the EE, MI and EoP in (3+1)-dimensional non-conformal field theories using the framework of holography. Therefore, we start with a holographic background in five dimensions, which is called MAdS, and its black hole version, MBH. These backgrounds are dual to QCD-like theories at zero and finite temperature, respectively. The MAdS background is described by the following metric \cite{Andreev:2006ct} \begin{align}\label{MAdS} ds^2=\frac{r^2}{R^2}g(r)\left(-dt^2+d\vec{x}^2+\frac{R^4}{r^4}dr^2\right) , \ \ \ \ \ g(r)=e^{\frac{r_c^2}{r^2}},\ \ \ \ \ \ r_c=R^2\sqrt{\frac{c}{2}}, \end{align} and the MBH background is described by \cite{Andreev:2006eh} \begin{align}\label{MBH} ds^2=\frac{r^2}{R^2}g(r)\left(-f(r)dt^2+d\vec{x}^2+\frac{R^4}{r^4f(r)}dr^2\right) , \ \ \ \ \ f(r)=1-\frac{r_H^4}{r^4}, \end{align} where $\vec{x}\equiv(x,y,z)$, $r$ is the radial coordinate, $R$ is the asymptotic AdS$_5$ radius and $r_H$ is the location of the horizon. The QCD-like model lives on the boundary at $r\rightarrow \infty$. $r_c$ is a lower bound on the radial coordinate, related to the deformation parameter $c$ as $r_c=R^2\sqrt{\frac{c}{2}}$. The value of $c$ is fixed from the trajectory of the $\rho$ meson and is approximately $0.9$ GeV$^2$ \cite{Andreev:2006eh}. We assign an energy scale $\Lambda_c$ to $r_c$, defined as $\Lambda_c\equiv\sqrt{c}$. Since $c$ has dimension of (energy)$^2$, $\Lambda_c$ has dimension of energy. In the limit of $c\rightarrow 0$, the background \eqref{MAdS} becomes AdS$_5$ and $r$ is not bounded \cite{Andreev:2006ct}. Also in this limit, the background \eqref{MBH} becomes the AdS$_5$ black hole. In other words, in the $c\rightarrow 0\ (\Lambda_c \rightarrow 0)$ limit we have a conformal field theory. The thermodynamic properties of these backgrounds are studied in \cite{Lezgi:2020bkc}, where some thermodynamic quantities such as entropy, density and pressure are expressed in terms of the non-conformal parameters. \section{Entanglement entropy}\label{section3} The entanglement entropy is one of the most important quantities which measure the quantum entanglement among different degrees of freedom of a quantum mechanical system \cite{Casini:2009sr,Horodecki:2009zz}. In fact, the EE has emerged as a valuable tool to probe the physical information in quantum systems. To define the EE, we consider a quantum system which is described by the pure state $\vert \psi \rangle$ and density matrix $\rho=\vert \psi \rangle\langle \psi \vert$.
By dividing the total system into two subsystems, $A$ and its complement $\bar{A}$, the total Hilbert space becomes factorized, $\mathcal{H}_{tot}=\mathcal{H}_A\otimes \mathcal{H}_{\bar{A}}$. The reduced density matrix for the subsystem $A$, $\rho_A$, is obtained by tracing out the degrees of freedom in the complementary subsystem, $\rho_A=Tr_{\bar{A}}(\rho)$. The entanglement between the subsystems $A$ and $\bar{A}$ is measured by the entanglement entropy, which is a non-local quantity defined as the von Neumann entropy of the reduced density matrix $\rho_A$ \begin{align} S_A=-Tr(\rho_A \log \rho_A). \end{align} In the framework of the gauge/gravity duality, the RT-proposal is a remarkably simple prescription to compute the entanglement entropy in terms of a geometrical quantity in the bulk \cite{Ryu:2006bv,Ryu:2006ef}. According to the RT-prescription, the HEE is given by \begin{align}\label{HEE} S_A=\frac{\rm{Area}(\Gamma_A)}{4G_N^{(d+2)}}, \end{align} where $G_N^{(d+2)}$ is the $(d+2)$-dimensional Newton constant. $\Gamma_A$ is a codimension-2 minimal hypersurface in the bulk (the RT-surface) whose boundary $\partial \Gamma_A$ coincides with the boundary of region $A$, $\partial \Gamma_A=\partial A$. We also require $\Gamma_A$ to be homologous to region $A$. It can be shown that the EE for two disjoint subsystems $A$ and $B$ satisfies the so-called strong subadditivity condition $S_A+S_B\geqslant S_{A\cup B}+S_{A\cap B}$ \cite{Headrick:2007km}. Using the RT-prescription, the HEE has been calculated in the AdS soliton background, and it was shown that the HEE decreases under tachyon condensation \cite{Nishioka:2006gr}. In \cite{Klebanov:2007ws,Pakman:2008ui} the HEE is used to determine the phase structure of confining field theories. Using QFT techniques, the dependence of the EE on the renormalized mass has been studied in scalar field theory, and it was shown that the area law that exists in the free field theory persists in the interacting theory through the mass renormalization \cite{Hertzberg:2012mn}. In \cite{Rosenhaus:2014ula} it is shown that the EE for a general theory can be expressed in terms of a spectral function of this theory. The EE has also been studied perturbatively for a CFT perturbed by a relevant operator and a deformed planar entangling surface \cite{Rosenhaus:2014zza}. The renormalization of the UV divergences and the non-Gaussianity of the vacuum, which are two important issues for the EE in interacting field theory, are studied in \cite{Iso:2021vrk,Iso:2021dlj}. In \cite{Sorkin:2012sn}, a covariant definition of the EE is proposed in terms of the spacetime two-point correlation function for Gaussian theories. This formulation was developed for a non-Gaussian scalar field theory in \cite{Chen:2020ild}. In this section we study the HEE in a non-conformal field theory at zero, low and high temperature. \begin{figure} \centering \includegraphics[scale=0.4]{HEE} \caption{A simplified sketch of a strip region $A$ with width $l$ and length $L$. $\Gamma_A$ is the RT-surface of the region $A$, $r^*$ is the turning point of this surface and $r_c$ is the lower bound on the radial coordinate, which is related to the deformation parameter $c$.} \label{EE} \end{figure} We want to compute the HEE for a rectangular strip with width $l$ and length $L(\rightarrow \infty)$, depicted in figure \ref{EE}, specified by \begin{align}\label{config} -\frac{l}{2}&\leq x(r) \leq \frac{l}{2},\ \ \ \ \ \ \ \ \ -\frac{L}{2}\leq y\ \&\ z \leq \frac{L}{2}.
\end{align} In order to perform the computations analytically, we need to focus on some specific limits of the non-conformal model at zero and finite temperature, which are dual to the backgrounds \eqref{MAdS} and \eqref{MBH}, respectively. We consider the high energy limit ($l\Lambda_c\ll 1$) at zero ($T=0$), low ($lT\ll 1$) and high ($lT\gg 1$) temperature, where the temperature is given by $T=\frac{r_H}{R^2\pi}$ \cite{Andreev:2006eh}. In the high energy limit, the energy scale corresponding to the subsystem $A$ should be much larger than the energy scale $\Lambda_c$, i.e. $\Lambda _c\ll \frac{1}{l}$. In the MAdS background \eqref{MAdS} the high energy limit corresponds to $r_c\ll r^*$, and in the MBH background \eqref{MBH} the low and high temperature limits at high energy are equivalent to $r_H \ \& \ r_c\ll r^*$ and $r_c\ll r^* \rightarrow r_H$, respectively, where $r^*$ is the turning point of the RT-surface $\Gamma_A$. In figure \ref{EEhighE} we show these regimes schematically. Hereafter, we set the AdS radius to one. \subsection{HEE at zero temperature}\label{sectionzeroEE} Using the RT-prescription eq.\eqref{HEE} and the background \eqref{MAdS}, the area of the surface corresponding to the configuration in eq.\eqref{config}, $\mathcal{A}$, is given by \begin{align}\label{areaa} \mathcal{A}=2L^2\int_{r^*}^\infty r e^{\frac{3}{2}\left(\frac{r_c}{r}\right)^2}\sqrt{1+r^4x'(r)^2} dr. \end{align} Since there is no explicit $x(r)$ dependence in eq.\eqref{areaa}, the corresponding Hamiltonian is constant and one can easily obtain \begin{figure} \centering \subfloat[]{\includegraphics[scale=0.2]{EEhighE}\label{aa}} \subfloat[]{\includegraphics[scale=0.2]{EElowThighE}\label{bb}} \subfloat[]{\includegraphics[scale=0.20]{EEhighThighE}\label{cc}} \caption{(a): The high energy limit, i.e. $l\Lambda_c\ll 1$ or equivalently $r_c\ll r^*$, at zero temperature. (b): The high energy limit at low temperature, i.e. $lT\ll 1$ or equivalently $r_H\ll r^*$. (c): The high energy limit at high temperature, i.e. $lT\gg 1$ or equivalently $r^*\rightarrow r_H$.} \label{EEhighE} \end{figure} \begin{align}\label{xdif} x'(r)=\pm \frac{{r^*}^3}{r^5}e^{\frac{3}{2}\left(\left(\frac{r_c}{r^*}\right)^2-\left(\frac{r_c}{r}\right)^2\right)}\left[1-\left(\frac{r^*}{r}\right)^6e^{3\left(\left(\frac{r_c}{r^*}\right)^2-\left(\frac{r_c}{r}\right)^2\right)}\right]^{-\frac{1}{2}}, \end{align} where the constant is chosen to be $\frac{\sqrt{r^6 e^{3\left(\frac{r_c}{r}\right)^2}-{r^*}^6 e^{3(\frac{r_c}{r^*})^2}}}{r^2}$. Plugging eq.\eqref{xdif} back into eq.\eqref{areaa}, we find \begin{align}\label{HEE1} \mathcal{A}=2{r^*}^2L^2\int _{r^*\epsilon}^1 u^{-3} e^{\frac{3}{2}(\frac{r_c}{r^*})^2u^2}\left[1-u^6e^{3(\frac{r_c}{r^*})^2(1-u^2)}\right]^{-\frac{1}{2}}du, \end{align} where $u=\frac{r^*}{r}$ and $\epsilon$ is an ultraviolet cutoff. By integrating the differential equation \eqref{xdif}, the relation between $l$ and $r^*$ is given by \begin{align}\label{lengthh} l=\frac{2}{r^*}\int_0^1 u^3 e^{\frac{3}{2}\left(\frac{r_c}{r^*}\right)^2(1-u^2)}\left[1-e^{3\left(\frac{r_c}{r^*}\right)^2(1-u^2)}u^6\right]^{-\frac{1}{2}} du. \end{align} Unfortunately, eqs. \eqref{HEE1} and \eqref{lengthh} cannot be solved analytically. Therefore, we need to use the following binomial expansion \cite{Fischler:2012ca,Fischler:2012uv} \begin{align}\label{expansionn} (1+x)^{-r}&=\sum\limits_{n=0}^{\infty}(-1)^n\binom{r+n-1}{n}x^n , \ \ \ \ \ \ \ |x|<1, \end{align} where $x$ and $r$ are real numbers and $r>0$.
By identifying $x=-e^{3\left(\frac{r_c}{r^*}\right)^2(1-u^2)}u^6$ and using the facts that $0<u<1$ and $r_c<r^*$, one can easily see that $\vert x\vert<1$ and hence the sum is well-defined. Using eq.\eqref{expansionn}, we can write eqs. \eqref{lengthh} and \eqref{HEE1} as follows \begin{align}\label{Lengthh1} l=\frac{2}{r^*}\sum\limits_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)}\int_0^1 u^{6n+3} e^{3(n+\frac{1}{2})\left(\frac{r_c}{r^*}\right)^2(1-u^2)} du, \end{align} \begin{align}\label{HEE2} \mathcal{A}=2{r^*}^2L^2\sum\limits_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)}\int_{r^*\epsilon}^1 u^{6n-3}e^{(3n+(\frac{3}{2}-3n)u^2)(\frac{r_c}{r^*})^2} du. \end{align} In order to find $\mathcal{A}$ as a function of $l$, we should solve eq.\eqref{Lengthh1} for $r^*$ and then substitute it back into eq.\eqref{HEE2}. Since we cannot solve eq.\eqref{Lengthh1} analytically to find $r^*$ as a function of $l$, we need to focus on the high energy limit ($l\Lambda _c\ll 1$). Note that in the low energy limit, i.e. $l\Lambda _c\gg 1$ or equivalently $r^*\rightarrow r_c$, we cannot obtain analytical results, and therefore we do not study this limit. \subsubsection{High Energy limit} In section \ref{sectionzeroEE}, we saw that in order to find $\mathcal{A}$ in terms of $l$, we must consider the high energy limit. This limit can be interpreted in terms of the bulk data as $r_c \ll r^*$, see figure \ref{aa}. Hence the RT-surface $\Gamma$ is restricted to be near the boundary. The boundary geometry is AdS, so we should obtain the conformal entanglement entropy as the leading term. The non-conformal effects appear as sub-leading terms corresponding to the deviations from the boundary geometry. These effects are small and hence can be computed perturbatively. By expanding eq.\eqref{Lengthh1} up to 4th order in $\frac{r_c}{r^*}$ we obtain \begin{align}\label{lengthhh2} l=\frac{2}{r^*}\left[ a_1+a_2 \left(\frac{r_c}{r^*}\right)^2+a_3 \left(\frac{r_c}{r^*}\right)^4\right] , \ \ \ \ \ \ \ \ a_1, a_2, a_3>0, \end{align} where the numerical coefficients $a_1$, $a_2$ and $a_3$ are given by eq.\eqref{qeoff1} in appendix \ref{Appendix1}. Solving eq.\eqref{lengthhh2} perturbatively for $r^*$ and doing some calculations, we finally reach the following expression for the HEE (for more details of the calculations, see appendix \ref{Appendix1}) \begin{align}\label{HEEz} S=\frac{1}{4G_N^{(5)}}\left(\frac{L}{\epsilon }\right)^2-\frac{3}{8G_N^{(5)}}\Lambda_c^2 L^2\log (\Lambda_c \epsilon)+S_{finite}(l,l\Lambda_c) , \end{align} where $S_{finite}(l,l\Lambda_c)$ is the finite part of the HEE, which is given by \begin{align} S_{finite}(l,l\Lambda_c)=\frac{1}{4G_N^{(5)}}\frac{L^2}{l^2}\bigg[ \kappa_1 &+\left(\kappa_2+\frac{3}{2}\log(l\Lambda_c) \right) (l \Lambda_c)^2\cr &+\kappa_3(l\Lambda_c)^4\bigg], \ \ \ \ \ \ \ \ \ \kappa _1<0, \ \kappa_2 , \kappa_3>0, \end{align} where $\kappa_1$, $\kappa_2$ and $\kappa_3$ are numerical coefficients which are given by eq.\eqref{Kappa} in appendix \ref{Appendix1}. The first term in eq.\eqref{HEEz} is divergent in the limit of $\epsilon\rightarrow 0$ and already appears in the AdS background. From the second term in eq.\eqref{HEEz} we observe that there is a logarithmic divergence because of the non-conformality. Since the field theory involves the scale $\Lambda_c$, the finite part can depend on $l$ only through the dimensionless combination $l\Lambda_c$.
Therefore, we redefine $S_{finite}$ as follows \begin{align}\label{HEEhighh1} \hat{\mathcal{S}}_{finite}(l\Lambda_c)\equiv\frac{4G_N^{(5)}S_{finite}(l,l\Lambda_c)}{L^2\Lambda_c^2}=\frac{1}{(l\Lambda_c)^2}\bigg[& \kappa_1 +\left(\kappa_2+\frac{3}{2}\log(l\Lambda_c) \right) (l \Lambda_c)^2\cr &+\kappa_3(l\Lambda_c)^4\bigg], \ \ \ \ \kappa _1<0, \kappa_2 , \kappa_3>0, \end{align} where $\hat{\mathcal{S}}_{finite}(l\Lambda_c)$ is the redefined $S_{finite}(l,l\Lambda_c)$. By this redefinition, the limit $l\Lambda_c \rightarrow 0$ becomes meaningful and intuitive. The first term in eq.\eqref{HEEhighh1} is the contribution of the AdS boundary and corresponds to the entanglement entropy in the conformal field theory. Obviously, this term is negative. The other two terms in eq.\eqref{HEEhighh1} are the non-conformal effects. In the second term, the logarithmic piece is dominant and is always negative in the high energy limit. This shows that the non-conformal effects decrease $\hat{\mathcal{S}}_{finite}(l\Lambda_c)$, which is in complete agreement with \cite{Rahimi:2016bbv}. The logarithmic term causes no difficulty: although it is large for small $l\Lambda_c$, it can still be considered as a perturbation compared to the first term. \subsection{HEE at finite temperature} In this section we want to study the thermal behavior of the entanglement entropy in the non-conformal field theory. Therefore, we calculate the HEE in the MBH background \eqref{MBH}. Using eqs. \eqref{HEE} and \eqref{MBH} and doing calculations similar to those of the previous section, we reach the following expressions for $l$ and $\mathcal{A}$ \begin{align}\label{LengthhT} l=\frac{2}{r^*}\int_0^1 u^3 e^{\frac{3}{2}\left(\frac{r_c}{r^*}\right)^2(1-u^2)}\left[1-e^{3\left(\frac{r_c}{r^*}\right)^2(1-u^2)}u^6\right]^{-\frac{1}{2}}\left[1-(\frac{r_H}{r^*})^4u^4\right]^{-\frac{1}{2}} du, \end{align} \begin{align}\label{HEET} \mathcal{A}=2{r^*}^2L^2\int _{r^*\epsilon}^1 u^{-3} e^{\frac{3}{2}(\frac{r_c}{r^*})^2u^2}\left[1-u^6e^{3(\frac{r_c}{r^*})^2(1-u^2)}\right]^{-\frac{1}{2}}\left[1-(\frac{r_H}{r^*})^4u^4\right]^{-\frac{1}{2}} du, \end{align} where again $u=\frac{r^*}{r}$ and $\epsilon$ is an ultraviolet cutoff. Unfortunately, the above integrals cannot be solved analytically. Therefore, in order to calculate eqs. \eqref{LengthhT} and \eqref{HEET}, we use eq.\eqref{expansionn} by identifying $x=-\left(\frac{r_H}{r^*}\right)^4u^4$ and $x=-e^{3\left(\frac{r_c}{r^*}\right)^2(1-u^2)}u^6$, and we obtain \begin{align}\label{lengthhT1} l=\frac{2}{r^*}\sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi \Gamma(n+1)\Gamma(m+1)}\left(\frac{r_H}{r^*}\right)^{4m}\int_0^1 u^{6n+4m+3} e^{3(n+\frac{1}{2})\left(\frac{r_c}{r^*}\right)^2(1-u^2)} du, \end{align} \begin{align}\label{HEET1} \mathcal{A}=2{r^*}^2L^2\sum\limits_{n=0}^{\infty}\sum\limits_{m=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\int _{r^*\epsilon}^1u^{6n+4m-3}e^{(3n+(\frac{3}{2}-3n)u^2)(\frac{r_c}{r^*})^2} du. \end{align} One can check that $\vert x\vert < 1$ for any allowable values of the background parameters, that is $0<u<1$, $r_H< r^*$ and $r_c < r^*$, and hence the sums are convergent. Now, we should solve eq.\eqref{lengthhT1} for $r^*$ and use this in eq.\eqref{HEET1} to get $\mathcal{A}$ in terms of $l$.
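Although the relation between $l$ and $r^*$ has no closed-form inverse, it is easy to invert numerically, which provides a useful cross-check of the perturbative expansions developed below. A minimal sketch (Python with SciPy; the values of $r_c$, $r_H$ and $l$ are illustrative, in units $R=1$):
\begin{verbatim}
# Numerically invert the width/turning-point relation, eq.(LengthhT):
# given a strip width l, find the turning point r* in the MBH background.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

r_c, r_H = 0.67, 0.30          # illustrative values (R = 1)

def width(rs):                 # l(r*) from the integral form of eq.(LengthhT)
    a = (r_c / rs)**2
    def integrand(u):
        g = np.exp(3.0 * a * (1.0 - u**2))
        return u**3 * np.sqrt(g) / np.sqrt((1.0 - g * u**6)
                                           * (1.0 - (r_H / rs)**4 * u**4))
    val, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * val / rs

l_target = 0.5
r_star = brentq(lambda rs: width(rs) - l_target,
                1.05 * max(r_c, r_H), 50.0)   # bracket may need adjusting
print(r_star, width(r_star))
\end{verbatim}
The integrand has an integrable inverse square-root singularity at $u=1$, which \texttt{quad} handles adaptively.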
To proceed analytically, we only consider a few orders of the expansion of eq.\eqref{lengthhT1} in different regimes, namely low temperature ($lT \ll 1$) and high temperature ($l T \gg 1$), in the high energy limit ($l\Lambda _c\ll 1$). \subsubsection{High energy and low temperature}\label{EE-lowT} Here, we focus on the limit of low temperature, i.e. $lT \ll 1$, at high energy, $l\Lambda _c\ll 1$. This regime can be interpreted in terms of bulk parameters as $r_H\ll r^*$ and $r_c\ll r^*$, see figure \ref{bb}. In this regime, the leading contribution comes from the AdS boundary, and the finite temperature and non-conformal corrections are sub-leading terms which are small; hence we can perform the calculations perturbatively. We expand eq.\eqref{lengthhT1} up to 4th order in $\frac{r_c}{r^*}$ and $\frac{r_H}{r^*}$ and finally have \begin{align}\label{LengthhhLT} l=\frac{2}{r^*}\Bigg\lbrace &a_1+b_1\left(\frac{r_H}{r^*}\right)^4+\left[a_2+b_2\left(\frac{r_H}{r^*}\right)^4\right]\left(\frac{r_c}{r^*}\right)^2\cr &+\left[a_3+b_3\left(\frac{r_H}{r^*}\right)^4\right]\left(\frac{r_c}{r^*}\right)^4\Bigg\rbrace ,\ \ \ \ \ \ \ b_1,b_2,b_3>0, \end{align} where the numerical coefficients $b_1$, $b_2$ and $b_3$ are given by eq.\eqref{qeoff3} in appendix \ref{Appendix1}. Solving eq.\eqref{LengthhhLT} perturbatively for $r^*$ and replacing $r^*$ in eq.\eqref{HEET1}, we reach (see appendix \ref{Appendix1} for more details of the calculation) \begin{align}\label{HEEloww} \tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)&\equiv\frac{4G_N^{(5)}S_{finite}(l,l\Lambda_c,lT)}{L^2\Lambda_c T}\cr &=\frac{1}{(l \Lambda_c)(lT)}\Big\lbrace \kappa_1 +\bar{\kappa}_1(lT)^4+\left[\kappa_2+\frac{3}{2}\log(l\Lambda_c) +\bar{\kappa}_2(lT)^4\right] (l \Lambda_c)^2\cr &+\left[\kappa_3 +\bar{\kappa}_3(lT)^4\right] (l\Lambda_c)^4\Big\rbrace , \ \ \ \bar{\kappa}_1,\kappa_2,\kappa_3 >0,\kappa_1,\bar{\kappa}_2,\bar{\kappa}_3<0, \ \ \ \ \ \end{align} where $\tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)$ is the redefined $S_{finite}(l,l\Lambda_c,lT)$ at low temperature, and the numerical coefficients $\bar{\kappa}_1$, $\bar{\kappa}_2$ and $\bar{\kappa}_3$ are given by eq.\eqref{kappabar} in appendix \ref{Appendix1}. Since the field theory involves the scales $\Lambda_c$ and $T$, the finite part can depend on $l$ only through the dimensionless combinations $l\Lambda_c$ and $lT$; therefore we redefine $S_{finite}(l,l\Lambda_c,lT)$ accordingly. By this redefinition we can more easily take the limits $l\Lambda_c\rightarrow 0$ and $lT\rightarrow 0$. The first two terms are the known results corresponding to the pure AdS and AdS black hole HEE, respectively, and the next two terms are the thermal and non-conformal corrections. Since $\bar{\kappa}_1$ is a positive constant, the second term is always positive and hence the thermal fluctuations increase $\tilde{\mathcal{S}}_{finite}$. This behavior is due to the increase in the number of degrees of freedom caused by the thermal fluctuations. In the third term the logarithmic piece is dominant and always negative in the high energy limit. Therefore, the non-conformal effects decrease $\tilde{\mathcal{S}}_{finite}$. From eq.\eqref{HEEloww} we observe that if we fix $lT$ ($l\Lambda_c$) and increase $l\Lambda_c$ ($lT$), then $\tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)$ will increase. This result can be clearly seen in figure \ref{N-EE}, where we plot $\tilde{\mathcal{S}}_{finite}$ as a function of $l\Lambda_c$ ($lT$) for a fixed value of $lT$ ($l\Lambda_c$).
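The monotonic behavior described above follows essentially from the sign of $\kappa_1$ alone. A minimal sketch reproducing the qualitative shape of the curves in figure \ref{N-EE} from eq.\eqref{HEEloww} (Python; the numerical values of the coefficients below are placeholders carrying only the quoted signs, not the actual values of appendix \ref{Appendix1}):
\begin{verbatim}
# Qualitative plot of eq.(HEEloww); coefficient magnitudes are placeholders.
import numpy as np
import matplotlib.pyplot as plt

k1, k2, k3 = -0.3, 0.1, 0.05        # kappa_1 < 0, kappa_2, kappa_3 > 0
kb1, kb2, kb3 = 0.02, -0.01, -0.01  # bar-kappa_1 > 0, bar-kappa_{2,3} < 0

def S_tilde(lL, lT):
    return (k1 + kb1 * lT**4
            + (k2 + 1.5 * np.log(lL) + kb2 * lT**4) * lL**2
            + (k3 + kb3 * lT**4) * lL**4) / (lL * lT)

lL = np.linspace(1e-4, 1e-3, 300)
plt.plot(lL, S_tilde(lL, 5e-4))     # fixed lT, cf. left panel of fig. N-EE
plt.xlabel(r'$l\Lambda_c$'); plt.ylabel(r'$\tilde{S}_{finite}$')
plt.show()
\end{verbatim}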
Below, we present the results corresponding to the aforementioned limits: \begin{figure} \centering \includegraphics[scale=0.38]{N-EE1} \ \ \includegraphics[scale=0.38]{N-EE2} \caption{Left: $\tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)$ in terms of $l\Lambda_c$ for fixed $lT=0.0005$ in the low temperature limit at high energy. Right: $\tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)$ in terms of $lT$ for fixed $l\Lambda_c=0.0005$ in the low temperature limit at high energy. } \label{N-EE} \end{figure} \begin{itemize} \item $\Lambda_c\neq 0$ and $T=0$: In this case, we reproduce the results obtained for the MAdS background, eq.\eqref{HEEhighh1}. The leading term corresponds to the entanglement entropy in the conformal field theory, and the non-conformal effects, the terms including $\kappa_2+\frac{3}{2}\log(l\Lambda_c)$ and $\kappa_3$, appear as the sub-leading terms. These effects are negative and therefore the non-conformality decreases the entanglement between our considered subsystems. \item $\Lambda_c=0$ and $T\neq 0$: In this case, we reproduce the previous results obtained for the AdS black hole, see \cite{Fischler:2012ca,BabaeiVelni:2019pkw}. The leading term corresponds to the zero temperature HEE, and the finite temperature correction appears as the sub-leading term. Since the constant $\bar{\kappa }_1$ is positive, the thermal fluctuations increase $\tilde{\mathcal{S}}_{finite}$. \item $\Lambda_c= 0$ and $T=0$: We recover the previous results obtained for pure AdS$_5$, which correspond to the HEE in the zero temperature conformal field theory. Obviously, this term is negative. \end{itemize} There is a phase transition point at $r_H=r_c$ \cite{Andreev:2006eh}. Therefore, we are interested in studying the HEE in the limit of $r_H \rightarrow r_c$, or equivalently $T \rightarrow \frac{\Lambda_c}{ \sqrt{2}\pi}$, which we call the transition limit. If we take the transition limit of eq.\eqref{HEEloww}, up to the second order in $l\Lambda_c$, we obtain the following expression \begin{align}\label{HEEtra} \tilde{\mathcal{S}}_{finite}(l\Lambda_c ,lT)\bigg\vert _{T \rightarrow \frac{\Lambda_c}{ \sqrt{2}\pi}}=\frac{\sqrt{2}\pi}{(l \Lambda_c)^2}\left\lbrace \kappa_1+ \left(\kappa_2+\frac{3}{2}\log(l\Lambda_c)\right) (l \Lambda_c)^2+ \left(\kappa_3+\frac{\bar{\kappa}_1}{4\pi ^4}\right) (l\Lambda_c)^4\right\rbrace . \end{align} For fixed $l\Lambda_c$, one can compare the zero temperature $\hat{\mathcal{S}}_{finite}(l\Lambda_c)$ of eq.\eqref{HEEhighh1} with the low temperature $\tilde{\mathcal{S}}_{finite}(l\Lambda_c,lT)$ in the transition limit, eq.\eqref{HEEtra}. To do this, we subtract eq.\eqref{HEEhighh1} from eq.\eqref{HEEtra} and get \begin{align}\label{EEtransition} \frac{\tilde{\mathcal{S}}_{finite}(l\Lambda_c ,lT)}{\sqrt{2}\pi} \bigg\vert _{T \rightarrow\frac{\Lambda_c}{ \sqrt{2}\pi}} -\hat{\mathcal{S}}_{finite}(l\Lambda_c) =\frac{\bar{\kappa}_1}{4\pi^4 }(l\Lambda_c)^2 >0 . \end{align} This result shows that, near the transition point, the subsystem $A$ and its complement $\bar{A}$ are less entangled at zero temperature. As we know, the EE quantifies the amount of information lost by integrating out the subsystem $\bar{A}$: the higher the EE, the more information we lose. From the information point of view, we would like to define a favorable state as one in which the subsystems $A$ and $\bar{A}$ are less entangled. Therefore, from eq.\eqref{EEtransition}, we find that near the transition point the state at zero temperature is the favorable one.
Note that we define the favorable state here from the point of view of the amount of information loss. However, from the resource theory point of view, the definition of a favorable state can be different. For example, from the resource theory of entanglement we know that if local operations and classical communication (LOCC) are the allowed operations, then entangled states can be regarded as a valuable resource \cite{Horodecki:2009zz,Chitambar,Gour,Tejada}. \subsubsection{High energy and high temperature}\label{EE-highT} In this subsection we consider the high temperature limit, i.e. $lT\gg1$ or equivalently $r^*\rightarrow r_H$, at high energy ($r_c\ll r^*$), see figure \ref{cc}. In this case, the turning point of the extremal surface $\Gamma$ approaches the horizon and the leading contribution comes from the near horizon background. We expand eqs. \eqref{lengthhT1} and \eqref{HEET1} up to second order in $\frac{r_c}{r^*}$ and take the integrals. We get \begin{align}\label{LengthTH} l=\frac{2}{r^*}\sum\limits_{n=0}^{\infty}\sum\limits_{m=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\left[L_1+L_2(\frac{r_c}{r^*})^2\right], \end{align} \begin{align}\label{HEEhighT1} \mathcal{A}=\frac{L^2}{\epsilon^2}-3r_c^2L^2\log(r^*\epsilon)+\mathcal{A}', \end{align} where $\mathcal{A}'$ contains the terms which do not include $\epsilon$ \begin{align}\label{HEETH} \mathcal{A}'=2{r^*}^2L^2\sum\limits_{n=0}^{\infty}\sum\limits_{m=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\left[D_1+D_2(\frac{r_c}{r^*})^2\right], \end{align} and $L_1$, $L_2$, $D_1$ and $D_2$ are given by \begin{align} L_1=&\frac{1}{4 m+6 n+4}, \ \ \ \ \ \ \ L_2=\frac{3}{2} (2 n+1) \left(\frac{1}{4 m+6 n+4}-\frac{1}{4 m+6 n+6}\right),\cr D_1=&\frac{1}{4 m+6 n-2} , \ \ \ \ \ \ \ D_2=\frac{3 n}{2 (2 m+3 n-1)}+\frac{3-6 n}{8 m+12 n}. \end{align} In order to write $\mathcal{A}'$ in terms of $l$, we do the calculations order by order up to $(\frac{r_c}{r^*})^2$ and check the convergence of the resulting infinite series. \begin{itemize} \item Up to $(\frac{r_c}{r^*})^0$: In this case, $r_c$ does not appear, and we expect to recover the results corresponding to the AdS black hole in the high temperature limit \cite{Fischler:2012ca,BabaeiVelni:2019pkw}. We consider the first terms in the brackets in eqs. \eqref{LengthTH} and \eqref{HEETH}, which include $L_1$ and $D_1$, respectively, and sum over $m$. We obtain \begin{align}\label{LengthTH1} l=\frac{2}{r^*}\sum\limits_{n=0}^{\infty}\frac{1}{4n+1}\frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{2n+2}{3})}{\Gamma(n+1)\Gamma(\frac{4n+1}{6})}(\frac{r_H}{r^*})^{4n}, \end{align} \begin{align}\label{HEETH1} \mathcal{A}'=2{r^*}^2L^2\sum\limits_{n=0}^{\infty}\frac{1}{4n-2}\frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{2n+2}{3})}{\Gamma(n+1)\Gamma(\frac{4n+1}{6})}(\frac{r_H}{r^*})^{4n}. \end{align} Using eqs. \eqref{LengthTH1} and \eqref{HEETH1} and doing some calculations, we get \begin{align} \mathcal{A}'=2{r^*}^2L^2\Bigg\lbrace & \frac{lr^*}{2}-\frac{3\sqrt{\pi}\Gamma(\frac{2}{3})}{2\Gamma(\frac{1}{6})}+\sum\limits_{n=1}^{\infty}\frac{3}{(4n-2)(4n+1)}\frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{2n+2}{3})}{\Gamma(n+1)\Gamma(\frac{4n+1}{6})}(\frac{r_H}{r^*})^{4n}\Bigg\rbrace . \end{align} It is easy to see that in the large $n$ limit the above infinite series behaves as $\frac{1}{n^2}(\frac{r_H}{r^*})^{4n}$, so we can safely take the $r^*\rightarrow r_H$ limit.
Hence, the final result in the high temperature limit becomes \begin{align}\label{HEETH2} \mathcal{S}_{HT}^{(0)}(lT)\equiv\frac{4G_N^{(5)}S_{finite}(l,lT)}{L^2T^2}= \left[F_1+\pi^3 (lT)\right], \ \ \ \ \ \ \ F_1<0, \end{align} where $\mathcal{S}_{HT}^{(0)}(lT)$ is the redefined $S_{finite}(l,lT)$ in the high temperature limit up to zeroth order in $\frac{r_c}{r^*}$, and $F_1$ is a numerical constant given by eq.\eqref{F2} in appendix \ref{Appendix1}. The second term in eq.\eqref{HEETH2} is the dominant term in $S_{finite}(l,lT)$ and scales with the volume of the rectangular strip, $lL^2$, while the first term in $S_{finite}(l,lT)$ is area dependent, $L^2$. Therefore, the first term corresponds to the entanglement entropy between the strip region and its complement, while the second term corresponds to the thermal entropy. \item Up to $(\frac{r_c}{r^*})^2$: In this case, we add the non-conformal effects to the previous result by considering the second terms in the brackets in eqs. \eqref{LengthTH} and \eqref{HEETH}. We rewrite eq.\eqref{HEETH} as follows \begin{align} \mathcal{A}'=&2{r^*}^2L^2\sum\limits_{n=0}^{\infty}\sum\limits_{m=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\cr &\times\left[L_1+L_2(\frac{r_c}{r^*})^2+(D_1-L_1)+(D_2-L_2)(\frac{r_c}{r^*})^2\right]. \end{align} Performing the summation over $m$, the above equation leads to \begin{align}\label{HEETHH} \mathcal{A}'=&{r^*}^3L^2l+2{r^*}^2L^2\left[-\frac{3\sqrt{\pi}\Gamma(\frac{2}{3})}{2\Gamma(\frac{1}{6})}+\sum\limits_{n=1}^{\infty}\frac{3}{(4n-2)(4n+1)}\frac{\Gamma(n+\frac{1}{2})\Gamma(\frac{2n+2}{3})}{\Gamma(n+1)\Gamma(\frac{4n+1}{6})}(\frac{r_H}{r^*})^{4n}\right]\cr &+2{r^*}^2L^2B(\frac{r_c}{r^*})^2, \end{align} where $B$ is given by \begin{align}\label{B} B=B_0&+\sum\limits_{n=0}^{\infty}\sum\limits_{m=1}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\cr &\times\Bigg[\frac{3 n}{4 m+6 n-2}+\frac{3-6 n}{8 m+12 n} -\frac{\frac{3}{2}(2 n+1)}{4 m+6 n+4}+\frac{\frac{3}{2}(2 n+1)}{4 m+6 n+6}\Bigg]. \end{align} $B_0$ is a constant coming from $m=0$ and is given by eq.\eqref{B0} in appendix \ref{Appendix1}. The convergence of the series in the second term of eq.\eqref{HEETHH} was established in the previous case. In order to study the convergence of the series in eq.\eqref{B} at high temperature, $r^*\rightarrow r_H$, we sum over $n$ and obtain \begin{figure} \centering \includegraphics[scale=0.38]{EE-HT2} \ \ \includegraphics[scale=0.38]{EE-HT1} \caption{Left: $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ in terms of $l\Lambda_c$ for fixed $lT=50$ in the high temperature limit at high energy. Right: $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ in terms of $lT$ for fixed $l\Lambda_c=0.002$ in the high temperature limit at high energy. } \label{EE-HT} \end{figure} \begin{align} B&=B_0+\sum\limits_{m=1}^{\infty}\frac{3\Gamma(m+\frac{1}{2})}{8\sqrt{\pi}\Gamma(m+1)}(\frac{r_H}{r^*})^{4m}\cr &\times\Bigg[\frac{1}{m}\left(\, _4F_3\left(\frac{1}{2},\frac{2 m}{5}+\frac{4}{5},\frac{2 m}{3}-\frac{1}{3},\frac{2 m}{3};\frac{2 m}{5}-\frac{1}{5},\frac{2 m}{3}+\frac{2}{3},\frac{2 m}{3}+1;1\right)\right)\cr &-\frac{1}{(m+1) (2 m+3)}\left(\, _3F_2\left(\frac{3}{2},\frac{2 m}{3}+\frac{2}{3},\frac{2 m}{3}+1;\frac{2 m}{3}+\frac{5}{3},\frac{2 m}{3}+2;1\right)\right)\Bigg].
\end{align} It is easy to see that in the large $m$ limit the first series in the brackets behaves as $\frac{1}{m^\frac{3}{2}}(\frac{r_H}{r^*})^{4m}$ and the second series in the brackets behaves as $\frac{1}{m^\frac{5}{2}}(\frac{r_H}{r^*})^{4m}$. Therefore, both of these series are convergent as $r^*\rightarrow r_H$ and we can safely take the mentioned limit. If we add the contribution of the second term in eq.\eqref{HEEhighT1} to the finite part of the HEE, then we finally reach the following expression \begin{align}\label{HEEhighT} \mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)&\equiv\frac{4G_N^{(5)}S_{finite}(l,l\Lambda_c,lT)}{L^2\Lambda_c T}\cr &=\frac{1}{(l\Lambda_c )(lT)}\bigg\lbrace F_1(lT)^2+\pi^3 (lT)^3\cr & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\left[F_2+\frac{3}{2}\log(l\Lambda_c )\right](l\Lambda_c )^2\bigg\rbrace, \ \ \ F_1<0, F_2>0,\ \ \ \ \ \end{align} where $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ is the redefined $S_{finite}(l,l\Lambda_c,lT)$, and $F_2$ is positive and given by eq.\eqref{F1} in appendix \ref{Appendix1}. The first two terms in eq.\eqref{HEEhighT} correspond to $\mathcal{S}_{HT}^{(0)}$ of the AdS black hole in the high temperature limit. The non-conformal effect appears in the third term, which is very small with respect to the first two terms. The logarithmic term in the bracket is the dominant term and is always negative in the high energy limit. Hence the non-conformal effect decreases $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ with respect to $\mathcal{S}_{HT}^{(0)}(lT)$ at high temperature in the high energy limit, which is in complete agreement with \cite{Rahimi:2016bbv}. In figure \ref{EE-HT} we plot $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ as a function of $l\Lambda_c$ and $lT$ for fixed values of $lT$ and $l\Lambda_c$, respectively. From this figure, we observe that if we fix $l\Lambda_c$ and increase $lT$, then $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ will increase. But for fixed $lT$, $\mathcal{S}_{HT}^{(2)}(l\Lambda_c,lT)$ behaves differently and decreases as $l\Lambda_c$ increases. \end{itemize} \section{Mutual information}\label{section4} When the boundary entangling region is made of two disjoint subsystems, an important quantity to study is the mutual information, which is derived from the entanglement entropy. The definition of the mutual information between two disjoint subsystems $A$ and $B$ is given by \begin{figure} \centering \includegraphics[scale=0.39]{Mutual} \caption{A simplified sketch of two strip regions $A$ and $B$ with equal size $l$ which are separated by the distance $l'$. When $l'$ is small enough, the minimal surface of $A\cup B$ is given by the union of the blue curves, and when $l'$ is large enough, the minimal surface of $A\cup B$ is given by the union of the black-dashed curves.} \label{MIandEE} \end{figure} \begin{align} I(A,B)=S_A+S_B-S_{A\cup B}, \end{align} where $S_A$, $S_B$ and $S_{A\cup B}$ denote the entanglement entropy of the regions $A$, $B$ and $A\cup B$, respectively. The mutual information measures the total correlation between the two sub-systems, including both classical and quantum correlations \cite{Groisman}. From the definition, it is clear that the mutual information is a finite quantity, since the divergent pieces in the entanglement entropy cancel out, and the subadditivity of the entanglement entropy, $S_A+S_B\geq S_{A\cup B}$, guarantees that $I(A,B)\geq 0$. We consider two disjoint subsystems, both rectangular strips of size $l$, which are separated by the distance $l'$ on the boundary, see figure \ref{MIandEE}.
One can easily follow the RT-prescription to compute the HEE of the individual sub-systems $A$ and $B$. For computing $S_{A\cup B}$, we have two configurations. When the separation distance is large enough, the two subsystems $A$ and $B$ are completely disentangled and we have $S_{A\cup B}=S_A+S_B=2S(l)$. In this case, the mutual information vanishes, $I(A,B)=0$. On the other hand, when $l'$ is small enough, the two subsystems $A$ and $B$ are entangled and $S_{A\cup B}$ is given by $S_{A\cup B}=S(l')+S(l'+2l)$. In this case, we have non-zero mutual information, $I(A,B)>0$. Therefore, one expects a critical separation distance $x_d$ at which the mutual information exhibits a phase transition from positive values to zero. To summarize, the holographic mutual information (HMI) of two disjoint subsystems is given by \begin{align}\label{HMI} I(A,B)=\Bigg\lbrace \begin{array}{lr} 2S(l) -S(2l+l')-S(l') \ \ l'<x_d & \\ 0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ l'\geqslant x_d & \end{array} . \end{align} We will use this relation to discuss the behavior of the HMI in the MAdS \eqref{MAdS} and MBH \eqref{MBH} backgrounds. \subsection{HMI at zero temperature; High energy limit} In this section, we would like to study the HMI for the MAdS background \eqref{MAdS}. Since the HMI is a linear combination of HEEs, we consider the special limits in which the HEE can be computed analytically. Considering the high energy limit, i.e. $l\Lambda _c \ \& \ l'\Lambda_c \ll 1$ or equivalently $r_c\ll r^*_l \ \& \ r^*_{l'}$, and using eqs. \eqref{HEEhighh1} and \eqref{HMI}, we get \begin{align}\label{HMIhigh} \hat{\mathcal{I}}(l\Lambda_c ,l'\Lambda_c )&\equiv\frac{4G_N^{(5)} I(l,l',l\Lambda_c,l'\Lambda_c)}{L^2\Lambda_c^2}\cr &=\kappa _1\left[\frac{2}{(l\Lambda_c)^2}-\frac{1}{(l'\Lambda_c)^2}-\frac{1}{(l'\Lambda_c+2l\Lambda_c)^2}\right]+\frac{3}{2}\log\left(\frac{(l\Lambda_c)^2}{(l'\Lambda_c)(l'\Lambda_c +2l\Lambda_c)} \right)\cr &-2\kappa _3(l'\Lambda _c+l\Lambda _c)^2, \ \ \ \ \ \ \ \ \ \ \kappa _1<0,\kappa _3>0, \end{align} where $\hat{\mathcal{I}}(l\Lambda_c ,l'\Lambda_c )$ is the redefined $I(l,l',l\Lambda_c,l'\Lambda_c)$, which is given by eq.\eqref{I-highE} in appendix \ref{Appendix2}. By this redefinition, the limits $l\Lambda_c\rightarrow 0$ and $l'\Lambda_c\rightarrow 0$ become meaningful. The first term in eq.\eqref{HMIhigh}, including $\kappa_1$, is the contribution of the AdS boundary and corresponds to the mutual information between two subsystems in the conformal field theory \cite{Fischler:2012uv}, and the other terms are the non-conformal effects. We focus on the limit of $l\Lambda_c\gg l'\Lambda_c$. Therefore, the second term, which is the dominant non-conformal effect, is always positive and hence the non-conformality increases $\hat{\mathcal{I}}(l\Lambda_c ,l'\Lambda_c )$. From the above equation, we observe that by increasing $l\Lambda_c$ ($l'\Lambda_c$), $\hat{\mathcal{I}}(l\Lambda_c ,l'\Lambda_c )$ increases (decreases) for fixed $l'\Lambda_c$ ($l\Lambda_c$) in the high energy limit. \subsection{HMI at finite temperature} Here, we investigate the thermal behavior of the HMI in the MBH background \eqref{MBH}. In order to obtain analytical results, we consider the low ($lT\ \&\ l'T \ll 1$), intermediate ($l'T \ll 1\ll lT$) and high ($lT\ \&\ l'T \gg 1$) temperature limits, in the high energy limit ($l\Lambda _c\ \&\ l'\Lambda _c\ll 1$). \subsubsection{Low temperature at high energy} In the low temperature limit, i.e.
$lT \ \&\ l'T \ll 1$ or equivalently $r_H \ll {r^*}_{l'}\ \&\ r^*_{l'+2l}$, at high energy, we use eqs. \eqref{HEEloww} and \eqref{HMI} and obtain \begin{align}\label{HMIlow} \tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)&\equiv\frac{4G_N^{(5)} I(l,l',l\Lambda_c,l'\Lambda_c ,lT,l'T)}{L^2\Lambda_c T}\cr &=\kappa _1\left[\frac{2}{(l\Lambda_c)(lT)}-\frac{1}{(l'\Lambda_c)(l'T)}-\frac{1}{(l'\Lambda_c+2l\Lambda_c)(l'T+2lT)}\right]\cr &-2\bar{\kappa}_1\frac{(l'T+lT)^3}{(l'\Lambda_c+l\Lambda_c)}+\frac{3(l\Lambda_c)}{2(lT)}\log\left(\frac{(l\Lambda_c)^2}{(l'\Lambda_c)(l'\Lambda_c +2l\Lambda_c)} \right)\cr &-2\kappa _3\frac{(l'\Lambda_c+l\Lambda_c)^3}{(l'T+lT)}, \ \ \ \ \ \ \ \ \kappa _1<0, \bar{\kappa}_1,\kappa _3>0, \end{align} where $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ is the redefined $I(l,l',l\Lambda_c,l'\Lambda_c ,lT,l'T)$, which is given by eq.\eqref{I-lowT} in appendix \ref{Appendix2}. By this redefinition the limits $l\Lambda_c \rightarrow 0$, $lT\rightarrow 0$, $l'\Lambda_c \rightarrow 0$ and $l'T\rightarrow 0$ become meaningful and intuitive. The first two terms, including $\kappa _1$ and $\bar{\kappa}_1$, are the known results corresponding to the pure AdS and AdS black hole, respectively, and the next two terms indicate the non-conformal effects. Since $\bar{\kappa}_1$ is a positive constant, the second term is always negative and hence thermal fluctuations decrease $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$. Therefore, thermal fluctuations reduce the total correlation between the subsystems. From eq.\eqref{HMIlow} one can see that $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ decreases with increasing temperature. At low temperature, this behavior was also seen in \cite{Bernigau}, which studies the MI for thermal free fermions. The third term is the dominant term among the non-conformal effects. We focus on the limit of $l\Lambda_c\gg l'\Lambda_c$. In this limit the logarithmic term is positive and therefore the non-conformal effects increase $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$. In the following, we present the results corresponding to the above limits. \begin{itemize} \item $\Lambda_c\neq 0$ and $T=0$: In this case, we reproduce the results obtained for the MAdS background, eq.\eqref{HMIhigh}. The leading contribution comes from the AdS boundary, which corresponds to the HMI between our considered subsystems, and the non-conformal effects appear as the sub-leading terms. These effects are positive and therefore the non-conformality increases $\hat{\mathcal{I}}(l\Lambda_c ,l'\Lambda_c )$. \item $\Lambda_c=0$ and $T\neq 0$: In this case, we reproduce the previous results obtained for the AdS black hole, see \cite{Fischler:2012uv}. The leading term corresponds to the zero temperature HMI and the finite temperature correction appears as the sub-leading term. The constant $\bar{\kappa}_1$ is positive and therefore the thermal fluctuations decrease $\tilde{\mathcal{I}}(lT,l'T)$. \item $\Lambda_c= 0$ and $T=0$: We recover the previous results obtained for pure AdS$_5$, which correspond to the HMI in the zero temperature conformal field theory \cite{Fischler:2012uv}. Obviously, this term is positive. \end{itemize} We want to study $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ near the transition point, and hence we take the transition limit, i.e. $T\rightarrow\frac{\Lambda_c}{ \sqrt{2}\pi}$, of eq.\eqref{HMIlow}.
Up to the second order in $l\Lambda_c$ and $l'\Lambda_c$ we obtain the following expression \begin{align}\label{HMItra} \tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)&\vert _{T \rightarrow \frac{\Lambda_c}{ \sqrt{2}\pi}}=\sqrt{2}\pi\Bigg\lbrace \kappa _1\left[\frac{2}{(l\Lambda_c)^2}-\frac{1}{(l'\Lambda_c)^2}-\frac{1}{(l'\Lambda_c+2l\Lambda_c)^2}\right]\cr &+\frac{3}{2}\log\left(\frac{(l\Lambda_c)^2}{(l'\Lambda_c)(l'\Lambda_c +2l\Lambda_c)} \right)-2(\kappa _3+\frac{\bar{\kappa}_1}{4\pi^4}) (l'\Lambda _c+l\Lambda _c)^2\Bigg\rbrace . \end{align} We fix $l \Lambda_c$ and $l'\Lambda_c$ and compare the zero temperature $\hat{\mathcal{I}}(l\Lambda_c,l'\Lambda_c)$, eq.\eqref{HMIhigh}, with the finite temperature $\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ in the transition limit, eq.\eqref{HMItra} \begin{align}\label{HMItran} \frac{\tilde{\mathcal{I}}(l\Lambda_c,l'\Lambda_c ,lT,l'T)}{\sqrt{2}\pi}\bigg\vert _{T \rightarrow \frac{\Lambda_c}{ \sqrt{2}\pi}}-\hat{\mathcal{I}}(l\Lambda_c,l'\Lambda_c)=-\frac{\bar{\kappa}_1}{4\pi^4} (\Lambda _cl'+\Lambda _cl)^2<0. \end{align} From the above equation it is seen that, near the transition point, the subsystems $A$ and $B$ have more information in common at zero temperature than at low temperature. When the two subsystems $A$ and $B$ are more correlated, we can obtain more information about the subsystem $A$ through the subsystem $B$, and hence the state at zero temperature is the favorable one. \subsubsection{Intermediate temperature at high energy} In this subsection we study another interesting limit, called the intermediate temperature limit, which is defined by $l'T \ll 1\ll lT $ or equivalently $r_H\ll r^*_{l'}$ and $r^*_{l'+2l}\to r_H$. We consider this limit at high energy, $l\Lambda _c\ \&\ l'\Lambda _c\ll 1$. Using eqs. \eqref{HEEloww}, \eqref{HEEhighT} and \eqref{HMI}, we reach the following expression \begin{align} \label{MIintT} \tilde{\mathcal{I}}(l'\Lambda_c ,l'T)&\equiv\frac{4G_N^{(5)} I(l',l'\Lambda_c ,l'T)}{L^2\Lambda_c T}\cr &=\frac{1}{(l'\Lambda_c)(l'T)}\bigg\lbrace -\kappa_1+F_1(l'T)^2 -\pi^3(l'T)^3-\bar{\kappa}_1 (l'T)^4\cr & +\left[F_3-\frac{3}{2}\log(l'\Lambda_c)-\bar{\kappa}_2(l'T)^4\right](l'\Lambda_c )^2 -\left[\kappa_3+\bar{\kappa}_3(l'T)^4\right] (l'\Lambda_c )^4\bigg\rbrace , \cr &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \kappa_1,\bar{\kappa}_2,\bar{\kappa}_3,F_1<0, \ \bar{\kappa}_1,\kappa_3, F_3>0, \ \ \end{align} where $\tilde{\mathcal{I}}(l'\Lambda_c ,l'T)$ is the redefined $I(l',l'\Lambda_c ,l'T)$, which is given by eq.\eqref{I-intT}, and $F_3$ is a numerical constant given by eq.\eqref{F3} in appendix \ref{Appendix2}. It is observed that $\tilde{\mathcal{I}}(l'\Lambda_c ,l'T)$ does not depend on the length $l$ of the subsystems, which is in complete agreement with the results obtained in \cite{Fischler:2012uv}. The logarithmic term and the term including $F_3$, which are the non-conformal effects, are always positive, and therefore the non-conformality increases $\tilde{\mathcal{I}}(l'\Lambda_c ,l'T)$. In the limit of $l'\Lambda_c\rightarrow 0$, we reproduce the results obtained for the AdS black hole \cite{Fischler:2012uv}. Since the term including $F_1$ is proportional to the area of the entangling region, the HMI obeys an area law even at intermediate temperature. This term is negative and hence temperature effects decrease $\tilde{\mathcal{I}}(l'\Lambda_c ,l'T)$. We want to study the HMI in the limit of $l'\rightarrow 0$, which corresponds to the case where the two subsystems $A$ and $B$ touch each other.
By taking the $l'\rightarrow 0$ limit of eq.\eqref{I-intT} we reach the following expression \begin{align}\label{I(l'=0)} I(l',l'\Lambda_c ,l'T)\bigg\vert_{l'\rightarrow 0}=\frac{1}{4G_N^{(5)}}\bigg\lbrace & -\kappa_1\left(\frac{L}{l'}\right)^2-\frac{3}{2}\log(l'\Lambda_c)(L\Lambda_c)^2\cr &+F_1(LT)^2+F_3(L\Lambda_c )^2\bigg\rbrace , \ \ \ \ \ \ \ \kappa_1,F_1<0, F_3>0. \end{align} The first term in the above equation is the leading term, which exhibits an area law divergence in the limit $l'\rightarrow 0$. The second term has a logarithmic divergence because of the non-conformality. The third and fourth terms are finite sub-leading terms which scale with the area of the strip region, $L^2$, times $T^2$ and $\Lambda_c^2$, respectively. In subsection \ref{EE-highT} we observed that at high temperature there is a thermal entropy contribution to the entanglement entropy which scales with the volume of the strip region, $lL^2$. The thermal entropy contribution does not appear in eq.\eqref{I(l'=0)}, and hence this quantity measures only pure quantum entanglement \cite{Kundu:2016dyk,Ebrahim:2020qif,Fischler:2012uv}. \subsubsection{High temperature at high energy} In the high temperature limit we have $l'T \gg 1$ or equivalently $l' \gg \frac{\pi}{r_H}$. As we discussed at the beginning of this section, for very large separation distances the two subsystems are completely disentangled, and hence the HMI is identically zero in this regime. In other words, for large separation distance, the RT-surface corresponding to $A\cup B$ is equal to the union of the RT-surfaces corresponding to the two subregions $A$ and $B$, and hence the HMI vanishes in this case. \section{Entanglement of purification}\label{section5} Entanglement of purification is an important quantity which measures the total (quantum and classical) correlation between two disjoint subsystems in mixed states. Consider a bipartite system described by a mixed state with density matrix $\rho_{AB}$. We can always purify this mixed state by enlarging its Hilbert space as $\mathcal{H}_A\otimes \mathcal{H}_B\rightarrow \mathcal{H}_A\otimes \mathcal{H}_B \otimes \mathcal{H}_{A'}\otimes \mathcal{H}_{B'}$, such that the total density matrix in the enlarged Hilbert space, $\rho_{AA'BB'}$, is given by $\rho_{AA'BB'}=\vert \psi_{AA'BB'}\rangle\langle\psi_{AA'BB'}\vert$. Such a pure state is called a purification of $\rho_{AB}$ if we have \begin{align} \rho_{AB}=Tr_{A'B'}\left(\vert \psi_{AA'BB'}\rangle\langle\psi_{AA'BB'}\vert\right). \end{align} \begin{figure} \centering \includegraphics[width=80 mm]{EWCS.pdf} \caption{The gray region shows the entanglement wedge dual to $\rho_{AB}$. The minimal surfaces, the RT-surfaces, are denoted by $\Gamma$, the dashed curves.} \label{fig2} \end{figure} Obviously, there exist infinitely many ways to purify $\rho_{AB}$. The EoP is defined by minimizing the entanglement entropy $S_{AA'}$ over all purifications of $\rho_{AB}$ \cite{arXiv:quant-ph/0202044v3} \begin{align} E_p(\rho_{AB})=\underset{\vert \psi _{AA'BB'}\rangle}{\rm{min}}(S_{AA'}), \end{align} where $S_{AA'}$ is the entanglement entropy corresponding to the density matrix $\rho_{AA'}$ and $\rho_{AA'}={\rm{Tr}}_{BB'}\big[\left(|\psi\rangle_{ABA'B'}\right) \left({}_{ABA'B'}\langle\psi|\right)\big]$.
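The existence of purifications is constructive: given the spectral decomposition $\rho_{AB}=\sum_i p_i \vert v_i\rangle\langle v_i\vert$, the state $\vert\psi\rangle=\sum_i \sqrt{p_i}\,\vert v_i\rangle_{AB}\vert i\rangle_{A'B'}$ is a purification, often called the standard purification. A minimal sketch for a two-qubit example (Python; the chosen $\rho_{AB}$ is illustrative, and no minimization over purifications, as the EoP requires, is attempted here):
\begin{verbatim}
# Construct the standard purification of a mixed two-qubit state rho_AB
# and verify that tracing out the ancilla A'B' recovers rho_AB.
import numpy as np

bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5   # Bell projector
rho_ab = 0.7 * bell + 0.3 * np.eye(4) / 4                 # a mixed state

p, V = np.linalg.eigh(rho_ab)                 # rho_AB = V diag(p) V^dagger
M = V @ np.diag(np.sqrt(np.clip(p, 0.0, None)))
psi = M.reshape(16)                           # |psi> in H_AB (x) H_{A'B'}

rho_check = M @ M.conj().T                    # = Tr_{A'B'} |psi><psi|
print(np.allclose(rho_check, rho_ab))         # True
\end{verbatim}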
In the context of the gauge/gravity duality, it has been conjectured that the EoP is dual to the entanglement wedge cross-section $E_w$ of $\rho_{AB}$, which is defined by \cite{Takayanagi:2017knl, Nguyen:2017yqw} \begin{align}\label{EWCS} E_w(\rho_{AB})=\frac{{\rm{Area}}(\Sigma_{AB}^{min})}{4G_N^{(d+2)}}, \end{align} where $\Sigma_{AB}^{min}$ is the minimal surface in the entanglement wedge of $\rho_{AB}$, separating the two regions $A$ and $B$, that ends on the RT-surface of $A\cup B$, shown as the blue-dashed line in figure \ref{fig2}. As a result, we have \cite{Takayanagi:2017knl, Nguyen:2017yqw} \begin{align}\label{eop} E_p(\rho_{AB})\equiv E_w(\rho_{AB}). \end{align} In \cite{Ghodrati:2021ozc} the EoP is used to probe the phase structure of QCD and confining backgrounds. We would like to study the EoP in a non-conformal field theory at zero and finite temperature. In order to compute the EoP, we use the holographic prescription given in \cite{Takayanagi:2017knl, Nguyen:2017yqw}. We consider two parallel strips with equal widths $l$, extended along the $y$ and $z$ directions with length $L(\rightarrow \infty)$, and separated by a distance $l'$, see figure \ref{fig2}. Indeed, in this case $\Sigma_{AB}^{min}$ runs along the radial direction and connects the turning points of the minimal surfaces $\Gamma_{l'}$ and $\Gamma_{l'+2l}$. In order to obtain analytical results, we focus on some specific limits of our models, namely high ($l'\Lambda_c \ \& \ l\Lambda_c \ll 1$) and intermediate ($l'\Lambda_c \ll 1\ll l\Lambda_c $) energy, at zero ($T=0$), low ($l'T \ \& \ lT \ll 1$) and intermediate ($l'T \ll 1\ll lT$) temperature. In the MAdS background \eqref{MAdS} the high and intermediate energy limits are equivalent to $r_c\ll r^*_{l'} \ \& \ r^*_{l'+2l}$ and $r_c\ll r^*_{l'}\ \& \ r^*_{l'+2l}\rightarrow r_c$, respectively. In the MBH background \eqref{MBH} the low temperature limit at high and at intermediate energy is given in terms of bulk parameters as ($r_c \ \& \ r_H \ll r^*_{l'} \ \& \ r^*_{l'+2l}$) and ($r_c \ \& \ r_H \ll r^*_{l'}$ and $r_H\ll r^*_{l'+2l}\rightarrow r_c$), respectively, while the intermediate temperature limit at high and at intermediate energy is given as ($r_c \ \& \ r_H \ll r^*_{l'}$ and $r_c\ll r^*_{l'+2l}\rightarrow r_H$) and ($r_c \ \& \ r_H \ll r^*_{l'}$ and $r^*_{l'+2l}\rightarrow r_c =r_H$), respectively. These regimes are depicted in figure \ref{limitsT1}. \subsection{EoP at zero temperature} At zero temperature, using the background \eqref{MAdS} and eqs. \eqref{EWCS} and \eqref{eop} we obtain \begin{figure} \centering \subfloat[]{\includegraphics[scale=0.2]{highE}\label{a}} \subfloat[]{\includegraphics[scale=0.2]{intE}\label{b}} \subfloat[]{\includegraphics[scale=0.2]{lowThighE}\label{c}}\\ \subfloat[]{\includegraphics[scale=0.2]{lowTintE}\label{d}} \subfloat[]{\includegraphics[scale=0.2]{intThighE}\label{e}} \subfloat[]{\includegraphics[scale=0.2]{IntTIntE}\label{f}} \caption{(a): The high energy limit, i.e. $l\Lambda_c \ \& \ l'\Lambda_c\ll 1$ or equivalently $r_c\ll r^*_{l'} \ \& \ r^*_{l'+2l}$, at zero temperature. (b): The intermediate energy limit, i.e. $l'\Lambda_c\ll 1 \ll l\Lambda_c$ or equivalently $ r^*_{l'} \gg r_c \rightarrow r^*_{l'+2l}$, at zero temperature. (c): The low temperature limit, i.e. $lT \ \& \ l'T\ll 1$ or equivalently $r_H\ll r^*_{l'} \ \& \ r^*_{l'+2l}$, at high energy. (d): The low temperature limit at intermediate energy. (e): The intermediate temperature, i.e.
$l'T\ll 1 \ll lT$ or equivalently $r^*_{l'} \gg r_H\rightarrow r^*_{l'+2l}$, at high energy. (f): The intermediate temperature at intermediate energy. This regime is equivalent to the transition limit, i.e. $T\rightarrow \frac{\Lambda_c}{\sqrt{2}\pi}$ or equivalently $r_H\rightarrow r_c$.} \label{limitsT1} \end{figure} \begin{align}\label{EoP1} E_p&=\frac{L^2}{4G_N^{(5)}}\int _{r^*_{l'+2l}} ^{r^*_{l'}} r e^{\frac{3}{2}\left(\frac{r_c}{r}\right)^2}dr \cr &=\frac{L^2}{16G_N^{(5)}}\Bigg\lbrace 2 {r^*_{l'}}^2 e^{\frac{3}{2}\left(\frac{r_c}{r^*_{l'}}\right)^2}-2 {r^*_{l'+2l}}^2 e^{\frac{3}{2}\left(\frac{r_c}{r^*_{l'+2l}}\right)^2}\cr &\ \ \ \ \ \ \ \ \ + 3r_c^2Ei\Bigg(\frac{3}{2}\left(\frac{r_c}{r^*_{l'+2l}}\right)^2\Bigg)- 3r_c^2 Ei\Bigg(\frac{3}{2}\left(\frac{r_c}{r^*_{l'}}\right)^2\Bigg)\Bigg\rbrace , \end{align} where $r^*_{l'}$ and $r^*_{l'+2l}$ in eq.\eqref{EoP1} denote the turning points of $\Gamma_{l'}$ and $\Gamma_{2l+l'}$, respectively, and $Ei(x)$ is the exponential integral, defined for real non-zero values of $x$ as \begin{align} Ei(x)=-\int_{-x}^\infty dt \frac{e^{-t}}{t}. \end{align} Using eq.\eqref{Lengthh1}, the relation between $r^*_{l'}$ and $l'$ is given by \begin{align}\label{Length1} l'(r^*_{l'})=\frac{2}{r^*_{l'}}\sum\limits_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})}{\sqrt{\pi}\Gamma(n+1)}\int_0^1 u^{6n+3} e^{3(n+\frac{1}{2})\left(\frac{r_c}{r^*_{l'}}\right)^2(1-u^2)} du, \end{align} where $u=\frac{r^*_{l'}}{r}$. A similar equation can be obtained for $r^*_{l'+2l}$ by replacing $l'$ and $r^*_{l'}$ with $l'+2l$ and $r^*_{l'+2l}$ in the above equation, respectively. In order to find the EoP as a function of $l$ and $l'$, we should solve equation \eqref{Length1} for $r^*_{l'}$, similarly find $r^*_{l'+2l}$, and then substitute these in equation \eqref{EoP1}. Since we cannot solve the equations analytically to find $r^*_{l'}$ and $r^*_{l'+2l}$ as functions of $l'$ and $l$, we need to focus on the high and intermediate energy limits. Note that in the low energy limit, i.e. $1 \ll l\Lambda_c \ \& \ l' \Lambda_c$, we have the disconnected configuration, for which the HMI and consequently the EoP vanish. The high and intermediate energy limits in terms of bulk parameters are depicted in figure \ref{limitsT1}. \subsubsection{High energy limit} In the high energy limit, i.e. $l\Lambda _c\ \&\ l'\Lambda _c\ll 1$ or equivalently $r_c \ll r^*_{l'}\ \&\ r^*_{l'+2l}$, the extremal surfaces $\Gamma_{l'}$ and $\Gamma_{l'+2l}$ are restricted to be near the boundary, see figure \ref{a}. Therefore, the leading contribution to the EoP comes from the AdS boundary, and the non-conformal effects appear as sub-leading terms which correspond to the deviation of the bulk geometry from pure AdS. We can easily obtain $r^*_{l'}$ and $r^*_{l'+2l}$ from eq.\eqref{turning1} and then substitute them back into eq.\eqref{EoP1}. Keeping up to second order in $l\Lambda_c $ and $l'\Lambda_c $, we finally obtain \begin{align}\label{EoPhighE} \hat{E}_p(l\Lambda_c,l'\Lambda_c)&\equiv\frac{4G_N^{(5)}E_p(l,l',l\Lambda_c,l'\Lambda_c)}{L^2\Lambda_c^2}\cr &=2 a_1^2\left(\frac{1}{\left( l'\Lambda_c \right)^2}-\frac{1}{\left(l'\Lambda_c +2l\Lambda_c \right)^2}\right)+ C_1 l\Lambda_c(l\Lambda_c+l'\Lambda_c) \cr &+\frac{3}{4}\log\left(\frac{l'\Lambda_c+2l\Lambda_c}{l'\Lambda_c}\right), \ \ \ C_1>0, \end{align} where $\hat{E}_p(l\Lambda_c,l'\Lambda_c)$ is the redefined $E_p(l,l',l\Lambda_c,l'\Lambda_c)$, which is given by eq.\eqref{Ep-highE}, and $C_1$ is a numerical constant given by eq.\eqref{C1} in appendix \ref{Appendix3}.
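The closed form in eq.\eqref{EoP1} can be checked directly, since the integrand $r\,e^{\frac{3}{2}(r_c/r)^2}$ has the antiderivative $\frac{r^2}{2}e^{\frac{3}{2}(r_c/r)^2}-\frac{3r_c^2}{4}Ei\big(\frac{3}{2}(r_c/r)^2\big)$. A minimal numerical sketch (Python with SciPy; the turning points are illustrative stand-ins rather than solutions of eq.\eqref{Length1}):
\begin{verbatim}
# Verify the Ei closed form of eq.(EoP1) against direct integration,
# in units where 4 G_N^(5) = L^2 = R = 1.
import numpy as np
from scipy.integrate import quad
from scipy.special import expi

r_c = 0.67
r_lo, r_hi = 1.2, 3.0        # stand-ins for r*_{l'+2l} and r*_{l'}

direct, _ = quad(lambda r: r * np.exp(1.5 * (r_c / r)**2), r_lo, r_hi)

def F(r):                    # antiderivative of r exp(3/2 (r_c/r)^2)
    x = 1.5 * (r_c / r)**2
    return 0.5 * r**2 * np.exp(x) - 0.75 * r_c**2 * expi(x)

print(direct, F(r_hi) - F(r_lo))   # the two values agree
\end{verbatim}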
The first term in eq.\eqref{EoPhighE} corresponds to the EoP obtained for pure AdS$_5$, which is positive \cite{Jokela:2019ebz}. The other terms correspond to the non-conformal effects, which are always positive; hence the non-conformal effects increase $\hat{E}_p(l\Lambda_c,l'\Lambda_c)$. From eq.\eqref{EoPhighE} we observe that $\hat{E}_p(l\Lambda_c,l'\Lambda_c)$ increases (decreases) with increasing $l\Lambda_c$ ($l'\Lambda_c$) at fixed $l'\Lambda_c$ ($l\Lambda_c$).
\subsubsection{Intermediate energy limit}
The intermediate energy limit is defined by $l'\Lambda_c \ll 1\ll l\Lambda_c$, or equivalently $r_c\ll r^*_{l'}$ and $r^*_{l'+2l}\to r_c$, and is depicted in figure \ref{b}. In this limit, the extremal surface $\Gamma_{l'}$ is restricted to be near the boundary and the turning point of the extremal surface $\Gamma_{2l+l'}$ approaches $r_c$. Expanding the first and fourth terms in eq.\eqref{EoP1} up to second order in $\frac{r_c}{r^*_{l'}}$, replacing $r^*_{l'}$ using eq.\eqref{turning1} and taking the limit $r^*_{l'+2l}\to r_c$, we reach the following expression
\begin{align}\label{EoPE} \hat{E}_p(l'\Lambda_c)\equiv\frac{4G_N^{(5)}E_p(l',l'\Lambda_c)}{L^2\Lambda_c^2}=\frac{1}{(l'\Lambda_c)^2}\bigg\lbrace 2a_1^2+&\left(C_2-\frac{3}{4}\log(l'\Lambda_c )\right)(l'\Lambda_c)^2\cr &-\frac{C_1}{4}(l'\Lambda_c )^4\bigg\rbrace , \ \ \ \ \ \ C_1, C_2>0,
\end{align}
where $\hat{E}_p(l'\Lambda_c)$ is the redefined $E_p(l',l'\Lambda_c)$, given by eq.\eqref{Ep-intE}, and $C_2$ is a numerical constant given by eq.\eqref{C2} in appendix \ref{Appendix3}. As expected in the limit $l'\Lambda_c \ll 1\ll l\Lambda_c$, $\hat{E}_p(l'\Lambda_c)$ does not depend on the width $l$ of the subsystems, which coincides with the results of \cite{BabaeiVelni:2019pkw,Amrahi:2020jqg}. The first term is the leading one and diverges in the limit $l'\rightarrow 0$, where the two subsystems touch each other. The other two terms represent the non-conformal effects, which are always positive; hence the non-conformality increases $\hat{E}_p(l'\Lambda_c)$. From eq.\eqref{EoPE} we observe that $\hat{E}_p(l'\Lambda_c)$ decreases with increasing $l'\Lambda_c $.
\subsection{EoP at finite temperature}
In order to investigate the thermal behavior of the EoP in the MBH background, we use the background \eqref{MBH} and develop a systematic expansion at low ($lT \ \&\ l' T \ll 1$) and intermediate ($l'T \ll 1\ll lT$) temperature, in the high energy limit ($l\Lambda _c\ \&\ l'\Lambda _c\ll 1$) and in the intermediate energy limit ($l'\Lambda_c \ll 1\ll l\Lambda_c $). Using eqs.\eqref{EWCS} and \eqref{eop} and following the same steps as in the previous section, we arrive at
\begin{align}\label{EopT} E_p=\frac{L^2}{4G_N^{(5)}}\int _{r^*_{l'+2l}} ^{r^*_{l'}} r e^{\frac{3}{2}\left(\frac{r_c}{r}\right)^2}\left(1-\frac{r_H^4}{r^4}\right)^{-\frac{1}{2}} dr, \end{align}
where $r^*_{l'}$ and $r^*_{l'+2l}$ are given by eq.\eqref{LengthhT} upon replacing $l$ with $l'$ and $l'+2l$, respectively. Unfortunately, the above integral cannot be solved analytically.
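Before turning to the limiting expansions, we note that eqs.\eqref{Length1}, \eqref{EoP1} and \eqref{EopT} are straightforward to evaluate numerically for generic parameters. The minimal sketch below works in units $L^2/4G_N^{(5)}=1$; the value of $r_c$, the truncation order of the series and the chosen widths are purely illustrative assumptions. It inverts the truncated series \eqref{Length1} by bracketed root finding and then integrates eq.\eqref{EoP1} directly; for $r_H>0$ the integrand acquires the blackening factor of eq.\eqref{EopT}, with the turning points then taken from the corresponding finite-temperature relation.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gammaln

R_C = 0.3          # non-conformal scale r_c (illustrative value)

def strip_width(r_star, n_max=40):
    # Zero-temperature relation l'(r*) of eq. (Length1), with the series
    # truncated at n_max terms; each u-integral is done by quadrature.
    total = 0.0
    for n in range(n_max + 1):
        coeff = np.exp(gammaln(n + 0.5) - gammaln(n + 1)) / np.sqrt(np.pi)
        val, _ = quad(lambda u: u**(6*n + 3)
                      * np.exp(3*(n + 0.5)*(R_C/r_star)**2*(1 - u**2)),
                      0.0, 1.0)
        total += coeff * val
    return 2.0 * total / r_star

def turning_point(width):
    # l'(r*) decreases monotonically with r*, so a bracketed root suffices
    return brentq(lambda r: strip_width(r) - width, 1.05 * R_C, 1.0e3)

def eop(l, lp, r_h=0.0):
    # Radial integral of eq. (EoP1); for r_h > 0 the integrand carries the
    # blackening factor of eq. (EopT) (the turning points should then be
    # taken from the finite-temperature length relation instead).
    r_lo, r_hi = turning_point(lp + 2*l), turning_point(lp)
    f = lambda r: r * np.exp(1.5*(R_C/r)**2) / np.sqrt(1.0 - (r_h/r)**4)
    val, _ = quad(f, r_lo, r_hi)
    return val

print(eop(l=0.5, lp=0.2))   # connected configuration at zero temperature
\end{verbatim}
Such a direct evaluation provides a useful cross-check on the analytic limits derived below.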
In order to calculate eq.\eqref{EopT}, we use eq.\eqref{expansionn} together with the expansion of the exponential function, $e ^y=\sum\limits_{m=0}^{\infty}\frac{y^m}{\Gamma(m+1)}$, identifying $x=-\frac{r_H^4}{r^4}$ and $y=\frac{3}{2}\left(\frac{r_c}{r^*_{l'}}\right)^2$, and finally obtain
\begin{align}\label{EoPT1} E_p=\frac{L^2}{4G_N^{(5)}}\sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty}\frac{(\frac{3}{2})^m\Gamma(n+\frac{1}{2})r_c^{2m}r_H^{4n}}{\sqrt{\pi}\Gamma(n+1)\Gamma(m+1)}\int_{r^*_{l'+2l}} ^{r^*_{l'}} r^{1-4n-2m}dr . \end{align}
One can check that $\vert x\vert < 1$ for all allowed values of the background parameters, $r_H< r^*_{l'}$ and $r_c < r^*_{l'}$. Using eq.\eqref{lengthhT1}, the relation between $r^*_{l'}$ and $l'$ is given by
\begin{align}\label{lengthT1} l'(r^*_{l'})=\frac{2}{r^*_{l'}}\sum\limits_{m=0}^{\infty}\sum\limits_{n=0}^{\infty}\frac{\Gamma(n+\frac{1}{2})\Gamma(m+\frac{1}{2})}{\pi \Gamma(n+1)\Gamma(m+1)}\left(\frac{r_H}{r^*_{l'}}\right)^{4m}\int_0^1 u^{6n+4m+3} e^{3(n+\frac{1}{2})\left(\frac{r_c}{r^*_{l'}}\right)^2(1-u^2)} du, \end{align}
where $u=\frac{r^*_{l'}}{r}$. The relation between $r^*_{l'+2l}$ and $l'+2l$ is obtained by replacing $r^*_{l'}$ and $l'$ in the above equation with $r^*_{l'+2l}$ and $l'+2l$, respectively. Now we should solve eq.\eqref{lengthT1} for $r^*_{l'}$, and the analogous equation for $r^*_{l'+2l}$, and then substitute them back into eq.\eqref{EoPT1} to obtain the EoP in terms of $l$ and $l'$. In order to proceed analytically, we consider the low and intermediate temperature regimes in the high and intermediate energy limits, see figure \ref{limitsT1}. In the high temperature limit, i.e. $1 \ll lT \ \&\ l' T$, and in the low energy limit, i.e. $1 \ll l\Lambda_c \ \&\ l' \Lambda_c$, we have a disconnected configuration, and hence the HMI, and consequently the EoP, vanish in these limits.
\subsubsection{Low temperature at high energy}
Here, we focus on the limit of low temperature, i.e. $lT \ \&\ l' T\ll 1$, at high energy, i.e. $l\Lambda _c\ \&\ l'\Lambda _c\ll 1$, see figure \ref{c}. In this regime, the leading contribution to the EoP comes from the near boundary expansion, and the thermal and non-conformal corrections appear as sub-leading terms. To express the EoP in terms of $l$ and $l'$, we need $r^*_{l'}$ and $r^*_{l'+2l}$ in terms of $l'$ and $l'+2l$, respectively, which can be obtained from eq.\eqref{rstar}. Performing the integral in eq.\eqref{EoPT1} and keeping terms up to second order in $l\Lambda_c$, $l'\Lambda_c$, $lT$ and $l'T$, we obtain
\begin{align}\label{EoPlowThighE} \tilde{E}_p(l\Lambda_c &,l'\Lambda_c ,lT,l'T)\equiv\frac{4G_N^{(5)}E_p(l,l',l\Lambda_c,l'\Lambda_c ,lT,l'T)}{L^2\Lambda_c T}\cr &=2 a_1^2\left(\frac{1}{( l'\Lambda_c )(l'T)}-\frac{1}{(l'\Lambda_c +2l\Lambda_c )(l'T+2lT)}\right)+C_3\frac{(lT)^2(lT+l'T)}{l\Lambda_c}\cr &+\frac{3(l\Lambda_c)}{4(lT)}\log\left(\frac{l'\Lambda_c+2l\Lambda_c}{l'\Lambda_c}\right)+C_1 \frac{(l\Lambda_c)^2(l\Lambda_c+l'\Lambda_c)}{lT} , \ \ \ C_1>0, C_3<0,
\end{align}
where $\tilde{E}_p(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ is the redefined $E_p(l,l',l\Lambda_c,l'\Lambda_c ,lT,l'T)$, given by eq.\eqref{Ep-lowThighE}, and the numerical constant $C_3$ is given by eq.\eqref{C3} in appendix \ref{Appendix3}.
The first two terms, containing $a_1$ and $C_3$, correspond to pure AdS and the AdS black hole, respectively, while the next two terms are the non-conformal effects, which are always positive; hence the non-conformality increases $\tilde{E}_p(l\Lambda_c,l'\Lambda_c ,lT,l'T)$. Since $C_3$ is a negative constant, the second term is always negative, and hence the thermal fluctuations decrease $\tilde{E}_p(l\Lambda_c,l'\Lambda_c ,lT,l'T)$. Since the EoP is a measure of the total correlation between the two subsystems, the thermal fluctuations promote disentangling between them in this regime. In the following we summarize the results in the corresponding limits:
\begin{itemize}
\item $\Lambda_c\neq 0$ and $T=0$: In this case, we reproduce the results obtained for the MAdS background, which is dual to a non-conformal field theory at zero temperature. The leading term corresponds to the conformal EoP and the non-conformal corrections appear as sub-leading terms. These corrections are always positive and hence the non-conformality increases $\hat{E}_p(l\Lambda_c,l'\Lambda_c)$.
\item $\Lambda_c=0$ and $T\neq 0$: In this case, we reproduce the previous results obtained for the AdS black hole, see for example \cite{BabaeiVelni:2019pkw}. The leading term corresponds to the zero temperature EoP and the finite temperature correction appears as a sub-leading term. The constant $C_3$ is negative, and hence the finite temperature corrections decrease $\tilde{E}_p(lT,l'T)$ for our configuration.
\item $\Lambda_c= 0$ and $T=0$: We recover the previous results obtained for pure AdS$_5$, corresponding to the EoP of our configuration in a zero temperature conformal field theory \cite{Jokela:2019ebz}. Obviously, this term is positive.
\end{itemize}
Here, we can study $\tilde{E}_p(l\Lambda_c,l'\Lambda_c ,lT,l'T)$ near the transition point. Taking the transition limit $T\rightarrow \frac{\Lambda_c}{\sqrt{2}\pi}$ of eq.\eqref{EoPlowThighE}, we get
\begin{align}\label{EoPtra} \tilde{E}_p(l\Lambda_c,l'\Lambda_c,lT,l'T)\bigg\vert_{T\rightarrow\frac{\Lambda_c}{\sqrt{2}\pi}}&=\sqrt{2}\pi\Bigg\lbrace 2 a_1^2\left(\frac{1}{\left( l'\Lambda_c \right)^2}-\frac{1}{\left(l'\Lambda_c +2l\Lambda_c \right)^2}\right)\cr &+\left(C_1+\frac{C_3}{4\pi^4}\right) (l\Lambda _c)(l\Lambda _c+l'\Lambda _c)+\frac{3}{4}\log\left(\frac{l'+2l}{l'}\right)\Bigg\rbrace .
\end{align}
Similar to the previous sections, we fix $l\Lambda_c$ and $l'\Lambda_c$ and compare $\hat{E}_p(l\Lambda_c,l'\Lambda_c)$ at zero temperature, eq.\eqref{EoPhighE}, with $\tilde{E}_p(l\Lambda_c,l'\Lambda_c,lT,l'T)$ at finite temperature in the transition limit, eq.\eqref{EoPtra}. We get
\begin{align} \frac{\tilde{E}_p(l\Lambda_c,l'\Lambda_c,lT,l'T)}{\sqrt{2}\pi}\bigg\vert _{T\rightarrow\frac{\Lambda_c}{\sqrt{2}\pi}}- \hat{E}_p(l\Lambda_c,l'\Lambda_c) =\frac{C_3}{4\pi^4 } (l\Lambda_c)(l\Lambda_c+l'\Lambda_c)<0. \end{align}
This result shows that near the transition point the correlation between the two subsystems is larger at zero temperature than at finite temperature, and hence the state at zero temperature is the favorable one. In the high energy limit, since the energy of the subsystems is much larger than the energy of the phase transition point, see figure \ref{c}, we expect the zero temperature state to be more favorable than the finite temperature one.
\subsubsection{Low temperature at intermediate energy }\label{lowTintE}
In this subsection we consider the limit of low temperature, i.e. $lT \ \&\ l'T\ll 1$, at intermediate energy, i.e. $l'\Lambda_c \ll 1\ll l \Lambda_c $, see figure \ref{d}. In this limit $T \ll \Lambda _c$ ($r_H \ll r_c$). In this regime, the extremal surface $\Gamma_{l'}$ is restricted to be near the boundary and the turning point of the extremal surface $\Gamma_{2l+l'}$ approaches $r_c$. Using eqs.\eqref{EoPT1} and \eqref{rstar}, and keeping terms up to fourth order in $l'\Lambda_c$ and $l'T$, we reach the following expression (see appendix \ref{Appendix3} for the details of the calculation)
\begin{figure} \centering \includegraphics[scale=0.38]{Ep-IntThighE1} \includegraphics[scale=0.38]{Ep-IntThighE2} \caption{Left: $\tilde{E}_p(l'\Lambda_c,l'T)$ as a function of $l'\Lambda _c$ for fixed $l'T=10^{-5}$ in the low temperature limit at intermediate energy. Right: $\tilde{E}_p(l'\Lambda_c,l'T)$ as a function of $l'T$ for fixed $l'\Lambda_c=0.005$ in the low temperature limit at intermediate energy.} \label{LowTIntermediateE} \end{figure}
\begin{align}\label{EoPlTiE} \tilde{E}_p(l'\Lambda_c,l'T)&\equiv\frac{4G_N^{(5)}E_p(l',l'\Lambda_c,l'T)}{L^2\Lambda_c T}=\frac{1}{(l'\Lambda_c)(l'T)}\Bigg\lbrace 2a_1^2-\frac{C_3}{4}(l'T)^4\cr &+\Bigg[C_2-\frac{3}{4}\log(l'\Lambda_c)+C_6\left(\frac{l'T}{l'\Lambda_c}\right)^4+C_4(l'T)^4 \Bigg](l' \Lambda_c)^2\cr &-\left[\frac{C_1}{4}+C_5(l'T)^4\right](l'\Lambda _c)^4\Bigg\rbrace , \ \ C_1,C_2,C_4 >0, C_3,C_5,C_6<0
\end{align}
where $\tilde{E}_p(l'\Lambda_c,l'T)$ is the redefined $E_p(l',l'\Lambda_c,l'T)$, given by eq.\eqref{Ep-lowTintE}, and the numerical constants $C_4$, $C_5$ and $C_6$ are given by eq.\eqref{C4} in appendix \ref{Appendix3}. As one can see, $l$ plays no role in the final result, in complete agreement with the results obtained in \cite{Amrahi:2020jqg}. The first term in eq.\eqref{EoPlTiE}, which is the leading one, exhibits an area-law divergence in the $l'\rightarrow 0$ limit, where the two subsystems touch each other. The second term, containing $C_3$, is the dominant finite temperature correction; it is always positive, and hence the finite temperature corrections increase $\tilde{E}_p(l'\Lambda_c,l'T)$ in this regime, unlike in the low temperature limit at high energy. The term containing $C_2$ and the logarithmic term are the dominant non-conformal effects. Since these terms are always positive, the non-conformal effects increase $\tilde{E}_p(l'\Lambda_c,l'T)$. From eq.\eqref{EoPlTiE} we observe that for fixed $l'\Lambda_c$ ($l'T$), $\tilde{E}_p(l'\Lambda_c,l'T)$ decreases with increasing $l'T$ ($l'\Lambda_c$). This is clearly depicted in figure \ref{LowTIntermediateE}, where we plot $\tilde{E}_p(l'\Lambda_c,l'T)$ as a function of $l'\Lambda_c$ ($l'T$) for fixed $l'T$ ($l'\Lambda_c$). If we set $T=0$, we reproduce the result previously obtained for the MAdS background in the intermediate energy limit, eq.\eqref{EoPE}. From eqs.\eqref{EoPlowThighE} and \eqref{EoPlTiE} one can see that there is a crossover regime between high and intermediate energy at low temperature, where the sign of the thermal correction to the EoP changes. In the low temperature limit at high energy, the signs of the thermal effects in the EoP and the HMI coincide, and these effects decrease both quantities. In the regime of low temperature at intermediate energy we have $l'\ll l$ and $T\ll \Lambda_c$. If we take $l'$ to be very small, the EoP approaches the HEE, and hence it is reasonable that the thermal effects are positive in this regime.
\subsubsection{Intermediate temperature at high energy}\label{intThighE}
In this subsection we consider the intermediate temperature limit, i.e. $l'T \ll 1\ll lT $, at high energy, i.e. $l\Lambda _c\ \&\ l'\Lambda _c\ll 1$. In this limit $\Lambda_c \ll T$ ($r_c \ll r_H$), see figure \ref{e}. In this regime, the extremal surface $\Gamma_{l'}$ is restricted to be near the AdS boundary and the turning point of the extremal surface $\Gamma_{l'+2l}$ approaches $r_H$. Keeping terms up to fourth order in $l'\Lambda_c$ and $l'T$, we obtain the following expression (the details of the calculation can be found in appendix \ref{Appendix3})
\begin{align}\label{EoPIntT} \tilde{E}_p(l'\Lambda_c,l'T)&\equiv \frac{4G_N^{(5)}E_p(l',l'\Lambda_c,l'T)}{L^2\Lambda_c T}=\frac{1}{(l'\Lambda_c)(l'T)}\Bigg\lbrace 2a_1^2-\frac{C_3}{4}(l'T)^4\cr &\ \ \ \ \ \ \ \ +\Bigg[C_7-\frac{3}{4}\log(l'T)+\frac{9}{128\pi}\left(\frac{l'\Lambda_c}{l'T}\right)^2-C_4(l'T)^4 \Bigg](l' \Lambda_c)^2\cr &\ \ \ \ \ \ \ \ -\left[\frac{C_1}{4}+C_5(l'T)^4\right](l'\Lambda _c)^4\Bigg\rbrace, \ \ C_1,C_4>0, C_3,C_5,C_7<0,
\end{align}
where $\tilde{E}_p(l'\Lambda_c,l'T)$ is the redefined $E_p(l',l'\Lambda_c,l'T)$, given by eq.\eqref{Ep-intThighE}, and the numerical constant $C_7$ is given by eq.\eqref{C7} in appendix \ref{Appendix3}. The first and second terms are the same as in the previous subsection. It is seen that $\tilde{E}_p(l'\Lambda_c,l'T)$ does not depend on the width of the subsystems and diverges in the $l'\rightarrow 0$ limit. Since $C_3$ is a negative constant, the second term, which is the dominant one, is always positive; hence the finite temperature correction increases $\tilde{E}_p(l'\Lambda_c,l'T)$, unlike the HMI, which is decreased by thermal effects in this regime. This behavior coincides with the results obtained in \cite{BabaeiVelni:2019pkw}. The logarithmic term is the dominant non-conformal effect. This term is always positive, and hence the non-conformal effects increase $\tilde{E}_p(l'\Lambda_c,l'T)$. For $\Lambda_c=0$ we reproduce the previous results obtained for AdS black holes \cite{BabaeiVelni:2019pkw}. From eq.\eqref{EoPIntT} we observe that $\tilde{E}_p(l'\Lambda_c,l'T)$ decreases with increasing $l'\Lambda_c $ ($l'T$) for fixed $l'T$ ($l'\Lambda_c $).
\subsubsection{Intermediate temperature at intermediate energy }
In this subsection we consider the intermediate temperature limit, i.e. $l'T \ll 1\ll lT$, at intermediate energy, i.e. $l'\Lambda_c \ll 1\ll l\Lambda_c $, see figure \ref{f}. In other words, in the intermediate energy limit we take $r_c\rightarrow r_H$ ($T\rightarrow\frac{\Lambda_c }{\sqrt{2}\pi}$), which is the transition limit.
Keeping terms up to fourth order in $l'\Lambda_c$, we obtain (the details of the calculation can be found in appendix \ref{Appendix3})
\begin{align}\label{EoPinttra} \tilde{E}_p(l'\Lambda_c,l'T)\bigg\vert_{T\rightarrow \frac{\Lambda_c}{\sqrt{2}\pi}}&\equiv\frac{4G_N^{(5)}E_p(l',l'\Lambda_c,l'T)}{L^2\Lambda_c T}\bigg\vert_{T\rightarrow \frac{\Lambda_c}{\sqrt{2}\pi}}\cr &=\frac{\sqrt{2}\pi}{(l'\Lambda_c)^2}\bigg\lbrace 2a_1^2+\left(C_7-\frac{3}{4}\log(l'\Lambda_c)\right)(l'\Lambda_c)^2 \cr & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +C_8(l'\Lambda_c)^4\bigg\rbrace ,\ \ C_7 >0,\ C_8<0,
\end{align}
where $\tilde{E}_p(l'\Lambda_c,l'T)$ is the redefined $E_p(l',l'\Lambda_c,l'T)$ in the transition limit, given by eq.\eqref{Ep-intTintE}, and the numerical coefficients $C_7$ and $C_8$ are given by eq.\eqref{C5} in appendix \ref{Appendix3}. Similar to the two previous subsections, $\tilde{E}_p(l'\Lambda_c,l'T)$ does not depend on the width of the subsystems. The first term, containing $a_1$, is the leading term and diverges in the $l'\rightarrow 0$ limit, while the second term, containing $C_7$ and the logarithm, is the dominant non-conformal effect; it is always positive, and hence the non-conformality increases $\tilde{E}_p(l'\Lambda_c,l'T)$. To get a better understanding of the transition limit of $\tilde{E}_p(l'\Lambda_c,l'T)$ in the intermediate temperature limit at intermediate energy, we fix $l'\Lambda_c$ and compare eq.\eqref{EoPinttra} with the result obtained in eq.\eqref{EoPE}. We have
\begin{align} \tilde{E}_p(l'\Lambda_c,l'T)\big\vert _{T\rightarrow\frac{\Lambda_c}{\sqrt{2}\pi}}- \hat{E}_p (l'\Lambda_c)=0.773+0.025(l'\Lambda_c)^2>0 . \end{align}
From the above equation we observe that in the intermediate energy limit, unlike the high energy limit, the two subsystems near the transition point are less correlated at zero temperature, and hence the state at finite temperature is the favorable one. In the intermediate energy limit, since the energy of the subsystems is close to the energy of the phase transition point, see figure \ref{f}, we expect the state at finite temperature to be more favorable than the zero temperature one.
\section{Conclusion}\label{section6}
In this paper, we consider a non-conformal field theory at zero and finite temperature which has a holographic dual. Using the gauge/gravity duality, we study the HEE, HMI and EoP in the backgrounds dual to this model. In order to obtain analytical expressions for these observables, we use a systematic expansion in specific limits, such as high and intermediate energy at zero, low, intermediate and high temperature. In the following we present the results obtained for the HEE, HMI and EoP:
\begin{itemize}
\item Entanglement entropy: We calculate the redefined HEE using the RT-prescription for a strip with width $l$ and length $L$ in the high energy limit at zero, low and high temperature. \textit{In all of these regimes the non-conformal (thermal) effects decrease (increase) the HEE}. Due to the non-conformality, a logarithmically divergent term appears in the HEE. At high temperature, two terms proportional to $T^2$ and $T^3$ appear in the HEE; the first scales with the area $L^2$ and the second with the volume $lL^2$ of the strip, and hence they correspond to the entanglement entropy and the thermal entropy, respectively.
In the high energy limit and near the phase transition point ($T\rightarrow \frac{\Lambda_c}{\sqrt{2}\pi}$), the subsystems $A$ and $\bar{A}$ are less entangled at zero temperature than at finite temperature, and hence the state at zero temperature is the favorable one.
\item Mutual information: We calculate the redefined HMI for a symmetric configuration consisting of two disjoint strips with equal width $l$ separated by the distance $l'$, in the high energy limit at zero, low and intermediate temperature. \textit{In all of these regimes the non-conformal (thermal) effects increase (decrease) the redefined HMI}. Near the phase transition point, the HMI of the two subsystems is larger at zero temperature than in the finite temperature case. Therefore, at high energy the state at zero temperature is the favorable one.
\item Entanglement of purification: We calculate the redefined EoP for a symmetric configuration consisting of two disjoint strips with equal width $l$ separated by the distance $l'$, at zero and finite temperature, using the holographic proposal which gives the EoP in terms of the entanglement wedge cross-section. \textit{In all studied regimes the non-conformal effects increase the redefined EoP}. At low temperature the thermal fluctuations decrease the redefined EoP in the high energy limit, while they increase it in the intermediate energy limit. In the intermediate temperature limit, the effects of temperature increase the redefined EoP in both the high and intermediate energy limits. Near the phase transition point, in the high energy limit the redefined EoP between the two subsystems is larger at zero temperature than at finite temperature, while in the intermediate energy limit it is smaller at zero temperature than at finite temperature. Therefore, near the transition point, in the high (intermediate) energy limit the state at zero (finite) temperature is the favorable one.
\end{itemize}
The non-conformal and thermal effects in the studied regimes are summarized in tables \ref{non-conformal} and \ref{thermal}, respectively.
\begin{table}[ht] \centering \caption{Non-conformal effects ($\Lambda_c$)} \begin{scriptsize} {\sffamily%
\begin{tabular}{*{13}{c|}} \cline{2-13} \multirow{4}{*}{} & \multicolumn{4}{c|}{High energy} & \multicolumn{4}{c|}{Intermediate energy} & \multicolumn{4}{c|}{Low energy}\\ \cline{2-13} & \multicolumn{1}{c|}{$T=0$} & \multicolumn{1}{c|}{low T} & \multicolumn{1}{c|}{int T} & \multicolumn{1}{c|}{high T} & \multicolumn{1}{c|}{$T=0$} & \multicolumn{1}{c|}{low T} & \multicolumn{1}{c|}{int T} & \multicolumn{1}{c|}{high T} & \multicolumn{1}{c|}{$T=0$} & \multicolumn{1}{c|}{low T} & \multicolumn{1}{c|}{int T} & \multicolumn{1}{c|}{high T} \\ \hline \multicolumn{1}{|c|}{HEE} & $\downarrow$ & $\downarrow$ & $-$ & $\downarrow$ & $-$ & $-$ & $-$ & $-$ & $\times$ & $\times$ & $\times$ & $\times$\\ \hline \multicolumn{1}{|c|}{HMI} & $\uparrow$ & $\uparrow$ & $\uparrow$ & 0 & $\times$ & $\times$ & $\times$ & 0 & 0 & 0 & 0 & 0\\ \hline \multicolumn{1}{|c|}{EoP} & $\uparrow$ & $\uparrow$ & $\uparrow$ & 0 & $\uparrow$ & $\uparrow$ & $\uparrow$ & 0 & 0 & 0 & 0 & 0\\ \hline \end{tabular} }% \end{scriptsize} \label{non-conformal} \end{table}
\begin{table}[ht] \centering \caption{Thermal effects ($T$)} \begin{scriptsize} {\sffamily%
\begin{tabular}{*{10}{c|}} \cline{2-10} \multirow{4}{*}{} & \multicolumn{3}{c|}{Low temperature} & \multicolumn{3}{c|}{Intermediate temperature} & \multicolumn{3}{c|}{High temperature}\\ \cline{2-10} & \multicolumn{1}{c|}{high E} & \multicolumn{1}{c|}{int E} & \multicolumn{1}{c|}{low E} & \multicolumn{1}{c|}{high E} & \multicolumn{1}{c|}{int E} & \multicolumn{1}{c|}{low E}& \multicolumn{1}{c|}{high E} & \multicolumn{1}{c|}{int E} & \multicolumn{1}{c|}{low E} \\ \hline \multicolumn{1}{|c|}{HEE} & $\uparrow$ & $-$ & $\times$ & $-$ & $-$ & $-$ & $\uparrow$ & $-$ & $\times$\\ \hline \multicolumn{1}{|c|}{HMI} & $\downarrow$ & $\times$ & 0 & $\downarrow$ & $\times$ & 0 & 0 & 0 & 0\\ \hline \multicolumn{1}{|c|}{EoP} & $\downarrow$ & $\uparrow$ & 0 & $\uparrow$ & $\uparrow$ & 0 & 0 & 0 & 0\\ \hline \end{tabular} }% \end{scriptsize} \label{thermal}
\begin{flushleft} $\uparrow$: Non-conformal or thermal effects increase the mentioned quantity. \\ $\downarrow$: Non-conformal or thermal effects decrease the mentioned quantity.\\ $-$: These regimes do not exist for the mentioned quantity.\\ $\times$ : We cannot obtain analytical results in these regimes.\\ $0$ : The mentioned quantity vanishes in these regimes. \end{flushleft} \end{table}
\section*{Acknowledgement} We would like to thank M. Lezgi for useful comments and discussions on related topics.
\section{Introduction}
Possible evidence of a vortical flow in nuclei was suggested by the analysis of the structure of the isoscalar giant dipole resonance (ISGDR) observed in $(\alpha, \alpha')$ scattering experiments \cite{Clark01,Youngb04,Itoh_PRC_03,Uchi03,Uchi04} (see Ref. \cite{Pa07} for a review and an exhaustive list of references). In fact, the prevailing conclusion from this analysis was that the high-energy peak of the ISGDR is produced by compressional vibrations \cite{Vret00,Colo00}, whereas the low-energy bump should be attributed to a vortical nuclear flow associated with a toroidal dipole mode \cite{Bast93,Vret02,Kv03,Mis06}. The toroidal moments emerge as second-order terms of multipole expansions of electric currents in physical systems \cite{Dub75,Dub83}. In nuclei, the toroidal mode (TM) was predicted within a hydrodynamical model \cite{Sem81}. On the other hand, it was argued that a strong mixing between compressional and vortical vibrations in the isoscalar E1 states should be expected \cite{Mis06}. Moreover, it is not yet completely settled how the vorticity relates to the toroidal and compressional modes. This depends on the way the vorticity is defined. In hydrodynamical (HD) models, where it is characterized by a non-vanishing curl of the velocity field \cite{La87}, the vorticity is solely associated with the TM, while the compressional mode (CM) is irrotational. An alternative definition, more closely linked to nuclear observables, was proposed by Ravenhall and Wambach (RW) \cite{Ra87}. It adopts, as a measure of the vorticity, the multipole component $j^{(fi)}_{\lambda l=\lambda+1}(r)$ of the transition current density $\langle f|\hat{\vec{j}}_{\rm{nuc}}(\vec{r}) |i\rangle$. The motivation for this choice is that this component is not constrained by the continuity equation. With this measure of the vorticity, the TM and CM describe mixed flows of both vortical and irrotational nature. In the recent study \cite{Kv11}, the vortical operator of RW type was derived and related in a simple manner to the CM and TM operators. Then the vortical, toroidal and compression E1 strengths in $^{208}$Pb were compared and thoroughly scrutinized within a separable random-phase approximation (SRPA) \cite{Ne02,Ne06} using the SLy6 Skyrme force \cite{Ch97}. Later a similar study was performed for the isotopes $^{100,124,132}$Sn \cite{Sn_PS_13}. According to the above SRPA exploration, the TM falls into the energy region of the so-called pygmy dipole resonance (PDR) \cite{Pa07}, which is supposed to be induced in neutron-rich nuclei by a relative translational oscillation of the neutron skin against the residual N=Z core. So an interplay of the TM and PDR may be expected. As shown in our recent study \cite{Rep_PRC_13} within the full (non-separable) RPA \cite{Rei92}, the PDR energy region indeed embraces various modes with a strong TM fraction. Moreover, the vortical flow dominates in the nuclear interior, while the irrotational motion (relevant for E1 transitions in the long-wave approximation) prevails at the nuclear surface. This point deserves further investigation. In particular, a comparative analysis of experimental data from $(\alpha,\alpha' \gamma)$ reactions (relevant for both TM and PDR in the T=0 channel), $(\gamma,\gamma')$ observations (e.g. in Sn isotopes \cite{Endres12} and N=82 isotones \cite{Savran11}), and $(e,e')$ reactions is needed. In this study, we continue the exploration of the toroidal, compression and vortical (RW) E1 strengths in various mass regions using the SRPA.
While the previous analysis concerned $^{208}$Pb \cite{Kv11} and Sn isotopes \cite{Sn_PS_13}, the present study concentrates on Sm isotopes, from spherical $^{144}$Sm to axially deformed $^{154}$Sm. Thereby we look particularly at the influence of the nuclear deformation. As in \cite{Sn_PS_13}, we use a representative set of Skyrme forces (SLy6 \cite{Ch97}, SkT6 \cite{To84}, SVbas \cite{Kl09}, SkM* \cite{Ba82}, and SkI3 \cite{Re95}) covering a wide range of isoscalar effective masses, $m_0/m = 1$--$0.58$. The isoscalar (T=0), isovector (T=1), and pure proton (elm) channels of E1 excitations are considered. As in \cite{Kv11,Sn_PS_13}, the relative contributions to the strengths of the convection $j_{\text{con}}$ and magnetization $j_{\text{mag}}$ nuclear currents are inspected. To demonstrate the ability of our approach and the accuracy of different Skyrme parametrizations, supplementary characteristics (binding energies, photoabsorption cross sections, energy-weighted sum rules) are considered as well. The SRPA method used in the present calculations has already been successfully applied to the description of various kinds of nuclear excitations (electric \cite{Kv11,Ne07,Nest_IJMPE_08,Kl08,Kv11b} and magnetic \cite{Ve09,Ne10,Nest_IJMPE_10_M1} giant resonances, E1 strength near the particle thresholds \cite{Kv11b,Kv09}, and TM/CM/RW modes \cite{Kv11,Sn_PS_13}) in both spherical and deformed nuclei and has proven to be an efficient and reliable theoretical tool.

The paper is organized as follows. In Sec. II the theoretical background and calculation scheme are presented. In Sec. III, the supplementary characteristics are inspected and the numerical results for the TM, CM, and RW strengths are discussed. In Sec. IV, the conclusions are drawn. In Appendix A, the vortical, toroidal, and compression flows and their operators are discussed, and the HD and RW conceptions of the vorticity of nuclear flow are outlined. Appendix B sketches the derivation of the RW equations. In Appendix C, basic information about the SRPA method is given.
\section{Theoretical background and calculation scheme}
The main topic of this paper is the influence of nuclear deformation on the toroidal, compressional and vortical dipole strength functions. The corresponding transition operators are \cite{Kv11}
\begin{eqnarray} \label{29} && \hat{M}(\rm{tor};\:1\mu) = - \frac{2}{2c\sqrt{3}}\:\int\:d^3r \: \hat{\vec{j}}_{nuc}(\vec{r}) \nonumber \\ && \cdot\:\left[\:\frac{\sqrt{2}}{5} \:r^2 \:\vec{Y}_{12\mu}(\hat{r}) + (r^2 - \delta_{T,0} \langle r^2\rangle_0)\:\vec{Y}_{10\mu}(\hat{r})\:\right], \end{eqnarray}
\begin{eqnarray} \label{30} && \hat{M'}(\rm{com};\:1\mu) = \frac{1}{10}\:\int\:d^3r \: \hat{\rho}(\vec{r}) \nonumber \\ && \qquad \qquad \quad \cdot\: \left[\: r^3 - \delta_{T,0} \:\frac{5}{3}\: \langle r^2 \rangle_0 \:r\:\right]\:Y_{1\:\mu}(\hat{r}), \end{eqnarray}
\begin{equation} \label{31} \hat{M}(\rm{vor};\:1\mu) = -\frac{i}{5c}\:\sqrt{\frac{3}{2}}\:\int\:d^3r \:r^2\: \hat{\vec{j}}_{\text{nuc}}(\vec{r}) \cdot \vec{Y}_{12\mu}(\hat{r}), \end{equation}
where $\hat{\vec{j}}_{\rm{nuc}}(\vec{r})$ and $\hat{\rho}(\vec{r})$ are the operators of the nuclear current and the nuclear density, respectively. The symbols $Y_{\lambda \mu}(\hat{r})$ and $\vec{Y}_{\lambda l \mu}(\hat{r})$ stand for spherical harmonics and vector spherical harmonics, respectively, and $\langle r^2\rangle_0 = \int\:d^3r \: \rho_0(\vec{r}) \:r^2$ is the ground-state square radius.
The derivation of these operators and their connection to the long-wavelength limit of the standard E1 operator
\begin{equation} \hat{M}(E\:1 \mu) = - \int d^3r \:\hat{\rho}(\vec{r}) \: r \: Y_{1\mu}(\hat{r}) \label{E1op} \end{equation}
can be found in Appendix A and Ref. \cite{Kv11}. The toroidal, compressional, and vortical strength functions
\begin{eqnarray} \label{39} && S\:'_{\gamma}(E1\:,\;E) = \nonumber \\ && \sum_{\mu=0,\mp1} \sum_{\nu} \: |\langle\nu|\:\hat{M}(\gamma;\;1\mu)\:|0\rangle|^2 \:\xi_{\Delta}(E-E_{\nu}) \nonumber \\ \end{eqnarray}
are calculated in the framework of the Skyrme SRPA approach \cite{Ne02,Ne06}; see Appendix C for more details. In the above expression, $\gamma$ labels the TM, CM, and RW strengths determined by the operators (\ref{29}), (\ref{30}), and (\ref{31}), respectively. The indices $\nu$ stand for the RPA states with the energies $E_{\nu}$, and $|0\rangle$ is the RPA ground state. Further,
\begin{equation}\label{40} \xi_{\Delta}(E-E_{\nu}) = \frac{1}{2\pi}\:\frac{\Delta}{(E-E_{\nu})^2 + (\frac{\Delta}{2})^2} \end{equation}
is the Lorentz weight with the averaging parameter $\Delta$. The averaging is needed for the convenience of comparison of the results with the experimental data and to roughly simulate the smoothing effects beyond the SRPA (escape widths and coupling to complex configurations). For the broad and poorly known TM, CM, and RW strengths, a constant averaging width $\Delta$=1 MeV is optimal. The strengths are analyzed in the T=0, T=1, and electromagnetic ($elm$) channels, characterized by the effective charges $e_q^{\rm{eff}}$ and gyromagnetic ratios $g_q^{\rm{eff}}$
\begin{eqnarray} \label{44} T=0: && e_n^{\text{eff}}=e_p^{\text{eff}}=1, \; g_{n,p}^{\text{eff}}=\frac{\zeta }{2}\:(g_n + g_p) , \\ \label{45} T=1: && e_n^{\text{eff}}=-e_p^{\text{eff}}=-1, \;g_{n,p}^{\text{eff}}=\frac{\zeta}{2}\:(g_n - g_p) , \\ \label{46} elm: && e_n^{\text{eff}}=0,\:e_p^{\text{eff}}=1, \quad g_{n,p}^{\text{eff}}=\zeta g_{n,p} \; , \end{eqnarray}
where $g_{n} = -3.82$ and $g_{p} = 5.58$ are the free neutron and proton gyromagnetic ratios and $\zeta \approx 0.7$ is the usual quenching factor \cite{Al89}. See Appendix A for details. The photoabsorption cross section is fully determined by the $E1$ transitions and thus reads \cite{Ri80}
\begin{equation} \label{43} \sigma_{\text{phot}}(E) = \frac{16\:\pi^3\:\alpha_{e}}{9\:e^2} \;E\: S(E1;\:E), \end{equation}
where $\alpha_{e}=1/137$ is the fine-structure constant and $S(E1;\:E)$ is the strength function calculated with the standard dipole operator
\begin{equation}\label{E1} \hat{M} (E1\mu) = \frac{N}{A}\sum_{p=1}^Z r_p Y_{1\mu}(\Omega_p) - \frac{Z}{A}\sum_{n=1}^N r_n Y_{1\mu}(\Omega_n) \; . \end{equation}
For the photoabsorption cross section, detailed experimental data are available \cite{atlas,janis}, so it is worthwhile to compute this observable more accurately. It should be taken into account that i) the escape widths appear above the particle emission thresholds and grow with energy due to the widening of the emission phase space, and ii) the collisional widths, induced by the coupling with complex configurations, also increase with the excitation energy. To simulate these trends, one should use in (\ref{40}) an energy-dependent averaging parameter $\Delta (E)$. This can be done by implementing a double folding scheme \cite{Kv11b}. We first calculate the strength function (\ref{39}) with the operator (\ref{E1}) using a small but fixed value of $\Delta$.
This gives a strength distribution $S'(E1;\;E)$ very close to the one obtained in RPA, but on an equidistant energy grid. This strength is then folded again using an energy-dependent $\Delta (E)$:
\begin{equation}\label{41} S(E1;\;E) = \int\:dE'\:S\:'(E1;\;E')\;\xi_{\Delta (E')}(E-E'). \end{equation}
In the present study, we use a simple linear dependence
\begin{equation}\label{42} \Delta (E') = \left\{ \begin{array}{ll} \Delta_0 & \mbox{for } E'\leq E_{\mathrm{th}} \\ \Delta_0 + a (E' - E_{\mathrm{th}}) & \mbox{for } E' > E_{\mathrm{th}} \end{array}\right. \end{equation}
where $E_{\text{th}}$ is the energy of the first emission threshold and $\Delta_0=0.1$ MeV is a minimal width. The parameter $a$ is chosen so as to reproduce the GDR strength distribution (a minimal numerical illustration of this folding scheme is given in the next section).

In order to test the sensitivity of the modes, we consider a selection of sufficiently different Skyrme parametrizations (SkT6 \cite{To84}, SVbas \cite{Kl09}, SkM* \cite{Ba82}, SLy6 \cite{Ch97}, and SkI3 \cite{Re95}) covering a wide range of isoscalar effective masses: $m_0/m$=1, 0.9, 0.79, 0.69, and 0.58, respectively. All the parametrizations provide a good description of the basic ground-state properties of nuclei. The configuration space in the calculations covers all particle-hole (two-quasiparticle) states with an excitation energy up to $E_{\rm{cut}} \approx 175$ MeV. Such a big basis allows us to exhaust in the SRPA calculations about 100\% of the isovector Thomas-Reiche-Kuhn (TRK) energy-weighted sum rule (EWSR) \cite{Ri80} for the GDR and of the isoscalar Harakeh EWSR for the ISGDR \cite{Ha01} (see also the extensive discussion at the end of the next subsection and Appendix C). Besides that, the large basis is crucial to lower the energy position of the spurious E1(T=0) peak towards the correct zero energy. In the present study, it lies in spherical $^{144}$Sm at $1.5-2.0$ MeV, depending on the Skyrme parametrization. This is not as good as in some full RPA calculations for $^{208}$Pb, which use a larger configuration space and manage to yield the spurious peak below $\sim$ 0.6 MeV, see e.g. the SLy5 \cite{Co13} and SLy6 \cite{Rep_PRC_13} results. However, the present value is sufficient for our aims since the T=0 strengths of interest are situated at $E>$ 5 MeV and a further extension of the configuration space does not significantly affect them. To be on the safe side, the TM, CM, and RW strength functions are shown below only in the range $E>$ 5 MeV. The calculations exploit a 2D representation in cylindrical coordinates using a mesh size of 0.3 fm and a calculation box of 21 fm. For open-shell isotopes, we use zero-range pairing forces. The Hartree-Fock-Bogoliubov (HFB) equations are implemented at the BCS level \cite{Ben00}. For closed-shell nuclei, this reduces automatically to a Hartree-Fock (HF) treatment.
\section{Results and Discussions}
\subsection{Supplementary characteristics}
\begin{figure}[b] \includegraphics[width=9cm]{fig1.eps} \vspace{2mm} \caption{ (color online) HF/HFB binding energies of Sm isotopes versus the dimensionless parameter of quadrupole deformation $\beta$, calculated with different Skyrme parametrizations. The binding energies at the equilibrium deformation are to be compared with the experimental values (dashed horizontal lines) obtained from mass measurements \cite{mass}.}
\end{figure}
\begin{figure}[t] \includegraphics[width=9cm]{fig2.eps} \vspace{5mm} \caption{ (color online) Photoabsorption cross section versus excitation energy in $^{144-154}$Sm isotopes, calculated for different Skyrme parametrizations and compared with experimental values \cite{janis}. For each isotope, we indicate the dimensionless quadrupole deformation $\beta$ of the prolate ground state computed with the force SLy6.} \end{figure}
In Figures 1 and 2, some basic features (binding energies and photoabsorption cross sections) are presented to demonstrate the accuracy of our approach. In addition to spherical $^{144}$Sm and deformed $^{154}$Sm, the transitional isotopes $^{148,150,152}$Sm are also shown to illustrate the trends accompanying the development of the deformation. The treatment of these soft isotopes within RPA is known to be insufficient and has to be amended by the coupling with complex configurations \cite{En10,Li08}. Nevertheless, we find it useful to present these RPA results to outline the trends.

In Figure 1, the binding energies of Sm isotopes calculated within the HF/HFB approach are depicted as functions of the dimensionless quadrupole deformation parameter $\beta$. For all Skyrme forces used here, we see pronounced main minima at $\beta$=0 in spherical $^{144}$Sm and at $\beta$=0.33-0.34 in prolate deformed $^{154}$Sm. In the latter case, the internal quadrupole moment is $Q$=6.4-6.6 b. Both the computed $\beta$ and $Q$ are in good agreement with the experimental values $\beta_{\text{exp}}$=0.34 and $Q_{\text{exp}}$=6.6 b \cite{Raman87}. In the transitional $^{148,150,152}$Sm, the calculations produce two shallow minima of roughly the same depth, corresponding to prolate and oblate shapes, respectively. Although all the Skyrme parametrizations are fitted to experimental binding energies of selected doubly magic or semi-magic nuclei, in our calculations only the SLy6, SVbas, and SkI3 forces reproduce the measured binding energy \cite{mass} of the semi-magic $^{144}$Sm. The two older parametrizations SkM$^*$ and SkT6 were tuned to doubly magic nuclei only and do not perform so well for the rather soft $^{144}$Sm.

In Figure 2, the calculated photoabsorption cross section is inspected. We obtain good agreement with experiment \cite{janis} for most Skyrme forces in spherical $^{144}$Sm and well-deformed $^{154}$Sm. The deformation splitting of the GDR in $^{154}$Sm is also reproduced. For the transitional isotopes, the agreement is acceptable as well. In all the nuclei, the main deviation arises at the high-energy wing of the GDR. This is probably an effect of neglecting the complex configurations.
\begin{figure*} \includegraphics[width=12cm]{fig3.eps} \caption{ (color online) Pure HF/HFB strength functions for the T=0 (left) and T=1 (right) compression mode in the spherical $^{144}$Sm (black thin curve) and deformed $^{154}$Sm (red bold curve) isotopes, computed with different Skyrme forces. For SLy6 in $^{154}$Sm, we also show the strength computed for an artificial ground state forced to have $\beta$=0, i.e. to stay spherical (blue dashed curve). On the left, the isoscalar effective masses $m_0/m$ of the forces are listed.} \end{figure*}
Additional useful insight may be obtained from the analysis of the sum rules. As mentioned above, the TRK sum rule for the photoabsorption cross section is exhausted in our calculations by about 100$\%$, see details in Appendix C.
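To illustrate the double-folding scheme of eqs. (\ref{39})--(\ref{42}) and the sum-rule check just mentioned, a minimal numerical sketch is given below. The discrete spectrum, the threshold $E_{\rm th}$ and the slope $a$ are toy assumptions rather than actual SRPA output; the script only demonstrates that the folded photoabsorption cross section of eq. (\ref{43}) integrates to the classical TRK value $\approx 60\, NZ/A$ MeV$\,$mb, i.e. to $\sim$100\% exhaustion.
\begin{verbatim}
import numpy as np

# Toy illustration of the folding, eqs. (39)-(42), and the TRK check.
# The discrete spectrum is a stand-in, NOT actual SRPA output for 144Sm;
# the threshold and slope values are also illustrative.
rng = np.random.default_rng(1)
E_nu = np.sort(rng.uniform(8.0, 30.0, 200))          # RPA energies [MeV]
B_nu = np.exp(-0.5 * ((E_nu - 15.0) / 2.5) ** 2)     # toy B(E1) profile

# Normalize so that sum_nu E_nu*B_nu equals the classical TRK value
# (9 hbar^2 / 8 pi m) NZ/A ~ 14.8*NZ/A [e^2 fm^2 MeV], here for 144Sm.
N, Z, A = 82, 62, 144
B_nu *= (14.8 * N * Z / A) / np.sum(E_nu * B_nu)     # [e^2 fm^2]

def xi(E, E0, width):          # Lorentz weight of eq. (40)
    return (width / (2.0 * np.pi)) / ((E - E0) ** 2 + (width / 2.0) ** 2)

# Step 1: strength function on an equidistant grid, small fixed Delta
E = np.arange(0.0, 45.0, 0.02)
S1 = sum(B * xi(E, E0, 0.1) for E0, B in zip(E_nu, B_nu))

# Step 2: refold with the energy-dependent width of eq. (42)
E_th, Delta0, a = 9.0, 0.1, 0.2
Delta = np.where(E > E_th, Delta0 + a * (E - E_th), Delta0)
dE = E[1] - E[0]
S2 = (xi(E[:, None], E[None, :], Delta[None, :]) * S1[None, :]).sum(1) * dE

# Photoabsorption, eq. (43): sigma = (16 pi^3 alpha_e / 9 e^2) E S(E);
# 1 fm^2 = 10 mb. Integrating sigma gives ~60*NZ/A [MeV mb], i.e. ~100%
# TRK exhaustion (Lorentzian tails clipped by the finite grid cost a few
# per cent).
sigma = (16.0 * np.pi**3 / (9.0 * 137.0)) * E * S2 * 10.0
print("TRK exhaustion:", np.sum(sigma) * dE / (60.0 * N * Z / A))
\end{verbatim}
Since the Lorentz weight is normalized to unity, the folding approximately conserves the energy-weighted integral, which is why the exhaustion check is insensitive to the choice of $\Delta(E')$.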
For the purposes of the present study, it is crucial to check the sum rule for the ISGDR \cite{Ha01},
\begin{equation}\label{Har_EWSR} \text{EWSR}_{\text{ISGDR}} = \frac{1}{100} \left[ \frac{3 \hbar^2}{8 \pi m} \:A\:( 11 \left<r^4\right>_0 - \frac{25}{3} \left<r^2\right>_0^2 ) \right] , \end{equation}
obtained for the transition operator (\ref{30}), where $\left<r^4\right>_0=\int\:d^3r \: \rho_0(\vec{r}) \:r^4$. The results are shown in Table \ref{tab1} and confirm that this sum rule is also nicely fulfilled. Thus, exhausting both the isovector TRK and the isoscalar ISGDR E1 sum rules confirms that our configuration space is sufficiently large.
\begin{table} \begin{center} \caption{\label{tab1} The ISGDR (\ref{Har_EWSR}) and RPA EWSR (in $10^3\;\mathrm{e^2\,fm^6\,MeV}$) for the CM(T=0) in $^{144,154}$Sm, computed with different Skyrme forces.}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|r|}{ } & SkI3 & SLy6 & SkM* & SVbas & SkT6 \\ \hline \multicolumn{2}{|r|} {ISGDR} & 24.4 & 24.7 & 25.1 & 24.3 & 24.4 \\ \multicolumn{2}{|r|}{ $^{144}$Sm \qquad RPA } & 25.0 & 25.4 & 24.8 & 23.1 & 24.3 \\ \hline \hline \multicolumn{2}{|r|}{ ISGDR} & 34.1 & 33.8 & 34.1 & 33.2 & 33.2 \\ \multicolumn{2}{|r|}{ $^{154}$Sm \qquad RPA } & 34.8 & 34.0 & 33.5 & 32.6 & 33.1 \\ \hline \end{tabular} \end{center} \end{table}
\begin{figure} \includegraphics[width=8cm]{fig4.eps} \caption{ (color online) Total (black bold), $\mu=0$ (red dashed), and $\mu=1$ (blue dash-dotted) CM(T=0) strength functions in the deformed nucleus $^{154}$Sm, computed with the force SLy6 within the HFB (upper panel) and RPA (bottom panel) approaches.} \end{figure}
\begin{figure*} \includegraphics[width=12cm]{fig5.eps} \caption{ (color online) The same as in Fig. 3 but for the RPA compressional (CM) strength functions. The widths and energy centroids of the low- and high-energy ISGDR branches observed in the ($\alpha, \alpha'$) reaction \protect\cite{Itoh_PRC_03,Uchi04} are indicated in the upper left panel.} \end{figure*}
Altogether, the above results demonstrate the reliability of our approach in the description of static and dynamical E1 properties of spherical and deformed nuclei and thus justify its further application to the TM/CM/RW E1 strength functions.
\subsection{TM, CM, and RW strength functions}
In Figure 3, the E1(T=0) and E1(T=1) strength functions for the CM in spherical $^{144}$Sm and deformed $^{154}$Sm are shown for the chosen set of Skyrme forces. The strengths are computed purely within the HF/HFB approach (without the residual interaction) using the transition operator (\ref{30}). As seen from the figure, the calculations give two broad CM bumps, a smaller one at low energy and a large one at high energy, in accordance with previous theoretical and experimental studies, see the review \cite{Pa07}. In $^{154}$Sm, the strength is larger and more uniform than in $^{144}$Sm, as expected for a heavier and more deformed nucleus.
\begin{figure*}[t] \includegraphics[width=12cm]{fig6.eps} \caption{ (color online) The same as in Fig. 3 but for the RPA toroidal strength function.} \end{figure*}
\begin{figure*} \includegraphics[width=12cm]{fig7.eps} \caption{ (color online) The same as in Fig. 3 but for the RPA vortical (RW) strength function.} \end{figure*}
Three non-trivial effects are visible in both the T=0 and T=1 channels. First, in $^{154}$Sm we see a substantial growth of the CM strength in the region $4\leq E \leq 11$ MeV, where the PDR is supposed to appear.
The comparison with the constrained case $\beta$=0 (no deformation) for the force SLy6 indicates that this effect should be mainly ascribed to the increase in the number of neutrons (from the magic 82 to 92). Indeed, unlike in $^{144}$Sm, particular low-energy s.p. transitions $\nu\nu [1h_{9/2} \leftrightarrow 2f_{7/2}]$ become active in $^{154}$Sm, which may lead to the growth of the strength. However, following Fig. 4, where the deformation splitting of the $\mu$=0 and $\mu$=1 CM(T=0) components is demonstrated, the deformation effect is also important. Just due to the deformation, the low-energy and high-energy parts of the region $4\leq E \leq 11$ MeV are dominated by the $\mu$=0 and $\mu$=1 branches, respectively. Moreover, the comparison of the T=0 and T=1 strengths allows us to state that the PDR region may also be separated into two isospin sectors: a low-energy T=0 one and a high-energy T=1 one. This conjecture finds support in recent experimental analyses \cite{Endres12,Savran11}, which split the low-lying E1 spectra into upper and lower energy sectors. The upper sector is excited solely in $(\gamma,\gamma')$ and, therefore, has a T=1 nature. The lower sector, instead, is composed of levels excited in both $(\gamma,\gamma')$ and ($\alpha,\alpha' \gamma)$ and thus may be related to T=0 excitations.

Second, Fig. 3 shows that in $^{154}$Sm a substantial bump appears at 20-30 MeV. This is particularly pronounced in the T=0 channel. The comparison with the case $\beta$=0 indicates that the bump is caused by the deformation. Fig. 4a) shows that this is just the $\mu$=0 branch of the high-energy CM. We also see in Fig. 4a) a huge deformation splitting ($\sim$ 10 MeV) of the CM(T=0) strength computed with the SLy6 force (similar results are obtained for other Skyrme forces). Thus the deformation indeed has a dramatic effect on the high-energy CM.

Third, Fig. 3 shows that the CM strength in general, and the low-energy and high-energy CM bumps in particular, are noticeably upshifted with decreasing $m_0/m$. This takes place for both isotopes $^{144}$Sm and $^{154}$Sm and in both channels, T=0 and T=1. The effect is straightforwardly explained by the well-known spreading of the s.p. spectra below the Fermi level with decreasing $m_0/m$, see examples in \cite{Nest_PRC_04}.
\begin{figure*}[t] \includegraphics[width=12cm]{fig8.eps} \caption{ (color online) RPA toroidal (upper), RW vortical (middle), and compression (bottom) strength functions calculated in the T=0 (left), T=1 (middle), and electromagnetic (right) channels with the SLy6 Skyrme force in spherical $^{144}$Sm. For the TM and RW, the contributions from the total (black/bold line), convection (blue/thin line), and magnetization (red/dotted line) currents are depicted.} \end{figure*}
\begin{figure*}[t] \includegraphics[width=12cm]{fig9.eps} \caption{ (color online) The same as in Fig. 8 but for deformed $^{154}$Sm.} \end{figure*}
In Figure 5, the RPA CM strength functions are depicted. The residual interaction significantly downshifts (upshifts) the strength in the T=0 (T=1) channel. Nevertheless, all three effects discussed above for the HF/HFB case remain the same. The $(\alpha,\alpha')$ experiments \cite{Itoh_PRC_03,Uchi04} give for the ISGDR in $^{144}$Sm two bumps with the energies and widths E=14.2 MeV, $\Gamma$=4.8 MeV and E=25.0 MeV, $\Gamma$=19.9 MeV. The narrow low-energy and broad high-energy bumps of the ISGDR are commonly treated as the TM \cite{Vret02,Kv03,Mis06} and CM \cite{Vret00,Colo00}, respectively.
Alternatively, both bumps may be treated as merely CM branches with a strong CM/TM coupling at low energy \cite{Rep_PRC_13}. Following Fig. 5, our calculations for $^{144}$Sm roughly reproduce the experimental data \cite{Itoh_PRC_03,Uchi04} for the forces SkT6, SVbas, SkM*, and SLy6. For SkI3, with the low effective mass $m_0/m$=0.58, the agreement for the high-energy CM bump is not as good. It seems that Skyrme forces with a large $m_0/m$ are preferable for the description of the CM. Note that, following Fig. 4, the residual interaction considerably decreases the deformation splitting. As compared to the HFB case, the splitting of the high-energy CM(T=0) bump is reduced from $\sim$ 10 MeV (HFB) to $\sim$ 5 MeV (RPA). Nevertheless, the deformation splitting remains very large. Such a strong splitting is otherwise known only for the isovector GDR.

In Fig. 6, the RPA TM strength in $^{144,154}$Sm is depicted in the T=0 and T=1 channels. Unlike the CM, the TM is concentrated at low energy, while the high-energy strength is rather weak. Following our analysis, this is because of the constructive (destructive) summation of the $\lambda l$=10 and 12 terms of the TM operator (\ref{29}) in the low-energy (high-energy) region. For the CM operator (\ref{30}) (which may be decomposed in a similar manner \cite{Kv11}), we have the opposite result. The TM(T=0) strength in Fig. 6 demonstrates the same three effects discussed above for the CM: i) pumping of the strength into the PDR region due to the impact of the neutron excess and deformation, ii) appearance of an appreciable $\mu$=0 bump due to the deformation splitting of the high-energy strength, and iii) a general upshift of the strength with decreasing $m_0/m$. The manifestation of these effects for the TM is weaker than for the CM and noticeably depends on the force: the effects are significant for SkT6, SVbas, SkM*, and SkI3 but small for SLy6. In the T=1 channel, the effects are much weaker, perhaps because the main player, the high-energy TM, is almost absent.

In Fig. 7, a similar behavior is observed for the vortical RW strength generated by the operator (\ref{31}). The main difference from the TM is that the low-energy and high-energy RW bumps in the T=0 channel are of comparable strength. In any case, both the TM and RW are strong at 5-15 MeV, where the maximal nuclear vorticity is expected \cite{Kv11}.

Fig. 8 compares, for $^{144}$Sm, the CM, TM, and RW strengths in the T=0 and T=1 channels with the strengths in the electromagnetic ('elm') channel relevant for the $(e,e')$ reaction, where only the proton part of the transition operators is active, see (\ref{46}). Fig. 9 does the same for $^{154}$Sm. It is seen that the strengths in the 'elm' channel are very similar to the T=1 ones. Further, Figs. 8 and 9 demonstrate the contributions of the convection $j_{\rm{con}}$ and magnetization $j_{\rm{mag}}$ nuclear currents (\ref{11}) to the strength. In accordance with the previous calculations for $^{208}$Pb \cite{Kv11}, $j_{\rm{con}}$ dominates at T=0 and $j_{\rm{mag}}$ at T=1 (and 'elm').

Finally, we remark that for a detailed study it is desirable to go beyond the RPA and take into account more complex configurations. The low-lying E1 modes, in fact, may couple to two-phonon states describing collective-core excitations coupled to collective surface vibrations, like an octupole-quadrupole excitation. Several RPA extensions are already available.
Among them are the QRPA plus phonon-coupling model \cite{colo94,CoBo01} and the separable RPA plus two-phonon approach \cite{NVG_98,Sever_08,Ars} with Skyrme forces, and the relativistic time-blocking approximation (RTBA) \cite{Li08}, which embraces the anharmonicity by coupling the RPA phonons to particle-hole + phonon states. Alternatively, one may use the quasiparticle-phonon model (QPM) \cite{Sol92,Pon96,LoJoP12} with a Woods-Saxon potential and separable forces, or the equation-of-motion phonon method (EMPM) with realistic interactions \cite{bianco12}. All these RPA extensions have been applied with fair success to describe the low-energy E1 response in nuclei like $^{208}$Pb (see e.g. \cite{Tamii11,Polto,bianco12a}) and have confirmed the important role played by the multi-phonon states in low-lying E1 spectra. However, before performing more detailed investigations including multiphonon states, a first exploration at the mere RPA level is desirable, and this is just what we have presented here.
\section{Conclusion}
The E1 compression, toroidal, and vortical \cite{Ra87} strengths in even Sm isotopes (from spherical $^{144}$Sm to deformed $^{154}$Sm) were investigated within the self-consistent separable random-phase approximation (SRPA) \cite{Ne02,Ne06} with Skyrme forces. A representative set of different Skyrme parametrizations (SkT6 \cite{To84}, SVbas \cite{Kl09}, SkM* \cite{Ba82}, SLy6 \cite{Ch97}, and SkI3 \cite{Re95}) was used. Three reaction channels were inspected: isoscalar (T=0), isovector (T=1), and electromagnetic (with the proton part of the transition operator). Particular attention was paid to the effects of the quadrupole deformation.

As a first step, the accuracy of the scheme was checked with respect to binding energies, photoabsorption cross sections, and E1 energy-weighted sum rules (isovector for the GDR and isoscalar for the ISGDR). It was shown that the configuration space used in our calculations is large enough to guarantee a good reproduction of the sum rules.

We have analyzed in detail the E1 compressional, toroidal, and vortical strength functions for the chosen variety of Skyrme forces. The strengths in the electromagnetic channel (relevant for inelastic electron scattering) are very similar to the T=1 strengths. In accordance with our previous explorations for $^{208}$Pb \cite{Kv11} and Sn isotopes \cite{Sn_PS_13}, we find that the convection nuclear current $j_{\text{con}}$ plays a dominant role in all T=0 strengths, while the magnetization nuclear current $j_{\text{mag}}$ is crucial in the T=1 and electromagnetic channels. All strength distributions are upshifted to higher energies with decreasing effective mass $m_0/m$, which may be related to the corresponding spreading of the single-particle spectra \cite{Nest_PRC_04}. Following our analysis, Skyrme forces with high effective masses, $m_0/m=$ 0.8--1, are more suitable for the description of the experimental data for the ISGDR \cite{Itoh_PRC_03,Uchi04}, whose branches are commonly related to the toroidal and compression modes (TM and CM).

The most interesting results concern the impact of the deformation on the strengths. There is a dramatic general redistribution of the compressional strength with deformation when passing from spherical $^{144}$Sm to deformed $^{154}$Sm. In the quasiparticle (HFB) approximation, the deformation splitting of the high-energy CM amounts to as much as 10 MeV. The residual interaction activated in the RPA somewhat decreases the effect and reduces the splitting to 5 MeV.
However, even such a splitting is huge. As a result, the high-energy CM demonstrates a clear separation into $\mu$=0 and $\mu$=1 branches. The $\mu$=0 branch is considerably downshifted and looks more like an individual resonance. This effect is strong for the CM and weaker for the other strengths. Besides, it is stronger in the T=0 channel than in the T=1 channel. It would be interesting to check our findings in an $(\alpha,\alpha')$ experiment on $^{154}$Sm.

Besides that, the move from $^{144}$Sm to $^{154}$Sm causes a considerable redistribution of the strength in the low-energy (PDR) region. This may be attributed both to the increase in the number of neutrons and to the deformation. Following our calculations, the PDR region covers both CM and TM strengths in both the T=0 and T=1 channels, see also the discussion in \cite{Sn_PS_13,Rep_PRC_13}. Moreover, in accordance with the recent experimental analyses \cite{Endres12,Savran11}, the PDR region may be separated into T=0 and T=1 sectors. However, a proper exploration of this point requires a more involved theoretical framework taking into account the coupling with complex configurations.
\section*{Acknowledgments}
The work was partly supported by the GSI-F+E-2010-12, Heisenberg-Landau (Germany - BLTP JINR), and Votruba - Blokhintsev (Czech Republic - BLTP JINR) grants. W.K. and P.-G.R. are grateful for the BMBF support under contract 05P12ODDUE and for support from the F+E program of the Gesellschaft f\"ur Schwerionenforschung (GSI). The support of the research plan MSM 0021620859 (Ministry of Education of the Czech Republic) and of the Czech Science Foundation project No. P203-13-07117S is also appreciated.
\section*{Introduction}

The new human coronavirus SARS-CoV-2 emerged in Wuhan Province, China in December 2019 \citep{chen2020,li2020}, reaching 10,000 confirmed cases and 200 deaths due to the disease (known as COVID-19) by the end of January 2020. Although travel from China was halted by late-January, dozens of known introductions of the virus to North America occurred prior to that \citep{holshue2020,kucharski2020}, and dozens more known cases were imported to the US and Canada during February from Europe, the Middle East, and elsewhere. Community transmission of unknown origin was first detected in California on February 26, followed quickly by Washington State \citep{chu_englund2020}, Illinois and Florida, but only on March 7 in New York City. Retrospective genomic analyses have demonstrated that case-tracing and self-quarantine efforts were effective in preventing most known imported cases from propagating \citep{ladner2020, gonzalezreiche2020, worobey2020}, but that the eventual outbreaks on the West Coast \citep{worobey2020, chu_englund2020,deng2020} and New York \citep{gonzalezreiche2020} were likely seeded by unknown imports in mid-February. By early March, cross-country spread was primarily due to interstate travel rather than international imports \citep{fauver_grubaugh2020}. In mid-March 2020, nearly every region of the country saw a period of uniform exponential growth in daily confirmed cases --- signifying robust community transmission --- followed by a plateau in late March, likely due to social mobility reduction. The same qualitative dynamics were seen in COVID-19 mortality counts, delayed by approximately one week. Although the qualitative picture was similar across locales, the quantitative aspects of localized epidemics --- including initial rate of growth, infections/deaths per capita, duration of plateau, and rapidity of resolution --- were quite diverse across the country. Understanding the origins of this diversity will be key to predicting how the relaxation of social distancing, annual changes in weather, and static local demographic/population characteristics will affect the resolution of the first wave of cases, and will drive coming waves, prior to the availability of a vaccine. The exponential growth rate of a spreading epidemic is dependent on the biological features of the virus-host ecosystem --- including the incubation time, susceptibility of target cells to infection, and persistence of the virus particle outside of the host --- but, through its dependence on the transmission rate between hosts, it is also a function of external factors such as population density, air humidity, and the fraction of hosts that are susceptible. Initial studies have shown that SARS-CoV-2 has a larger rate of exponential growth (or, alternatively, a lower doubling time of cases\footnote{The doubling time is $\ln 2$ divided by the exponential growth rate.}) than many other circulating human viruses \citep{park2020}. For comparison, the pandemic influenza of 2009, which also met a largely immunologically-naive population, had a doubling time of 5--\unit{10}{d} \citep{yu2012, storms2013}, while that of SARS-CoV-2 has been estimated at 2--\unit{5}{d} \citep{sanche2020, oliveiros2020} (growth rates of $\unit{\sim0.10}{{\rm d}^{-1}}$ vs.\ $\unit{\sim0.25}{{\rm d}^{-1}}$). It is not yet understood which factors contribute to this high level of infectiousness.
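For concreteness, the footnote's relation, $t_2 = \ln 2/\lambda$, maps the quoted growth rates directly onto the quoted doubling-time ranges; a minimal Python sketch, using only the representative values from the text:
\begin{verbatim}
import numpy as np

# Doubling time t_2 = ln(2) / lambda (see footnote above)
for virus, rate in [("2009 pandemic influenza", 0.10), ("SARS-CoV-2", 0.25)]:
    print(f"{virus}: lambda = {rate}/d -> t_2 = {np.log(2)/rate:.1f} d")
# -> ~6.9 d and ~2.8 d, consistent with the 5-10 d and 2-5 d ranges above
\end{verbatim}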
While the dynamics of an epidemic (e.g., cases over time) must be described by numerical solutions to nonlinear models, the exponential growth rate, $\lambda$, usually has a simpler dependence on external factors. Unlike case or mortality incidence numbers, the growth rate does not scale with population size. It is a directly measurable quantity from the available incidence data, unlike, e.g., the reproduction number, which requires knowledge of the serial interval distribution \citep{wallinga_lipsitch2007, roberts_heesterbeek2007, dushoff_park2020}, something that is difficult to determine empirically \citep{champredon2015, nishiura2010}. Yet, the growth rate contains the same threshold as the reproduction number ($\lambda = 0$ vs.\ $R_0 = 1$), between a spreading epidemic (or an unstable uninfected equilibrium) and a contracting one (or an equilibrium that is resistant to flare-ups). Thus, the growth rate is an informative, direct measure of that space of underlying parameters.

\begin{figure*} \centering \includegraphics[width=\linewidth]{metroregions.pdf} \caption{Mobility and COVID-19 incidence data examples, and the results of linear regression to extracted initial exponential growth rates, $\lambda_{\rm exp}$, in the top 100 metropolitan regions. (A) Three example cities with different initial growth rates. Data for Google mobility (blue points), daily reported cases (black points), and weather (red and blue points, bottom) are shown with a logistic fit to cases (green line). Data at or below the detection limit were excluded from fits (dates marked by red points). Thin grey bars at the base of the cases graphs indicate the region considered ``flat'', with the right end indicating the last point used for logistic fitting; averaging over ``flat'' values generates the thick grey bars to guide the eye. [See Supp.\ Mat. for additional information and for complete data sets for all metropolitan regions.] (B) Weighted linear regression results in fit to $\lambda_{\rm exp}$ for all metropolitan regions. (C) Effect of each variable on the growth rate (i.e., $\Delta \lambda$ values) for those regions with well-estimated case and death rates; white/yellow indicates a negative effect on $\lambda$, red indicates positive.} \label{fig:metroregions} \vspace{10pt} \includegraphics[width=0.49\linewidth]{D7_4US.pdf} \includegraphics[width=0.49\linewidth]{I7_4US.pdf} \caption{COVID-19 mortality incidence (7-day rolling average, left) and exponential growth rate ($\lambda_{14}$, determined by regression of the logged mortality data over 14-day windows, right) for the four US counties with more than 2400 reported COVID-19 deaths (as of 8th June, 2020). }\label{US4_ID7} \end{figure*}

In this work, we leverage the enormous data set of epidemics across the United States to evaluate the impact of demographics, population density and structure, weather, and non-pharmaceutical interventions (i.e., mobility restrictions) on the exponential rate of growth of COVID-19. Following a brief analysis of the initial spread in metropolitan regions, we expand the meaning of the exponential rate to encompass all aspects of a local epidemic --- including growth, plateau and decline --- and use it as a tracer of the dynamics, where its time dependence and geographic variation are dictated solely by these external variables and per capita cumulative mortality.
Finally, we use the results of that linear analysis to calibrate a new nonlinear model --- a renewal equation that utilizes the excursion probability of a random walk to determine the incubation period --- from which we develop local predictions about the impact of social mobility relaxation, the level of herd immunity, and the potential of rebound epidemics in the Summer and Fall. \section*{Results} \subsection*{Initial growth of cases in metropolitan regions is exponential with rate depending on mobility, population, demographics, and humidity} As an initial look at COVID-19's arrival in the United States, we considered the $\sim$100 most populous metropolitan regions --- using maps of population density to select compact sets of counties representing each region (see Supplementary Material) --- and estimated the initial exponential growth rate of cases in each region. We performed a linear regression to a large set of demographic (sex, age, race) and population variables, along with weather and social mobility \citep{google2020} preceding the period of growth (Figure \ref{fig:metroregions}). In the best fit model ($R^2=0.75$, ${\rm BIC} = -183$), the baseline value of the initial growth rate was $\lambda = \unit{0.21}{{\rm d}^{-1}}$ (doubling time of \unit{3.3}{d}), with average mobility two weeks prior to growth being the most significant factor (Figure \ref{fig:metroregions}B). Of all variables considered, only four others were significant: population density (including both {\em population-weighted density} (PWD) --- also called the ``lived population density'' because it estimates the density for the average individual \citep{craig1985}--- and {\em population sparsity}, $\gamma$, a measure of the difference between PWD and standard population density, see Methods), $p<0.001$ and $p=0.006$; specific humidity two weeks prior to growth, $p=0.001$; and median age, $p=0.04$. While mobility reduction certainly caused the ``flattening'' of case incidence in every region by late-March, our results show (Figure \ref{fig:metroregions}C) that it likely played a key role in reducing the {\em rate} of growth in Boston, Washington, DC, and Los Angeles, but was too late, with respect to the sudden appearance of the epidemic, to have such an effect in, e.g., Detroit and Cleveland. In the most extreme example, Grand Rapids, MI, seems to have benefited from a late arriving epidemic, such that its growth (with a long doubling time of \unit{7}{d}) occurred almost entirely post-lockdown. Specific humidity, a measure of absolute humidity, has been previously shown to be inversely correlated with respiratory virus transmission \citep{lowen2007, shaman2009, shaman2011, kudo2019}. Here, we found it to be a significant factor, but weaker than population density and mobility (Figure \ref{fig:metroregions}C). It could be argued that Dallas, Los Angeles, and Atlanta saw a small benefit from higher humidity at the time of the epidemic's arrival, while the dry late-winter conditions in the Midwest and Northeast were more favorable to rapid transmission of SARS-CoV-2. \subsection*{\label{sec:I7}Exponential growth rate of mortality as a dynamical, pan-epidemic, measure} In the remainder of this report, we consider the exponential rate of growth (or decay) in local confirmed deaths due to COVID-19. 
The statistics of mortality are poorer than those of reported cases, but mortality is much less dependent on unknown factors such as the criteria for testing, local policies, test kit availability, and asymptomatic individuals \citep{pearce2020}. Although there is clear evidence that a large fraction of COVID-19 mortality is missed in the official counts \citep[e.g.,][]{leon2020, Modi2020.04.15.20067074}, mortality is likely less susceptible to rapid changes in reporting, and, as long as the number of reported deaths is a monotonic function of the actual number of deaths (e.g., a constant fraction, say $50\%$), the sign of the exponential growth rate will be unchanged, which is the crucial measure of success in pandemic management. To minimize the impact of weekly changes, such as weekend reporting lulls, data dumps, and mobility changes from working days to weekends, we calculate the regression of $\ln\left[{\rm Mortality}\right]$ over a 14-day interval, and assign this value, $\lambda_{14}(t)$, and its standard error to the last day of the interval. Since only the data for distinct 2-week periods are independent, we multiply the regression errors by $\sqrt{14}$ to account for correlations between the daily estimates. Together with a ``rolling average'' of the mortality, this time-dependent measure of the exponential growth rate provides, at any day, the most up-to-date information on the progression of the epidemic (Figure \ref{US4_ID7}). In the following section, we consider a linear fit to $\lambda_{14}$, to determine the statistically-significant external (non-biological) factors influencing the dynamics of local exponential growth and decline of the epidemic. We then develop a first-principles model for $\lambda_{14}$ that allows for extrapolation of these dependencies to predict the impact of future changes in social mobility and climate.
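As a concrete sketch of this estimator (our own illustrative implementation, not the paper's pipeline; it assumes a strictly positive, 7-day-smoothed daily mortality array):
\begin{verbatim}
import numpy as np

def lambda_14(mortality, t_end):
    """14-day exponential growth rate assigned to day t_end.

    mortality: strictly positive daily deaths (e.g., 7-day average).
    Returns (rate, std_err); the error is inflated by sqrt(14) since
    only estimates from disjoint 2-week windows are independent.
    """
    t = np.arange(t_end - 13, t_end + 1)     # 14-day interval
    y = np.log(mortality[t])                 # regression of ln[Mortality]
    (slope, _), cov = np.polyfit(t, y, 1, cov=True)
    return slope, np.sqrt(cov[0, 0]) * np.sqrt(14)
\end{verbatim}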
\subsection*{Epidemic mortality data explained by mobility, population, demographics, depletion of susceptible population and weather, throughout the first wave of COVID-19} \begin{table*} \begin{tabular}{l|lll} \hline \text{Joint Fit to All potential drivers} & \text{Estimate} & \text{Std Err} & \text{t-Statistic} \\ \hline Baseline Mortality Growth Rate $\lambda_{14}$ & 0.195 & 0.011 & 17.2 \\ \text{COVID Death Fraction} & -59.4 & 6.1 &-9.7 \\ \text{Social Mobility (2wks prior)} & 0.00238 & 0.00028 & 8.5 \\ $ \ln$(Population Weighted Density)-8.24 & 0.0412 & 0.0058 & 7.1 \\ \text{Social Mobility (4wks prior)}& 0.00122 & 0.00019 & 6.6\\ Population Sparsity-0.188 & -0.249 & 0.063 & -3.9 \\ $\log$(\text{Annual Death})-4.04 & -0.0301 & 0.0091 & -3.3 \\ \text{Median Age}-37.47 & 0.0038 & 0.0012 & 3.0 \\ \text{People per Household}-2.76 & 0.023 & 0.014 & 1.6 \\ \text{Specific Humidity (2wks prior)}-5.92 g/kg & -0.0033 & 0.0031 & -1.1 \\ \text{Temperature (2wks prior)}-13.11 C & -0.00083 & 0.0013 & -0.6 \\ \text{Temperature (4wks prior)}-11.60 C & -0.00060 & 0.0014 & -0.4 \\ \text{Specific Humidity (4wks prior)}-5.53 g/kg & 0.00058 & 0.0032 & 0.2 \\ \end{tabular} \begin{tabular}{l|lll} \hline \text{Joint Fit to statistically significant drivers} & \text{Estimate} & \text{Std Err} & \text{t-Statistic} \\ \hline Baseline Mortality Growth Rate $\lambda_{14}$ & 0.198 & 0.011 & 18.7\\ \text{COVID Death Fraction} & -56.7 & 5.9 & -9.7 \\ \text{Social Mobility (2wks prior)}& 0.00236 & 0.00027 & 8.8 \\ \text{Social Mobility (4wks prior)} & 0.00131 & 0.00017 & 7.6 \\ $ \ln$(Population Weighted Density)-8.24 & 0.0413 & 0.0058 & 7.2 \\ Population Sparsity-0.188 & -0.260 & 0.061 & -4.3 \\ \text{Specific Humidity (2wks prior)}-5.92 g/kg & -0.0047 & 0.0011 & -4.1 \\ $\log$(\text{Annual Death})-4.04 & -0.0324 & 0.0088 & -3.7 \\ \text{Median Age}-37.48 & 0.0040 & 0.0012 & 3.3 \\ \end{tabular} \caption{Joint Linear Fit to $\lambda_{14}(t)$ data (Top). Any dependence with t-Statistic below $2.5\sigma$ is considered not statistically significant. Joint Linear Fit to $\lambda_{14}(t)$, including only statistically significant dependencies (Bottom). For all coefficients, the population-weighted baseline is subtracted from the linear variable. 
\label{tab:linear}}
\vspace{12pt}
\begin{tabular}{l|l|l}
\hline
Parameter & Best-Fit $\pm$ Std Err & Description\\
\hline
$\tau = \tau_0 ({\rm Median~Age}/26.2~{\rm years})^{C_A}$ & & Time from exposure to contagiousness\\
$\tau_0$ (day) & $160 \pm 58$ & Normalization \\
$C_{A}$ & $-2.26 \pm 0.95$ & Age dependence \\
\hline
$d^{-1}$ (day) & $17.6 \pm 2.2$ & Time from exposure to quarantine/recovery \\
\hline
$C_D$ & $3460 \pm 610$ & Conversion constant, $f_D \rightarrow f_I$ \\
\hline
$\beta$: Equation (\ref{eq:beta}) & & Rate constant for infection\\
$\ln\left[k\beta_0\tau_0^{-2}({\rm m}^2/{\rm day}^3)\right]$ & $0.37 \pm 1.25$ & Normalization \\
$100 C_{\cal M}$ & $8.08 \pm 1.76$ & Dependence on social mobility \\
$C_{\cal H}$ & $-0.154 \pm 0.055$ & Dependence on specific humidity \\
$C_\gamma$ & $-5.52 \pm 2.35$ & Dependence on population sparsity \\
$C_{A_D}$ & $-1.05 \pm 0.25$ & Dependence on total annual deaths \\
\end{tabular}
\caption{Best-fit parameters for the nonlinear model, using the parametrization defined in the text.}
\label{tab:nonlinear}
\end{table*}

We considered a spatio-temporal dataset containing 3933 estimates of the exponential growth measure, $\lambda_{14}$, covering the three-month period of 8 March 2020 -- 8 June 2020 in the 187 US counties for which information on COVID-19 mortality and all potential driving factors, below, were available (the main barrier was social mobility information, which limited us to a set of counties that included 69\% of US mortality). A joint, simultaneous, linear fit of these data to 12 potential driving factors (Table \ref{tab:linear}) revealed only 7 factors with {\it independent} statistical significance. Re-fitting only to these variables returned the optimal fit for the considered factors (${\rm BIC}=-5951$; $R^2 = 0.674$). We found, not surprisingly, that higher population density, median age, and social mobility correlated with positive exponential growth, while population sparsity, specific humidity, and susceptible depletion correlated with exponentially declining mortality. Notably, the coefficients for each of these quantities were within the 95\% confidence intervals of those found in the analysis of metropolitan regions (and vice versa). Possibly the most surprising dependency was the negative correlation, at $\simeq -3.7\sigma$, between $\lambda_{14}$ and the {\it total} number of annual deaths in the county. In fact, this correlation was marginally more significant than a correlation with $\log$(population), which was $-3.3\sigma$. One possible interpretation of this negative correlation is that the number of annual deaths is a proxy for the number of potential outbreak clusters. The larger the number of clusters, the longer it might take for the epidemic to spread across their network, which would (at least initially) slow down the onset of the epidemic.

\subsection*{\label{sec:nonlinear} Nonlinear model}

To obtain more predictive results, we developed a mechanistic nonlinear model for infection (see Supplementary Material for details). We followed the standard analogy to chemical reaction kinetics (infection rate is proportional to the product of susceptible and infectious densities), but defined the generation interval (approximately the incubation period) through the excursion probability in a 1D random walk, modulated by an exponential rate of exit from the infected class.
This approach resulted in a {\em renewal equation} \citep{heesterbeek_dietz1996, champredon2015, champredon2018}, with a distribution of generation intervals that is more realistic than that of standard SIR/SEIR models, and which could be solved formally (in terms of the Lambert W function) for the growth rate in terms of the infection parameters:
%
\begin{equation}
\label{eq:nonlinmodel}
\lambda = \frac{2}{\tau} \left[ {\rm W} \left(\sqrt{\frac{\beta \mathcal{S} \tau}{2}}\right) \right]^2 - d
\end{equation}
%
The model has four key dependencies, which we describe here, along with our assumptions about their own dependence on population, demographic, and climate variables. As mortality (on which our estimate of growth rate is based) lags infection (on which the renewal equation is based), we imposed a fixed time shift of $\Delta t$ for time-dependent variables:
%
\begin{enumerate}
\item We assumed that the susceptible population, which feeds new infections and drives the growth, is actually a sub-population of the community, consisting of highly-mobile and frequently interacting individuals, and that most deaths occurred in a separate sub-population of largely immobile, non-interacting individuals. Under these assumptions, we found (see Supp.\ Mat.) that the susceptible density, $S(t)$, could be estimated from the cumulative per capita death fraction, $f_D$, as:
%
\begin{equation*}
S(t-\Delta t) = S(0) \exp \left[ - C_D \, f_D(t) \right] \quad \left(f_D= D_{\rm tot}/N\right)\,,
\end{equation*}
%
where $D_{\rm tot}$ is the cumulative mortality count, $N$ is the initial population, and the initial density is $S(0)=k\,{\rm PWD}$.
\item We assumed that the logarithm of the ``rate constant'' for infection, $\beta$, depended linearly on social mobility, $m$, specific humidity, $h$, population sparsity, $\gamma$, and total annual deaths, $A_D$, as:
%
\begin{equation}
\begin{split}
&\ln\left[\beta\left(\mathcal{M}, \mathcal{H}, \gamma, A_D \right)\right] = \ln\left[\beta_0 \right] \\
&\quad + C_{\mathcal{M}} \left(\mathcal{M} - \bar{\mathcal{M}}\right) + C_{\mathcal{H}} \left(\mathcal{H} - \bar{\mathcal{H}} \right) \\
&\quad + C_{\gamma} \left(\gamma - \bar{\gamma}\right) + C_{A_D} \left(A_D - \bar{A}_D\right)
\end{split}
\label{eq:beta}
\end{equation}
%
where a barred variable represents the (population-weighted) average value over all US counties, and where the mobility and humidity factors were time-shifted with respect to the growth rate estimation window: $\mathcal{M} = m\left(t - \Delta t\right)$ and $\mathcal{H} = h\left(t - \Delta t\right)$.
\item The characteristic time scale to infectiousness, $\tau$, is intrinsic to the biology and therefore we assumed it would depend only on the median age of the population, $A$. We assumed a power-law dependence:
%
\begin{equation}
\tau = \tau_0 \left(\frac{A}{A_0}\right)^{C_A}
\end{equation}
%
where we fixed the pivot age, $A_0$, to minimize the error in $\tau_0$.
\item The exponential rate of exit from the infected class, $d$, was assumed constant, since we found no significant dependence for it on other factors in our analysis of US mortality. From the properties of the Lambert W function, when the infection rate or susceptible density approaches zero --- through mobility restrictions or susceptible depletion --- the growth rate will tend to $\lambda \approx - d$, its minimum value.
\end{enumerate}
%
\begin{figure*} \centering \includegraphics[width=0.48\linewidth]{I7_NYC.pdf} \includegraphics[width=0.48\linewidth]{I7_Cook.pdf} \includegraphics[width=0.48\linewidth]{I7_Wayne.pdf} \includegraphics[width=0.48\linewidth]{I7_Nassau.pdf} \includegraphics[width=0.48\linewidth]{I7_LA.pdf} \includegraphics[width=0.48\linewidth]{I7_Suffolk.pdf} \caption{Nonlinear model prediction (Eqn.\ \ref{eq:nonlinmodel}, red) for the actual (blue) mortality growth rate, in the six counties with the highest reported deaths. Bands show the 1-$\sigma$ confidence region for both the model mean and the $\lambda_{14}$ value.} \label{fig:I7_postdiction} \end{figure*}

With these parameterizations, we performed a nonlinear regression to $\lambda_{14}(t)$ using the entire set of US county mortality incidence time series (Table \ref{tab:nonlinear}). Compared to the linear model of the previous section (Table \ref{tab:linear}b), the fit improved by 7.6$\sigma$ (${\rm BIC} = -6008$; $R^2=0.724$), despite both having 9 free parameters. Through the estimated parameter values, the model makes predictions for an individual's probability of becoming infectious, and the distributions of incubation period and generation interval, all as a function of the median age of the population (see Supplementary Material).

\begin{figure*} \centering \includegraphics[width=0.45\linewidth]{King_predict.pdf} \includegraphics[width=0.45\linewidth]{Suffolk_predict.pdf} \includegraphics[width=0.45\linewidth]{LA_predict.pdf} \includegraphics[width=0.45\linewidth]{Miami_predict.pdf} \caption{\label{fig:Mortality_prediction} Forecasts of COVID-19 mortality (orange) --- based on the best-fit nonlinear model to data prior to May 16th, 2020 --- versus actual reported mortality (blue) for 4 large US counties. The 68\% confidence ranges (orange regions) were determined from 100 random 60-day-long simulations (see Supplementary Methods). The vertical red lines indicate June 21st. Forecasts for most US counties can be found at our online dashboard: \href{https://wolfr.am/COVID19Dash}{https://wolfr.am/COVID19Dash} } \end{figure*}

The model fit the mortality growth rate measurements very well for counties with high mortality (Figure \ref{fig:I7_postdiction}). More quantitatively, the scatter of measured growth rates around the best-fit model predictions was (on average) only 13\% larger than the measurement errors, independent of the population of the county \footnote{See Supplementary Material for a more detailed discussion of error diagnostics.}. Importantly, when the model was calibrated on only a subset of the data --- e.g., all but the final month for which mobility data is available --- its 68\% confidence prediction for the remaining data was accurate (Figure \ref{fig:Mortality_prediction}) given the known mobility and weather data for that final month. This suggests that the model, once calibrated on the first wave of COVID-19 infections, can make reliable predictions about the ongoing epidemic, and future waves, in the United States.
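As a numerical sanity check of Eq.\ (\ref{eq:nonlinmodel}), one can evaluate the Lambert-W solution with SciPy and verify that it satisfies the implicit relation $(\lambda+d)\,e^{\sqrt{2(\lambda+d)\tau}}=\beta\mathcal{S}$ derived in the Supplementary Material. A minimal sketch; the values of $\beta\mathcal{S}$ and $\tau$ below are illustrative, not fitted:
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

beta_S = 0.5      # beta * S, in 1/day (illustrative value)
tau    = 8.0      # time scale to infectiousness, in days (illustrative)
d      = 1/17.6   # exit rate from the infected class (Table 2)

lam = (2/tau) * lambertw(np.sqrt(beta_S*tau/2)).real**2 - d
check = (lam + d) * np.exp(np.sqrt(2*(lam + d)*tau))
print(lam, check)   # check reproduces beta_S = 0.5
\end{verbatim}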
\subsection*{Predictions for relaxed mobility restrictions, the onset of summer, and the potential second wave}

\begin{figure*} \centering \includegraphics[width=0.45\linewidth]{I7_PWPD.pdf} \includegraphics[width=0.45\linewidth]{I7_gamma.pdf} \includegraphics[width=0.45\linewidth]{I7_Age.pdf} \includegraphics[width=0.45\linewidth]{I7_Mobility.pdf} \includegraphics[width=0.45\linewidth]{I7_Annual_Death.pdf} \includegraphics[width=0.45\linewidth]{I7_Humidity.pdf} \includegraphics[width=0.45\linewidth]{Contour_gamma_PWPD.pdf} \includegraphics[width=0.45\linewidth]{Contour_Herd_PWPD.pdf} \caption{Dependence and 68\% confidence bands of the mortality growth rate --- as specified by the nonlinear model (Eqn.\ \ref{eq:nonlinmodel}) --- on various parameters for an ``average county.'' All parameters not being varied are fixed at their population-weighted mean values (as of 8th June, 2020): log$_{10}$[PWD/km$^{-2}$] = 3.58, population sparsity = 0.188, COVID death fraction = $5.1 \times 10^{-4}$ (510 deaths/million population), Median Age = 37.5 yr, log(Annual Death) = 4.04, social mobility $\bar{\cal M}= -44\%$, and specific humidity $\bar{\cal H}=5.7$ g/kg.} \label{fig:I7vEverything} \end{figure*}

\begin{figure*} \includegraphics[width=0.49\linewidth]{I7_Herd_Ave.pdf} \includegraphics[width=0.49\linewidth]{I7_Herd_NYC.pdf} \includegraphics[width=0.49\linewidth]{Herd_Portrait.pdf} \includegraphics[width=0.49\linewidth]{NYC_Portrait.pdf} \caption{Nonlinear model prediction of the exponential growth rate, $\lambda_{14}$, vs.\ cumulative COVID-19 mortality (top panels), assuming baseline social mobility, $\bar{\cal M}=0$, in the ``average US county'' (see caption of Figure \ref{fig:I7vEverything}) on the left, and New York City, on the right. The curves show 68\% predictions for the nonlinear model (Table \ref{tab:nonlinear}), while the points with error bars are linear fits to all the data within bins of death fraction. The threshold for ``herd immunity'' ($\lambda_{14} = 0$) is reached at a mortality of approximately 1300 (1700) per million for an average county (NYC), but this would be higher in counties with more unfavorable values of the drivers. The eventual mortality burden of the average county will be determined by its path through a ``phase space'' of Daily vs.\ Total Mortality (bottom panel). An epidemic without intervention (red curves, with the particular trajectory starting at zero death shown in bold) will pass the threshold for herd immunity (1300 deaths per million; note that at zero daily deaths this is a fixed point) and continue to three times that value due to ongoing infections. A modest $33\%$ reduction in social mobility (blue curves), however, leads to mortality at ``only'' the herd immunity level (the green disk). The black curve on the bottom right panel shows the 7-day rolling average of reported mortality for NYC, which appears to have ``overshot'' the ``herd immunity threshold''. } \label{fig:Herd} \end{figure*}

\begin{figure*} \centering \includegraphics[width=0.4\linewidth]{King_Portrait.pdf} \includegraphics[width=0.4\linewidth]{Suffolk_Portrait.pdf} \includegraphics[width=0.4\linewidth]{LA_Portrait.pdf} \includegraphics[width=0.4\linewidth]{Miami_Portrait.pdf} \caption{\label{fig:Portraits} Epidemic phase portraits for the same four counties as in Figure (\ref{fig:Mortality_prediction}), similar to the phase portrait in Figure (\ref{fig:Herd}). The blue curves are for the county's average Social Mobility during Feb.
15 through June 12, 2020, while red curves/arrows are at normal (pre-COVID) social mobility. The thick black curve is the 7-day rolling average of the official reported mortality, while the green disk shows the threshold for ``herd immunity''.} \end{figure*}

\begin{figure*} \centering \includegraphics[width=\linewidth]{US_Immunity_Map.pdf} \includegraphics[width=\linewidth]{US_county_significance_map.pdf} \caption{Top: United States counties that have passed (blue), or are within (cyan), the threshold for ``herd immunity'' at the 1-$\sigma$ level, as predicted by the nonlinear model. Bottom: Predicted confidence in the growth of a COVID-19 outbreak (defined as the predicted daily growth rate divided by its uncertainty), for all counties should they return today to their baseline (pre-COVID) social mobility. Counties that have approached the threshold of herd immunity have lower growth rates due to the depletion of susceptible individuals.} \label{fig:USMap} \end{figure*}

Possibly the most pressing question for the management of COVID-19 in a particular community is the combination of circumstances at which the virus fails to propagate, i.e., at which the growth rate, estimated here by $\lambda_{14}$, becomes negative (or, equivalently, the reproduction number $R_t$ falls below one). In the absence of mobility restrictions this is informally called the threshold for ``herd immunity,'' which is usually achieved by mass vaccination \citep[e.g.,][]{Herd1,Herd2}. Without a vaccine, however, ongoing infections and death will deplete the susceptible population and thus decrease transmission. Varying the parameters of the nonlinear model individually about their Spring 2020 population-weighted mean values (Figure \ref{fig:I7vEverything}) suggests that this threshold will be very much dependent on the specific demographics, geography, and weather in the community, but it also shows that reductions in social mobility can significantly reduce transmission prior to the onset of herd immunity. To determine the threshold for herd immunity in the absence or presence of social mobility restrictions, we considered the ``average US county'' (i.e., a region with population-weighted average characteristics), and examined the dependence of the growth rate on the cumulative mortality. We found that in the absence of social distancing, a COVID-19 mortality rate of 0.13\% (or 1300 per million population) would bring the growth rate to zero. However, changing the population density of this average county shows that the threshold can vary widely (Figure \ref{fig:I7vEverything}).

\begin{figure} \centering \includegraphics[width=\linewidth]{Death_Histogram.pdf} \caption{Histogram of reported COVID-19 deaths per million for all US counties, showing the proportion that have passed the ``herd immunity'' threshold, according to the fit of the nonlinear model.} \label{fig:herd_counties_death} \end{figure}

Examination of specific counties showed that the mortality level corresponding to herd immunity varies from 10 to 2500 per million people (Figure \ref{fig:herd_counties_death}). At the current levels of reported COVID-19 mortality, we found that, as of June 22nd, 2020, only $128\pm 55$ out of 3142 counties (home to $9.4 \pm 2.1$\% of the US population) have surpassed this threshold at the 68\% confidence level (Figure \ref{fig:USMap}).
Notably, New York City, with the highest reported per capita mortality (2700 per million), has achieved mobility-independent herd immunity at the 10$\sigma$ confidence level, according to the model (Figure \ref{fig:Herd}). A few other large-population counties in New England, New Jersey, Michigan, Louisiana, Georgia and Mississippi that have been hard hit by the pandemic also appear to be at or close to the herd immunity threshold. This is not the case for most of the United States, however (Figure \ref{fig:USMap}). Nationwide, we predict that COVID-19 herd immunity would only occur after a death toll of $340,000 \pm 61,000$, or $1058 \pm 190$ per million population. We found that the approach to the herd immunity threshold is not direct, and that social mobility restrictions and other non-pharmaceutical interventions must be applied carefully to avoid excess mortality beyond the threshold. In the absence of social distancing interventions, a typical epidemic will ``overshoot'' the herd immunity limit \citep[e.g.,][]{handel2007best,fung2012minimize} by up to 300\%, due to ongoing infections (Figure \ref{fig:Herd}). At the other extreme, a very strict ``shelter in place'' order would simply delay the onset of the epidemic; but if lifted (see Figures \ref{fig:Herd} and \ref{fig:Portraits}), the epidemic would again overshoot the herd immunity threshold. A modest level of social distancing, however --- e.g., a 33\% mobility reduction for the average US county --- could lead to fatalities ``only'' at the level of herd immunity. Naturally, communities with higher population density or other risk factors (see Figure \ref{fig:I7vEverything}) would require more extreme measures to achieve the same. Avoiding the level of mortality required for herd immunity will require long-lasting and effective non-pharmaceutical options, until a vaccine is available. The universal use of face masks has been suggested for reducing the transmission of SARS-CoV-2, with a recent meta-analysis \citep{chu2020physical} suggesting that masks can suppress the rate of infection by a factor of 0.07--0.34 (95\% CI), or equivalently $\Delta \ln$(transmission) $= -1.9 \pm 0.4$ (at 1$\sigma$). Using our model's dependence of the infection rate constant on mobility, this would correspond to an equivalent social mobility reduction of $\Delta \bar{\cal M}_{\rm mask} \simeq -24\% \pm 9\%$. Warmer, more humid weather has also been considered a factor that could slow the epidemic \citep[e.g.,][]{wang2020high,2020arXiv200312417N,Xu2020.05.05.20092627}. Annual changes in specific humidity are $\Delta \bar{\cal H} \simeq \unit{6}{\gram/\kilo\gram}$ (Figure \ref{fig:mobility}b in Supplementary Material), which can be translated in our model to an effective mobility decrease of $\Delta \bar{\cal M}_{\rm summer} \simeq -12\% \pm 5\%$. Combining these two effects could, in this simple analysis, yield a modestly effective defense for the summer months: $\Delta \bar{\cal M}_{\rm mask+summer} \simeq -37\% \pm 10\%$. Therefore, this could be a reasonable strategy for most communities to manage the COVID-19 epidemic at the aforementioned $-33\%$ level of mobility needed to arrive at herd immunity with the least excess death. More stringent measures would be required to keep mortality below that level. Of course, this general prescription would need to be fine-tuned for the specific conditions of each community.
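The mask-to-mobility conversion above is simple propagation arithmetic on the fitted mobility coefficient of $\ln\beta$ (Table \ref{tab:nonlinear}); a sketch assuming uncorrelated uncertainties (the $\pm 9\%$ quoted above presumably also folds in parameter correlations):
\begin{verbatim}
import numpy as np

C_M, sig_CM = 0.0808, 0.0176  # d ln(beta)/d Mobility per %, from Table 2
dlnT, sig_T = -1.9, 0.4       # mask effect on ln(transmission), 1-sigma

dM = dlnT / C_M               # equivalent mobility change, in percent
sig_dM = abs(dM) * np.hypot(sig_T/dlnT, sig_CM/C_M)
print(f"{dM:.0f}% +/- {sig_dM:.0f}%")   # -> -24% +/- 7%
\end{verbatim}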
\section*{Discussion and Conclusions}

By simultaneously considering the time series of mortality incidence in every US county, and controlling for the time-varying effects of local social distancing interventions, we demonstrated for the first time a dependence of the epidemic growth of COVID-19 on population density, as well as on other climate, demographic, and population factors. We further constructed a realistic, but simple, first-principles model of infection transmission that allowed us to extend our heuristic linear model of the dataset into a predictive nonlinear model, which provided a better fit to the data (with the same number of parameters), and which also accurately predicted late-time data after training on only an earlier portion of the data set. This suggests that the model is well-calibrated to predict future incidence of COVID-19, given realistic predictions/assumptions of future intervention and climate factors. We summarized some of these predictions in the final section of Results, notably that only a small fraction of US counties (with less than ten percent of the population) seem to have reached the level of herd immunity, and that relaxation of mobility restrictions without counter-measures (e.g., universal mask usage) will likely lead to increased daily mortality rates, beyond those seen in the Spring of 2020. In any epidemiological model, the infection rate of a disease is assumed proportional to population density \citep{dejong_diekmann_heesterbaak1995}, but, to our knowledge, its explicit effect in a real-world respiratory virus epidemic has not been demonstrated. The universal reach of the COVID-19 pandemic and the diversity of communities affected have provided an opportunity to verify this dependence. Indeed, as we show here, it must be accounted for to see the effects of weaker drivers, such as weather and demographics. A recent study of COVID-19 in the United States, working with a similar dataset, saw no significant effect due to population density \citep{hamidi2020}, but our analysis differs in a number of important ways. First, we have taken a dynamic approach, evaluating the time-dependence of the growth rate of mortality incidence, rather than a single static measure for each county, which allowed us to account for the changing effects of weather, mobility, and the density of susceptible individuals. Second, we have included an explicit and real-time measurement of social mobility, i.e., cell phone mobility data provided by Google \citep{google2020}, allowing us to control for the dominant effect of intervention. Finally, and perhaps most importantly, we calculate for each county an estimate of the ``lived'' population density, called the population-weighted population density (PWD) \citep{craig1985}, which is more meaningful than the standard population per political area. As with any population-scale measure, this serves as a proxy --- here, for estimating the average rate of encounters between infectious and susceptible people --- but we believe that PWD is a better proxy than standard population density, and it is becoming more prevalent, e.g., in census work \citep{dorling_atkins1995,wilson2012}. We also found a significant dependence of the mortality growth rate on specific humidity (although since temperature and humidity were highly correlated, a replacement with temperature was approximately equivalent), indicating that the disease spread more rapidly in drier (cooler) regions.
There is a large body of research on the effects of temperature and humidity on the transmission of other respiratory viruses \citep{moriyama2020, kudo2019}, specifically influenza \citep{barreca_shimshack2012}. Influenza was found to transmit more efficiently between guinea pigs in low relative-humidity and temperature conditions \citep{lowen2007}, although re-analysis of this work pointed to absolute humidity (e.g., specific humidity) as the ultimate controller of transmission \citep{shaman2009}. Although the mechanistic origin of humidity's role has not been completely clarified, theory and experiments have suggested a snowballing effect on small respiratory droplets that causes them to drop more quickly in high-humidity conditions \citep{tellier2009, noti2013, marr2018}, along with a role for evaporation and the environmental stability of virus particles \citep{morawska2005, marr2018}. It has also been shown that the onset of the influenza season \citep{shaman2010, shaman2011} --- which generally occurs between late-Fall and early-Spring, but is usually quite sharply peaked for a given strain (H1N1, H3N2, or Influenza B) --- and its mortality \citep{barreca_shimshack2012} are linked to drops in absolute humidity. It is thought that humidity or temperature could be the annual periodic driver in the resonance effect causing these acute seasonal outbreaks of influenza \citep{dushoff2004, tamerius2011}, although other influences, such as school openings/closings, have also been implicated \citep{earn2012}. While little is yet known about the transmission of SARS-CoV-2 specifically, other coronaviruses are known to be seasonal \citep{moriyama2020, neher2020}, and there have been some preliminary reports of a dependence on weather factors \citep{xu2020, schell2020}. We believe that our results represent the most definitive evidence yet for the role of weather, but emphasize that it is a weak, secondary driver, especially in the early stages of this pandemic where the susceptible fraction of the population remains large \citep{baker2020}. Indeed, the current early-summer rebound of COVID-19 in the relatively dry and hot regions of the Southwest suggests that the disease spread will not soon be controlled by seasonality. We developed a new model of infection in the framework of a renewal equation (see, e.g., \cite{champredon2018} and references therein), which we could formally solve for the exponential growth rate. The incubation period in the model was determined by a random walk through the stages of infection, yielding a non-exponential distribution of the generation interval, thus imposing more realistic delays to infectiousness than, e.g., the standard SEIR model. In this formulation, we did not make the standard compartmental model assumption that the infection of an individual induces an autonomous, sequential passage from exposure, to infectiousness, to recovery or death; indeed, the model does not explicitly account for recovered or dead individuals. This freedom allows for, e.g., a back passage from infectious to noninfectious (via the underlying random walk) and a variable rate of recovery or death.
We assumed only that the exponential growth in mortality incidence matched (with delay) that of the infection incidence --- the primary dynamical quantity in the renewal approach --- and we let the cumulative death count predict the susceptible density --- the second dynamical variable in the renewal approach --- under the assumption that deaths arise from a distinct subset of the population, with lower mobility behavior than those that drive infection (see Supplementary Material). Therefore, we fitted the model to the (rolling two-week estimates of the) COVID-19 mortality incidence growth rate values, $\lambda_{14}$, for all counties and all times, and used the per capita mortality averaged over that period, $f_D$, to determine the susceptible density. Regression to this nonlinear model was much improved over linear regression, and, once calibrated on an early portion of the county mortality incidence time series, the model accurately predicted the remaining incidence. Because we accounted for the precise effects of social mobility in fitting our model to the actual epidemic growth and decline, we were then able to, on a county-by-county basis, ``turn off'' mobility restrictions and estimate the level of cumulative mortality at which SARS-CoV-2 would fail to spread even without social distancing measures, i.e., we estimated the threshold for ``herd immunity.'' Meeting this threshold prior to the distribution of a vaccine should not be a goal of any community, because it implies substantial mortality, but the threshold is a useful benchmark to evaluate the potential for local outbreaks following the first wave of COVID-19 in Spring 2020. We found that a few counties in the United States have indeed reached herd immunity in this estimation --- i.e., their predicted mortality growth rate, assuming baseline mobility, was negative --- including counties in the immediate vicinity of New York City, Detroit, New Orleans, and Albany, Georgia. A number of other counties were found to be at or close to the threshold, including much of the greater New York City and Boston areas, and the Four Corners (Navajo Nation) region in the Southwest. All other regions were found to be far from the threshold for herd immunity, and therefore are susceptible to ongoing or restarted outbreaks. These determinations should be taken with caution, however. In this analysis, we estimated that the remaining fraction of susceptible individuals in the counties at or near the herd immunity threshold was in the range of 0.001\% to 5\% (see Supplementary Materials). This is in strong tension with initial seroprevalence studies \citep{rosenberg2020,havers2020}, which placed the fraction of immune individuals in New York City at 7\% in late March and 20\% in late April, implying that perhaps 75\% of that population remains susceptible today. We hypothesize that the pool of susceptible individuals driving the epidemic in our model is a subset of the total population --- likely those with the highest mobility and geographic reach --- while a different subset, with very low baseline mobility, contributes most of the mortality (see Supplementary Material). Thus, the near total depletion of the susceptible pool we see associated with herd immunity corresponds to the highly-mobile subset, while the low-mobility subset could remain largely susceptible.
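The small residual susceptible fractions quoted above follow directly from the fitted depletion factor $S(t) = S(0)\exp[-C_D\,f_D]$; a minimal numerical illustration, using mortality levels quoted elsewhere in the text:
\begin{verbatim}
import numpy as np

C_D = 3460  # best-fit conversion constant (Table 2)
# deaths per million: average county (June 2020), the herd-immunity
# threshold for the average county, and NYC's reported level
for dpm in (510, 1300, 2700):
    print(dpm, np.exp(-C_D * dpm * 1e-6))
# -> 0.17, 0.011, 8.8e-05: at or past the threshold, the driving
#    (highly mobile) susceptible pool is depleted to ~1% or far less
\end{verbatim}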
One could explicitly consider such factors of population heterogeneity in a model --- e.g., implementing a saturation of infectivity as a proxy for a clustering effect \citep{capasso1978,mollison1985,deboer2007,farrell2019} --- but we found (in results not shown) that the introduction of additional parameters left portions of the model unidentifiable. Despite these cautions, it is interesting to note that the epidemic curves (mortality incidence over time) for those counties for which we have predicted an approach to herd immunity are qualitatively different from those for which we have not. Specifically, the exponential rise in these counties is followed by a peak and a sharp decline --- rather than the flattening seen in most regions --- which is a typical feature of epidemic resolution by susceptible depletion. At the time of this writing, in early Summer 2020, confirmed cases are again rising sharply in many locations across the United States --- particularly in areas of the South and West that were spared significant mortality in the Spring wave. The horizon for an effective and fully-deployed vaccine still appears to be at least a year away. Initial studies of neutralizing antibodies in recovered COVID-19 patients, however, suggest a waning immune response after only 2--3 months, with 40\% of those that were asymptomatic becoming seronegative in that time period \citep{long2020}. Although the antiviral remdesivir \citep{beigel2020, grein2020, wang2020} and the steroid dexamethasone \citep{horby2020} have shown some promise in treating COVID-19 patients, the action of remdesivir is quite weak, and high-dose steroids can only be utilized for the most critical cases. Therefore, the management of this pandemic will likely require non-pharmaceutical intervention --- including universal social distancing and mask-wearing, along with targeted closures of businesses and community gathering places --- for years to come. The analysis and prescriptive guidance we have presented here should help to target these approaches to local communities, based on their particular demographic, geographic, and climate characteristics, and can be facilitated through our \href{https://wolfr.am/COVID19Dash}{online simulator dashboard}. Finally, although we have focused our analysis on the United States, due to the convenience of a diverse and voluminous data set, the method and results should be applicable to any community worldwide, and we intend to extend our analysis in forthcoming work.

\section*{Acknowledgement}

We are indebted to our colleagues, in particular Bruce Bassett, Ghazal Geshnizjani, David Spergel, and Lee Smolin, for helpful comments and discussions. NA is partially supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.

\printbibliography[title={Bibliography}]
\newpage

\section*{Supplementary Material}
\subsection*{Data \& Methods}
\subsubsection*{Datasets, Resources, Definitions}

All data for cases and mortality, demographics, mobility, and weather were incorporated into the publicly-available Wolfram COVID-19 Dataset and the Wolfram|Alpha Knowledgebase \citep{wolfram2020}.
COVID-19 confirmed case and mortality data were obtained from both the NYTimes and Johns Hopkins University GitHub repositories \citep{JHU2020, NYTimes2020}; the former was used for the analysis of initial case data in metropolitan regions, while the average of the two data sets was used for all other analyses. In each case, daily confirmed counts were utilized. Demographic data by county, including people per household, estimated 2019 population, annual births, and annual deaths, were obtained from the US Census 2019 {\em County Population Estimates} data set \citep{census2019a}. Median ages were determined from the US Census 2018 {\em County Characteristics Resident Population Estimates} data set \citep{census2018}. For the Median Age, Wolfram|Alpha has curated the raw data from \textit{United States Census Bureau, American Community Survey 5-Year Estimates: B01002, the Median Age By Sex, American FactFinder}; for the People per Household and Annual Death, the source of curated data is \textit{United States Census Bureau, State \& County QuickFacts}. County outline polygons were obtained from the US Census 2019 {\em TIGER/Line shapefiles} database \citep{census2019b}. Local weather data (Figure \ref{fig:mobility}) were obtained from the NOAA {\em Global Surface Summary of the Day} (GSOD) database \citep{noao2020}. The nearest WBAN station with daily dew point and pressure values (for the calculation of specific humidity) and daily average temperature was chosen for each county or metropolitan region. Weather data were averaged over a two-week period for $\lambda_{14}$, and over a window equal to the growth period for metropolitan regions. Google's {\em COVID-19 Community Mobility Reports} dataset \citep{google2020}, specifically ``Workplace mobility,'' was used to estimate the human social mobility in each county (Figure \ref{fig:mobility}).

\begin{figure} \centering \includegraphics[width=\linewidth]{Mobility_NYC.pdf} \includegraphics[width=\linewidth]{US_Humid_Average.pdf} \caption{(a) (Top) The 14-day rolling average of (population-weighted) social mobility \citep{google2020} for NYC, as well as all US counties considered here. For our analysis, we only use ``work places'' as an indicator, as others do not appear to show any {\it independent} correlation with $\lambda_{14}(t)$. (b) (Bottom) 28-day moving average of historical annual specific humidity in the United States (weighted-averaged by population).} \label{fig:mobility} \end{figure}

Population-weighted population density (or, population-weighted density, PWD) \citep{craig1985,wilson2012, dorling_atkins1995} was calculated using the Global Human Settlement Population raster dataset \citep{GHS2019}, which contains $\unit{250}{\meter}$-resolution population values worldwide, taken from census data. The value of PWD for a county --- or for a set of counties, in the metropolitan region analysis --- was calculated as the population-weighted average of density over all $(\unit{250}{\meter})^2$-area pixels contained within the region, i.e.,
%
\begin{equation}
{\rm PWD} = \sum_j \frac{\left(p_j/a_j\right) p_j}{\sum_i p_i}\,,
\label{eq:PWPD}
\end{equation}
%
where $p_j$ is the value (i.e., the population) of the $j$th pixel, $a_j = \unit{0.0625~}{\kilo\meter}^2$ is the area of each pixel (the GHS-POP image uses the equal-area Mollweide projection), and $\sum_i p_i$ is the total population of the region. This measure has also been called the {\em lived population density} because it is the population density experienced by the average person.
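A minimal sketch of Eq.\ (\ref{eq:PWPD}) (an illustrative function of our own, acting on a flattened array of raster pixel populations):
\begin{verbatim}
import numpy as np

def pwd(pixel_pops, pixel_area_km2=0.0625):
    """Population-weighted density: the average over people,
    not pixels, of the local density p_j / a_j."""
    p = np.asarray(pixel_pops, dtype=float)
    return np.sum((p / pixel_area_km2) * p) / p.sum()

# Same total population and crude density, very different PWD:
uniform   = np.full(1000, 10.0)                        # spread out
clustered = np.r_[np.full(10, 1000.0), np.zeros(990)]  # one dense pocket
print(pwd(uniform), pwd(clustered))   # 160.0 vs 16000.0
\end{verbatim}
The toy comparison makes concrete the point below: a region with dense pockets amid empty space has a PWD far above its crude density.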
\begin{figure} \centering \includegraphics[width=\linewidth]{PWPD_Histogram.pdf} \includegraphics[width=\linewidth]{Den_PWPD.pdf} \caption{Comparison of the distribution of population-weighted density with the crude population density of US counties. (a) (Top) Histograms. (b) (Bottom) Relative distributions: the blue line shows the one-to-one correspondence, while the orange line is the best-fit power law ${\rm PWD}_{\rm 250}({\rm km}^{-2}) \simeq 430 \times \left[D({\rm km}^{-2})\right]^{1/4}$.} \label{fig:PWPD} \end{figure}

In high-density counties, the population-weighted density PWD is close to the mean density of the county, $D={\rm Pop}/{\rm Area}$, suggesting a uniform distribution of population (see Figure \ref{fig:PWPD}). However, in lower-density counties, the mean density is much lower than the population-weighted density, due to heterogeneous dense pockets of population amidst vast empty spaces outlined by political boundaries. To represent the degree to which the population density changes across the region (county or metropolitan region), we define the {\em population sparsity index}, $\gamma$. Assuming that the population-weighted population density declines approximately as a power law with ``pixel'' area, ${\rm PWD}_{\sqrt{\rm Area}} \sim {\rm Area}^{-\gamma}$, we define:
%
\begin{equation}
\gamma = \frac{\log\left({\rm PWD}_{\unit{250}{\meter}}\right)- \log(D)}{\log\left[{\rm Area}\right] -\log\left[\left(\unit{250}{\meter}\right)^2\right]}\,.
\label{eq:gamma}
\end{equation}
%
In other words, we estimate the assumed power-law decline using two data points. The distribution of $\gamma$ and its correlation with county population and population density are shown in Figure (\ref{fig:gamma}). We can see that $\gamma$ ranges from $0.09$ (i.e., very uniform) for the most populous/dense counties to $0.88$ (i.e., very sparse) for the least populated/dense counties. For reference, the value of $\gamma$ for New York City is $0.14$.

\begin{figure} \centering \includegraphics[width=\linewidth]{gamma_Histogram.pdf} \includegraphics[width=\linewidth]{gamma_Pop.pdf} \includegraphics[width=\linewidth]{gamma_Density.pdf} \caption{(a) (Top) Distribution of the population sparsity index $\gamma$; (b) (Middle) its correlation with the total population of the county; (c) (Bottom) and with the county's average population density.} \label{fig:gamma} \end{figure}

\subsubsection*{Initial growth of confirmed cases for metropolitan regions}

For each of the top 100 metropolitan regions \citep{census2019c}, a logarithmic-scale population heat map, windowed from the full GHS-POP raster image, was used to select a minimal connected set of US counties enclosing the region of population enhancement. In this process, overlap and merger reduced the total number of metropolitan regions under consideration to 89. As is discussed in the main text, nearly every metropolitan region saw, in mid-March 2020, an exponential increase of daily confirmed cases, followed by a flat/plateau period of nearly constant daily confirmed cases. In a few cases, the second phase --- primarily caused by the country-wide lockdown --- lasted only days or weeks (possibly signaling a depletion of the susceptible population, see discussion in main text), but for most metropolitan regions the plateau persisted for months (indeed, persists or is again increasing at the time of this writing).
Thus, the initial value of the exponential growth rate, $\lambda$, of daily confirmed cases could be reliably and automatically estimated by fitting the case numbers to a logistic function
%
\begin{equation}
f_{\rm logistic} (t) = \frac{f_{\rm max}}{1 + \exp\left[ - \lambda \left(t - t_0\right)\right]}
\end{equation}
%
where $t_0$ represents the transition time from exponential growth to a constant, $f_{\rm max}$ is the plateau value in case numbers, and $f_{\rm logistic}(t) \propto \exp[\lambda t]$ for $t\ll t_0$. Fits were performed on the logarithm of the case numbers, yielding the maximum likelihood estimation of parameters under the assumption of lognormally-distributed errors (an analysis of the fit residuals, not shown, confirmed this assumption: case number fluctuations exhibit a variance far in excess of Poisson noise, but are well modeled by a log-normal probability density function with constant width), and associated estimates of the variance in each parameter. To avoid polluting the exponential growth phase with singular early cases, a ``detection limit'' of 3 was imposed, and all daily case values less than or equal to that limit were ignored in fitting. The only manual intervention required for this fit was the specification of the upper limit of its range, i.e., the {\em end} of the plateau region, for each data set. To analyze the effect of demographic, population, mobility, and weather variables on this initial growth rate, we perform a weighted linear regression to the $\lambda$ values (and their standard errors) of the 89 metropolitan regions. To choose representative cities for the visual examples in Figures \ref{fig:metroregions}A and \ref{fig:metroregions}C, we performed an additional logistic fit to the mortality incidence data of each region and retained for Figure \ref{fig:metroregions}C only those that had (1) less than 15\% error in both growth rates, and (2) $|\lambda_{\rm case} - \lambda_{\rm death}| < \unit{0.15}{d^{-1}}$. This was done in an effort to specifically comment on or highlight only those cities for which the growth rate was accurately determined, and was well correlated with the more reliable measure, mortality growth, that we used for the remainder of the analysis.

\subsubsection*{Linear Dynamical Model of Mortality Data}

A standard weighted least squares analysis was performed on the measured exponential growth rate, $\lambda_{14}$, as a function of demographic, mobility, population and weather variables, with weights equal to the inverse root of the estimated variance.

\subsubsection*{\label{sec:Model} Nonlinear Model}

We construct a model where, in the standard analogy to chemical reaction kinetics, the incidence of infections per unit area at time $t$, $i(t)$, is proportional to the product of the density of susceptible individuals, $S(t)$, and the density of infected individuals, $I(t)$. But, we allow for the rate constant for infection\footnote{In a physical picture of collisions, the rate constant of infection is $\beta(C) = \langle \sigma v\rangle_{\rm eff}(C)$, i.e., the scattering cross section of an encounter between a susceptible individual and an infectious individual in stage $C$, $\sigma$, multiplied by their relative velocity, $v$.} in the encounter, $\beta$, to depend on the infected individual's ``stage'' of infection, $C$, with $C=0$ immediately following infection.
The incidence then has the form: % \begin{equation} i(t) = -\dot{S}(t) = S(t) \int_0^{\infty} \, \beta(C) \, \mathcal{I}(C,t) \, {\rm d}C\, , \label{eq:Sdot} \end{equation} % where $\mathcal{I}(C,t)$ is the density of infected per stage at time $t$, and the first equality expresses that we neglect changes to the susceptible population by all means other than infection. The density of infected individuals is found by integration over the stages of infection, % \begin{equation} I(t) = \int_0^{\infty} \, \mathcal{I}(C, t) \, {\rm d}C \, . \end{equation} % If the rate constant were taken to be independent of stage, i.e., $\beta(C)= \bar{\beta}$, we would obtain the familiar expression $\dot{S}(t) = - \bar{\beta} S(t) I(t)$. We will assume spatial homogeneity and that the total density of individuals is constant and equal to $S(0)$ for a particular region, but that the density could vary when comparing different regions. We assume that an infected individual's evolution through the stages of infection, $C$, follows a Gaussian random walk in time, modulated by an exponential rate, $d$, of death or recovery. Therefore, we have % \begin{equation} \mathcal{I}(C,t) = \int_0^{\infty} \, i(t - a) \, f_{\rm rw}\left(C; a\right) \, {\rm e}^{-d \, a} \, {\rm d} a \end{equation} % where $a$ is the ``age'' of an infection (time since infection), and the probability density function for the stage at a given age is given by \begin{equation} f_{\rm rw}\left(C; a \right) = \sqrt{\frac{2 \tau}{\pi a}} \, \exp \left[ -\frac{\tau C^2}{2 a}\right] \,, \end{equation} % where $\tau$ is the characteristic time scale of the random walk\footnote{More precisely: $C$ is the absolute value of the position of a 1D random walker, taking one step every $\Delta t$, with step size drawn from a normal distribution with mean zero and variance $\Delta t/\tau$. The variance of the walker position at time $t$ is then $t/\tau$.}. Integrating the expression for $\mathcal{I}(C,t)$ over all stages and taking the derivative with respect to time yields the familiar expression $\dot{I}(t) = i(t) - d I(t)$, showing that the model reduces to the SIR model if a stage-independent rate constant, $\bar{\beta}$, is assumed. As we show here, using the random walk to specify the dependence of infection stage on time allows for both a non-exponential distribution of delays to infectiousness (which is more realistic than that of the simplest model with incubation, the SEIR model) and a formal solution for the exponential growth rate. Inserting the expression for $\mathcal{I}(C,t)$ into the incidence equation yields % \begin{equation} i(t) = S(t) \int_0^{\infty} i(t-a)\left[ {\rm e}^{-d \,a} \int_0^{\infty} \beta(C) \, f_{\rm rw}\left(C; a\right) {\rm d}C \right]{\rm d}a \end{equation} % which is in the form of a {\em renewal equation} \citep{heesterbeek_dietz1996, champredon2015, champredon2018}, with the bracketed expression being the expected infectivity of an individual with infection age $a$. To obtain the simplest nontrivial incubation period, we assume that $\beta(C) = \bar{\beta} \, \Theta(C-1)$ --- where $\Theta(x)$ is the Heaviside step function --- meaning that an infected individual is only infectious once they reach stage $C=1$, and the infection rate constant is otherwise unchanging. This implies that the incidence is % \begin{equation} i(t) = S(t) \int_0^{\infty} i(t-a) \, \bar{\beta}\, \mathcal{F}(a) \,{\rm d}a \, .
\end{equation} % where % \begin{equation} \begin{split} \mathcal{F}(a) &= {\rm e}^{-d \,a} \int_1^{\infty} \, f_{\rm rw}\left(C; a\right) {\rm d}C \\ & = {\rm e}^{-d \,a} \left[1 - {\rm erf}\left(\sqrt{\frac{\tau}{2 a}}\right)\right] \end{split} \end{equation} % is the probability that an individual infected $a$ time units ago is infectious. If we now assume that the density of susceptibles is constant $S(t) = \bar{S}$ over some interval of time, and that the incidence grows (or decays) exponentially in that interval, $i(t) = A {\rm e}^{\lambda t}$, we find % \begin{equation} 1 = \bar{\beta} \bar{S} \int_0^{\infty} \exp[-a \lambda] \, \mathcal{F}(a) \, {\rm d} a \end{equation} % which, assuming $\lambda + d > 0$ (i.e., the exponential growth rate cannot go below $-d$), can be integrated to obtain % \begin{equation} \left( \lambda + d \right) \, \exp\left[ \sqrt{2 \left(\lambda + d\right) \tau}\right] = \bar{\beta} \bar{S}\,. \end{equation} % This expression for $(\lambda + d)$ has a formal solution in terms of the {\em Lambert W-function}, with simple asymptotic forms: % \begin{equation} \begin{split} \lambda + d &= \frac{2}{\tau} \left[ {\rm W}\left(\sqrt{\frac{\bar{\beta} \bar{S} \tau}{2}}\right)\right]^2 \\ &\approx \left\{ \begin{array}{lc} \frac{1}{2 \tau} \left\{ \ln \left[ \frac{2 \bar{\beta} \bar{S} \tau}{\ln^2\left(\bar{\beta} \bar{S} \tau / 2\right)}\right] \right\}^2 & \left(\bar{S} \bar{\beta} \tau\gg 1\right)\\ \\ \bar{\beta} \bar{S} & \left(\bar{S} \bar{\beta} \tau\ll 1\right) \end{array} \right. \end{split} \label{eq:nmodel} \end{equation} % For the early stages of the epidemic, when we can assume that the population of susceptibles is approximately constant and large, we see that the growth rate depends approximately linearly on the square of the logarithm of the density. In later stages, when either the base contact rate declines due to social distancing interventions, or the population of susceptibles decreases, we see that the exponential rate takes the value $\lambda \approx - d$. In practice, we utilize the exact Lambert W-function expression as our ``nonlinear model'' for fitting $\lambda_{14}$, where we parameterize $\beta$ and $\tau$ by the demographic, population, and weather variables (see main text). To estimate the susceptible density, $\bar{S}$, in this procedure we must use the reported mortality statistics. Thus far we have not specified the dynamics of death. We now make the assumption that the probability of death increases proportionally to the number of exposures an individual experiences. As we prove in a separate section, below, this implies that the susceptible density is related to the fraction of dead in the community, $f_D = D_{\rm tot}/N$ (where $D_{\rm tot}$ is the cumulative mortality and $N$ is the total population), by $S(t) = S(0) \exp\left[-C_D \, f_D\right]$. The {\em basic reproduction number}, $R_0$, and the distribution of {\em generation intervals}, $g(t_g)$, are defined \citep{champredon2015,nishiura2010} through the function $\mathcal{F}(a)$: % \begin{equation} g(t_g) = \frac{\bar{\beta}S(0)\, \mathcal{F}(t_g)}{R_0} \quad {\rm with} \quad R_0 \overset{\mathrm{def}}{=} \int_0^{\infty} \bar{\beta} S(0) \, \mathcal{F}(t_g) \, {\rm d} t_g\,. 
\end{equation} % The generation interval (or generation time), $t_g$, is the time between the infections of an infector-infectee pair, and is often estimated from clinical data by the {\em serial interval}, which is the time between the onsets of symptoms of the infector and the infectee \citep{britton2019}, and the basic reproduction number is the average number of infectees produced by a single infected individual, assuming a completely susceptible population. These quantities can be calculated exactly for our model, as % \begin{equation} R_0 = \frac{\bar{\beta}S(0)}{d} \, {\rm e}^{-\sqrt{2 d \tau}} \end{equation} % and % \begin{equation} g(t_g) = d \, {\rm e}^{\sqrt{2 d \tau} - d t_g} \int_1^{\infty} \sqrt{\frac{2 \tau}{\pi t_g}} \exp\left[ - \frac{\tau C^2}{2 t_g}\right] {\rm d}C\,, \label{eq:generation} \end{equation} % where the expected value and variance of the generation interval are then: % \begin{equation} \label{eq:genEandVar} E\left[t_g\right] = \frac{1}{d} + \sqrt{\frac{\tau}{2 d}} \quad {\rm and} \quad {\rm Var}\left[t_g\right] = \frac{1}{d^2} + \sqrt{\frac{\tau}{8 d^3}}\,. \end{equation} % \subsection*{Extended Results and Analysis} \subsubsection*{Relation between the remaining susceptible density, $S(t)$, and the death fraction, $f_D(t)$} In epidemic models the infection of susceptible individuals is typically determined by % \begin{equation}\label{eq:infectionS} \dot{S} = - \beta \, S\, I_* \end{equation} % where $I_*$ is the density of infectious (contagious) individuals, and for our model, $\beta S I_*$ is the right-hand side of Eqn.\ \ref{eq:Sdot}. This can be solved, formally, as: % \begin{equation} \label{eq:solveS} S(t) = S(0) \exp\left[- \beta \int^t I_*(s) \, {\rm d}s\right]\,. \end{equation} % Alternatively, the susceptible density can be expressed in terms of the cumulative number of infected individuals, $I_{\rm tot}$, i.e., % \begin{equation} S(t) = S(0) \left[ 1 - f_I(t) \right] \end{equation} % where $f_I(t) = I_{\rm tot}/N$, with $N$ the total population. When fitting the exponential growth rate of mortality, $\lambda_{14}(t)$, to our nonlinear model, Eqn.\ \eqref{eq:nmodel} (see main text), we must estimate the value of the susceptible density driving growth at that time. Without any reliable information about the true infected or infectious populations, we must do so using the mortality statistics. We show here how the previous two equations can be used, along with reasonable assumptions about distinct sub-populations driving infection and death, to determine a relationship between the reported cumulative mortality (per capita) and the remaining susceptible density. Our basic assumption is that there are two different categories of susceptible individuals underlying the dynamics of the epidemic: (A) highly mobile individuals with a large geographic reach that frequently interact with other individuals (in particular, infectious individuals) and thus drive the dynamics of infection (these could be termed ``super-spreaders'' \citep{liu2020secondary}); and (B) essentially non-mobile individuals that have quite rare contacts with infectious individuals, but have a much higher probability of death once infected, and therefore make up the majority of the mortality burden. The dynamics of each susceptible population are governed by an equation of the form in Eqn.\ \eqref{eq:infectionS}, with a common density of infectious individuals, $I_*$, but with different rate constants, $\beta_A \gg \beta_B$.
From Eqn.\ \eqref{eq:solveS}, we see that the susceptible densities of the two populations are then related, at any time, by: % \begin{equation} \frac{S_A(t)}{S_A(0)} = \left[\frac{S_B(t)}{S_B(0)}\right]^{\beta_A/\beta_B}\,. \end{equation} % Expressing the non-mobile population in terms of the cumulative fraction infected, we have % \begin{equation} \frac{S_A(t)}{S_A(0)} = \left[ 1 - f_I^{(B)}(t) \right]^{\beta_A/\beta_B}\,, \end{equation} % and, assuming that the infection fatality rate (IFR) is a constant factor, $f_D(t) = {\rm IFR} \times f_I(t-\Delta t)$, where $\Delta t$ is the delay from infection to death, we can write: % \begin{equation} \frac{S_A(t)}{S_A(0)} = \left[ 1 - \frac{f_D^{(B)}(t+\Delta t)}{\rm IFR} \right]^{\beta_A/\beta_B}\,. \end{equation} % Finally, having assumed that the ratio of infection rates is large, we can approximate this as: % \begin{equation} \frac{S_A(t-\Delta t)}{S_A(0)} \approx \exp\left[- \frac{\beta_A}{\beta_B\, {\rm IFR}} \,f_D^{(B)}(t)\right] \end{equation} % The ``A'' category of individuals, as defined above, are exactly those individuals driving the infection in our model (and, presumably, in the real world), and, therefore, the susceptible density $S_A$ is exactly that which must be estimated for Eqn.\ \eqref{eq:nmodel}. On the other hand, with people aged 65 and over accounting for $\sim$80\% of COVID-19 deaths, and with approximately $\sim$45\% of deaths linked to nursing homes, the mortality statistics are clearly tracing individuals similar to category ``B.'' Therefore, we use this relationship, % \begin{equation*} S(t-\Delta t) = S(0) \, \exp\left[- C_D \, f_D(t)\right]\,, \end{equation*} % to estimate the susceptible density in terms of the reported per capita mortality, where we assume $S(0)$ is proportional to the population weighted density (PWD). \begin{figure} \centering \includegraphics[width=\linewidth]{Herd_NYC_Linear.pdf} \includegraphics[width=\linewidth]{Herd_NYC_Exp.pdf} \caption{Comparison of hypotheses regarding the relationship between the susceptible fraction, $S(t)/S(0)$, and the dead fraction, $f_D(t)$. A model in which deaths are suffered by a largely immobile population, while the infection is driven by a mobile category of individuals (bottom), was preferred to one with a single homogeneous population (top) at the $9\sigma$ confidence level. Similar to Figure \ref{fig:Herd}, the points show the best-fit linear model prediction for NYC, fitted independently in different bins of the mortality-to-population ratio, while the lines show the best-fit $\pm$ 1$\sigma$ nonlinear models. Note that while we show the nonlinear model predictions for a county similar to NYC, we use all US data to find the best fits.} \label{fig:Herd_Lin_Exp} \end{figure} We also considered the standard approach, in which the population is a single homogeneous group. In that case, the susceptible density could be estimated as % \begin{equation} S(t) = S(0) \left[1 - f_I(t)\right] = S(0) \left[ 1 - \frac{f_D(t+\Delta t)}{\rm IFR}\right]\,. \end{equation} % In testing both models, we found that the two-component population scenario was preferred by the data at the $\sim 10\sigma$ confidence level, with the homogeneous population model failing to capture the observed dependence of the growth rate on the per capita mortality (Figure \ref{fig:Herd_Lin_Exp}).
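To make the contrast concrete, here is a minimal numerical sketch (Python; the values of $C_D$ and the IFR below are illustrative stand-ins rather than our fitted values) of the two hypotheses for the susceptible fraction as a function of the dead fraction:
\begin{verbatim}
import numpy as np

f_D = np.linspace(0.0, 2.0e-3, 9)        # per capita dead fraction

# Two-population model: S/S0 = exp(-C_D * f_D); C_D ~ 3.5e3 is of the
# order of the fitted value quoted later in the text (illustrative here).
C_D = 3.5e3
S_two_pop = np.exp(-C_D * f_D)

# Homogeneous model: S/S0 = 1 - f_D / IFR, with an illustrative IFR of
# 0.5%; this form drops linearly and reaches zero at f_D = IFR.
IFR = 5.0e-3
S_homog = np.clip(1.0 - f_D / IFR, 0.0, None)

for fd, s1, s2 in zip(f_D, S_two_pop, S_homog):
    print(f"f_D = {fd:.2e}  two-population: {s1:.3f}  homogeneous: {s2:.3f}")
\end{verbatim}
The qualitative difference between the smooth exponential decline and the linear decline is what the data discriminate at high significance.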
The broader implication of our assumption of two populations is that the proportion of individuals with immunity required for ``herd immunity'' to take effect is lower than in a population with homogeneous mobility characteristics; i.e., the epidemic will slow once a significant proportion of the ``super-spreader'' category (category A, above) has been infected, regardless of the level of infection and immunity in the rest of the population. Indeed, the effect of population heterogeneity on lowering the ``herd immunity'' threshold for COVID-19 was recently noted \citep{britton2020mathematical}, and will be important in interpreting the results of randomized serology tests across the entire population \citep{Havers2020.06.25.20140384}. \subsection*{Incubation Time} \begin{figure} \includegraphics[width=1.6\linewidth]{Diagram.pdf} \caption{The stages of COVID-19 infection, as a continuous variable $C$. We model the disease as a random walk, starting at $C=0$, with a uniform exit rate (due to either quarantine or recovery). The curves show the resulting distribution in $C$ during the growing (early) and decaying (late) epidemic. Note that {\it Hospitalization} and {\it Death} are not directly modelled in the nonlinear model, as they should have a small effect on the spread of the epidemic. } \label{fig:Stages} \end{figure} The nonlinear epidemic model described above posits that the incubation of the SARS-CoV-2 virus within an infected individual can be modelled by a stochastic random walk starting at zero, with excursions beyond $\pm1$ corresponding to episode(s) of infectiousness. This makes our model distinct from the standard SE$^m$I$^n$R compartmental models (see, e.g., \citep{champredon2018}), where the progress of the disease is only in one direction --- $E^1 \to E^2 \to \ldots \to I^1 \to I^2 \to \ldots$ --- while in our model (Figure \ref{fig:Stages}), the individual can jump back and forth between different stages (with the obvious exception of Death), with a constant exit rate of $d$ for quarantine, recovery, or death. This can be described using a (leaky) diffusion equation: \begin{equation} \frac{\partial \mathcal{I}}{\partial t} = \frac{1}{2\tau} \frac{\partial^2 \mathcal{I}}{\partial C^2} - d \mathcal{I}.\label{eq:diffusion} \end{equation} Based on this picture, and the best-fit parameters to the US county mortality data (Table \ref{tab:nonlinear}), we can infer the probabilities associated with the different stages of the disease. For example, by looking at the steady-state solutions of Equation (\ref{eq:diffusion}), we can compute the probability that an exposed individual (who starts at $C=0$) will ever become infectious (i.e., make it beyond $C=C_{\rm inf} =1$): \begin{equation} P_{\rm inf} = \exp[-\sqrt{2 d \tau} C_{\rm inf}],~~~{\rm where}~~~ C_{\rm inf}=1. \end{equation} This is plotted as a function of the median age of the community in Figure (\ref{fig:incubation}a). For example, for the median age of all US counties, $A= 37.4$ yr, we get: \begin{equation} P_{\rm inf}(37.4~{\rm years}) = 0.08^{+0.04}_{-0.03} ~(68\%~ {\rm C.L.}), \end{equation} i.e., less than 12\% of exposed individuals will ever be able to infect others, although this fraction increases in older communities. Next, we can compute the distribution of times for the onset of infectiousness, i.e., the incubation period. This can be done by using the first-crossing probability of a random walk, which we did by solving Equation (\ref{eq:diffusion}) using a discrete Fourier series in the $(0,+1)$ interval.
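A minimal sketch of this computation follows (Python; we assume a reflecting boundary at $C=0$ and an absorbing boundary at $C=1$, which is our reading of the random-walk setup, and the parameter values are illustrative rather than fitted). Note that the total crossing probability of this bounded-interval solution differs by an $O(1)$, boundary-dependent factor from the semi-infinite steady-state estimate of $P_{\rm inf}$ above, so the sketch is used only for the shape of the incubation-period distribution:
\begin{verbatim}
import numpy as np

def onset_density(t, tau, d, n_terms=200):
    # Cosine-series (discrete Fourier) solution of Eqn. (eq:diffusion) on
    # (0, 1): reflecting at C=0, absorbing at C=1, walker starting at C=0.
    # The probability flux through C=1, damped by the exit factor e^{-d t},
    # gives the (unnormalized) density of onset-of-infectiousness times.
    n = np.arange(n_terms)
    k = (n + 0.5) * np.pi                       # eigen-wavenumbers
    flux = np.einsum('n,nt->t', (-1.0) ** n * k,
                     np.exp(-np.outer(k ** 2, t) / (2.0 * tau))) / tau
    return flux * np.exp(-d * t)

tau, d = 70.0, 1.0 / 17.7                       # illustrative values (days)
t = np.linspace(0.05, 120.0, 2400)
f = onset_density(t, tau, d)
p_cross = np.trapz(f, t)                        # fraction ever infectious
pdf = f / p_cross                               # conditional incubation pdf
cdf = np.cumsum(pdf) * (t[1] - t[0])
print(p_cross, t[np.searchsorted(cdf, 0.5)])    # crossing prob., median (d)
\end{verbatim}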
The resulting probability density is shown in Figure (\ref{fig:incubation}b), again showing a shorter incubation period in older communities. Finally, we can compute the probability density function for the generation interval, $g(t_g)$, defined in Equation (\ref{eq:generation}) (Figure \ref{fig:incubation}c). This shows a similar qualitative dependence on age as the incubation period, but the median incubation period is, as expected, shorter than the generation interval for each age group. Using Eqn.\ \eqref{eq:genEandVar} and our parameterization of $\tau$, we find a mean generation interval of % \begin{equation} E[t_g](37.4~{\rm years}) = 43\pm23~{\rm d} \,. \end{equation} % This estimate is much longer than those found by tracking the serial interval (the time between the onsets of symptoms for an infector-infectee pair) in COVID-19 patients \citep{ganyani2020, nishiura2020}, which are on the order of 5--10~d. It is possible that the long tail of these distributions, generated by the slow asymptotic exponential decay at rate $d\approx\unit{0.06}{d^{-1}}$, raises the mean generation interval, while a clinical study is necessarily biased toward shorter serial intervals. \begin{figure} \centering \includegraphics[width=\linewidth]{prob_contagious_total.pdf} \includegraphics[width=\linewidth]{Incubation.pdf} \includegraphics[width=1.08\linewidth]{Generation.pdf} \caption{(Top)(a) The probability that an individual exposed to the virus will ever become infectious. (Middle)(b) The probability distribution of the incubation period for the onset of virus shedding. (Bottom)(c) Distribution of the generation interval, $g(a)$, i.e., the time from an individual's infection to them infecting another. } \label{fig:incubation} \end{figure} \subsection*{Error Diagnostics and Forecasting COVID-19 Mortality} One of the most pressing questions in any exercise in physical modelling is whether we have a good understanding of the uncertainty in the predictions of the model. While we have an estimate of the measurement uncertainties for the mortality growth rates, $\lambda_{14}$, which we discussed in the main text, we should also characterize whether the deviations of the best-fit model from the measurements are consistent with statistical errors. To evaluate this, we can look at the average of the ratio of the variance of the model residuals to that of the measurement errors, otherwise known as the reduced $\chi^2$, or $\chi^2_{\rm red}$. This is shown in Figure (\ref{fig:chi2}), demonstrating that we see no systematic error in the model that is significantly bigger than the statistical errors, across counties with different populations. As another consistency check, Table (\ref{tab:big_v_small}) examines whether the parameters of the model change significantly from urban counties with large, uniform populations, to rural counties with small and more sparse populations (Figures \ref{fig:PWPD}-\ref{fig:gamma}). Among counties with enough COVID mortality data, those with population $\gtrsim 10^6$ contain roughly half of the total population, so we chose this as our threshold separating large from small counties. We notice no statistically significant difference, and Table (\ref{tab:big_v_small}) even suggests that the Fisher errors quoted here might be overestimating the true errors. This comparison lends further confidence to the universality of the nonlinear model across geography and demography.
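As a quick numerical cross-check of Eqn.\ \eqref{eq:genEandVar}, a sketch with representative parameter values (assumed here for illustration; the fitted values carry sizable uncertainties):
\begin{verbatim}
import numpy as np

def generation_interval_moments(tau, d):
    # Mean and variance of the generation interval, Eqn. (eq:genEandVar).
    mean = 1.0 / d + np.sqrt(tau / (2.0 * d))
    var = 1.0 / d ** 2 + np.sqrt(tau / (8.0 * d ** 3))
    return mean, var

# Representative values: d ~ 1/17.7 per day and tau ~ 70 days for a county
# at the US median age; these give numbers close to E[t_g] ~ 43 d above.
mean, var = generation_interval_moments(tau=70.0, d=1.0 / 17.7)
print(f"E[t_g] ~ {mean:.0f} d, sd[t_g] ~ {np.sqrt(var):.0f} d")
\end{verbatim}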
\begin{table*} \begin{tabular}{l|l|l|l } \hline Parameter & Small Counties& Large Counties & Difference/Error \\ \hline $ \tau_0( {\rm day})$ & 126. $\pm$ 58.9 & 219. $\pm$ 124. & -0.37 \\ $d^{-1}({\rm day})$ & 18.4 $\pm$ 3.92 & 18.1 $\pm$ 2.32 & 0.0104 \\ $\ln\left[\beta_0\tau_0^{-2}({\rm m}^2/{ \rm day}^3) \right]$& -2.23 $\pm$ 1.57 & -0.519 $\pm$ 2.1 & -0.747 \\ $C_D$ & 2743. $\pm$ 845. & 4425. $\pm$ 958. & -0.323 \\ $C_A$ & -4.73 $\pm$ 2.31 & -2.87 $\pm$ 1.75 & -0.338 \\ $100 C_{\cal M}$ & 0.0527 $\pm$ 0.0162 & 0.0732 $\pm$ 0.0233 & -0.227 \\ $C_\gamma$ & -1.97 $\pm$ 2.13 & -8.37 $\pm$ 4.48 & 0.744 \\ $C_T$ & -0.0415 $\pm$ 0.0251 & -0.0768 $\pm$ 0.0342 & 0.405 \\ $C_{A_D}$ & -1.36 $\pm$ 0.632 & -0.588 $\pm$ 0.641 & -0.521 \\ \end{tabular} \caption{Comparison of nonlinear model parameters between small (population < 1 million) and large (population > 1 million) counties. We see no statistically significant difference, as demonstrated by the values in the last column, which remain below 1 in magnitude. Note that, in contrast to Table (\ref{tab:nonlinear}), we use temperature rather than specific humidity ($C_{T}$ rather than $C_{\cal H}$), as the latter was not available for some small counties. Nevertheless, the parameters also remain statistically consistent with Table (\ref{tab:nonlinear}). } \label{tab:big_v_small} \end{table*} \begin{figure} \centering \includegraphics[width=\linewidth]{Deviation_Error.pdf} \caption{Root mean square of the ratio of the simplified nonlinear model residuals to the measurement errors (i.e., $\sqrt{\chi^2_{\rm red}}$), as a function of the population of the county. We see that the residuals are consistent with the measurement errors. } \label{fig:chi2} \end{figure} On average, we find that (either county-weighted or population-weighted) $\chi_{\rm red}^2 \simeq 1.28$, suggesting that the model errors are only 13\% bigger than the statistical errors. We further compare the model-predicted versus measured mortality growth rates in Figure (\ref{fig:I7_Observed_v_Predict}) for all our available data. We find that the 1-$\sigma$ error in the model prediction (in excess of the measurement errors) is on average $\pm \sigma_{14} = \pm 0.0180$, i.e., a 1.8\% per day error in the daily mortality growth rate. This is shown in Figure (\ref{fig:I7_Observed_v_Predict}) as the red region, which compares the model prediction with the observed mortality growth rates. We can also see that there appears to be no significant systematic deviation from the predictions, at least for $\lambda_{14} < 0.23$/day. \begin{figure} \centering \includegraphics[width=\linewidth]{I7_Observed_v_Predict.pdf} \caption{Observed versus best-fit model prediction for bins of $\lambda_{14}$. The points show the mean of the measured $\lambda_{14}$ within each predicted bin, as well as the error on the mean. The red region shows the mean excess model error, on top of the measurement uncertainties. } \label{fig:I7_Observed_v_Predict} \end{figure} Given an understanding of the physical model and its uncertainties, we can provide realistic simulations to forecast the future of mortality in any community, similar to those provided in the main text (Figure \ref{fig:Mortality_prediction}), which can be made on-demand using our online dashboard: \href{https://wolfr.am/COVID19Dash}{https://wolfr.am/COVID19Dash}. In order to perform these simulations, we follow these simple steps.
To predict the daily mortality on day $t+1$, $D(t+1)$, we use the prior 13 days of $D(t)$, as well as the total mortality up to that point: \begin{enumerate} \item Use Equation (\ref{eq:nonlinmodel}), plugging in the prior total mortality, county information, weather, mobility, and the parameters in Table \ref{tab:nonlinear} to find $\lambda_{14}$. Every simulation uses a random realization of the model parameters (from their posterior fits), which remains fixed through that simulation. \item Add the random model uncertainty to $\lambda_{14}$ using: \begin{equation} \lambda_{14}(t+1) \to \lambda_{14}(t+1)+ \frac{\sigma_{14}}{\sqrt{14}}\sum_{t'=t-12}^{t+1} g_{t'}, \end{equation} where the $g_{t'}$s are random independent numbers drawn from a unit-variance normal distribution. This captures the model uncertainty mentioned above, while ensuring that it remains correlated across the 14 days that are used to define $\lambda_{14}$. \item Having fixed the logarithmic slope for the daily mortality, $\lambda_{14}$, find the best-fit intercept and its standard error for $\ln[D(t')+1/2]$ for the preceding 13 days, i.e.\ $t-12 \leq t'\leq t$, which can then be used to find a random realization for $\ln[D(t+1)+1/2]$. \item Advance to the next day and return to step 1. \end{enumerate} \end{document} \section*{Scheduled for re-insertion or removal?} \subsection*{Methods} \subsubsection*{(old) Nonlinear Model} We consider community spread of the infection as a collisional process on the two-dimensional surface of Earth. The rate of new infections per unit area (treating Earth as a 2-dimensional surface) is given by: \begin{equation} \frac{\Delta N_{\rm new}}{\Delta t \Delta A}(t,{\bf x}) \simeq n_s(t,{\bf x})\int dC \times n(t,{\bf x},C) \times \langle \sigma v \rangle_{\rm eff}(C), \label{eq:transmission} \end{equation} where $n_s(t,{\bf x})$ and $n(t,{\bf x},C)$ are the number densities of the susceptible and infectious individuals, respectively. The effective interaction rate is $\langle \sigma v \rangle_{\rm eff}(C)$, where $\sigma$ can be thought of as a cross-section (with dimension of length in 2d), while $v$ is the relative velocity of the two populations during the transmission. Here, $C$ quantifies the stage of infection in an infected individual, which determines how infectious they can be. For example, it could be proportional to the logarithm of the number of SARS-CoV-2 viruses in the respiratory tract. Ignoring the spatial dependence for now, we shall further assume that infection proceeds as a random walk process. This might be particularly apt for COVID-19 infection, as it appears unpredictable, impacting similar people very differently [Ref.] ... This assumption suggests a random walk in the space of infection (and thus infectiousness) $C$: \begin{equation} n(t,C) = \int_{-\infty}^t dt' \frac{d^2N_{\rm new}}{dt d^2x}(t') \times \sqrt{\frac{2\tau}{\pi (t-t')}} \exp\left[-\frac{\tau C^2}{2(t-t')}\right], \label{eq:gaussian} \end{equation} where $\tau$ is a characteristic time-scale of incubation. Without loss of generality, we can assume that an individual has $C=0$ at the onset of infection, and becomes infectious at $C=1$. Furthermore, for simplicity we shall assume that: $\langle \sigma v \rangle_{\rm eff}(C) \simeq S\times\Theta(C-1)$, where $\Theta$ is the Heaviside step function, and $S$ is assumed to be approximately independent of $C$.
In other words, we assume that after an infected individual becomes contagious, they can infect $S\times n_s \times \Delta t$ susceptible individuals, who are distributed with a local density of $n_s$, within time $\Delta t$. For an exponentially growing infection: \begin{equation} \frac{d^2N_{\rm new}}{dt d^2x}(t) \propto \exp(\lambda\times t), \label{eq:exponential} \end{equation} combining Equations (\ref{eq:transmission}-\ref{eq:exponential}) {\bf [Be more explicit]} yields: \begin{equation} n_sS\int_1^\infty dC \int_{-\infty}^0 dt' \sqrt{\frac{2\tau}{\pi (-t')}} \exp\left[\lambda t' +\frac{\tau C^2}{2t'}\right]=1, \end{equation} which gives: \begin{equation} \lambda \exp\left(\sqrt{2 \lambda \tau}\right)=n_s S. \end{equation} This has a formal solution in terms of the Lambert W-function, with simple asymptotic forms: \begin{eqnarray} &&\lambda\tau = 2\left[W_0\left(\sqrt{\frac{n_s S\tau}{2}}\right)\right]^2 \simeq \nonumber\\&& \begin{cases} \frac{1}{2}\left\{\ln\left[\frac{2 n_s S \tau}{\ln^2(n_s S \tau/2)}\right] \right\}^2, & \text{for}~~\ n_s S \tau \gg 1, \\ n_s S\tau , &\text{for}~~\ n_s S \tau \ll 1. \end{cases}\label{eq:Lambert} \end{eqnarray} We notice that the growth rate $\lambda$ evaluated here does approach zero as $n_s \to 0$, as expected. However, $\lambda$ is always positive, which is an artifact of the assumption that $\langle C^2 \rangle = t/\tau$ continuously grows with time, while the infection rate $S$ remains constant. However, we expect that at late times, either the infection starts to decay due to recovery, or the infection rate $S$ drops as the patient self-quarantines, is hospitalized, or leaves the county. To account for this, we replace $\lambda$ by $\lambda+\lambda_0$ in Equation (\ref{eq:Lambert}), where $\lambda_0$ is a constant, i.e. \begin{equation} \boxed{\lambda\tau = 2\left[W_0\left(\sqrt{\frac{n_s S\tau}{2}}\right)\right]^2 -\lambda_0\tau}.\label{eq:prediction} \end{equation} This is equivalent to saying that the probability of an infected individual remaining contagious within the community decays as $\exp(-\lambda_0 t)$. \subsection*{I7 introduction} This section is to be edited to change ``$I_7$'' into ``exponential growth rate'', and to describe briefly the idea of (1) using mortality data, and (2) monitoring the time dependence of the exponential growth rate. In our opinion, the most accurate way to capture the rate of growth of COVID-19 infection across a community is by comparing the number of reported confirmed deaths from one week to the next. We call this quantity $I_7$, defined as: \begin{equation} I_7(t) \equiv \frac{\ln[D_7(t)]-\ln[D_7(t-7~{\rm days})]}{7~{\rm days}},\label{eq:I7} \end{equation} where $D_7(t)$ is the average daily mortality over the 7 days preceding time $t$. For example, a constant $I_7=0.1$ means that the number of deaths doubles from one week to the next ($\exp(7 \times 0.1) \approx 2$).
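To make the definition concrete, a minimal sketch of Eqn.\ (\ref{eq:I7}) on synthetic data (Python; the array and function names are ours):
\begin{verbatim}
import numpy as np

def I7(daily_deaths, t):
    # Weekly mortality growth rate, Eqn. (eq:I7): compare the average daily
    # mortality over the week preceding day t with that of the week before.
    D7_now = np.mean(daily_deaths[t - 7:t])
    D7_prev = np.mean(daily_deaths[t - 14:t - 7])
    return (np.log(D7_now) - np.log(D7_prev)) / 7.0

# Synthetic check: deaths growing at 10% per day should give I7 ~ 0.1/day,
# i.e., weekly deaths double from one week to the next (exp(0.7) ~ 2).
days = np.arange(60)
daily_deaths = 5.0 * np.exp(0.1 * days)
print(I7(daily_deaths, t=30))   # ~0.1 per day
\end{verbatim}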
Examples of $I_7(t)$ and $D_7(t)$ for the 4 US counties with the largest reported COVID-19 deaths (>1500 as of April 30, 2020) are shown in Figure (\ref{US4_ID7}). The $I_7$ statistic has several advantages over other measures that are often quoted in the media and literature: \begin{enumerate} \item As it is a relative quantity, unlike the daily or cumulative counts, it does not scale with the population size of the county, city, or state. \item Unlike the commonly used reproduction index, $R_t$ (quantifying the number of people infected by an individual), it is a direct observable and does not require an assumption about the incubation period, which is biased by the reported clinical cases. \item While the statistics are poorer compared to the reported cases, it is much less dependent on unknown factors such as the criteria for testing, local policies, test kit availability, and asymptomatic individuals [ref.]. \item While there is clear evidence that a large fraction of mortality is missed in the official COVID-19 mortality count [ref.], as long as the number of reported deaths is a monotonic function of the total number of deaths (e.g., a constant fraction, say $50\%$), it does not affect the sign of $I_7$, which is the crucial measure of the success of pandemic management. \item By averaging over 7-day periods, $I_7$ (and $D_7$) will not be sensitive to weekly changes, such as reporting lulls or increases/decreases in social activities over weekends. \end{enumerate} \begin{figure} \centering \includegraphics[width=\linewidth]{I7_Histogram.pdf} \caption{Histogram of the average daily mortality growth rate, $I_7$, for US counties, starting from the first 2-week period in the county with $\geq$ 56 reported deaths. } \label{fig:I7_Histogram} \end{figure} While the mortality growth rate $I_7(t)$ is defined via Equation (\ref{eq:I7}), it is not trivial to devise an estimator that could capture the best-fit value, as well as the error and its correlations. Common challenges in interpreting the raw data are inconsistent reporting strategies and policies, sudden data dumps, and delayed reporting, which could vary on weekly cycles. To marginalize over these inconsistencies, we use standard regression to fit a linear time-dependence to $\ln\left[D(t')+\frac{1}{2}\right]$ over $t-13\leq t'\leq t$, and interpret the slope as our estimator for $I_7(t)$. The standard regression error captures the deviations from the linear relationship, which could be intrinsic to the epidemic (e.g., different infection clusters), reporting inconsistencies, or simple Poisson error (which is often subdominant for larger infection clusters). Furthermore, we average the daily mortality reports by Johns Hopkins University [Ref.] and the New York Times [Ref.], which should further diminish the impact of reporting inconsistencies. Finally, the addition of $1/2$ to the argument of the $\ln$ ensures that it does not vanish, and also that $\ln\left[D+\frac{1}{2}\right]$ obeys Gaussian statistics, if $D$ obeys Poisson statistics, to leading order in $D^{-1}$. Since only the data for distinct 2-week periods are independent, in order to make $\chi^2$ fits to temporal data, we multiply the regression errors by $\sqrt{14}$, to account for correlations amongst the daily $I_7(t)$ estimates. \subsection*{Fitting information about I7} As $I_7(t)$ is an average rate of growth of mortality over the past 14 days, we expect it to be governed by conditions that drive infection in the preceding weeks.
In particular, since the time from infection to death is thought to be around 21-28 days, based on clinical studies (Ref's), we consider the potential time-dependent drivers of infection, i.e., social mobility and weather, within two broad bins, defined as: \begin{eqnarray} \bar{Z}_1(t)= \frac{1}{14} \sum_{t'=t-27}^{t-14} Z(t'), \\ \bar{Z}_2(t)= \frac{1}{14} \sum_{t'=t-41}^{t-28} Z(t'), \end{eqnarray} where $Z(t)$ could stand for ${\cal M}(t)$, $T(t)$ or $H(t)$. The only exception to this delayed temporal data is $X_{14}(t)$, the 14-day rolling average of the total COVID-related death-to-population ratio in the community. As we discussed above, we can consider $X_{14}(t)$ as a tracer of the level of ``herd immunity'' in the community. Here, since both $I_7(t)$ and $X_{14}(t)$ are delayed tracers of the infection process, we do not require any additional delay on $X_{14}(t)$. We shall do our model fit to the data in two steps. We first start with a simple linear regression to see if $I_7(t)$ can be jointly fitted as a linear combination of our delayed temporal data and county demographics (described in the previous section). This allows us to identify the statistically significant drivers of the epidemic. After narrowing down the main culprits, we move on to the non-linear physical model developed in Section (\ref{sec:Model}) and constrain its parameters, as well as their demographic dependence. \subsection*{Original ``Linear Model''} Table (\ref{tab:linear}) shows the results of a joint linear fit to all 1784 spatio-temporal data points during March 8-May 9, 2020, for 524 US counties, where information on COVID mortality and all potential driving factors considered here were available. The main limiting factor was the social mobility information, limiting us to counties that include 86\% of US mortality, as of May 9, 2020. We see that, out of the 12 potential driving factors considered, only 7 have {\it independent} statistical significance (or 6, if we do not double count Social Mobility). In particular, the dependence on weather, i.e., temperature and humidity, which has been often discussed in the literature [Ref Hazhir's others, Alesio], appears to be a secondary factor. This shows the importance of considering all potential driving factors {\it simultaneously}. For example, as the seasonal warming of weather in the Northern hemisphere in 2020 did coincide with the reduction in Social Mobility (driven by the spread of the pandemic), one may mistake the effect of social distancing for that of the weather, if no measure of social mobility is considered in the analysis. As to the dependencies that we find significant, most are not surprising. For example, we do expect a higher infection rate, and thus death growth rate, at higher population-weighted density, $n_{\rm PW}$, median age, and social mobility, ${\cal M}$, while population sparsity, $\gamma$, and herd immunity, $X_{14}$, would drive down $I_7(t)$. Possibly the most surprising dependency is the negative correlation, at $\simeq -4.1\sigma$, between $I_7$ and the {\it total} number of annual deaths in the county. In fact, this correlation is marginally more significant than a correlation with $\log$(population), which is $-3.9\sigma$. One possible interpretation of this negative correlation could be that the number of annual deaths might be a proxy for the number of potential outbreak clusters. The larger the number of these clusters, the longer it might take for the epidemic to spread across all of them, which would (at least initially) slow down the onset of the epidemic.
In fact, such a delay could also be an artifact of the local reporting systems used to keep track of the death statistics in US counties, which may incur longer delays when dealing with larger populations. Finally, we note that the coefficient of determination for the linear fit is $R^2= 0.737$, and the Bayesian Information Criterion is BIC $= -3028$. Now that we have identified the statistically significant drivers of COVID-19 mortality growth, we can move on to examine the nonlinear model of the epidemic, developed above in Section (\ref{sec:Model}). \subsection*{Original ``NonLinear Model''} We developed a nonlinear model for infection (see Supplementary Material) that intrinsically incorporates: population density; randomly distributed timings from infection to symptoms, and from infection to resolution/death; an accounting of susceptible density changes due to immunity/death; and a dependence on potential demographic factors. The key aspects of the exponential growth rate within this model framework (Eqn.\ \ref{eq:Lambert}, Supplementary Material) are the following: \begin{enumerate} \item $n_s$: the number density of the susceptible population. We shall assume that $n_s$ starts as the population weighted density $n_{\rm PW}$, but falls exponentially with the total infection/death in the community, i.e. \begin{equation} \ln n_s(t) = \ln n_{\rm PW} - a_X X_{14}(t), \end{equation} where $a_X$ is assumed to be a constant. Given that the susceptible population is expected to fall exponentially with the number of exposures to the virus, we can interpret $a_X$ as the number of community exposures per death. \item $S = \langle \sigma v \rangle_{\rm eff}$: the individual transmission rate. We assume that the individual transmission rate depends on the population sparsity, $\gamma$, in the county, as well as on the social mobility indicators ${\cal M}_1$ and ${\cal M}_2$ and the specific humidity bins $H_1$ and $H_2$. Furthermore, as we discovered in the previous section, there appears to be a significant dependence on the total annual deaths, $A_D$, which we shall include as a contribution to $S$: \begin{eqnarray} &&\ln S(t) = \ln S_0 + a_\gamma (\gamma - 0.1882) \nonumber\\ &&+ \bar{a} \left(\frac{\bar{\cal M}_1(t) + \bar{\cal M}_2(t)}{2} \right)+ \bar{h}\left( \frac{\bar{H}_1(t) + \bar{H}_2(t)}{2}\right) \nonumber\\ &&+a_D (\log A_D -4.039).\label{eq:S(t)} \end{eqnarray} Note that the two mobility and humidity bins (with 2-week and 4-week delays) have nearly identical statistical weight and coefficients (see Table \ref{tab:nonlinear_supp} in Supplementary Material). Therefore, it makes sense to combine them into a single bin for humidity and a single bin for mobility, i.e., the individual transmission rate $S(t)$ that drives the mortality growth $\lambda_{14}$ in a 2-week period is driven by the average mobility and average specific humidity within the preceding month, through the coefficients $\bar{a}$ and $\bar{h}$, respectively. \item $\tau$: the characteristic time-scale from exposure to the start of becoming contagious. $\tau$ is intrinsic to the contagion and how it grows in the body. For now, we shall assume that this depends on the median age of the population, $A$, although further dependence on other health and demographic factors is possible and can be explored in the future. For simplicity, we adopt a power-law\footnote{The choice of pivot, i.e.
the median age of 26.2 yr, minimizes the relative error in the normalization, $\tau_0$, for the data used at the time of publication.}: \begin{equation} \tau = \tau_0 \left(\frac{\rm Median ~Age}{26.2 ~{\rm years}} \right)^\alpha \end{equation} \item $\lambda_0^{-1}$: the characteristic time-scale from exposure to the end of being contagious in the community. While this period could depend on a variety of factors, either intrinsic to the disease, the human body, or the control measures in the society, we do not find a statistically significant dependence of $\lambda_0$ on other factors, and thus assume it to be a constant. \end{enumerate} With these parameterizations of $\lambda_{14}$, we performed a nonlinear regression to the entire US mortality data (Table \ref{tab:nonlinear}). Compared to the linear model of the previous section, the fit is improved to ${\rm BIC} = -6008$; $R^2=0.724$, even though the nonlinear and linear models both have 9 free parameters. This corresponds to an improvement of 7.6$\sigma$ over the best linear model (Table \ref{tab:linear}b). For example, we see that the mortality growth rate measurements for the six US counties with the highest mortality are well described by the model (Figure \ref{fig:I7_postdiction}). More quantitatively, the scatter of the measured growth rates around the best-fit model predictions is (on average) only 13\% larger than the measurement errors, independent of the population of the county\footnote{See Supplementary Material for a more detailed discussion of Error Diagnostics.}. \subsection*{Population Sparsity Index, $\gamma$} It is easy to see that an epidemic can readily propagate through a uniformly distributed urban area through human interactions. However, the transmission will be much slower if the population of similar local density is separated into sparse pockets. To quantify this, we shall simply fit a power law: \begin{equation} n_{\rm PW}({\rm Area}) \propto ({\rm Area})^{-\gamma}, \end{equation} i.e., we model the population density around an average person as a power-law in the area surrounding them. Combining the mean density of the county, its area, and the population-weighted density within (250 m)$^2$ yields: \begin{equation} \gamma = -\frac{\log(n_{\rm PW}/\bar{n}_{\rm county})}{\log\left[(250~{\rm m})^2/{\rm Area}_{\rm county}\right]}. \end{equation} \subsection*{(Old) Pandemic across Geography and Demography} Now that we have confirmed the predictive power of our model, we can examine how \subsection*{old herd imm discussion} Possibly the most pressing question about the management of the COVID-19 pandemic is the mortality level required for a community to develop immunity to SARS-CoV-2 transmission, i.e., $I_7<0$. As we have seen above (e.g., Figure \ref{fig:I7vEverything}), this is very much dependent on the demography and geography of the community, as well as the level of social distancing. Figure (\ref{fig:Herd})(a) shows how a typical US county could reach herd immunity, in the absence of any social distancing, after 0.1\% (or 1000 per million population) COVID-19 fatalities. However, as Figure (\ref{fig:Herd})(b) shows, this number can vary a lot, ranging from a few hundred per million to 1700/million for New York City (see Fig. \ref{fig:Herd}). In fact, by our estimate, 43 out of 3142 US counties in Alaska, Colorado, Idaho, Montana, Nebraska, North and South Dakota, Texas, Utah, and Wisconsin have sparse enough populations that they will not see any COVID-19 outbreak.
Unfortunately, this is not the case for most of the United States, and reaching COVID-19 herd immunity would only happen after a death toll of $334,000 \pm 15,000$, or $1055 \pm 47$ per million population. However, in the absence of social distancing measures, Figure (\ref{fig:Herd})(c) shows that a typical epidemic will overshoot the herd immunity limit by up to 150\% (i.e., a factor of 2.5, or 830,000 US fatalities) due to ongoing infections. \begin{table} \begin{tabular}{l|l} \hline Nonlinear Model Parameters \\ \hline $\tau_0$(day) & $157 \pm 58$ \\ $I_0^{-1}$(day) & $17.7 \pm 2.4$ \\ $\ln\left[S_0\tau_0^{-2}({\rm m}^2/{\rm day}^3)\right]$ & $0.31\pm 1.30 $ \\ $a_X$ & $3480 \pm 610$ \\ $\alpha$ & $-2.25 \pm 0.96$ \\ $100 a_1$ & $3.89 \pm 1.14$ \\ $100 a_2$ & $4.12 \pm 0.97$ \\ $h_1$ & $-0.075 \pm 0.100$ \\ $h_2$ & $-0.078 \pm 0.114$ \\ $a_\gamma$ & $-5.48 \pm 2.32$ \\ $a_D$ & $-1.05 \pm 0.26$ \\ \end{tabular} \begin{tabular}{l|l} \hline Simplified Model Parameters\\ \hline $\tau_0$(day) & $160 \pm 58$ \\ $I_0^{-1}$(day) & $17.6 \pm 2.2$ \\ $\ln\left[S_0\tau_0^{-2}({\rm m}^2/{\rm day}^3)\right]$ & $0.37 \pm 1.25 $ \\ $a_X$ & $3460 \pm 610$ \\ $\alpha$ & $-2.26 \pm 0.95$ \\ $100 a$ & $8.08 \pm 1.76$ \\ $h$ & $ -0.154 \pm 0.055$ \\ $a_\gamma$ & $-5.52 \pm 2.35$ \\ $a_D$ & $-1.05 \pm 0.25$ \\ \end{tabular} \caption{(a) (Top) Best-fit parameters for the nonlinear model (\ref{eq:prediction}), using the parametrization defined in Section \ref{sec:nonlinear}. (b) (Bottom) Simplified model, assuming $I_0\tau_0 =1$.} \label{tab:nonlinear_supp} \end{table} \subsection*{Old susceptible description} By definition, it declines with the total infected, $S(t) = S(0) - \int^t i(T) {\rm d}T = S(0) \left[ 1 - I_{\rm tot}(t) / N\right]$, where $i(t)$ is the incidence of infected density, $I_{\rm tot}(t)$ is the cumulative number of individuals ever infected, and $N$ is the population of the region. To estimate this quantity using the mortality data, we assume a time delay, and an infection mortality rate that depends exponentially on age, $D_{\rm tot}(t + \Delta t) = {\rm IFR}_{\rm 25} \exp\left[C_{\rm IFR} \left( A - \unit{25}{y}\right)\right] I_{\rm tot}(t)$, where $D_{\rm tot}$ is the cumulative mortality count\footnote{The constants are free parameters, but IFR estimates from the literature \citep{verity2020} are well fit with values of ${\rm IFR}_{\rm 25} = 0.02\%$ and $C_{\rm IFR} = \unit{0.11}{y^{-1}}$.}. The estimated susceptible density at the time of infection (i.e., $\Delta t$ prior to the window over which the growth rate is estimated) is then: % \begin{equation} \begin{split} \mathcal{S} = S(t- \Delta t) &= S(0) \Big[ 1 - \frac{1}{N \, {\rm IFR}_{\rm 25}} \\ & \times \exp\left[- C_{\rm IFR} \left(A - \unit{25}{y}\right)\right] D_{\rm tot}(t) \Big] \end{split} \end{equation} \subsection*{Another old susceptible description} Because infectious individuals are assumed to die at a constant exponential rate (see Supplemental Materials), $d_D$, the susceptible density is related to the fraction that have died in the community, $f_D = D_{\rm tot}/S(0)$, where $D_{\rm tot}$ is the cumulative mortality count and $S(0)$ is the initial population, as: % \begin{equation} S(t) = S(0) \exp\left[ - \frac{\beta S(0)}{d_D} f_D(t)\right]\,.
\end{equation} % Using the reported values of $f_D(t)$ leaves a single additional fitting parameter, $d_D$. \subsection*{Extended Introduction (Orig Niayesh)} It would be hard to exaggerate the impact that the COVID-19 pandemic, caused by the rapid spread of the SARS-CoV-2 coronavirus, has had on human civilization. While hundreds of academic studies have now considered the pathology and epidemiology of this infection, many aspects of this disease remain controversial and shrouded in mystery. Why does the contagion impact some individuals much more than others? What is the role of asymptomatic spreaders in driving the epidemic? Do official statistics reflect (or significantly underestimate) the number of COVID-19-related infections or deaths? What is the Infection Fatality Rate? Cascading effects from the impact of the pandemic on national healthcare systems, as well as the shutdown of a large fraction of global socioeconomic activities, can further impact the health and livelihood of the world population and lead to secondary fatalities, as well as shortening and/or deterioration of lives. Therefore, it is of paramount importance to understand the true dynamics and direct human cost of this disease, so that a proper, transparent, and balanced response can be designed and adopted by local governments across the world. For example, the mitigation strategies in London, Paris, and New York City are not likely to be the most appropriate ones in, e.g., Nebraska, Congo, or Saudi Arabia. Given the wealth of data now available on the tracers of the spread of COVID-19, as well as the availability of social mobility data thanks to the prevalence of cellphones, and other geographic and demographic information, we can now examine and dissect different potential drivers of the epidemic in an unprecedented way, potentially accessing information that was hard to infer from traditional epidemiological studies or laboratory tests, e.g., due to small samples or unknown sampling biases. Example, Stanford anti-body test, ... Another potential pitfall in purely statistical studies is that ``correlation does not imply causation''. For example, several studies have suggested a correlation between weather (or other factors) and the growth rate of the epidemic in different countries or cities. However, weather could also be correlated with social mobility, population density, the ``herd effect'' \citep{Herd1,Herd2}, and other factors that could be more primary to the transmission of the contagion. This highlights the importance of a first-principle approach to epidemiological dynamics, one that, if correct, could identify the primary drivers of an epidemic, and provide us with a (semi-)predictive ability, as in other areas in Science and Engineering, e.g., the Schr\"odinger equation in chemistry or the Navier-Stokes equations in aerodynamics. The objective of this article is to provide a similar framework for epidemics, combining the dynamics of two-body interactions (as in the Boltzmann equation, e.g., for stars or plasmas) with the empirically measured cross-sections and interaction rates, to provide a (semi-)first-principle derivation of predictable epidemic dynamics. The outline of the rest of this article is as follows: In Section \ref{sec:Model}, we develop a first-principle model of the epidemic transmission, identifying the physical quantities that drive the process.
\subsection*{Old infectious method of getting the $S$ vs.\ $D$ dependence} We will assume that only infectious individuals, with density $I_*(t) = \int_1^{\infty} \mathcal{I}(C,t) {\rm d C},$ have a hazard of death, meaning that exit from the infected class in the incubation period is a recovery, but could be either recovery or death once infectious. The dynamics of the density of dead individuals is then $\dot{D} = d_D I_* = - d_D \dot{S}/\left(\bar{\beta} S\right)$. This implies --- just as in standard compartmental models, e.g., SEIRD --- that the susceptible fraction of the population $f_S = S(t)/S(0)$ is related to the dead fraction, $f_D$, as $f_S(t) = \exp\left[ - \bar{\beta} S(0) f_D(t) / d_D \right]$. Conservation implies that the dynamics of the recovered density is $\dot{R} = d E + (d - d_D) I_*$, where $E(t) = I(t) - I_*(t)$ is the density of exposed (infected but not infectious) individuals. \subsubsection*{Distribution of Incubation Period} In this appendix, we provide the analytic derivation of the distribution of the incubation period for the onset of virus shedding, based on the random walk model presented in Section (\ref{sec:Model}). \subsection*{Old version of susceptible vs.\ $f_D$} \subsubsection*{Relation between the remaining susceptible density, $S(t)$, and the death fraction, $f_D(t)$} The model for the spread of a respiratory disease consists of susceptible members of a community being exposed to the contagion through interactions with infectious (contagious) individuals. The basic element in this process is an {\em exposure event}, and any susceptible individual that experiences one or more exposure events is defined to be infected. If these events can be considered rare for a small enough time interval, then Poisson statistics implies that for any interval, $\Delta t$ (over which the average rate of events, $r$, remains constant), the probability of any particular individual experiencing one or more infection events is $1 - \exp\left[-r\Delta t\right]$. For small enough $\Delta t$ this becomes simply $r \Delta t$, and, if we assume that the rate is proportional to the density of infectious individuals, $I_*$, we obtain: % \begin{equation} \label{eq:susceptPoisson} \begin{split} \frac{\left[ - \Delta S \right]}{S} &= \left[ \begin{array}{c} \textrm{probability each}\\ \textrm{susceptible has of}\\ \textrm{infection in }\Delta t \end{array} \right] = \beta \,I_* \, \Delta t\\ &\quad \underset{\Delta t \rightarrow 0}{\longrightarrow} \quad \dot{S} = - \beta \,S \,I_*\,, \end{split} \end{equation} % which is the standard model for infection (in our model, $\beta S I_*$ is the right-hand side of Eqn.\ \ref{eq:Sdot}). Since the expected number of events experienced by an individual in any small time interval is equal to $r \Delta t$, the expected number of total infection events experienced by an individual at time $t$ is % \begin{equation} \langle N_{\cal E}(t)\rangle = \beta \int^t I_*(s)\, {\rm d}s \end{equation} % which, by Equation \eqref{eq:susceptPoisson}, implies that % \begin{equation} S(t) = S(0) \exp\left[ - \langle N_{\cal E} (t)\rangle \right]\,. \end{equation} % We will assume that the hazard of death by COVID-19 increases proportionally with the number of infection events, i.e., that the conditional probability of dying from COVID-19 given that you have experienced $N_{\cal E}$ infection events is: % \begin{equation} P({\rm DC} \mid N_{\cal E}) = k \, N_{\cal E} \end{equation} % where $k$ is a constant.
Applying Bayes' Theorem and summing over all $N_{\cal E}$ values, we find that % \begin{equation} P({\rm DC}) = \sum_{N_{\cal E} = 1}^{\infty} k N_{\cal E} P(N_{\cal E})\,. \end{equation} % We will use this expression to estimate the remaining susceptible population from the reported per-capita death fraction, $f_D$. Thus, we write % \begin{equation} f_D(t) = P({\rm DC})(t) = \sum_{N_{\cal E}=0}^{\infty} k N_{\cal E} P(N_{\cal E})(t)\,, \end{equation} % where the probability that an individual from the initial population experienced $N_{\cal E}$ infection events by time $t$, $P(N_{\cal E})(t)$, depends on whether, and if so, when, that individual died prior to $t$: % \begin{equation} \begin{split} P(N_{\cal E})(t) &\approx P(N_{\cal E} \mid \textrm{alive at } t) P(\textrm{alive at }t)\\ &+ \sum_{i=1}^{N_{\rm seg}} P(N_{\cal E} \mid \textrm{died at }i \, \Delta t) P(\textrm{died at }i \Delta t) \end{split} \end{equation} % where $t=N_{\rm seg} \Delta t$. If we assume, however, that the overall infection fatality rate (IFR) is low, i.e., $P(\textrm{alive at }t) = 1 - f_D(t) \approx 1$, then we can neglect the sum in the previous equation in favor of the first term. The per-capita death fraction can then be written % \begin{equation} \begin{split} f_D(t) &\approx k \sum_{N_{\cal E} = 0}^{\infty} \, N_{\cal E} \, P(N_{\cal E} \mid \textrm{alive at } t)\\ &= k \, \langle N_{\cal E}(t)\rangle\,. \end{split} \end{equation} % Therefore, we can estimate the susceptible fraction of individuals at any time as % \begin{equation} S(t) = S(0) \exp\left[- \frac{1}{k} f_D\right]\, \end{equation} % where we write the parameter as $C_D = 1/k$ in the main text. \begin{figure} \centering \includegraphics[width=\linewidth]{Herd_NYC_Linear.pdf} \includegraphics[width=\linewidth]{Herd_NYC_Exp.pdf} \caption{Comparison of hypotheses regarding the relationship between the susceptible fraction, $S(t)/S(0)$, and the dead fraction, $f_D(t)$. A model with hazard of death proportional to the number of exposures (bottom) is preferred to one with a constant IFR (top) at the $9\sigma$ confidence level. Similar to Figure \ref{fig:Herd}, the points show the best-fit linear model prediction for NYC, fitted independently in different bins of the mortality-to-population ratio, while the lines show the best-fit $\pm$ 1$\sigma$ nonlinear models. Note that while we show the nonlinear model predictions for a county similar to NYC, we use all US data to find the best fits.} \label{fig:Herd_Lin_Exp} \end{figure} We considered an alternative scenario, in which an infected individual had a constant probability of death, regardless of the number of infection events suffered, i.e., that $f_D = {\rm IFR} \times f_I$, where $f_I$ is the cumulative fraction of individuals ever infected. Then the susceptible density could be estimated as % \begin{equation} S(t) = S(0) \left[1 - f_I(t)\right] = S(0) \left[ 1 - \frac{f_D(t)}{\rm IFR}\right]\,. \end{equation} % This is of course indistinguishable from the previous assumption when $f_D/k \ll 1$ --- where we see that $k=1/C_D = {\rm IFR}$ --- but it drops more sharply with $f_D$. In testing both models, we found that the multiple-exposure scenario was preferred by the data at the $\sim 10\sigma$ confidence level (Figure \ref{fig:Herd_Lin_Exp}). Our resulting fit to the mortality growth rate found that ${\rm IFR}^{-1}= C_D \simeq 3460 \pm 610$, i.e., an average of $\sim$3500 exposures lead to one death in the community.
To reconcile this number with the percent-level Case Fatality Rates (CFR) observed for SARS-COV2 \citep[e.g.,][]{wu2020estimating}, we should note that the sub-population responsible for the most effective spread of the contagion \citep[a.k.a. ``superspreaders'', e.g., ][]{liu2020secondary} is not the same as the sub-population most likely to die due to exposure to the virus (i.e., the elderly or those who suffer from co-morbidities). The former group is likely to experience the most exposures and thus reach immunity quickly, while the latter group would remain susceptible much longer. Therefore, in contrast to classical compartmental SEIR models, immunity is not built through infection (or vaccination) of a significant fraction $1-R^{-1}_0$ of the population, but rather by exhausting the superspreader sub-population, who are primarily responsible for spreading the virus, from the susceptible pool. Indeed, the effect of population heterogeneity in lowering the ``herd immunity'' threshold for a COVID-19 outbreak was recently noted by \citet{britton2020mathematical}, which is important for the interpretation of randomized serology tests, such as \citet{Havers2020.06.25.20140384}.
\section{Introduction} One-shot Video Object Segmentation (VOS) is the task of segmenting an object of interest throughout a video sequence, with many applications in areas such as autonomous systems and robotics. In this task, the first mask of the object appearance is provided and the model is supposed to segment that specific object during the rest of the sequence. VOS is a fundamental task in Computer Vision dealing with various challenges such as handling occlusion, tracking objects with different sizes and speeds, and drastic motion either from the camera or the object \cite{yao2019video}. Within the last few years, video object segmentation has received a lot of attention from the community \cite{caelles2017one,perazzi2017learning,voigtlaender2017online,tokmakov2017learning}. Although VOS has a long history \cite{chang2013video,grundmann2010efficient,marki2016bilateral}, only recently has it resurfaced due to the release of large-scale and specialized datasets \cite{pont20172017,perazzi2016benchmark}. To solve VOS, a wide variety of approaches have been proposed in the literature, ranging from training with static images without using temporal information \cite{caelles2017one} to using optical flow for utilizing the motion information and achieving better temporal consistency \cite{tokmakov2017learning}. However, the methods relying solely on static images lack temporal consistency, and using optical flow is computationally expensive and imposes additional constraints.

With the release of YoutubeVOS \cite{xu2018youtub}, the largest video object segmentation dataset to date, the authors demonstrated that having enough labeled data makes it possible to train a sequence-to-sequence (S2S) model for video object segmentation. In S2S, an encoder-decoder architecture is used, similar to \cite{badrinarayanan2017segnet}. Furthermore, a recurrent neural network (RNN) is employed after the encoder (referred to as the bottleneck) to track the object of interest in a temporally coherent manner. In this work, we build on top of the S2S architecture due to its simple and elegant design that reaches impressive results compared to the state of the art while remaining efficient \cite{xu2018youtub}.

In the YoutubeVOS dataset, there are sequences with up to five objects of various sizes to be segmented. By having a close look at the S2S behavior, we noticed that it often loses track of smaller objects. The problem in the failure cases is that early in the sequence, the network prediction of the segmentation masks has a lower confidence (the output of the $sigmoid$ layer is around $0.5$). This uncertainty increases and propagates rapidly to the next frames, resulting in the model losing the object, as shown in \autoref{fig:uncertainty}. Consequently, the segmentation score of that object would be zero for the rest of the sequence, which has a strong negative impact on the overall performance. We argue that this is partly due to a lack of information in the bottleneck, especially for small objects. To improve the capacity of the model for segmenting smaller objects, we propose utilizing spatio-temporal information at multiple scales. To this end, we propose using additional skip connections incorporating a memory module (henceforth referred to as skip-memory). Our intuition is based on the role of the ConvLSTM in the architecture, which is to remember the area of interest in the image. Using skip-memory allows the model to track the target object at multiple scales.
This way, even if the object is lost at the bottleneck, there is still a chance to track it by using information from lower scales. Our next contribution is the introduction of an auxiliary task for improving the performance of video object segmentation. The effectiveness of multi-task learning has been shown in different scenarios \cite{ruder2017overview}, but it has received less attention in video object segmentation. We borrow ideas from Bischke et al. \cite{bischke2019multi} for satellite image segmentation and adapt them for the task at hand. The auxiliary task defined here is distance classification. For this purpose, border classes, determined by the distance to the edge of the target object, are assigned to each pixel in the ground-truth mask. We adapt the decoder network with an extra branch for the additional distance classification mask and use its output as an additional training signal for predicting the segmentation mask. The distance classification objective provides more specific information about the precise location of each pixel, resulting in a significant improvement of the $F$ score (measuring the quality of the segmentation boundaries). The overall architecture is shown in \autoref{fig:architecture}. In the rest of the paper we refer to our method as S2S++.

\begin{figure} \centering \includegraphics[width=3in]{uncertainty_propagation.pdf} \caption{In this image, the ground-truth mask is shown on the left and the output of the decoder of the S2S architecture on the right. The output of the $sigmoid$ function (the last layer in the decoder) acts like a probability distribution over the binary classification, measuring the model confidence. An output of around $0.5$ (white color coding) implies a low confidence in the prediction, while values close to $0$ or $1$ (blue and red colors) show confident outputs w.r.t. the background and foreground classes. Our observation is that the model is often not confident when predicting masks for small objects. This uncertainty propagates to the next predictions, causing the model to lose the target object within a few time steps. We argue that part of this issue is because the RNN located in the bottleneck of the encoder-decoder architecture does not receive enough information from the small objects. } \label{fig:uncertainty} \end{figure}

\section{Related Work} \label{seq:related}

\begin{figure*}[t!] \centering \includegraphics[width=15cm]{model4.pdf} \caption{The overall architecture of our approach. We utilize information at different scales of the video by using skip-memory (RNN2). Experiments with more than one skip-memory connection are possible (only one is shown here for simplicity). We use an additional distance-based loss to improve the contour quality of the segmentation masks. For this purpose, a distance class is assigned to each pixel in the mask, based on its distance to the object boundary. We use a $softmax$ at the distance classification branch and a $sigmoid$ at the segmentation branch to compute $L_{dist}$ and $L_{seg}$, respectively. Yellow blocks show the architecture of the original S2S model and all other blocks depict our extension to this model.} \label{fig:architecture} \end{figure*}

One-shot video object segmentation can be seen as pixel-wise tracking of target objects throughout a video sequence, where the first segmentation mask is provided, as shown in \autoref{fig:intro}.
This field has a long history in the literature \cite{brox2010object}; however, with the rise of deep learning, classical methods based on energy minimization, super-voxels, and graphs \cite{papazoglou2013fast,jain2014supervoxel,shankar2015video,faktor2014video} were replaced with deep-learning-based approaches. In the following we provide a brief overview of the state-of-the-art approaches in this domain. Having the first segmentation mask at hand, two training mechanisms exist in the literature: \emph{offline training}, which is the standard training process, and \emph{online training}, which is performed at test time. In online training, heavy augmentation is applied to the first frame of the sequence in order to generate more data, and the network is further trained on the specific sequence \cite{perazzi2017learning,caelles2017one,voigtlaender2017online,Man+18b}. Using online training as an additional training step leads to better results; however, it makes the inference phase quite slow and computationally more demanding. Regarding offline training, various approaches have been suggested in the literature. Some approaches are based on using static images and extending image segmentation to video \cite{caelles2017one,perazzi2017learning}. In \cite{caelles2017one}, the authors use a VGG architecture \cite{simonyan2014very} pre-trained on ImageNet \cite{krizhevsky2012imagenet} and adapt it for video object segmentation. Further offline and online training, accompanied by contour snapping, allows the model to keep the object of interest and discard the rest of the image (classified as background). \cite{perazzi2017learning} treats the task as guided instance segmentation. In this case, the previously predicted mask (the first mask in the beginning, followed by the predicted masks at the next time steps) is used as an additional input, serving as the guidance signal. Moreover, the authors experiment with different types of signals such as bounding boxes and optical flow, demonstrating that even a weak guidance signal such as a bounding box can be effective. In OSMN \cite{yang2018efficient}, the authors propose using two visual and spatial modulator networks to adapt the base network for segmenting only the object of interest. The main problem with these methods is that they do not utilize sequential data and therefore suffer from a lack of temporal consistency. Another approach taken in the literature is using object proposals based on RCNN-based techniques \cite{he2017mask}. In \cite{luiten2018premvos} the task is divided into two steps: first, generating the object proposals and second, selecting and fusing the promising mask proposals, trying to enforce temporal consistency by utilizing optical flow. In \cite{li2017video} the authors incorporate a re-identification module based on patch-matching to recover from failure cases where the object is lost during segmentation (e.g., as a result of accumulated error and drift in long sequences). These methods achieve a good performance, with the downside of being quite complex and slow. Before the introduction of a comprehensive dataset for VOS, it was customary to pre-train the model parameters on image segmentation datasets such as PASCAL VOC \cite{everingham2010pascal} and then fine-tune them on video datasets \cite{voigtlaender2017online,wug2018fast}. Khoreva et al. suggest an advanced data augmentation method for video segmentation, including non-rigid motion, to address the lack of labeled data in this domain \cite{khoreva2019lucid}.
However, with the release of the YoutubeVOS dataset \cite{xu2018youtub}, the authors show that it is possible to train an end-to-end, sequence-to-sequence model for video object segmentation when having enough labeled data. They deploy a ConvLSTM module \cite{xingjian2015convolutional} to process the sequential data and to maintain temporal consistency. In \cite{tokmakov2017learning}, the authors propose a two-stream architecture composed of an appearance network and a motion network. The results of these two branches are merged and fed to a ConvGRU module before the final segmentation. \cite{ventura2019rvos} extends the spatial recurrence proposed for image instance segmentation \cite{salvador2017recurrent} with temporal recurrence, designing an architecture for zero-shot video object segmentation (without using the first ground-truth mask). In this paper, we focus on using ConvLSTM for processing sequential information at multiple scales, following the ideas in \cite{xu2018youtub}. In the next sections, we elaborate on our method and proceed with the implementation details, followed by experiments as well as an ablation study on the impact of different components and the choice of hyper-parameters.

\section{Method} In this section we describe our method, including the modifications to the S2S architecture and the use of our multi-task loss for video object segmentation. The S2S model is illustrated with yellow blocks in \autoref{fig:architecture}, where the segmentation mask is computed as (adapted from \cite{xu2018youtub}): \begin{equation} h_{0}, c_{0} = Initializer(x_{0}, y_{0}) \end{equation} \begin{equation} \Tilde{x}_{t}= Encoder(x_{t}) \end{equation} \begin{equation} h_{t}, c_{t} = RNN1(\Tilde{x}_{t}, h_{t-1}, c_{t-1}) \end{equation} \begin{equation} \hat{y}_{t} = Decoder(h_{t}) \end{equation} with $x$ referring to the RGB image and $y$ to the binary mask.

\subsection{Integrating Skip-Memory Connections} We aim to better understand the role of the memory module used in the center of the encoder-decoder architecture in the S2S method. To this end, we replaced the ConvLSTM with simply feeding the previous mask as guidance for predicting the next mask, similar to \cite{perazzi2017learning}. Doing so, we observed a drastic performance drop of about ten percent in the overall segmentation accuracy. This suggests that the guidance signal from the previous segmentation mask alone is not enough and that features from the previous time step should be aligned to the current time step. As a result, we hypothesize that the role of the ConvLSTM in the architecture is twofold: first, to remember the object of interest through the recurrent connections and the hidden state, masking out the rest of the scene; and second, to align the features from the previous step to the current step, playing a role similar to optical flow. As mentioned earlier, the S2S model incorporates a memory module at the bottleneck of the encoder-decoder network. By inspecting the output of this approach, we noticed that the predicted masks for small objects are often worse than those for other objects (see \autoref{fig:planes} and \autoref{fig:examples} for visual examples). The issue is that the target object often gets lost early in the sequence, as shown in \autoref{fig:uncertainty}. We reason that this is partially due to the lack of information for smaller objects in the bottleneck.
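To make the S2S recurrence above concrete, the following is a minimal PyTorch-style sketch (ours, not the authors' code); \texttt{enc} and \texttt{dec} are toy stand-ins for the VGG-based encoder and the multi-stage up-sampling decoder described in the implementation section:

\begin{verbatim}
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell; RNN1 in the paper uses a 3x3 kernel, 512 channels."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def s2s_step(enc, rnn1, dec, x_t, h, c):
    """One time step of the recurrence: encode, update memory, decode."""
    feat = enc(x_t)                    # \tilde{x}_t = Encoder(x_t)
    h, c = rnn1(feat, h, c)            # h_t, c_t = RNN1(\tilde{x}_t, h_{t-1}, c_{t-1})
    y_hat = torch.sigmoid(dec(h))      # \hat{y}_t = Decoder(h_t)
    return y_hat, h, c

# Toy usage (the real model uses VGG16 features; h, c come from the Initializer).
enc = nn.Conv2d(3, 512, 3, padding=1)
dec = nn.Conv2d(512, 1, 3, padding=1)
rnn1 = ConvLSTMCell(512, 512)
x_t = torch.randn(1, 3, 64, 64)
h = torch.zeros(1, 512, 64, 64)
c = torch.zeros(1, 512, 64, 64)
y_hat, h, c = s2s_step(enc, rnn1, dec, x_t, h, c)
# The failure mode discussed above arises when h no longer encodes the
# (small) target object, so the decoder has nothing to recover it from.
\end{verbatim}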
For image segmentation, this issue is addressed via introducing skip connections between the encoder and the decoder \cite{ronneberger2015u, badrinarayanan2017segnet}. This way, the information about small objects and fine details is directly passed to the decoder. Using skip connections is very effective in image segmentation; however, when working with video, if the information in the bottleneck (the input to the memory) is lost, the memory concludes that there is no object of interest in the scene anymore (since the memory provides information about the target object and its location). As a result, the information in the simple skip connections will not be very helpful in this failure mode. As a solution, we propose a system that keeps track of features at different scales of the spatio-temporal data by using a ConvLSTM in the skip connection, as shown in \autoref{fig:architecture}. We note that some technical considerations should be taken into account when employing ConvLSTM at higher image resolutions. As we move to higher resolutions (lower scales) in the video, the motion is larger and the receptive field of the memory is smaller. As stated in \cite{reda2018sdc}, capturing the displacement is limited to the kernel size in kernel-based methods such as ConvLSTMs. Therefore, adding ConvLSTMs at lower scales in the decoder without paying attention to this aspect might have a negative impact on the segmentation accuracy. Moreover, during our experiments we observed that it is important to keep the simple skip connections (without ConvLSTM) intact in order to preserve the uninterrupted flow of the gradients. Therefore, we add the ConvLSTM in an additional skip connection (RNN2 in \autoref{fig:architecture}) and merge the information from the different branches using weighted averaging with learnable weights. Hence, it is possible for the network to access information from the different branches in an optimal way. For the training objective of the segmentation branch in \autoref{fig:architecture}, we use the sum of the balanced binary-cross-entropy loss \cite{caelles2017one} over the sequence of length $T$, defined as: \begin{equation} \begin{aligned} L_{seg}(W) & = \sum_{t=1}^{T}(-\beta\sum\limits_{j\in Y_{+}}\log P(y_{j}=1|X;\textbf{W})\\ & -(1-\beta)\sum\limits_{j\in Y_{-}}\log P(y_{j}=0|X;\textbf{W})) \end{aligned} \label{eq:seg_loss} \end{equation} where $X$ is the input, $\textbf{W}$ denotes the learned weights, $Y_{+}$ and $Y_{-}$ are the foreground and background labeled pixels, $\beta=|Y_{-}|/|Y|$, and $|Y|$ is the total number of pixels.

\begin{figure} \centering \includegraphics[width=3in]{distance_classes2.png} \vspace{1mm} \caption{In this figure, we show a binary mask (left) together with a heat-map depicting the distance classes (right). The number of distance classes is determined by two hyper-parameters: the \emph{number of border pixels} around the edges and the \emph{bin size} for each class. The visualization shows that, unlike previous works, our representation captures distance classes inside (reddish colors) as well as outside of the objects (blueish colors). } \label{fig:distance} \end{figure}

\begin{table*}[!htbp] \centering \caption{Comparison of our best-obtained results with the state-of-the-art approaches in video object segmentation on the YoutubeVOS dataset. The values are reported in percentages and divided into columns for each score, as in \cite{xu2018youtub}. The table is divided into two parts, for methods with and without online training.
We can see that our approach (even without online training) achieves the best overall score. } \begin{tabular}{lc c c c c c} Method & Online training & $J_{seen}$ & $J_{unseen}$ & $F_{seen}$ & $F_{unseen}$ & overall\\ \hline OSVOS \cite{caelles2017one}& yes & 59.8 & \textbf{54.2} & 60.5 & \textbf{60.7} & \textbf{58.08}\\ MaskTrack \cite{perazzi2017learning} & yes & 59.9 & 45.0 & 59.5 & 47.9 & 53.08 \\ OnAVOS \cite{voigtlaender2017online}& yes & \textbf{60.1} & 46.6 & \textbf{62.7} & 51.4 & 55.20\\ \hline OSMN \cite{yang2018efficient} & No & 60.0 & 40.6 & 60.1 & 44.0 & 51.18\\ RVOS \cite{ventura2019rvos} & No & 63.6 & 45.5 & 67.2 & 51.0 & 56.83\\ S2S \cite{xu2018youtub} & No & 66.7 & 48.2 & 65.5 & 50.3 & 57.68\\ S2S++(ours) & No & \textbf{68.68} & \textbf{48.89} & \textbf{72.03} & \textbf{54.42} & \textbf{61.00}\\ \hline \end{tabular} \label{tab:SOTA} \end{table*} \subsection{Border Distance Mask and Multi-Task Objective} As the second extension, we build upon the previous work of Bischke et al. \cite{bischke2019multi} and train the network parameters, in addition to the object segmentation mask, with an image representation based on a distance transformation (see \autoref{fig:distance} for an example). This image representation was successfully used in a multi-task learning setup to explicitly bias the model to focus more on those pixels which are close to the object boundary and more prone to misclassification, compared to the ones further away from the edge of the object. In order to derive this representation, we first apply the distance transform to the object segmentation mask. We truncate the distance at a given threshold to only incorporate the nearest pixels to the border. Let $Q$ denote the set of pixels on the object boundary and $C$ the set of pixels belonging to the object mask. For every pixel $p$ we compute the truncated distance $D(p)$ as: \begin{equation} \begin{split} D(p) = \delta_p\, \min \left\{ \min_{q \in Q} d(p, q),\ R \right\}, \\ \mbox{where} \ \delta_p = \begin{cases} +1 & \text{if} \quad p \in C \\ -1 & \text{if} \quad p \notin C \end{cases} \end{split} \end{equation} where $d(p,q)$ is the Euclidean distance between pixels $p$ and $q$, and $R$ is the maximal radius (truncation threshold). The pixel distances are additionally weighted by the \emph{sign function} $\delta_p$ to represent whether pixels lie inside or outside objects. The continuous distance values are then uniformly quantized with a bin size $s$ into $ \lfloor R/s \rfloor$ bins. Considering both inside and outside border pixels, this yields $2R/s$ binned distance classes, as well as two classes for pixel distances that exceed the threshold $R$. We one-hot encode every pixel $p$ of this image representation into $K$ classification maps $D_K(p)$, one for each border distance class. Since we consider a multi-class classification problem for the distance prediction task, $L_{dist}$ is defined as the cross-entropy loss between the derived distance output representation $D_K(p)$ and the network output. We optimize the parameters of the network with a multi-task objective, combining the loss for the segmentation mask, $L_{seg}$, and the loss for the border distance mask, $L_{dist}$, as a weighted sum: \begin{equation} L_{total} = \lambda\ L_{seg} + (1-\lambda)\ L_{dist} \label{eq:total} \end{equation} The loss of the object segmentation task is the balanced binary-cross-entropy loss as defined in \autoref{eq:seg_loss}. The network can be trained end-to-end.
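A minimal NumPy/SciPy sketch of this label construction (our reconstruction of the scheme above; the authors' exact boundary handling and class ordering may differ):

\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_classes(mask, R=20, s=1):
    """Quantized signed border-distance labels for a binary object mask.

    mask: (H, W) array with 1 inside the object, 0 outside.
    Returns int class ids in [0, 2*R//s + 1]: R/s bins on each side of the
    boundary plus one saturation class per side for distances beyond R,
    i.e. a label space of 2*R/s + 2 classes (42 for R=20, s=1).
    """
    d_in = distance_transform_edt(mask == 1)   # interior distance to boundary
    d_out = distance_transform_edt(mask == 0)  # exterior distance to boundary
    d = np.where(mask == 1, d_in, d_out)
    b = np.minimum((d // s).astype(int), R // s)  # bin index, saturating at R/s
    # Outside pixels map to classes 0..R/s (far -> near),
    # inside pixels to R/s + 1..2R/s + 1 (near -> far).
    return np.where(mask == 1, R // s + 1 + b, R // s - b)
\end{verbatim}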
\section{Implementation Details} In this section we describe the implementation details of our approach. \subsection{Initializer and Encoder Networks} The backbone of the initializer and the encoder networks in \autoref{fig:architecture} is a VGG16 \cite{simonyan2014very} pre-trained on ImageNet \cite{krizhevsky2012imagenet}. The last layer of VGG is removed and the fully-connected layers are adapted to a convolution layer to form a fully convolutional architecture, as suggested in \cite{long2015fully}. The number of input channels for the initializer network is changed to $4$, as it receives the RGB image and the binary mask of the object as the input. The initializer network has two additional $1\times1$ convolution layers with $512$ channels to generate the initial hidden and cell states of the ConvLSTM at the bottleneck (RNN1 in \autoref{fig:architecture}). For initializing the ConvLSTMs at higher scales, up-sampling followed by convolution layers is utilized, in the same fashion as in the decoder. The additional convolution layers are initialized with Xavier initialization \cite{glorot2010understanding}. \subsection{RNNs} ConvLSTMs $1$ and $2$ (shown as RNN1 and RNN2 in \autoref{fig:architecture}) both have a kernel size of $3\times3$ with $512$ channels. The ConvLSTM at the next level has a kernel size of $5\times5$ with $256$ channels. Here, we chose a bigger kernel size to account for capturing larger displacements at lower scales in the image pyramid. \subsection{Decoder} The decoder consists of five up-sampling layers with bi-linear interpolation, each followed by a convolution layer with a kernel size of $5\times5$ and Xavier initialization \cite{glorot2010understanding}. The numbers of channels for the convolution layers are $512, 256, 128, 64, 64$, respectively. The features from the skip connections and the skip-memory are merged using a $1\times1$ convolution layer. To adapt the decoder for the multi-task loss, an additional convolution layer is used to map $64$ channels to the number of distance classes. This layer is followed by a $softmax$ to generate the distance class probabilities. The distance scores are merged into the segmentation branch, where a $sigmoid$ layer is used to generate the binary segmentation masks. We use the Adam optimizer \cite{kingma2014adam} with an initial learning rate of $10^{-5}$. In our experiments we set the value of $\lambda$ in Equation \ref{eq:total} to $0.8$. Once the training loss has stabilized, we decrease the learning rate by a factor of $0.99$ every $4$ epochs. Due to GPU memory limitations, we train our model with a batch size of $4$ and a sequence length of $5$ to $12$ frames. \begin{table*}[!htbp] \centering \caption{Ablation study on the impact of skip-memory and the multi-task loss. We can see that the multi-task loss and skip-memory individually improve the results, but lead to the best results when combined.
} \begin{tabular}{c c c c c c} Method & $J_{seen}$ & $J_{unseen}$ & $F_{seen}$ & $F_{unseen}$ & overall\\ \hline base model & 65.36 & 43.55 & 67.90 & 47.50 & 56.08\\ base model + multi-task loss & 67.65 & 44.62 & 70.81 & 49.84 & 58.23\\ base model + one skip-memory & 66.89 & 46.82 & 69.22 & 50.08 & 58.25\\ base model + one skip-memory + multi-task loss & 67.18 & 47.04 & 70.24 & 52.30 & 59.19\\ base model + two skip-memory + multi-task loss & \textbf{68.68} & \textbf{48.89} & \textbf{72.03} & \textbf{54.42} & \textbf{61.00}\\ \hline \end{tabular} \label{tab:ablation} \end{table*} \begin{table*}[!htbp] \centering \caption{Results for different hyper-parameters of the multi-task loss on our best model. We can see that a higher number of distance classes slightly improves the metrics.} \begin{tabular}{c c c c c c c c} border size & bin size & number of classes & $J_{seen}$ & $J_{unseen}$ & $F_{seen}$ & $F_{unseen}$ & overall\\ \hline 20 & 10 & 6 & 68.37 & 47.68 & 71.54 & 52.38 & 59.99 \\ 20 & 1 & 42 & \textbf{68.68} & \textbf{48.89} & \textbf{72.03} & \textbf{54.42} & \textbf{61.00}\\ 10 & 1 & 22 & 68.40 & 47.91 & 71.61 & 52.83 & 60.19\\ \hline \end{tabular} \label{tab:hyper} \end{table*} \begin{figure*} \centering \includegraphics[width=\textwidth]{planes4.png} \caption{Qualitative comparison between the results obtained from the S2S approach (first row) and the results from our method (second row). The first mask ($t=0$) is provided at test time and the target objects are segmented independently throughout the whole sequence. Every second frame is shown here and the brightness of the images is adjusted for better visibility. As can be seen, our approach successfully tracks the target airplanes throughout the sequence, while the S2S method loses and mixes the object masks early in the sequence.} \label{fig:planes} \end{figure*} \subsection{Border Output Representation} The number of border pixels and the bin size per class are hyper-parameters which determine the resulting number of distance classes. In our internal experiments (see \autoref{subsec:ablation}), we noticed that better results can be achieved if the number of distance classes is increased. In the following experiments, we set \emph{border\_pixels}=20 and \emph{bin\_size}=1. Thereby we obtain for each object a segmentation mask with $42$ distance classes (the number of output classes is $2\times\frac{border\_pixels}{bin\_size}+2$). With the edge as the center, we have $\frac{border\_pixels}{bin\_size}$ classes at each of the inner and outer borders, plus two additional classes for pixels which do not lie within the borders (inside and outside of the object), as shown in \autoref{fig:distance}. \subsection{Data Pre- and Post-Processing} In line with the previous work in multiple-object video segmentation, we follow a training pipeline in which every object is tracked independently and, at the end, the binary masks from different objects are merged into a single mask. For pixels with overlapping predictions, the label from the object with the highest probability is taken into account, as sketched below. For data loading during the training phase, each batch consists of a single object from a different sequence. We noticed that processing multiple objects of the same sequence degrades the performance. The images and the masks are resized to $256\times448$, as suggested in \cite{xu2018youtub}. For data augmentation we use random horizontal flipping and affine transformations.
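A minimal sketch (ours) of the merging step just described, assuming per-object sigmoid probability maps and the conventional $0.5$ foreground threshold:

\begin{verbatim}
import numpy as np

def merge_object_masks(prob_maps, bg_thresh=0.5):
    """Merge per-object foreground probabilities into one label map.

    prob_maps: (K, H, W) array, one sigmoid output per tracked object.
    Returns an (H, W) int map: 0 = background, k = object k (1-based).
    Overlapping predictions resolve to the object with highest probability.
    """
    K, H, W = prob_maps.shape
    # A pixel counts as foreground for some object only if that object's
    # probability exceeds bg_thresh; stacking a constant background plane
    # implements this threshold implicitly via the argmax.
    stacked = np.concatenate([np.full((1, H, W), bg_thresh), prob_maps], axis=0)
    return np.argmax(stacked, axis=0)
\end{verbatim}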
For the results provided in \autoref{seq:results}, we have not used any refinement step (e.g., CRF \cite{krahenbuhl2011efficient}) or inference-time augmentation. Moreover, we note that pre-training on image segmentation datasets can greatly improve the results due to the variety of object categories present in these datasets. However, in this work we have solely relied on pre-trained weights from ImageNet \cite{krizhevsky2012imagenet}.

\section{Experiments and Results} \label{seq:results} In this section a comparison with the state-of-the-art methods is provided in \autoref{tab:SOTA}. Additionally, we perform an ablation study in \autoref{tab:ablation} to examine the impact of skip-memory and the multi-task loss in our approach. We evaluate our method on the YoutubeVOS dataset \cite{xu2018youtub}, which is currently the largest dataset for video object segmentation. We use the standard evaluation metrics \cite{perazzi2016benchmark}, reporting \textit{Region Similarity} and \textit{Contour Accuracy} ($J \& F$). $J$ corresponds to the average intersection over union between the predicted segmentation masks and the ground-truth, and $F$ is defined as $F = \frac{2 \cdot precision \cdot recall}{precision+recall}$, computed over the boundary pixels after applying sufficient dilation to the object edges. For overall comparability, we use the \textit{overall} metric of the dataset \cite{xu2018youtub}, which refers to the average of the $J \& F$ scores. \subsection{Comparison to state-of-the-art approaches} In \autoref{tab:SOTA}, we provide a comparison to state-of-the-art methods with and without online training. As mentioned in \autoref{seq:related}, online training is the process of further training at test time by applying heavy data augmentation to the first mask in order to generate more data. This phase greatly improves the performance, at the expense of slowing down the inference phase. As can be seen in \autoref{tab:SOTA}, the scores are measured for the two categories of seen and unseen objects. This is a difference between YoutubeVOS \cite{xu2018youtub} and other datasets: its validation set contains new object categories. Specifically, the validation set in the YoutubeVOS dataset includes $474$ videos with 65 seen and 26 unseen categories. The score for unseen categories serves as a measure of the generalization of different models. As expected, the unseen object categories achieve a higher score when using online training (since the object is already seen by the network during the online training). However, despite not using online training (and therefore also having lower computational demands at test time), S2S and S2S++ achieve a higher overall performance. It is worth mentioning that both the $F_{seen}$ and $F_{unseen}$ scores improve by more than 4 percentage points in our approach. \autoref{fig:planes} illustrates a qualitative comparison between our results and the ones from the S2S method. We provide additional examples in \autoref{fig:examples}. \subsection{Ablation Study} \label{subsec:ablation} Since the S2S method is the base of our work and the source code is not available, we provide a comparison between our implementation of S2S and S2S++ in \autoref{tab:ablation}, when adding each component. As can be seen from the results, the best performance is achieved when using two skip-memory modules and the multi-task loss. We then take this model and experiment with different hyper-parameters for the multi-task loss, as shown in \autoref{tab:hyper}.
The results show that a higher number of border classes, which is closer to regression, yields a higher overall score. It is worth mentioning that the distance loss ($L_{dist}$) has less impact for small objects, especially if the diameter of the object is below the border size (in this case no extra distance classes will be added). Hence, we suspect the improvement in segmenting small objects (shown in \autoref{fig:planes} and \autoref{fig:examples}) is mainly due to the use of skip-memory connections.

\begin{figure*} \centering \includegraphics[width=\textwidth]{comparison3.png} \caption{Additional examples for a qualitative comparison between the S2S method in the first row and ours in the second row. In the top part, the yellow frames were zoomed in for better visibility. We can observe a better capacity for tracking small objects.} \label{fig:examples} \end{figure*}

\section{Conclusion} In this work we observed that the S2S method often fails when segmenting small objects. We build on top of this approach and propose using skip-memory connections for utilizing multi-scale spatio-temporal information of the video data. Moreover, we incorporate a distance-based multi-task loss to improve the predicted object masks for video object segmentation. In our experiments, we demonstrate that this approach outperforms state-of-the-art methods on the \textit{YoutubeVOS} dataset \cite{xu2018youtub}. Our extensions to the S2S model require minimal changes to the architecture and greatly improve the contour accuracy score (\textit{F}) and the overall metric. One of the limitations of the current model is a performance drop for longer sequences, especially in the presence of multiple objects in the scene. In the future, we would like to study this aspect and investigate the effectiveness of using attention as a potential remedy. Furthermore, we would like to study the multi-task loss in more depth. One interesting direction is to learn separate task weights for the segmentation and distance prediction tasks, as in \cite{bischke2019multi}, rather than using fixed task weights as in our work. In this context, we would also like to examine the usage of a regression task rather than a classification task for predicting the distance to the object border.

\section*{Acknowledgments} This work was supported by the TU Kaiserslautern CS PhD scholarship program, the BMBF project DeFuseNN (Grant 01IW17002) and the NVIDIA AI Lab (NVAIL) program. Further, we thank all members of the Deep Learning Competence Center at the DFKI for their feedback and support. \bibliographystyle{plain}
\section{Introduction}\label{section-intro} In static data structure problems, a database is preprocessed to form a table according to a certain encoding scheme, and upon each query to the database, an algorithm (decision tree) answers the query by adaptively probing the table cells. The complexity of this process is captured by the cell-probe model for static data structures. Solutions in this model are called cell-probing schemes. The cell-probe model plays a central role in studying data structure lower bounds. The existing cell-probe lower bounds for static data structure problems can be classified into the following three categories, according to the techniques they use and the highest possible lower bounds supported by these techniques: \begin{itemize} \item Lower bounds implied by asymmetric communication complexity: Classic techniques introduced in the seminal work of Miltersen~\textit{et al.}~\cite{miltersen1998data} see a cell-probing scheme as a communication protocol between the query algorithm and the table, and the cell-probe lower bounds are implied by the asymmetric communication complexity lower bounds, which are proved by the \emph{richness lemma} or round eliminations. In the usual setting where both query and data items are points from a $d$-dimensional space, the highest time lower bound that can be proved in this way is $t=\Omega\left(\frac{d}{\log s}\right)$ with a table of $s$ cells. This bound is a barrier for the technique, because a matching upper bound can always be achieved by communication protocols. \item Lower bounds proved by self-reduction using direct-sum properties: The seminal works of P\v{a}tra\c{s}cu and Thorup~\cite{patrascu06pred,patrascu2010higher} introduce a very smart idea of many-to-one self-reductions, using which, and by exploiting the direct-sum nature of problems, higher lower bounds can be proved for near-linear space. The highest lower bound that can be proved in this way is $t=\Omega\left({d}/{\log \frac{sw}{n}}\right)$ with a table of $s$ cells each containing $w$ bits. Such lower bounds grow differently with near-linear space and polynomial space, which is indistinguishable in the communication model. \item Higher lower bounds for linear space: A recent breakthrough of Larsen~\cite{larsen2012higher} uses a technique refined from the cell sampling technique of Panigrahy \textit{et al.}~\cite{panigrahy2008geometric,panigrahy2010lower} to prove an even higher lower bound for the polynomial evaluation problem. This lower bound behaves as $t=\Omega(d)$ when the space is strictly linear. This separates for the first time the cell-probe complexity with linear space from that with near-linear space, and also achieves the highest cell-probe lower bound ever known for any data structure problem. \end{itemize} In this paper, we consider an even stronger model: \emph{certificates} in static data structures. A query to a database is said to have a certificate of size $t$ if the answer to the query can be uniquely identified by the contents of $t$ cells in the table. This very natural notion represents nondeterministic computation in the cell-probe model and is certainly a lower bound to the complexity of deterministic cell-probing schemes. This nondeterministic model has been explicitly considered before in a previous work~\cite{yin2010cell} of one of the authors of the current paper.
Surprisingly, in spite of the extra power seemingly brought by nondeterminism, the highest cell-probe lower bound to date is in fact a certificate lower bound~\cite{larsen2012higher}. Indeed, we conjecture that for typical data structure problems, especially the hard ones, \emph{the complexity of certifying the answer should dominate that of computing the answer}.\footnote{Interestingly, the only known exception to this conjecture is the predecessor search problem, whose cell-probe complexity is a mild super-constant while the queries can be easily certified with a constant number of cells in a sorted table.} This belief has been partially justified in~\cite{yin2010cell} by showing that a random static data structure problem is hard nondeterministically. In this paper, we further support this conjecture by showing that several mainstream techniques for cell-probe lower bounds in fact imply as good or even higher certificate lower bounds. \subsection{Our contributions} We make the following contributions: \begin{enumerate} \item We prove a richness lemma for certificates in data structures, which improves the classic richness lemma for asymmetric communication complexity of Miltersen~\textit{et al.}~\cite{miltersen1998data} in two ways: (1) when applied to prove data structure lower bounds, our richness lemma implies lower bounds for a stronger \emph{nondeterministic} model; and (2) our richness lemma achieves better parameters than the classic richness lemma and may imply higher lower bounds. \item We give a scheme for proving certificate lower bounds using a direct-sum based self-reduction similar to that of P\v{a}tra\c{s}cu and Thorup~\cite{patrascu2010higher}. The certificate lower bounds obtained from our scheme are at least as good as before when the space is near-linear or polynomial. And for strictly linear space, our technique may support superior lower bounds, which was impossible for the direct-sum based techniques before. \item By applying these techniques, adopting the existing reductions, and modifying the reductions in the communication model to be model-independent, we prove certificate lower bounds for a variety of static data structure problems, listed in Table~\ref{table-results}. All these certificate lower bounds are at least as good as the highest known cell-probe lower bounds for the respective problems. And for approximate near neighbor (ANN), our $t=\Omega\left(d/\log\frac{sw}{nd}\right)$ lower bound improves the state of the art. When the space $sw=O(nd)$ is strictly linear, our lower bound for ANN becomes $t=\Omega(d)$, which, along with the recent breakthrough for polynomial evaluation~\cite{larsen2012higher}, are the only two $t=\Omega(d)$ lower bounds ever proved for any problem in the cell-probe model.
\end{enumerate} \begin{table}[tbt] \begin{tabular}{|c|c|c|} \hline problem & \tabincell{c}{certificate lower bound\\ proved here} & \tabincell{c}{highest known\\ cell-probe lower bound}\\ \hline bit-vector retrieval & $t=\Omega\left(\frac{m\log n}{\log s}\right)$& not known\\ \hline lopsided set disjointness (LSD) & $t=\Omega\left(\frac{m\log n}{\log s}\right)$ & $t=\Omega\left(\frac{m\log n}{\log s}\right)$~\cite{miltersen1998data,patrascu06eps2n,patrascu11structures}\\ \hline \tabincell{c}{ approximate near neighbor (ANN) \\ in Hamming space} & $t=\Omega\left(d/\log\frac{sw}{nd}\right)^\diamond$ & $t=\Omega\left(d/\log\frac{sw}{n}\right)^{\star}$~\cite{patrascu2010higher,panigrahy2008geometric} \\ \hline partial match (PM) & $t=\Omega\left(d/\log\frac{sw}{n}\right)^\star$ & $t=\Omega\left(d/\log\frac{sw}{n}\right)^{\star}$~\cite{patrascu2010higher,panigrahy2008geometric} \\ \hline 3-ANN in $\ell_\infty$ & $t=\Omega\left(d/\log\frac{sw}{n}\right)^\star$ & $t=\Omega\left(d/\log\frac{sw}{n}\right)^{\star}$~\cite{patrascu2010higher} \\ \hline \tabincell{c}{ reachability oracle \\ 2D stabbing \\ 4D range reporting } & $t=\Omega\left(\log n/\log\frac{sw}{n}\right)^\star$ & $t=\Omega\left(\log n/\log\frac{sw}{n}\right)^{\star}$~\cite{patrascu11structures} \\ \hline 2D range counting & $t=\Omega\left(\log n/\log\frac{sw}{n}\right)^\star$ & $t=\Omega\left(\log n/\log\frac{sw}{n}\right)^{\star}$~\cite{patrascu2007lower,patrascu11structures}\\ \hline approximate distance oracle & $t=\Omega\left(\frac{\log n}{\alpha \log(s\log n/n)}\right)^\star$ & $t=\Omega\left(\frac{\log n}{\alpha \log(s\log n/n)}\right)^{\star}$~\cite{sommer2009distance}\\ \hline \end{tabular} \begin{tabular}{rl} $\star$: & lower bound which grows differently with near-linear and polynomial space;\\ $\diamond$: & lower bound which grows differently with linear, near-linear, and polynomial space. \end{tabular} \caption{Certificate lower bounds proved in this paper.}\label{table-results} \end{table} \subsection{Related work} The richness lemma, along with the round elimination lemma, for asymmetric communication complexity was introduced in~\cite{miltersen1998data}. The richness lemma was later widely used, for example in~\cite{borodin1999lower,barkol2000tighter,jayram2003cell,liu2004strong}, to prove lower bounds for high-dimensional geometric problems, e.g.~nearest neighbor search. In~\cite{patrascu06eps2n,patrascu11structures}, a generalized version of the richness lemma was proved to imply lower bounds for (Monte Carlo) randomized data structures. A direct-sum richness theorem was first proved in the conference version of~\cite{patrascu2010higher}. Similar but less involved many-to-one reductions were used in~\cite{patrascu11structures} and~\cite{sommer2009distance} for proving lower bounds for certain graph oracles. The idea of cell sampling was implicitly used in~\cite{patrascu2007lower} and independently in~\cite{panigrahy2008geometric}. This novel technique was later fully developed in~\cite{panigrahy2010lower} for high-dimensional geometric problems and in~\cite{larsen2012higher} for polynomial evaluation. The lower bound in~\cite{larsen2012higher} actually holds for nondeterministic cell-probes, i.e.~certificates. The nondeterministic cell-probe complexity was studied for dynamic data structure problems in~\cite{husfeldt1998hardness} and for static data structure problems in~\cite{yin2010cell}.
\section{Certificates in data structures} A data structure problem is a function $f:X\times Y\rightarrow Z$ with two domains $X$ and $Y$. We call each $x\in X$ a \concept{query} and each $y\in Y$ a \concept{database}, and $f(x,y)\in Z$ specifies the result of query $x$ on database $y$. A code $T:Y\to\Sigma^s$ with an alphabet $\Sigma=\{0,1\}^w$ transforms each database $y\in Y$ to a \concept{table} $T_y=T(y)$ of $s$ \concept{cells}, each containing $w$ bits. We use $[s]=\{1,2,\ldots,s\}$ to denote the set of indices of cells, and for each $i\in[s]$, we use $T_y(i)$ to denote the content of the $i$-th cell of table $T_y$. A data structure problem is said to have \concept{$(s,w,t)$-certificates} if any database can be stored in a table of $s$ cells, each containing $w$ bits, so that the result of each query can be uniquely determined by the contents of at most $t$ cells. Formally, we have the following definition. \begin{definition}\label{definition-certificate} A data structure problem $f:X\times Y\rightarrow Z$ is said to have \concept{$(s,w,t)$-certificates}, if there exists a code $T:Y\rightarrow\Sigma^s$ with an alphabet $\Sigma=\{0,1\}^w$, such that for any query $x\in X$ and any database $y\in Y$, there exists a subset $P\subseteq[s]$ of cells with $|P|=t$, such that for any database $y'\in Y$, we have $f(x,y')=f(x,y)$ if $T_{y'}(i)=T_y(i)$ for all $i\in P$. \end{definition} Because certificates represent nondeterministic computation in data structures, it is obvious that they are at least as powerful as cell-probing schemes. \begin{proposition}\label{lemma-certificate-vs-probe} For any data structure problem $f$, if there is a cell-probing scheme storing every database in $s$ cells, each containing $w$ bits, and answering every query within $t$ cell-probes, then $f$ has $(s,w,t)$-certificates. \end{proposition} \newcommand{\CertificateFormulation}{ Data structure certificates can be equivalently formulated as proof systems, as well as certificates in decision trees of partial functions. \paragraph*{As proof systems.} In a previous work~\cite{yin2010cell}, an equivalent formulation of data structure certificates as proof systems is used. A data structure problem $f:X\times Y\rightarrow Z$ has $(s,w,t)$-certificates if and only if there exist a code $T:Y\rightarrow\Sigma^s$ with an alphabet $\Sigma=\{0,1\}^w$ and a verifier $V:\{0,1\}^*\to Z\cup\{\bot\}$, where $\bot$ is a special symbol not in $Z$ indicating the failure of verification, so that for any query $x\in X$ and any database $y\in Y$, the following are satisfied: \begin{itemize} \item Completeness: $\exists P\subseteq[s]$ with $|P|=t$ such that $V(x,\langle i,T_y(i)\rangle_{i\in P})=f(x,y)$; \item Soundness: $\forall P'\subseteq[s]$ with $|P'|=t$, $V(x,\langle i,T_y(i)\rangle_{i\in P'})\in\{f(x,y),\bot\}$; \end{itemize} where $\langle i,T_y(i)\rangle_{i\in P}$ denotes the sequence of pairs $\langle i,T_y(i)\rangle$ for all $i\in P$. \paragraph*{As certificates in decision trees.} The certificate is a well-known notion in studies of decision tree complexity (see~\cite{buhrman2002complexity} for a survey). A certificate in a Boolean function $h:\{0,1\}^n\to \{0,1\}$ for an input $x\in\{0,1\}^n$ is a subset $\{i_1,i_2,\ldots,i_t\}\subseteq[n]$ of $t$ bit positions in $x$ such that for every $x'\in\{0,1\}^n$ satisfying $x'(i_j)=x(i_j)$ for all $1\le j\le t$, it holds that $h(x)=h(x')$. And the certificate complexity of $h$, denoted by $C(h)$, is the minimum number of bits in a certificate in the worst case over inputs $x$.
The certificates and the certificate complexity $C(h)$ can be naturally generalized to partial functions $h:\Sigma^s\to Z$ with non-Boolean domain $\Sigma$ and range $Z$. Given a data structure problem $f:X\times Y\rightarrow Z$, and a code $T:Y\rightarrow\Sigma^s$ with an alphabet $\Sigma=\{0,1\}^w$, for each query $x\in X$, the function $f$ can be naturally transformed into a partial function $f^T_x:\Sigma^s\to Z$ so that $f^T_x(T_y)=f(x,y)$ for every database $y\in Y$ and $f_x^T$ is not defined elsewhere. It is easy to verify that a data structure problem $f:X\times Y\rightarrow Z$ has $(s,w,t)$-certificates if and only if there exists a code $T:Y\rightarrow\Sigma^s$ with an alphabet $\Sigma=\{0,1\}^w$ such that $\max_{x\in X}C(f^T_x)\le t$, where $C(f^T_x)$ is the certificate complexity of the partial function $f^T_x:\Sigma^s\to Z$. } \ifabs{ In Appendix~\ref{appendix-certificate-formulation}, we state the equivalent formulations of data structure certificates as proof systems and certificates in decision trees of partial functions. }{ \CertificateFormulation } \section{The richness lemma}\label{section-characterization} From now on, we focus on decision problems, where the output is either 0 or 1. A data structure problem $f:X\times Y\rightarrow\{0,1\}$ can be naturally treated as an $|X|\times |Y|$ matrix whose rows are indexed by queries $x\in X$ and whose columns are indexed by databases $y\in Y$. The entry at the $x$-th row and $y$-th column is $f(x,y)$. For $\xi\in\{0,1\}$, we say $f$ has a \concept{monochromatic $\xi$-rectangle} of size $k\times\ell$ if there is a combinatorial rectangle $A\times B$ with $A\subseteq X, B\subseteq Y, |A|=k$ and $|B|=\ell$, such that $f(x,y)=\xi$ for all $(x,y)\in A\times B$. A matrix $f$ is said to be \concept{$(u,v)$-rich} if at least $v$ columns contain at least $u$ 1-entries. The following richness lemma for cell-probing schemes is introduced in \cite{miltersen1998data}. \begin{lemma}[Richness Lemma \cite{miltersen1998data}]\label{lemma-richness} Let $f$ be a $(u,v)$-rich problem. If $f$ has an $(s,w,t)$-cell-probing scheme, then $f$ contains a monochromatic 1-rectangle of size $\frac{u}{2^{t\log s}}\times \frac{v}{2^{wt+t\log s}}$. \end{lemma} In \cite{miltersen1998data}, the richness lemma is proved for asymmetric communication protocols. A communication protocol between two parties Alice and Bob is called an $[A,B]$-protocol if Alice sends Bob at most $A$ bits and Bob sends Alice at most $B$ bits in total in the worst case. The richness lemma states that the existence of an $[A,B]$-protocol for a $(u,v)$-rich problem $f$ implies a submatrix of dimension $\frac{u}{2^{A}}\times \frac{v}{2^{A+B}}$ containing only 1-entries. An $(s,w,t)$-cell-probing scheme implies an $[A,B]$-protocol with $A=t\log s$ and $B=wt$, so the above richness lemma for cell-probing schemes follows. \subsection{Richness lemma for certificates} We prove a richness result for data structure certificates, with an even better dependence on the parameters. \begin{lemma}[Richness Lemma for data structure certificates]\label{lemma-richness-cert} Let $f$ be a $(u,v)$-rich problem. If $f$ has $(s,w,t)$-certificates, then $f$ contains a monochromatic 1-rectangle of size $\frac{u}{\binom{s}{t}}\times \frac{v}{\binom{s}{t}2^{wt}}$. \end{lemma} \noindent\textbf{Remark.} Note that we always have $\log {s\choose t}={t\log\frac{s}{t}+O(t)}\le {t\log s}$.
The bound in Lemma~\ref{lemma-richness-cert} is at least as good as the bound in the classic richness lemma, even though now it is proved for nondeterministic computation. When $s$ and $t$ are close to each other, the bound in Lemma~\ref{lemma-richness-cert} is substantially better than that of the classic richness lemma. Later, in Section~\ref{section-direct-sum-property}, this extra gain is used in the direct-sum reductions introduced in~\cite{patrascu2010higher} to achieve better time lower bounds for linear or near-linear space, which match or improve the state of the art. It is quite shocking to see all this achieved through a very basic reduction to the 1-probe case, to be introduced later. \paragraph{} The classic richness lemma for asymmetric communication protocols is proved by a halving argument. Due to the determinism of communication protocols (and cell-probing schemes), the combinatorial rectangles obtained from halving the universe are \emph{disjoint}. This disjointness no longer holds for the rectangles obtained from certificates, because of nondeterminism. We resolve this issue by exploiting the combinatorial structure of the rectangles obtained from data structure certificates. The following preparation lemma is a generalization of the averaging principle. \begin{lemma}\label{lemma-bad-data} Let $\family{P}_1,\family{P}_2,\ldots,\family{P}_r\subset 2^{V}$ be partitions of $V$ satisfying $|\family{P}_i|\le k$ for every $1\le i\le r$. There must exist a $y\in V$ such that $|\family{P}_i(y)|\ge\frac{|V|}{rk}$ for all $1\le i\le r$, where $\family{P}_i(y)$ denotes the partition block $B\in \family{P}_i$ containing $y$. \end{lemma} \begin{proof} The lemma is proved by the probabilistic method. Let $y$ be uniformly chosen from $V$. Fix an arbitrary order of partition blocks for each partition $\family{P}_i$. Let $w_{ij}$ be the cardinality of the $j$-th block in $\family{P}_i$. Obviously the probability of $\family{P}_i(y)$ being the $j$-th block in $\family{P}_i$ is $\frac{w_{ij}}{|V|}$. By the union bound, the probability that $|\family{P}_i(y)|<w$ is bounded by $\sum_{j:w_{ij}<w}\frac{w_{ij}}{|V|}<|\{j\,:\,w_{ij}<w\}|\frac{w}{|V|}$. Since $|\family{P}_i|\le k$, for every $i$ there are at most $k$ many such $j$ satisfying $w_{ij}<w$, thus $\Pr\left[|\family{P}_i(y)|<\frac{|V|}{rk}\right]<k\cdot\frac{|V|/rk}{|V|}=\frac{1}{r}$. Applying the union bound again for all $\family{P}_i$, we have $\Pr\left[\exists 1\le i\le r, |\family{P}_i(y)|<\frac{|V|}{rk}\right]<1$, which means there exists a $y\in V$ such that $|\family{P}_i(y)|\ge\frac{|V|}{rk}$ for all $1\le i\le r$. \end{proof} We first prove the richness lemma for the 1-probe case. \begin{lemma}\label{lemma-richness-one-cell} Let $f$ be a $(u,v)$-rich problem. If $f$ has $(s,w,1)$-certificates, then $f$ contains a monochromatic 1-rectangle of size $\frac{u}{s}\times \frac{v}{s\cdot2^{w}}$. \end{lemma} \begin{proof} Let $T: Y\to\Sigma^s$, where $\Sigma=\{0,1\}^w$, be the code in the $(s,w,1)$-certificates for $f$. Let $V\subseteq Y$ denote the set of $v$ columns of $f$ that each contain at least $u$ 1-entries. For each cell $1\le i\le s$, an equivalence relation $\sim_i$ on databases in $V$ can be naturally defined as follows: for any $y,y'\in V$, $y\sim_i y'$ if $T_y(i)=T_{y'}(i)$, that is, if databases $y$ and $y'$ look the same in the $i$-th cell. Let $\family{P}_i$ denote the partition induced by the equivalence relation $\sim_i$. Each partition $\family{P}_i$ classifies the databases in $V$ according to the content of the $i$-th cell.
Obviously $|\family{P}_i|\le 2^w$, because the content of a cell can have at most $|\Sigma|=2^w$ possibilities, and we also have $\family{P}_i(y)=\{y'\in V\mid T_{y'}(i)=T_{y}(i)\}$ being the set of databases indistinguishable from $y$ by looking at the $i$-th cell, where $\family{P}_i(y)$ denotes the partition block $B\in \family{P}_i$ containing $y$. By Lemma~\ref{lemma-bad-data}, there always exists a bad database $y\in V$ such that $|\family{P}_i(y)|\ge\frac{|V|}{s\cdot2^{w}}=\frac{v}{s\cdot2^{w}}$ for all $1\le i\le s$. For each database $y\in V$, let $X_{1}(y)=\{x\in X\mid f(x,y)=1\}$ denote the set of positive queries on database $y$, and for a subset $A\subseteq V$ of databases, let $X_1(A)=\bigcap_{y\in A}X_1(y)$ denote the set of queries which are positive on all databases in $A$. Note that $X_{1}(y)$ and $X_1(A)$ are the respective 1-preimages of the Boolean functions $f(\cdot,y)$ and $\bigwedge_{y\in A} f(\cdot,y)$. By definition, it is easy to see that $X_1(A)\times A$ is a monochromatic 1-rectangle for any $A\subseteq V$. \textbf{Claim:} For any $y\in V$, it holds that $X_1(y)=\bigcup_{1\le i\le s} X_1(\family{P}_i(y))$. It is easy to see that the direction $\bigcup_{1\le i\le s} X_1(\family{P}_i(y))\subseteq X_1(y)$ holds, because $X_1(A)\subseteq X_{1}(y)$ for any $A$ containing $y$ and clearly $y\in \family{P}_i(y)$. So we only need to prove the other direction. Since $f$ has $(s,w,1)$-certificates, for any positive query $x$ on database $y$ (i.e.~any $x\in X_1(y)$), there is a cell $i$ such that all databases $y'$ indistinguishable from $y$ by looking at the $i$-th cell (i.e.~all $y'\in\family{P}_i(y)$) answer the query $x$ positively (i.e.~$f(x,y')=f(x,y)=1$), which gives $x\in X_1(\family{P}_i(y))$ by the definition of $X_1(A)$. This proves the direction $X_1(y) \subseteq \bigcup_{i\in [s]} X_1(\family{P}_i(y))$. Consider the bad database $y\in V$ satisfying $|\family{P}_i(y)|\ge\frac{|V|}{s\cdot2^{w}}=\frac{v}{s\cdot2^{w}}$ for all $1\le i\le s$. Due to the above claim, we have \[ u\le \left|X_1(y)\right|=\left|\bigcup_{1\le i\le s} X_1(\family{P}_i(y))\right|\le\sum_{1\le i\le s}\left|X_1(\family{P}_i(y))\right|. \] By the averaging principle, there exists a cell $i$ such that $\left|X_1(\family{P}_{i}(y))\right|\ge\frac{u}{s}$. This gives us a monochromatic 1-rectangle $X_1(\family{P}_{i}(y))\times \family{P}_{i}(y)$ of size at least $\frac{u}{s}\times\frac{v}{s\cdot 2^{w}}$. \end{proof} The richness lemma for the general case can be derived from the 1-probe case by a one-line reduction. \begin{lemma}\label{lemma-certificate-reduction} If a data structure problem $f$ has $(s,w,t)$-certificates, then $f$ has $\left(\binom{s}{t},w\cdot t,1\right)$-certificates. \end{lemma} \begin{proof} Store every $t$-combination of cells in a new table of $\binom{s}{t}$ cells, each of $w\cdot t$ bits. \end{proof} \newcommand{\SectionRichApp}{ We apply our richness lemma to two fundamental data structure problems: the bit-vector retrieval problem and the lopsided set disjointness (LSD) problem. We prove certificate lower bounds matching the cell-probing scheme upper bounds, which shows that for these fundamental data structure problems, answering queries is as hard as certifying them. \paragraph{Bit-vector retrieval.} We consider the following fundamental problem: a database $y$ is a vector of $n$ bits, a query $x$ specifies $m$ indices, and the answer to the query returns the contents of these queried bits in the bit vector $y$.
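To make the notion of $(s,w,t)$-certificates concrete for this problem, here is a minimal Python sketch (ours, not from the paper) of the naive scheme discussed next: pack the $n$ bits into $s=\lceil n/w\rceil$ cells of $w$ bits each, so a query on $m$ positions is certified by the at most $m$ cells containing them.

\begin{verbatim}
def encode(y_bits, w):
    """Pack a list of n bits into ceil(n/w) cells of w bits each."""
    cells = []
    for i in range(0, len(y_bits), w):
        word = 0
        for j, b in enumerate(y_bits[i:i + w]):
            word |= b << j
        cells.append(word)
    return cells

def certificate(positions, w):
    """Indices of the cells that determine the answer: |P| <= m, so t <= m."""
    return sorted({p // w for p in positions})

def answer(positions, cells, w):
    """Read the queried bits from the certified cells."""
    return [(cells[p // w] >> (p % w)) & 1 for p in positions]

# Example: n = 8, w = 4, query positions {1, 6}.
cells = encode([1, 0, 1, 1, 0, 0, 1, 0], w=4)
assert certificate([1, 6], w=4) == [0, 1]
assert answer([1, 6], cells, w=4) == [0, 1]
\end{verbatim}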
\newcommand{\SectionRichApp}{ We apply our richness lemma to two fundamental data structure problems: the bit-vector retrieval problem, and the lopsided set disjointness (LSD) problem. We prove certificate lower bounds matching the cell-probing scheme upper bounds, which shows that for these fundamental data structure problems, answering queries is as hard as certifying them. \paragraph{Bit-vector retrieval.} We consider the following fundamental problem: a database $y$ is a vector of $n$ bits, a query $x$ specifies $m$ indices, and the answer to the query returns the contents of these queried bits in the bit vector $y$. Although it is plainly fundamental to databases and information retrieval, this problem has not been well studied before (for a reason which we will see next). We call this problem the \concept{bit-vector retrieval} problem. A naive solution is to explicitly store the bit-vector and access the queried bits directly, which gives a bit-probing scheme using $n$ bits and answering each query with $m$ bits. A natural and important question is: can we substantially reduce the time cost by using a more sophisticated data structure, with a tolerable overhead on space usage and with probing of cells instead of bits? We shall see this is impossible in any realistic setting, by showing a certificate lower bound. We study a decision version of the bit-vector retrieval problem, namely the \concept{bit-vector testing} problem. Let $Y=\{0,1\}^n$ and $X=[n]^m\times\{0,1\}^m$. Each database $y\in Y$ is still an $n$-bit vector, and each query $x=(u,v)\in X$ consists of two parts: a tuple $u\in[n]^m$ of $m$ positions and a prediction $v\in\{0,1\}^m$ of the contents of these positions. For $y\in\{0,1\}^n$ and $u\in[n]^m$, we use $y(u)$ to denote the $m$-tuple $(y(u_1),y(u_2),\ldots, y(u_m))$. The bit-vector testing problem $f:X\times Y\rightarrow\{0,1\}$ is then defined as follows: for any $x=(u,v)\in X$ and any $y\in Y$, $f(x,y)$ indicates whether $y(u)=v$. \begin{proposition}\label{proposition-bitvector-rectangle} The bit-vector testing problem $f$ is $(n^m,2^n)$-rich and every $M\times N$ monochromatic 1-rectangle in $f$ must have $M\le (n-\log N)^m$. \end{proposition} \begin{proof} We use the notation in the proof of Lemma~\ref{lemma-richness-one-cell}: we use $X_1(y)$ to denote the set of positive queries on database $y$ and $X_1(A)$ to denote the set of queries positive on all databases in $A\subseteq Y$. Note that $X_1(y)$ contains all the rows at which column $y$ has 1-entries. It holds that $|Y|=2^n$ and for every $y\in Y$, we have $|X_1(y)|=|\{(u,v)\in [n]^m\times\{0,1\}^m\mid y(u)=v\}|=n^m$, thus $f$ is $(n^m, 2^n)$-rich. For any set $A\subseteq Y$, observe that $|X_1(A)|=|\{u\in[n]^m\mid \forall y, y'\in A, y(u)=y'(u)\}|$, i.e.~$|X_1(A)|$ is the number of $m$-tuples of indices over which all bit-vectors in $A$ are identical. Let $S\subseteq[n]$ be the largest set of positions such that for every $i\in S$, $y(i)$ is identical for all $y\in A$. It is easy to see that $|X_1(A)|=|S|^m$ and $|A|\le2^{n-|S|}$, therefore it holds that $|X_1(A)|\le(n-\log|A|)^m$. Note that $X_1(A)\times A$ is precisely the maximal 1-rectangle with the set of columns $A$. Letting $N=|A|$, this proves that every $M\times N$ 1-rectangle must have $M\le(n-\log N)^m$. \end{proof} \begin{theorem}\label{lower-bound-bitvector} If the bit-vector testing problem has $(s,w,t)$-certificates, then for any $0<\delta<1$, we have either $t\ge\frac{n^{1-\delta}}{w+\log s}$ or $t\ge\frac{\delta m\log n}{\log s}$. \end{theorem} \begin{proof} Due to Proposition~\ref{proposition-bitvector-rectangle}, the problem is $(n^m,2^n)$-rich, and hence by Lemma~\ref{lemma-richness-cert}, if it has $(s,w,t)$-certificates, then it contains a 1-rectangle of size $\frac{n^m}{{s\choose t}}\times2^{n-wt-\log {s\choose t}}$. As ${s\choose t}\leq s^t$, we have a 1-rectangle of size $\frac{n^m}{s^t}\times2^{n-wt-t\log s}$, which by Proposition~\ref{proposition-bitvector-rectangle} requires that $\frac{n^m}{s^t}\le (wt+t\log s)^m$. For any $0<\delta<1$, if $t<\frac{n^{1-\delta}}{w+\log s}$, then $wt+t\log s<n^{1-\delta}$, so $n^m\le s^t\cdot n^{(1-\delta)m}$, which gives $s^t\ge n^{\delta m}$ and thus $t\ge\frac{\delta m\log n}{\log s}$.
\end{proof} A standard setting for data structures is the lopsided case, where the query is significantly shorter than the database. For this case, the above theorem has the following corollary. \begin{corollary} Assuming $m=n^{o(1)}$, if the bit-vector testing problem has $(s,w,t)$-certificates for $w\le n^{1-\delta}$ where $\delta>0$ is an arbitrary constant, then $t=\Omega\left(\frac{m\log n}{\log s}\right)$. \end{corollary} With any polynomial space $s=n^{O(1)}$ and a generously relaxed cell size of $n^{1-\delta}$ bits, the above bound matches the naive solution of directly retrieving the $m$ bits, implying that the fundamental problem of retrieving part of a bit vector cannot be made any easier in a general setting, because queries are hard to certify. \paragraph{Lopsided set disjointness.} The set disjointness problem plays a central role in communication complexity and in the complexity of data structures. Assuming a data universe $[N]$, the input domains are $X={[N]\choose m}$ and $Y={[N]\choose n}$ where $m\le n<\frac{N}{2}$. For each query set $x\in X$ and data set $y\in Y$, the set disjointness problem $f(x,y)$ returns a bit indicating whether $x\cap y=\emptyset$. The following proposition is implicit in~\cite{miltersen1998data}. \begin{proposition}[Miltersen~\textit{et al.}~\cite{miltersen1998data}]\label{proposition-set-disjoint} The set disjointness problem $f$ is $\left({N-n\choose m},{N\choose n}\right)$-rich, and for every $n\le u\le N$, any monochromatic 1-rectangle in $f$ of size $M\times {u\choose n}$ must have $M\le {N-u\choose m}$. \end{proposition} \begin{proof} We use the notation in the proof of Lemma~\ref{lemma-richness-one-cell}: let $X_1(y)$ denote the set of positive queries on database $y$ and $X_1(A)$ denote the set of queries positive on all databases in $A\subseteq Y$. $X_1(y)$ contains all the rows at which column $y$ has 1-entries. It holds that $|Y|={N\choose n}$ and for every set $y\in Y$ with $|y|=n$, we have $|X_1(y)|=|\{x\mid x\subset [N], |x|=m, x\cap y=\emptyset\}|={N-n\choose m}$, thus $f$ is $\left({N-n\choose m}, {N\choose n}\right)$-rich. For any set $A\subseteq Y$ with $|A|={u\choose n}$, let $y'=\bigcup_{y\in A}y$. Since every $y\in A$ is an $n$-subset of $y'$, we have ${|y'|\choose n}\ge|A|={u\choose n}$, hence $|y'|\geq u$. For $X_1(A)=\{x\mid \forall y\in A,\ x\cap y=\emptyset \}$, we have $|X_1(A)|\leq {N-|y'|\choose m}\leq{N-u\choose m}$. The conclusion follows. \end{proof} Applying the above proposition and Lemma~\ref{lemma-richness-cert}, we have the following certificate lower bound. \begin{theorem}\label{theorem-set-disjoint} If the set disjointness problem has $(s,w,t)$-certificates, then for any $0<\delta<1$, we have either $t\ge\frac{n^{1-\delta}}{w+\log s}$ or $t\ge\frac{\delta m(\log n-o(1))}{\log s}$. \end{theorem} \begin{proof} Due to Proposition~\ref{proposition-set-disjoint}, the problem is $\left({N-n\choose m},{N\choose n}\right)$-rich, and hence by Lemma~\ref{lemma-richness-cert}, if it has $(s,w,t)$-certificates, then it contains a 1-rectangle of size $\frac{{N-n\choose m}}{{s\choose t}}\times\frac{{N\choose n}}{{s\choose t}2^{wt}}$. As ${s\choose t}\leq s^t$, we have a 1-rectangle of size ${N-n\choose m}/{2^{t\log s}}\times{N\choose n}/{2^{wt+t\log s}}$. Let $a=t\log s$ and $b=wt$. Let $M,u$ denote the respective parameters in Proposition~\ref{proposition-set-disjoint}, so $M={N-n\choose m}/{2^a}$ and ${u\choose n}={N\choose n}/{2^{a+b}}$. Let $k=(N-n)/2^{a/m}$. Since ${k\choose m}\leq{N-n\choose m}/2^{a}\leq{N-u\choose m}$ by Proposition~\ref{proposition-set-disjoint}, we have $k\leq N-u$, which leads to $u\leq N-k$.
Now we have ${N\choose n}/2^{a+b}={u \choose n}\leq {N-k \choose n}$ and therefore $2^{a+b}\geq{N\choose n}/{N-k\choose n}>(\frac{N}{N-k})^n>(1+k/N)^n=(1+(N-n)/2^{a/m}N)^n\geq(1+2^{-a/m-1})^n$. By taking logarithms, we have $a+b\geq n\log(1+2^{-a/m-1})>n\cdot 2^{-a/m-1}$. If $a+b< n^{1-\delta}$ for some $0<\delta<1$, then $n\cdot 2^{-a/m-1}<n^{1-\delta}$, i.e.~$2^{a/m+1}>n^{\delta}$, thus $a\geq \delta m (\log n-o(1))$. Replacing $a,b$ with $t\log s, wt$ respectively, we get the conclusion. \end{proof} This certificate lower bound matches the well-known cell-probe lower bound for set disjointness~\cite{miltersen1998data,patrascu11structures}. The most interesting case of the problem is the lopsided case where $m=n^{o(1)}$. A calculation gives us the following corollary. \begin{corollary} Assume $m=n^{o(1)}$ and $\alpha n\le N\le n^c$ for arbitrary constants $\alpha, c>1$. If the set disjointness problem has $(s,w,t)$-certificates for $w\le n^{1-\delta}$ where $\delta>0$ is an arbitrary constant, then $t=\Omega\left(\frac{m\log n}{\log s}\right)$. \end{corollary} } \ifabs{ In Appendix~\ref{appendix-richness-app}, we apply Lemma~\ref{lemma-richness-cert} to prove certificate lower bounds for the bit-vector retrieval problem and the lopsided set disjointness problem. } { \subsection{Applications}\label{section-certificate-lower-bounds} \SectionRichApp } \section{Direct-sum richness lemma}\label{section-direct-sum-property} In this section, we prove a richness lemma for certificates using the direct-sum property of data structure problems. Such a lemma was introduced in~\cite{patrascu2010higher} for cell-probing schemes, where it was used to prove some of the highest known cell-probe lower bounds for \emph{near-linear space}. Consider a vector of problems $\bar{f}=(f_1,\dots,f_k)$ where every $f_i:X\times Y\rightarrow \{0,1\}$ is defined on the same domain $X\times Y$. Let $\bigoplus^{k}\bar{f}:([k]\times X)\times Y^k\rightarrow\{0,1\}$ be a problem defined as follows: $\bigoplus^{k}\bar{f}((i,x),\bar{y})=f_i(x,y_i)$ for every $(i,x)\in[k]\times X$ and every $\bar{y}=(y_1,y_2,\ldots,y_k)\in Y^k$. In particular, for a problem $f$ we denote $\bigoplus^{k}f=\bigoplus^{k}\bar{f}$ where $\bar{f}$ is a tuple of $k$ copies of the problem $f$. \begin{lemma}[direct-sum richness lemma for certificates]\label{lemma-direct-sum-cert} Let $\bar{f}=(f_1,f_2,\dots,f_k)$ be a vector of problems such that for each $i=1,2,\ldots,k$, we have $f_i:X{\times}Y\rightarrow\{0,1\}$ and $f_i$ is $(\alpha|X|,\beta|Y|)$-rich. If the problem $\bigoplus^{k}\bar{f}$ has $(s,w,t)$-certificates for some $t\leq\frac{s}{k}$, then there exists a $1\le i\le k$ such that $f_i$ contains a monochromatic 1-rectangle of size $\frac{{\alpha}^{O(1)}|X|}{2^{O(t\log {\frac{s}{kt}})}}\times \frac{{\beta}^{O(1)}|Y|}{2^{O(wt+t\log {\frac{s}{kt}})}}$. \end{lemma} \paragraph{Remark 1.} The direct-sum richness lemma proved in~\cite{patrascu2010higher} is for asymmetric communication protocols as well as cell-probing schemes, and gives a rectangle size of $\frac{{\alpha}^{O(1)}|X|}{2^{O(t\log {\frac{s}{k}})}}\times \frac{{\beta}^{O(1)}|Y|}{2^{O(wt+t\log {\frac{s}{k}})}}$. Our direct-sum richness lemma has a better rectangle bound. This improvement may support stronger lower bounds which separate linear from near-linear space. \paragraph{Remark 2.} A key idea in applying this direct-sum based lower bound scheme is to exploit the extra power the model gains from solving $k$ problem instances in parallel.
In~\cite{patrascu2010higher}, this is achieved by seeing cell probes as communications between the query algorithm and the table: $t$-round adaptive cell probes for answering $k$ parallel queries can be expressed in $t\log {s\choose k}$ bits instead of the naive $kt\log s$ bits. For our direct-sum richness lemma for certificates, in contrast, we will see (in Lemma~\ref{lemma-direct-sum}) that unlike communications, the parallel simulation of certificates does not give us any extra gain; in our case the extra gain is instead provided by the improved bound in Lemma~\ref{lemma-richness-cert}, the richness lemma for certificates. Indeed, all our extra gains from ``parallelism'' are offered by the one-line reduction in Lemma~\ref{lemma-certificate-reduction}, which basically says that the certificates for $k$ instances of a problem can be expressed in $\log{s\choose kt}$ bits, even better than the $t\log {s\choose k}$-bit bound for communications. Giving up adaptivity is essential to this improvement on the power of parallelism, so that all $kt$ cells can be chosen at once, which gives the $\log{s\choose kt}$-bit bound: \emph{we are now not only parallel over instances, but also parallel over time}. \paragraph*{} The idea of the proof of Lemma~\ref{lemma-direct-sum-cert} can be summarized as follows: (1) reducing from the direct-product problem $\bigwedge^{k}\bar{f}$ to the problem $\bigoplus^{k}\bar{f}$, where the richness and monochromatic rectangles of $\bigwedge^{k}\bar{f}$ translate easily to those of the subproblems $f_i$; and (2) applying Lemma~\ref{lemma-richness-cert}, the richness lemma for certificates, to obtain large monochromatic rectangles for the direct-product problem. We first define a direct-product operation on vectors of problems. For $\bar{f}=(f_1,\dots,f_k)$ with $f_i:X\times Y\rightarrow \{0,1\}$ for every $1\le i\le k$, let $\bigwedge^{k}\bar{f}:X^k\times Y^k\rightarrow \{0,1\}$ be the direct-product problem defined as $\bigwedge^{k}\bar{f}(\bar{x},\bar{y})=\prod_i{f_i(x_i,y_i)}$ for every $\bar{x}=(x_1,\dots,x_k)$ and every $\bar{y}=(y_1,\dots,y_k)$. \begin{lemma}\label{lemma-direct-sum} For any $\bar{f}=(f_1,\dots,f_k)$, if $\bigoplus^{k}\bar{f}$ has $(s,w,t)$-certificates for some $t\leq\frac{s}{k}$, then $\bigwedge^{k}\bar{f}$ has $(s,w,kt)$-certificates. \end{lemma} \begin{proof} Suppose that $T:Y^k\to \Sigma^s$ with $\Sigma=\{0,1\}^w$ is the code used to encode databases to tables in the $(s,w,t)$-certificates of $\bigoplus^{k}\bar{f}$. For the problem $\bigwedge^{k}\bar{f}$, we use the same code $T$ to prepare the table. For each input $(\bar{x},\bar{y})$ of the problem $\bigwedge^{k}\bar{f}$ where $\bar{x}=(x_1,\dots,x_k)$ and $\bar{y}=(y_1,\dots,y_k)$, suppose that for each $1\le i\le k$, $P_i\subset[s]$ with $|P_i|=t$ is the set of $t$ cells in the table $T_{\bar{y}}$ that uniquely identifies the value of $\bigoplus^{k}\bar{f}((i,x_i),\bar{y})$; then let $P=P_1\cup P_2\cup\cdots\cup P_k$, so that $|P|\le kt$. It is easy to verify that the set $P$ of at most $kt$ cells in $T_{\bar{y}}$ uniquely identifies the value of $\bigwedge^{k}\bar{f}(\bar{x},\bar{y})=\bigwedge_{1\le i\le k}\left(\bigoplus^{k}\bar{f}((i,x_i),\bar{y})\right)$ because it contains all cells which can uniquely identify the value of $\bigoplus^{k}\bar{f}((i,x_i),\bar{y})$ for every $1\le i\le k$. Therefore, the problem $\bigwedge^{k}\bar{f}$ has $(s,w,kt)$-certificates. \end{proof}
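For concreteness, the two operators and the combination of per-query certificates used in the proof can be phrased as the following Python sketch (ours, purely illustrative):
\begin{verbatim}
def direct_sum(fs):
    """Oplus^k: query (i, x) is answered on the i-th database."""
    return lambda i, x, ybar: fs[i](x, ybar[i])

def direct_product(fs):
    """Wedge^k: the AND of the k coordinatewise answers."""
    return lambda xbar, ybar: all(f(x, y)
                                  for f, x, y in zip(fs, xbar, ybar))

def combined_certificate(cell_sets):
    """Union of the k per-query cell sets P_1, ..., P_k; |P| <= k*t."""
    P = set()
    for P_i in cell_sets:
        P |= set(P_i)
    return P
\end{verbatim}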
The following two lemmas are from~\cite{patrascu2010higher}. These lemmas give easy translations of richness and monochromatic rectangles between the direct-product problem $\bigwedge^{k}\bar{f}$ and the subproblems $f_i$. \begin{lemma}[P\v{a}tra\c{s}cu and Thorup \cite{patrascu2010higher}]\label{lemma-direct-sum-richness} If $\bar{f}=(f_1,f_2,\dots,f_k)$ has $f_i:X{\times}Y\rightarrow\{0,1\}$ and $f_i$ is $(\alpha|X|,\beta|Y|)$-rich for every $1\le i\le k$, then $\bigwedge^{k}\bar{f}$ is $((\alpha|X|)^k,(\beta|Y|)^k)$-rich. \end{lemma} \begin{lemma}[P\v{a}tra\c{s}cu and Thorup \cite{patrascu2010higher}]\label{lemma-direct-sum-rectangle} For any $\bar{f}=(f_1,\dots,f_k)$ with $f_i:X{\times}Y\rightarrow\{0,1\}$ for every $1\le i\le k$, if $\bigwedge^{k}\bar{f}$ contains a monochromatic 1-rectangle of size $(\alpha|X|)^k\times(\beta|Y|)^k$, then there exists a $1\le i \le k$ such that $f_i$ contains a monochromatic 1-rectangle of size $(\alpha)^3|X|\times(\beta)^3|Y|$. \end{lemma} The direct-sum richness lemma can be easily proved by combining the above lemmas with the richness lemma for certificates. \begin{proof}[Proof of Lemma~\ref{lemma-direct-sum-cert}] If $\bigoplus^{k}\bar{f}$ has $(s,w,t)$-certificates, then by Lemma~\ref{lemma-direct-sum}, the direct-product problem $\bigwedge^{k}\bar{f}$ has $(s,w,kt)$-certificates. Since every $f_i$ in $\bar{f}=(f_1,f_2,\ldots,f_k)$ is $(\alpha|X|,\beta|Y|)$-rich, by Lemma \ref{lemma-direct-sum-richness} we have that $\bigwedge^{k}\bar{f}$ is $((\alpha|X|)^k,(\beta|Y|)^k)$-rich. Applying Lemma~\ref{lemma-richness-cert}, the richness lemma for certificates, the problem $\bigwedge^{k}\bar{f}$ has a 1-rectangle of size $\frac{(\alpha|X|)^k}{\binom{s}{kt}}\times\frac{(\beta|Y|)^k}{\binom{s}{kt}2^{kwt}}$. Then due to Lemma~\ref{lemma-direct-sum-rectangle}, there is a problem $f_i$ that contains a monochromatic 1-rectangle of size $\frac{{\alpha}^{O(1)}|X|}{2^{O(t\log {\frac{s}{kt}})}}\times \frac{{\beta}^{O(1)}|Y|}{2^{O(wt+t\log {\frac{s}{kt}})}}$. \end{proof} \subsection{Applications}\label{section-direct-sum-application} We then apply the direct-sum richness lemma to prove lower bounds for two important high-dimensional problems: approximate near neighbor (ANN) in Hamming space and partial match (PM). \begin{itemize} \item For ANN in $d$-dimensional Hamming space, we prove a $t=\Omega(d/\log\frac{sw}{nd})$ lower bound for $(s,w,t)$-certificates. The highest known cell-probing scheme lower bound for the problem is $t=\Omega(d/\log\frac{sw}{n})$. In super-linear space, our certificate lower bound matches the highest known lower bound for cell-probing schemes; and for linear space, our lower bound becomes $t=\Omega(d)$, which gives a strict improvement, and also matches the highest cell-probe lower bound ever known for any problem (which has only been achieved for polynomial evaluation~\cite{larsen2012higher}). \item For $d$-dimensional PM, we prove a $t=\Omega(d/\log\frac{sw}{n})$ lower bound for $(s,w,t)$-certificates, which matches the highest known cell-probing scheme lower bound for the problem in~\cite{patrascu2010higher}. \end{itemize} \subsubsection{Approximate near neighbor (ANN)} The near neighbor problem $\mathrm{NN}_n^d$ in a $d$-dimensional metric space is defined as follows: a database $y$ contains $n$ points from a $d$-dimensional metric space; for a query point $x$ from the same space and a distance threshold $\lambda$, the problem asks whether there is a point in the database $y$ within distance $\lambda$ from $x$.
The approximate near neighbor problem $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ is defined similarly, except that upon a query $x$ to a database $y$, it answers ``yes'' if there is a point in the database $y$ within distance $\lambda$ from $x$, and ``no'' if all points in $y$ are $\gamma\lambda$-far away from $x$ (the answer may be arbitrary otherwise). We first prove a lower bound for $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ in Hamming space $X=\{0,1\}^d$, where for any two points $x,x'\in X$ the distance between them is given by the Hamming distance $h(x,x')$.
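Since ANN is a promise problem, the entries outside the promise are free to be set either way; a minimal Python sketch (ours, purely illustrative) of the Hamming-space predicate:
\begin{verbatim}
def hamming(x, z):
    return sum(a != b for a, b in zip(x, z))

def ann(x, y, lam, gamma):
    """Promise predicate: 1 if some point of the database y is
    lam-close to x, 0 if every point is (gamma*lam)-far."""
    d = min(hamming(x, z) for z in y)
    if d <= lam:
        return 1
    if d >= gamma * lam:
        return 0
    return None   # outside the promise: may be answered arbitrarily
\end{verbatim}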
The richness and monochromatic rectangles of $\mathrm{ANN}_n^{\lambda,\gamma,d}$ were analyzed in~\cite{liu2004strong}. \begin{Claim}[Claim 10 and 11 in~\cite{liu2004strong}]\label{claim-rectangle-ANN} There is a $\lambda\leq d$ such that $\mathrm{ANN}_n^{\lambda,\gamma,d}$ is $(2^{d-1},2^{nd})$-rich and $\mathrm{ANN}_n^{\lambda,\gamma,d}$ does not contain a 1-rectangle of size $2^{d-d/(169\gamma^2)}\times 2^{nd-nd/(32\gamma^2)}$. \end{Claim} A model-independent self-reduction of ANN was constructed in~\cite{patrascu2010higher}. \begin{Claim}[Theorem 6 in \cite{patrascu2010higher}]\label{claim-selfreduction-ANN} For $D=d/(1+5\gamma)\geq\log n$, $N<n$ and $k=n/N$, there exist two functions $\phi_X,\phi_Y$ such that $\phi_X$ (and $\phi_Y$) maps each query $(x,i)$ (and database $\bar{y}$) of $\bigoplus^k \mathrm{ANN}_N^{\lambda,\gamma,D}$ to a query $x'$ (and database $y'$) of $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ and it holds that $\bigoplus^k \mathrm{ANN}_N^{\lambda,\gamma,D}((x,i),\bar{y})= \mathrm{ANN}_{n}^{\lambda,\gamma,d}(x',y')$. \end{Claim} We then prove the following certificate lower bound for ANN. \begin{theorem}\label{theorem-ANN-v2} For $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ in $d$-dimensional Hamming space, assuming $d\geq(1+5\gamma)\log n$, there exists a $\lambda$ such that if $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ has $(s,w,t)$-certificates, then $t=\Omega\left({{\frac{d}{\gamma^3}}/{\log\frac{sw\gamma^3}{nd}}}\right)$. \end{theorem} \begin{proof} Due to the model-independent reduction from $\bigoplus^k \mathrm{ANN}_N^{\lambda,\gamma,D}$ to $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ of Claim~\ref{claim-selfreduction-ANN}, the existence of $(s,w,t)$-certificates for $\mathrm{ANN}_{n}^{\lambda,\gamma,d}$ implies the existence of $(s,w,t)$-certificates for $\bigoplus^k \mathrm{ANN}_N^{\lambda,\gamma,D}$. Note that for the problem $\mathrm{ANN}_N^{\lambda,\gamma,D}$, the size of the query domain is $|X|=2^D$, and the size of the data domain is $|Y|=2^{ND}$, so applying Claim~\ref{claim-rectangle-ANN}, the problem is $(|X|/2,|Y|)$-rich. Assuming that $t\le\frac{s}{k}$, by Lemma \ref{lemma-direct-sum-cert}, $\mathrm{ANN}_N^{\lambda,\gamma,D}$ contains a 1-rectangle of size $2^{D}/2^{O(t\log\frac{s}{kt})}\times 2^{ND}/2^{O(wt+t\log\frac{s}{kt})}$. Due to Claim~\ref{claim-rectangle-ANN}, and by a calculation, we have either $t=\Omega\left(\frac{D}{\gamma^2}/\log\frac{s}{kt}\right)$ or $t=\Omega\left(\frac{ND}{\gamma^2}/w\right)$. We then choose $N=w$. Note that such a choice of $N$ may violate the assumption $t\le\frac{s}{k}$ (that is, $N\ge \frac{tn}{s}$) only when it implies an even higher lower bound $t>\frac{sw}{n}$. With this choice of $N=w$, the bound $t=\Omega\left(\frac{D}{\gamma^2}/\log\frac{s}{kt}\right)$ is the smaller of the two branches. Substituting $D=d/(1+5\gamma)$ and $k=n/N$, we have $t=\Omega\left(\frac{d}{\gamma^3}/\log\frac{sN}{nt}\right)=\Omega\left(\frac{d}{\gamma^3}/\log\frac{sw}{nt}\right)$. Multiplying both sides by $\Delta=\frac{sw}{nd}$ gives us $\Delta\cdot\gamma^3=\Omega\left(\frac{\Delta d}{t}/\log\frac{\Delta d}{t}\right)$. Writing $\Delta'=\frac{\Delta d}{t}$, we have $\frac{\Delta '}{\log \Delta'}=O(\Delta \gamma^3)$. The function $f(x)=\frac{x}{\log x}$ is increasing for $x>1$, so we have $\Delta'=O(\Delta\gamma^3\log(\Delta \gamma^3))$, which gives us the lower bound $t=\Omega\left({{\frac{d}{\gamma^3}}/{\log\frac{sw\gamma^3}{nd}}}\right)$. \end{proof} When the points are from the Hamming cube $\{0,1\}^d$, for any two points $x,x'\in\{0,1\}^d$ the Hamming distance satisfies $h(x,x')=\|x-x'\|_1=\|x-x'\|_2^2$. By setting $\gamma=1$, we have the following corollary for exact near neighbor. \begin{corollary}\label{corollary-NN} There exists a constant $C$ such that for the problem $\mathrm{NN}_{n}^{d}$ with the Hamming distance, the Manhattan norm $\ell_1$ or the Euclidean norm $\ell_2$, assuming $d\geq{C\log n}$, if $\mathrm{NN}_{n}^{d}$ has $(s,w,t)$-certificates, then $t=\Omega({{d}/{\log\frac{sw}{nd}}})$. \end{corollary} \subsubsection{Partial match} The partial match problem is another fundamental high-dimensional problem. The $d$-dimensional partial match problem $\mathrm{PM}_n^d$ is defined as follows: a database $y$ contains $n$ strings from $\{0,1\}^d$; for a query pattern $x\in\{0,1,*\}^d$, the problem asks whether there is a string $z$ in the database $y$ matching the pattern $x$, meaning that $x_i=z_i$ for all $i\in[d]$ with $x_i\neq *$. \begin{theorem}\label{theorem-PM} Assuming $d\geq{2\log n}$, if the problem $\mathrm{PM}_{n}^{d}$ has $(s,w,t)$-certificates for some $w=d^{O(1)}$, then $t=\Omega\left({{d}/{\log\frac{sd}{n}}}\right)$.
\end{theorem} \newcommand{\ProofPM}{ The proof is almost exactly the same as the proof of the partial match lower bound in~\cite{patrascu2010higher}. We restate the proof in the context of certificates. Let $N=n/k$ and $D=d-\log k\geq d/2$. We have the following model-independent reduction from $\bigoplus^k \mathrm{PM}_N^D$ to $\mathrm{PM}_{n}^{d}$: for the data input of $\bigoplus^k \mathrm{PM}_N^D$, we add the subproblem index in binary, which takes $\log k$ bits, as a prefix to every string; and for the query, we also add the subproblem index $i$ in binary as a prefix to the query pattern to form a new query in $\mathrm{PM}_{n}^{d}$. It is easy to see that $\mathrm{PM}_{n}^{d}$ solves $\bigoplus^k \mathrm{PM}_N^D$ with such a reduction, and $(s,w,t)$-certificates for $\mathrm{PM}_{n}^{d}$ are $(s,w,t)$-certificates for $\bigoplus^k \mathrm{PM}_N^D$. In Theorem~11 of~\cite{patrascu2010higher}, it is proved that on a certain domain $X\times Y$ for $\mathrm{PM}_N^D$: \begin{itemize} \item $\mathrm{PM}_N^D$ is $\left({|X|}/{4},{|Y|}/{4}\right)$-rich. In fact, in~\cite{patrascu2010higher} it is only proved that the density of 1s in $\mathrm{PM}_N^D$ is at least $1/2$, which easily implies the richness due to an averaging argument. \item $\mathrm{PM}_N^D$ has no 1-rectangle of size $|X|/2^{O(D)}\times |Y|/2^{O(\sqrt{N}/D^2)}$. \end{itemize} Assuming that $t\le\frac{s}{k}$, by Lemma~\ref{lemma-direct-sum-cert}, we have either $t\log\frac{s}{k}=\Omega(D)$ or $t\log\frac{s}{k}+wt=\Omega(\sqrt N/D^2)$. We choose $N=w^2\cdot D^8$. Note that this choice of $N$ may violate the assumption $t\le\frac{s}{k}$ only when an even higher lower bound $t>\frac{sw^2D^8}{n}=\Omega(d^2)$ holds. With this choice of $N=w^2\cdot D^8=d^{O(1)}$, the second bound above becomes $t=\Omega(d^2)$, while the first becomes $t=\Omega\left(d/\log\frac{sd}{nt}\right)=\Omega\left(d/\log\frac{sd}{n}\right)$. } \ifabs{ The proof of this theorem is in Appendix~\ref{appendix-direct-sum}. }{ \begin{proof} \ProofPM \end{proof} }
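For reference in the reduction below, the pattern-matching semantics of PM as a minimal Python sketch (ours, purely illustrative):
\begin{verbatim}
def matches(pattern, string):
    """Does the 0/1 string match the {0,1,*} pattern coordinatewise?"""
    return all(p == '*' or p == c for p, c in zip(pattern, string))

def pm(x, y):
    """PM: is some string of the database y matched by pattern x?"""
    return any(matches(x, z) for z in y)

assert pm("0*1", {"001", "110"}) and not pm("0*0", {"001", "110"})
\end{verbatim}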
It is well known that partial match can be reduced to 3-approximate near neighbor in the $\ell_\infty$-norm by a very simple reduction~\cite{indyk2001approximate}. We write 3-$\mathrm{ANN}_n^{\lambda,d}$ for $\mathrm{ANN}_n^{\lambda,3,d}$. \begin{theorem}\label{theorem-3-ANN} Assuming $d\geq{2\log n}$, there is a $\lambda$ such that if 3-$\mathrm{ANN}_{n}^{\lambda,d}$ in the $\ell_{\infty}$-norm has $(s,w,t)$-certificates for some $w=d^{O(1)}$, then $t=\Omega({{d}/{\log\frac{sd}{n}}})$. \end{theorem} \begin{proof} We have the following model-independent reduction. For each query pattern $x$ of partial match, we make the following transformation on each coordinate: $0\rightarrow-\frac{1}{2}$; $*\rightarrow\frac{1}{2}$; $1\rightarrow\frac{3}{2}$. For a string in the database, the $\ell_{\infty}$-distance to the transformed query is $\frac{1}{2}$ if the string matches the pattern $x$ and $\frac{3}{2}$ otherwise. \end{proof} \section{Lower bounds implied by lopsided set disjointness}\label{section-Unifying-Landscape} It is observed in~\cite{patrascu11structures} that a variety of cell-probe lower bounds can be deduced from the communication complexity of one problem, the lopsided set disjointness (LSD). In~\cite{sommer2009distance}, the communication complexity of LSD is also used to prove the cell-probe lower bound for approximate distance oracles. \ifabs{ In this section, we modify these communication-based reductions to make them model-independent. A consequence of this is a list of certificate lower bounds as shown in Table~\ref{table-results} for: 2-Blocked-LSD, reachability oracle, 2D stabbing, 2D range counting, 4D range reporting, and approximate distance oracle. }{ In this section, we modify these communication-based reductions to make them model-independent. A consequence of this is a list of certificate lower bounds which match the highest known cell-probe lower bounds for the respective problems, including: 2-Blocked-LSD, reachability oracle, 2D stabbing, 2D range counting, 4D range reporting, and approximate distance oracle.
} \newcommand{\SectionLSD}{ \subsection{LSD with structures}\label{section-2-BlockedLSD} A key idea in using LSD in reductions is to reduce from LSD with restricted inputs. For the purpose of reduction, the LSD problem is usually formulated as follows: the universe is $[N\cdot B]$, each query set $S\subset[N\cdot B]$ has size $N$, and there is no restriction on the size of the data set $T\subseteq[N\cdot B]$. The LSD problem asks whether $S$ and $T$ are disjoint. \begin{proposition}\label{proposition-LSD-rectangle} For any $M\geq N$, if LSD has a monochromatic 1-rectangle of size ${M\choose N}\times K$ then $K\le 2^{NB-M}$. \end{proposition} \begin{proof} For a 1-rectangle of LSD, suppose the rows are indexed by $S_1,S_2,\dots,S_R$ with $R={M\choose N}$ and the columns are indexed by $T_1,T_2,\dots,T_K$. Consider the set $\mathcal{S}=\bigcup_i S_i$ and let $M'=|\mathcal{S}|$. Since every row is an $N$-subset of $\mathcal{S}$, we have ${M\choose N}=R\le{M'\choose N}$, hence $M'\ge M$. For any $T_i$, we have $T_i\cap \mathcal{S}=\emptyset$, so it holds that $K\leq 2^{NB-M'}\le 2^{NB-M}$. \end{proof} The 2-Blocked-LSD is a special case of the LSD problem: the universe $[N\cdot B]$ is interpreted as $[\frac{N}{B}]\times[B]\times[B]$ and it is guaranteed that for every $x\in[\frac{N}{B}]$ and $y\in[B]$, $S$ contains a single element of the form $(x,y,*)$ and a single element of the form $(x,*,y)$. In~\cite{patrascu11structures}, the general LSD problem is reduced to 2-Blocked-LSD by communication protocols. Here we translate this reduction in the communication model to a model-independent reduction from subproblems of LSD to 2-Blocked-LSD. The following claim can be proved by a standard application of the probabilistic method. \begin{Claim}[Lemma~11 in~\cite{patrascu2008data}]\label{claim-LSD-2blockedLSD} There exists a set $\mathcal{F}$ of permutations on the universe $[N\cdot B]$, where $|\mathcal{F}|=e^{2N}\cdot 2N\log B$, such that for any query set $S\subset[N\cdot B]$ of LSD, there exists a permutation $\pi\in \mathcal{F}$ for which $\pi(S)$ is an instance of 2-Blocked-LSD. \end{Claim} We then state our model-independent reduction as the following certificate lower bound. \begin{theorem}\label{theorem-2Blocked-LSD} For any constant $\delta>0$, if $2$-Blocked-LSD on the universe $[\frac{N}{B}]\times[B]\times[B]$ has $(s,w,t)$-certificates, then either $t=\Omega\left(\frac{NB^{1-\delta}}{w}\right)$ or $t=\Omega\left(\frac{N\log B}{\log \frac{s}{t}}\right)$. \end{theorem} \begin{proof} By Claim \ref{claim-LSD-2blockedLSD}, there exists a small set $\mathcal{F}$ of permutations of the universe $[N\cdot B]$ such that $|\mathcal{F}|=2^{O(N)}$ and for any input $S$ of LSD, there exists $\pi\in \mathcal F$ for which $\pi(S)$ is an instance of $2$-Blocked-LSD. By the averaging principle, there exists a $\pi\in\mathcal{F}$ such that for at least $|X|/2^{O(N)}$ many sets $S$, $\pi(S)$ is an instance of 2-Blocked-LSD. Denote the set of these $S$ as $\mathcal{X}$. Restrict LSD to the domain $\mathcal{X}\times Y$ and denote this subproblem as LSD$_\mathcal{X}$. Obviously LSD$_\mathcal{X}$ can be solved by $2$-Blocked-LSD by transforming the input with the permutation $\pi$, and hence LSD$_\mathcal{X}$ has $(s,w,t)$-certificates. For any $S\in \mathcal{X}$, there are $2^{NB-N}$ choices of $T\in Y$ such that $S\cap T=\emptyset$, so the density of 1s in LSD$_\mathcal{X}$ is at least $\frac{1}{2^N}$, thus by a standard averaging argument LSD$_\mathcal{X}$ is $(\frac{1}{2^{O(N)}}|\mathcal X|,\frac{1}{2^{O(N)}}|Y|)$-rich.
Now by the richness lemma, there exists a $|{X}|/{2^{O(N+t\log\frac{s}{t})}}\times|Y|/2^{O(N+t\log\frac{s}{t}+wt)}$ 1-rectangle of LSD$_\mathcal{X}$, which is certainly a 1-rectangle of LSD. Due to Proposition~\ref{proposition-LSD-rectangle}, for any $M\geq N$, LSD has no 1-rectangle of size greater than $\binom{M}{N}\times 2^{NB-M}$, which gives us either $N+t\log \frac{s}{t}=\Omega(N\log{B}-N\log\frac{M}{N})$ or $N+tw+t\log \frac{s}{t}=\Omega(M)$. By setting $M=NB^{1-\delta}$, we prove the theorem. \end{proof} \subsection{Reachability oracle}\label{section-reachability-oracle} The problem of reachability oracle is defined as follows: a database stores a (sparse) directed graph $G$, and reachability queries (can $u$ be reached from $v$ in $G$?) are answered. The problem is trivially solved, even in the sense of certificates, in quadratic space by storing the answers for all pairs of vertices. Solving this problem using near-linear space appears to be very hard. This is proved in~\cite{patrascu11structures} for communication protocols as well as for cell-probing schemes. We show that the method of~\cite{patrascu11structures} implies the same lower bound for data structure certificates. \begin{theorem}\label{corollary-reachability-oracle} If the reachability oracle of $n$-vertex graphs has $(s,w,t)$-certificates for $s=\Omega(n)$, then $t=\Omega\left({\log n}/{\log{\frac{sw}{n}}}\right)$. \end{theorem} The lower bound is proved for a special class of graphs, namely butterfly graphs. Besides implying the general reachability oracle lower bound, the special structure of butterfly graphs is very convenient for reductions to other problems. A butterfly graph is defined by a degree $b$ and a depth $d$. The graph has $d+1$ levels, each having $b^d$ vertices. The vertices on level $0$ are sources with in-degree 0 and the ones on level $d$ are sinks with out-degree 0. On each level, each vertex can be regarded as a vector in $[b]^d$. Each non-sink vertex on level $i$ is connected by an edge to each of the $b$ vertices on level $i+1$ whose vectors differ from its own at most in the $i$-th coordinate; therefore each non-sink vertex has out-degree $b$. The problem $\mathrm{Butterfly\mbox{-}RO}_{n,b}$ is the reachability oracle problem defined on subgraphs of the butterfly graph uniquely specified by the degree $b$ and the number of non-sink vertices $n$.
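A minimal Python sketch (ours, purely illustrative) of the full butterfly graph just described:
\begin{verbatim}
from itertools import product

def butterfly_edges(b, d):
    """Full butterfly of degree b and depth d: levels 0..d, each a
    copy of [b]^d; a vertex on level i points to the b vertices on
    level i+1 whose vectors differ at most in coordinate i."""
    edges = []
    for i in range(d):                      # non-sink levels
        for v in product(range(b), repeat=d):
            for c in range(b):              # rewrite coordinate i
                u = v[:i] + (c,) + v[i + 1:]
                edges.append(((i, v), (i + 1, u)))
    return edges

# every one of the d*b^d non-sink vertices has out-degree b
assert len(butterfly_edges(b=2, d=3)) == 3 * 2**3 * 2
\end{verbatim}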
For a problem $f:X\times Y\to\{0,1\}$ we define $\bigotimes^k f:X^k\times Y\to \{0,1\}$ by $\bigotimes^k f(\bar{x},y)=\prod_{i=1}^kf(x_i,y)$ for any $\bar{x}=(x_1,x_2,\ldots,x_k)\in X^k$ and any $y\in Y$. We further specify that in the reachability oracle problem, the answer is a bit indicating the reachability, thus $\bigotimes^k\mathrm{Butterfly\mbox{-}RO}_{n,b}$ is well-defined. A model-independent reduction from 2-Blocked-LSD on the universe $[\frac{N}{B}]\times[B]\times[B]$ to $\bigotimes^k\mathrm{Butterfly\mbox{-}RO}_{N,B}$ for $k=\frac{N}{d}$, where $d=\Theta(\frac{\log N}{\log B})$ is the depth of the butterfly graph, was discovered in~\cite{patrascu11structures}. This can be used to prove the following certificate lower bound. \begin{lemma}\label{lemma-reachability-oracle-butterfly} If $\mathrm{Butterfly\mbox{-}RO}_{N,B}$ has $(s,w,t)$-certificates, then either $t=\Omega\left(\frac{d\sqrt B}{w}\right)$, or $t=\Omega\left(\frac{d\log B}{\log \frac{sd}{N}}\right)$, or $t=\Omega\left(\frac{ds}{N}\right)$, where $d=\Theta\left(\frac{\log N}{\log B}\right)$ is the depth of the butterfly graph. \end{lemma} \begin{proof} By straightforwardly combining certificates as in the proof of Lemma~\ref{lemma-direct-sum}, assuming that $\frac{N}{d}t\le s$, if $\mathrm{Butterfly\mbox{-}RO}_{N,B}$ has $(s,w,t)$-certificates then $\bigotimes^k\mathrm{Butterfly\mbox{-}RO}_{N,B}$ with $k=\frac{N}{d}$ has $(s,w,\frac{N}{d}t)$-certificates. If the assumption $\frac{N}{d}t\le s$ is violated, we immediately have $t=\Omega\left(\frac{ds}{N}\right)$. By the model-independent reduction in~\cite{patrascu11structures}, 2-Blocked-LSD on the universe $[\frac{N}{B}]\times[B]\times[B]$ has $(s,w,\frac{N}{d}t)$-certificates. Due to Theorem \ref{theorem-2Blocked-LSD}, for any constant $\delta>0$, either $\frac{N}{d}t=\Omega\left(\frac{NB^{1-\delta}}{w}\right)$ or $\frac{N}{d}t=\Omega\left(\frac{N\log B}{\log \frac{sd}{Nt}}\right)=\Omega\left(\frac{N\log B}{\log \frac{sd}{N}}\right)$. By setting $\delta=\frac{1}{2}$, we have either $t=\Omega(\frac{d\sqrt B}{w})$ or $t=\Omega\left(\frac{d\log B}{\log \frac{sd}{N}}\right)$. \end{proof} Theorem~\ref{corollary-reachability-oracle} for general graphs is an easy consequence of this lemma. \begin{proof}[Proof of Theorem~\ref{corollary-reachability-oracle}] Suppose the input graphs are just those of $\mathrm{Butterfly\mbox{-}RO}_{N,B}$. By Lemma~\ref{lemma-reachability-oracle-butterfly}, either $t=\Omega(\frac{d\log B}{\log \frac{sd}{N}})$, or $t=\Omega(\frac{d\sqrt B}{w})$, or $t=\Omega(\frac{ds}{N})$. Assuming $s=\Omega(n)$, the third branch becomes $t=\Omega(d)$. Choose $B$ to satisfy $\log B=\max\{2\log w,\log \frac{sd}{N}\}=\Theta(\log\frac{sdw}{N})$. Then we have $t=\Omega(d)$ for the first and second branches as well. Since $d=\Theta({\log N/\log B})$, we have $t=\Omega({\log N}/{\log{\frac{sdw}{N}}})=\Omega({\log n}/{\log{\frac{sw}{n}}})$. \end{proof} Applying the model-independent reductions introduced in~\cite{patrascu11structures} from $\mathrm{Butterfly\mbox{-}RO}_{n,b}$ to 2D stabbing, 2D range counting, and 4D range reporting, we have certificate lower bounds which match the highest known lower bounds for cell-probing schemes for these problems. \begin{theorem}\label{theorem-2d-stabbing} If 2D stabbing over $m$ rectangles has $(s,w,t)$-certificates, then $t=\Omega({\log m}/{\log{\frac{sw}{m}}})$. \end{theorem} \begin{theorem}\label{theorem-2d-range-counting} If 2D range counting has $(s,w,t)$-certificates, then $t=\Omega({\log n}/{\log{\frac{sw}{n}}})$. \end{theorem} \begin{theorem}\label{theorem-4d-range-reporting} If 4D range reporting has $(s,w,t)$-certificates, then $t=\Omega({\log n}/{\log{\frac{sw}{n}}})$. \end{theorem} \subsection{Approximate distance oracle}\label{section-distance-oracle} For the distance oracle problem, distance queries $d_G(u,v)$ are answered for a database graph $G$. For this fundamental problem, approximation is very important, because exact solutions appear to be very difficult in nontrivial settings. Given a stretch factor $\alpha>1$, the $\alpha$-approximate distance oracle problem can be defined as follows: for each queried vertex pair $(u,v)$ and a distance threshold $\tilde{d}$, the problem is required to distinguish between the two cases $d_G(u,v)\le \tilde d$ and $d_G(u,v)\ge \alpha \tilde{d}$. We prove the following certificate lower bound for approximate distance oracles which matches the lower bound proved in~\cite{sommer2009distance} for cell-probing schemes.
\begin{theorem}\label{theorem-distance-oracle} If the $\alpha$-approximate distance oracle has $(s,w,t)$-certificates, then $t=\Omega\left(\frac{\log n}{\alpha\log(s\log n/n)}\right)$. This holds even when the problem is restricted to sparse graphs with maximum degree $\mathrm{poly}(tw\alpha/\log n)$, for an $\alpha=o\left(\frac{\log n}{\log(w\log n)}\right)$. \end{theorem} We use the following notations introduced in~\cite{sommer2009distance}. For a graph $G=(V,E)$ and any two positive integers $k,\ell$, let $\mathcal{P}(G,\ell,k)$ be the set whose elements are all possible sets $P\subseteq E$ where $P$ can be written as a union of $k$ vertex-disjoint paths in $G$, each of length exactly $\ell$. Let $g(G)$ denote the girth of the graph $G$. The following claim, which is quite similar to Claim~\ref{claim-LSD-2blockedLSD}, is proved in~\cite{sommer2009distance} by the same probabilistic argument. \begin{Claim} [Claim 13 in \cite{sommer2009distance}]\label{claim-LSD-distance-bijection} Let $k,\ell>0$ be two integers and $N=k\ell$. Let $G=(V,E)$ be a graph with $|E|=B\cdot N$ for a positive integer $B$, and $\mathcal{P}=\mathcal{P}(G,\ell,k)$. There exist $m$ bijections $f_1,\dots,f_m:[NB]\rightarrow E$, where $m=\ln((\mathrm{e}B)^N)\cdot \frac{(\mathrm{e}B)^N}{|\mathcal{P}|}$, such that for any $S\subseteq [NB]$ with $|S|=N$, there is a bijection $f_i$ such that $f_i(S)\in\mathcal{P}(G,\ell,k)$. \end{Claim} Consider the problem of the $\alpha$-approximate distance oracle for a base-graph $G$, in which the $\alpha$-approximate distance queries are answered only for spanning subgraphs of $G$. The following lemma is the certificate version of a key theorem in~\cite{sommer2009distance}. \begin{lemma}\label{lemma-distance-oracle-general} There exists a universal constant $C$ such that the following holds. Let $G=(V,E)$ be a graph, such that the $\alpha$-approximate distance oracle for the base-graph $G$ has $(s,w,t)$-certificates. Let $k,\ell$ be two positive integers, such that $\ell<\frac{g(G)}{\alpha+1}$. Assume $|E|\geq k\ell(2tw/\ell)^{1/C}$. Then \begin{equation*} s\geq \frac{k}{e}\left(\frac{|\mathcal{P}(G,\ell,k)|^{1/{k\ell}}}{e(|E|/k\ell)^{1-C}}\right)^{\frac{\ell}{t}}\left(e|E|\right)^{-\frac{1}{tk}}. \end{equation*} \end{lemma} \begin{proof} Let $N=k\ell$ and $B=|E|/N$. Consider the LSD problem LSD$:X\times Y\to\{0,1\}$ defined on the universe $[N\cdot B]$ such that each query set $S\subset[N\cdot B]$ is of size $|S|=N$ and each data set $T\subseteq [N\cdot B]$ is of arbitrary size. By Claim~\ref{claim-LSD-distance-bijection}, there exist $m$ bijections $f_1,\dots,f_m:[NB]\rightarrow E$, where $m=\ln((eB)^N)\cdot \frac{(eB)^N}{|\mathcal{P}|}$ with $\mathcal{P}=\mathcal{P}(G,\ell,k)$, such that for any $S\subseteq [NB]$ with $|S|=N$, there exists a bijection $f_i$ such that $f_i(S)\in\mathcal{P}(G,\ell,k)$. By the averaging principle, there exists an $f_i$ such that for at least $|X|/m$ many sets $S$, it holds that $f_i(S)\in\mathcal{P}(G,\ell,k)$. Denote the set of such $S$ as $\mathcal{X}$. Restrict LSD to the domain $\mathcal{X}\times Y$ and denote this subproblem as LSD$_\mathcal{X}$. Next we prove that LSD$_\mathcal{X}$ can be solved by a composition of $\alpha$-approximate distance oracles. Let $f_i$ be the bijection such that $f_i(S)\in\mathcal{P}(G,\ell,k)$ for all $S\in \mathcal{X}$. For any $S\in \mathcal{X}$ and $T\subseteq[N\cdot B]$, an instance for the approximate distance oracle for the base graph $G=(V,E)$ is constructed as follows.
The database graph for the distance oracle is the spanning subgraph $G'=(V,E')$ where $E'=E\setminus f_i(T)$. Due to the property of the bijection $f_i$, it holds that $P=f_i(S)$ contains $k$ vertex-disjoint paths $p_1,p_2,\ldots, p_k$, each of length $\ell$. Let $(u_1,v_1),\dots,(u_k,v_k)$ denote the pairs of end-vertices of these paths. Since $f_i$ is a bijection, the disjointness of $S$ and $T$ translates to the disjointness of $f_i(S)$ and $f_i(T)$, i.e.~all these $k$ vertex-disjoint paths remain intact after the edges in $f_i(T)$ are removed from the graph $G$. Consider the $\alpha$-approximate distance oracle problem $\alpha\mbox{-}\mathrm{Dist}_G$ for the base-graph $G$. We then observe that LSD$_\mathcal{X}$ can be solved by the problem $\bigotimes^k \alpha\mbox{-}\mathrm{Dist}_G$ of answering $k$ parallel approximate distance queries, where $\bigotimes^k f$ of a problem $f$ is as defined in the last section. Consider the $k$ vertex pairs $(u_i,v_i), i=1,2,\ldots, k$ connected by the vertex-disjoint paths $p_i$ constructed above. We have $d_{G}(u_i,v_i)=\ell$ for every $1\le i\le k$. For $\alpha\mbox{-}\mathrm{Dist}_G$, if all edges in $p_i$ are in $E'$, then $d_{G'}(u_i,v_i)\leq \ell$, so $\alpha\mbox{-}\mathrm{Dist}_G((u_i,v_i, \ell), G')$ will return ``yes''; and if some edge of $p_i$ is not in $E'$, then since the graph $G$ has girth $g(G)>(\alpha+1)\ell$, we must have $d_{G'}(u_i,v_i)\ge g(G)-\ell>\alpha \ell$, so $\alpha\mbox{-}\mathrm{Dist}_G((u_i,v_i, \ell), G')$ will return ``no''. By the above discussion, if $\alpha\mbox{-}\mathrm{Dist}_G((u_i,v_i, \ell),G')$ returns ``yes'' for all $k$ queries then it must hold that $S\cap T=\emptyset$, and if $\alpha\mbox{-}\mathrm{Dist}_G((u_i,v_i, \ell),G')$ returns ``no'' for some $i$, then $S\cap T\ne \emptyset$, i.e.~we have a model-independent reduction from LSD$_\mathcal{X}$ to $\bigotimes^k \alpha\mbox{-}\mathrm{Dist}_G$. If the $\alpha$-approximate distance oracle problem $\alpha\mbox{-}\mathrm{Dist}_G$ has $(s,w,t)$-certificates, then by directly combining the $k$ certificates for the $k$ parallel queries, the problem $\bigotimes^k \alpha\mbox{-}\mathrm{Dist}_G$ has $(s,w,kt)$-certificates, and hence LSD$_\mathcal{X}$ has $(s,w,kt)$-certificates. For every $S\in\mathcal{X}$, there are $2^{NB-N}$ many $T$ disjoint from $S$, so the density of 1s in LSD$_\mathcal{X}$ is at least $2^{-N}$. By a standard averaging argument, this means LSD$_\mathcal{X}$ is $(\frac{1}{2^{N+1}}|\mathcal{X}|,\frac{1}{2^{N+1}}|Y|)$-rich. By Lemma~\ref{lemma-richness-cert}, LSD$_\mathcal{X}$ has a monochromatic 1-rectangle of size $|\mathcal{X}|/2^{O(N+kt\log \frac{s}{kt})}\times|Y|/2^{O(N+kt\log \frac{s}{kt}+ktw)}$, which is also a 1-rectangle of LSD. Note that $|\mathcal{X}|\geq|X|/m=|X|\,|\mathcal{P}|/\left(\ln((eB)^N)\,(eB)^N\right)$, so the rectangle is of size at least \begin{equation*} |X|/2^{O(N\log(eB)+\log(eBN)-\log(|\mathcal{P}|)+kt\log \frac{s}{kt})}\times|Y|/2^{O(N+kt\log \frac{s}{kt}+ktw)}, \end{equation*} where the big-O notations hide only universal constants. And for LSD, $|X|={NB\choose N}$ and $|Y|=2^{NB}$. Due to Proposition~\ref{proposition-LSD-rectangle}, for any $M\geq N$, LSD has no 1-rectangle of size greater than $\binom{M}{N}\times 2^{NB-M}$. By a calculation, there exists a universal constant $C>0$ such that, considering $M=\Theta(NB^C)$, we have either $kt\log(s/k)+N\log(eB)+\log(eBN)-\log(|\mathcal{P}|)\geq CN\log B$ or $ktw\geq NB^{C}$.
Since the lemma assumes $|E|\ge k\ell(2tw/\ell)^{1/C}$, we have $B\geq (2tw/\ell)^{1/C}$, thus $NB^C\geq k\ell\cdot2tw/\ell=2ktw$, so the second branch can never hold. By a calculation, the first branch gives us the bound of the lemma. \end{proof} The following graph-theoretical theorem is proved in~\cite{sommer2009distance}. \begin{theorem}[combining Lemma 14 and Theorems 9, 17, and 18 of~\cite{sommer2009distance}]\label{theorem-final-proof} Let $n$ be sufficiently large. For any constant $C>0$, any $t=t(n)$, and any $\alpha=\alpha(n)$, $w=w(n)$ satisfying $w=n^{o(1)}$ and $\alpha=o\left(\frac{\log n}{\log(w\log n)}\right)$, there exist $r=r(n)$ and an $r$-regular graph $G=G_n$ on $n$ vertices, such that \begin{itemize} \item $r\geq (4tw\alpha/g(G))^{1/C}$; \item $2\alpha\leq g(G)\leq\log n$; \item $|\mathcal{P}(G,\ell,k)|^{1/k\ell}=\Omega(r)$ for $\ell=\left\lfloor g(G)/2\alpha \right\rfloor$ and $k=n/20\ell$; \item $r^{g(G)}=n^{\Omega(1)}$. \end{itemize} \end{theorem} Now we prove Theorem~\ref{theorem-distance-oracle} by applying Lemma~\ref{lemma-distance-oracle-general} to the sequence of regular graphs $G_n$ constructed in Theorem~\ref{theorem-final-proof}. Note that in $G_n$, we have $|E|=n\cdot r/2=10k\ell\cdot r\geq 10k\ell\cdot(2tw/\ell)^{1/C}\geq k\ell(2tw/\ell)^{1/C}$, so the assumption of Lemma~\ref{lemma-distance-oracle-general} is satisfied. On the other hand, we have $|\mathcal{P}(G,\ell,k)|^{1/k\ell}=\Omega(r)$ and $e(|E|/k\ell)^{1-C}=\Theta(r^{1-C})$. Since $|E|\leq n^2 \leq (\ell k)^4\leq k^8$, we have $(e|E|)^{1/tk}=\Theta(1)$. And it holds that $k=\frac{n}{20\ell}=\Omega(\frac{n}{\log n})$. Ignoring constant factors, the bound in Lemma~\ref{lemma-distance-oracle-general} implies \begin{equation*} s\geq \frac{n}{\log n}\left(\frac{r}{r^{1-C}}\right)^{\Omega(\ell/t)}=\frac{n}{\log n}r^{\Omega(\ell/t)}=\frac{n}{\log n}r^{\Omega(g(G)/\alpha t)}=\frac{n^{\Omega(1+1/\alpha t)}}{\log n}. \end{equation*} Translating this into a lower bound on $t$, we have $t=\Omega\left(\frac{\log n}{\alpha \log(s\log n/n)}\right)$. } \ifabs{ The rest of this section can be found in Appendix~\ref{appendix-LSD}. }{ \SectionLSD } \section*{Appendix}
\section{Experimental setup description} \subsection{Electrostatic experiment} Imagine a parallel plate capacitor. One of the plates is a suspended pendulum at $x_1=0$. The other plate is fixed on a movable block at $x$. When a voltage $\mathcal{E}$ is applied to the capacitor, the pendulum is shifted towards the other plate by a distance $y$. The distance between the parallel disk-shaped plates becomes $z=x-y > 0$. The pendulum length $L_\mathrm{e} \approx 54\,\mathrm{cm}$ is much larger than the shift, and for the restoring gravitational force we approximately have Hooke's law \begin{equation} F_\mathrm{g} = -\frac{\mathrm{d}}{\mathrm{d}z} \left (U_\mathrm{g}(z) = -m_\mathrm{e}g \sqrt{L_\mathrm{e}^2 - y^2} \right) \approx m_\mathrm{e}g \frac{x-z}{L_\mathrm{e}}. \nonumber \end{equation} For brevity, in one and the same expression we introduce the potential energy $U_\mathrm{g}$, the force $F_\mathrm{g}$ calculated as its derivative, and the approximate expression used in the present work. We use aluminum plates of diameter $D_\mathrm{e}=2 R_\mathrm{e} = 54\,\mathrm{mm}$, punched according to EC standard jar caps, with mass $m_\mathrm{e}=1.14\,\mathrm{g}$. Here we are not going to rewrite a textbook on electrodynamics; our purpose is to give a concise reference to the many formulae for the force which can be found in the literature. The electrostatic force $F_\mathrm{e}$ is minus the derivative of the effective potential energies\cite{LL} defined in the parentheses below \begin{equation} F_\mathrm{e}(z) = -\frac{\mathrm{d}}{\mathrm{d}z} \left. \left ( U_\mathcal{_E}(z,\mathcal{E}) = -\frac{1}{2} C(z) \mathcal{E}^2 \right )\right|_{\mathcal{E}=\mathrm{const}} = -\frac{\mathrm{d}}{\mathrm{d}z} \left. \left (U_{_Q}(z,Q) \equiv \frac{1}{2} \frac{Q^2}{C(z)} \right )\right|_{Q=\mathrm{const}} = \frac{1}{2} \left(\mathcal{E}^2 = \frac{Q^2}{C^2} \right) \frac{\mathrm{d}}{\mathrm{d}z} C(z) , \nonumber \end{equation} where $Q=C\mathcal{E}$ is the capacitor charge. The concise expression above could be spread over six separate formulae with a page of text between them, thus losing the transparent physical meaning; nevertheless, some people prefer the horrible pleonasm of a detailed sequential description. Only formulae that are referred to are numbered, because a reference to a numbered formula is like a GOTO operator in programming. The capacitance can be calculated from the energy of the electric field $\mathbf{E}(\mathbf{x}) = -\nabla \varphi $ \begin{equation} Q=\varepsilon_0 \oint \mathbf{E} \cdot \mathrm{d} \mathbf{S},\qquad U_{_Q} = \frac{1}{2} \frac{Q^2}C = \int \mathrm{d}^3 x \frac{1}{2} \varepsilon_0 E^2,\quad U_\mathcal{_E}\equiv U_{_Q}-Q\mathcal{E}=-U_{_Q},\quad \Delta\varphi=0, \quad \varphi_1=0,\quad \varphi_2=\mathcal{E}, \nonumber \end{equation} where the first surface integration is around one of the plates of the capacitor and the second volume integration is over the whole 3-dimensional space. The electrostatic energy $U_{_Q}$, calculated as the integral of the energy density over the whole space, is a positive quantity. The electric potential is constant on the electrodes of the capacitor, while in free space it is a harmonic function. The different expressions for the force \begin{equation} F_\mathrm{e}=-\left(\frac{\partial U_{_Q}}{\partial z}\right)_{\!\!Q} =-\left(\frac{\partial U_\mathcal{_E}}{\partial z} \right)_{\!\mathcal{E}}, \qquad \mathcal{E}=+\left(\frac{\partial U_{_Q}}{\partial Q}\right)_{\!z}=\frac{Q}{C}, \qquad Q=-\left(\frac{\partial U_\mathcal{_E}}{\partial \mathcal{E}} \right)_{\!z}=C\mathcal{E} \nonumber \end{equation} are convenient for different types of experiments, at fixed voltage $\mathcal{E}$ or at fixed charge $Q$. Both types of experiment can be done in the described experimental set-up.
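As a check of this equivalence, a short sympy sketch (ours, purely illustrative) with the idealized plate capacitance $C(z)=\varepsilon_0 S/z$ confirms that the fixed-voltage and fixed-charge derivatives give one and the same force:
\begin{verbatim}
import sympy as sp

z, S, eps0, V = sp.symbols('z S epsilon_0 V', positive=True)
C = eps0 * S / z                     # idealized plate capacitor
Q = C * V                            # charge at the voltage V

U_V = -C * V**2 / 2                  # effective energy, fixed voltage
F_fixed_V = -sp.diff(U_V, z)

Qc = sp.symbols('Q', positive=True)  # now the charge is the constant
U_Q = Qc**2 / (2 * C)
F_fixed_Q = -sp.diff(U_Q, z).subs(Qc, Q)

assert sp.simplify(F_fixed_V - F_fixed_Q) == 0
print(sp.simplify(F_fixed_V))        # -> -S*V**2*epsilon_0/(2*z**2)
\end{verbatim}
The negative sign means attraction: the force pulls the plates together, towards smaller $z$.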
One more intuitive point of view on the effective potential $U_\mathcal{_E}$ is to consider, as a \textit{Gedanken Experiment}, connecting in parallel one big capacitor $C_0\gg C(z)$ charged to a voltage $\mathcal{E}$ at $z=\infty$, where $C(z=\infty)=0$. For the charge of the big capacitor we have $Q_0=C_0 \mathcal{E}$, and this charge is conserved. After expanding the parentheses, the total electrostatic energy reads \begin{equation} U_\mathrm{tot}(z)=\frac{Q^2}{2C(z)}+\frac{(Q_0-Q)^2}{2C_0} =U_\mathcal{_E}(z,\mathcal{E})+\frac12 C_0\mathcal{E}^2 +\frac{Q^2}{2C_0}. \nonumber \end{equation} The last term is negligible since $1/C_0\ll 1/C(z)$, the middle one is a constant irrelevant for the $z$ differentiation, and again we arrive at $U_\mathcal{_E}(z,\mathcal{E})=-\frac12C(z)\mathcal{E}^2$, using a charge reservoir as an auxiliary construction. The effective potential $U_\mathcal{_E}$ is negative because it describes the energy of an open system, including the energy spent by the external voltage source to keep the voltage constant. This second derivation is more understandable for students not familiar with the thermodynamic style of writing the derivatives. The position $z$ of the pendulum shifted by the electric field is determined by the minimum of the total energy \begin{equation} U_\mathrm{e}(z)=U_\mathrm{g}(z)+U_\mathcal{_E}(z). \nonumber \end{equation} The experiment is conducted at DC voltage, but if AC voltage is used, $\mathcal{E}$ is its RMS value. The forces are balanced in equilibrium and the total force vanishes \begin{equation} F(z_0) = - \frac{\mathrm{d}}{\mathrm{d}z} U_\mathrm{e}(z=z_0)=0. \nonumber \end{equation} The equilibrium is stable if the second derivative of the potential energy is positive \begin{equation} k_\mathrm{e}(z_0) \equiv \frac{\mathrm{d}^2}{\mathrm{d}z^2} U_\mathrm{e}(z=z_0) >0. \nonumber \end{equation} Then, for small deviations from equilibrium, we again have Hooke's law for the force \begin{equation} F(z) \approx -(z-z_0) k_\mathrm{e}(z_0) \nonumber \end{equation} and the oscillation frequency of the pendulum \begin{equation} \omega = \sqrt{k_\mathrm{e}(z_0)/m_\mathrm{e}}, \nonumber \end{equation} if the friction force is negligible. We can describe the experiment now. At fixed voltage $\mathcal{E}=\mathrm{const}$, after waiting for the oscillations to attenuate, we move the block very slowly towards the pendulum, decreasing the control parameter $x$. We note that the oscillation frequency also decreases and the period $T=2\pi/\omega(z_0)$ increases threateningly; this critical slowing down is the precursor of the loss of stability. The system loses stability ($F(z_\mathrm{e}) = 0$ and $k_\mathrm{e}(z_\mathrm{e})=0$) at some critical value $x_\mathrm{e}$ of the control parameter, and infinitesimal perturbations of the pendulum suddenly swing it towards the block. Such a leap, or catastrophic change of the state of the system at a slight variation of the control parameter, is systematically described by catastrophe theory. In our case, the potential energy at $x_\mathrm{e}$ has an inflection point
\mathrm{d}_z U_\mathrm{e}(z,x_\mathrm{e})\right|_{z=z_0}=0 \qquad \& \qquad \mathrm{d}_z^2 \left. U_\mathrm{e}(z,x_\mathrm{e})\right|_{z=z_0}=0, \qquad \mathrm{d}_z \equiv \frac{ \mathrm{d}}{\mathrm{d} z}. \label{einfl} \nonumber \end{equation} If we substitute into this system the Helmholtz formula for a round capacitor, see for example the 8th volume of the Landau--Lifshitz Course of Theoretical Physics,\cite{LL} \begin{equation} C(z)=4 \pi \varepsilon_0 \left ( \frac{S}{4 \pi z} + \frac{l}{8 \pi^2} \ln\frac{\sqrt{S}}{z} + \overline{C}_\mathrm{H} \right ), \qquad S=\pi R_\mathrm{e}^2, \qquad l=2 \pi R_\mathrm{e}, \qquad \overline{C}_\mathrm{H}=\ln(16 \sqrt{\pi}) - 1= 2.34495, \nonumber \end{equation} after some algebra we get \begin{equation} \varepsilon_0 \mathcal{E}^2 \approx \mathcal{F}_\mathrm{e}\equiv \frac{32}{27 \pi} \frac{m_\mathrm{e}g}{L_\mathrm{e} D_\mathrm{e}^2} \left [ 1 - f_\mathrm{e}(x_\mathrm{e}/D_\mathrm{e}) \right ] x_\mathrm{e}^3, \qquad f_\mathrm{e}\approx\frac{4}{3 \pi} \frac{x_\mathrm{e}}{D_\mathrm{e}}>0. \label{eps0} \nonumber \end{equation} There are only measurable quantities on the right-hand side and electric ones on the left; the dimensionality of this equation is force. We repeat the experiment for different voltages, for instance 100, 200, \dots, 800~V, provided by 23A batteries placed in plastic tubes ($16 \times 12\,\mathrm{V}\approx 200\,\mathrm{V}$). This is a safety measure for the high school students participating in the Olympiad, while a standard voltage source could be used in a university student laboratory. After the plates stick to each other, the capacitor is short-circuited and the distance $x_\mathrm{e}$ is carefully measured with a ruler with 0.5~mm accuracy. The proportionality coefficient $\varepsilon_0$ is then determined by the standard method of linear regression: the experimental points in the ($\mathcal{F}_\mathrm{e}$, $\mathcal{E}^2$)~plane are fitted with a straight line with a high correlation coefficient.\cite{EPO4} The critical condition behind this relation can be illustrated numerically, as sketched below.
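To make the inflection-point analysis concrete, here is a minimal Python sketch assuming the simplest plane-capacitor approximation $C(z)=\varepsilon_0 S/z$ (i.e., dropping the edge corrections) and purely hypothetical parameter values, not the calibrated EPO4 ones; it solves $F=0$ together with the vanishing-rigidity condition and reproduces the leading-order term of the relation above:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Hypothetical illustrative parameters (not the calibrated EPO4 values):
m, L, R, g = 1.5e-3, 0.55, 0.0325, 9.81    # kg, m, m, m/s^2
eps0, E = 8.854e-12, 500.0                 # F/m, V
S, D = np.pi*R**2, 2*R

# Plane-capacitor potential U(z,x) = m g (x-z)^2/(2 L) - eps0 S E^2/(2 z):
F = lambda z, x: m*g/L*(x - z) - 0.5*eps0*S*E**2/z**2   # F = -dU/dz
z_c = (eps0*S*E**2*L/(m*g))**(1/3)         # rigidity d^2U/dz^2 vanishes here
x_c = brentq(lambda x: F(z_c, x), z_c, 10*z_c)          # force vanishes too
print(z_c/x_c)                             # -> 2/3, the pull-in ratio
print(eps0*E**2, 32/(27*np.pi)*m*g/(L*D**2)*x_c**3)     # leading-order check
\end{verbatim}
Within the plane approximation the two printed numbers coincide, i.e., $f_\mathrm{e}=0$; the Helmholtz edge corrections shift them by the small correction $f_\mathrm{e}$.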
\subsection{Magneto-static experiment} The magneto-static experiment is practically identical to the electrostatic one. The attracting metal disks are substituted with attracting coils with diameter $D_\mathrm{m}=2R_\mathrm{m}=65\,\mathrm{mm}$ and $N=50$~turns of 80~$\mu$m Cu wire, through which parallel currents $I$ flow. The most sensitive range of the used multimeters is 200~mA; therefore the currents for the magneto-static experiment are limited to this value. The equilibrium position $z$ of the pendulum perturbed by the magnetic attraction, with length $L_\mathrm{m} \approx 55\,\mathrm{cm}$ and coil mass $m_\mathrm{m}=1.18\,\mathrm{g}$, is determined by the minimum of the potential energy\cite{LL} \begin{eqnarray} && U_\mathrm{m}(z) = \frac{1}{2} \frac{m_\mathrm{m} g}{L_\mathrm{m}}(y_\mathrm{m} = x-z)^2 - M(z)I^2, \qquad \mbox{where:} \nonumber \\ && M(z)\equiv\frac{2 \pi R_\mathrm{m} N }{I} \left \{ A_\varphi \equiv \frac{\mu_0}{4 \pi} \frac{4NI}{\kappa} \left [ \left ( 1 - \frac{\kappa^2}{2} \right ) \mathrm{K}-\mathrm{E} \right ] \right \}, \nonumber \\ && \kappa=\frac{1}{\sqrt{1+\epsilon^2}}, \qquad \epsilon(z)=\frac{z}{D_\mathrm{m}}, \nonumber \\ && \mathrm{K}(\kappa^2) \equiv \int_0^{\pi/2} \frac{\mathrm{d}\theta}{\sqrt{ 1-\kappa^2 \sin^2{\theta} }}, \nonumber \\ && \mathrm{E}(\kappa^2) \equiv \int_0^{\pi/2} \sqrt{ 1-\kappa^2 \sin^2{\theta} } \, \mathrm{d}\theta, \nonumber \end{eqnarray} and by the zeroing of the force\cite{LL} \begin{eqnarray} && F_\mathrm{m}(z) = -\mathrm{d}_z U_\mathrm{m} = \frac{m_\mathrm{m} g}{L_\mathrm{m}}(x-z) + \left ( I^2 \mathrm{d}_z M = - 2 \pi R_\mathrm{m} NI B_r(z) \right )=0, \quad\mbox{where:} \nonumber \\ && B_r(z) \equiv -\mathrm{d}_z A_\varphi = \frac{\mu_0}{4 \pi} \frac{NI}{D_\mathrm{m}} \frac{4\epsilon}{\sqrt{1+\epsilon^2}} \left ( -\mathrm{K} + \frac{1+2 \epsilon^2}{2 \epsilon^2} \mathrm{E} \right ), \nonumber \\\nonumber && \frac{\mathrm{d}\mathrm{K}}{\kappa\mathrm{d} \kappa}= 2\frac{\mathrm{d}\mathrm{K}}{\mathrm{d} \kappa^2} =\frac{\mathrm{E}}{\kappa^2(1-\kappa^2)} -\frac{\mathrm{K}}{\kappa^2},\qquad \frac{\mathrm{d}\mathrm{E}}{\kappa\mathrm{d} \kappa}= 2\frac{\mathrm{d}\mathrm{E}}{\mathrm{d} \kappa^2} =\frac{\mathrm{E}-\mathrm{K}}{\kappa^2}. \end{eqnarray} All formulae are given in the most rigorous and logically sequential way possible; see, for example, any encyclopedia on theoretical physics.\cite{LL} The final formulae for the effective potential energy $U_\mathrm{m}$ and the force $F_\mathrm{m}$ are expressed through the mutual inductance $M$, the radial component $B_r$ of the magnetic field and the azimuthal component $A_\varphi$ of the vector potential. The force between two coaxial coils is derived in every complete text on electrodynamics. In most software systems the argument of the elliptic integrals $\mathrm{E}$ and $\mathrm{K}$ is $\kappa^2$. The mutual inductance between the coils can be determined experimentally by applying a current through one of the coils and measuring the electromotive voltage $\mathcal{E}_2 =-M \mathrm{d}_t I_1$ on the other: one method is, for example, to apply a DC current through one of the coils, switch it off quickly and measure the peak voltage on the other. As a rule, however, in practical realizations a harmonic AC current is applied and the voltage is measured by a lock-in amplifier, but those are technical details. The radial magnetic field $B_r$ is expressed through the $z$-derivative of the azimuthal component $A_\varphi$ of the vector potential. The minus sign in the effective potential energy $-MI^2$ has the same nature\cite{LL} as the minus sign of the effective electric potential energy $-C\mathcal{E}^2/2$. These expressions are straightforward to evaluate with standard special-function libraries, as sketched below.
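A minimal numerical sketch of $M(z)$ and $B_r(z)$, using \texttt{scipy}'s complete elliptic integrals (whose argument is indeed $\kappa^2$) and checking the force identity $I^2\mathrm{d}_zM=-2\pi R_\mathrm{m}NIB_r$ by a finite difference:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe     # argument is kappa^2

mu0, N, D = 4e-7*np.pi, 50, 0.065            # SI units, D = D_m = 2 R_m

def M(z):                                    # mutual inductance of the coils
    k2 = 1.0/(1.0 + (z/D)**2); k = np.sqrt(k2)
    return mu0*N**2*D*((1 - k2/2)*ellipk(k2) - ellipe(k2))/k

def Br(z, I):                                # radial magnetic field B_r(z)
    eps = z/D; k2 = 1.0/(1.0 + eps**2)
    return (mu0/(4*np.pi))*(N*I/D)*4*eps/np.sqrt(1 + eps**2)*(
        -ellipk(k2) + (1 + 2*eps**2)/(2*eps**2)*ellipe(k2))

z, I, h = 0.01, 0.2, 1e-7                    # test point, current, step
print(I**2*(M(z + h) - M(z - h))/(2*h))      # I^2 dM/dz
print(-np.pi*D*N*I*Br(z, I))                 # -2 pi R_m N I B_r, same number
\end{verbatim}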
The experimental method of the magneto-static experiment is slightly different. Instead of a fixed set of voltages $\mathcal{E}$ and a block movement changing $x$, we now fix $x$ and, with a voltage source and a potentiometer, vary the total current $I$ passing successively through the two closely separated coils, $\epsilon=z/D_\mathrm{m} \ll 1$. As the current, which is now the control parameter, is gradually increased, the small-oscillation frequency decreases, and at a definite critical current $I$ the system loses stability and the pendulum coil sticks to the one fixed on the block. The solution of the magneto-static problem \begin{equation} F_\mathrm{m}(z_\mathrm{m})=-\mathrm{d}_z U_\mathrm{m}=0 \quad \& \quad k_\mathrm{m}(z_\mathrm{m})=-\mathrm{d}_z F_\mathrm{m}=\mathrm{d}_z^2 U_\mathrm{m}=0 \label{minfl} \nonumber \end{equation} determines the distance $x_\mathrm{m}$ between the coils at the potential inflection point and gives the condition \begin{equation} \mu_0 I^2 =\mathcal{F}_\mathrm{m}\equiv \frac{m_\mathrm{m} g}{2 L_\mathrm{m} N^2 D_\mathrm{m}} x_\mathrm{m}^2 [1 + f_\mathrm{m}(x_\mathrm{m}/D_\mathrm{m})], \nonumber \end{equation} where the correction function $f_\mathrm{m}>0$ is depicted in Figure~\ref{fdelta} and tabulated in Table~\ref{tdelta}. The data processing amounts to fitting the experimental points by linear regression in the ($\mathcal{F}_\mathrm{m}$, $I^2$)~plane; a minimal sketch of this step is given below. The parameters of the magnetic experiment are similar to those of the electric one; see also the photo of the experimental setup.\cite{EPO4} Thus the constant $\mu_0$ is determined in an explicit form by the coefficient of the linear regression of the experimental data, just as $\varepsilon_0$ is in the electric experiment. \begin{center} \begin{table}[t] \begin{tabular}{| c | c || c | c || c | c || c | c || c | c |} \hline \boldmath$x/D$ & \boldmath$f(x/D)$ & \boldmath$x/D$ &\boldmath$f(x/D)$ & \boldmath$x/D$ & \boldmath$f(x/D)$ & \boldmath$x/D$ & \boldmath$f(x/D)$ & \boldmath$x/D$ & \boldmath$f(x/D)$ \\ \hline \input{ftable.txt} \hline \end{tabular} \caption{The correction function $f_\mathrm{m}(x_\mathrm{m}/D_\mathrm{m})$ tabulated as a function of the dimensionless ratio of the equilibrium displacement $x_\mathrm{m}$ and the coil diameter $D_\mathrm{m}$ (the index m is omitted for brevity).} \label{tdelta} \end{table} \end{center} \begin{figure}[t] \includegraphics[width=18cm]{./corr_func.eps} \caption{The correction function $f_\mathrm{m}(x_\mathrm{m}/D_\mathrm{m})$ as a function of the dimensionless ratio of the equilibrium displacement $x_\mathrm{m}$ and the coil diameter $D_\mathrm{m}$ (the index m is omitted for brevity).} \label{fdelta} \end{figure} For the determination of $\mu_0$ one can use linear regression or simply divide the right-hand side $\mathcal{F}_\mathrm{m}$ by the square of the current measured by the ammeter.
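The regression step itself is elementary; a sketch with synthetic numbers (made up for illustration, not EPO4 data):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
mu0_true = 4e-7*np.pi
I = np.array([0.05, 0.08, 0.11, 0.14, 0.17, 0.20])            # amperes
F_m = mu0_true*I**2*(1 + 0.01*rng.standard_normal(I.size))    # "measured"

slope = np.sum(F_m*I**2)/np.sum(I**4)   # least-squares line through the origin
print(slope, mu0_true)                  # the slope estimates mu0
\end{verbatim}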
At small values of the parameter $\delta_\mathrm{m}=x_\mathrm{m}/D_\mathrm{m}$, when the coils are separated by a distance much smaller than their diameter, the approximate formulas for the elliptic integrals\cite{JEL} \begin{eqnarray} && \mathrm{K} = \Lambda + \frac{1}{4} (\Lambda-1){\kappa^\prime}^2 + \frac{9}{64} \left ( \Lambda - \frac{7}{6} \right ) {\kappa^\prime}^4 +\frac{25}{256}\left(\Lambda - \frac{37}{30}\right){\kappa^\prime}^6 + \cdots, \nonumber \\ && \mathrm{E} = 1+\frac{1}{2} \left ( \Lambda - \frac{1}{2} \right ) {\kappa^\prime}^2 +\frac{3}{16}\left(\Lambda - \frac{13}{12}\right){\kappa^\prime}^4 +\frac{15}{128}\left(\Lambda - \frac{6}{5}\right){\kappa^\prime}^6 + \cdots, \nonumber \\ &&\kappa^\prime=\sqrt{1-\kappa^2}=4\mathrm{e}^{-\Lambda} =\frac{\epsilon}{\sqrt{1+\epsilon^2}} \ll1, \nonumber \\ && \Lambda=\ln{\left (\frac{4}{\kappa^\prime} \right )} =-\ln\epsilon+\frac12\ln(1+\epsilon^2)+\ln4 \nonumber \end{eqnarray} give \begin{equation} f_\mathrm{m}(\delta_\mathrm{m}) \approx \frac{1}{16} \left ( -5 + 6\ln{\frac{8}{\delta_\mathrm{m}}} \right ) \delta_\mathrm{m}^2 + \mathcal{O}(\delta_\mathrm{m}^4), \label{fcorr} \nonumber \end{equation} and for the system parameters $m_\mathrm{m}$ and $L_\mathrm{m}$ this approximation provides percent accuracy in the $\mu_0$ determination. The symbol $\mathcal O$, well known from mathematical analysis, indicates which power of the argument is neglected in the current approximation. The truncated expansions of $\mathrm{K}$ and $\mathrm{E}$ are easy to check against a numerical library, as sketched below.
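A quick check of the expansions against \texttt{scipy} (the library argument is $\kappa^2$); the printed residuals shrink rapidly as $\kappa^\prime\to 0$:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe      # argument is kappa^2

for kp in (0.3, 0.1, 0.03):                   # kp = kappa'
    m = 1 - kp**2
    L = np.log(4/kp)                          # Lambda
    K_ser = (L + (L - 1)*kp**2/4 + 9*(L - 7/6)*kp**4/64
             + 25*(L - 37/30)*kp**6/256)
    E_ser = (1 + (L - 1/2)*kp**2/2 + 3*(L - 13/12)*kp**4/16
             + 15*(L - 6/5)*kp**6/128)
    print(kp, K_ser - ellipk(m), E_ser - ellipe(m))
\end{verbatim}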
\subsection{Determination of the light velocity} With $\varepsilon_0$ and $\mu_0$ known, the velocity of light $c=1/\sqrt{\varepsilon_0 \mu_0}$ is also determined with percent accuracy; this is the accuracy of the standard multimeter for the current $I$ and voltage $\mathcal{E}$ measurements, as well as the precision of the distances measured with 0.5~mm accuracy. The largest error is in the measurement of the distance $x$, and it may be useful to add a magnifying glass to the experimental setup, to be used after the power source is shut off. Both methods have in common the absence of force measurements with an analytical (electronic) scale, which significantly lowers the price of the experimental setup and makes it suitable for popularization, even among teenage terminators of expensive equipment. Both the electrostatic and the magneto-static experiment also have in common the usage of catastrophe theory, which we discuss in the next section. \section{Catastrophe theory} The current in the magneto-static experiment can be fixed too, and then in both experiments the control parameter is the distance $x$ between the coils or the capacitor plates after the voltage is disconnected and the capacitor short-circuited. It is convenient to consider $x$ as a parameter of the dependence of the total potential energy $U(z,x)$ on the distance $z$ between the plates or coils in a switched-on circuit. Around the minimum we have \begin{equation} U(z) \approx \frac{1}{2} k(z_0,x)(z-z_0)^2. \nonumber \end{equation} The position of the minimum $z_0$ is determined by the zeroing of the force \begin{equation} F(z_0)=\left. -\mathrm{d}_z U(z) \right |_{z=z_0}=0 \nonumber \end{equation} and the small-oscillation frequency is determined by the second derivative at the minimum \begin{equation} \omega = \sqrt{k(z_0,x)/m}, \qquad k(z_0(x))=\left.\mathrm{d}_z^2U(z,x) \right|_{z=z_0} \ge 0. \nonumber \end{equation} Let us look at the behaviour of the second derivative $k(z_0(x))$, i.e., the rigidity of the system around this minimum, as the control parameter $x$ is varied. Gradually decreasing the parameter $x$, at some critical value $x_\mathrm{c}$ the second derivative at the minimum becomes zero, $k(z_0, x_\mathrm{c})=0$. If we analyse the potential energy as a function of the two variables $U(z,x)$, we have the mathematical problem of finding the solution $(z_\mathrm{c},x_\mathrm{c})$ of the system \begin{equation} \partial_z U(z,x)=0 \quad \& \quad \partial_z^2 U(z,x)=0. \nonumber \end{equation} In close proximity to the values of the variables determined in this way, the potential energy has the approximation \begin{eqnarray} && U = U_\mathrm{c} + U_{zx}(z-z_\mathrm{c})(x-x_\mathrm{c}) + \frac{1}{3!} U_{zzz}(z-z_\mathrm{c})^3, \nonumber \\ && U_{zx} = \partial_z \partial_x U(z_\mathrm{c},x_\mathrm{c}) < 0, \qquad U_{zzz}=\partial_z^3U(z_\mathrm{c},x_\mathrm{c})>0, \qquad U_\mathrm{c} = U(z_\mathrm{c},x_\mathrm{c}). \nonumber \end{eqnarray} Let us introduce the new variables \begin{equation} \overline{x}=z-z_\mathrm{c}, \qquad \overline{b}=-\frac{3! (U-U_\mathrm{c})}{U_{zzz}}, \qquad \overline{a}=\frac{3!U_{zx}}{U_{zzz}}(x-x_\mathrm{c}); \nonumber \end{equation} then the potential energy approximation takes the standard form of the canonical fold catastrophe from catastrophe theory \begin{equation} \overline{a}\,\overline{x}+\overline{x}^3+\overline{b} =\frac{\mathrm{d}}{\mathrm{d}\overline{x}} \left(\frac12\overline{a}\,\overline{x}^2+\frac14\overline{x}^4+\overline{b}\overline{x}\right)=0. \nonumber \end{equation} This fold in the space $(\overline{a},\overline{b},\overline{x})$ is presented 40~times, and the corresponding formula 12~times, in the well-known reference monograph on catastrophe theory and its applications by Tim~Poston and Ian~Stewart.\cite{PostStew} Let us review the terminology used. The transition of the pendulum, which at a critical value of the current $I$, the voltage $\mathcal{E}$ or the distance from equilibrium $x$ suddenly rushes towards the block, is an example of the so-called catastrophic jumps of Ren\'e Thom\cite{Thom} and Christopher Zeeman.\cite{ZeemanCM} The variables $x$, $\mathcal{E}$ or $I$ are called control variables (or control parameters) and $z$ is called a behaviour variable (or state variable). The catastrophic jumps occur when smooth variations of the controls cause a discontinuous change of the state. In other words, the variable $x$ is a control parameter and the distance $z$ between the coils or capacitor plates is a behaviour variable. The variable $z$ changes catastrophically when a smooth variation of $x$ passes through the critical value $x_\mathrm{c}$. The Taylor coefficients $U_{zx}$ and $U_{zzz}$ entering this reduction are easily computed symbolically, as sketched below.
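A computer-algebra sketch of this reduction for the plane-capacitor potential (with the abbreviation $A=\varepsilon_0 S\mathcal{E}^2/2$ kept symbolic); it locates $(z_\mathrm{c},x_\mathrm{c})$ and confirms the signs of $U_{zx}$ and $U_{zzz}$:
\begin{verbatim}
import sympy as sp

z, x, kap, A = sp.symbols('z x kappa A', positive=True)
U = kap*(x - z)**2/2 - A/z    # plane capacitor, kap = m g/L, A = eps0 S E^2/2

sol = sp.solve([sp.diff(U, z), sp.diff(U, z, 2)], [z, x], dict=True)[0]
print(sol)                                      # z_c = (2A/kap)**(1/3), x_c = 3 z_c/2
print(sp.diff(U, z, x).subs(sol))               # U_zx = -kappa < 0
print(sp.simplify(sp.diff(U, z, 3).subs(sol)))  # U_zzz = 6A/z_c**4 > 0
\end{verbatim}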
Without referring to the notions of catastrophe theory explicitly, such behaviour can be found in many physical problems: the stability of orbits in the field of a black hole (briefly mentioned below), the appearance of p-, d-, f- and g-electrons in atoms with different $Z$, the critical point, the rule of corresponding states and the Landau theory of second-order phase transitions, the plane flow of compressible gases; see the well-known Landau and Lifshitz encyclopaedia.\cite{LL} And there are applications in such fields as the heartbeat and the propagation of a nerve impulse.\cite{Zeeman,ODE} Landau's concept of describing phase transitions by a symmetry-breaking order parameter replaced a zoology-type science with a unified theory.\cite{PatashinskiPokrovsky} It is interesting that even biological phenomena can be described by differential equations similar to those of the kinetics of the order parameter.\cite{LL} Our experimental setup is to a large extent influenced by the Zeeman catastrophic machine\cite{ZeemanCM} and by Tim~Poston's work on a do-it-yourself catastrophe machine.\cite{Poston} In our machine the rubber elastics are replaced by the force lines of the electric and magnetic fields, in the same intuitive manner in which Faraday introduced force lines and the concept of a field into mathematical physics. The do-it-yourself statement does not refer to funding restrictions: we introduce a new idea for the usage of the notions of catastrophe theory in the methodology of a student laboratory. As for the high school students, they are potential terminators of precise scales. The theory of the described experiment is related to the analysis of the potential surfaces derived in the appendices for the electric $W_\mathrm{e}$ and magnetic $W_\mathrm{m}$ problems. In Fig.~\ref{surfaces} the surfaces are depicted in the dimensionless variables \begin{eqnarray} && \nonumber W_\mathrm{e}(Z,X)=\frac12 (X-Z)^2 - \frac{1}{2Z} ,\\ && \nonumber W_\mathrm{g}(Z,X) =\sqrt{\left(1-\frac1{Z}\right)\left(1+\frac{X^2}{Z^2}\right)} ,\\ && \nonumber W_\mathrm{m}(Z,X)=\frac12 (X-Z)^2 +\ln Z. \end{eqnarray} For the gravitational problem of the stability of a circular orbit around a black hole\cite{LL} $W_\mathrm{g}=U/mc^2$, $Z=r/r_g$, $X=M/mcr_g$ is the dimensionless angular momentum and $r_g$ is the Schwarzschild radius. We refer to black holes because the ``collapse'' of the plates of the capacitor or the joining of the coils of the magnetic pendulum is analogous to the recently observed merging of black holes,\cite{Abbott} which can approximately be described using the fold instability. The sections of $W(Z;X)$ in Fig.~\ref{curves} are given for 3 typical values: $X<X_\mathrm{c}$, $X=X_\mathrm{c}$ and $X>X_\mathrm{c}$. \begin{figure}[h] \includegraphics[width=18cm]{./surfaces.eps} \caption{ The potential surfaces $W_\mathrm{e}$ (left), $W_\mathrm{g}$ (center) and $W_\mathrm{m}$ (right). The right branch of the curves (blue in the colour version) represents the stable local minima of the potential energy as a function of $Z$ at fixed values of $X$. The left branch (green in the colour version) shows the local unstable maxima. Those two branches join at the critical point (red in the colour version), at which the minima and maxima annihilate at the critical value $X_\mathrm{c}$. For the gravitational problem the local extrema are depicted in magenta in the colour version. The 3 parallel black curves over all 3 surfaces are copied in the next Fig.~\ref{curves}.
Those curves demonstrate local extrema for $X>X_\mathrm{c}$, monotonous dependence for $X<X_\mathrm{c}$, and, most importantly, the catastrophic behaviour at the critical value $X=X_\mathrm{c}$. } \label{surfaces} \end{figure} \begin{figure}[h] \includegraphics[width=18cm]{./curves.eps} \caption{Three sections of the potential surfaces close to the critical point for the electric $W_\mathrm{e}$ (left), gravitational $W_\mathrm{g}$ (center) and magnetic $W_\mathrm{m}$ (right) problem. The points of the unstable maxima are marked in green in the colour version, while the points of the local minima are marked in blue for $X>X_\mathrm{c}$. For the critical value $X=X_\mathrm{c}$ the maxima and minima merge into an inflection point, marked in red in the colour version. For $X<X_\mathrm{c}$ the potential curves are monotonous, without local extrema. The zeros of the second derivative between the minimum and the maximum of the potential curves are shown in purple in the colour version.} \label{curves} \end{figure} \begin{figure}[h] \includegraphics{./artwork.eps} \caption{Scheme of the method for the electrostatic measurement; a picture of the experimental setup is given in the EPO4 problem. Under the action of the electric field created by the batteries, the movable plate of the capacitor deforms the spring with effective rigidity $\varkappa=m_\mathrm{e}g/L_\mathrm{e}$. In the equilibrium position $z(x)$ ``the elastic force'' $F_\mathrm{g}=\varkappa(x-z)$ is compensated by the electric force $F_\mathrm{e}=-\frac12 \varepsilon_0 (E=\mathcal{E}/z)^2(S=\pi R_\mathrm{e}^2)$. We prefer an expression in which one can easily trace the origin of the different multipliers. We gradually decrease $x$, and at some critical value $x_\mathrm{c}$ the equilibrium position $z_\mathrm{c} \approx 2x_\mathrm{c}/3$ loses stability and a catastrophe happens. The pendulum (the suspended plate of the capacitor) suddenly sticks to the fixed one at $z=0$. When the switch is changed to the upper position, the pendulum minimizes its gravitational energy $\varkappa (x_\mathrm{c}-z)^2/2$ only, and $z=x_\mathrm{c}$. In a good approximation $x_\mathrm{c}^3 \propto \mathcal{E}^2$. In the magneto-static experiment the plates are substituted with coils with parallel currents, and $x_\mathrm{c}^2 \propto I^2$. A similar catastrophe happens with a circular orbit around a black hole too.} \label{scheme} \end{figure} The St.~Clement of Ohrid University students use a catastrophe machine for the measurement of the fundamental constant velocity of light. We review some technical details in the next section. \section{Technical details} In the numerical analysis of the problem, when the total potential energies of the electrostatic and the magneto-static problem are programmed as functions $U(z,x)$, the inflection point is found via a solution of the corresponding system for the zeroing of the force $F$ and of the rigidity of the system $k$. With the critical values of the current $I$ or the voltage $\mathcal{E}$ found in this way, the universal correction functions of the ratio $\delta=x/D$ can be determined from the numerical solution \begin{eqnarray} && f_\mathrm{e}(\delta_\mathrm{e}=x_\mathrm{e}/D_\mathrm{e} ) = 1 - \frac{27 \pi L_\mathrm{e} D_\mathrm{e}^2 \varepsilon_0 \mathcal{E}^2}{32 m_\mathrm{e} g x_\mathrm{e}^3}, \nonumber \\ && f_\mathrm{m}(\delta_\mathrm{m}=x_\mathrm{m}/D_\mathrm{m} ) = \frac{2 L_\mathrm{m} N^2 D_\mathrm{m}\, \mu_0 I^2}{m_\mathrm{m} g x_\mathrm{m}^2}-1. \nonumber \end{eqnarray} There are, of course, analytical methods that give power series; the first correction was given as homework to the participating students in EPO4, with a Sommerfeld prize of 137~DM. The high school students also had to derive the main term, with $f_\mathrm{e} \approx 0$ and $f_\mathrm{m} \approx 0$, during the Olympiad.\cite{EPO4} \section{History notes} Immediately after realizing that there is a displacement-current term in the magnetic field equation, $\textbf{j}+\varepsilon_0 \partial_t \textbf{E}$, Maxwell understood that the velocity of light can be determined from purely static, separate measurements of the electric and magnetic forces connected with the electromagnetic stress tensor and its energy density $\frac12 \varepsilon_0 E^2+\frac{1}{2}B^2/\mu_0$.\cite{Maxwell} If the product of the unit of current $\mathrm{e}_{_I}$ and the unit of voltage $\mathrm{e}_\mathcal{_E}$ gives the mechanical unit of power, then $c=1/\sqrt{\varepsilon_0 \mu_0}$ in any choice of units. The numerical values of $\varepsilon_0$ and $\mu_0$ are a matter of choice and convenience; for instance, in Gaussian units $4 \pi \varepsilon_0=1$ and $\mu_0/4 \pi=1$, and these relict multipliers participate in our formulas. In Lorentz--Heaviside units $\varepsilon_0=1$, $\mu_0=1$ and naturally $c=1$. This is the practice in modern metrology: the velocity of light has not been measured for a long time, and the convention $c=299~792~458$~m/s is used.~\cite{Mohr} The unit of length, the meter, is redefined through the fixed time standard. The same can be said for the Ampere: the unit of current was fixed in 1948 so that $\mu_0=4\pi\times10^{-7}\;\mathrm{NA^{-2}}$. In this sense, ``measurement of the speed of light'' only marks an important stage of the development of physics. In common language usage, according for example to Google, ``speed of light'' is almost twenty times more frequent than ``velocity of light'', but in science, in the titles of arXiv e-prints, the frequencies are comparable. But even now, when the students measure the mechanical force of the electric field, the tutorial\cite{MIT} says: \textit{Congratulations you have just measured one of the fundamental constants of nature!} For $\mu_0$, again,\cite{MIT} with one extra comma: \textit{Congratulations, you have just measured one of the fundamental constants of nature!} As in biology, the individual development repeats the evolutionary one. That is why we are telling the students that they ``measure'' fundamental constants, and not: \textit{Congratulations, your multimeter is still OK!} The purpose of our methodical experiment is to guide the students through the development of electrodynamics, using for fun a catastrophe machine that can be built in a day, costs \pounds20 and has percent accuracy in the case of precise work. We use the catastrophe machine not because of funding restrictions but to demonstrate how a good mathematical idea\cite{Poston} can be used in a student laboratory experiment. Organizing an Olympiad with 137 participants and giving the setup to every one of them, we had no possibility of buying an electronic scale for everybody. That is why we decided to apply catastrophe theory, which requires measuring only distances, not forces.
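As an illustration, the sketch below performs this numerical solution for the magneto-static problem in the dimensionless variables $Z=z/D_\mathrm{m}$, $X=x/D_\mathrm{m}$, in which the potential is $W=(X-Z)^2/2-c\,\widehat{M}(Z)$ with $\widehat{M}=M/(\mu_0N^2D_\mathrm{m})$ and the single combination $c=\mu_0I^2N^2L_\mathrm{m}/(m_\mathrm{m}gD_\mathrm{m})$; the system $F=0$, $k=0$ is solved parametrically in the critical gap $Z$, and numerical second derivatives suffice for a sketch:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe

def Mhat(Z):                        # dimensionless mutual inductance
    k2 = 1.0/(1.0 + Z**2); k = np.sqrt(k2)
    return ((1 - k2/2)*ellipk(k2) - ellipe(k2))/k

def fm(Z, h=1e-6):                  # parametrized by the critical gap Z
    M1 = (Mhat(Z + h) - Mhat(Z - h))/(2*h)
    M2 = (Mhat(Z + h) - 2*Mhat(Z) + Mhat(Z - h))/h**2
    c = 1.0/M2                      # rigidity condition 1 - c Mhat'' = 0
    X = Z - M1/M2                   # force condition (X - Z) = -c Mhat'
    return X, 2*c/X**2 - 1          # delta = x_m/D_m and f_m(delta)

for Z in (0.02, 0.05, 0.1, 0.2):
    print(*fm(Z))                   # columns: delta, f_m(delta)
\end{verbatim}
For small $Z$ the printed $f_\mathrm{m}$ approaches the small-$\delta$ expansion given above.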
Of course, for student labs the usage of force measurements with a balance is a tradition coming from the time of Maxwell.\cite{MaxwellExp} Let us mention the setups of Berkeley University,\cite{Purcell} MIT,\cite{MIT} the University of Sofia,\cite{Gourev} and the Gymnasium in Breziche.\cite{Breziche} \section{Conclusions} This experimental setup is a part of the program of the Faculty of Physics of the St.~Clement of Ohrid University for the development of cheap experimental setups for measurements of fundamental constants; see, for example, the description of the setup for the measurement of the Planck constant by electrons\cite{PlanckLandauer} and the measurement of the speed of light by analytical scales.\cite{Gourev} The experimental setups can be constructed even in high (secondary) school laboratories, and the corresponding measurements can be conducted by high (secondary) school students. The authors are grateful to the 137~participants (students and teachers) in EPO4, where the experimental setup described in this article was used and a dozen students measured $c$ and derived the formulas.~\cite{EPO4} In general, we can conclude that the notions of catastrophe theory can be very useful for the invention of new set-ups in a student laboratory of physics. This is a style of thinking applicable to a broad range of problems in science and technology. \acknowledgments{} The Olympiad was held with the cooperation of the Faculty of Physics of the St.~Clement of Ohrid University at Sofia -- special gratitude to the dean prof.~A.~Dreischuh, and also to the president of the Macedonian physical society assoc.~prof.~B.~Mitrevski and the president of the Balkan Physical Union acad.~A.~Petrov. We, the EPO4 organizers, are grateful to the high school participants who managed to derive the $\varepsilon_0 \mathcal{E}^2$ and $\mu_0 I^2$ formulas without the correction functions $f_\mathrm{e}$ and $f_\mathrm{m}$. The authors, including the EPO4 champion Dejan Maksimovski, who measured the velocity of light with 1~\% accuracy, are thankful to the university students from Skopje Biljana Mitreska and Ljupcho Petrov, who, during the night after the experimental part of the Olympiad, solved a significant part of the correction functions $f_\mathrm{e}$ and $f_\mathrm{m}$ derived here and used for the accurate determination of $\varepsilon_0$, $\mu_0$ and $c$ in the electrostatic and magneto-static experiments.
\section{Introduction} Integrable systems of the KP (Kadomtsev--Petviashvili) type hierarchy of partial differential equations, which corresponds to the infinite-dimensional Lie algebra of type A, and of its type B variant, the BKP hierarchy, have as solutions renowned families of symmetric functions -- Schur polynomials in the KP case and Schur $Q$-polynomials in the BKP case \cite{DKTIV,DKTA,DKTII, JM,JMbook1,Sato,You1,You2}, etc. In this note we show that multiparameter Schur $Q$-functions also provide solutions of the BKP hierarchy. Multiparameter Schur $Q$-functions $Q_\lambda ^{(a)}$ were introduced and studied combinatorially in \cite{Iv1}. These symmetric functions are interpolation analogues of the classical Schur $Q$-functions depending on a sequence of complex-valued parameters $a=(a_0, a_1,\dots)$. The definition of multiparameter Schur $Q$-functions is reproduced in (\ref{defQa}). {\it Classical} Schur $Q$-functions correspond to $a=(0,0,0,\dots)$, and with the evaluation $a=(0, 1,2, 3,\dots)$ the multiparameter Schur $Q$-functions are called {\it factorial} Schur $Q$-functions. These families of symmetric functions have proved to be useful in the study of a number of questions of representation theory and algebraic geometry. Here are a few examples. The authors of \cite{ASS, Naz,Serg} described Capelli polynomials of the queer Lie superalgebra, which form a distinguished family of super-polynomial differential operators indexed by strict partitions acting on an associative superalgebra. The eigenvalues of these Capelli polynomials are expressed through the factorial Schur $Q$-functions. In \cite{Ikeda, Ikeda2} the equivariant cohomology of a Lagrangian Grassmannian of symplectic or orthogonal type is studied. The restrictions of Schubert classes to the set of points fixed under the action of a maximal torus of the symplectic group are calculated in terms of factorial symmetric functions. Further, in \cite{Leon1} factorial Schur $Q$-functions are used to write generators and relations for the equivariant quantum cohomology rings of the maximal isotropic Grassmannians of types~B,~C and~D. In \cite{Henry} the center of the twisted version of Khovanov's Heisenberg category is identified with the algebra generated by classical Schur $Q$-functions (denoted as $\mathcal{B} _{\rm odd}$ in the exposition below), and factorial Schur $Q$-functions are described as closed diagrams of this category. The goal of this note is to show that multiparameter Schur $Q$-functions $Q_\lambda ^{(a)}$ are solutions of the BKP hierarchy. The origin of this phenomenon lies in the fact, proved in \cite{Kor}, that the generating functions of multiparameter Schur $Q$-functions and of classical Schur $Q$-functions coincide. While the BKP hierarchy is described in a wide range of literature on integrable systems and solitons, for the completeness of the exposition and for the convenience of the reader we formulate the whole setting of the BKP hierarchy in terms of generating functions of symmetric functions, with the neutral fermions bilinear identity~(\ref{binBKP}) as a starting point. We avoid using any facts other than the well-known properties of symmetric functions that can be found in the classical monograph~\cite{Md}, and throughout the text we provide references to the corresponding chapters and examples of that monograph. It is worth mentioning that a formulation of the KP and the BKP integrable systems solely in terms of symmetric functions can be found, e.g., in~\cite{JY}.
The authors of \cite{JY} start with the bilinear identities in integral form; then, using the Cauchy-type orthogonality properties of symmetric functions (cf.~\cite[Chapter~III, equation~(8.13)]{Md}), they arrive at Pl\"ucker-type relations, and the latter are transformed into the collection of partial differential equations in Hirota derivatives that constitute the hierarchy. As mentioned above, our route is traced differently, employing the properties of generating functions of complete and elementary symmetric functions and power sums. We obtain the differential equations of the hierarchy in Hirota form as coefficients of Taylor expansions. One of the advantages of this approach is that it directly addresses the corresponding vertex operator actions, since the latter are also `generating functions' (formal distributions). The paper is organized as follows. In Section~\ref{section2} we recall some facts about complete and elementary symmetric functions, power sums and classical Schur $Q$-functions. In Section~\ref{section3} we describe the action of neutral fermions on the space generated by classical Schur $Q$-functions. In Section~\ref{section4} we review properties of generating functions for multiplication operators and the corresponding adjoint operators and deduce the vertex operator form of the formal distribution of neutral fermions. In Section~\ref{section5} we review all the steps of recovering the BKP hierarchy of partial differential equations in Hirota form from the neutral fermions bilinear identity. In Section~\ref{section6} we make a simple observation that immediately shows that classical Schur $Q$-functions are solutions of the BKP hierarchy (which recovers the result of~\cite{You1}). In Section~\ref{multi_sec} we introduce multiparameter Schur $Q$-functions and, using the observation of Section~\ref{section6}, we show that $Q_\lambda ^{(a)}$ are also solutions of the BKP hierarchy. \section[Schur $Q$-functions]{Schur $\boldsymbol{Q}$-functions}\label{section2} Let $\mathcal{B} $ be the ring of symmetric functions in variables $(x_1, x_2, \dots)$. Consider the families of the following symmetric functions: \begin{alignat*}{3} &\text{elementary symmetric functions}\qquad && \left\{e_k=\sum_{ i_1<\dots < i_k} x_{i_1}\cdots x_{i_k}\,| \, {k=0,1,\dots}\right\}, & \\ &\text{complete symmetric functions} \qquad && \left\{h_k=\sum_{ i_1\le \dots \le i_k} x_{i_1}\cdots x_{i_k}\,|\, {k=0,1,\dots}\right\}, \\ &\text{symmetric power sums} \qquad && \left\{p_k=\sum x_i^k \,|\,{k=0,1,\dots }\right\}. \end{alignat*} We set $e_k=h_k=0$ for $k<0$. It is well known \cite[Chapter~I.2]{Md} that each of these families generates $\mathcal{B} $ as a polynomial ring: \begin{gather*} \mathcal{B} ={\mathbb C}[p_1, p_2, p_3,\dots]= {\mathbb C}[e_1, e_2, e_3,\dots]= {\mathbb C}[h_1, h_2, h_3,\dots]. \end{gather*} Combine the families $h_k$, $e_k$, $p_k $ into the generating functions \begin{gather*} H(u)=\sum_{k\ge 0} \frac{h_k}{u^k},\qquad E(u)=\sum_{k\ge 0} \frac{e_k}{u^k}, \qquad P(u)=\sum_{k\ge 1} \frac{p_k}{u^k}. \end{gather*} The following facts are well known \cite[Chapter~I.2]{Md}. \begin{Lemma}\label{Lemma1} \begin{gather*} H(u)=\prod_i \frac{1}{1-x_i/u},\qquad E(u)=\prod_i \left(1+x_i/u\right),\qquad E(-u)H(u)=1,\\ H(u)=\exp\left(\sum_{n\ge1}\frac{1}{n} \frac{p_n}{u^{n}}\right),\qquad E(u)=\exp\left(\sum_{n\ge1}\frac{(-1)^{n-1}}{n} \frac{p_{n}}{u^n}\right). \end{gather*} \end{Lemma} These classical identities are easy to verify in finitely many variables, as sketched below.
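A small sympy sanity check (three variables, expansion in $t=1/u$ up to order $t^6$) of $E(-u)H(u)=1$ and of the exponential formula for $H(u)$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1:4')                   # three variables suffice to illustrate
t = sp.symbols('t')                      # t plays the role of 1/u

H = sp.prod([1/(1 - xi*t) for xi in x])  # H(u)
E = sp.prod([1 + xi*t for xi in x])      # E(u)
print(sp.simplify(H*E.subs(t, -t)))      # E(-u) H(u) = 1

p = lambda n: sum(xi**n for xi in x)     # power sums p_n
logH = sum(p(n)*t**n/sp.Integer(n) for n in range(1, 7))
print(sp.simplify(sp.series(sp.log(H) - logH, t, 0, 7).removeO()))   # -> 0
\end{verbatim}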
We introduce one more family of symmetric functions $\{Q_k=Q_k(x_1, x_2,\dots)\}$ with $k=0,1,\dots$ as the coefficients of the generating function \begin{gather}\label{Qk1} Q(u)=\sum_{k\ge 0} \frac{Q_k}{u^k}=\prod_i \frac{u+x_i}{u-x_i}. \end{gather} From Lemma \ref{Lemma1} and (\ref{Qk1}) we immediately get the relations of the next lemma. \begin{Lemma}\label{Lemma2} \begin{gather*} Q(u) = E(u) H(u)= R(u)^2, \qquad R(u)=\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{p_{n} }{nu^{n}}\right), \end{gather*} where ${\mathbb N}_{\rm odd}=\{1,3,5,\dots\}$. \end{Lemma} Note that $Q(u) Q(-u)=1$, which implies that $Q_r$ with even $r$ can be expressed algebraically through $Q_r$ with odd $r$: \begin{gather*} Q_{2m}=\sum_{r=1}^{m-1}(-1)^{r-1} Q_rQ_{2m-r} +\frac{1}{2} (-1)^{m-1}Q^2_m. \end{gather*} The first instances of these relations are easy to check directly, as in the sketch below.
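A small sympy sketch expanding the generating function (\ref{Qk1}) in three variables and checking the first two relations, $Q_2=Q_1^2/2$ and $Q_4=Q_1Q_3-Q_2^2/2$:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1:4')
t = sp.symbols('t')                       # t = 1/u
n = 7

Qgen = sp.prod([(1 + xi*t)/(1 - xi*t) for xi in x])        # Q(u)
ser = sp.series(Qgen, t, 0, n).removeO()
Q = [sp.expand(ser.coeff(t, k)) for k in range(n)]         # Q_0, ..., Q_6

print(sp.expand(Q[2] - Q[1]**2/2))                         # -> 0
print(sp.expand(Q[4] - Q[1]*Q[3] + Q[2]**2/2))             # -> 0
\end{verbatim}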
More generally, Schur $Q$-functions $Q_\lambda$ labeled by strict partitions are defined as a specialization of Hall--Littlewood polynomials \cite[Chapter~III.2]{Md}. \begin{Definition}Let $\lambda=(\lambda_1>\lambda_2>\dots> \lambda_l)$ be a strict partition and let $l\le N$. The Schur $Q$-polynomial $Q_\lambda (x_1,\dots, x_N)$ is the symmetric polynomial in the variables $x_i$ defined by the formula \begin{gather}\label{defQ} Q_\lambda (x_1,\dots, x_N)=\frac{2^l}{(N-l)!} \sum_{\sigma\in S_N} \prod_{i=1}^{l}x_{\sigma(i)}^{\lambda_i} \prod_{i\le l,\, i<j\le N}\frac{x_{\sigma(i)}+x_{\sigma(j)}}{x_{\sigma(i)}- x_{\sigma(j)}}. \end{gather} \end{Definition} Alternatively, the Schur $Q$-polynomial $Q_\lambda =Q_\lambda (x_1,\dots, x_N)$ for $N>l$ is the coefficient of ${u_1^{-\lambda_1}\cdots u_l^{-\lambda_l}}$ in the formal generating function \begin{gather}\label{genQ} Q(u_1,\dots, u_l)=\sum_{\lambda_1,\dots, \lambda_l\in {\mathbb Z}} \frac{Q_\lambda }{u_1^{\lambda_1}\cdots u_l^{\lambda_l}}=\prod_{1\le i<j\le l}\frac{u_j-u_i}{u_j+u_i} \prod_{i=1}^{l} Q(u_i), \end{gather} where it is understood that \begin{gather*} \frac{u_j-u_i}{u_j+u_i}= 1+2\sum_{r\ge 1}(-1)^r u_i ^{r} u_j^{-r}, \end{gather*} and $Q(u)$ is given by (\ref{Qk1}) \cite[Chapter III, equation~(8.8)]{Md}. Schur $Q$-polynomials have a stabilization property; hence, one can omit the number $N$ of variables $x_i$ as long as it is not less than the length of the partition $\lambda$, and consider $Q_\lambda$ as functions of infinitely many variables $(x_1, x_2, \dots )$. \section[Action of neutral fermions on bosonic space $\mathcal{B} _{\rm odd}$]{Action of neutral fermions on bosonic space $\boldsymbol{\mathcal{B} _{\rm odd}}$}\label{section3} Consider the subalgebra $\mathcal{B} _{\rm odd}$ of $\mathcal{B} $ generated by odd ordinary Schur $Q$-functions: $\mathcal{B} _{\rm odd}= {\mathbb C}[Q_1, Q_3,\dots]$. It is known that $\mathcal{B} _{\rm odd}$ is also a polynomial algebra in odd power sums, $\mathcal{B} _{\rm odd}= {\mathbb C}[p_1, p_3,\dots]$, and that Schur $Q$-functions $Q_\lambda$ labeled by strict partitions constitute a linear basis of $\mathcal{B} _{\rm odd}$ \cite[Chapter~III.8, equation~(8.9)]{Md}. Define operators $\{\varphi_k\}_{k\in {\mathbb Z}}$ acting on the coefficients of the generating functions $Q(u_1,\dots, u_l)$ by the rule \begin{gather}\label {phi} \Phi(v) Q(u_1,\dots, u_l)= Q(v, u_1,\dots, u_l) \end{gather} with $\Phi(v)=\sum\limits_{m\in {\mathbb Z}}{\varphi_m}{v^{-m}}$. Then in the expansion~(\ref{genQ}) \begin{gather*} \varphi_m\colon \ Q_\lambda \mapsto Q_{(m,\lambda)}. \end{gather*} Observe that from (\ref{genQ}) \begin{gather*} ( \Phi(u)\Phi(v) + \Phi(v)\Phi(u) )Q(u_1,\dots, u_l)\\ \qquad{} =2\left(1+\sum_{r\ge 1}(-1)^r u ^{r} v^{-r} +\sum_{r\ge 1}(-1)^r v ^{r} u^{-r}\right) A(u,v, u_1,\dots, u_l)\\ \qquad{} =2\sum_{r\in{\mathbb Z}} \left(\frac{-u}{v}\right)^r A(u,v, u_1,\dots, u_l), \end{gather*} where \begin{gather*} A(u,v, u_1,\dots, u_l)=\prod_{1\le j\le l}\frac{(u_j-u)(u_j-v)}{(u_j+u)(u_j+v)} Q(u) Q(v) Q(u_1,\dots, u_l). \end{gather*} Using that $\delta(u,v)=\sum\limits_{r\in{\mathbb Z}} {u^r}{v^{-(r+1)}}$ is a formal delta distribution with the property $ \delta(u,v) a(u)= \delta(u,v)a(v)$ for any formal distribution $a(u)=\sum\limits_{n\in{\mathbb Z}} a_nu^n$, and that $Q(u)Q(-u)=1$, we get \begin{gather}\label{rel1} \left( \Phi(u)\Phi(v) + \Phi(v)\Phi(u) \right) Q(u_1,\dots, u_l)= 2v \delta(-u,v) Q(u_1,\dots, u_l). \end{gather} Since the coefficients of the expansion of $Q(u_1,\dots, u_l)$ in powers of $u_1,\dots, u_l$ include the Schur $Q$-functions $Q_\lambda$, and the latter form a linear basis of $\mathcal{B} _{\rm odd}$, it follows that (\ref{phi}) provides the action of well-defined operators $\{\varphi_k\}_{k\in {\mathbb Z}}$ on $\mathcal{B} _{\rm odd}$: \begin{gather*} \varphi_{k} (Q_\lambda) = Q_{(k,\lambda)}. \end{gather*} Relation (\ref{rel1}) on generating functions is equivalent to the commutation relations \begin{gather}\label{neut1} [\varphi_m, \varphi_n]_+= 2(-1)^m \delta_{m+n,0}\qquad \text{for}\quad m, n\in {\mathbb Z}. \end{gather} Thus, the operators $\{\varphi_i\}_{i\in {\mathbb Z}}$ and ${1}$ provide the action of the Clifford algebra ${\rm Cl}_\varphi$ of neutral fermions on the Fock space $\mathcal{B} _{\rm odd}$. Note that for any strict partition $\lambda=(\lambda_1>\lambda_2>\dots >\lambda_l)$ \begin{gather}\label{ver1} Q_{(\lambda_1,\dots \lambda_l)}= \varphi_{\lambda_1}\cdots \varphi_{\lambda_l} ( 1), \end{gather} or in terms of generating functions, \begin{gather}\label{ver2} Q(u_1,\dots, u_l)= \Phi(u_1)\cdots \Phi(u_l) (1). \end{gather} Formulae (\ref{ver1}), (\ref{ver2}) are sometimes called the vertex operator realization of Schur $Q$-functions. For positive indices the central term of (\ref{neut1}) never contributes, so any composite index $Q_{(\alpha_1,\dots,\alpha_l)}$ is reduced to the basis labeled by strict partitions by pure antisymmetry, as in the sketch below.
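A minimal plain-Python sketch of this straightening (the helper \texttt{straighten} is ours, written for illustration only):
\begin{verbatim}
def straighten(alpha):
    """Return (sign, lam) with Q_alpha = sign * Q_lam and lam strictly
    decreasing; repeated positive indices give Q_alpha = 0."""
    a, sign = list(alpha), 1
    for i in range(len(a)):                  # bubble sort, tracking the sign
        for j in range(len(a) - 1 - i):
            if a[j] < a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                sign = -sign
    if any(a[i] == a[i + 1] for i in range(len(a) - 1)):
        return 0, ()                         # Q_(...,a,a,...) = 0
    return sign, tuple(a)

print(straighten((1, 4, 2)))    # (1, (4, 2, 1)):  Q_(1,4,2) = Q_(4,2,1)
print(straighten((3, 3)))       # (0, ()):         Q_(3,3) = 0
\end{verbatim}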
\section[Vertex operator form of formal distribution of neutral fermions]{Vertex operator form of formal distribution\\ of neutral fermions}\label{section4} It will be convenient for us to consider $\mathcal{B} _{\rm odd}$ as a subring of the ring of symmetric functions~$\mathcal{B} $. This allows us to recover the celebrated vertex operator form of the formal distribution of neutral fermions $\Phi(u)$ from the no less celebrated properties of generating functions of complete and elementary symmetric functions. All of these properties are discussed in \cite[Chapter~I]{Md}. The ring of symmetric functions $\mathcal{B} $ possesses a bilinear form $(\cdot,\cdot)$ \cite[Chapter~I, equation~(4.5)]{Md} defined on the linear basis of monomials of power sums labeled by partitions~$\lambda$ and~$\mu$ as \begin{gather*}(p_{\lambda_1}\cdots p_{\lambda_l},p_{\mu_1}\cdots p_{\mu_l})=z_{\lambda}\delta_{\lambda,\mu} ,\end{gather*} where $z_\lambda=\prod i^{m_i} m_i!$ and $m_i=m_i(\lambda)$ is the number of parts of~$\lambda$ equal to~$i$. We will use this form and its restriction to $\mathcal{B} _{\rm odd}$ to define the adjoint operators\footnote{Traditionally, one uses the rescaled form on $\mathcal{B} _{\rm odd}$ defined as $(p_{\lambda}, p_{\mu})=2^{-l(\lambda)}z_{\lambda}\delta_{\lambda,\mu},$ where $l(\lambda)$ is the number of parts of $\lambda$, but rescaling is not necessary for our purposes, since in the rescaled form $p_n^\perp= n/2\cdot\partial /\partial{p_n}$ (see \cite[Chapter~III.8, Example~11]{Md}).} of the multiplication operators. By definition, given an element $f\in \mathcal{B} $, the operator $f^{\perp}$ adjoint to the operator of multiplication by $f$ is given by the rule \begin{gather*} \big(f^{\perp} g, h\big)= (g,fh)\qquad \text{for any} \quad g,h\in \mathcal{B} . \end{gather*} \cite[Chapter I.5, Example 3]{Md} contains the following statement. Consider a symmetric function $ f=f(p_1,p_2,\dots) $ expressed as a polynomial in the power sums~$p_i$. Then the operator on $\mathcal{B} $ adjoint to the multiplication operator by~$f$ is given by \begin{gather}\label{padj} f^\perp= f\left(\frac{\partial}{\partial{p_1}},\frac{2\partial}{\partial{p_2}},\frac{3\partial}{\partial{p_3}}, \dots\right). \end{gather} In particular, $p_n^\perp= n\partial /\partial{p_n}$. Combine the adjoint operators of the families $h_k$, $e_k$, $p_k $ and $Q_k$ into the generating functions \begin{gather*} H^{\perp}(u)=\sum_{k\ge 0} {h_k^{\perp}}{u^k},\qquad E^{\perp}(u)=\sum_{k\ge 0} {e_k^{\perp}}{u^k},\\ P^{\perp}(u)=\sum_{k\ge 1} {p_k^{\perp}}{u^k},\qquad Q^{\perp}(u)=\sum_{k\ge 0} {Q_k ^{\perp}}{u^k}. \end{gather*} Then (\ref{padj}) immediately implies the following relations. \begin{Lemma} \begin{gather*} H^{\perp}(u)=\exp\left(\sum_{n\ge1}\frac{\partial}{\partial p_n} {u^{n}}\right),\qquad E^{\perp}(u)=\exp\left(\sum_{n\ge1}{(-1)^{n-1}}\frac{\partial}{\partial p_n} {u^{n}}\right), \\ Q^{\perp}(u) = E^{\perp}(u) H^{\perp}(u)= R^{\perp}(u)^2, \qquad R^{\perp}(u)=\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}} \frac{\partial}{\partial p_n}{u^{n}}\right), \end{gather*} where ${\mathbb N}_{\rm odd}=\{1,3,5,\dots\}$. \end{Lemma} The proof of the next lemma is outlined in \cite[Chapter~I.5, Example~29]{Md}. \begin{Lemma}The following commutation relations on generating functions of multiplication and adjoint operators acting on~$\mathcal{B} $ hold: \begin{gather*} H^{\perp}(u) \circ H(v) = (1-u/v)^{-1}H(v) \circ H^{\perp}(u),\\ H^{\perp}(u) \circ E(v) = (1+u/v)E(v) \circ H^{\perp}(u),\\ E^{\perp}(u) \circ H(v) = (1+u/v) H(v) \circ E^{\perp}(u),\\ E^{\perp}(u) \circ E(v) = (1-u/v)^{-1}E(v) \circ E^{\perp}(u). \end{gather*} \end{Lemma} \begin{Corollary}\label{cor} \begin{gather*} H^{\perp}(u) \circ Q(v)= \frac{v+u}{v-u}Q(v) \circ H^{\perp}(u),\\ E^{\perp}(u) \circ Q(v)= \frac{v+u}{v-u}Q(v) \circ E^{\perp}(u),\\ R^{\perp}(u) \circ Q(v)= \frac{v+u}{v-u}Q(v) \circ R^{\perp}(u). \end{gather*} \end{Corollary} \begin{proof}For the first and the second one we use that $Q(v) = E(v) H(v)$. Observe that \begin{gather*} H^{\perp}(u)|_{\mathcal{B} _{\rm odd}}=E^{\perp}(u)|_{\mathcal{B} _{\rm odd}}=R^{\perp}(u)|_{\mathcal{B} _{\rm odd}}.
\end{gather*} In other words, since $Q_k$ does not depend on the even power sums~$p_{2r}$, we can add the terms $ {\partial}/{\partial p_{2r}}$ in the sum under the exponent when applying these operators to elements of~$\mathcal{B} _{\rm odd}$: \begin{gather*} R(u)^{\perp}(Q(v)) = \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}} \frac{\partial}{\partial p_n}{u^{n}}\right) Q(v) = \exp\left(\sum_{n\ge 1} \frac{\partial}{\partial p_n}{u^{n}}\right) Q(v)= H^{\perp}(u) Q(v).\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof} We arrive at the vertex operator form of the formal distribution of neutral fermions. \begin{Proposition} \begin{gather}\label{phiqr} \Phi(v)= Q(v) R(-v)^{\perp}= \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2p_{n}}{n}\frac{1}{v^{n}}\right)\exp\left(-\sum_{n\in {\mathbb N}_{\rm odd}} \frac{\partial}{\partial p_n}{v^{n}}\right). \end{gather} \end{Proposition} \begin{proof}By Corollary \ref{cor}, the action of the operator $ Q(v) R(-v)^{\perp}$ on the coefficients of the generating function $Q(u_1,\dots, u_l)$ coincides with the action of~$\Phi(v)$: \begin{gather*} Q(v) R(-v)^{\perp} ( Q(u_1,\dots, u_l) ) = Q(v) \prod_{1\le i<j\le l}\frac{u_j-u_i}{u_j+u_i}\, R(-v)^{\perp}\left( \prod_{i=1}^{l} Q(u_i)\right)\\ \qquad{} = Q(v) \prod_{1\le i<j\le l}\frac{u_j-u_i}{u_j+u_i} \prod_{i=1}^{l}\frac{u_i-v}{u_i+v} \prod_{i=1}^{l} Q(u_i) =Q(v,u_1,\dots, u_l). \end{gather*} Since the coefficients of $Q(u_1,\dots, u_l)$ contain a linear basis of $\mathcal{B} _{\rm odd}$, the equality~(\ref{phiqr}) follows. \end{proof} The formula (\ref{phiqr}) can be implemented literally on polynomials in $p_1, p_3, p_5,\dots$, as in the sketch below.
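A sympy sketch of the action of (\ref{phiqr}) on $\mathcal{B}_{\rm odd}$, truncated at order $v^{-9}$; it recovers $Q_1$, $Q_2$, $Q_3$ and checks the expansion $Q_{(2,1)}=Q_2Q_1-2Q_3$ encoded in (\ref{genQ}):
\begin{verbatim}
import sympy as sp

v, t = sp.symbols('v t')
P = {n: sp.Symbol('p%d' % n) for n in (1, 3, 5)}
NMAX = 9                                       # truncation depth in 1/v

cre = sp.series(sp.exp(sum(2*P[n]/n*t**n for n in P)),
                t, 0, NMAX + 1).removeO().subs(t, 1/v)

def Phi(f):
    """Phi(v) f = Q(v) R(-v)^perp f for a polynomial f in p1, p3, p5."""
    g, term, j = f, f, 0
    while term != 0:                           # exp of derivatives terminates
        j += 1
        term = sp.expand(-sum(sp.diff(term, P[n])*v**n for n in P)/j)
        g = g + term
    return sp.expand(g*cre)

phi = lambda m, f: Phi(f).coeff(v, -m)         # phi_m: coefficient of v^-m

one = sp.Integer(1)
Q1, Q2, Q3 = phi(1, one), phi(2, one), phi(3, one)
print(Q1, Q2, Q3)                              # 2*p1, 2*p1**2, 4*p1**3/3 + 2*p3/3
print(sp.expand(phi(2, Q1) - (Q2*Q1 - 2*Q3)))  # -> 0: Q_(2,1) = Q2*Q1 - 2*Q3
\end{verbatim}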
\section{The neutral fermions bilinear identity}\label{section5} Let \begin{gather*} \Omega= \sum_{n}\varphi_n\otimes (-1)^n \varphi_{-n}. \end{gather*} One looks for solutions in $\mathcal{B} _{\rm odd}$ of {\it the neutral fermions bilinear identity} \begin{gather}\label{binBKP} \Omega ( \tau\otimes \tau)= \tau\otimes\tau, \end{gather} where $\tau= \tau(\tilde p)= \tau ( 2p_1, 2p_3/3, 2p_5/5,\dots )$. It is known \cite{DKTIV,DKTA,DKTII,JM} that (\ref{binBKP}) is equivalent to an infinite integrable system of partial differential equations called the BKP hierarchy. Further, in Section~\ref{section6} a simple observation explains why Schur $Q$-functions constitute solutions of the neutral fermions bilinear identity, and hence of the BKP hierarchy. In this section we would like to make a small digression and review the steps of recovering the BKP hierarchy of partial differential equations in Hirota form from the neutral fermions bilinear identity. This is certainly a well-known procedure; however, the explicit calculations are often omitted in the literature, and we would like to provide them here for the convenience of the reader. Note that $\Omega$ is the constant coefficient of the formal distribution $\Phi(u)\otimes \Phi(-u)$, or, in terms of a residue, \begin{gather}\label{resid} \Omega=\mathop{\operatorname{Res}}\limits_{u=0} \frac{1}{u} \Phi(u)\otimes \Phi(-u). \end{gather} We identify $\mathcal{B} _{\rm odd}\otimes \mathcal{B} _{\rm odd}$ with ${\mathbb C}[p_1,p_3,\dots]\otimes {\mathbb C}[r_1,r_3,\dots]$ -- two copies of polynomial rings, where the variables in each of them play the role of power sum symmetric functions. Set $ \tilde p= ( 2p_1, 2p_3/3, 2p_5/5,\dots )$, $ \tilde r=(2r_1,2r_3/3, 2r_5/5,\dots)$. Then $\partial_{p_n}=2\partial_{\tilde p_n}/n$ and \begin{gather*} \Phi(u)\tau\otimes \Phi(-u) \tau = \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}(\tilde p_{n}- \tilde r_n)\frac{1}{u^{n}}\right)\\ \hphantom{\Phi(u)\tau\otimes \Phi(-u) \tau =}{}\times \exp\left(-\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2}{n}\left(\frac{\partial}{\partial \tilde p_n}-\frac{\partial}{\partial \tilde r_n}\right){u^{n}}\right)\tau( \tilde p)\tau(\tilde r). \end{gather*} Introduce the change of variables \begin{gather*} \tilde p_n=x_n-y_n,\qquad \tilde r_n=x_n+y_n. \end{gather*} Then \begin{gather*} \Phi(u)\tau\otimes \Phi(-u) \tau = \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}} -2y_n \frac{1}{u^n}\right)\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2} {n}\frac{\partial}{\partial y_n}u^{n}\right)\tau( x- y)\tau( x+ y) \end{gather*} with $(x\pm y)=(x_1\pm y_1, x_3\pm y_3, x_5\pm y_5,\dots)$. \begin{Definition} Let $P(D )$ be a multivariable polynomial in the collection of variables $D=(D_1, D_2,\dots)$, and let $f(x)$, $g(x)$ be differentiable functions in $ x =(x_1,x_2,\dots)$. The {\it Hirota derivative} $P( D) f \cdot g$ is a function in the variables $(x_1,x_2,\dots)$ given by the expression \begin{gather*} P( D) f \cdot g =P(\partial_{z_1},\partial_{z_2},\dots)f (x+ z)g( x- z)|_{ z=0}, \end{gather*} where $ x\pm z =(x_1\pm z_1,x_2\pm z_2,\dots)$. \end{Definition} For example, \begin{gather*} D_i^n f \cdot g=\sum_{k=0}^{n} (-1)^k {n\choose k} \frac{\partial ^{n-k} f}{\partial x_i^{n-k}}\frac{\partial ^{k} g}{\partial x_i^{k}}, \end{gather*} which implies in particular that odd Hirota derivatives are tautologically zero when $f=g$: \begin{gather*} D_i^{2n+1} f \cdot f =0\qquad \text{for} \quad n=0,1,2,\dots. \end{gather*} The definition is straightforward to implement symbolically, as in the sketch below.
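A direct sympy implementation of the definition (two variables $x_1$, $x_3$ for brevity):
\begin{verbatim}
import sympy as sp

x1, x3, z1, z3 = sp.symbols('x1 x3 z1 z3')
f = sp.Function('f')(x1, x3)

def hirota(orders, f, g, xs=(x1, x3), zs=(z1, z3)):
    """D_1^{orders[0]} D_3^{orders[1]} f . g."""
    expr = (f.subs({x: x + z for x, z in zip(xs, zs)}, simultaneous=True)
            * g.subs({x: x - z for x, z in zip(xs, zs)}, simultaneous=True))
    for z, n in zip(zs, orders):
        expr = sp.diff(expr, z, n)
    return sp.expand(expr.subs({z: 0 for z in zs}).doit())

print(hirota((1, 0), f, f))    # odd Hirota derivative of f.f -> 0
print(hirota((2, 0), f, f))    # -> 2*f*f_{x1x1} - 2*f_{x1}**2
\end{verbatim}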
The following lemma allows one to rewrite the bilinear identity (\ref{binBKP}) in terms of the Hirota derivatives. \begin{Lemma} \begin{gather*} \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2}{n}\frac{\partial}{\partial y_n}{u^{n}}\right)\tau( x- y)\tau(x+y) =\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\left(y_n+\frac{2}{n}u^n\right)D_n\right)\tau \cdot\tau. \end{gather*} \end{Lemma} \begin{proof}By the Taylor series expansion, \begin{gather}\label{taylor} {\rm e}^{a\partial/\partial y} g(y)=\sum_{n=0}^{\infty} \frac{ a^ng^{(n)} (y)} {n!}= g(y+a). \end{gather} Applying (\ref{taylor}) twice with $ t=(t_1,t_3, t_5,\dots)$, $ \tilde u=\big(2u, 2u^3/3, 2u^5/5,\dots \big)$, \begin{gather*} \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2}{n}\frac{\partial}{\partial y_n} u^n\right) \tau( x- y)\tau( x+y)= \tau( x+ y+ \tilde u)\tau( x- y-\tilde u)\\ \qquad{} =\tau(x+ y +\tilde u+ t)\tau(x-( y+\tilde u+t))|_{ t=0}\\ \qquad{} =\left.\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\left(y_n+\frac{2}{n}u^n\right)\frac{\partial}{\partial t_n }\right)\tau( x+ t)\tau( x- t)\right\vert_{ t=0}.\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof} Thus, we can write in terms of Hirota derivatives \begin{gather} \Phi(u)\tau\otimes \Phi(-u) \tau =\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}} \frac{-2y_n}{u^n}\right)\nonumber\\ \hphantom{\Phi(u)\tau\otimes \Phi(-u) \tau =}{}\times \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2}{n}D_n u^n\right) \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}y_nD_n\right) \tau\cdot\tau.\label{eq1} \end{gather} In order to compute $\mathop{\operatorname{Res}}\limits_{u=0} \frac{1}{u} \Phi(u)\tau\otimes \Phi(-u)\tau$, which is just the coefficient of~$u^0$ of $ \Phi(u)\tau\otimes \Phi(-u) \tau $, we recall the following well-known facts on the composition of exponential series with generating series. Their proofs can be done, e.g., by induction, or again found in~\cite[Chapter~I]{Md}. \begin{Proposition}\label{XY} Let $S(u)=\sum\limits_{k=0}^\infty S_k \frac{1}{u^k}$ and $X(u)=\sum\limits_{k=1}^\infty X_k \frac{1}{u^k}$ be related by \begin{gather*} \exp ( X(u)) =S(u). \end{gather*} Then the following statements hold: \begin{gather*} S_k=\sum_{l_1+2l_2+\dots +kl_k=k,\, l_i\ge 0}\frac{1}{l_1!\cdots l_k!}X_1^{l_1}\cdots X_k^{l_k},\\ S_k=\frac{1}{k!}\det \begin{pmatrix} X_1 &-1&0&0&\dots&0\\ 2X_2 &X_1&-2&0&\dots&0\\ 3X_3 &2X_2&X_1&-3&\dots&0\\ \dots&\dots&\dots&\dots&\dots&0\\ kX_k &(k-1)X_{k-1}&(k-2)X_{k-2}&(k-3)X_{k-3}&\dots&X_1\\ \end{pmatrix},\\ X_k=\frac{(-1)^{k-1}}{k} \det \begin{pmatrix} S_1 &1&0&0&\dots&0\\ 2S_2 &S_1&1&0&\dots&0\\ 3S_3 &S_2&S_1&1&\dots&0\\ \dots&\dots&\dots&\dots&\dots&0\\ kS_k &S_{k-1}&S_{k-2}&S_{k-3}&\dots&S_1\\ \end{pmatrix} . \end{gather*} \end{Proposition} \begin{Example} \begin{gather*} S_0=1,\qquad S_1= X_1,\qquad S_2= \frac{1}{2} X_1^2+X_2,\qquad S_3= X_3+X_2X_1 +\frac{1}{6}X_1^3,\\ S_4=X_4+X_3X_1 +\frac{1}{2}X_2^2+\frac{1}{2} X_2X_1^2+\frac{1}{24}X_1^4. \end{gather*} By Lemma \ref{Lemma1}, when the $X$ variables in these formulae are interpreted as normalized power sums $X_{k}=p_{k}/k$, the $S_k$'s are identified with the complete symmetric functions~$h_k$. \end{Example} \begin{Example}\label{eg2} Let $X_{2k}=0$ for $k=1,2,\dots.$ Then the first few $S_n=S_n(X_1, 0, X_3,\dots)$ are given by \begin{gather*} S_0 =1,\qquad S_1= X_1,\qquad S_2= \frac{1}{2}X_1^2,\qquad S_3= X_3+\frac{1}{6}X_1^3,\qquad S_4 =X_3X_1 +\frac{1}{24}X_1^4,\\ S_5= \frac{1}{120}X_1^5+\frac{1}{2}X_1^2X_3+X_5,\qquad S_6 =\frac{1}{720} X_1^6+\frac{1}{6}X_1^3X_3+\frac{1}{2}X_3^2 +X_1X_5,\\ S_7 =\frac{1}{5040} X_1^7+\frac{1}{24}X_1^4X_3+\frac{1}{2} X_1X_3^2 +\frac{1}{2}X_1^2X_5 +X_7. \end{gather*} Note that by Lemma \ref{Lemma2}, when the $X$ variables in these formulae are interpreted as odd normalized power sums $X_{2k+1}=2p_{2k+1}/(2k+1)$, the $S_k$'s are identified with the Schur $Q$-functions $Q_k$.
\end{Example} Using the statement of Proposition \ref{XY}, we can write the coefficient of $u^0$ of (\ref{eq1}) as \begin{gather}\label{BKP3} \sum_{m=0}^{\infty} S_{m}\left( \tilde y \right) S_{m}\big( \tilde D\big) \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}y_nD_n\right) \tau \cdot\tau =\tau(x-y)\cdot\tau(x+y), \end{gather} where $\tilde y= ( {-2y_1},0, -2y_3, \dots )$, $\tilde D=( 2D_1, 0,2D_3/3, 0,\dots)$. Note that $S_0=1$ and $\exp\Big(\sum\limits_{n\in {\mathbb N}_{\rm odd}}y_nD_n\Big) \tau \cdot\tau =\tau(x- y)\cdot\tau( x+ y)$; hence we can rewrite~(\ref{BKP3}) as \begin{gather}\label{BKP2} \sum_{m=1}^{\infty} S_{m}(\tilde{y})S_{m}\big( \tilde D\big)\exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}y_nD_n\right) \tau \cdot\tau=0. \end{gather} To obtain the equations of the BKP hierarchy, one expands the left-hand side of (\ref{BKP2}) in monomials in $y_1, y_3, y_5,\dots$ and obtains as coefficients Hirota operators in terms of the $D_k$'s. For example, let us compute the coefficient of $y_3^2$. In the expansion of \begin{gather*} \big(S_{1}(\tilde{y}) S_{1}\big( \tilde D\big) + S_{2}(\tilde{y}) S_{2}\big( \tilde D\big)+ S_{3}(\tilde{y}) S_{3}\big( \tilde D\big) +\cdots\big) \left(1+\sum y_i D_i+ \frac{1}{2}\left(\sum y_i D_i\right)^2+\cdots \right) \end{gather*} the term $y_3^2$ appears in $S_{3}(\tilde{y}) S_{3}\big( \tilde D\big)\times y_3D_3$ and in $S_{6}(\tilde{y}) S_{6}\big( \tilde D\big)\times 1$. Using the expansions of Example~\ref{eg2}, the coefficient of $y_3^2$ is \begin{gather*} -2S_{3}\big( \tilde D\big)D_3+ 2S_{6}\big( \tilde D\big)= \frac{8}{45}\big(D_1^6- 5D_1^3D_3 -5 D_3^2+9D_1D_5\big), \end{gather*} which provides the Hirota bilinear form of the BKP equation that gives its name to the hierarchy: \begin{gather*} \big(D_1^6- 5D_1^3D_3 -5 D_3^2+9D_1D_5\big)\tau\cdot \tau=0. \end{gather*} \begin{Remark}Writing the residue (\ref{resid}) as a contour integral, one gets the BKP hierarchy in its {\it integral form} \begin{gather*} \oint \frac{1}{2\pi {\rm i} u} \exp\left(\sum_{n\in {\mathbb N}_{\rm odd}}(\tilde p_n-\tilde r_n)\frac{1}{u^n}\right)\exp\left(-\sum_{n\in {\mathbb N}_{\rm odd}}\frac{2}{n}\left(\frac{\partial}{\partial \tilde p_n}-\frac{\partial}{\partial \tilde r_n}\right){u^{n}}\right)\tau( \tilde p)\tau(\tilde r)\\ \qquad{} =\tau( \tilde p)\tau(\tilde r). \end{gather*} \end{Remark} The coefficient computation above is easily mechanized, as in the sketch below.
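A sympy verification of the $y_3^2$ coefficient (the helper \texttt{S} rebuilds $S_k$ from the exponential generating function of Proposition~\ref{XY}):
\begin{verbatim}
import sympy as sp

t, D1, D3, D5 = sp.symbols('t D1 D3 D5')

def S(k, X):
    """S_k from exp(sum_n X_n t^n) = sum_k S_k t^k."""
    e = sp.exp(sum(Xn*t**n for n, Xn in X.items()))
    return sp.series(e, t, 0, k + 1).removeO().coeff(t, k)

Dt = {1: 2*D1, 3: sp.Rational(2, 3)*D3, 5: sp.Rational(2, 5)*D5}  # \tilde D
coef = sp.expand(-2*S(3, Dt)*D3 + 2*S(6, Dt))
print(sp.expand(sp.Rational(45, 8)*coef))
# -> D1**6 - 5*D1**3*D3 - 5*D3**2 + 9*D1*D5
\end{verbatim}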
\section{Commutation relation for the bilinear identity}\label{section6} Our goal is to show that multiparameter Schur $Q$-functions are solutions of the neutral fermions bilinear identity (\ref{binBKP}); thus they provide solutions of the BKP hierarchy. Let $X=\sum\limits_{n> 0} A_n\varphi_n$ for some $A_n\in {\mathbb C}$. From~(\ref{neut1}) one gets $X^2=0$. \begin{Proposition} \begin{gather*} \Omega (X\otimes X)= (X\otimes X)\Omega. \end{gather*} \end{Proposition} \begin{proof} \begin{align*} \Omega (X\otimes X)&=\sum_{k\in {\mathbb Z}} \varphi_k X\otimes (-1)^{k}\varphi_{-k} X\\ &=\sum_{k\in {\mathbb Z}} (-X\varphi_k+[\varphi_k, X]_+)\otimes (-1)^{k}(-X\varphi_{-k} +[\varphi_{-k}, X]_+). \end{align*} Note that $[\varphi_{k}, X]_+=\big[\varphi_{k}, \sum\limits_{n>0}A_n\varphi_n\big]_+=2\sum\limits_{n>0}(-1)^n A_n\delta_{k+n,0}$, hence \begin{gather*} \Omega (X\otimes X) = (X\otimes X)\Omega - \sum_{n>0}2(-1)^n A_n\otimes (-1)^n X\varphi_n - \sum_{n>0}(-1)^n X \varphi_n \otimes 2(-1)^n A_n\\ \hphantom{\Omega (X\otimes X) =}{} +4\sum_{k\in {\mathbb Z}}\sum_{m,n>0}(-1)^n A_n\delta_{n+k,0}\otimes (-1)^m A_m\delta_{m-k,0}\\ \hphantom{\Omega (X\otimes X)}{} = (X\otimes X)\Omega - 2\otimes X^2- X^2 \otimes 2+4\sum_{m,n>0}(-1)^n A_n\otimes (-1)^m A_m\delta_{m+n,0}. \end{gather*} We use that $X^2= 0$, and since both $m$ and $n$ in the last sum are always positive, the last term is also zero. \end{proof} \begin{Corollary}\label{cor8} Let $\tau\in \mathcal{B} _{\rm odd}$ be a solution of~\eqref{binBKP}, and let $X=\sum\limits_{n> 0} A_{n} \varphi_n$ with $A_{n}\in {\mathbb C}$. Then $\tau^\prime =X\tau$ is also a solution of~\eqref{binBKP}. \end{Corollary} The vertex operator presentation~(\ref{ver1}) of Schur $Q$-functions and Corollary~\ref{cor8} immediately imply that Schur $Q$-functions are solutions of~(\ref {binBKP}), since the constant function~$1$ is a~solution of~(\ref{binBKP}). This argument reproves the result of~\cite{You1} and easily extends to the more general case of multiparameter Schur $Q$-functions defined in the next section. \section[Multiparameter Schur $Q$-functions are solutions of the BKP hierarchy]{Multiparameter Schur $\boldsymbol{Q}$-functions are solutions\\ of the BKP hierarchy}\label{multi_sec} Multiparameter Schur $Q$-functions were introduced in~\cite{Iv1} as a generalization of the definition (\ref{defQ}). Fix an infinite sequence of complex numbers $a= (a_0, a_1, a_2,\dots)$. Consider the following analogue of a~power of a~variable~$x$: \begin{gather*} (x|a)^{k}= (x-a_0)(x-a_1)\cdots (x- a_{k-1}). \end{gather*} We also define a shift operation $\tau\colon a_k\mapsto a_{k+1}$, so that \begin{gather*} (x|\tau a)^{k}= (x-a_1)(x-a_2)\cdots (x- a_{k}). \end{gather*} \begin{Definition} Let $\alpha=(\alpha_1,\dots,\alpha_l)$ be a vector with non-negative integer coefficients $\alpha_i\in {\mathbb Z}_{\ge 0}$. The multiparameter Schur $Q$-function in the variables $(x_1,\dots, x_N)$ with $l\le N$ is defined by \begin{gather}\label{defQa} Q^{(a)}_\alpha (x_1,\dots, x_N)=\frac{2^l}{(N-l)!} \sum_{\sigma\in S_N} \prod_{i=1}^{l}(x_{\sigma(i)}|a)^{\alpha_i} \prod_{i\le l,i<j\le N }\frac{x_{\sigma(i)}+x_{\sigma(j)}}{x_{\sigma(i)}- x_{\sigma(j)}}. \end{gather} \end{Definition} When $a=(0,0,0,\dots )$ and $\alpha$ is a strict partition, one gets back (\ref{defQ}), the classical Schur $Q$-functions $Q_\alpha (x_1,\dots, x_N)$. The evaluation $a=(0,1,2,\dots)$ gives the factorial Schur $Q$-functions, denoted $Q^*_\alpha(x)$, whose applications are outlined in the introduction. The multiparameter Schur $Q$-functions enjoy a~stability property, hence one can consider $Q^{(a)}_\alpha(x_1,x_2,\dots)$ to be a~function of infinitely many variables. Note from (\ref{defQa}) that for any permutation $\sigma \in S_l$, \begin{gather}\label{alter} Q^{(a)}_{\alpha} (x_1,\dots, x_N)= (-1)^\sigma Q^{(a)}_{\sigma(\alpha)} (x_1,\dots, x_N), \end{gather} where $(-1)^\sigma$ is the sign of the permutation $\sigma$ \cite[Proposition~3]{Kor}.
Hence, $Q^{(a)}_{\alpha} =0$ if $\alpha_i=\alpha_j$ for some~$i$,~$j$, and for a vector $\alpha=(\alpha_1,\dots,\alpha_l)$ with positive distinct integer coefficients $\alpha_i\in {\mathbb Z}_{>0}$, the function $Q^{(a)}_{\alpha}$ coincides up to a sign with another $Q^{(a)}_{\alpha^\prime}$ labeled by a strict partition~$\alpha^\prime$. One can check directly the following transitions between regular and multiparameter powers of variables.
\begin{Lemma}\label{transitions}For $n=0,1,2,\dots$
\begin{gather*} (u-a_1)\cdots (u-a_{n}) =\sum_{k=0}^{\infty} (-1)^{n-k}e_{n-k}(a_1,\dots, a_{n}) u^k, \\ \frac{1}{(u-a_1)\cdots (u-a_{n})} =\sum_{k=0}^{\infty}h_{k-n}(a_1,\dots, a_{n}) u^{-k}, \\ u^n =\sum_{k=0}^{\infty} h_{n-k}(a_1,\dots, a_{k+1}) (u-a_1)\cdots (u-a_k), \\ \frac{1}{u^{n}} =\sum_{k=0}^{\infty} (-1)^{n-k}e_{k-n}(a_1,\dots, a_{k-1}) \frac{1}{(u-a_1)\cdots (u-a_k)}. \end{gather*}
\end{Lemma}
Double application of Lemma \ref{transitions} implies the following useful observation.
\begin{Lemma}\label{identity} For any sequence $a=(0, a_1, a_2,\dots)$ \begin{gather*} \sum_{m=0}^{\infty} \frac{(x|a)^m}{(u|\tau a)^m}= \sum_{m=0}^{\infty} \frac{x^m}{u^m}. \end{gather*} \end{Lemma}
\begin{proof} \begin{align*} \sum_{m=0}^{\infty} \frac{(x|a)^m}{(u|\tau a)^m}& = \sum_{m,k=0}^{\infty} (-1)^{m-k}e_{m-k}(0, a_1,\dots, a_{m-1}) x^k\frac{1}{(u|\tau a)^m}\\ & =\sum_{k=0}^{\infty} x^k\sum_{m=0}^{\infty} (-1)^{k-m}e_{m-k}(a_1,\dots, a_{m-1}) \frac{1}{(u|\tau a)^m}= \sum_{k=0}^{\infty} \frac{x^k}{u^k}.\tag*{\qed} \end{align*}\renewcommand{\qed}{} \end{proof}
Consider a part of the generating function~(\ref{genQ}) of ordinary Schur $Q$-functions that corresponds only to positive values of $\lambda_i$:
\begin{gather*} Q^+(u_1,\dots, u_l)= \sum_{\lambda_1,\dots, \lambda_l\in {\mathbb Z}_{>0}} \frac{Q_\lambda}{u_1^{\lambda_1}\cdots u_l^{\lambda_l}}. \end{gather*}
By (\ref{alter}), every non-zero coefficient of $Q^+(u_1,\dots, u_l)$ up to a sign coincides with a classical Schur $Q$-function labeled by an appropriate strict partition. In~\cite{Kor} the following remarkable observation is made.
\begin{Theorem}[\cite{Kor}]\label{Korotkih} For any sequence $a=(0, a_1, a_2,\dots)$ \begin{gather*} Q^+(u_1,\dots, u_l)= \sum_{\lambda_1,\dots, \lambda_l\in {\mathbb Z}_{> 0}} \frac{Q^{(a)}_\lambda}{(u_1|\tau a)^{\lambda_1}\cdots (u_l|\tau a)^{\lambda_l}}. \end{gather*} \end{Theorem}
\begin{proof} In~\cite{Kor} the theorem is proved by induction on the length of the vector $\lambda$. A very short proof of this theorem follows from Lemma~\ref{identity} and definition (\ref{defQa}).
Indeed,
\begin{gather*} \sum_{\lambda_i \in {\mathbb Z}_{>0}} \frac{Q^{(a)}_\lambda}{(u_1|\tau a)^{\lambda_1}\cdots (u_l|\tau a)^{\lambda_l}} =\frac{2^l}{(N-l)!}\sum_{\sigma\in S_N} \prod_{i=1}^{l}\sum_{\lambda_i\in {\mathbb Z}_{> 0}}\frac{(x_{\sigma(i)}|a)^{\lambda_i}}{(u_i|\tau a)^{\lambda_i}} \prod_{i\le l,i<j\le N }\frac{x_{\sigma(i)}+x_{\sigma(j)}}{x_{\sigma(i)}- x_{\sigma(j)}}\\ \qquad{} =\frac{2^l}{(N-l)!}\sum_{\sigma\in S_N} \prod_{i=1}^{l}\sum_{\lambda_i\in {\mathbb Z}_{> 0}}\frac{x_{\sigma(i)}^{\lambda_i}}{u_i^{\lambda_i}} \prod_{i\le l,i<j\le N }\frac{x_{\sigma(i)}+x_{\sigma(j)}}{x_{\sigma(i)}- x_{\sigma(j)}}=Q^+(u_1,\dots, u_l).\tag*{\qed} \end{gather*}\renewcommand{\qed}{} \end{proof}
Thus, Theorem \ref{Korotkih} suggests that multiparameter Schur $Q$-functions are obtained from classical Schur $Q$-functions by the change of the basis of expansion $\big\{1/u^{k}\big\} \to\big\{ 1/(u|\tau a)^k\big\}$ in the generating function $Q^+(u_1,\dots, u_l)$. Lemma \ref{transitions} immediately implies the following relations between classical and multiparameter Schur $Q$-functions (see also \cite[Theorem~10.2]{Iv1}).
\begin{Corollary} \label{cor10} For any sequence of complex numbers $a= (0, a_1, a_2,\dots)$ and any integer vector $\alpha =(\alpha_1,\dots, \alpha_l)$ with $\alpha_i\in {\mathbb Z}_{>0}$,
\begin{gather*} Q^{(a)}_{\alpha}=\sum_{\lambda_1,\dots, \lambda_l \in {\mathbb Z}_{> 0}} (-1)^{\sum \lambda_i-\sum\alpha_i}e_{\alpha_1-\lambda_1}(a_1,\dots, a_{\alpha_1-1}) \cdots e_{\alpha_l-\lambda_l}(a_1,\dots, a_{\alpha_l-1}) {Q_\lambda}. \end{gather*}
\end{Corollary}
\begin{Theorem} For any sequence of complex numbers $a= (0, a_1, a_2,\dots)$ and any integer vector $\alpha =(\alpha_1,\dots, \alpha_l)$ with $\alpha_i\in {\mathbb Z}_{> 0}$, the multiparameter Schur $Q$-function $Q^{(a)}_\alpha$ is a solution of~\eqref{binBKP}. \end{Theorem}
\begin{proof}The constant polynomial $1$ is obviously a solution of~(\ref{binBKP}) in $\mathcal{B} _{\rm odd}$. By the vertex operator presentation~(\ref{ver1}) and Corollary~\ref{cor10},
\begin{gather*} Q^{(a)}_{\alpha}= \sum_{\lambda_1,\dots, \lambda_l\in {\mathbb Z}_{>0}} A_{\lambda_1,\alpha_1}\cdots A_{\lambda_l,\alpha_l} \varphi_{\lambda_l}\cdots \varphi_{\lambda_1} \cdot 1 \end{gather*}
with $A_{n,k}= (-1)^{n-k}e_{k-n}(a_1,\dots, a_{k-1})$. Hence,
\begin{gather*} Q^{(a)}_{\alpha}= X_{\alpha_l}\cdots X_{\alpha_1} \cdot 1, \end{gather*}
where $ X_{m} =\sum\limits_{s>0} (-1)^{m-s}e_{m-s}(a_1,\dots, a_{m-1})\varphi_{s}$, and Corollary~\ref{cor8} proves the statement. \end{proof}
\subsection*{Acknowledgements} The author would like to thank the referee for the thoughtful and careful review that helped to improve the text of the paper.
\section*{Acknowledgments} I am grateful to Julia Stoyanovich for her insights. I thank Markus Brill, Benny Kimelfeld, and Phokion G. Kolaitis for many interesting and thought-provoking questions.
\section{Fair Allocation Model} \label{sec:Model}
In this section, we formally define a model that maps the DiRe\xspace Committee Winner Determination problem to a problem of fairly allocating indivisible goods. The model mitigates unfairness to the voter population caused by DiRe\xspace Committees. We first define constraints.
\paragraph{Diversity Constraints,} denoted by $l^D_G \in [1,$ $\min(k, |G|)]$ for each candidate group $G \in \mathcal{G}$, enforce at least $l^D_G$ candidates from the group $G$ to be in the committee $W$. Formally, $\forall$ $G \in \mathcal{G}$, $|G \cap W|\geq l^D_G$.
\paragraph{Representation Constraints,} denoted by $l^R_P \in [1,k]$ for each voter population $P \in \mathcal{P}$, enforce at least $l^R_P$ candidates from the population $P$'s committee $W_P$ to be in the committee $W$. Formally, $\forall$ $P \in \mathcal{P}$, $|W_P \cap W|\geq l^R_P$.
\begin{definition} \label{def-DiReCWD} \textbf{DiRe\xspace Committee Winner Determination (DRCWD\xspace):} We are given an instance of election $E=(C,V)$, a committee size $k \in [m]$, a set of candidate groups $\mathcal{G}$ under $\mu$ attributes and their diversity constraints $l^D_G$ $\forall$ $G \in \mathcal{G}$, a set of voter populations $\mathcal{P}$ under $\pi$ attributes and their representation constraints $l^R_P$ and the winning committees $W_P$ $\forall$ $P \in \mathcal{P}$, and a committee selection rule\xspace $\mathtt{f}$. Let $\mathcal{W}$ denote the family of all size-$k$ committees. The goal of DRCWD\xspace is to select committees $W \in \mathcal{W}$ that satisfy the diversity and representation constraints, i.e., $|G\cap W|\geq l^D_G$ $\forall$ $G\in \mathcal{G}$ and $|W_P\cap W|\geq l^R_P$ $\forall$ $P\in \mathcal{P}$, and that maximize $\mathtt{f}(W)$ over all $W \in \mathcal{W}$. Committees that satisfy the constraints are DiRe\xspace committees. \end{definition}
Example~\ref{eg-balanced} showed that a DiRe\xspace committee may create or propagate biases by systematically increasing the selection of lower-preferred candidates. This may result in more loss in utility to historically disadvantaged, smaller voter populations. To mitigate this, we model our problem of selecting a committee as allocating goods (candidates) into $k$ slots. Hence, to assess the quality of candidates being selected from voter populations, we borrow ideas from the literature on the problem of fair resource allocation of indivisible goods \cite{brandt2016handbook}. Formally, given a set of $n$ agents, a set of $m$ resources, and a valuation each agent gives to each resource, the problem of fair allocation is to partition the resources among agents such that the allocation is fair. There are three classic fairness desiderata, namely, proportionality, envy-freeness, and equitability \cite{freemanrecent}. Intuitively, proportionality requires that each agent receive her ``fair share'' of the utility, envy-freeness requires that no agent wish to swap her allocation with another agent's, and equitability requires that all agents have the exact same value for their allocations and that no agent be jealous of what another agent receives. As balancing the loss in utility to candidate groups is analogous to balancing the fairness in ranking \cite{yang2019balanced}, we focus on balancing the loss in utility to voter populations.
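To make the constraint semantics of Definition~\ref{def-DiReCWD} concrete, the following minimal Python sketch checks whether a given committee is a DiRe\xspace committee; the instance data below are hypothetical placeholders, not taken from the running example.
\begin{verbatim}
# Minimal sketch: test |G intersect W| >= l_D for every candidate group G
# and |W_P intersect W| >= l_R for every voter population P.
# All data below are hypothetical placeholders.

def is_dire_committee(W, diversity, representation):
    """W: set of candidates; diversity: iterable of (group, l_D);
    representation: iterable of (population_committee, l_R)."""
    return (all(len(G & W) >= l_D for G, l_D in diversity) and
            all(len(W_P & W) >= l_R for W_P, l_R in representation))

W = {"c1", "c3", "c6", "c8"}
diversity = [({"c1", "c2", "c3", "c4"}, 1), ({"c5", "c6", "c7", "c8"}, 1)]
representation = [({"c1", "c2", "c5", "c6"}, 2), ({"c1", "c3", "c5", "c8"}, 2)]
print(is_dire_committee(W, diversity, representation))  # True here
\end{verbatim}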
We propose varying notions of envy-freeness to balance the loss in utility to voter populations\footnote{Our model, \emph{Fairly Allocating} Utility in Constrained Multiwinner Elections, is analogous to a Swiss Army knife. The model is applicable to more or less any context of a constrained multiwinner election, and the setting of the model can be chosen based on the context.}:
\begin{table*}[t] \centering \begin{tabular}{l|l||c|c||c|c||c|c|c||c|c|c} \toprule & Committee & score & DiRe & \multicolumn{2}{c||}{FEC} & \multicolumn{3}{c||}{UEC} & \multicolumn{3}{c}{WEC}\\ & & & Committee & \multicolumn{2}{c||}{up to} & \multicolumn{3}{c||}{up to} & \multicolumn{3}{c}{up to}\\ \cline{5-12} & & & & 0 & 1 & 0 & 1 & 2 & 0 & $\frac{1}{13}$& $\frac{2}{13}$\\ \midrule 1&$\{c_1, c_2, c_6, c_5\}$&342& \xmark &\cmark&\cmark&\cmark&\cmark&\cmark&\cmark&\cmark&\cmark\\ \hline 2&$\{c_1, c_6, c_3, c_8\}$&286& \cmark &\xmark&\cmark&\xmark&\xmark&\cmark&\xmark&\xmark&\cmark\\ \hline 3&$\{c_1, c_5, c_3, c_8\}$&285& \cmark &\cmark&\cmark&\cmark&\cmark&\cmark&\xmark&\cmark&\cmark\\ \hline 4&$\{c_1, c_5, c_3, c_7\}$&284& \cmark &\cmark&\cmark&\xmark&\xmark&\cmark&\cmark&\cmark&\cmark\\ \bottomrule \end{tabular}
\caption{Properties satisfied by various example committees selected using the election setup given in Figure~\ref{fig:whatexample/candidates} and Figure~\ref{fig:whatexample/voters}. Each row corresponds to a committee being (row 1) optimal, (row 2) optimal DiRe, (row 3) optimal FEC and optimal UEC given it is DiRe, and (row 4) optimal WEC given it is DiRe. `DiRe' denotes a committee satisfying diversity and representation constraints. `FEC' denotes Favorite-Envyfree-Committee. `UEC' denotes Utility-Envyfree-Committee. `WEC' denotes Weighted-Envyfree-Committee.} \label{tab:FECUECWEC-examplesummary} \end{table*}
\subsection{Favorite-Envyfree-Committee (FEC)}
Each population deserves their top-ranked candidate to be selected in the winning committee. However, selecting a DiRe\xspace committee may result in an imbalance in the position of the most-favorite candidate selected from each population's ranking. A natural relaxation of FEC is finding a committee with a bounded level of envy. Specifically, in the relaxation of FEC up to $x$ where $x \in [k]$, rather than selecting the most preferred candidate, we allow for one of the top-$(x+1)$ candidates to be selected. Note that when $x=0$, FEC and FEC up to 0 are equivalent. Note that the relaxation FEC up to $x$ is useful when an FEC does not exist. If an FEC exists, then FEC up to $x$ exists for all non-negative $x$.
\subsection{Utility-Envyfree-Committee (UEC)}
Each population deserves to minimize the difference between the utilities each one gets from the selected winning committee, where the utility is the sum of Borda scores that the population gives to the candidates in the winning committee. However, a DiRe\xspace committee may result in unequal utility amongst the populations. Formally, for each $P \in \mathcal{P}$, the utility that the population gets from a winning DiRe\xspace committee $W$ is:
\begin{equation} \U_P = \sum_{c \in W} \mathtt{f}_{\Borda}^{W_P}(c) \end{equation}
where $\mathtt{f}_{\Borda}^{W_P}(c)$ is the Borda score that candidate $c$ gets based on its rank in population $P$'s winning committee $W_P$.
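For illustration---this sketch is our addition, with hypothetical data---the utility $\U_P$ can be computed as below, assuming the Borda convention in which a candidate ranked at position $r$ of $W_P$ receives $m-r$ points and candidates outside $W_P$ contribute zero.
\begin{verbatim}
# Minimal sketch of U_P: sum the Borda scores f_Borda^{W_P}(c) over c in W,
# where a candidate at position r (1-indexed) of W_P scores m - r and
# candidates outside W_P score 0. The instance below is hypothetical.

def borda_in_population(c, W_P_ranking, m):
    # W_P_ranking: population P's winning committee as an ordered list
    return m - (W_P_ranking.index(c) + 1) if c in W_P_ranking else 0

def utility(W, W_P_ranking, m):
    return sum(borda_in_population(c, W_P_ranking, m) for c in W)

m = 8
W = ["c1", "c6", "c3", "c8"]      # a winning DiRe committee
W_P = ["c2", "c6", "c5", "c8"]    # hypothetical W_P for some population P
print(utility(W, W_P, m))         # 6 + 4 = 10 here
\end{verbatim}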
Overall, a UEC is a $k$-sized committee that aims to satisfy
\begin{equation} \forall P, P' \in \mathcal{P} : P \neq P', \quad |\U_P - \U_{P'}| = 0 \end{equation}
A natural relaxation of UEC is UEC up to $\eta$ where $\eta \in \mathbb{Z} : \eta \in [0,$ $\frac{m(m-1)}{2}]$. This is to say that each population deserves to minimize the difference between utilities from the selected winning committee but up to $\eta$. Hence, a UEC up to $\eta$ is a $k$-sized committee that aims to satisfy
\begin{equation} \forall P, P' \in \mathcal{P} : P \neq P', \quad |\U_P - \U_{P'}| \leq \eta \end{equation}
Note that, in line with FEC, the relaxation UEC up to $\eta$ is useful when a UEC does not exist. If a committee $W$ is UEC, then it implies that $W$ is UEC up to $\eta$ for all $\eta$ in $[0,$ $\frac{m(m-1)}{2}]$. The relation does not hold the other way, which is to say that if a committee $W$ is UEC up to $\eta$, then it may not be UEC up to $\eta-1$.
\subsection{Weighted-Envyfree-Committee (WEC)}
Each population deserves to minimize the difference between the weighted utilities they get from the selected winning committee. The weighted utility is the sum of Borda scores that the population gives to the candidates in the winning committee who are among their top-$k$ candidates, divided by the maximum Borda score that they can give to candidates based on their representation constraint. Formally, for each $P \in \mathcal{P}$, the weighted utility that the population having representation constraint $l_P^R$ gets from a winning DiRe\xspace committee $W$ is:
\begin{equation} \WU_P = \frac{\sum_{c \in W_P \cap W} \mathtt{f}_{\Borda}^{W_P}(c)}{\sum_{i=1}^{l_P^R}(m-i)} \end{equation}
Overall, a WEC is a $k$-sized committee that aims to satisfy
\begin{equation} \forall P, P' \in \mathcal{P} : P \neq P', \quad |\WU_P - \WU_{P'}| = 0 \end{equation}
\begin{example} \label{eg-WECcalc} The highest-scoring DiRe\xspace committee selected in Example~\ref{eg-dire} was $W'=\{c_1, c_6, c_3, c_8\}$ ($\mathtt{f}(W')=286$). The $\WU_{IL}$ will be calculated as follows: Given $l_{IL}^R$=2 and $W_{IL} \cap W' = \{c_6, c_8\}$, \[\WU_{IL} = \frac{\mathtt{f}_{\Borda}^{W_{IL}}(c_6) + \mathtt{f}_{\Borda}^{W_{IL}}(c_8)}{7+6} = \frac{6+4}{7+6}=\frac{10}{13}\] Similarly, $\WU_{CA}$ will be calculated as follows: Given $l_{CA}^R$=2 and $W_{CA} \cap W' = \{c_1, c_3\}$, \[\WU_{CA} = \frac{\mathtt{f}_{\Borda}^{W_{CA}}(c_1) + \mathtt{f}_{\Borda}^{W_{CA}}(c_3)}{7+6} = \frac{7+5}{7+6}=\frac{12}{13}\] \end{example}
A natural relaxation of WEC is WEC up to $\zeta$ where $\zeta \in \mathbb{Q}$ such that $\zeta \in [0,$ $1]$. This is to say that each population deserves to minimize the difference between weighted utilities from the selected winning committee but up to $\zeta$. Hence, a WEC up to $\zeta$ is a $k$-sized committee that aims to satisfy
\begin{equation} \forall P, P' \in \mathcal{P} : P \neq P', \quad |\WU_P - \WU_{P'}| \leq \zeta \end{equation}
\section{Tractable Cases} \label{sec:tractable}
\subsection{$|\mathcal{P}|$ is small}
\subsection{$|\mathcal{G}|$ is small}
\subsection{$|\mathcal{G}|$ and $|\mathcal{P}|$ are small}
\section{Conclusion} \label{sec:conc}
We classified the complexity of finding a diverse and representative committee using a monotone, separable positional multiwinner voting rule. In doing so, we established the close association between DRCWD\xspace and the vertex cover problem. This association can lead to interesting future work on understanding the existence of a diverse and representative outcome and the complexity of finding one.
Specifically, we can study whether DRCWD\xspace under certain realistic assumptions is PPAD-complete, based on knowledge about the vertex cover problem and its possible PPAD-completeness \cite{kiraly2013ppad}. Additionally, this association can be used to find realistic settings where the model becomes tractable and to validate it by giving examples of real-world datasets where such algorithms may work.
\section{Classification of Complexity} \label{sec:compclass}
We now study the computational complexity of DRCWD\xspace due to the presence of candidate attributes \emph{and} voter attributes. Specifically, we establish the NP-hardness of DRCWD\xspace for various settings of $\mu$, $\pi$, and $\mathtt{f}$ via reductions from a single well-known NP-hard problem, namely, the vertex cover problem on 3-regular\footnote{A 3-regular graph stipulates that each vertex is connected to exactly three other vertices, each one with an edge, i.e., each vertex has a degree of 3. The VC problem on 3-regular graphs is NP-hard.}, 2-uniform\footnote{A 2-uniform hypergraph stipulates that each edge connects exactly two vertices.} hypergraphs \cite{garey1979computers, alimonti1997hardness}. Note that the reductions are designed to conform to the real-world stipulations of candidate attributes such that (i) each candidate attribute $A_i, \forall i \in [\mu]$, \emph{partitions} all the $m$ candidates into two or more groups and (ii) either no two attributes partition the candidates in the same way or, if they do, the lower bounds across groups of the two attributes are not the same. For stipulation (ii), note that if two attributes partition the candidates in the same way and if the lower bounds across groups of the two attributes are also the same, then mathematically they are identical attributes that can be combined into one attribute. We use the same stipulations for voter attributes. Finally, all our results are based \emph{only} on the assumption that P$\neq$NP.
\begin{theorem}\label{lemma:DiReCWDrepdiv1} If $\mu=1$, $\forall \pi \in \mathbb{Z} : \pi\geq1$, and $\mathtt{f}$ is the monotone, separable function arising from an arbitrary single-winner positional scoring rule, then DRCWD\xspace is NP-hard. \end{theorem}
\begin{proof}[Proof Sketch] The proof extends from the reduction used in the proof of Theorem 5 in \cite{relia2021dire} (full version). Specifically, we have $m+ (n \cdot m) + 1$ candidates such that $m+ (n \cdot m)$ from the previous reduction are in one group and we have a dummy candidate $a$ in the second group. The diversity constraints are set to 1 for both the groups. The voters, voter populations, and rankings are the same as before except candidate $a$ is ranked first by all the voters. The representation constraints are set to 2 for all voter populations. The committee size is set to $k+1$. It is easy to see that the proof of correctness follows the proof of correctness of Theorem 5 \cite{relia2021dire} (full version) with the addition of always selecting the candidate $a$. \end{proof}
\begin{theorem}\label{thm:DiReCWDdivrep21} If $\mu=2$, $\forall \pi \in \mathbb{Z} : \pi\geq 1$, and $\mathtt{f}$ is the monotone, separable function arising from an arbitrary single-winner positional scoring rule, then DRCWD\xspace is NP-hard. \end{theorem}
\begin{proof}[Proof Sketch] We now build upon the reduction used in the proof of Theorem~\ref{lemma:DiReCWDrepdiv1}. The only change in the reduction is the addition of the second candidate attribute.
In addition to the two groups already present under one attribute, we create two more groups under the second attribute such that the first group contains the $m$ candidates from $C$ and the second group contains $(n\cdot m) + 1$ candidates ($(n\cdot m)$ dummy candidates $D$ and 1 dummy candidate $a$). The diversity constraints are set to 1 for both the groups. The voters, voter populations, rankings, and the representation constraints are the same as before. The committee size is again set to $k+1$. It is easy to see that the proof of correctness follows the proof of correctness of Theorem~\ref{lemma:DiReCWDrepdiv1}. \end{proof}
\begin{theorem}\label{thm:DiReCWDdivrep31odd} If $\forall \mu \in \mathbb{Z} : \mu\geq 3$ and $\mu$ is an odd number, $\forall \pi \in \mathbb{Z} : \pi\geq 1$, and $\mathtt{f}$ is the monotone, separable function arising from an arbitrary single-winner positional scoring rule, then DRCWD\xspace is NP-hard. \end{theorem}
\begin{proof} We reduce an instance of the vertex cover (VC) problem to an instance of DRCWD\xspace.
\paragraph{\underline{PART I: Construction}}
\paragraph{Candidates:} We have one candidate $c_i$ for each vertex $x_i \in X$, and $2 \mu^2 m - 7 \mu m + 2 \mu m n + 2 m n + 3 m$ dummy candidates $d \in D$, where $m$ corresponds to the number of vertices in the graph $H$, $n$ corresponds to the number of edges in the graph $H$, and $\mu$ is a positive, odd integer (i.e., the number of candidate attributes). Specifically, we divide the dummy candidates into two types of blocks:
\begin{itemize} \item Block type $B_1$ consists of $m\mu-3m$ blocks and each block consists of three sets of candidates: \begin{itemize} \item Set $T_1$ consists of a single dummy candidate, $d_{i, 1}^{T_1} \in T_1$, $\forall$ $i \in [1,$ $m\mu-3m]$. \item Set $T_2$ consists of $\mu-1$ dummy candidates, $d_{i, j}^{T_2} \in T_2$, $\forall$ $i \in [1,$ $m\mu-3m]$, $j \in [1,$ $\mu-1]$. \item Set $T_3$ consists of $\mu-1$ dummy candidates, $d_{i, j}^{T_3} \in T_3$, $\forall$ $i \in [1,$ $m\mu-3m]$, $j \in [1,$ $\mu-1]$. \end{itemize} \item Block type $B_2$ consists of $2 m n$ blocks and each block consists of one set of candidates: \begin{itemize} \item Set $T_4$ consists of $\mu+1$ dummy candidates, $d_{i, j}^{T_4} \in T_4$, $\forall$ $i \in [1,$ $2 m n]$, $j \in [1,$ $\mu+1]$. \end{itemize} \end{itemize}
Hence, there are $(m\mu-3m)\cdot(1+\mu-1+\mu-1)$ dummy candidates in blocks of type $B_1$ and $(2 m n) \cdot(\mu+1)$ dummy candidates in blocks of type $B_2$. This results in a total of $2 \mu^2 m - 7 \mu m + 3 m$ dummy candidates of type $B_1$ and $2\mu mn + 2mn$ dummy candidates of type $B_2$. Thus, $|D|$ = $2 \mu^2 m - 7 \mu m + 2\mu mn + 2mn + 3 m$. Note that the types of blocks and sets discussed above are different and independent from the candidate groups (discussed later) that are used to enforce diversity constraints. Overall, we set $A$ = \{$c_1, \dots, c_m$\} and the dummy candidate set $D$ = \{$d_1, \dots, d_{2 \mu^2 m - 7 \mu m + 2 \mu m n + 2 m n + 3 m}$\}. Hence, the candidate set $C$ = $A \cup D$ is of size $|C|=$ $2 \mu^2 m - 7 \mu m + 2 \mu m n + 2 m n + 4 m$ candidates.
\paragraph{Committee Size:} We set the target committee size to be $k + m \mu^2 + 2mn\mu - 3m\mu$.
\paragraph{Candidate Groups:} We now divide the candidates in $C$ into groups such that each candidate is part of $\mu$ groups as there are $\mu$ candidate attributes.
\underline{Candidates in $A$:} Each edge $e \in E$ that connects vertices $x_i$ and $x_j$ corresponds to a candidate group $G \in \mathcal{G}$ that contains two candidates $c_i$ and $c_j$. As our reduction proceeds from a 3-regular graph, each vertex is connected to three edges. This corresponds to each candidate $c \in A$ having three attributes and thus, belonging to three groups. Additionally, each candidate $c \in A$ is part of $\mu-3$ groups where each group is with the one candidate from Set $T_1$ of block type $B_1$. Specifically, candidate $c_i$ forms a group each with $d_{j, 1}^{T_1} \in T_1$ : $j\in[1+(i-1)(\mu-3),$ $i(\mu-3)]$. Hence, as each one of the $m$ candidates forms $\mu-3$ groups, we have a total of $m(\mu-3)=m\mu-3m$ blocks of type $B_1$ consisting of $m(\mu-3)$ candidates in Set $T_1$. Overall, each candidate $c \in A$ has $\mu$ attributes and is part of $\mu$ groups.
\begin{table*} \centering \resizebox{\textwidth}{!}{\begin{tabular}{c|lllllllllllllll} $v_1^{1}$, \dots, $v_1^{n}$&$ c_{i_1} \succ c_{j_1} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_1} $ & $\succ$&$ U_{5}^{v_1} $ & $\succ$&$ U_{6}^{v_1} $ & $\succ$&$ U_7$ \\[5pt] $v_2^{1}$, \dots, $v_2^{n}$&$ c_{j_1} \succ c_{i_1} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_2} $ & $\succ$&$ U_{5}^{v_2} $ & $\succ$&$ U_{6}^{v_2} $ & $\succ$&$ U_7$ \\[5pt] $v_3^{1}$, \dots, $v_3^{n}$&$ c_{i_2} \succ c_{j_2} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_3} $ & $\succ$&$ U_{5}^{v_3} $ & $\succ$&$ U_{6}^{v_3} $ & $\succ$&$ U_7$ \\[5pt] $v_4^{1}$, \dots, $v_4^{n}$&$ c_{j_2} \succ c_{i_2} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_4} $ & $\succ$&$ U_{5}^{v_4} $ & $\succ$&$ U_{6}^{v_4} $ & $\succ$&$ U_7$ \\[5pt] \vdots&&&&&&&\\ $v_{2n-1}^{1}$, \dots, $v_{2n-1}^{n}$&$ c_{i_n} \succ c_{j_n} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_{2n-1}} $ & $\succ$&$ U_{5}^{v_{2n-1}} $ & $\succ$&$ U_{6}^{v_{2n-1}} $ & $\succ$&$ U_7$ \\[5pt] $v_{2n}^{1}$, \dots, $v_{2n}^{n}$&$ c_{j_n} \succ c_{i_n} $ & $\succ$ & $ U_{2}$ & $ \succ$&$ U_{3} $ & $\succ$&$ U_{4}^{v_{2n}} $ & $\succ$&$ U_{5}^{v_{2n}} $ & $\succ$&$ U_{6}^{v_{2n}} $ & $\succ$&$ U_7$ \\[5pt] \end{tabular}}
\caption{An instance of preferences of voters created in the reduction for the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}.} \label{tab:voterRankingsDivRepConsodd} \end{table*}
\underline{Candidates of Block type $B_1$:} Each candidate in the block type $B_1$ has $\mu$ attributes and is grouped as follows:
\begin{itemize} \item Each dummy candidate $d_{j, 1}^{T_1} \in T_1$ : $j\in[1+(i-1)(\mu-3),$ $i(\mu-3)]$ is in the same group as candidate $c_i \in A$. Additionally, it is also in $\mu-1$ groups, individually with each of the $\mu-1$ dummy candidates, $d_{j,o}^{T_2} \in T_2$, $\forall o \in [1,$ $\mu-1]$. Thus, each dummy candidate $d_{j,1}^{T_1} \in T_1$ has $\mu$ attributes and is part of $\mu$ groups. \item For each dummy candidate $d_{j,i}^{T_2} \in T_2$ : $j\in[1+(i-1)(\mu-3),$ $i(\mu-3)]$ and $\forall i \in [1,$ $\mu-1]$, it is in the same group as $d_{j,1}^{T_1}$ as described in the previous point. It is also in $\mu-1$ groups, individually with each of the $\mu-1$ dummy candidates, $d_{j,i}^{T_3} \in T_3$. Thus, each dummy candidate $d_{j,i}^{T_2} \in T_2$ has $\mu$ attributes and is part of $\mu$ groups.
\item For each dummy candidate $d_{j,i}^{T_3} \in T_3$ : $j\in[1+(i-1)(\mu-3),$ $i(\mu-3)]$ and $\forall i \in [1,$ $\mu-1]$, it is in $\mu-1$ groups, individually with each of the $\mu-1$ dummy candidates, $d_{j,i}^{T_2} \in T_2$, as described in the previous point. Next, note that when $\mu$ is an odd number, $\mu-1$ is an even number, which means Set $T_3$ of each block has an even number of candidates. We randomly divide the $\mu-1$ candidates into two partitions. Then, we create $\frac{\mu-1}{2}$ groups over one attribute where each group contains two candidates from Set $T_3$ such that one candidate is selected from each of the two partitions without replacement. Thus, each pair of groups is mutually disjoint. Thus, each dummy candidate $d_{j,i}^{T_3} \in T_3$ is part of exactly one group that is shared with exactly one other dummy candidate $d_{j',i}^{T_3} \in T_3$ where $j \neq j'$. Overall, this construction results in one attribute and one group for each dummy candidate $d_{j,i}^{T_3} \in T_3$. Hence, each dummy candidate $d_{j,i}^{T_3} \in T_3$ has $\mu$ attributes and is part of $\mu$ groups. \end{itemize}
\underline{Candidates of Block type $B_2$:} Finally, we assign candidates from the Block type $B_2$ to groups. Each of the dummy candidates in Set $T_4$ of each of the $2mn$ blocks is grouped individually with each of the remaining $\mu$ dummy candidates in Set $T_4$ of a block. Formally, $\forall i \in [1,$ $2mn]$, $\forall j \in [1,$ $\mu+1]$, $d_{i, j}^{T_4} \in T_4$ and $\forall o \in [1,$ $\mu+1]$, $d_{i, o}^{T_4} \in T_4$ such that $j\neq o$, $d_{i, j}^{T_4}$ and $d_{i, o}^{T_4}$ are in the same group. Hence, each block consists of $\mu+1$ candidates and each candidate is grouped pairwise with each of the remaining $\mu$ candidates. This means that each dummy candidate $d_{i, j}^{T_4} \in T_4$ has $\mu$ attributes and is part of $\mu$ groups\footnote{This setup of Block type $B_2$ is analogous to creating $2mn$ $K_{\mu+1}$ graphs, i.e., a total of $2mn$ complete (2-uniform, $\mu$-regular) (hyper-)graphs, each with $\mu+1$ vertices.}.
\paragraph{Diversity Constraints:} We set the lower bound for each candidate group as follows:
\begin{itemize} \item $l^D_G=1$ for all $G \in \mathcal{G}$ : $G\cap A\neq \phi$, which corresponds to requiring that each edge be covered by some chosen vertex. \item $l^D_G=1$ for all $G \in \mathcal{G}$ such that at least one of the following holds: \begin{itemize} \item $\forall i \in [1,$ $m\mu-3m]$, $G \cap d_{i, 1}^{T_1} \neq \phi$. \item $\forall i \in [1,$ $m\mu-3m]$, $\forall j \in [1,$ $\mu-1]$, $G \cap d_{i, j}^{T_2} \neq \phi$. \item $\forall i \in [1,$ $m\mu-3m]$, $\forall j \in [1,$ $\mu-1]$, $G \cap d_{i, j}^{T_3} \neq \phi$. \end{itemize} \item $l^D_G=2$ for all $G \in \mathcal{G}$ such that $\forall i \in [1,$ $2mn]$, $\forall j \in [1,$ $\mu]$, $G \cap d_{i, j}^{T_4} \neq \phi$ and $\forall i \in [1,$ $2mn]$, $G\cap d_{i, \mu+1}^{T_4}=\phi$. \item $l^D_G=1$ for all $G \in \mathcal{G}$ such that $\forall i \in [1,$ $2mn]$, $\forall j \in [1,$ $\mu+1]$, $G \cap d_{i, j}^{T_4} \neq \phi$. \end{itemize}
In summary, $\forall$ $G \in \mathcal{G}$, \[ l^D_G = \begin{cases} 2, & \text{if } \forall i \in [1, 2mn], \forall j \in [1, \mu], G \cap d_{i, j}^{T_4} \neq \phi \text{ and } \\ & \forall i \in [1, 2mn], G\cap d_{i, \mu+1}^{T_4}=\phi\\ 1, & \text{otherwise} \end{cases} \]
\paragraph{Voters and Preferences:} We now introduce $2n^2$ voters, $2n$ voters for each edge $e \in E$.
More specifically, an edge $e \in E$ connects vertices $x_i$ and $x_j$. Then, the corresponding voters $v \in V$ rank the candidates as follows:
\begin{itemize} \item \textbf{First $2$ positions} - Set $U_1$ : The top two positions are occupied by candidates $c_i$ and $c_j$ that correspond to vertices $x_i$ and $x_j$. For voter $v^b_a$ where $a \in [2n]$ and $b \in [n]$, we denote the candidates $c_i$ and $c_j$ as $c_{i_{a}}$ and $c_{j_{a}}$. Note that we have two voters that correspond to each edge and hence, two voters $v^b_a$ and $v^b_{a-1}$ $\forall a \in [2n]_{\text{even}}$, rank the same two candidates in the top two positions. Specifically, voter $v^b_a$ ranks the two candidates based on the non-increasing order of their indices. Voter $v^b_{a-1}$ ranks the two candidates based on the non-decreasing order of their indices. \item \textbf{Next $m\mu^2-3m\mu$ positions} - Set $U_2$ : All of the $m\mu-3m$ dummy candidates from Set $T_1$ and all of the $m\mu^2-4m\mu+3m$ dummy candidates from Set $T_3$ are ranked based on non-decreasing order of their indices. Note that $m\mu-3m + m\mu^2-4m\mu+3m$ = $m\mu^2-3m\mu$. \item \textbf{Next $2mn\mu$ positions} - Set $U_3$ : $2mn\mu$ of the $2mn\mu+2mn$ dummy candidates from Set $T_4$ are ranked based on non-decreasing order of their indices. Specifically, a dummy candidate $d_{o, j}^{T_4}$ from Set $T_4$ is selected $\forall o \in [1,$ $2mn]$ and $\forall j \in [1,$ $\mu]$. Note that dummy candidates from Set $T_4$ of the kind $d_{o, \mu+1}^{T_4}$, for all $o \in [1,$ $2mn]$, are not ranked in these positions. \item \textbf{Next $m$ positions} - Set $U_4$ : $m$ out of the $2mn$ unranked dummy candidates from Set $T_4$ of the kind $d_{o, \mu+1}^{T_4}$, for all $o \in [1,$ $2mn]$, are ranked in the next $m$ positions based on non-decreasing order of their indices. Specifically, the $m$ candidates that are ranked satisfy $(o \bmod 2n)+1=a$. Note that for each type of voter of the kind $v_i$, these $m$ candidates are distinct as shown below. Hence, for all pairs of voters of the kind $v_i, v_j \in V : v_i \neq v_j$, we know that $U_4^{v_i} \cap U_4^{v_j}=\phi$. \item \textbf{Next $m-2$ positions} - Set $U_5$ : The $m-2$ candidates from $A$, which are not ranked in the top two positions, are ranked based on non-decreasing order of their indices. Formally, $U_5=A \setminus \{c_{i_a}, c_{j_a}\}$. \item \textbf{Next $2mn-m$ positions} - Set $U_6$ : $2mn-m$ out of the $2mn$ unranked dummy candidates from Set $T_4$ of the kind $d_{o, \mu+1}^{T_4}$, for all $o \in [1,$ $2mn]$, are ranked based on non-decreasing order of their indices. Specifically, the candidates that are ranked satisfy $(o \bmod 2n)+1\neq a$. \item \textbf{Next $m\mu^2-4m\mu+3m$ positions} - Set $U_7$ : All of the $m\mu^2-4m\mu+3m$ dummy candidates from Set $T_2$ are ranked based on non-decreasing order of their indices. \end{itemize}
More specifically, the voters rank the candidates as shown in Table~\ref{tab:voterRankingsDivRepConsodd}. The sets without a superscript (e.g., $U_2$) denote the candidate rankings that are the same for all voters.
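As a consistency check on this construction (our addition, using only the counts stated above), the sizes of the seven segments $U_1,\dots,U_7$ should sum to $|C|$, since every candidate is ranked exactly once:
\begin{verbatim}
# Sanity check (not part of the proof): the segments U_1,...,U_7 above rank
# every candidate exactly once, so their sizes must sum to
# |C| = 2*mu^2*m - 7*mu*m + 2*mu*m*n + 2*m*n + 4*m.
import sympy as sp

m, n, mu = sp.symbols('m n mu', positive=True)
segments = [2,                           # U_1
            m*mu**2 - 3*m*mu,            # U_2
            2*m*n*mu,                    # U_3
            m,                           # U_4
            m - 2,                       # U_5
            2*m*n - m,                   # U_6
            m*mu**2 - 4*m*mu + 3*m]      # U_7
total_C = 2*mu**2*m - 7*mu*m + 2*mu*m*n + 2*m*n + 4*m
print(sp.expand(sum(segments) - total_C))  # expected output: 0
\end{verbatim}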
The voters are divided into populations as follows: $\forall x \in [\pi]$, $\forall y \in [2n]$, $\forall z \in [n]$, voter $v_y^z \in V$ is part of a population $P\in\mathcal{P}$ such that $P$ contains all voters with the same $(z \bmod x)$ and $y$. Each voter is part of $\pi$ populations.
\paragraph{Representation Constraints:} We set the lower bound for each voter population as follows: $\forall P \in \mathcal{P}$, $l^R_P$ = $1+m\mu^2-3m\mu+2mn\mu$. This completes our construction for the reduction, which is a polynomial-time reduction in the size of $n$ and $m$.
\paragraph{\underline{PART II: Proof of Correctness}}
\begin{claim} We have a vertex cover $S$ of size at most $k$ that satisfies $e \cap S\neq\phi$ $\forall$ $e \in E$ if and only if we have a committee $W$ of size at most $k + m \mu^2 + 2mn\mu - 3m\mu$ such that $\forall$ $G \in \mathcal{G}$, $|G \cap W|\geq l^D_G$, $\forall$ $P \in \mathcal{P}$, $|W_P \cap W|\geq l^R_P$, and $\mathtt{f}(W) = \max_{W' \in \mathcal{W}} \mathtt{f}(W')$\footnote{The ties are broken based on rankings of the candidates such that higher-ranked candidates are chosen over lower-ranked candidates. Note that with some trivial changes in voter preferences, we can ensure that there are no ties. Specifically, we create enough copies of each voter to ensure that each candidate not in the top $k + m \mu^2 + 2mn\mu - 3m\mu$ positions occupies the last position once.}, where $\mathcal{W}$ is the set of committees that satisfy all constraints. \end{claim}
($\Rightarrow$) If the instance of the VC problem is a yes instance, then the corresponding instance of DRCWD\xspace is a yes instance.
\paragraph{Diversity constraints are satisfied:} Firstly, each and every candidate group will have at least one of its members in the winning committee $W$, i.e., $|G \cap W|\geq l^D_G$ for all $G \in \mathcal{G}$. More specifically, for each of the $m\mu-3m$ blocks of type $B_1$ of candidates, we select:
\begin{itemize} \item one dummy candidate from Set $T_1$ \item all $\mu-1$ dummy candidates from Set $T_3$ \end{itemize}
This helps to satisfy the condition $l^D_G=1$ for all $G \in \mathcal{G}$ such that at least one of the following holds:
\begin{itemize} \item $\forall i \in [1,$ $m\mu-3m]$, $G \cap d_{i, 1}^{T_1} \neq \phi$. \item $\forall i \in [1,$ $m\mu-3m]$, $\forall j \in [1,$ $\mu-1]$, $G \cap d_{i, j}^{T_2} \neq \phi$. \item $\forall i \in [1,$ $m\mu-3m]$, $\forall j \in [1,$ $\mu-1]$, $G \cap d_{i, j}^{T_3} \neq \phi$. \end{itemize}
Thus, we select $\mu$ candidates from $\mu-3$ blocks for each of the $m$ candidates that correspond to the vertices of the graph. This results in $(\mu \cdot (\mu-3) \cdot m) = m \mu^2 - 3m\mu$ candidates in the committee. Next, for each of the $2mn$ blocks of type $B_2$ of candidates, we select:
\begin{itemize} \item $\mu$ dummy candidates $d_{i, j}^{T_4}$ from Set $T_4$ such that $i \in [1,$ $2 m n]$ and $j \in [1,$ $\mu]$. \end{itemize}
This helps to satisfy the conditions: $l^D_G=2$ for all $G \in \mathcal{G}$ such that $\forall i \in [1,$ $2mn]$, $\forall j \in [1,$ $\mu]$, $G \cap d_{i, j}^{T_4} \neq \phi$ and $\forall i \in [1,$ $2mn]$, $G\cap d_{i, \mu+1}^{T_4}=\phi$, and $l^D_G=1$ for all $G \in \mathcal{G}$ such that $\forall i \in [1,$ $2mn]$, $\forall j \in [1,$ $\mu+1]$, $G \cap d_{i, j}^{T_4} \neq \phi$. Overall, we select $\mu$ candidates from $2mn$ blocks. This results in additional $2mn\mu$ candidates in the committee.
Finally, for groups that do not contain any dummy candidates, we select the $k$ candidates $c \in A$ that correspond to the $k$ vertices $x \in X$ that form the vertex cover. These candidates satisfy the remainder of the constraints. Specifically, these $k$ candidates satisfy $|G \cap W|\geq 1$ for all the candidate groups that do not contain any dummy candidates. Hence, we have a committee of size $k + m \mu^2 + 2mn\mu - 3m\mu$.
\paragraph{Representation constraints are satisfied:} Next, if the instance of the VC problem is a yes instance, then we have a winning committee $W$ of size $k + m \mu^2 + 2mn\mu - 3m\mu$ that consists of $k$ candidates corresponding to the VC and $m \mu^2 + 2mn\mu - 3m\mu$ candidates from Sets $U_2$ and $U_3$. Also, each and every population's winning committee, $W_P$ for all $P \in \mathcal{P}$, will have at least $1+m\mu^2-3m\mu+2mn\mu$ of its members in the winning committee $W$ such that $|W_P \cap W|\geq 1+m\mu^2-3m\mu+2mn\mu$, for all $P \in \mathcal{P}$, because:
\begin{itemize} \item as we have a yes instance of the VC problem, one of the two corresponding candidates occupying the first two positions of the ranking will be on the committee. \item each of the $m\mu^2-3m\mu$ candidates from Set $U_2$ will be on the committee \item each of the $2mn\mu$ candidates from Set $U_3$ will be on the committee \end{itemize}
By construction, candidates in Set $U_2$ and candidates in Set $U_3$ will always be part of each population's winning committee. Additionally, candidates in Set $U_2$ are the $\mu$ candidates selected from each of the $m\mu-3m$ blocks of the type $B_1$ and these candidates are the same across all voter populations. Also, the candidates in Set $U_3$ are the $\mu$ candidates selected from each of the $2mn$ blocks of the type $B_2$ and these candidates are the same across all voter populations. Thus, $|W_P \cap W|\geq 1+m\mu^2-3m\mu+2mn\mu$, for all $P \in \mathcal{P}$, and the \emph{same} winning committee $W$ satisfies the diversity constraints \emph{and} the representation constraints.
\paragraph{Highest scoring committee:} It remains to be shown that $W$ is the highest-scoring committee among all the committees that satisfy the given constraints. Note that for a given population $P \in \mathcal{P}$, $\forall c \in C$ : $\pos_P(c)=1$ or $\pos_P(c)=2$, $\forall d_{i,j}^{T_4} \in $ Set $U_4^P$, $\forall v \in V$, $\pos_v(c)$ $\succ$ $\pos_v(d_{i,j}^{T_4})$. This holds based on the prerequisite condition that we are interested in committees that satisfy the constraints. Additionally, Set $U_2$ $\succ$ Set $U_3$ $\succ$ $d_{i,j}^{T_4}$, $\forall d_{i,j}^{T_4} \in $ Set $T_4$. This holds because $d_{i,j}^{T_4}$ is either in Set $U_4$ or Set $U_6$, and even after accounting for varying preferences of these two sets, Sets $U_2$ and $U_3$ are always ranked higher. Thus, the contribution of each candidate in Sets $U_2$ and $U_3$ to the total score will be greater than or equal to that of a candidate $d_{i,j}^{T_4}$. As noted earlier, the ties are broken based on ranking. Hence, $W$ will be the highest-scoring committee. Overall, a yes instance of the VC problem implies a yes instance of the DRCWD\xspace problem such that the committee $W$ is of size at most $k + m \mu^2 + 2mn\mu - 3m\mu$, $\forall$ $G \in \mathcal{G}$, $|G \cap W|\geq l^D_G$, $\forall$ $P \in \mathcal{P}$, $|W_P \cap W|\geq l^R_P$, and $\mathtt{f}(W) = \max_{W' \in \mathcal{W}} \mathtt{f}(W')$.
($\Leftarrow$) The instance of the DRCWD\xspace is a yes instance when we have $k + m \mu^2 + 2mn\mu - 3m\mu$ candidates in the committee. This means that each and every group will have at least one of its members in the winning committee $W$, i.e., $|G \cap W|\geq l^D_G$ for all $G \in \mathcal{G}$. Then the corresponding instance of the VC problem is a yes instance as well. This is because the $k$ vertices $x \in X$ that form the vertex cover correspond to the $k$ candidates $c \in A$ that satisfy $|G \cap W|\geq 1$ for all the candidate groups that do not contain any dummy candidates. Next, by construction, we know that as a yes instance of the DRCWD\xspace problem satisfies the diversity constraints, each population's winning committee, $W_P$ for all $P \in \mathcal{P}$, will have at least $l^R_P$ of its members in the winning committee $W$, i.e., $|W_P \cap W|\geq l^R_P$ for all $P \in \mathcal{P}$. Importantly, as the $k + m \mu^2 + 2mn\mu - 3m\mu$-sized committee $W$ also satisfies the diversity constraints, we know that all candidates from Set $U_2$ and Set $U_3$ are selected. Additionally, no candidate from Set $U_4$ of each voter population is selected and \emph{at least one} of the top 2 ranked candidates from each voter population is selected. Hence, this means that the corresponding instance of the VC problem is a yes instance as well. Finally, this committee will be the highest-scoring committee among all the committees that satisfy the constraints. This completes the other direction of the proof of correctness. We note that as we are using a separable committee selection rule\xspace, computing scores of candidates takes polynomial time. This completes the overall proof. \end{proof}
The hardness holds even when either all diversity constraints are set to zero ($\forall G \in \mathcal{G}$, $l_G^D=0$) or all representation constraints are set to zero ($\forall P \in \mathcal{P}$, $l_P^R=0$). We now show that DRCWD\xspace remains NP-hard when the number of candidate attributes $\mu$ is even.
\begin{theorem}\label{thm:DiReCWDdivrep31even} If $\forall \mu \in \mathbb{Z} : \mu\geq 3$ and $\mu$ is an even number, $\forall \pi \in \mathbb{Z} : \pi\geq 1$, and $\mathtt{f}$ is the monotone, separable function arising from an arbitrary single-winner positional scoring rule, then DRCWD\xspace is NP-hard. \end{theorem}
\begin{proof}[Proof Sketch] The proof follows from the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}. Hence, we only discuss the major changes in the construction of the proof.
\paragraph{\underline{PART I: Construction}}
\paragraph{Candidates:} We have two candidates $c_i$ and $c_{m+i}$ for each vertex $x_i \in X$, and $2\cdot (2 \mu^2 m - 7 \mu m + 2 \mu m n + 2 m n + 3 m)$ dummy candidates $d \in D$, where $m$ corresponds to the number of vertices in the graph $H$, $n$ corresponds to the number of edges in the graph $H$, and $\mu$ is a positive, even integer (i.e., the number of candidate attributes). We divide the dummy candidates into two types of blocks in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}, but with twice the number of blocks and hence twice the number of candidates.
\paragraph{Committee Size:} We set the target committee size to be $2\cdot(k + m \mu^2 + 2mn\mu - 3m\mu)$.
\paragraph{Candidate Groups:} We divide the candidates in $C$ into groups such that each candidate is part of $\mu$ groups as there are $\mu$ candidate attributes. The division is in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd} but each set contains twice the number of candidates and there is one exception. For candidates in Block type $B_1$:
\begin{itemize} \item For each dummy candidate $d_j^{T_3} \in T_3$, it is in $\mu-1$ groups, individually with each of the $\mu-1$ dummy candidates, $d_i^{T_2} \in T_2$, as described in the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}. Next, note that when $\mu$ is an even number, $\mu-1$ is an odd number, which means Set $T_3$ has an \emph{odd} number of candidates. We randomly divide $\mu-2$ candidates into two partitions. Then, we create $\frac{\mu-2}{2}$ groups over one attribute where each group contains two candidates from Set $T_3$ such that one candidate is selected from each of the two partitions without replacement. Thus, each pair of groups is mutually disjoint. Hence, each such dummy candidate $d_j^{T_3} \in T_3$ is part of exactly one group that is shared with exactly one other dummy candidate $d_{j'}^{T_3} \in T_3$ where $j \neq j'$. Overall, this construction results in one attribute and one group for all but one dummy candidate $d_j^{T_3} \in T_3$, which results in a total of $\mu$ attributes and $\mu$ groups for these $\mu-2$ candidates. This is because $\frac{\mu-2}{2}$ groups can hold $\mu-2$ candidates. Hence, one candidate still has $\mu-1$ attributes and is part of $\mu-1$ groups. If this block of dummy candidates is for candidate $c_i \in A$, then another corresponding block of dummy candidates for candidate $c_{m+i} \in A$ will also have one candidate $d_{z}^{T'_3} \in T'_3$ who will have $\mu-1$ attributes and is part of $\mu-1$ groups. We group these two candidates from separate blocks. Hence, that one remaining candidate now also has $\mu$ attributes and is part of $\mu$ groups. As there is always an even number of candidates in set $A$ ($|A|=2m$), such cross-block grouping of candidates among a total of $(\mu-3) \cdot 2m$ blocks, also an even number, is always possible. \end{itemize}
\paragraph{Diversity Constraints:} We set the lower bound for each candidate group in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}.
\paragraph{Voters and Preferences:} We now introduce $4n^2$ voters, $4n$ voters for each edge $e \in E$. More specifically, an edge $e \in E$ connects vertices $x_i$ and $x_j$. Then, the corresponding voters $v \in V$ rank the candidates in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd} but each set now consists of twice the number of candidates. Specifically, the major change is in Set $U_1$ as it now consists of $c_i$, $c_j$, $c_{i+m}$, and $c_{j+m}$.
\paragraph{Voter Populations:} We divide the voters into populations in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}.
\paragraph{Representation Constraints:} We set the lower bound for each voter population as follows: $\forall P \in \mathcal{P}$, $l^R_P$ = $2\cdot(1+m\mu^2-3m\mu+2mn\mu)$. This completes our construction for the reduction, which is a polynomial-time reduction in the size of $n$ and $m$, in line with the proof of Theorem~\ref{thm:DiReCWDdivrep31odd}.
\paragraph{\underline{PART II: Proof of Correctness}}
\begin{claim} We have a vertex cover $S$ of size at most $k$ that satisfies $e \cap S\neq\phi$ $\forall$ $e \in E$ if and only if we have a committee $W$ of size at most $2\cdot(k + m \mu^2 + 2mn\mu - 3m\mu)$ such that $\forall$ $G \in \mathcal{G}$, $|G \cap W|\geq l^D_G$, $\forall$ $P \in \mathcal{P}$, $|W_P \cap W|\geq l^R_P$, and $\mathtt{f}(W) = \max_{W' \in \mathcal{W}} \mathtt{f}(W')$\footnote{The ties are broken based on rankings of the candidates such that higher-ranked candidates are chosen over lower-ranked candidates.}, where $\mathcal{W}$ is the set of committees that satisfy all constraints. \end{claim}
The proof of correctness easily follows from the proof of correctness of Theorem~\ref{thm:DiReCWDdivrep31odd}. \end{proof}
In line with the previous theorem, the hardness holds even when either all diversity constraints are set to zero ($\forall G \in \mathcal{G}$, $l_G^D=0$) or all representation constraints are set to zero ($\forall P \in \mathcal{P}$, $l_P^R=0$).
\section{Related Work} \label{sec:RW}
\subsection{COMSOC and Classification of Complexity}
Computational Social Choice research has particularly focused on classifying the complexity of a known social choice problem. For instance, Konczak and Lang \shortcite{konczak2005voting} introduced the problem of voting under partial information. This led to a line of research that aimed to classify the complexity of the problem of possible winners (a candidate is a winner in at least one completion) and necessary winners (a candidate is a winner in all completions) over all pure positional scoring rules \cite{baumeister2012taking,betzler2010towards,chakraborty2021classifying,kenig2019complexity,xia2011determining}.
\subsection{Multiwinner Elections, Fairness, and Complexity}
Our work primarily builds upon the literature on constrained multiwinner elections. Fairness from the candidates' perspective is discussed via the use of diversity constraints over multiple attributes, whose use is known to be NP-hard; this has led to approximation algorithms and matching hardness-of-approximation results by Bredereck \emph{et~al.}\xspace \shortcite{bredereck2018multiwinner} and Celis \emph{et~al.}\xspace \shortcite{celis2017multiwinner}. Additionally, goalbase score functions, which specify an arbitrary set of logic constraints and let the score capture the number of constraints satisfied, could be used to ensure diversity \cite{uckelman2009representing}. On the other hand, the study of fairness from the voters' perspective pertains to the use of representation constraints \cite{cheng2019group}. Finally, due to the hardness of using diversity constraints over multiple attributes in approval-based multiwinner elections \cite{brams1990constrained}, these have been formalized as integer linear programs (ILP) \cite{potthoff1990use}. Overall, our work is at the intersection of the COMSOC community's interests in classifying complexity and in fairness in multiwinner elections.
Specifically, our work is the closest to the work by Bredereck \emph{et~al.}\xspace \shortcite{bredereck2018multiwinner}, Celis \emph{et~al.}\xspace \shortcite{celis2017multiwinner}, Cheng \emph{et~al.}\xspace \shortcite{cheng2019group}, and Relia \shortcite{relia2021dire}, but we differ as we: (i) provide a complete classification of the complexity of finding a diverse and representative committee using a monotone, separable positional multiwinner voting rule, (ii) prove NP-hardness results that hold for \emph{all} integer values of attributes, and (iii) condition our NP-hardness results \emph{only} on the assumption that P$\neq$NP.
\section{Introduction} \label{sec:intro}
Fairness has recently received particular attention from the computer science research community. For context, the number of papers that contain the words ``fair'' or ``fairness'' in their titles and are published at top-tier computer science conferences like NeurIPS and AAAI grew at an average of 38\% year on year since 2018. Moreover, the conference ACM FAccT, formerly known as FAT*, was established in 2018 to ``bring together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems''. Similarly, there is a growing trend in the Computational Social Choice (COMSOC) community towards the use of ``fairness''\footnote{We do not consider the research on Fair Resource Allocation due to the specificity in the use of the word ``fair''.} \cite{bredereck2018multiwinner,celis2017multiwinner,cheng2019group,hershkowitzdistrict,shrestha2019fairness}. However, the term ``fairness'' is used in varying contexts. For example, Bredereck \emph{et~al.}\xspace \shortcite{bredereck2018multiwinner} and Celis \emph{et~al.}\xspace \shortcite{celis2017multiwinner} call diversity of candidates in committee elections fairness, while Cheng \emph{et~al.}\xspace \shortcite{cheng2019group} call representation of voters in committee elections fairness. Such context-specific use of the term narrates an incomplete story. Hence, Relia \shortcite{relia2021dire} unified the framework using the DiRe Committee model that combines the use of diversity and representation constraints. In line with the conceptual difference, the use of constraints leads to setups of multiwinner elections that are technically different. For instance, a diversity constraint is a property of candidates and a representation constraint is a property of voters. The use of these constraints is mathematically as different as the regularity and uniformity properties of hypergraphs\footnote{The mathematical differences between, say, the vertex cover problem on $d$-regular hypergraphs and $k$-uniform hypergraphs are well-known \cite{bansal2010inapproximability, feige2003vertex}.}. Hence, it is important to mathematically delineate the two notions. A starting step towards understanding the difference between diversity and representation is having a classification of the complexity of using diversity and representation constraints in multiwinner elections. The classification of complexity, while technically interesting and important in itself, enables a detailed understanding of the nuances that delineate the two notions. Our main contributions are the complexity results that are based on the single assumption that P$\neq$NP.
\section{Preliminaries and Notation} \label{sec:prelim}
\paragraph{Multiwinner Elections.} Let $E = (C, V )$ be an election consisting of a candidate set $C = \{c_1,\dots,c_m\}$ and a voter set $V = \{v_1,\dots,v_n\}$, where each voter $v \in V$ has a preference list $\succ_{v}$ over the $m$ candidates, ranking all of the candidates from the most to the least desired. $\pos_{v}(c)$ denotes the position of candidate $c \in C$ in the ranking of voter $v \in V$, where the most preferred candidate has position 1 and the least preferred has position $m$. Given an election $E = (C,V)$ and a positive integer $k \in [m]$ (for $k \in \mathbb{N}$, $[k] = \{1, \dots, k\}$), a multiwinner election selects a $k$-sized subset of candidates (or a committee) $W$ using a multiwinner voting rule $\mathtt{f}$ (discussed later) such that the score of the committee $\mathtt{f}(W)$ is the highest. We assume ties are broken using a pre-decided priority order.
\paragraph{Candidate Groups.} The candidates have $\mu$ attributes, $A_1 , . . . , A_\mu$, such that $\mu \in \mathbb{Z}$ and $\mu \geq 0$. Each attribute $A_i$, $\forall$ $i \in [\mu]$, partitions the candidates into $g_i \in [m]$ groups, $\group{i}{1} , . . . , \groupsub{i}{g} \subseteq C$. Formally, $\group{i}{j} \cap \group{i}{j'} = \emptyset$, $\forall j,j' \in [g_i], j \ne j'$. For example, candidates may have race and gender attributes ($\mu$ = 2) with disjoint groups per attribute, male and female ($g_1$ = 2) and Caucasian and African-American ($g_2$ = 2). Overall, the set $\mathcal{G}$ of \emph{all} such arbitrary and potentially non-disjoint groups is $\group{1}{1} , . . . , \groupsub{\mu}{g} \subseteq C$.
\paragraph{Voter Populations.} The voters have $\pi$ attributes, $A'_1 , . . . , A'_\pi$, such that $\pi \in \mathbb{Z}$ and $\pi \geq 0$. The voter attributes may be different from the candidate attributes. Each attribute $A'_i$, $\forall$ $i \in [\pi]$, partitions the voters into $p_i \in [n]$ populations, $\population{i}{1} , . . . , \populationsub{i}{p} \subseteq V$. Formally, $\population{i}{j} \cap \population{i}{j'} = \emptyset$, $\forall j,j' \in [p_i], j \ne j'$. For example, voters may have a state attribute ($\pi$ = 1), which has populations California and Illinois ($p_1$ = 2). Overall, the set $\mathcal{P}$ of \emph{all} such predefined and potentially non-disjoint populations will be $\population{1}{1} , . . . , \populationsub{\pi}{p} \subseteq V$. Additionally, we are given $W_P$, the winning committee $\forall$ $P \in \mathcal{P}$. We limit the scope of $W_P$ to be a committee instead of a ranking of $k$ candidates because when a committee selection rule\xspace such as the CC rule is used to determine each population's winning committee $W_P$, a complete ranking of each population's collective preferences is not possible.
\paragraph{Multiwinner Voting Rules.} There are multiple types of multiwinner voting rules, also called committee selection rules. In this paper, we focus on committee selection rules $\mathtt{f}$ that are based on single-winner positional voting rules, and are monotone and submodular ($\forall A \subseteq B, f(A) \leq f(B)$ and $f(B) \leq f(A) + f(B \setminus A)$), and specifically separable \cite{bredereck2018multiwinner,celis2017multiwinner}. Specifically, a special case of submodular functions is the class of separable functions: the score of a committee $W$ is the sum of the scores of the individual candidates in the committee.
Formally, $\mathtt{f}$ is separable if it is submodular and $\mathtt{f}(W) = \sum_{c \in W}\mathtt{f}(c)$ \cite{bredereck2018multiwinner}. Monotone and separable selection rules are natural and are considered good when the goal of an election is to shortlist a set of individually excellent candidates:
\begin{definition} \label{def-kborda} \textbf{$k$-Borda rule:} The $k$-Borda rule outputs committees of $k$ candidates with the highest Borda scores. \end{definition}
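As an illustration of a monotone, separable rule---a sketch we add here with a hypothetical preference profile---the $k$-Borda rule can be computed as follows, assuming the convention that position $r$ earns $m-r$ points and that ties are broken by a fixed priority order.
\begin{verbatim}
# Minimal sketch of the k-Borda rule from the definition above: each voter's
# ranking awards m - r points to the candidate in position r, and the k
# candidates with the highest total Borda scores form the committee.
# The preference profile below is hypothetical.
from collections import defaultdict

def k_borda(profile, k):
    m = len(profile[0])
    score = defaultdict(int)
    for ranking in profile:               # ranking: most to least preferred
        for r, c in enumerate(ranking, start=1):
            score[c] += m - r
    # sorting is the stand-in for a pre-decided tie-breaking priority order
    return sorted(score, key=score.get, reverse=True)[:k]

profile = [["c1", "c2", "c3", "c4"],
           ["c2", "c1", "c4", "c3"],
           ["c1", "c3", "c2", "c4"]]
print(k_borda(profile, k=2))              # ['c1', 'c2'] here
\end{verbatim}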
\section{Introduction} Multi-agent decision making is an established paradigm to model and solve problems involving multiple heterogeneous (possibly selfish) agents with individual goals. Moreover, as part of the same population, such entities interact and potentially share common resources or compete for them, giving rise to a noncooperative setup. In this context, equilibrium solution concepts based on game theory and, in particular, \glspl{GNEP} \cite{facchinei2007generalized}, provide a framework that encompasses many control engineering problems, e.g., communication and networks \cite{scutari2014real,facchinei2017feasible}, automated driving and traffic control \cite{smith1979existence,8672171}, smart grids and demand-side management \cite{ma2011decentralized,chen2014autonomous,9030152}. However, Nash equilibria are typically formalized in games with complete information, i.e., where the main ingredients (agents' cost functions and strategies, local and coupling constraints) are fully deterministic. This suggests a conceptual shift when dealing with real-world applications, since the latter are strongly affected by the presence of uncertainty, and therefore traditional equilibrium notions may no longer be appropriate. This motivates the search for robust \gls{GNEP} reformulations, suitably accompanied by tailored definitions of equilibrium solutions. Since the pioneering work in \cite{harsanyi1962bargaining}, the literature on robust game theory has divided into two main directions that depend on the available information (or working assumptions) about the uncertain parameter. Specifically, several results deal with uncertainty characterized by specific models of either its probability distribution \cite{couchman2005gaming,singh2016existence} or the geometry of its support set \cite{aghassi2006robust,hayashi2005robust,perchet2020finding}. Conversely, there has been a recent development of data-driven (or distribution-free) robust approaches, see, e.g., \cite{9028952,paccagnan2019scenario,pantazis2020aposteriori}. Within this data-driven context, the main results of the aforementioned papers characterize the robustness of equilibria to unseen realizations of the uncertain parameter by leveraging the scenario approach paradigm \cite{campi2018introduction}. Originally conceived to provide a-priori feasibility guarantees associated with the optimal solution to an uncertain convex optimization problem \cite{calafiore2006scenario}, the scenario theory has recently been extended to nonconvex decision-making problems by means of an a-posteriori assessment of the feasibility risk \cite{campi2018general}. In a nutshell, the scenario theory establishes that the robustness of the solution to a given uncertain decision-making problem shall be assessed by solving an approximated, yet computationally tractable, problem that is built upon a finite number of observed realizations of the uncertainty. We aim to bridge multi-agent generalized game theory with the data-driven scenario paradigm, in order to compute \gls{GNE} with quantifiable robustness properties in a distribution-free fashion. Specifically, we focus on the broad class of \glspl{GNEP} in aggregative setting (\S II), where the cost function of each agent depends on the average behaviour of the whole population and the strategies are coupled by means of (affine) coupling constraints affected by uncertainty with a possibly unknown probability distribution.
Here, we contextualize and apply the probabilistic results in \cite{campi2018general} to provide a-posteriori feasibility certificates for the entire set of \glspl{v-GNE}, a popular subset of \glspl{GNE} \cite{cavazzuti2002nash}. Compared with the literature on robust data-driven game theory, our contributions can be summarized as follows. \begin{itemize} \item Following the direction of \cite{pantazis2020aposteriori}, we focus on the entire set of equilibria, implicitly relaxing the assumption on the uniqueness of the equilibrium postulated in \cite{paccagnan2019scenario}; \item We extend the results in \cite{9028952,pantazis2020aposteriori} by providing a-posteriori robustness certificates for the set of \gls{GNE} rather than for the feasible set or for Nash equilibria without coupling constraints. We also show that the resulting bounds are less conservative (\S III); \item The obtained probabilistic guarantees rely on the notion of support subsample, a key concept of the scenario approach theory. To compute these support subsamples, we show that it is merely required to enumerate the constraints that ``shape'' the set of \gls{GNE}. An explicit representation of the unknown set of equilibria is therefore not needed (\S III); \item For the considered class of \glspl{GNEP}, we propose a structure-preserving, semi-decentralized algorithm to compute the cardinality of a minimal, irreducible support subsample \gls{w.r.t.} the set of \gls{GNE} (\S IV). \end{itemize} Finally, we validate the proposed theoretical results on an illustrative example (\S V). \smallskip \subsubsection*{Notation} $\mathbb{N}$ and $\mathbb{R}$ denote the sets of natural and real numbers, respectively. For vectors $v_1,\dots,v_N\in\mathbb{R}^n$ and $\mc I=\{1,\dots,N \}$, we denote $\boldsymbol{v} \coloneqq (v_1 ^\top,\dots ,v_N^\top )^\top = \mathrm{col}((v_i)_{i\in\mc I})$ and $ \boldsymbol{v}_{-i} \coloneqq ( v_1^\top,\dots,v_{i-1}^\top,v_{i+1}^\top,\dots,v_{N}^\top )^\top =\mathrm{col}(( v_j )_{j\in\mc I\setminus \{i\}})$. With a slight abuse of notation, we also use $\boldsymbol{v} = (v_i,\boldsymbol{v}_{-i})$. Given a matrix $A \in \mathbb{R}^{m \times n}$, $A^\top$ denotes its transpose, while for $A \in \mathbb{R}^{n \times n}$, $A \succ 0$ ($\succcurlyeq 0$) implies that $A$ is symmetric and positive (semi)-definite. For a given set $\mc{S} \subseteq \mathbb{R}^n$, $\textrm{bdry}(\mc{S})$ denotes its boundary. If $\mc{S}$ is closed and convex, the normal cone of $\mc{S}$ evaluated at some $\boldsymbol{x}$ is the set-valued mapping $\mc{N}_{\mc{S}} : \mathbb{R}^n \to 2^{\mathbb{R}^n}$, defined as $\mc{N}_{\mc{S}}(\boldsymbol{x}) \coloneqq \{ d \in \mathbb{R}^n \mid d^\top (\boldsymbol{y} - \boldsymbol{x}) \leq 0, \; \forall \boldsymbol{y} \in \mc{S} \}$ if $\boldsymbol{x} \in \mc{S}$, $\mc{N}_{\mc{S}}(\boldsymbol{x}) \coloneqq \emptyset$ otherwise. A mapping $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ is monotone if $(F(\boldsymbol{x}) - F(\boldsymbol{y}))^\top(\boldsymbol{x} - \boldsymbol{y}) \geq 0$ for all $\boldsymbol{x}, \boldsymbol{y} \in \mathbb{R}^n$. $\mc{C}^1$ is the class of continuously differentiable functions. \section{Mathematical setup and problem statement} We start by formalizing the data-driven, uncertain game considered. Then, we mathematically define the problem addressed, and finally recall some key results on the class of \gls{v-GNE} characterizing \glspl{GNEP} in aggregative form.
\subsection{Aggregative game formulation} We consider a noncooperative, multi-agent game whose $N$ players are indexed by the set $\mc{I} \coloneqq \{1, \ldots,N\}$. Let $x_i \in \mathbb{R}^{n_i}$ be the decision vector of the $i$-th player, locally constrained to a set $\mc{X}_i \subseteq \mathbb{R}^{n_i}$. In this context, each player aims at minimizing a predefined cost function $J_i : \mathbb{R}^n \to \mathbb{R}$, $n \coloneqq \sum_{i \in \mc{I}} n_i$, while satisfying a set of coupling constraints among the agents, affected by the realization of an uncertain vector $\delta$ and encoded by the set $\mc{X}_{\delta} \subseteq \mathbb{R}^n$. Specifically, $\delta$ takes values in the set $\Delta \subseteq \mathbb{R}^\ell$, endowed with a $\sigma$-algebra $\mc{D}$, and is distributed according to $\mathbb{P}$, a possibly unknown probability measure over $\mc{D}$. This results in the following family of mutually coupled optimization problems: $$ \forall i \in \mc{I} : \left\{ \begin{aligned} &\underset{x_i \in \mc{X}_i}{\textrm{min}} & & J_i(x_i, \boldsymbol{x}_{-i})\\ &\hspace{.1cm}\textrm{ s.t. } & & (x_i, \boldsymbol{x}_{-i}) \in \mc{X}_{\delta}, \, \delta \in \Delta. \end{aligned} \right. $$ For computational purposes, hereinafter we consider each cost function to be in aggregative form and quadratic, while $\mc{X}_{\delta}$ is a polyhedral set for every realization of $\delta$, i.e., $$ \begin{aligned} J_i &\coloneqq \tfrac{1}{2} x^\top_i Q_i x_i + (\tfrac{1}{N}\textstyle\sum_{j \in \mc{I} \setminus \{i\}} C_{i,j} x_j + q_i)^\top x_i, \; \forall i \in \mc{I},\\ \mc{X}_{\delta} &\coloneqq \{\boldsymbol{x} \in \mathbb{R}^n \mid A(\delta) \, \boldsymbol{x} \leq b(\delta)\}, \; \forall \delta \in \Delta, \end{aligned} $$ where $Q_i \succ 0$, $C_{i,j} \in \mathbb{R}^{n_i \times n_j}$ for all $(i,j) \in \mc{I}^2$, $q_i \in \mathbb{R}^{n_i}$, while $A : \Delta \to \mathbb{R}^{m \times n}$ and $b : \Delta \to \mathbb{R}^{m}$. In view of the considered structure, it follows immediately that every $J_i(\cdot, \boldsymbol{x}_{-i})$ is a convex function of class $\mc{C}^1$, for any $\boldsymbol{x}_{-i} \in \mathbb{R}^{(n - n_i)}$ and for all $i \in \mc{I}$. Then, given the linear structure of $\mc{X}_{\delta}$, we note that it can be equivalently defined by the set of inequalities $A_i(\delta) x_i + \textstyle\sum_{j \in \mc{I} \setminus \{i\}} A_j(\delta) x_j \leq b(\delta)$, with $A_i : \Delta \to \mathbb{R}^{m \times n_i}$, for all $i \in \mc{I}$ and for all $\delta \in \Delta$. For the remainder, we postulate the following assumption. \smallskip \begin{standing} For all $i \in \mc{I}$, $\mc{X}_i \subseteq \mathbb{R}^{n_i}$ is a polytopic set. \hfill$\square$ \end{standing} \smallskip To conclude, we note that the polytopic set encompassing all deterministic local constraints, $\mc{X} \coloneqq \prod_{i \in \mc{I}} \mc{X}_i$, can also be rewritten in compact form as $\mc{X} \coloneqq \{\boldsymbol{x} \in \mathbb{R}^n \mid H \boldsymbol{x} \leq h\}$, for some $H$ and $h$ obtained by concatenating the matrices and vectors that define the local constraint sets $\mc{X}_i$. \subsection{Scenario-based \gls{GNEP}} The noncooperative game considered falls directly within the set of jointly convex \glspl{GNEP} \cite[Def.~2]{facchinei2007generalized}, and we consider a data-driven approach to assess the robustness of a set of equilibria of such a game.
Specifically, let $\delta_K \coloneqq \{\delta^{(k)}\}_{k \in \mc{K}} = \{\delta^{(1)}, \ldots, \delta^{(K)}\} \in \Delta^K$ be a finite collection of $K \in \mathbb{N} \cup \{0\}$ \gls{iid} samples of $\delta$, $\mc{K} \coloneqq \{1,2,\ldots,K\}$, hereinafter referred to as the $K$-multisample. The scenario-based \gls{GNEP} $\Gamma$ is defined as the tuple $\Gamma \coloneqq (\mc{I}, (\mc{X}_i)_{i \in \mc{I}}, (J_i)_{i \in \mc{I}}, \delta_K)$, encoded by the following family of optimization problems: \begin{equation}\label{eq:single_prob_aggregative} \forall i \! \in \! \mc{I} \! : \! \left\{ \begin{aligned} &\underset{x_i \in \mc{X}_i}{\textrm{min}} & & \tfrac{1}{2} x^\top_i Q_i x_i + (\tfrac{1}{N} \textstyle\sum_{j \in \mc{I} \setminus \{i\}} C_{i,j} x_j + q_i)^\top x_i \\ &\hspace{.1cm}\textrm{ s.t. } & & A(\delta^{(k)}) \, \boldsymbol{x} \leq b(\delta^{(k)}), \, \text{ for all } k \in \mc{K}. \end{aligned} \right. \end{equation} For any $\delta^{(k)} \in \delta_K$, define the set $\mc{X}_{\delta^{(k)}} \coloneqq \{\boldsymbol{x} \in \mathbb{R}^n \mid A(\delta^{(k)}) \, \boldsymbol{x} \leq b(\delta^{(k)}) \}$, while $\mc{X}^K_{i}(\boldsymbol{x}_{-i}) \coloneqq \{x_i \in \mc{X}_i \mid (x_i, \boldsymbol{x}_{-i}) \in \cap_{k \in \mc{K}} \mc{X}_{\delta^{(k)}} \} $ and $\mc{X}_K \coloneqq \cap_{k \in \mc{K}} \mc{X}_{\delta^{(k)}} \cap \mc{X}$. We consider the following notion of equilibrium for $\Gamma$. \smallskip \begin{definition}\label{def:GNE} Let $\delta_K \in \Delta^K$ be any $K$-multisample. The collective vector of strategies $\boldsymbol{x}^\ast \in \mc{X}_{K}$ is a \gls{GNE} of $\Gamma$ in \eqref{eq:single_prob_aggregative} if, for all $i \in \mc{I}$, $$ J_i (x^\ast_i, \boldsymbol{x}^\ast_{-i}) \leq \underset{y_i \in \mc{X}^K_{i}(\boldsymbol{x}^\ast_{-i})}{\mathrm{min}} \, J_i (y_i, \boldsymbol{x}^\ast_{-i}). $$ \hfill$\square$ \end{definition} \smallskip Clearly, given the dependence on the set of $K$ realizations $\delta_K \in \Delta^K$, any equilibrium of $\Gamma$ is a random variable itself. Now, let $\Omega_\delta$ be the set of equilibria induced by $\delta \in \Delta$. In the spirit of \cite[Def.~4]{pantazis2020aposteriori}, we investigate the violation probability of the set of equilibria of a scenario-based \gls{GNEP}, according to the definition given next. \smallskip \begin{definition}\label{def:violation_set} The violation probability of a set of \gls{GNE}, $\Omega$, is defined as \begin{equation}\label{eq:violation_set} V(\Omega) \coloneqq \mathbb{P}\{\delta \in \Delta \mid \Omega \not\subseteq \Omega_\delta \}. \end{equation} \hfill$\square$ \end{definition} \smallskip Specifically, the random variable $V(\Omega)$ encodes the robustness of the set $\Omega$ to the uncertain parameter $\delta$: given any reliability parameter $\epsilon \in (0,1)$, we say that $\Omega$ is $\epsilon$-robust if $V(\Omega) \leq \epsilon$. Here, the condition $\Omega \not\subseteq \Omega_\delta$ means that, once $\delta$ is drawn, at least one element of $\Omega$ is not an equilibrium anymore. Thus, along the lines of \cite{campi2018general}, by relying on the observations of the uncertain parameter, i.e., the $K$-multisample $\delta_K$, our goal is to evaluate the violation probability of the set of equilibria $\Omega_K$. For the remainder, we restrict the set $\Omega_K$ to correspond to the set of \gls{v-GNE} of the scenario-based \gls{GNEP} \eqref{eq:single_prob_aggregative}, as described in the next section.
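Before characterizing $\Omega_K$, we note that Definition~\ref{def:violation_set} lends itself to a direct empirical check. The Python sketch below estimates $V(\Omega)$ by Monte Carlo for a polytopic candidate set $\Omega$, under the simplifying assumption, in the spirit of Lemma~\ref{lemma:ifonlyif} below, that a point of $\Omega$ stops being an equilibrium for a new sample $\delta$ exactly when it leaves $\mc{X}_\delta$; since $\mc{X}_\delta$ is convex, testing the vertices of $\Omega$ then suffices. The sampling distribution and the single-row maps \texttt{A} and \texttt{b} are illustrative placeholders (they mimic the example of \S V), not quantities fixed by the theory:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_delta():
    # Placeholder: uniform delta = (d1, d2, d3) on a box, as in Section V.
    return rng.uniform([-4.0, -4.0, 4.0], [4.0, 4.0, 10.0])

def A(delta):  # one affine coupling constraint A(delta) x <= b(delta)
    return delta[:2].reshape(1, -1)

def b(delta):
    return delta[2:]

def estimate_violation(vertices, n_samples=100_000):
    """Empirical estimate of V(Omega) = P{Omega not contained in X_delta}."""
    violations = 0
    for _ in range(n_samples):
        d = sample_delta()
        # Omega (a polytope with the given vertices) leaves X_delta iff
        # some vertex violates the sampled constraint.
        if np.any(A(d) @ vertices.T > b(d)[:, None]):
            violations += 1
    return violations / n_samples

# Vertices (extrema) of a candidate set of equilibria: a segment in R^2.
omega_vertices = np.array([[-1.0, 0.0], [0.0, 1.0]])
print(estimate_violation(omega_vertices))
\end{verbatim}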
\subsection{Characterization of \gls{v-GNE}} A popular subset of \gls{GNE} of a given game $\Gamma$ is that of \gls{v-GNE}, characterized as the set of equilibria providing ``larger social stability'' \cite[\S 5]{cavazzuti2002nash}. Specifically, the set of \gls{v-GNE} corresponds to the set of collective strategies that solve the variational inequality associated with the scenario-based \gls{GNEP} in \eqref{eq:single_prob_aggregative}. Thus, given the $K$-multisample $\delta_K$, the set of \gls{v-GNE} coincides with the solution set to VI$(\mc{X}_K, F)$, where $\mc{X}_K$ is the feasible set and $F:\mathbb{R}^n \to \mathbb{R}^n$ is the so-called game mapping, constructed by stacking the partial derivatives of $J_i$, i.e., $F(\boldsymbol{x}) \coloneqq \mathrm{col}((\nabla_{x_i} J_i(x_i, \boldsymbol{x}_{-i}))_{i \in \mc{I}})$. The set of \gls{v-GNE} is thus given by $$\Omega_{K} \coloneqq \{\boldsymbol{x} \in \mc{X}_K \mid (\boldsymbol{y} - \boldsymbol{x})^\top F(\boldsymbol{x}) \geq 0, \; \forall \boldsymbol{y} \in \mc{X}_K \}.$$ In our aggregative setting with quadratic cost functions, the game mapping turns out to be affine in the collective vector of strategies $\boldsymbol{x}$, i.e., $F(\boldsymbol{x}) = M \boldsymbol{x} + q$, where $M \in \mathbb{R}^{n \times n}$ and $q \in \mathbb{R}^n$ are defined as: $$ M \! \coloneqq \! \left[\begin{array}{cccc} Q_1 & \tfrac{1}{N}C_{1,2} & \cdots & \tfrac{1}{N}C_{1,N}\\ \tfrac{1}{N}C_{2,1} & Q_2 & \cdots & \tfrac{1}{N}C_{2,N}\\ \vdots & \vdots & \ddots &\vdots\\ \tfrac{1}{N}C_{N,1} & \tfrac{1}{N}C_{N,2} & \cdots & Q_N \end{array}\right]\!, \, q \coloneqq \! \left[\begin{array}{c} q_1\\ q_2\\ \vdots\\ q_N \end{array} \right]. $$ \smallskip \begin{standing}\label{ass:monotone_game} The mapping $F:\mathbb{R}^n \to \mathbb{R}^n$ is monotone. \hfill$\square$ \end{standing} \smallskip We remark that an affine mapping is monotone if and only if $(M + M^\top) \succcurlyeq 0$. This can be guaranteed by, e.g., assuming equivalent bilateral interactions among agents, $C_{i,j} = C_{j,i}$, for all $(i,j) \in \mc{I}^2$ (in addition to $Q_i \succ 0$, for all $i \in \mc{I}$). Now, we recall some results available in the literature on affine variational inequalities, which will be key in the remainder of the paper. Specifically, let us first consider the game $\Gamma$ in the absence of coupling constraints, and focus on the (deterministic) \gls{NEP} associated with \eqref{eq:single_prob_aggregative} with $\mc{X}_K = \mc{X}$, which reads as \begin{equation}\label{eq:NEP_single_prob_aggregative} \forall i \! \in \! \mc{I} \!:\! \underset{x_i \in \mc{X}_i}{\textrm{min}} \; \tfrac{1}{2} x^\top_i Q_i x_i + (\tfrac{1}{N} \textstyle\sum_{j \in \mc{I} \setminus \{i\}} C_{i,j} x_j + q_i)^\top x_i. \end{equation} The set of variational Nash equilibria of such a \gls{NEP}, namely $\Omega_{0} \coloneqq \Omega_{\delta^{(0)}}$, coincides with the set of solutions to a linearly constrained, affine variational inequality problem, and hence is characterized by the following lemma, which combines \cite[Lemma~2.4.14, Th.~2.4.15]{facchinei2007finite} and \cite[Lemma~1, Th.~2]{gowda1994boundedness}. \smallskip \begin{lemma}\label{lemma:agg_GNEP} Let $M \succcurlyeq 0$.
Then, the following statements hold true: \begin{enumerate} \item[(i)] $\Omega_{0}$ is a bounded polyhedral set; \item[(ii)] There exists a vector $c \in \mathbb{R}^n$ and a constant $d \geq 0$ such that, for all $\boldsymbol{x} \in \Omega_{0}$, $(M + M^\top) \boldsymbol{x} = c$ and $\boldsymbol{x}^\top M \boldsymbol{x} = d$; \item[(iii)] Let $\omega(\boldsymbol{x}) \coloneqq \textrm{min}_{\boldsymbol{y} \in \mc{X}} \; \boldsymbol{y}^\top (M \boldsymbol{x} + q)$, and let $\mc{P} \coloneqq \{ \boldsymbol{x} \in \mc{X} \mid \omega(\boldsymbol{x}) - (d + q^\top \boldsymbol{x}) \geq 0 \}$. Then $$ \Omega_{0} = \{ \boldsymbol{x} \in \mc{P} \mid (M + M^\top) \boldsymbol{x} = c \}. $$ \end{enumerate} \hfill$\square$ \end{lemma} \smallskip By noticing that $\mc{P}$ is a polyhedral set, roughly speaking, the set of Nash equilibria $\Omega_{0}$ contains the feasible strategies that span $(M + M^\top)$, and it is characterized by the two invariants $c$ and $d$. We note that, given any $\delta_K \in \Delta^K$, Lemma~\ref{lemma:agg_GNEP}(iii) allows us to introduce coupling constraints and characterize the set of \gls{v-GNE}, $\Omega_{K}$. Specifically, we have \begin{equation}\label{eq:generic_set_equilibria} \Omega_{K} \coloneqq \{ \boldsymbol{x} \in \mathbb{R}^n \mid (M + M^\top) \boldsymbol{x} = c \} \cap \mc{P}_K, \end{equation} where $\mc{P}_K \coloneqq \{ \boldsymbol{x} \in \mc{X}_K \mid \omega(\boldsymbol{x}) - (d + q^\top \boldsymbol{x}) \geq 0 \}$, and the function $\omega(\cdot)$ is restricted to the feasible set $\mc{X}_K \subseteq \mc{X}$, which accounts for the coupling constraints. Finally, we recall that $\boldsymbol{x}^\ast \in \Omega_{K} \iff -F(\boldsymbol{x}^\ast) = -(M \boldsymbol{x}^\ast + q) \in \mc{N}_{\mc{X}_K}(\boldsymbol{x}^\ast)$. \smallskip \begin{remark} In view of Standing Assumption~\ref{ass:monotone_game}, we have $M \succcurlyeq 0$. When $M \succ 0$, the mapping $F$ is strictly monotone and hence the scenario-based \gls{GNEP} admits a unique equilibrium that, in general, cannot be characterized as in Lemma~\ref{lemma:agg_GNEP}. The results shown next focus on the general case, i.e., a monotone mapping $F$ with $M \succcurlyeq 0$, while the other case follows straightforwardly. In fact, if $M \succ 0$, Lemma~\ref{lemma:ifonlyif} and Theorem~\ref{th:VI} below still hold by requiring only Assumption~\ref{ass:nonemptiness} to be imposed, thus relaxing Assumption~\ref{ass:nondeg}. \hfill$\square$ \end{remark} \section{Probabilistic feasibility for a set of \gls{GNE}} In this section, we first recall some key concepts and results of the scenario approach theory, and then discuss how to extend them to a set-oriented framework. Subsequently, we provide bounds on the violation probability related to the set of equilibria of the scenario-based \gls{GNEP} $\Gamma$ in \eqref{eq:single_prob_aggregative}. \subsection{A weak connection among sets of \gls{GNE}} Recent developments in the scenario approach literature have led to a-posteriori probabilistic feasibility guarantees for abstract decision problems \cite{campi2018general} (see Theorem 1 therein), based on the following two conditions: \begin{enumerate} \item[(i)] For all $K \in \mathbb{N} \cup \{0\}$ and all $\delta_K \in \Delta^K$, the decision of an abstract problem is unique; \item[(ii)] The decision taken while observing $K$ realizations shall be \emph{consistent} for all the collected situations $k \in \mc{K}$ \cite[Assumption~1]{campi2018general}.
\end{enumerate} Specifically, \cite[Th.~1]{campi2018general} studies the distribution of $V(\theta^\ast_K)$, where $\theta^\ast_K$ is the unique solution to the abstract decision problem computed after observing $K$ realizations of the uncertain parameter, and finds a suitable (probabilistic) bound $1-\beta$ guaranteeing that $V(\theta^\ast_K) \leq \epsilon$ holds, for some $\beta \in (0,1)$. Since the randomized \gls{GNEP} in \eqref{eq:single_prob_aggregative} is a decision problem, a key step to apply the probabilistic feasibility bound in \cite[Th.~1]{campi2018general} to the entire set of \gls{GNE} is to extend the conditions above to embrace the scenario-based generalized aggregative game $\Gamma$. To this end, in view of Definition~\ref{def:violation_set}, we mimic the steps made in \cite{campi2018general} by focusing on set-oriented decisions. In the scenario-based \gls{GNEP} considered, our decision is a set and, specifically, we let it correspond to the set of equilibria, $\Omega_K$. Then, in view of item \textrm{(i)}, guaranteeing the uniqueness of the set of equilibria for $\Gamma$ in \eqref{eq:single_prob_aggregative} is implicit since, for any $K$-multisample $\delta_K \in \Delta^K$, there is naturally a single set of equilibria $\Omega_K$, which is a nonempty, compact and convex set. This follows immediately from \cite[Th.~2.3.5]{facchinei2007finite}, as $\mc{X}_K$ is a bounded polyhedral set and $F$ is a continuous, monotone mapping. Therefore, let us consider a single-valued mapping $\Theta_K : \Delta^K \to 2^\mc{X}$ that, given a specific set of realizations $\delta_K$, returns the set of equilibria of the scenario-based \gls{GNEP} in \eqref{eq:single_prob_aggregative}, i.e., $\Omega_{K} \coloneqq \Theta_K(\delta^{(1)}, \ldots, \delta^{(K)}) = \Theta_K(\delta_K)$. When $K = 0$, $\Theta_0$ has no argument, and it is to be understood that it returns the set of equilibria of the deterministic \gls{NEP} in \eqref{eq:NEP_single_prob_aggregative}. In view of item \textrm{(ii)} above, we envision the following set-oriented counterpart of \cite[Ass.~1]{campi2018general}. \smallskip ``For all $K \in \mathbb{N}$ and for all $\delta_K \in \Delta^K$, $\Theta_K(\delta_K) \subseteq \Omega_{\delta^{(k)}}$, for all $k \in \mc{K} \cup \{0\}$.'' \smallskip In the proposed analogy, we let the admissible decision for the situation represented by $\delta$ coincide with the set of equilibria $\Omega_\delta$, which is clearly a subset of the feasible set $\mathcal{X}_\delta$ shaped by the uncertain parameter. Next, we show that the above set-oriented counterpart of \cite[Ass.~1]{campi2018general} holds true for the scenario-based \gls{GNEP} in \eqref{eq:single_prob_aggregative}. Given the specific structure of the problem addressed, in view of Lemma~\ref{lemma:agg_GNEP}, we postulate the following assumptions on the set of equilibria. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{deg_scenario_GNEP.pdf} \caption{Schematic two-dimensional representation of the type of degenerate cases that, due to Assumption~\ref{ass:nondeg}, can only happen with probability zero. In this case, $\Omega_{0}, \ldots, \Omega_{K-1}$ are singletons; however, if the $K$-th sample overlaps with the affine set in \eqref{eq:generic_set_equilibria} (assuming $b(\delta^{(K)}) = c = \boldsymbol{0}$), it might generate additional equilibria belonging to $\textrm{bdry}(\mc{X}_{K})$, as formalized in the proof of Lemma~\ref{lemma:ifonlyif}.
Thus, $\Omega_{K}$ is no longer a singleton and may lie entirely on $\textrm{bdry}(\mc{X}_K)$.} \label{fig:deg_scenario_GNEP} \end{figure} \smallskip \begin{assumption}\label{ass:nonemptiness} For all $K \in \mathbb{N} \cup \{0\}$, $\Omega_{K} \cap \mc{X}_{\delta}$ is nonempty, for any $\delta \in \Delta$. \hfill$\square$ \end{assumption} \smallskip \begin{assumption}\label{ass:nondeg} For all $\boldsymbol{x} \in \mathbb{R}^n$, $\mathbb{P}\{\delta \in \Delta \mid A(\delta) \boldsymbol{x} - b(\delta) = (M + M^\top) \boldsymbol{x} - c\} = 0$. \hfill$\square$ \end{assumption} \smallskip Nonemptiness of $\Omega_{K}$ is reasonable as we aim at quantifying robustness to unseen scenarios, while Assumption~\ref{ass:nondeg} is a non-degeneracy condition often imposed in the scenario approach literature \cite[Ass.~6]{garatti2019risk}. It rules out the possibility that a new affine coupling constraint corresponding to $\delta$ overlaps with the equilibrium subspace $(M+M^\top)\boldsymbol{x} = c$, so that such situations occur with probability zero (see Fig.~\ref{fig:deg_scenario_GNEP} for a graphical representation). This requirement is satisfied for all probability distributions $\mathbb{P}$ that admit a density function. Pictorially, generating samples gives rise to shared constraints that ``shape'' the set of equilibria, as represented in Fig.~\ref{fig:scenario_GNEP}. With this in mind, we are now in the position to prove the main result that links the set of \gls{GNE} of \eqref{eq:single_prob_aggregative} across the sampled scenarios, thus establishing (probabilistically) the set-oriented counterpart of \cite[Ass.~1]{campi2018general}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{scenario_GNEP.pdf} \caption{The set of \gls{GNE}, $\Omega_{K}$, can be ``shaped'' by the set of linear constraints, $\mc{X}_{\delta^{(k)}}$, $k \in \mc{K}$. Specifically, by referring to Definition~\ref{def:support_sub}, the labelled dashed orange lines define the support subsample for $\delta_K$ \gls{w.r.t.} $\mc{X}_K$, which in this case corresponds to the active samples that shape the feasibility region. Their intersections in $\mc{X}$ are denoted by orange dots.} \label{fig:scenario_GNEP} \end{figure} \smallskip \begin{lemma}\label{lemma:ifonlyif} Let Assumptions~\ref{ass:nonemptiness} and \ref{ass:nondeg} hold true. Then, for all $K \in \mathbb{N}$ and for all $\delta_K \in \Delta^K$, $\Theta_K(\delta_K) \subseteq \Omega_{\delta^{(k)}}$ \gls{a.s.}, for all $k \in \mc{K} \cup \{0\}$. \hfill$\square$ \end{lemma} \begin{proof} Given any $K \in \mathbb{N}$ and any associated $K$-multisample $\delta_K \in \Delta^K$, let $\bar{k}$ be an arbitrary index belonging to $\mc{K}$. The mapping $\Theta_{\bar{k}}(\delta^{(1)}, \ldots, \delta^{(\bar{k})})$ returns the set of equilibria $\Omega_{\bar{k}}$, while, once the $(\bar{k}+1)$-th sample is drawn, we have $\Omega_{\bar{k}+1} \coloneqq \Theta_{\bar{k}+1}(\delta^{(1)}, \ldots, \delta^{(\bar{k})}, \delta^{(\bar{k}+1)})$. Note that, in view of Assumption~\ref{ass:nonemptiness}, both sets are guaranteed to be nonempty and are of the form defined in \eqref{eq:generic_set_equilibria}, i.e., generated by the intersection between an affine set and a bounded polyhedral set. We now show that $\Omega_{\bar{k}+1} \subseteq \Omega_{\bar{k}}$; the statement will then follow by induction over $\bar{k} \in \mc{K}$, noticing further that $\Theta_0 \eqqcolon \Omega_{0} \supseteq \Omega_{1} \supseteq \ldots \supseteq \Omega_{K} \eqqcolon \Theta_K(\delta_K)$.
On the one hand, any $\boldsymbol{x}^\ast$ that is a \gls{GNE} for $\Gamma$ on $\mc{X}_{\bar{k}}$ and such that $\boldsymbol{x}^\ast \in \mc{X}_{\delta^{(\bar{k} +1)}}$ also belongs to $\Omega_{\bar{k}+1}$. To see this, recall the definition of $\Omega_{\bar{k}+1}$ in \eqref{eq:generic_set_equilibria}: the inclusion is clearly true for the affine part, $(M + M^\top) \boldsymbol{x}^\ast = c$, while if $\boldsymbol{x}^\ast \in \mc{P}_{\bar{k}}$ and $\boldsymbol{x}^\ast \in \mc{X}_{\delta^{(\bar{k} +1)}}$, then $\boldsymbol{x}^\ast \in \mc{P}_{\bar{k} + 1}$ in view of the structure of $\omega(\cdot)$, along with the convexity and compactness of each set involved. Now, let $\mc{X}_{\bar{k} + 1} \coloneqq \mc{X}_{\bar{k}} \cap \mc{X}_{\delta^{(\bar{k}+1)}}$. In view of the properties of the normal cone, if there exists some \gls{GNE} $\boldsymbol{x}^\ast$ such that $\boldsymbol{x}^\ast \in \Omega_{\bar{k} +1}$ but $\boldsymbol{x}^\ast \notin \Omega_{\bar{k}}$, it must happen that $\boldsymbol{x}^\ast \in \textrm{bdry}(\Omega_{\bar{k} +1})$. In fact, $-F(\boldsymbol{x}^\ast) \in \mc{N}_{\mc{X}_{\bar{k} + 1}}(\boldsymbol{x}^\ast)$ and $-F(\boldsymbol{x}^\ast) \notin \mc{N}_{\mc{X}_{\bar{k}}}(\boldsymbol{x}^\ast)$ if and only if $F(\boldsymbol{x}^\ast) \neq \boldsymbol{0}$, and this is possible at the boundary of $\mc{X}_{\bar{k} + 1}$ only, which in view of the compactness and convexity of each set corresponds to the boundary of $\Omega_{\bar{k} +1}$ (see also Fig.~\ref{fig:deg_scenario_GNEP}). Thus, $\Omega_{\bar{k} +1}$ can be represented as the union of two sets. Specifically, the first set gathers all those points that were equilibria for the game with $\bar{k}$ samples and remain feasible for the constraint corresponding to $\bar{k}+1$, while the second one contains all those points that did not belong to $\Omega_{\bar{k}}$ and may lie on the boundary of $\Omega_{\bar{k} +1}$, i.e., \begin{equation}\label{eq:sets} \begin{aligned} \Omega_{\bar{k} +1} &= \{\boldsymbol{x} \in \mc{X}_{\bar{k} + 1} \mid \boldsymbol{x} \in \Omega_{\bar{k}}\} \cup \left\{\boldsymbol{x} \in \textrm{bdry}(\mc{X}_{\bar{k} +1}) \mid \right. \\ &\left.\boldsymbol{x} \notin \Omega_{\bar{k}}, (\boldsymbol{y} - \boldsymbol{x})^\top F(\boldsymbol{x}) \geq 0 \; \forall \boldsymbol{y} \in \mc{X}_{\bar{k} +1}\right\}. \end{aligned} \end{equation} In view of Assumption~\ref{ass:nonemptiness}, $\{\boldsymbol{x} \in \mc{X}_{\bar{k} + 1} \mid \boldsymbol{x} \in \Omega_{\bar{k}}\} \subseteq \Omega_{\bar{k}}$ is nonempty, for any $\mc{X}_{\delta^{(\bar{k}+1)}}$ and $\bar{k} \in \mc{K}$. Finally, it follows that the second set in \eqref{eq:sets} is empty \gls{a.s.}, since it contains all points that are on the boundary of $\mc{X}_{\bar{k}+1}$ and belong to the affine set $(M + M^\top) \boldsymbol{x} = c$, an event that has probability zero due to Assumption~\ref{ass:nondeg}, thus concluding the proof. \end{proof} \smallskip We finally remark again that Fig.~\ref{fig:deg_scenario_GNEP} shows an example where the second set in \eqref{eq:sets} would not be of measure zero. \subsection{A-posteriori probabilistic feasibility guarantees for $\Omega_{K}$} The following definition is at the core of the scenario approach theory and crucial for our subsequent developments.
\smallskip \begin{definition}\textup{\cite[Def.~2]{campi2018general}}\label{def:support_sub} Given a $K$-multisample $\delta_K \in \Delta^K$, a support subsample $S \subseteq \delta_K$ is a $p$-tuple of elements extracted from $\delta_K$, i.e., $S \coloneqq \{\delta^{(k_1)}, \ldots, \delta^{(k_p)}\}$, $k_1 < \ldots < k_p$, which yields the same equilibria as the original sample, i.e., $$ \Theta_p(\delta^{(k_1)}, \ldots, \delta^{(k_p)}) = \Theta_K(\delta^{(1)}, \ldots, \delta^{(K)}). $$ \hfill$\square$ \end{definition} \smallskip Moreover, a support subsample $S$ is said to be irreducible if no further elements can be removed from $S$ without changing the solution. With a slight abuse of notation, in the remainder we will refer to the notion of support subsample for $\delta_K \in \Delta^K$ \gls{w.r.t.} either $\mc{X}_K$ or $\Omega_{K}$. In general, an algorithm that determines a support subsample can be defined as a mapping $\Upsilon_K : \Delta^K \to \{k_1, \ldots, k_p\}$, $k_1 < \ldots < k_p$, such that $\{\delta^{(k_1)}, \ldots, \delta^{(k_p)}\}$ is a support subsample for $\delta_K$. Let $s_K \coloneqq |\Upsilon_K(\delta_K)|$ denote its cardinality. Note that $s_K$ is a random variable itself, as it depends on $\delta_K$. Thus, given any $K$-multisample $\delta_K \in \Delta^K$, the following result provides an a-posteriori bound on the violation probability in \eqref{eq:violation_set} for the entire set of equilibria, $\Omega_K$. \smallskip \begin{theorem}\label{th:VI} Let Assumptions~\ref{ass:nonemptiness} and \ref{ass:nondeg} hold true, and fix some $\beta \in (0,1)$. Let $\varepsilon : \mc{K} \cup \{0\} \to [0, 1]$ be a function such that $$ \left\{\begin{aligned} & \varepsilon(K) = 1,\\ & \sum_{h = 0}^{K - 1} \left( \begin{array}{c} K\\ h \end{array} \right) (1 - \varepsilon(h))^{K - h} = \beta. \end{aligned} \right. $$ Then, for any $\Theta_K$, $\Upsilon_K$ and probability $\mathbb{P}$, it holds that \begin{equation}\label{eq:prob_feas_boud} \mathbb{P}^K \{\delta_K \in \Delta^K \mid V(\Omega_{K}) > \varepsilon(s_K) \} \leq \beta. \end{equation} \hfill$\square$ \end{theorem} \begin{proof} By leveraging Lemma~\ref{lemma:ifonlyif}, the proof follows as a corollary of \cite[Th.~1]{campi2018general}. \end{proof} \smallskip \begin{remark} As evident from \eqref{eq:prob_feas_boud}, to assess the robustness of the set of equilibria $\Omega_{K}$, one does not need a full characterization of $\Omega_{K}$, namely an algorithm $\Theta_K(\cdot)$, but only the cardinality $s_K$ of a support subsample, computed by means of $\Upsilon_K(\cdot)$. In the next section, we provide a possible algorithm $\Upsilon_K(\cdot)$ for the scenario-based \gls{GNEP} in \eqref{eq:single_prob_aggregative}. \hfill$\square$ \end{remark} \smallskip The following result provides an upper bound for $V(\Omega_{K})$. \smallskip \begin{proposition}\label{prop:better_performance} Given any $K \in \mathbb{N} \cup \{0\}$ and $\delta_K \in \Delta^K$, let $s_K$ and $v_K$ be the cardinalities of the (possibly irreducible) support subsamples for $\delta_K$, evaluated \gls{w.r.t.} $\Omega_{K}$ and $\mc{X}_K$, respectively. Then, $s_K \leq v_K$, and therefore $V(\Omega_{K}) \leq V(\mc{X}_{K})$.
\hfill$\square$ \end{proposition} \begin{proof} Given the linearity of both local and coupling constraints defining the feasible set of the game $\Gamma$ in \eqref{eq:single_prob_aggregative}, it follows from Definition~\ref{def:support_sub} that some sample $\delta^{(k)}$ is of support for $\delta_K$ \gls{w.r.t.} $\mc{X}_K$ if $\mc{X}_{\delta^{(k)}}$ is active on $\textrm{bdry}(\mc{X}_K)$, i.e., $\textrm{bdry}(\mc{X}_{\delta^{(k)}}) \cap \textrm{bdry}(\mc{X}_K) \neq \emptyset$. On the other hand, $\delta^{(k)}$ is of support \gls{w.r.t.} $\Omega_{K}$ if $\textrm{bdry}(\mc{X}_{\delta^{(k)}}) \cap \Omega_{K} \neq \emptyset$ (see Fig.~\ref{fig:scenario_GNEP} for a graphic illustration). Since, in general, $\Omega_{K} \subseteq \mc{X}_K = \cap_{k \in \mc{K}} \mc{X}_{\delta^{(k)}} \cap \mc{X}$, those samples that are of support for $\delta_K$ \gls{w.r.t.} $\Omega_{K}$ are of support \gls{w.r.t.} $\mc{X}_K$, but not vice versa. Therefore, $s_K \leq v_K$. Finally, since $\varepsilon(\cdot)$ in Theorem~\ref{th:VI} is an increasing function, we have $V(\Omega_{K}) \leq V(\mc{X}_{K})$ as desired. \end{proof} \smallskip \begin{remark} The result in Proposition~\ref{prop:better_performance} implies that, given the same $K$-multisample $\delta_K \in \Delta^K$, \eqref{eq:prob_feas_boud} provides tighter bounds compared to \cite[Cor.~7]{pantazis2020aposteriori}, since we focus on the set of equilibria rather than on the entire feasibility set. \hfill$\square$ \end{remark} \section{Computational aspects} Next, we propose a structure-preserving, semi-decentralized algorithm to compute the cardinality of a support subsample \gls{w.r.t.} $\Omega_{K}$. In view of Theorem~\ref{th:VI}, $s_K$ is a crucial quantity to assess the risk associated with the entire set $\Omega_{K}$. \begin{algorithm}[!t] \caption{Computation of the cardinality of a support subsample for aggregative \glspl{GNEP}}\label{alg:support_agg_game} \DontPrintSemicolon \SetArgSty{} \SetKwFor{ForAll}{for all}{do}{end forall} \smallskip \textbf{Initialization:} \begin{itemize} \setlength{\itemindent}{0.8cm} \item[(\texttt{S0.1})] Set $s_K \coloneqq 0$, identify $$\mc{A}_K \coloneqq \{k \in \mc{K} \mid \textrm{bdry}(\mc{X}_{\delta^{(k)}}) \cap \textrm{bdry}(\mc{X}_K) \neq \emptyset \}$$ \item[(\texttt{S0.2})] Run $\Phi(\delta_0)$ to compute $\boldsymbol{x}_0 \in \mc{X}$, set $d \coloneqq \boldsymbol{x}^\top_0 M \boldsymbol{x}_0$ and $c \coloneqq (M + M^\top) \boldsymbol{x}_0$ \end{itemize} \smallskip \textbf{Iteration $(i \in \mc{A}_K)$:} \\ \begin{itemize}\setlength{\itemindent}{.5cm} \item[(\texttt{S1})] Solve the feasibility problem: \begin{equation}\label{eq:feas_prob_GNEP} \left\{ \begin{aligned} &\underset{(\lambda, \boldsymbol{x}) \in \mathbb{R}^{n+1}}{\textrm{min}} & & 0\\ &\hspace{.45cm}\textrm{ s.t. } & & h^\top \lambda + q^\top \boldsymbol{x} + d \leq 0,\\ &&& H^\top \lambda - M^\top \boldsymbol{x} + c + q = 0,\\ &&& \lambda \geq 0, \boldsymbol{x} \in \mc{X}_{K} \cap \textrm{bdry}(\mc{X}_{\delta^{(i)}}). \end{aligned} \right. \end{equation} \item[(\texttt{S2})] If $\exists (\lambda, \boldsymbol{x})$ that solves \eqref{eq:feas_prob_GNEP}, set $s_K \coloneqq s_K + 1$ \end{itemize} \end{algorithm} Specifically, by leveraging Lemma~\ref{lemma:agg_GNEP}, in the case of \glspl{GNEP} in aggregative setting, the computation of the cardinality of the minimal support subsample \gls{w.r.t.} $\Omega_{K}$ reduces to solving a feasibility problem on the augmented space $\mathbb{R}^{n + 1}$.
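For concreteness, step (\texttt{S1}) of Algorithm~\ref{alg:support_agg_game} can be implemented as a standard linear program with an all-zero objective. The Python sketch below uses \texttt{scipy.optimize.linprog} and is only indicative: it assumes, as in the example of \S V, that each sample contributes a single affine row, so that $\boldsymbol{x} \in \textrm{bdry}(\mc{X}_{\delta^{(i)}})$ becomes one equality constraint, and it takes $\lambda \in \mathbb{R}^p$, with $p$ the number of rows of $H$ (the augmented space $\mathbb{R}^{n+1}$ in \eqref{eq:feas_prob_GNEP} corresponds to $p = 1$). All names are ours:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def is_support_sample(i, A_list, b_list, H, h, M, q, c, d):
    """Step (S1): feasibility of (lambda, x) on X_K intersected with
    bdry(X_{delta^(i)}); A_list[k] is the (n,) row of A(delta^(k)) and
    b_list[k] the corresponding scalar."""
    p, n = H.shape
    zeros_p = np.zeros(p)

    # Inequalities: h' lambda + q' x + d <= 0; H x <= h;
    # A(delta^(k)) x <= b(delta^(k)) for k != i.
    A_ub = [np.concatenate([h, q])]
    b_ub = [-d]
    for Hrow, hval in zip(H, h):
        A_ub.append(np.concatenate([zeros_p, Hrow]))
        b_ub.append(hval)
    for k, (Ak, bk) in enumerate(zip(A_list, b_list)):
        if k != i:
            A_ub.append(np.concatenate([zeros_p, Ak]))
            b_ub.append(bk)

    # Equalities: H' lambda - M' x = -(c + q); A(delta^(i)) x = b(delta^(i)).
    A_eq = [np.concatenate([H[:, j], -M[:, j]]) for j in range(n)]
    b_eq = list(-(c + q))
    A_eq.append(np.concatenate([zeros_p, A_list[i]]))
    b_eq.append(b_list[i])

    res = linprog(np.zeros(p + n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * p + [(None, None)] * n)
    return res.status == 0  # feasible <=> delta^(i) is of support w.r.t. Omega_K
\end{verbatim}
Running this check for each $i \in \mc{A}_K$ and counting the feasible instances reproduces the counter $s_K$ of Algorithm~\ref{alg:support_agg_game}.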
An outline of a complete procedure can be found in Algorithm~\ref{alg:support_agg_game}, where, given any $K$-multisample $\delta_K \in \Delta^K$, $\Phi : \Delta^K \to 2^{\mc{X}_K}$ can be seen as any iterative algorithm available in the literature that computes an equilibrium solution to the aggregative \gls{GNEP} in \eqref{eq:single_prob_aggregative}, e.g., \cite{salehisadaghiani2016distributed,belgioioso2017semi,liang2017distributed}. Specifically, while (\texttt{S0.1}) identifies the active facets of the convex polytope $\mc{X}_K$ \cite{ziegler2012lectures}, (\texttt{S0.2}) requires solving the \gls{NEP} in \eqref{eq:NEP_single_prob_aggregative}, here identified by $\delta_0 = \emptyset$. In this way, computing an equilibrium of the \gls{NEP} allows us to define the quantities $d$ and $c$, which characterize every point in $\Omega_{0}$ (and therefore in $\Omega_{K}$), also shaping the feasibility set in \eqref{eq:feas_prob_GNEP}. Subsequently, (\texttt{S1}) requires solving a feasibility problem on each active facet identified at (\texttt{S0.1}), where $\boldsymbol{x} \in \mc{X}_{K} \cap \textrm{bdry}(\mc{X}_{\delta^{(i)}})$ translates into an equality constraint in view of the affine constraints involved, while (\texttt{S2}) increments the counter $s_K$ in case the problem at (\texttt{S1}) is feasible. We next state and prove the main result related to Algorithm~\ref{alg:support_agg_game}. \smallskip \begin{proposition}\label{prop:irreducible_set_agg_GNEP} Let Assumptions~\ref{ass:nonemptiness} and \ref{ass:nondeg} hold true. For any $K \in \mathbb{N}$ and $\delta_K \in \Delta^K$, Algorithm~\ref{alg:support_agg_game} returns $s^\ast_K$, the cardinality of the minimal, irreducible support subsample of $\delta_K$ \gls{w.r.t.} the entire set of equilibria, $\Omega_{K}$. \hfill$\square$ \end{proposition} \begin{proof} First note that, in the setting of the scenario-based \gls{GNEP} in \eqref{eq:single_prob_aggregative}, $\mc{A}_K$ denotes the minimal, irreducible support subsample for $\delta_K$ \gls{w.r.t.} the convex polytope $\mc{X}_K$. Then, following the argument adopted within the proof of Proposition~\ref{prop:better_performance}, every $\delta^{(i)}$, $i \in \mc{A}_K$, is of support also \gls{w.r.t.} $\Omega_{K}$ if and only if $\textrm{bdry}(\mc{X}_{\delta^{(i)}}) \cap \Omega_{K} \neq \emptyset$. To check this condition for each $\delta^{(i)}$, it is sufficient to compute a solution (if one exists) on the active region of $\mc{X}_K$ associated with $\mc{X}_{\delta^{(i)}}$. Since, in general, $\Omega_{K} \subseteq \Omega_{0}$ (both bounded polyhedral sets), in view of Lemma~\ref{lemma:agg_GNEP} every equilibrium solution in $\Omega_{K}$ is characterized by: i) the invariance property with parameter $c$, which is computed, together with $d$, for the \gls{NEP}, i.e., \eqref{eq:single_prob_aggregative} with no coupling constraints; ii) membership in $\mc{P}_K$, defined in \eqref{eq:generic_set_equilibria}. Let us now consider the Lagrange dual optimization problem associated with $\omega(\boldsymbol{x})$, given by \begin{equation}\label{eq:dual_agg_GNEP} \left\{ \begin{aligned} &\underset{\lambda \geq 0}{\textrm{max}} & & - h^\top \lambda\\ &\hspace{0cm}\textrm{ s.t. } & & H^\top \lambda + M \boldsymbol{x} + q = 0.\\ \end{aligned} \right.
\end{equation} In view of weak duality \cite{boyd2004convex}, $\boldsymbol{x} \in \mc{P}_K$ (recall the definition of $\mc{P}_K$ below \eqref{eq:generic_set_equilibria}) if there exists some $\lambda \geq 0$ such that \eqref{eq:dual_agg_GNEP} is feasible and $- h^\top \lambda - (d + q^\top \boldsymbol{x}) \geq 0$ is satisfied, since $\omega(\boldsymbol{x}) \geq -h^\top \lambda$ for any such $\lambda$. Thus, by combining the equality in \eqref{eq:dual_agg_GNEP} and $(M+M^\top) \boldsymbol{x} = c$ to obtain the second constraint in \eqref{eq:feas_prob_GNEP}, computing an equilibrium on the boundary of an active constraint $\mc{X}_K \cap \textrm{bdry}(\mc{X}_{\delta^{(i)}})$ reduces to finding a feasible pair $(\lambda, \boldsymbol{x})$ for the convex optimization problem in \eqref{eq:feas_prob_GNEP}. Finally, $s_K$ increases only if such a feasibility problem has a solution, excluding all those samples $\mc{X}_{\delta^{(i)}}$ that do not intersect $\Omega_{K}$. The minimality follows as a consequence of the fact that $\mc{A}_K$ is the minimal support subsample for the polytope $\mc{X}_K$. \end{proof} \smallskip \begin{remark} As tailored for \glspl{GNEP} in aggregative form, Algorithm~\ref{alg:support_agg_game} requires running the adopted iterative procedure $\Phi(\delta_K)$ once, and solving \eqref{eq:feas_prob_GNEP} by means of some distributed algorithm $|\mc{A}_K|$ times, with $|\mc{A}_K| \leq K$. This clearly improves on the greedy algorithms proposed in \cite[\S II]{campi2018general} and \cite[\S III]{paccagnan2019scenario}, which would require running $\Phi(\delta_K)$ $K$ times. \hfill$\square$ \end{remark} \section{Illustrative example} We choose an academic example to illustrate the introduced theoretical results. Specifically, we consider a two-player \gls{GNEP} in aggregative form with scalar decision variables and quadratic structure, i.e., we consider $N=2$ agents, with cost functions $J_1(x_1, x_2) \coloneqq \tfrac{1}{2} x^2_1 + (1 - x_2) x_1$, $J_2(x_1, x_2) \coloneqq \tfrac{1}{2} x^2_2 - (1 + x_1) x_2$, and $\mc{X}_{i} \coloneqq \{x_i \in \mathbb{R} \mid |x_i| \leq 2\}$, $i = 1, 2$. Here, $\boldsymbol{x} \coloneqq \mathrm{col}(x_1,x_2)$, and $$ M \coloneqq \left[\begin{array}{cc} \phantom{-}1 & -1\\ -1 & \phantom{-}1 \end{array}\right], \, q \coloneqq \left[\begin{array}{c} \phantom{-}1\\ -1 \end{array} \right], $$ which guarantee the monotonicity of the game mapping $F$, as $M+M^\top \succcurlyeq 0$. Thus, it turns out that $\Omega_{0} = \{\boldsymbol{x} \in \mc{X} \mid x_2 - x_1 - 1 = 0\}$, $\mc{X} \coloneqq \mc{X}_1 \times \mc{X}_2$, and since $M \succcurlyeq 0$, every $\boldsymbol{x}^\ast \in \Omega_{0}$ is characterized by the invariants $c = \mathrm{col}(-2,2)$ and $d = 1$ as in Lemma~\ref{lemma:agg_GNEP}. We assume each set $\mc{X}_\delta$ to be defined by a random halfspace of the form $\delta_1 x_1 + \delta_2 x_2 \leq \delta_3$. Moreover, we assume that $\delta \coloneqq \mathrm{col}(\delta_1,\delta_2,\delta_3)$ follows a uniform distribution with support $\Delta \coloneqq [-4, 4] \times [-4, 4] \times [4, 10] \subseteq \mathbb{R}^3$, shaping the feasible set $\mc{X}_\delta \cap \mc{X}$. \begin{figure} \centering \includegraphics[width=\columnwidth]{set_red-eps-converted-to.pdf} \caption{Size of $\Omega_K = \Omega_0 \cap \mathcal{X}_K$, normalized by that of $\Omega_0$, as a function of the number of samples $K$.
The solid line represents the average of $|\Omega_{0} \cap \mc{X}_K|/|\Omega_{0}|$ over $10$ numerical experiments, while the shaded area represents the standard deviation.}\label{fig:set_red} \end{figure} Then, given any $K$-multisample, the structure of $\Omega_{K}$ enables us to estimate $|\Omega_{K}|$ as the length of the interval $(M+M^\top) \boldsymbol{x} = c$ contained in $\mc{X}_K$, i.e., $|\Omega_K| = |\Omega_0 \cap \mathcal{X}_K|$. Thus, Fig.~\ref{fig:set_red} shows the average length of $\Omega_{K}$ over $10$ numerical experiments, normalized \gls{w.r.t.} that of $\Omega_{0}$. Here, $\Omega_{K}$ shrinks as the number of samples grows, numerically supporting Lemma~\ref{lemma:ifonlyif}. Note that, in view of the structure of the support $\Delta$, as $K$ increases, the standard deviation of the normalized set size narrows around the average. \begin{figure} \centering \includegraphics[width=\columnwidth]{preliminary-eps-converted-to.pdf} \caption{Sets obtained after drawing $100$ samples. The green dots $\boldsymbol{x}^{\ast,1}$ and $\boldsymbol{x}^{\ast,2}$ are the extrema of the set of \gls{GNE}, $\Omega_{100}$, while the orange dashed lines confine the feasible set, $\mathcal{X}_{100}$.}\label{fig:preliminary} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{theovsemp-eps-converted-to.pdf} \caption{Comparison between theoretical and empirical violation probability for $\Omega_{100}$. After gridding $\Omega_{100}$ with granularity $0.01$, the empirical violation probability is evaluated for each grid-point against $10^4$ new samples.}\label{fig:theovsemp} \end{figure} We now compare the theoretical bounds provided in Theorem~\ref{th:VI}, by using $\beta = 10^{-6}$, with an empirical estimate of the violation probability in \eqref{eq:violation_set}. To this end, we generate $K = 100$ samples to obtain $\Omega_{100}$ in Fig.~\ref{fig:preliminary} and, after gridding the set of equilibria with granularity $0.01$, we compute the empirical violation probability for each grid-point against $10^4$ new realizations of $\delta$. The theoretical violation level, encoded by the function $\varepsilon(\cdot)$, is analytically obtained by splitting $\beta$ evenly among the $100$ terms within the summation defined in Theorem~\ref{th:VI}. Given the structure of the problem, the family of equilibria in $\Omega_{100}$ corresponds to an interval, which can be parametrized by the points $(1 - \mu) \, \boldsymbol{x}^{\ast,1} + \mu \, \boldsymbol{x}^{\ast,2}$, for $\mu \in [0,1]$, where $\boldsymbol{x}^{\ast,1}$ and $\boldsymbol{x}^{\ast,2}$ are the extrema of $\Omega_{100}$ (see Fig.~\ref{fig:preliminary}). As reported in Fig.~\ref{fig:theovsemp}, while the theoretical bound in \eqref{eq:prob_feas_boud}, determined by $s_{100} = 2$, provides an equivalent feasibility certificate for all the points in $\Omega_{100}$, the empirical violation probability is generally lower and attains the highest values close to $\boldsymbol{x}^{\ast,1}$ and $\boldsymbol{x}^{\ast,2}$. This is expected, since points closer to the boundary of the set have a higher probability of violation. \section{Conclusion and Outlook} The scenario approach applied to robust game theory provides a numerically tractable framework to compute \gls{GNE} with quantifiable robustness properties in a distribution-free fashion.
In the specific case of a \gls{GNEP} in aggregative form, we can assess the robustness properties of the entire set of generalized equilibria, thus relaxing the equilibrium uniqueness assumption typically imposed in the literature. This merely requires enumerating the active coupling constraints that intersect such a set. Further extensions to other classes of \glspl{GNEP} and potential games, along with different algorithms to compute the cardinality of support subsamples, a crucial quantity for the feasibility certificate, constitute topics of future work. \balance \bibliographystyle{IEEEtran}
\section*{Abstract} This paper illustrates how multilevel functional models can detect and characterize biomechanical changes across different sport training sessions. Our analysis focuses on the relevant case of identifying differences in knee biomechanics in recreational runners during low- and high-intensity exercise sessions with the same energy expenditure, recording $20$ steps per session. To do so, we review the existing literature on multilevel models and then propose a new hypothesis test to examine the changes between different levels of the multilevel model, such as low- and high-intensity training sessions. We also evaluate the reliability of the measures recorded as three-dimensional knee angles using the functional intra-class correlation coefficient (ICC) obtained from the decomposition performed with the multilevel functional model, taking into account the $20$ measures recorded in each test. The results show that there are no statistically significant differences between the two modes of exercise. However, we have to be careful with the conclusions since, as we show, human gait patterns are very individual and heterogeneous between groups of athletes, and other alternatives to the p-value may be more appropriate to detect statistical differences in biomechanical changes in this context. \section*{General overview and motivation} Advances in biosensors and digital medicine are improving disease monitoring and detection. A promising field for implementing these novel strategies is sports training and biomechanics. These tools are a crucial element for optimizing athlete training and reducing the incidence of sports injuries. In these domains, multiple repeated measurements are collected from each individual over different sessions, weeks, or the entire season. It is thus essential to evaluate the changes produced in relevant outcomes at different resolution scales among individuals. In addition, much of the information recorded is of a functional nature, such as the cycle of gait movement. Functional gait analysis enables a more accurate assessment of the effect of fatigue and the detection of potential injury risk factors. This paper illustrates how multilevel functional models can detect and characterize biomechanical changes across different training sessions. Besides, the multilevel models can provide a vectorial representation of different athlete training activities and feed supervised predictive models in various modeling tasks. Our analysis focuses on the relevant case of identifying differences in knee biomechanics in recreational runners during low- and high-intensity exercise sessions with the same energy expenditure, recording $20$ steps per session. To do so, we review the existing literature on multilevel models and then propose a new hypothesis test to examine the changes between different levels of the multilevel model, such as low- and high-intensity training sessions. We also evaluate the reliability of the measures recorded as three-dimensional knee angles using the functional intra-class correlation coefficient (ICC) obtained from the decomposition performed with the multilevel functional model, taking into account the $20$ measures recorded in each test. The results show that there are no statistically significant differences between the two modes of exercise.
However, we have to be careful with the conclusions since, as we show, human gait patterns are very individual and heterogeneous between groups of athletes, and other alternatives to the p-value may be more appropriate to detect statistical differences in biomechanical changes in this context. \section{Introduction} In recent years, there has been a substantial increase in the availability of powerful biosensors. These are now capable of monitoring an individual’s energy expenditure with great accuracy and measuring various physiological and biomechanical variables in real-time. This provides the opportunity to have a unique assessment of an individual’s physical capability \cite{lencioni2019human} and performance, and thus to schedule optimal interventions over time \cite{kosorok2019precision, buford2013toward}. One field that can benefit from the intensive use of these technologies is biomechanics \cite{ibrahim2021artificial,uhlrich2020personalization}. In both sports and general populations, abnormal movement patterns are synonymous with muscular and motor problems, risk of injury, or even the appearance of severe neurological diseases such as Parkinson’s \cite{morris2001biomechanics}. Therefore, the detection and characterization of these abnormal movement patterns in biological activities such as walking and running are essential in areas beyond professional and sports medicine, such as clinical medicine \cite{chia2020decision}. Nowadays, with the growing boom of wearables, these technologies are being democratized, and their use is increasingly common among the general population, such as amateur runners. In this setting, the remote monitoring of athlete training, and even of their daily routine outside of sport activity, is feasible. Although we are in the early stages of this technological revolution, the first research papers are appearing which, through high-resolution data gathered with biosensors, can begin to answer unknown and complex questions about the relationship between training load \cite{cardinale2017wearable}, daily biomechanical patterns \cite{10.1093/biostatistics/kxz033}, and injury prediction \cite{bittencourt2016complex, malone2017unpacking}. Furthermore, they may even enable us to build predictive models that support decision-making and help optimize performance \cite{matabuena2019improved,hemingway2020narrative, piatrikova2021monitoring}. For example, several contemporary works provide new epidemiological knowledge using biomechanical data of human locomotion \cite{10.1093/biostatistics/kxz033}, \cite{WARMENHOVEN2020110106}. Other papers have tried to predict sports injuries \cite{rossi2018effective} or other motor or neurological diseases prematurely \cite{belic2019artificial}, or even the impact of therapy, together with its prognosis, in the recovery phase after surgery \cite{karas2020predicting}. The rising proliferation of running as a sporting activity carries a substantial risk for recreational runners, who often perform high-intensity training such as interval sessions without a formal training schedule. For recreational runners, there has been an increased prevalence of running-related injuries, the most common site of which is the knee \cite{van2007incidence,messier2008risk}. To date, several works have studied the aetiology of running-related knee injuries in recreational runners, some even using $3$-dimensional analysis \cite{messier2008risk}.
However, to the best of our knowledge, no studies have compared biomechanical changes during high-intensity interval training (HIIT) with those during lower-intensity continuous running. Moreover, some essential questions remain unanswered, for example, the reliability of biomechanical measures at the knee in two or more HIIT training sessions. Traditionally, gait analysis has been performed at fixed points within the gait cycle. A more detailed and meaningful analysis can be attained by using the complete stride cycle with functional data analysis (FDA) \cite{febrero2012statistical} techniques. These analyses can provide greater insight into the mechanics of locomotion, especially as a runner fatigues. Under different fatigue conditions, it would be possible to identify with greater clarity the changes that take place within each part of the gait cycle. The predominant data-analysis practice in biomechanics is to summarize the curve recorded for each stride using several statistical metrics and apply standard multivariate techniques, although there is a loss of information with this approach. Functional data methods are rising in popularity in biomechanics, with the purpose of bridging the gap between complex statistical modeling with functional data and the standard analyses used by practitioners. An interesting paper \cite{WARMENHOVEN2020110106} explains to a broad target audience how functional principal component analysis can be used to analyze biomechanical data more accurately. However, this methodology does not take into account that we can obtain multiple strides per individual in each training session. Here, the general procedure is to normalize the curve obtained for each step and body segment to the $[0,1]$ interval, take the mean of the different recorded curves, and analyze the resulting average functional curve. Nevertheless, this procedure can be suboptimal because the constructed mean representation ignores the individual variability between the distinct steps of the same individual, something crucial in the evaluation of movement patterns in some settings. In addition, the mean curve is a summary measure that is very sensitive to outliers, which are frequent in biomechanical data. This is particularly true in the measurement of movements performed at high or low speed, where sensor and/or human variability often increases. Moreover, we often need to compare the effect of an intervention across different training sessions on different days, for which we have several repeated measures per individual in different periods. For all this, a more natural analysis is to exploit the advantages of multi-level functional models that allow the analysis of several hierarchy levels. With these models, we can naturally incorporate into the same statistical model biomechanical patterns drawn from a significant fraction of the data of a complete training session, test, or event. These methods also allow capturing the variations over different periods at an intra- and inter-individual level. Surprisingly, there is little use of these FDA techniques within the literature, either in sport or indeed in other clinical areas \cite{ullah2013applications}. Both of these areas would benefit from larger data sets, be it longitudinally or with more strides or conditions. The objective of this paper is two-fold. First, we will introduce the analysis of multi-level FDA from a methodological point of view.
Second, we will illustrate, from an applied point of view, how these methods can address several exciting biomechanical research questions. For this purpose, we use a sample composed of $19$ athletes during two different training sessions, one moderate and one at high speed, in a controlled laboratory environment. During these training sessions we measured knee patterns with a tridimensional sensor over $20$ strides during the stance phase. The structure of the paper is as follows. First, we introduce multi-level models with FDA and review the literature. We then describe the study sample and perform a more in-depth analysis of tridimensional changes to acquire new biomechanical knowledge. Finally, we discuss the results and future challenges for multi-level models in biomechanics and other sport biosensor data. Given the double target audience of this paper, and to maintain the interest of biomechanical practitioners who do not have a specific interest in the mathematical details, we illustrate the multi-level methodology through the following applied questions addressed in our data analysis example: \begin{itemize} \item What are the correlations between knee functional running patterns during a HIIT training session and the loss of force production in training? \item What is the reliability of the functional running parameters in two independent HIIT training sessions? \item Are there differences in knee angles between a continuous running session and HIIT training? \item Are functional biomechanical patterns very individual between runners? \item Is it appropriate to use the p-value to detect biomechanical changes in practice? \end{itemize} \section{Multi-level functional data analysis} \subsection{General overview of the state of the art of multi-level models} Functional data analysis \cite{febrero2012statistical, cuevas2014partial, wang2016functional} with a multi-level structure and repeated measurements is a field that has received substantial attention in recent years (see for example \cite{lee2018bayesian,li2020regression}). In the statistical community, it appeared in the literature as an essential new methodology for statistical practitioners. These techniques have been applied successfully to answer central scientific questions in heterogeneous domains: to study the variability between subjects, days or tests, physical activity patterns, speech, or sleep quality monitoring \cite{xiao2015quantifying,huang2019multilevel,park2018simple,martinez2013study, pouplier2017mixed, di2009multilevel}. Probably the first work that addressed a mixed functional data problem dates back to $2003$: in that work \cite{morris2003wavelet}, using wavelets and a Bayesian estimation procedure, the effect of the type of dietary fat on O6-methylguanine-DNA-methyltransferase (MGMT), an important biomarker in early colon carcinogenesis, was explored. Since then, different models have appeared in the functional data analysis literature that model different hierarchy levels, including nested and crossed structures, using diverse estimation strategies adapted to the nature of the real problem that the authors try to solve.
From a general point of view, the different data characteristics involved in the creation of new mixed functional data models are the number of data recorded \cite{zipunnikov2011multilevel}, the density of the functional data \cite{https://doi.org/10.1002/sta4.50}, the number of replicates in each unit of the hierarchical structure \cite{zipunnikov2011multilevel}, the structure of dependence between levels of the hierarchy or replicates \cite{10.1093/biostatistics/kxp058, staicu2012modeling}, and the dependence between covariates in multidimensional functional problems \cite{volkmann2021multivariate}. For example, in some relevant applications, such as the analysis of longitudinal data obtained from medical images using nuclear magnetic resonance, the high computational demands of estimating image correlation operators and calculating projections between subjects and visits have resulted in a series of papers on new computationally efficient multi-level methods that scale well in problems involving millions of data points \cite{zipunnikov2011multilevel, zipunnikov2014longitudinal}. In a similar way, essential progress has been made in recent years in the smoothing of correlation operators, whereby, at the moment, in problems with half a million covariates, the smoothing can be done in a few seconds \cite{xiao2016fast, cederbaum2018fast}. In the reverse situation, with sparse functional data, efficient estimation methods have also been proposed \cite{https://doi.org/10.1002/sta4.50, xiao2018fast, li2020fast}; however, from a statistical perspective, data variability increases due to the low density of functional data in this framework, and the estimation problems are magnified. Building on these approaches, several works have proposed different inferential contributions, such as bootstrap re-sampling methods for calculating confidence intervals or comparing the equality of means between groups of subjects, exploiting the rich source of information provided by several measures of the same individual in biological problems \cite{crainiceanu2012bootstrap, goldsmith2013corrected, park2018simple}. Also, some of the previous multi-level models have been generalized to introduce the impact of specific covariates on the variability of the different levels of the hierarchy and subjects, in both supervised and unsupervised problems \cite{crainiceanu2009generalized, 10.1093/biostatistics/kxs051,xiao2015quantifying,scheipl2015functional}. Furthermore, new methods have been proposed in more general setups involving other complex objects, such as functional data with a matrix structure \cite{huang2019multilevel}. This technique has facilitated opportunities to study the variations of physical activity patterns in a group of subjects with cardiac pathology over several weeks, highlighting the power of these models to solve real, complex problems. \subsection{Mathematical models} \subsubsection{Mathematical foundations of standard functional principal component analysis} Let $X(t)$, $t\in [0,1]$, be a random function with mean $\mu(t)= E(X(t))$ and covariance function $\Sigma(t,s)= E\left((X(t)-\mu(t))(X(s)-\mu(s))\right)$ for all $t,s \in [0,1]$.
The heart of many functional data analysis models is the calculation of the modes of variability of the random function $X(t)$, based on the spectral decomposition of the covariance operator $\Sigma(\cdot,\cdot)$ into a set of eigenfunctions $\{e_i(\cdot)\}^{\infty}_{i=1}$ and eigenvalues $\{\lambda_{i}\}^{\infty}_{i=1}$, where we suppose that $\lambda_{1}\geq \lambda_{2} \geq \cdots$. Thus, from the Karhunen-Loève decomposition we have \begin{equation} X(t)= \mu(t)+\sum_{k=1}^{\infty} c_k e_k(t) \end{equation} where $c_k= \int_{0}^{1}(X(t)-\mu(t)) e_k(t) dt$, the $c_k$'s being uncorrelated random variables with mean zero and variance $\lambda_{k}$. These variables are usually known as scores or loading variables. In the real-world setting, we have $n$ generally independent realizations of the process $X(\cdot)$, $X^{1}(\cdot)$, $\dots$, $X^{n}(\cdot)$, but we only observe a sample of $n$ vectors of length $m$, $X^1$, $\dots$, $X^n$, on a grid $\{0\leq t_1< \dots <t_{m}\leq 1\}$, where $X_{j}^i= X^{i}(t_j)$ for all $i=1,\dots,n$, $j=1,\dots,m$. In what follows, for simplicity, we write $X^{i}(t_j)$ to refer to $X_{j}^i$. The simplest estimator of $\Sigma$ is \begin{equation} \hat{\Sigma}(t_j,t_k)= \frac{1}{n} \sum_{i=1}^{n}(X^i(t_j) -\hat{\mu}(t_j))(X^i(t_k) -\hat{\mu}(t_k)), \end{equation} where $\hat{\mu}(t_j)= \frac{1}{n}\sum_{i=1}^{n} X^{i}(t_j)$, for all $j=1,\dots, m$. A major step here, in many applications where observations are subject to a large measurement error, is the smoothing process, needed to ensure the optimal performance of the empirical estimator $\hat{\Sigma}$. Three different strategies have generally been used in the literature \cite{shang2014survey, cederbaum2018fast}: i) smoothing of the original functional data; ii) introduction of a regularization term in the estimation of $\hat{\Sigma}$; and iii) direct application of a smoothing procedure to the raw estimate of $\hat{\Sigma}$. Subsequently, we denote by $\hat{\Sigma}_{Smooth}$ the smoothed version of $\hat{\Sigma}$ obtained by any of the three previous procedures. The next step is to calculate the eigenvectors and eigenvalues of $\hat{\Sigma}_{Smooth}$, according to the spectral theory of linear algebra, as happens in the classical context of principal component analysis in multivariate statistics. After performing this procedure, and selecting the first $K<m$ eigenvectors $\{\hat{e}^k\}_{k=1}^{K}$ and eigenvalues $\{\hat{\lambda}_k\}_{k=1}^{K}$, we obtain the following decomposition: \begin{equation} X^{i}(t_j) \approx \hat{\mu}(t_j) + \sum_{k=1}^{K} \hat{c}^{i}_{k} \hat{e}^{k}_j \hspace{0.25cm} (i=1,\cdots,n; j=1,\cdots, m), \end{equation} where $\hat{c}^{i}_{k}= \langle X^{i}-\hat{\mu},\hat{e}^{k}\rangle$, denoting by $\langle\cdot,\cdot\rangle$ the usual scalar product and by $\hat{e}^{k}_j$ the $j$-th component of the eigenvector $\hat{e}^k$. More details of this procedure can be found in the following reviews and general books on functional data analysis \cite{horvath2012inference,shang2014survey, kokoszka2017introduction}, where different procedures for selecting the number $K$ of components are established \cite{li2013selecting}. For more theoretical aspects of the estimators, such as their asymptotic properties, we refer the reader to \cite{hall2006properties}. \subsubsection{Introduction of functional multilevel models}\label{sec:multilevel1} In the previous Section, we have seen how to carry out a principal component analysis when we observe $n$ independent functional data.
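Before moving on, it may help to make the single-level pipeline just reviewed concrete. The following minimal sketch (our own illustration in Python/\texttt{numpy}, not taken from the references; the integration weight of the grid is omitted, which only rescales eigenvalues and scores) estimates the mean, the raw covariance, the first $K$ eigenpairs and the scores from an $n\times m$ data matrix whose rows are the observed curves:
\begin{verbatim}
import numpy as np

def fpca(X, K):
    # X: (n x m) matrix; each row is one curve observed on a common grid.
    n, m = X.shape
    mu = X.mean(axis=0)                # pointwise mean estimate
    Xc = X - mu                        # centered curves
    Sigma = Xc.T @ Xc / n              # raw covariance estimate
    # (a smoothing step on Sigma would be inserted here in noisy settings)
    lam, E = np.linalg.eigh(Sigma)     # spectral decomposition
    idx = np.argsort(lam)[::-1][:K]    # keep the K largest eigenvalues
    lam, E = lam[idx], E[:, idx]
    scores = Xc @ E                    # discrete inner products <X^i - mu, e^k>
    return mu, lam, E, scores
\end{verbatim}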
In practice, in biomedical and sports applications, when patients or athletes are analyzed at different moments in time, for example the training load throughout a season, it is common to have several repeated and correlated measurements. Therefore, the previous procedure may be inadequate. Before starting, we introduce some extra notation to describe this scenario. Let $X^{i,j}(t)$, $t\in [0,1]$, be the functional datum of individual $i$ at measurement $j$, for $i=1,\cdots, n$, $j=1,\cdots, n_i$, where for simplicity we assume $n_i= J$ for all $i=1,\cdots, n$. To start, let us consider the two-way functional ANOVA model, whose structure is introduced below: \begin{equation}{\label{sec:multilevel}} X^{i,j}(t)= \mu(t)+\nu^{j}(t)+ Z^{i}(t)+W^{i,j}(t) \hspace{0.2cm} (i=1,\dots, n; j=1,\dots, J), \end{equation} where $\mu(t)$ is the global mean, $\nu^{j}(t)$ is the mean of measure $j$, $Z^{i}(t)$ is the subject-specific deviation from the measure-specific mean function, and $W^{i,j}(t)$ is the residual subject- and measure-specific deviation from the subject-specific mean. In this framework, $\mu(t)$ and $\nu^{j}(t)$ are treated as fixed functions, while $Z^i(t)$ and $W^{i,j}(t)$ are treated as random functions with mean zero. Moreover, so that the model is correctly identified, we assume that $Z^{i}(t)$ and $W^{i,j}(t)$ are uncorrelated random functions. In many applications, $\nu^{j}(t)$ can be set to zero when the functional responses are interchangeable within different measures, and the model becomes a one-way functional ANOVA. In the literature on multi-level models, the functions $Z^{i}(t)$ are known as the level-$1$ functions, while the functions $W^{i,j}(t)$ compose level $2$. Again, the foremost step in a multi-level functional principal component analysis is to rely on the Karhunen-Loève decomposition. For example, in the model defined by Equation \ref{sec:multilevel}, we have \begin{equation}{\label{sec:multilevel2}} Z^{i}(t)= \sum_{k=1}^{\infty} c_{k}^{i} e_{k}^{(1)}(t) \hspace{0.4cm} W^{i,j}(t)= \sum_{k=1}^{\infty} d_{k}^{i,j} e^{(2)}_{k}(t) \end{equation} where $\{e_{k}^{(1)}\}^{\infty}_{k=1}$ and $\{e_{k}^{(2)}\}^{\infty}_{k=1}$ are the eigenfunctions related to the random functions of levels $1$ and $2$, respectively, while $\{c_{k}^{i}\}^{\infty}_{k=1}$ and $\{d_{k}^{i,j}\}^{\infty}_{k=1}$ are the scores or loading variables, for all $i=1,\dots, n$; $j=1,\dots, J$. In more compact form, Equation \ref{sec:multilevel} is rewritten as \begin{equation}{\label{sec:multilevel3}} X^{i,j}(t)= \mu(t)+\nu^{j}(t)+\sum_{k=1}^{\infty} c_{k}^{i} e_{k}^{(1)}(t)+\sum_{k=1}^{\infty} d_{k}^{i,j} e^{(2)}_{k}(t) \hspace{0.2cm} (i=1,\cdots, n; j=1,\cdots, J). \end{equation} Importantly, in these models the families $\{e_{k}^{(1)}\}^{\infty}_{k=1}$ and $\{e_{k}^{(2)}\}^{\infty}_{k=1}$ are orthonormal bases of the space of square-integrable functions, but in general the functions of one basis are not orthogonal to those of the other, which implies that the estimation of the scores is not simple in practice, a topic that we discuss later. Moreover, the scores $\{c_{k}^{i}\}^{\infty}_{k=1}$ and $\{d_{k}^{i,j}\}^{\infty}_{k=1}$ are random variables with mean zero whose variances are given by the covariance functions of the stochastic processes $Z^i(t)$ and $W^{i,j}(t)$. Below, we explain how to calculate the eigenfunctions and eigenvalues of the model defined in Equation \ref{sec:multilevel3}.
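To fix ideas, the following minimal simulation (our own illustrative sketch; the eigenfunctions, eigenvalues and fixed effects below are arbitrary choices, not estimates from our data) generates data from the model \ref{sec:multilevel} with truncated Karhunen-Loève expansions at both levels:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, J, m = 19, 2, 101                  # subjects, measures per subject, grid size
t = np.linspace(0, 1, m)

# Orthonormal eigenfunctions for the two levels (illustrative choices)
e1 = np.sqrt(2) * np.array([np.sin(2*np.pi*t), np.cos(2*np.pi*t)])   # level 1
e2 = np.sqrt(2) * np.array([np.sin(4*np.pi*t), np.cos(4*np.pi*t)])   # level 2
lam1, lam2 = np.array([2.0, 1.0]), np.array([0.5, 0.25])             # eigenvalues

mu = 10 * t * (1 - t)                 # global mean mu(t)
nu = np.vstack([0.2 * t, -0.2 * t])   # fixed effect nu^j(t) of each measure

c = rng.normal(scale=np.sqrt(lam1), size=(n, 2))      # subject scores c_k^i
d = rng.normal(scale=np.sqrt(lam2), size=(n, J, 2))   # residual scores d_k^{i,j}

# X^{i,j}(t) = mu + nu^j + Z^i + W^{i,j}, stored as an (n, J, m) array
X = mu + nu[None, :, :] + (c @ e1)[:, None, :] + d @ e2
\end{verbatim}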
Let $\Sigma_{T}(s,t)= Cov(X^{i,j}(s), X^{i,j}(t))$ be the overall covariance function and $\Sigma_{B}(s,t)= Cov(X^{i,j}(s), X^{i,k}(t))$, $j\neq k$, the covariance function between different measures of the same unit, which isolates the effect of the first level. Applying Mercer's theorem, one can verify that $\Sigma_{T}(s,t)= \sum_{k=1}^{\infty} e_{k}^{(1)}(s) e_{k}^{(1)}(t) \lambda_{k}^{(1)}+\sum_{k=1}^{\infty} e_{k}^{(2)}(s) e_{k}^{(2)}(t) \lambda_{k}^{(2)}$ and $\Sigma_{B}(s,t)= \sum_{k=1}^{\infty} e_{k}^{(1)}(s) e_{k}^{(1)}(t) \lambda_{k}^{(1)}$. We define $\Sigma_{W}(s,t)= \Sigma_{T}(s,t)-\Sigma_{B}(s,t)=\sum_{k=1}^{\infty} e_{k}^{(2)}(s) e_{k}^{(2)}(t) \lambda_{k}^{(2)}$, where the indices $T$, $B$, and $W$ refer to the ``total,'' ``between,'' and ``within'' subject covariances. As in the previous Section, the curves are only observed on a grid of points $\{0\leq t_1< \cdots <t_{m}\leq 1\}$, and in this situation we have to rely on the empirical estimators $\hat{\Sigma}_{T}(t_s,t_k)$, $\hat{\Sigma}_{W}(t_s,t_k)$, $\hat{\Sigma}_{B}(t_s,t_k)$. Unlike in the preceding Section, we only directly observe information from the process $X^{i,j}(t)$; nevertheless, it is possible to estimate the covariance matrix $\hat{\Sigma}_{T}(t_s,t_k)$ with the usual empirical estimator, that is, \begin{equation} \hat{\Sigma}_{T}(t_s,t_k)= \frac{1}{nJ} \sum_{i=1}^{n}\sum_{j=1}^{J}(X^{i,j}(t_s)-\hat{\mu}(t_s)-\hat{\nu}^{j}(t_s))(X^{i,j}(t_k)-\hat{\mu}(t_k)-\hat{\nu}^{j}(t_k)). \end{equation} To estimate the covariance operator $\Sigma_{B}(s,t)$, it is enough to appeal to the method of moments or, in an equivalent way, to a covariance estimator based on a $U$-statistic, \begin{equation} \hat{\Sigma}_{B}(t_s,t_k)= \frac{1}{nJ(J-1)} \sum_{i=1}^{n}\sum_{j=1}^{J} \sum_{j^{\prime} \neq j}^{J} (X^{i,j}(t_s)-\hat{\mu}(t_s)-\hat{\nu}^{j}(t_s))(X^{i,j^{\prime}} (t_k)-\hat{\mu}(t_k)-\hat{\nu}^{j^{\prime}}(t_k)). \end{equation} As $\hat{\Sigma}_{W}= \hat{\Sigma}_{T}-\hat{\Sigma}_{B}$ is not necessarily a positive semi-definite matrix in the sample context, we have to trim the eigenvalue-eigenvector pairs with negative eigenvalues. Finally, for the calculation of the scores $\{c_{k}^{i}\}^{\infty}_{k=1}$ and $\{d_{k}^{i,j}\}^{\infty}_{k=1}$ (for all $i=1,\dots, n$; $j=1,\dots, J$), different estimation strategies can be used. We highlight computationally intensive methods such as Markov chain Monte Carlo (MCMC) \cite{di2009multilevel}, projection algorithms designed for this problem, or the method that is most computationally efficient and most used in practice: the best linear unbiased predictor (BLUP) for mixed models \cite{robinson1991blup}, which is detailed for the multilevel problem defined in this Section in the reference \cite{https://doi.org/10.1002/sta4.50}. \subsubsection{More general extensions}\label{sec:multilevel2} In real problems, different levels of hierarchy may appear, which can be nested, as in the previous Section, or crossed. Following \cite{shou2015structured}, the different situations that usually occur are listed in Table \ref{table:sec1}. All these models have the same structure $X(t)= \mu(t)+ \sum \text{(latent processes)} +\epsilon(t)$, where $\mu(t)$ is the mean curve or fixed effect and $\epsilon(t)$ is a white noise, $\epsilon(t) \sim N(0,\sigma^2)$ for all $t\in [0,1]$. The latent processes are assumed to be zero-mean and square-integrable so that they are identifiable, and the standard statistical assumptions for scalar outcomes can be generalized to functional data.
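Returning for a moment to the moment-based estimators of the previous subsection, the following minimal sketch (our own illustration in Python/\texttt{numpy}, assuming a balanced design and a data array \texttt{X} of shape $(n, J, m)$ as in the simulation above) computes $\hat{\Sigma}_T$, $\hat{\Sigma}_B$ and the trimmed $\hat{\Sigma}_W$:
\begin{verbatim}
import numpy as np

def mom_covariances(X):
    # Moment estimators Sigma_T, Sigma_B, Sigma_W for the two-way model;
    # X has shape (n, J, m) for a balanced design.
    n, J, m = X.shape
    R = X - X.mean(axis=0)                   # remove mu(t) + nu^j(t) estimates
    same = np.einsum('ijs,ijk->sk', R, R)    # sum over i, j of R_{ij} R_{ij}^T
    Rsum = R.sum(axis=1)                     # (n, m): sum over measures j
    Sigma_T = same / (n * J)
    # U-statistic over pairs j != j' within each subject
    Sigma_B = (Rsum.T @ Rsum - same) / (n * J * (J - 1))
    Sigma_W = Sigma_T - Sigma_B
    lam, E = np.linalg.eigh(Sigma_W)         # trim negative eigenvalues
    Sigma_W = (E * np.clip(lam, 0, None)) @ E.T
    return Sigma_T, Sigma_B, Sigma_W
\end{verbatim}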
With this structure, the total variability of a functional outcome is decomposed into a sum of process-specific variations plus $\sigma^{2}$. Both nested and crossed models can be handled with a general estimation strategy. Below we summarize the necessary steps, which are analogous to those explained in the previous Section: \begin{enumerate} \item Estimate the means and covariance functions involved in the different models via moment methods. \item With the estimated covariance functions, calculate an appropriate number $K$ of eigenvalues and eigenvectors for the different levels of the hierarchy, so that the modes of variability relevant to the problem we want to address are captured in a precise way. \item Estimate the scores using the BLUP estimator \cite{robinson1991blup}, as is done in \cite{shou2015structured} based on \cite{zipunnikov2011multilevel} and \cite{crainiceanu2009generalized}. \end{enumerate} \begin{center} \begin{table}[ht!] \scalebox{0.8}{ \begin{tabular}{ |c|c|c| } \hline & Model & Structure \\ \hline \multirow{4}{4em}{Nested} & (N1) One-way & $X^{i}(t)= \mu(t)+Z^{i}(t)+\epsilon_{i}(t) $ \\ & (N2) Two-way & $X^{i,j}(t)= \mu(t)+Z^{i}(t)+W^{i,j}(t)+ \epsilon_{i,j}(t) $ \\ & (N3) Three-way & $X^{i,j,k}(t)=\mu(t)+Z^{i}(t)+W^{i,j}(t)+ U^{i,j,k}(t)+ \epsilon_{i,j,k}(t) $ \\ & (NM) Multi-way & $X^{i_1,i_2,\cdots, i_r}(t)=\mu(t)+R_{(1)}^{i_1}(t)+R_{(2)}^{i_2}(t)+\cdots+ R_{(r)}^{i_r}(t)+ \epsilon_{i_1,i_2,\cdots, i_r}(t) $ \\ \hline \multirow{3}{4em}{Crossed} & (C2) Two-way & $X^{i,j}(t)= \mu(t)+ \eta^{j}(t)+Z^{i}(t)+W^{i,j}(t)+ \epsilon_{i,j}(t) $ \\ & (C2s) Two-way sub & $X^{i,j,k}(t)=\mu(t)+\eta^{j}(t)+Z^{i}(t)+W^{i,j}(t)+ U^{i,j,k}(t)+ \epsilon_{i,j,k}(t) $ \\ & (CM) Multi-way & $X^{i_1,i_2,\dots, i_r,u}(t)=\mu(t)+R^{S_1}(t)+R^{S_2}(t)+\cdots+ R^{S_d}(t)+ \epsilon_{i_1,i_2,\dots, i_r,u}(t) $ \\ \hline \end{tabular}} \caption{Structured functional models. For nested models, $i=1,2,\dots,n$; $j=1,2,\dots,n_i$; $k=1,2,\dots,K_{ij}$; $i_1=1,2,\dots,I_1$; $i_2=1,2,\dots, I_{2,i_1}$; \dots; $i_r=1,2,\dots, I_{r,i_1,\dots,i_{r-1}}$. For crossed designs, $i=1,2,\dots,n$; $j=1,2,\cdots,J$; $k=1,2,\cdots,n_{ij}$. (C2s) ``Two-way sub'' stands for ``Two-way crossed design with subsampling''; (CM) contains combinations of any subset $(s=1,2,\dots,r)$ of the latent processes, as well as repeated measurements within each cell, with $S_1,S_2,\dots,S_d \in\{i_{k_1}i_{k_2} \dots i_{k_s}u : k_1,k_2,\dots,k_s \in (1,2, \dots,r), u\in(\emptyset,1,2,\dots,I_{i_1,i_2,\dots,i_r}), s \leq r\}$, where $u$ is the index for repeated observations in cell $(i_{k_1},i_{k_2},\dots, i_{k_s})$. $\epsilon(t)$ is a random error $N(0,\sigma^{2})$.} \label{table:sec1} \end{table} \end{center} In step $1$, to estimate the covariance functions in the different models mentioned above, a general estimation strategy proposed in \cite{koch1968some} can be used. For example, following the notation and problem $(N2)$ defined in the previous Section, $\hat{\Sigma}_{T}$, $\hat{\Sigma}_W$ and $\hat{\Sigma}_B$ can be expressed with the following sandwich structure: \begin{equation}\label{eqn:sand} \hat{\Sigma}_{T}= X G_{T} X^{T} \hspace{0.2cm} \hat{\Sigma}_{W}= X G_{W} X^{T} \hspace{0.2cm} \hat{\Sigma}_{B}= X G_{B} X^{T}, \end{equation} where $X$ is a matrix of size $m\times (nJ)$ whose columns record the observed curves of all the individuals and levels of the hierarchy, while $G_{T}$, $G_{W}$ and $G_{B}$ are design-specific matrices of dimension $(nJ)\times (nJ)$.
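As a quick numerical check of the sandwich representation (our own sketch; for simplicity, the matrix $G_T$ below centers with the overall mean, ignoring the measure-specific means), one can verify that $X G_{T} X^{T}$ reproduces the directly centered covariance estimator:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, J, m = 19, 2, 101
X = rng.normal(size=(m, n * J))    # columns are the observed curves
N = n * J
G_T = (np.eye(N) - np.ones((N, N)) / N) / N   # design matrix for Sigma_T
Sigma_T = X @ G_T @ X.T                       # sandwich estimator (m x m)

# Agreement with the direct centered estimator
Xc = X - X.mean(axis=1, keepdims=True)
assert np.allclose(Sigma_T, Xc @ Xc.T / N)
\end{verbatim}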
In particular, the usual covariance matrix $\hat{\Sigma}_{T}$ is written as $\hat{\Sigma}_{T}= X G_{T} X^{T}$, with $G_{T}= \frac{1}{nJ} \left(I-\frac{1}{nJ}\mathbf{1}\mathbf{1}^{T}\right)$, where $I$ denotes the identity matrix and $\mathbf{1}$ denotes the length-$nJ$ vector with all ones. More details about these procedures, as well as about the selection of the components and the score estimation, can be found in the references \cite{di2009multilevel,shou2015structured}. \subsubsection{Intra-class correlation coefficient (ICC)} A significant problem when several repeated measurements are collected from a subject over different days or other periods is to determine how much variability is explained by the subject effect and how much by making different measurements over the different levels of the hierarchy. In the literature, this issue is known as the estimation of the intra-class correlation coefficient (ICC) \cite{muller1994critical}, which pursues the estimation of the variability of measuring a subject under conditions that are assumed to be standardized across different tests. The estimation of the ICC is crucial, for example, in the field of clinical laboratory testing, where we want clinical variables used for the monitoring and diagnosis of patients not to be modified abruptly between days by measurement error of the device or by the intra-day variability of individuals; see an example of the above for diabetes in \cite{Selvin2007short}. In biomechanics and the exercise sciences, the quantification of the ICC is also critical in the search for objective criteria to assess performance and control an individual's degree of fatigue \cite{van2002reliability,koldenhoven2018validation}. Although a variable may have a high variability, it can still be a very useful criterion for decision-making; in this case, it is necessary to make several measurements to capture that variable accurately. The ICC can also quantify how many measurements we have to make to capture the variable's distribution with enough accuracy. The first model where the ICC was estimated is $(N2)$ of Table \ref{table:sec1}. In this scenario, we have \begin{equation} X^{i,j}(t)= \mu(t)+Z^{i}(t)+ W^{i,j}(t)+ \epsilon_{i,j}(t). \end{equation} For fixed $t\in [0,1]$, by analogy with the univariate non-functional case, the proportion of the total variability explained by the effect of the subjects at that point is given by \begin{equation} \rho(t)= \frac{Var(Z^{i}(t))}{Var(Z^{i}(t) +W^{i,j}(t)+\epsilon_{i,j}(t))}, \end{equation} where $\rho(t)$ is the intra-class correlation coefficient at the point $t$. In a straightforward way, the ICC can be generalized to a global measure at the functional level (see for example \cite{shou2013quantifying}) by comparing the variability captured by the involved covariance operators $K_{A}$ and $K_{W}$ (the covariance functions at levels $1$ and $2$) and the global white noise $\epsilon(t)$ with the total variability. Thus, the global ICC, denoted $\rho$, is \begin{equation} \rho= \frac{tr(K_{A})}{tr(K_{A})+ tr(K_{W})+\sigma^{2}}= \frac{\sum_{k=1}^{\infty} \lambda_{k}^{(1)}}{\sum_{k=1}^{\infty} \lambda_{k}^{(1)}+ \sum_{k=1}^{\infty} \lambda_{k}^{(2)} + \sigma^{2} }, \end{equation} where with $tr(\cdot)$ we denote the trace operation, and where we use the notation of Section \ref{sec:multilevel1}.
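Numerically, once the level-specific covariances and the noise variance have been estimated, the global ICC reduces to a simple ratio of traces. A minimal sketch (our own illustration, compatible with the estimators computed in the earlier snippets; \texttt{dt} is the grid spacing) could be:
\begin{verbatim}
import numpy as np

def functional_icc(Sigma_B, Sigma_W, sigma2, dt):
    # Sigma_B: level-1 ("between") covariance matrix, K_A on the grid
    # Sigma_W: level-2 ("within") covariance matrix,  K_W on the grid
    # sigma2:  white-noise variance; dt: grid spacing, so that the trace
    # times dt approximates the sum of the eigenvalues of the operator
    trA = np.trace(Sigma_B) * dt
    trW = np.trace(Sigma_W) * dt
    return trA / (trA + trW + sigma2)
\end{verbatim}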
We point out that the homoscedastic error term $\epsilon(t)$ has been included according to the convention followed in Section \ref{sec:multilevel2}, a more general setting in which this source of random variability is decomposed into an independent term. The ICC can be calculated in more complex multi-level models. Suppose that we wish to use model (N3), so that we have three levels. To develop such a task, it is enough to divide the source of variability generated by the hierarchy associated with the subjects by all the variability sources, that is: \begin{equation} \rho= \frac{\sum_{k=1}^{\infty} \lambda_{k}^{(1)}}{\sum_{k=1}^{\infty} \lambda_{k}^{(1)}+ \sum_{k=1}^{\infty} \lambda_{k}^{(2)}+\sum_{k=1}^{\infty} \lambda_{k}^{(3)} + \sigma^{2}}. \end{equation} Recently, the intra-class correlation coefficient has been extended to objects that live in complex spaces, where the similarity between objects can be computed by a suitable distance \cite{xu2020generalized}. \subsubsection{Hypothesis testing between different levels} Consider the model (N3) specified in Table \ref{table:sec1}: \begin{equation} X^{i,j,k}(t)=\mu(t)+Z^{i}(t)+W^{i,j}(t)+ U^{i,j,k}(t)+ \epsilon_{i,j,k}(t), \end{equation} where $i=1,2,\dots,n$; $j= 1,2,\dots,n_i$; $k=1,2,\dots,K_{ij}$. Without loss of generality, suppose that the first level is the individual, the second is the test performed (HIIT or CTR), and the last level is the stride number. Comparisons between the differences in the HIIT and CTR runs are of considerable interest in biomechanical studies. To do this, we need to compare the difference between the level-$2$ effect functions \begin{equation} W^{i, HIIT}(t) \hspace{0.2cm} \text{and} \hspace{0.2cm} W^{i, CTR}(t), \hspace{0.2cm} \forall t\in [0,1], i=1,\dots,n. \end{equation} Invoking again the Karhunen-Loève decomposition, we know that $W^{i, HIIT}(t)\approx \sum_{k=1}^{m} d^{i,HIIT}_{k} e^{(2)}_k (t)$ and $W^{i, CTR}(t)\approx \sum_{k=1}^{m} d^{i,CTR}_{k} e^{(2)}_k (t)$. Then, to test the null hypothesis in a distributional sense, $H_0: W^{HIIT}= W^{CTR}$, we can test the equality of the score distributions as follows: \begin{equation*} d_k^{HIIT} \overset{D}{=} d_{k}^{CTR} \hspace{0.2cm} (k=1,\dots, m), \end{equation*} and it is expected that, as $m,n\to \infty$, we have asymptotic test consistency. In practice, for fixed $k$, we can test for univariate distributional changes with the estimated scores of the second level, composed of the HIIT and CTR run effects respectively, $\{\hat{d}^{i,HIIT}_{k}\}^{n}_{i=1}$, $\{\hat{d}^{i,CTR}_{k}\}^{n}_{i=1}$. For this purpose, we can use the rich family of tests provided by the energy distance methodology \cite{rizzo2016energy}, or classical tests such as Kolmogorov-Smirnov or Cramér-von Mises. As we apply a univariate test $m$ times and obtain $m$ marginal p-values (one for each score), we must apply a false discovery rate correction \cite{benjamini1995controlling} or other criteria for multiple comparisons to control the type $1$ error under the null hypothesis. Finally, we return as global p-value $\min\{p^{*}_1,\cdots, p^{*}_m\}$, where $p^{*}_{i}$ denotes the adjusted p-value for score $i$. A methodology similar to the one introduced here was used in the standard setup of hypothesis testing with functional data \cite{pomann2013two}, outside of a multilevel data framework.
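The recipe above is straightforward to implement. A minimal sketch (our own illustration in Python, using the Kolmogorov-Smirnov test from \texttt{scipy} and a hand-coded Benjamini-Hochberg adjustment; \texttt{d\_hiit} and \texttt{d\_ctr} are hypothetical $n\times m$ arrays of estimated second-level scores) could read:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def score_test(d_hiit, d_ctr):
    # d_hiit, d_ctr: (n x m) arrays of estimated level-2 scores for the
    # HIIT and CTR runs. Returns the global p-value min_k p*_k after a
    # Benjamini-Hochberg correction of the m marginal KS p-values.
    m = d_hiit.shape[1]
    p = np.array([ks_2samp(d_hiit[:, k], d_ctr[:, k]).pvalue
                  for k in range(m)])
    order = np.argsort(p)              # Benjamini-Hochberg adjustment
    raw = p[order] * m / np.arange(1, m + 1)
    adj = np.empty(m)
    adj[order] = np.minimum.accumulate(raw[::-1])[::-1]
    return min(adj.min(), 1.0)
\end{verbatim}
An energy-distance test could be substituted for the Kolmogorov-Smirnov test here without changing the correction step.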
\subsection{Summary of functional multi-level models. Why are these models so important?} The increasing ability to store different profiles and functions of different variables that measure individuals' health from a broad spectrum of perspectives, at different time scales, provides several methodological challenges of statistical analysis that multilevel models can solve. In particular: \begin{itemize} \item We can obtain a vectorial representation for each individual that captures the differences between individuals in a context of repeated and longitudinal measures. \item We can obtain the same representation for each individual at different hierarchical levels, for example, in a specific run and a specific recorded stride. \item For a specific individual, we can estimate the differences between different hierarchical levels. In addition, we can quantify intra- and inter-individual variability at all model levels. With this model, we can see the specific modes of variability under specific conditions and compare them with other conditions. \item We can obtain reliability measures such as the ICC, or compare, through hypothesis testing, changes along a group of individuals or test conditions with paired and repeated measures. We can do this with the methodology previously established or following \cite{crainiceanu2012bootstrap}. \end{itemize} \section{Biomechanical data} \subsection{General description of the study and variables} In order to assess biomechanical changes in typical training sessions of recreational runners on an equal footing, $20$ participants ($10$ women and $10$ men) were initially selected to complete four typical training sessions sufficiently spaced in time. Two were high-intensity interval training sessions, and the other two were continuous runs. In the first case, athletes ran $6\times 800$~m intervals at 1 km/h below their maximum aerobic speed with $1{:}1$ recovery. In the second, the athletes completed a continuous run at a speed below the maximal steady state. The duration of the continuous run was individualised to match the same estimated energy expenditure as the interval training. In addition, the training sessions were conducted at the same time of day to avoid possible diurnal fluctuations. Running kinematics were measured with a three-dimensional motion analysis system that collects data at a frequency of 500~Hz. All sessions were performed in an environmentally controlled laboratory setting, and all athletes used the same treadmill. Isometric strength for various actions was recorded pre- and post-run. The participants' basic characteristics can be found in Table \ref{table:tabla2}. The strength changes in the last 800-m interval in hip abduction, hip adduction and knee extension are shown in Figure \ref{fig:fuerza1}. In this paper, we analyze $20$ cycles of the stance phase for each run over $19$ participants. As a precaution, we excluded one of the participants due to the presence of some outliers and missing data in some parts of the strides. In our analysis, we have only focused on what happens in the three-dimensional knee segments: Knee-X, Knee-Y and Knee-Z. Additional details about the study design and how the measurements were made can be found, for example, in \cite{10.3389/fbioe.2020.00360}. \begin{table}[ht!]
\begin{center} \begin{tabular}{|c|c|c|} \hline Variable & Female & Male \\ \hline Age (years) & $42.3 \pm 4.4$ & $43.8 \pm 4$ \\ \hline Height (cm) & $164.8 \pm 6.3$ & $181.2 \pm 7.9$ \\ \hline Mass (kg) & $58.3 \pm 6.6$ & $77.3\pm 6.5$ \\ \hline HIIT Speed ($m\cdot s^{-1}$) & $3.9 \pm 0.3$ & $4.6 \pm 0.3$ \\ \hline HIIT rep duration (min:sec) & $3{:}24 \pm 13$~s & $2{:}47 \pm 16$~s \\ \hline MICR Speed $(m\cdot s^{-1})$ & $3.3 \pm 0.2$ & $3.6 \pm 0.4$ \\ \hline MICR duration (min:sec) & $32{:}16 \pm 2{:}03$ & $25{:}53 \pm 3{:}40 $\\ \hline $VO_{2}$ max ($ml\cdot kg^{-1}\cdot min^{-1}$) & $52.8 \pm 5.0$ & $60.5\pm 4.4$ \\ \hline sLTP ($m\cdot s^{-1}$) & $3.4 \pm 0.1$ & $3.9 \pm 0.3$ \\ \hline $\%$ $VO_{2}$ max at sLTP & $81.3 \pm 5.9$ & $72.7 \pm 8.1$ \\ \hline \end{tabular} \caption{Descriptive characteristics of the participants and training runs: speeds, durations, $VO_{2}$ max, speed at the lactate turnpoint (sLTP), and percentage of $VO_{2}$ max at sLTP, represented as mean $\pm$ standard deviation.} \label{table:tabla2} \end{center} \end{table} \begin{figure}[ht!] \centering \includegraphics[width=0.7\linewidth]{fuerzanuevo} \caption{Changes in strength production at the start and end of the last HIIT session for hip abduction, hip adduction and knee extension.} \label{fig:fuerza1} \end{figure} \subsection{Aims of the analysis, clinical implications and statistical analysis} Knee injuries are the most common injury among runners of all levels. Therefore, analyzing changes in posture and differences during typical training sessions has high clinical value for acquiring new epidemiological knowledge related to the causes of injuries. Examining the reliability between two interval training sessions is of fundamental importance: it will enable us to know how much information needs to be captured in order to characterize an athlete's biomechanical profile in a training session. In addition, changes in biomechanical patterns are very individualized and variable between individuals; therefore, using statistical tools that put the focus on the average subject of the study population, rather than on individual variations, can be very misleading, particularly in studies where the sample size is small. Finally, it is of interest to know whether the information registered through the functional profiles and analyzed with the multilevel models captures the changes that occur during the athletes' training sessions. In order to answer some of the questions mentioned above, we have divided the statistical analysis made with the three-dimensional information of the knee into the following items: \begin{enumerate} \item Examine the correlation between the scores obtained after applying the functional principal component analysis and the force production changes in the training session. \item Estimate the multilevel functional intraclass correlation coefficient to measure the reliability between two interval training sessions using the information from the $20$ strides. \item Establish whether statistically significant differences exist between a continuous and an interval training session by means of a hypothesis test exploiting the representation constructed from a functional multilevel model. \end{enumerate} In particular, we have selected the three-level nested model (N3) from Table \ref{table:sec1} to model all the previous issues.
In the analyses outlined, the results will be accompanied by graphs that help us understand and discuss how individualized the biomechanical changes are and how useful a hypothesis test is for inferring conclusions in this context. \section{Results} Figures \ref{fig:atletas1} and \ref{fig:atletas2} contain the information about the 20 strides per individual in two different HIIT runs for Knee-X, Knee-Y and Knee-Z. We can see that there are subjects in whom there are hardly any differences in their biomechanical profiles between the two runs. However, in others, differences seem to be present. In addition, we can also see that the patterns between the two runs are quite individual; no common pattern in angle values exists across all the runners examined. Figure \ref{fig:correlacion1} shows the bivariate association between each functional score, obtained after applying the multilevel principal component analysis, and the changes in strength production across athletes. The results show that for some scores there is a significant correlation with the changes in strength production. However, in other cases, the correlation is very poor. Notwithstanding this, we only examined the marginal associations; interactions between scores could probably be examined in more complex models, but the limited sample size of this study prevents fitting such models. The functional ICC is $0.55$ for Knee-X, $0.54$ for Knee-Y, and $0.61$ for Knee-Z. Likewise, at a significance level of 5\%, no statistical differences were found between the biomechanical patterns during interval training and continuous running, with p-values for Knee-X, Knee-Y, and Knee-Z of $0.17$, $0.12$ and $0.4$, respectively. Figures \ref{fig:cambios1} and \ref{fig:cambios3} show the measured curves of each group, taken as the average of the $20$ strides, along with the biomechanical profiles of two athletes we consider representative. They show that the biomechanical changes between an interval run and a continuous run are highly variable among the athletes. In some individuals, there are no biomechanical changes in running style, while in others, the biomechanical profiles are very different. However, the mean values are not significantly different. \begin{figure}[ht!] \subfloat[fig 1]{\includegraphics[width = 5.0in, height=2.5in, scale=0.6]{correlacionX}}\\ \subfloat[fig 2]{\includegraphics[width = 5.0in, height=2.5in, scale=0.6]{correlacionY}}\\ \subfloat[fig 3]{\includegraphics[width = 5.0in, height=2.5in, scale=0.6]{correlacionZ}}\\ \caption{Spearman correlation and bidimensional plots between the functional scores calculated with the multilevel models and the change in strength production in hip abduction, hip adduction and knee extension.} \label{fig:correlacion1} \end{figure} \begin{figure}[ht!] \subfloat[fig 1]{\includegraphics[width = 2.5in]{KneeX1nuevo}} \subfloat[fig 2]{\includegraphics[width = 2.5in]{KneeX2nuevo}}\\ \subfloat[fig 3]{\includegraphics[width = 2.5in]{KneeY1nuevo}} \subfloat[fig 4]{\includegraphics[width = 2.5in]{KneeY2nuevo}}\\ \subfloat[fig 5]{\includegraphics[width = 2.5in]{KneeZ1nuevo}} \subfloat[fig 6]{\includegraphics[width = 2.5in]{KneeZ2nuevo}}\\ \caption{Angle profiles recorded along $20$ strides in two HIIT sessions. Each individual's curves have the same color within the same plot. We show graphics for Knee-X, Knee-Y, and Knee-Z.} \label{fig:atletas1} \end{figure} \begin{figure}[ht!]
\subfloat[fig 1]{\includegraphics[width = 2.2in]{KneeX3nuevo}} \subfloat[fig 2]{\includegraphics[width = 2.2in]{KneeY3nuevo}}\\ \begin{center} \subfloat[fig 3]{\includegraphics[width = 2.2in]{KneeZ3nuevo}} \end{center} \caption{Angle profiles recorded along $20$ strides in two HIIT sessions. Each individual's curves have the same color within the same plot. We show graphics for Knee-X, Knee-Y, and Knee-Z.} \label{fig:atletas2} \end{figure} \begin{figure} \subfloat[fig 1]{\includegraphics[width = 2.5in]{KneeXcambios1nuevo}} \subfloat[fig 3]{\includegraphics[width = 2.5in]{KneeXcambios2}} \caption{Biomechanical patterns of two athletes in the HIIT session and continuous running, and the mean functional curve in each run over all runners.} \label{fig:cambios1} \end{figure} \begin{figure} \subfloat[fig 1]{\includegraphics[width = 2.5in]{KneeZcambios1nuevo}} \subfloat[fig 3]{\includegraphics[width = 2.5in]{KneeZcambios2}} \caption{Biomechanical patterns of two athletes in the HIIT session and continuous running, and the mean functional curve in each run over all runners.} \label{fig:cambios3} \end{figure} \section{Discussion} Knee injuries are one of the most frequent problems faced by recreational runners \cite{van2007incidence}. An accurate characterization of the biomechanical changes that occur in typical training sessions can be critical for identifying the etiology of injuries \cite{donoghue2008functional} and for developing predictive models to detect injury risk \cite{ceyssens2019biomechanical}. Here, we have illustrated how to exploit the functional information of different steps during different training sessions with multilevel models to: i) examine the correlation between knee angles and changes in force production in the same training session; ii) measure the reliability between two training sessions; iii) show that there are no statistically significant differences between a continuous run and an interval training session with the same energy expenditure, although remarkable differences exist if we visually analyze some individuals. The complete analysis of each cycle through functional techniques that analyze the curve in its totality has led to more nuanced findings \cite{donoghue2008functional}. Traditional techniques that analyze fixed angles, the average angle, the range of movement or other summary measures entail a loss of information. Complementarily, interesting problems, such as identifying the most informative points of the gait cycle, can be addressed with recent statistical methodologies \cite{berrendero2016variable, poss2020superconsistent}. Functional multilevel models are an essential weapon in the challenge to exploit information from monitoring athletes or patients, and to optimize decision-making using different sources of information and measurements made at different resolution levels. These tools can help integrate and analyze the information jointly, obtain a representation of the individuals along the different levels of the hierarchy, and establish the different forms of variability in the different levels considered. These tools are remarkable if we want to analyze all the training records or physiological variables of a group of athletes over a season or over different micro- and macro-cycles \cite{lambert2010measuring,halson2014monitoring}.
For example, there is not yet a sufficiently good methodology to represent this information intrinsically, as proposed by these models \cite{matabuena2019improved, piatrikova2021monitoring, kalkhoven2021training}. Despite being an exciting research topic with high relevance, we believe that, to date, there are not many methodologies to address relevant problems in biomechanics. For example, a specific need of this field could be to build a multilevel model that considers the different time lengths of the steps, so as not to lose information about the step geometry with the standardization of all the strides to the $[0,1]$ interval. The multilevel models have allowed us to calculate the intraclass correlation coefficient between the two interval training sessions taking into account the $20$ steps recorded in each session. To the best of our knowledge, this is a novel approach in this area, since the traditional approaches previously used to measure reliability rely on the compression of the information into the average curve and only between two conditions \cite{pini2019test}. At the same time, we have introduced a new hypothesis test for the statistical differences between continuous and interval running, taking advantage of the representation we obtained with the multilevel model at the second level of the hierarchy. This also represents an advance since, with the inclusion of the $20$ steps in the model for each test, we have more information, and with the new procedure we can see whether there are statistical differences between the different levels of the hierarchy or groups of patients/athletes, taking into account the differences in the study design. An important aspect to consider in analyzing the results is that the individuals' movement patterns seem unique. This is not new, and several papers have examined the individuality of human walking and running \cite{horst2019explaining}. In this sense, since the biomechanical patterns are probably grouped in clusters \cite{phinyomark2015kinematic, jauhiainen2020hierarchical}, standard hypothesis tests applied to the whole sample are not the best way to establish biomechanical differences. There are some discrepancies between studies when examining these issues. Also, in the biomechanics literature, as in other areas of the biomedical literature, there is some controversy about the use of the p-value \cite{benjamin2018redefine}, and the use of other approaches such as effect sizes \cite{browne2010t} or e-values \cite{vovk2019values} may be recommended. A limitation of this study is the sample size, together with the fact that we are analyzing the biomechanical variations of the knee without taking into account the possible multivariate structure of knee movement. However, given the reduced amount of data, this procedure allows us to gain greater interpretability in this type of study of a more exploratory character. Moreover, this work's main objective is to illustrate the use of multilevel models with biomechanical data. The rise of biosensors \cite{ferber2016gait,phinyomark2018analysis} in the areas of biomechanics and medicine is causing an unprecedented revolution in athlete evaluation and patient care. It is likely that in the coming years many clinical decisions will also be supported by the values predicted by algorithms in many contexts, such as the prediction of injuries \cite{clermont2020runners, van2020real} or optimal surgery recovery \cite{karas2020predicting, kowalski2021recovery}, in both sport and general populations.
Undoubtedly, the introduction of the data analysis techniques discussed here will help practitioners analyze objects that vary over a continuum and are measured repeatedly, which appear more and more frequently in biomedical data \cite{dunn2018wearables}. \section*{Acknowledgements} Marcos Matabuena thanks Ciprian Crainiceanu for their email answers to specific questions related to the functional multilevel model methodology developed by their research group over the last two decades. This work has received financial support from the Spanish Ministry of Science, Innovation and Universities under Grant RTI2018-099646-B-I00, the Consellería de Educación, Universidade e Formación Profesional and the European Regional Development Fund under Grant ED431G-2019/04. \section*{Competing Interests} The authors declare no competing interests. \section*{Ethics Statement} The studies involving human participants were reviewed and approved by Northumbria University. The patients/participants provided their written informed consent to participate in this study. \bibliographystyle{apalike}
\section{System parameters} We give here a brief description of the main experimental parameters and how they are measured. A summary is provided in Tab.~\ref{t:parameters}. We estimate the power spectral density (PSD) from the homodyne photocurrent $i(t)$. The fluctuating mechanical motion appears as a Lorentzian peak around the resonance frequency $\Omega_m/(2\pi)=1.140$~MHz, as shown in Fig.~\ref{fig:SM_Sxx}. \begin{figure}[h] \centering \includegraphics{SI_fig_1_01.pdf} \caption{\textbf{Calibrated displacement spectrum.} Homodyne photocurrent power spectral density, calibrated in displacement units (blue), and a Lorentzian fit (dashed light blue). The shot noise PSD (gray) is acquired when the light from the optomechanical cavity is blocked. The offset in the horizontal axis is the mechanical resonance frequency $\Omega_m/(2\pi)=1.140$~MHz.} \label{fig:SM_Sxx} \end{figure} This PSD is calibrated into displacement units via a phase-modulation technique~\cite{Gorodetsky2010_SI}, combined with the independently measured vacuum optomechanical coupling $g_0/(2\pi)=129$~Hz~\cite{PhysRevLett.123.163601_SI}. From a Lorentzian fit we get the mechanical linewidth $\ensuremath{\Gamma_\mathrm{m}}/(2\pi)=19.0$~Hz and the total, unconditional occupation $\overline{n}_\text{uc}=\ensuremath{V_\mathrm{uc}}-1/2=33$. We infer the multi-photon optomechanical coupling $g/(2\pi) = 40.8$~kHz from the measurement of the output optical power used in the experiment. The quantum backaction rate is $\ensuremath{\Gamma_\mathrm{qba}} = 4 g^2/\kappa = 2\pi\times 360$~Hz. The total detection efficiency, including cavity overcoupling, optical losses from the cavity output to the homodyne detector, photodiode quantum efficiencies and homodyne interference visibility, is $\eta_\text{det}=74\%$. We estimate a measurement rate of $\ensuremath{\Gamma_\mathrm{meas}} = \eta_\text{det} \ensuremath{\Gamma_\mathrm{qba}} = 2\pi\times 266$~Hz. From the unconditional variance $\ensuremath{V_\mathrm{uc}}$ and the classical cooperativity $\mathcal{C}=\ensuremath{\Gamma_\mathrm{qba}}/\ensuremath{\Gamma_\mathrm{m}}$ we estimate the average number of thermal phonons $\ensuremath{\bar n_\mathrm{th}} = \ensuremath{V_\mathrm{uc}} - \mathcal{C}-1/2=14$, consistent with a bath temperature of $T=11$~K when the dynamical backaction from the probe and auxiliary lasers is taken into account. \begin{table}[H] \centering \begin{tabular}{llll} \hline Symbol & Definition & Name & Value\\ \hline $\ensuremath{\Gamma_\mathrm{m}}$ & & Effective mechanical damping rate & $2\pi\times19.0$~Hz \\ $\ensuremath{\bar n_\mathrm{th}}$ & & Effective thermal bath occupation & 14 \\ $\ensuremath{\Gamma_\mathrm{meas}}$ & $\eta_\text{det}\ensuremath{\Gamma_\mathrm{qba}}$ & Measurement rate & $2\pi\times266$~Hz \\ $\mathcal{C}$ & $\ensuremath{\Gamma_\mathrm{qba}}/\ensuremath{\Gamma_\mathrm{m}}$ & Classical cooperativity & 19\\ $\ensuremath{V_\mathrm{uc}}$ & $\ensuremath{\bar n_\mathrm{th}} + \mathcal{C} + \frac{1}{2}$& Unconditional variance & 33.5\\ \end{tabular} \caption{{\bf Summary of the main experimental parameters.}}\label{t:parameters} \end{table}
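The quoted rates are mutually consistent. As a quick numerical cross-check of the relations in Tab.~\ref{t:parameters} (our own few-line script, not part of the data analysis; all rates are entered in units of $2\pi\times$Hz, and the common factor of $2\pi$ cancels in the ratios):
\begin{verbatim}
# Quoted parameters, in units of 2*pi*Hz
Gamma_m   = 19.0     # effective mechanical damping rate
Gamma_qba = 360.0    # quantum backaction rate
eta_det   = 0.74     # total detection efficiency
n_th      = 14       # effective thermal occupation

Gamma_meas = eta_det * Gamma_qba    # measurement rate:        ~ 266
C    = Gamma_qba / Gamma_m          # classical cooperativity: ~ 19
V_uc = n_th + C + 0.5               # unconditional variance:  ~ 33.5
print(Gamma_meas, C, V_uc)
\end{verbatim}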
\section{\label{sec:level1} Irreversible entropy production rate for linear quantum systems} Consider a general Gaussian quantum system subject to continuous measurements. The general form of the master equation describing the Markovian conditional dynamics of the system is given by Eq.~(3) of Ref.~\cite{PhysRevLett.94.070405_SI}. It should be noted that the master equation of the system studied in the main text is exactly of this form. However, for the case in which we are interested, it suffices to consider the less general master equation (see also Ref.~\cite{jacobs2006straightforward_SI}) \begin{equation}\label{SMEgen} d\rho_c=-i[H,\rho_c]dt+\sum_{\ell=1}^L\mathcal{D}[c_\ell]\rho_c dt+\sum_{\ell=1}^L\sqrt{\eta_\ell}\mathcal{H}[c_\ell]\rho_c dW_\ell, \end{equation} where $\rho_{c}$ is the conditional density matrix, the $c_\ell$ are arbitrary bounded operators, $\mathcal{D}[c]\rho=c\rho c^\dag-\{c^\dag c,\rho\}/2$, and $\mathcal{H}[c]\rho=c\rho+\rho c^\dag-\rm{Tr}[\rho (c+c^\dag)]\rho$. Here $L$ is the number of output channels, which can be measured with efficiencies $\eta_\ell$, and the $dW_\ell$ are real, independent Wiener increments. This master equation corresponds to a stochastic Fokker-Planck (sFP) equation in phase-space for the dynamics of the Wigner function $W$ associated with the state density matrix of the system. Given the Gaussian nature of the problem at hand, the dynamics of the system is also fully characterised by the equations for the first two cumulants of the Wigner distribution, \begin{eqnarray} d\ensuremath{\mathbf{r}}(t) &=& \sum_{\ell=1}^L A_\ell\ensuremath{\mathbf{r}}(t) dt +\sum_\ell(V(t)C_\ell^T+\Gamma_\ell^T)d\mathbf{W},\label{r_riccatti}\\ \dot{V}(t) &=&\sum\limits_{\ell=1}^L \left(A_{\ell}V(t)+V(t)A_\ell^T+D_\ell\right) -\sum_\ell\chi_\ell(V(t)).\label{V_riccatti} \end{eqnarray} Here $\ensuremath{\mathbf{r}}(t)$ are the first cumulants, $C_\ell$ and $\Gamma_\ell$ are matrices describing the measurement process, $d\mathbf{W}$ is a vector of independent Wiener increments~\cite{PhysRevLett.94.070405_SI}, $V(t)$ is the quadrature covariance matrix (CM), and the term $\chi_\ell(V(t))$, called the innovation matrix, is a quadratic function of $V$ quantifying the rate at which information about the system is learned through the measurement. The sum in Eqs.~(\ref{r_riccatti})-(\ref{V_riccatti}) refers to the different environments (output channels) acting on the system; splitting the contributions of $A$, $D$ and $\chi$ in this way will be convenient in what follows. Following Ref.~\cite{belenchia2019entropy_SI}, and using the Wigner entropy as entropic measure~\cite{PhysRevLett.118.220601_SI}, the thermodynamics of the system can be characterised by establishing the corresponding entropy production and flux rates. Let us first focus on the unconditional evolution, i.e., when the measurement outcomes are not recorded (equivalently, when the detectors' efficiency vanishes). This is obtained from~\eqref{SMEgen} by setting $\eta_\ell = 0$. In this case, the entropy production and flux rates are deterministic quantities, given by~\cite{PhysRevLett.118.220601_SI} \begin{equation} \Phi_{\rm{uc}} =-2\int d^{2n}\mathbf{x} \sum_\ell J^{T}_{\rm{irr},\ell}D_{\ell}^{-1}A_{\rm{irr},\ell}\mathbf{x},\quad \Pi_{\rm{uc}}=2\int \frac{d^{2n}\mathbf{x}}{W_{\rm{uc}}}\sum_\ell J_{\rm{irr},\ell}^TD^{-1}_\ell J_{\rm{irr},\ell}, \end{equation} where $A_{\rm{irr},\ell}$ is the irreversible part of the drift matrix stemming from the corresponding dissipator $\mathcal{D}[c_\ell]$, and $J_{\rm{irr},\ell}=A_{\rm{irr},\ell}\mathbf{x} W-({D_\ell}/{2})\nabla W$.
More explicitly, \begin{align}\label{ucthermo} & \Phi_{\rm{uc}}=-{\rm{Tr}}[A_{\rm{irr}}]-\sum_\ell\left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}V_{\rm{uc}}]+2\mathbf{r}_{\rm{uc}}^TA_{\rm{irr},\ell}^TD_\ell^{-1}A_{\rm{irr},\ell}\mathbf{r}_{\rm{uc}}\right) \\ &\Pi_{\rm{uc}}=2{\rm{Tr}}[A_{\rm{irr}}]+\sum_\ell \left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}V_{\rm{uc}}]+2\mathbf{r}_{\rm{uc}}^TA_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}\mathbf{r}_{\rm{uc}}\right)+\frac{1}{2}\rm{Tr}[V_{\rm{uc}}^{-1}D], \end{align} where $\mathbf{r}_{\rm{uc}}$ are the first cumulants. It is important to note that the separation of the different terms stemming from the different output channels is crucial to correctly assess the thermodynamics of the system and not overestimate the entropy production (cf.~\cite{PhysRevE.82.011143_SI}). For the conditional evolution, i.e., when the measurement outcomes are recorded, the entropy production and flux rates become stochastic variables at the single-trajectory level. Their expressions are given in Ref.~\cite{belenchia2019entropy_SI} and read: \begin{align} & \phi_{c,\mathbf{r}}=-{\rm{Tr}}[A_{\rm{irr}}]dt-\sum_\ell \left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}V]dt+2\mathbf{r}^TA_{\rm{irr},\ell}^TD_\ell^{-1}A_{\rm{irr},\ell}\mathbf{r}dt\right)\\ & \pi_{c,\mathbf{r}}=2{\rm{Tr}}[A_{\rm{irr}}]dt+\sum_\ell \left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD_\ell^{-1}A_{\rm{irr},\ell}V]dt+2\mathbf{r}^TA_{\rm{irr},\ell}^TD_\ell^{-1}A_{\rm{irr},\ell}\mathbf{r}dt\right)+\frac{1}{2}\rm{Tr}[V^{-1}(D-\chi)]dt, \end{align} where $\mathbf{r}$ is the conditional first moment. All the stochasticity in the above expressions is encoded in $\mathbf{r}$. Averaging over all the trajectories in phase-space, we obtain deterministic entropy production and flux rates~\cite{belenchia2019entropy_SI} \begin{IEEEeqnarray}{rCl} \Phi_{\rm{c}} &=&-{\rm{Tr}}[A_{\rm{irr}}]-\sum_\ell\left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}V_{\rm{uc}}]+2\mathbf{r}_{\rm{uc}}^TA_{\rm{irr},\ell}^TD_\ell^{-1}A_{\rm{irr},\ell}\mathbf{r}_{\rm{uc}}\right) \\[0.2cm] &=& \Phi_{\rm{uc}}, \nonumber\\[0.2cm] \Pi_{\rm{c}} &=& 2{\rm{Tr}}[A_{\rm{irr}}]+\frac{1}{2}\rm{Tr}[V_{\rm{uc}}^{-1}D]+\frac{1}{2}{\rm{Tr}}[(V^{-1}-V_{\rm{uc}}^{-1})D-V^{-1}\chi] +\sum_\ell \left(2{\rm{Tr}}[A_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}V_{\rm{uc}}]+2\mathbf{r}_{\rm{uc}}^TA_{\rm{irr},\ell}^TD^{-1}_\ell A_{\rm{irr},\ell}\mathbf{r}_{\rm{uc}}\right) \nonumber\\[0.2cm] &=&\Pi_{\rm{uc}}+\dot{\mathcal{I}}, \end{IEEEeqnarray} where \begin{equation}\label{SM_Idot} \dot{\mathcal{I}}=\frac{1}{2}\rm{Tr}[V^{-1}(D-\chi(V))-V_{\rm{uc}}^{-1}D]. \end{equation} As can be seen, the conditional entropy flux coincides, on average, with the unconditional one. For the entropy production, on the other hand, this is not the case: the mismatch between the two is given by the net information gain rate $\dot{\mathcal{I}}$. \section{\label{sec:level2} Optomechanical cavity system} In this section, we apply the above ideas to the specific experiment reported in the main text. In particular, we recognise the steady-state of the unconditional dynamics as a non-equilibrium steady-state (NESS) and derive the expressions for the unconditional and conditional entropy production and flux rates.
The experimental system considered in the main text is a cavity optomechanical system working in a range of parameters in which the adiabatic elimination of the cavity field gives a good description of the dynamics of the mechanical system, i.e., in the bad-cavity, weak-coupling limit $\kappa\gg\Omega_m\gg g$, where $\kappa$ is the cavity linewidth, $\Omega_m$ is the mechanical resonance frequency and $g$ the linearised, multi-photon optomechanical coupling. \subsection{\label{sec:level2a} Master equation in the adiabatic approximation and non-equilibrium steady-state} As detailed in~\cite{hofer2017quantum_SI}, by adiabatically eliminating the cavity mode, the dynamics of the system can be enormously simplified, leading to a master equation of the form \begin{equation}\label{adlimit} \dot{\rho}_m=-i\left[\delta\Omega_m c^\dag c,\rho_m\right]+\left(\Gamma_m (\bar{n}_{\rm{th}}+1)+\Gamma_-\right)\mathcal{D}[c]\rho_m+\left(\Gamma_m \bar{n}_{\rm{th}}+\Gamma_+\right)\mathcal{D}[c^\dag]\rho_m, \end{equation} where $\rho_m$ represents the state of the sole mechanical mode and we have introduced the following quantities: \begin{align} &\delta\Omega_m=g^2 \rm{Im}[\eta_- +\eta_+]\\ &\eta_\pm=\frac{1}{\kappa/2+i(-\Delta\pm\Omega_m)}\\ &\Gamma_\pm=2g^2\rm{Re}[\eta_\pm], \end{align} with $\Delta$ the effective laser detuning. The master equation of the system studied in the main text is obtained in the limit of vanishing detuning; introducing also the stochastic terms due to the continuous monitoring, it is exactly of the form of Eq.~\eqref{SMEgen}. We report it here for later convenience: \begin{equation}\label{SMEad} d\rho_c=\mathcal{L}_{\rm{th}}\rho_c dt +\Gamma_{\rm{qba}} \left(\mathcal{D}(c)+\mathcal{D}(c^\dag)\right)\rho_c dt+\sqrt{\eta\Gamma_{\rm{qba}}}\left(\mathcal{H}(X)\rho_c dW_X+\mathcal{H}(Y)\rho_c dW_Y\right), \end{equation} where $\mathcal{L}_{\rm th}\rho=\Gamma_m (\bar{n}_{\rm{th}}+1)\mathcal{D}[c]\rho+\Gamma_m \bar{n}_{\rm{th}}\mathcal{D}[c^\dag]\rho$ represents the contact with the mechanical phonon thermal bath, which is not subject to continuous monitoring. This equation was first introduced in~\cite{PhysRevLett.107.213603_SI} and then formally derived in~\cite{doherty2012quantum_SI}. Given the master equation, and the initial condition of the dynamics described in the main text, the first and second cumulants of the quadratures of the mechanical oscillator evolve according to \begin{eqnarray}\label{e:sme_moments} d\ensuremath{\mathbf{r}}(t) &=& -\frac{\ensuremath{\Gamma_\mathrm{m}}}{2}\ensuremath{\mathbf{r}}(t) dt + \sqrt{4\eta_{\rm{det}}\Gamma_{\rm{qba}}} V(t) \ensuremath{d\mathbf{W}},\\ \dot{V}(t) &=& -\ensuremath{\Gamma_\mathrm{m}} V(t) + \ensuremath{\Gamma_\mathrm{m}} (\bar{n}_{\rm{th}}+1/2)+\Gamma_{\rm{qba}} - 4\eta_{\rm{det}}\Gamma_{\rm{qba}} V(t)^2,\label{e:V_c_dynamics} \end{eqnarray} where $\ensuremath{d\mathbf{W}}=(dW_X,dW_Y)^T$ and the covariance matrix is always diagonal, $\mathbf{V}(t)=V(t)\mathbbm{1}$. Let us focus for a moment on the unconditional dynamics. In the experimental realisation discussed in the main text, the initial state of the mechanical oscillator corresponds to the steady-state of the unconditional dynamics, characterised by $\mathbf{r}_\mathrm{uc} = \mathbf{0}$ and $\ensuremath{V_\mathrm{uc}} =( \ensuremath{\bar n_\mathrm{th}} + 1/2 + \ensuremath{\Gamma_\mathrm{qba}}/\ensuremath{\Gamma_\mathrm{m}})\mathbbm{1}$.
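Equation~\eqref{e:V_c_dynamics} is a scalar Riccati equation that can be integrated directly. The following minimal sketch (our own illustration in Python, not part of the experimental analysis; the rates are entered in units of $2\pi\times$Hz, and the overall factor of $2\pi$ only rescales time, leaving the fixed points unchanged) propagates $V(t)$ from the unconditional steady state and compares the result with the closed-form conditional steady state:
\begin{verbatim}
import numpy as np

# Rates in units of 2*pi*Hz (values from the parameter table above)
Gm, Gq, eta, nth = 19.0, 360.0, 0.74, 14.0

def Vdot(V):
    # Right-hand side of the deterministic Riccati equation for V(t)
    return -Gm * V + Gm * (nth + 0.5) + Gq - 4 * eta * Gq * V**2

V = nth + 0.5 + Gq / Gm        # start from the unconditional variance V_uc
dt = 1e-6
for _ in range(100_000):       # simple Euler integration
    V += Vdot(V) * dt

# Closed-form conditional steady state: positive root of Vdot(V) = 0
a, b, c = 4 * eta * Gq, Gm, -(Gm * (nth + 0.5) + Gq)
V_ss = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
print(V, V_ss)                 # both give the same conditional variance
\end{verbatim}
The conditional variance settles well below $\ensuremath{V_\mathrm{uc}}$, reflecting the information gained through the continuous measurement.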
It should be noted here that, in the unconditional steady-state, the system has thermalised at an effective temperature different from that of the phononic bath. Crucially, this is not an equilibrium state, due to the fact that the mechanical phonon bath ($\mathcal{L}_{\rm{th}}$) and the optical mode ($\mathcal{L}_{\rm{qba}}$) represent two different baths giving rise to a steady energy current. Indeed, consider the energy fluxes to each bath at the unconditional steady-state. These can be obtained by looking at \begin{align} \frac{d}{dt}\langle c^\dag c\rangle&={\rm{Tr}}[ \mathcal{L}_{\rm{th}}c^\dag c]+{\rm{Tr}}[ \mathcal{L}_{\rm{qba}}c^\dag c]=-\Gamma_m (\langle c^\dag c\rangle-\bar{n}_{\rm{th}})+\Gamma_{\rm{qba}}. \end{align} While it is easy to show that at the unconditional steady-state the two energy fluxes cancel each other, the fact that they are not individually zero tells us that there is a steady current, with the energy injected by the optical measurement back-action flowing through the mechanical mode into the thermal phonon bath, i.e., the unconditional steady-state is a non-equilibrium steady-state (NESS). This implies that for the unconditional dynamics $\Phi_{\rm{uc}}=-\Pi_{\rm{uc}}\neq 0,\,\,\forall t$. \subsection{\label{sec:level2b} Irreversible entropy production} In order to compute the non-zero unconditional entropy rates, we need to resort to Eq.~\eqref{ucthermo} and consider the contributions of the three output channels related to the phonon bath and the two independent Wiener increments ($dW_X,\,dW_Y$) in the master equation. This is justified by the fact that, while the phonon thermal bath is not measured, the optical bath is probed via two distinct output channels characterised by independent Wiener increments. However, care has to be taken when considering the master equation in~\eqref{SMEad} in conjunction with Eq.~\eqref{ucthermo}. Indeed, it is easy to realise that a na\"ive use of such equations leads to a non-zero entropy flux to the thermal phonon bath but to a vanishing one to the optical bath. However, we have already seen that there is a non-vanishing {\it energy} flux associated with the optical bath, to which there must correspond a non-vanishing entropy flux and an associated entropy production term. This discrepancy can be traced back to the singular character of some of the matrices entering Eq.~\eqref{ucthermo} and is linked to the fact that the adiabatic elimination of the cavity field entails a loss of information on the details of the process undergone by the system-bath compound. While this is not critical when evaluating properties of the state of the mechanical system, the evaluation of thermodynamic quantities --- which are instead crucially dependent on the process itself --- should be done {\it cum grano salis}. There are two alternative ways to recover a consistent result. On the one hand, it is possible to work at the level of Eq.~\eqref{SMEad} but consider the full system-plus-environment interaction and evolution in a microscopic collisional model picture. The collisional model allows one to derive, from a microscopic perspective, a non-null entropy flux with the optical bath even after the adiabatic elimination and the RWA have been performed. This points, again, toward the fact that knowledge of the sole master equation, while sufficient to completely characterise the dynamics of the system, does not in general capture the thermodynamics of the problem entirely~\cite{De_Chiara_2018_SI}.
A full derivation of the fluxes from a microscopic collisional model is beyond the scope of this work and will be discussed in detail in an upcoming work~\cite{CMarticle_SI}. On the other hand, the expressions for the needed entropy flux and production rates can also be derived by considering the full composite system of cavity plus mechanical mode before the adiabatic elimination of the cavity mode and the RWA. In this way, it is possible to correctly obtain the entropy flux associated with the optical bath --- responsible for the term $\Gamma_{\rm{qba}}(\mathcal{D}[c]+\mathcal{D}[c^\dag])$ in the master equation after the adiabatic elimination --- which is, indeed, not vanishing even in the bad-cavity limit. \subsubsection{Full optomechanical system} The master equation describing the unconditional dynamics of the composite mechanical-optical system (we refer to~\cite{hofer2017quantum_SI} for further details) is given by \begin{equation}\label{fullME} \dot{\rho}=-i\left[H_{\rm{lin}},\rho\right]+\kappa\mathcal{D}[a]\rho+\Gamma_m (\bar{n}_{\rm{th}}+1)\mathcal{D}[c]\rho+\Gamma_m \bar{n}_{\rm{th}}\mathcal{D}[c^\dag]\rho, \end{equation} where $\rho$ now represents the state of the composite system, $H_{\rm{lin}}=\Omega_m c^\dag c+g(a+a^\dag)(c+c^\dag)-\Delta a^\dag a$ is the linearised optomechanical Hamiltonian with detuning $\Delta$ and multi-photon optomechanical coupling $g$, and $a,a^\dag$ are the annihilation and creation operators of the cavity optical mode. Moreover, in line with the notation in the main text, $\kappa$ is the linewidth of the optical cavity, $\Gamma_m$ is the mechanical damping rate, and $\bar{n}_{\rm{th}}$ is the mean number of excitations in the thermal phononic bath in contact with the mechanical mode. The output mode of the cavity is measured with a balanced homodyne detection scheme in order to gather information on the position of the mechanical mode. This corresponds to the stochastic master equation \begin{equation}\label{fullSME} d{\rho}=-i\left[H_{\rm{lin}},\rho\right]dt+\kappa\mathcal{D}[a]\rho dt+\Gamma_m (\bar{n}_{\rm{th}}+1)\mathcal{D}[c]\rho dt+\Gamma_m \bar{n}_{\rm{th}}\mathcal{D}[c^\dag]\rho dt+\sqrt{\eta_{\rm{det}} \kappa}\mathcal{H}[-i a]\rho dW, \end{equation} where $\mathcal{H}[A]\rho=A\rho+\rho A^\dag -{\rm{Tr}}[A\rho+\rho A^\dag]\rho$, $\eta_{\rm{det}}$ is the efficiency of the detector, and $dW$ is a real Wiener increment. The equivalent description of the system in terms of first and second cumulants of the Wigner distribution in phase space is obtained by using the following matrices [cf.
Eqs.~\eqref{r_riccatti}--\eqref{V_riccatti}]: \begin{align} A_{H_{\rm{lin}}}&=\begin{pmatrix} 0 & \Omega_m & 0 & 0\\ -\Omega_m & 0 & -2g & 0\\ 0 & 0 & 0 & -\Delta\\ -2g & 0 & \Delta & 0 \end{pmatrix} \label{matrices1} \\ A_{\rm{irr,th}}&=\begin{pmatrix} -\Gamma_m/2 & 0 & 0 & 0\\ 0 & -\Gamma_m/2 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} && A_{\rm{irr,opt}}=\begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & -\kappa/2 & 0\\ 0 & 0 & 0 & -\kappa/2 \end{pmatrix} \label{matrices2} \\ D_{\rm{th}}&=\begin{pmatrix} \Gamma_m(\bar{n}_{\rm{th}}+1/2) & 0 & 0 & 0\\ 0 & \Gamma_m(\bar{n}_{\rm{th}}+1/2) & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} && D_{\rm{opt}}=\begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & \kappa/2 & 0\\ 0 & 0 & 0 & \kappa/2 \end{pmatrix} \label{matrices3} \\ C&=\begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \sqrt{2\kappa\eta_{\rm{det}}} \end{pmatrix} && \Gamma=\begin{pmatrix} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\sqrt{\kappa\eta_{\rm{det}}/2} \end{pmatrix}.\label{matrices4} \end{align} Given these expressions, in conjunction with Eqs.~\eqref{ucthermo}, the expressions for the unconditional entropy flux and production rates in the bad-cavity limit, i.e., for the system of interest as described by Eq.~\eqref{SMEad}, can be easily determined. For the entropy fluxes of the $X$ and $Y$ channels we have \begin{equation} \Phi_{\rm{uc},X}=\Phi_{\rm{uc},Y}=-2 V_{\rm{uc}} \Gamma_{\rm{qba}}, \end{equation} where we have used the fact that, in the case of interest, the unconditional first moments vanish and that the unconditional steady-state of the full dynamics has a covariance matrix proportional to the identity, $V_{\rm{uc},11}=V_{\rm{uc},22}\equiv V_{\rm{uc}}$ and $V_{\rm{uc},12}=0$. The corresponding contributions to the unconditional entropy production rate are instead given by \begin{equation} \Pi_{\rm{uc},X}=\Pi_{\rm{uc},Y}= \Gamma_{\rm{qba}}\left(2 V_{\rm{uc}}+\frac{1}{2V_{\rm{uc}}}\right). \end{equation} Summing up these rates with the ones due to the thermal phonon bath, the final result for the case of interest in the main text is \begin{align} \Phi_{\rm{uc}}&=\Phi_{\rm{uc, th}}+\Phi_{\rm{uc},X}+\Phi_{\rm{uc},Y}\\ &=\Gamma_m-\frac{V_{\rm{uc}}}{\bar{n}_{\rm{th}}+1/2}\Gamma_{m}-4 V_{\rm{uc}} \Gamma_{\rm{qba}}, \end{align} and \begin{align} \Pi_{\rm{uc}}&=\Pi_{\rm{uc, th}}+\Pi_{\rm{uc},X}+\Pi_{\rm{uc},Y}\\ &=-2\Gamma_m+\frac{V_{\rm{uc}}}{\bar{n}_{\rm{th}}+1/2}\Gamma_{m}+\frac{(\bar{n}_{\rm{th}}+1/2)\Gamma_m}{V_{\rm{uc}}}+4 V_{\rm{uc}} \Gamma_{\rm{qba}}+\frac{\Gamma_{\rm{qba}}}{V_{\rm{uc}}}. \end{align} As a sanity check, using the form of the unconditional steady-state we find $\Phi_{\rm{uc}}=-\Pi_{\rm{uc}}$, as indeed expected. \subsubsection{Information gain} For what concerns the conditional dynamics, we have already seen that $\Pi_{\rm{c}}=\Pi_{\rm{uc}}+\dot{\mathcal{I}}$ and $\Phi_{\rm{c}}=\Phi_{\rm{uc}}$, so that we only need to derive the net information gain $\dot{\mathcal{I}}$ in Eq.~\eqref{SM_Idot}.
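Before doing so, we note that the NESS sanity check $\Phi_{\rm uc}=-\Pi_{\rm uc}$ above is easy to verify numerically; a minimal sketch, with illustrative parameter values of our choosing:
\begin{verbatim}
G_m, G_qba, n_th = 1.0, 10.0, 10.0           # illustrative values only
V_uc = n_th + 0.5 + G_qba / G_m              # unconditional steady state
Phi = G_m - G_m*V_uc/(n_th + 0.5) - 4*V_uc*G_qba
Pi = (-2*G_m + G_m*V_uc/(n_th + 0.5) + G_m*(n_th + 0.5)/V_uc
      + 4*V_uc*G_qba + G_qba/V_uc)
assert abs(Phi + Pi) < 1e-9                  # Phi_uc = -Pi_uc at the NESS
\end{verbatim}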
Considering that the covariance matrix of the conditioned dynamics remains proportional to the identity at all times, we find \begin{align} \dot{\mathcal{I}}&=\frac{1}{2}{\rm{Tr}}[V^{-1}(D-\chi(V))-V_{\rm{uc}}^{-1}D]\\ &=\frac{\ensuremath{\Gamma_\mathrm{m}} \ensuremath{V_\mathrm{uc}} - 4\eta_{\rm{det}}\Gamma_{\rm{qba}} V(t)^2}{V(t)}-\ensuremath{\Gamma_\mathrm{m}}. \end{align} Finally, let us stress once again that $\dot{\mathcal{I}}$ is minus the rate at which information is acquired by performing the measurement and tracking the outcomes~\cite{belenchia2019entropy_SI}. This quantity is, by definition, zero at the steady-state of the conditional evolution. This is intuitive since, at the conditional steady-state, the continuous measurement is not adding any more information with respect to what was acquired up to that point. However, it is crucial to note that the measurement process is still necessary to maintain the conditional steady-state: turning off the detector would drive the system back to the unconditional steady-state. This fact is encoded in the innovation matrix $\chi(V)$ and the corresponding term \begin{equation} \mathcal{G}(t) := -{\rm{Tr}}[V^{-1}\chi(V)]/2 \end{equation} in $\dot{\mathcal{I}}$ [Eq.~\eqref{SM_Idot}]. This term can be interpreted as the information gained by acquiring the measurement result at time $t$ with respect to the knowledge given by the outcome record up to time $t-dt$. This is what is called the \textit{differential} gain of information in the main text. This quantity does not vanish at steady-state: it represents the ongoing effort of the continuous measurement to keep the system in the conditional steady-state~\cite{CMarticle_SI}. \section{Estimation of the dynamical covariance matrix} Here we calculate the variance of the difference trajectory, $\mathbf{d}(t) = \ensuremath{\mathbf{r}}(t)-\ensuremath{\mathbf{r}_b}(t)$, generalising the calculation of Ref.~\cite{PhysRevLett.123.163601_SI} to the case of a time-dependent predicted trajectory $\ensuremath{\mathbf{r}}(t)$. Experimentally, we can extract from the measurement the predicted and retrodicted trajectories, $\ensuremath{\mathbf{r}}(t)$ and $\ensuremath{\mathbf{r}_b}(t)$ respectively~\cite{PhysRevLett.123.163601_SI}. The variance of the ensemble of difference trajectories $\mathbf{d}(t)$ is \begin{eqnarray}\label{e:V_D_formal} \ensuremath{V_\mathrm{d}}(t) \equiv \mathbb{E}\left[(\ensuremath{\mathbf{r}}(t) - \ensuremath{\mathbf{r}_b}(t))^2\right] = \mathbb{E}\left[\ensuremath{\mathbf{r}}(t)^2\right] + \mathbb{E}\left[\ensuremath{\mathbf{r}_b}(t)^2\right] - 2\mathbb{E}\left[\ensuremath{\mathbf{r}}(t)\ensuremath{\mathbf{r}_b}(t)\right]. \end{eqnarray} The formal solutions for the forward and backward trajectories are \begin{subequations} \begin{align} \ensuremath{\mathbf{r}}(t) &= \sqrt{4\ensuremath{\Gamma_\mathrm{meas}}}\int_0^t V(s) e^{-\frac{\ensuremath{\Gamma_\mathrm{m}}}{2}(t-s)}\ensuremath{d\mathbf{W}}(s),\\ \ensuremath{\mathbf{r}_b}(t) &= 4\ensuremath{\Gamma_\mathrm{meas}} V_E \int_t^\infty e^{\lambda(t-s)}\ensuremath{\mathbf{r}}(s) ds + \sqrt{4\ensuremath{\Gamma_\mathrm{meas}}} V_E \int_t^\infty e^{\lambda(t-s)}\ensuremath{d\mathbf{W}} (s), \end{align} \end{subequations} where $\ensuremath{\Gamma_\mathrm{meas}}= \eta_{\rm{det}} \Gamma_\text{qba}$, $\lambda = 4\ensuremath{\Gamma_\mathrm{meas}} V_E - \ensuremath{\Gamma_\mathrm{m}}/2$ and $V_E = V + \ensuremath{\Gamma_\mathrm{m}}/(4\ensuremath{\Gamma_\mathrm{meas}})$.
For the forward trajectory, the initial conditions are $\ensuremath{\mathbf{r}}(0) = 0$ and $V(0)=\ensuremath{V_\mathrm{uc}}$. For the backward trajectory, we assume that the initial conditions lie so far in the future that, at time $t$, they have been forgotten. This also implies that the backward conditional variance $V_E$, at time $t$, has reached its steady state. We now move to calculating the three terms in Eq.~\eqref{e:V_D_formal}, considering the general case where the conditional variance $V(t)$ is not at the steady state. This generalises the case considered in~\cite{PhysRevLett.123.163601_SI}. For the first term, we start by calculating the two-time correlator of the forward trajectory, \begin{eqnarray}\label{e:corr_rfw} \mathbb{E}\left[\ensuremath{\mathbf{r}}(t)\ensuremath{\mathbf{r}}(\tau)\right] = 4\ensuremath{\Gamma_\mathrm{meas}} e^{-\frac{\ensuremath{\Gamma_\mathrm{m}}}{2}(t+\tau)} \int_0^{\mathrm{min}(t,\tau)} e^{\ensuremath{\Gamma_\mathrm{m}} s} V(s)^2 ds. \end{eqnarray} We can compute the first term by taking $t=\tau$; then \begin{eqnarray}\label{e:var_rfw} \mathbb{E}\left[\ensuremath{\mathbf{r}}(t)^2\right] = 4\ensuremath{\Gamma_\mathrm{meas}} e^{-\ensuremath{\Gamma_\mathrm{m}} t} \int_0^t e^{\ensuremath{\Gamma_\mathrm{m}} s} V(s)^2 ds. \end{eqnarray} The second term has already been calculated in~\cite{PhysRevLett.123.163601_SI}, and is given by \begin{eqnarray}\label{e:var_rbw} \mathbb{E}\left[\ensuremath{\mathbf{r}_b}(t)^2\right] = 4\ensuremath{\Gamma_\mathrm{meas}} V_E^2/\ensuremath{\Gamma_\mathrm{m}} = V_E + \ensuremath{V_\mathrm{uc}}. \end{eqnarray} Finally, for the third term, we have \begin{eqnarray}\label{e:cross_corr_rfw_rbw} \mathbb{E}\left[\ensuremath{\mathbf{r}}(t)\ensuremath{\mathbf{r}_b}(t)\right] = 4\ensuremath{\Gamma_\mathrm{meas}} V_E \int_t^{\infty} e^{\lambda (t-s)} \mathbb{E}\left[\ensuremath{\mathbf{r}}(s)\ensuremath{\mathbf{r}}(t)\right] ds = 4\ensuremath{\Gamma_\mathrm{meas}} e^{-\ensuremath{\Gamma_\mathrm{m}} t} \int_0^t e^{\ensuremath{\Gamma_\mathrm{m}} s} V(s)^2 ds, \end{eqnarray} where we have used Eq.~\eqref{e:corr_rfw} and the property that Wiener increments in different time intervals are uncorrelated. The integrals in Eqs.~\eqref{e:var_rfw} and \eqref{e:cross_corr_rfw_rbw} can be simplified by formally integrating Eq.~\eqref{e:V_c_dynamics}, from which \begin{eqnarray}\label{e:int_from_V_c_dynamics} 4\ensuremath{\Gamma_\mathrm{meas}}\int_0^t e^{\ensuremath{\Gamma_\mathrm{m}} (s-t)} V(s)^2 ds = \ensuremath{V_\mathrm{uc}} - V(t). \end{eqnarray} Combining Eqs.~\eqref{e:V_D_formal}, \eqref{e:var_rbw}, \eqref{e:var_rfw}, \eqref{e:cross_corr_rfw_rbw} and \eqref{e:int_from_V_c_dynamics} we obtain \begin{eqnarray}\label{e:Vd_final} \ensuremath{V_\mathrm{d}}(t) = V(t) + V_E = V(t) + V + \ensuremath{\Gamma_\mathrm{m}}/(4\ensuremath{\Gamma_\mathrm{meas}}). \end{eqnarray} The conditional variance $V(t)$ can then be estimated from the experimentally accessible variance $\ensuremath{V_\mathrm{d}}(t)$, after the small offset $V + \ensuremath{\Gamma_\mathrm{m}}/(4\ensuremath{\Gamma_\mathrm{meas}})$ is subtracted. In particular, in the large-cooperativity ($\ensuremath{\Gamma_\mathrm{qba}}\gg\ensuremath{\Gamma_\mathrm{m}}$) and large detection efficiency ($\eta_{\rm{det}}\approx 1$) limits, relevant to the experiment described in the main text, the offset $V + \ensuremath{\Gamma_\mathrm{m}}/(4\ensuremath{\Gamma_\mathrm{meas}})\approx V$, where $V$ is the steady-state value of the conditional variance $V(t)$.
Then, from Eq.~\eqref{e:Vd_final}, we have $V(t) \approx \ensuremath{V_\mathrm{d}}(t) - V$, where $V$ can be estimated from the steady state of the experimental $\ensuremath{V_\mathrm{d}}(t)$ using the fact that $\ensuremath{V_\mathrm{d}}(\infty)\approx 2V$ (see~\cite{PhysRevLett.123.163601_SI} for the derivation of this result).
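A minimal sketch of this estimation procedure (the array layout and the plateau-averaging window are our assumptions) reads:
\begin{verbatim}
import numpy as np

def estimate_conditional_variance(V_d, G_m, G_meas):
    # Invert Eq. (e:Vd_final): V(t) = V_d(t) - V_E, with
    # V_E = V + G_m/(4 G_meas); V is estimated from the long-time
    # plateau, V_d(inf) ~ 2V in the high-cooperativity limit.
    V_ss = 0.5 * np.mean(V_d[-100:])       # plateau estimate of V
    return V_d - (V_ss + G_m / (4.0 * G_meas))
\end{verbatim}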
\section{Introduction} The first direct detection of gravitational waves from the merger of binary black holes,\cite{LIGO col} one of the most important scientific discoveries of the 21st century, opens a new gravitational-wave window to explore the early history of the universe far before the last scattering epoch and, hopefully, around the origin of the universe. The tensorial part of spacetime fluctuations gives rise to gravitational waves, whose observation will reconstruct the spacetime and its evolution history. Evolving the universe backward in time, the singularity theorem implies the Big Bang, which belongs to a quantum gravity regime.\cite{hawking-penrose} The inflationary spacetime, the most viable cosmological model, would still have the singularity.\cite{BGV} The singularity theorem may raise a few fundamental questions in general relativity: what is the quantum spacetime and geometry including the Big Bang? how does one quantize the spacetime as well as matter fields, that is, what is quantum gravity and quantum cosmology? how do a classical universe and the unitary quantum field theory emerge from quantum cosmology? Quantum cosmology is a quantum gravity theory for the universe, which quantizes simultaneously the spacetime geometry and matter fields. Two typical approaches to quantum cosmology are quantum geometrodynamics based on the Wheeler-DeWitt (WDW) equation\cite{dewitt67} and the no-boundary wave function of Hartle and Hawking (HH), a path integral over spacetime manifolds and matter fields.\cite{hartle-hawking,hawking84} In these proceedings we will not consider loop quantum gravity and other quantum gravity models (for a review and references, see Ref.~\refcite{coule}). In quantum geometrodynamics the WDW equation is the relativistic field equation in a superspace of the spacetime geometry and the matter fields, in which both diffeomorphism-invariant spacetime variables and matter fields are quantized. Quantum geometrodynamics has the advantage of predicting quantum gravity effects that can be tested against current observational data from classical cosmology, such as the CMB, since, as summarized in Fig.~\ref{QSC}, the standard cosmology can be derived from the semiclassical quantum cosmology, which in turn can be derived from the WDW equation. At each stage of the transition from quantum to semiclassical and then to classical gravity, spacetime and matter fluctuations leave imprints of quantum gravity effects, which differ from those of quantum field theory in curved spacetime. \begin{figure}[pb] \centerline{\includegraphics[width=7.0cm]{QG-SG-CG.eps}} \vspace*{8pt} \caption{Emergence of classical gravity from quantum gravity and quantum effects. \label{QSC}} \end{figure} On the other hand, the HH no-boundary wave function sums over all compact four-dimensional Euclidean geometries and matter fields with a three-dimensional boundary to a Lorentzian geometry.\cite{hartle-hawking,hawking84} The HH wave function has the advantage of incorporating a boundary condition (initial condition) in addition to the quantum law. In fact, the path integral is peaked around the solutions of the WDW equation at the tree level.
From the viewpoint of the standard cosmology, Page summarized the predictions of the HH wave function:\cite{page06} inflation of the universe to large size,\cite{hawking84} prediction of the near-critical density,\cite{hawking84,hawking-page} inhomogeneities starting in ground states,\cite{halliwell-hawking} an arrow of time and initial low entropy,\cite{hawking85,page85,HLL,kim-kim94,kim-kim95} and decoherence and classicality of the universe.\cite{kiefer87,kim92} Starobinsky argued that the inflation scenario relates quantum gravity and quantum cosmology to astronomical observations and produces a (non-universal) arrow of time for our universe.\cite{starobinsky} In this paper, we review the quantum cosmology of the Friedmann-Robertson-Walker (FRW) universe minimally coupled to a massive scalar field and argue that quantum gravity effects may resurrect the chaotic inflation model with a massive scalar field. The recent Planck data rules out the single-field chaotic model with power greater than one, including the massive scalar.\cite{Planck2013} The Starobinsky inflation model of $R+ \alpha R^2$, however, which leads to a de Sitter-type acceleration without an inflaton, is the most favored by the Planck data.\cite{starobinsky80} As noted by Starobinsky, $R^2$ comes from spacetime fluctuations due to quantum matter. It has been noticed that the $R^2$-term is equivalent to a scalar field under a conformal transformation, $\tilde{g}_{\mu \nu} = (1+ 2 \alpha R) g_{\mu \nu}$ and $\Psi = \sqrt{3/2} \ln (1+ 2 \alpha R)$.\cite{whitt84,maeda88} Further, as shown in Fig.~\ref{QSC}, the quantum gravity effects from quantum cosmology, which include both quantum corrections from spacetime and the expectation value of the energy-momentum stress tensor, differ from those of quantum field theory in a fixed curved spacetime. It is thus interesting and timely to revisit the quantum cosmology with a massive scalar field and to investigate the quantum gravity effects. \section{Why Massive Scalar Quantum Cosmology?} \label{mass quan cos} The FRW universe with a scale factor $a = e^{\alpha}$ and an inflaton $\phi$ has a superspace with the supermetric \begin{eqnarray} ds^2 = - da^2 + a^2 d\phi^2 \label{sup met} \end{eqnarray} and leads to the super-Hamiltonian constraint \begin{eqnarray} H(a, \phi) = - \Bigl( \pi_a^2 + V_{\rm G} (a) \Bigr) + \frac{1}{a^2} \Bigl( \pi_{\phi}^2 + 2 a^6 V (\phi) \Bigr) = 0. \end{eqnarray} Then, the WDW equation takes the form (see, for instance, Ref.~\refcite{kim92}; for a recent review and references see Refs.~\refcite{kim13,kim14}) \begin{eqnarray} \Bigl[ - \nabla^2 - V_{\rm G} +2 a^4 V(\phi) \Bigr] \Psi (a, \phi) = 0, \label{wdw eq} \end{eqnarray} where \begin{eqnarray} \nabla^2 = - a^{-1} \frac{\partial}{\partial a} \Bigl(a \frac{\partial}{\partial a} \Bigr) + \frac{1}{a^2} \frac{\partial^2}{\partial \phi^2}, \quad V_{\rm G} (a) = k a^2 - 2 \Lambda a^4. \end{eqnarray} Note that the WDW equation becomes a relativistic wave equation in the superspace, which is generically true for any spacetime with more than two degrees of freedom or with a matter field. To compare the predictions of quantum cosmology with the inflation scenario and current observational data, we may introduce perturbations of the spacetime and/or matter.
The Fourier modes $f_{k}$ of the $\phi$-fluctuations, for instance, have the wave function $\Psi (\alpha, \phi, f_{k})$.\cite{halliwell-hawking} We assume the $\phi$-derivatives to be much smaller than the $\alpha$-derivatives, which corresponds to a slow-roll approximation in the inflation scenario. In the geometry belonging to a classical regime, the wave function of the WDW equation becomes a wave packet and is peaked around a semiclassical trajectory, along which we may apply the Born-Oppenheimer interpretation.\cite{kim95,kim96,BFV,kim97} Recently, Kiefer and Kr\"{a}mer have found the power spectrum corrected by quantum cosmology\cite{kiefer-kramer,kk-essay} \begin{eqnarray} \Delta_{(1)}^2 (k) = \underbrace{ \Delta_{(0)}^2 (k)}_{\rm classical~cosmology} \underbrace{\Bigl(1 - \frac{43.56}{k^3} \frac{H^2}{m_P^2} \Bigr)^{-3/2} \Bigl(1 - \frac{189.18}{k^3} \frac{H^2}{m_P^2} \Bigr)}_{\rm correction~from~quantum~cosmology}, \label{kk sp} \end{eqnarray} where $\Delta_{(0)}^2(k)$ is the spectrum from the classical theory. Note that the power spectrum (\ref{kk sp}) is suppressed at large scales and gives a weaker upper bound than the tensor-to-scalar ratio. From the viewpoint of density perturbations, there is a formulation of gauge invariant perturbations in quantum cosmology. Choosing gauge invariant perturbations is equivalent to selecting the diffeomorphism invariant variables for the superspace. The gauge invariant perturbations have an advantage in interpreting the observational data in cosmology. Recently, the gauge invariant super-Hamiltonian and super-momenta constraints have been introduced in terms of the Mukhanov-Sasaki variables\cite{pinho-pinto,PSS,CFMO,CMM}. Then, the classical cosmology from the quantum cosmology may give a complete description of density perturbations, with quantum effects included, for CMB data. \section{Second Quantized Universes} \label{quan univ} The WDW equation (\ref{wdw eq}) is a relativistic wave equation in the superspace (\ref{sup met}), in which $a$ plays the role of an intrinsic time. As in quantum field theory in a curved spacetime, the WDW equation evolves an initial wave function $\Psi (a_0, \phi)$ to a final one $\Psi (a, \phi)$. The Cauchy initial value problem of the WDW equation has been well elaborated.\cite{kim92,kim-page92} Note that the HH wave function has a different Cauchy surface, $\alpha = \pm \phi$. Then, a question is how to prescribe the boundary condition that leads to the present universe. For the single-field inflation model with a monomial potential, Kim observed that the eigenfunctions for the inflaton\cite{kim92,kim-page92} \begin{eqnarray} H_{\rm M} (\phi, a) \Phi_n (\phi, a) = E_n (a) \Phi_n (\phi, a), \quad V(\phi) = \frac{\lambda_{2p}}{2p} \phi^{2p} \end{eqnarray} obey the Symanzik scaling law \begin{eqnarray} E_n (a) = \Bigl(\frac{\lambda_{2p}a^6}{p} \Bigr)^{\frac{1}{p+1}} \epsilon_n, \quad \Phi_n (\phi, a)= \Bigl(\frac{\lambda_{2p}a^6}{p} \Bigr)^{\frac{1}{4(p+1)}} F_n \Bigl( \bigl(\frac{\lambda_{2p}a^6}{p} \bigr)^{\frac{1}{2(p+1)}} \phi \Bigr). \end{eqnarray} Here, $\epsilon_n$ is independent of $a$.
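As a one-line numerical illustration of this scaling (the function name and argument list are ours; the reduced eigenvalue $\epsilon_n$ need only be computed once from the rescaled eigenvalue problem):
\begin{verbatim}
def E_n(a, eps_n, lam_2p, p):
    # Symanzik scaling: the a-dependence of E_n(a) factors out completely
    return (lam_2p * a**6 / p) ** (1.0 / (p + 1)) * eps_n
\end{verbatim}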
Since the eigenfunctions constitute a basis, denoted by an infinite vector $\vec{\Phi}^{\rm T} (\phi, a) = (\Phi_0, \Phi_1, \cdots)$, the rate of change of the eigenfunctions is given by a coupling matrix \begin{eqnarray} \frac{\partial}{\partial a} \vec{\Phi} (\phi, a) = \Omega (a) \vec{\Phi} (\phi, a), \label{con tran} \end{eqnarray} where $\Omega_{ln} (a)$ is a matrix inversely proportional to $a$, \begin{eqnarray} \Omega_{ln} (a) = \frac{3}{4(p+1) a} \bigl( \epsilon_l - \epsilon_n \bigr) \int d \zeta F_l (\zeta) F_n (\zeta) \zeta^2. \label{coup mat} \end{eqnarray} The meaning of Eq.~(\ref{con tran}) is continuous transitions among the eigenfunctions at each moment of the intrinsic time $a$. For the Cauchy problem, expanding the wave function in the eigenfunctions of the inflaton Hamiltonian, $\Psi (a, \phi) = \vec{\Phi}^{\rm T} (\phi, a) \cdot \vec{\Psi} (a)$, one finds the two-component wave function\cite{kim92,kim-page92} \begin{eqnarray} \left( \begin{array}{c} \Psi (a, \phi) \\ \frac{\partial \Psi (a, \phi)}{\partial a} \\ \end{array} \right) = \left( \begin{array}{cc} \vec{\Phi}^{\rm T} & 0 \\ 0 & \vec{\Phi}^{\rm T} \\ \end{array} \right) {\rm T} \exp \left[ \int_{a_0}^a \left( \begin{array}{cc} \Omega & I \\ V_{\rm G} I - \frac{E}{a'^2} & \Omega \\ \end{array} \right) da' \right] \left( \begin{array}{c} \vec{\Psi} (a_0) \\ \frac{d \vec{\Psi} (a_0)}{da_0} \\ \end{array} \right), \label{cauchy} \end{eqnarray} where $E(a)= (E_0, E_1, \cdots)$. The WDW equation also has the alternative Cauchy problem of the Feshbach-Villars formulation.\cite{mostafazadeh} Taking only the off-diagonal components, which is equivalent to neglecting the coupling matrix $\Omega$, the equation for the gravitational part is approximately given by \begin{eqnarray} \frac{d^2 \vec{\Psi} (a)}{da^2} - \Bigl(V_{\rm G} (a) I - \frac{E(a)}{a^2} \Bigr) \vec{\Psi} (a) \approx 0. \end{eqnarray} From the viewpoint of observational cosmology, the task of quantum cosmology is to construct the present Cauchy data based on observations and to evolve them back to the early universe to understand the origin of the universe. Note that $\Omega$ diverges more rapidly than $E(a)/a^2$ for $p < 5$ as the universe approaches the Big Bang singularity, $a \approx 0$. For instance, a massive scalar field model with $p=1$ and $\lambda_2 = m^2$ has the harmonic wave functions and the coupling matrix \begin{eqnarray} E_n (a) = m a^3 (2n+1), \quad \Omega_{ln} (a) = \frac{3}{4a} \Bigl(\sqrt{l(l-1)} \delta_{l-n,2} - \sqrt{n(n-1)} \delta_{n-l,2} \Bigr), \end{eqnarray} and the time-ordered integral is thus approximated by \begin{eqnarray} {\rm T} \exp \left[ \int_{a_0}^a \left( \begin{array}{cc} \Omega (a') & 0 \\ 0 & \Omega (a') \\ \end{array} \right) da' \right] = \left( \begin{array}{cc} e^{ \ln (a/a_0) a \Omega (a)} & 0 \\ 0 & e^{ \ln (a/a_0) a \Omega (a)} \\ \end{array} \right). \end{eqnarray} Therefore, the wave function experiences an infinite number of transitions among different harmonic functions, or oscillations, near the singularity, which may lead to a chaotic behavior. Further, the probability for the wave function near the singularity is almost invariant, $|\Psi (a, \phi)|^2 \approx |\Psi (a_0, \phi)|^2$, as far as the variation of $\vec{\Psi} (a)$ is finite.\cite{kim13} \section{Semiclassical and Classical Cosmology} \label{semi-class cos} A wave function peaked around a wave packet admits the de Broglie-Bohm pilot-wave description.
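Before developing the semiclassical expansion, we note that the massive-scalar ($p=1$) quantities of the previous section are simple to tabulate; a minimal sketch, with our own basis truncation \texttt{nmax}:
\begin{verbatim}
import numpy as np

def omega(a, nmax):
    # Coupling matrix Omega_ln(a) for the massive scalar (p = 1),
    # truncated to the first nmax harmonic levels
    O = np.zeros((nmax, nmax))
    for l in range(nmax):
        for n in range(nmax):
            if l - n == 2:
                O[l, n] = np.sqrt(l * (l - 1))
            elif n - l == 2:
                O[l, n] = -np.sqrt(n * (n - 1))
    return 3.0 / (4.0 * a) * O

def E(a, m, nmax):
    # Eigenvalues E_n(a) = m a^3 (2n + 1) in the harmonic basis
    return m * a**3 * (2 * np.arange(nmax) + 1)
\end{verbatim}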
We assume that the WDW equation takes the general form \begin{eqnarray} \Bigl[- \frac{\hbar^2}{2M} \nabla^2 - M V_{\rm G} (h_a) + \hat{H}_{\rm M} (\phi, -i \frac{\delta}{\delta \phi} ; h_a) \Bigr] \Psi (h_a, \phi) = 0, \end{eqnarray} where $h_a$ denotes the superspace metric $h_{ij}$ and $M = m_P^2$ is the Planck mass squared. The de Broglie-Bohm pilot-wave theory describes the quantum theory in an equivalent way: the oscillating wave function forms a wave packet around a trajectory prescribed by the Hamilton-Jacobi equation with a quantum correction, together with another equation for the conservation of probability.\cite{licata-fiscaletti} The standard cosmology is the Friedmann equation together with the principle of homogeneity and isotropy of the universe, which has been confirmed precisely by the CMB and other observational data. To embed quantum cosmology into observational cosmology, as shown in Fig.~\ref{QSC}, it is necessary to obtain the semiclassical cosmology and then the classical cosmology with quantum corrections included. The strategy toward Fig.~\ref{QSC} is first to apply the Born-Oppenheimer idea, which separates a slowly moving massive particle $M$ (gravity) from a fast moving light particle (inflaton), and then to expand the quantum state of the fast moving variable in a certain basis\cite{kim95,kim96,BFV,kim97} \begin{eqnarray} \vert \Psi (h_a, \phi) \rangle = \Psi (h_a) \vert \Phi (\phi, h_a) \rangle, \end{eqnarray} in which \begin{eqnarray} \vert \Phi (\phi, h_a) \rangle = \vec{C}^T (h_a) \cdot \vec{\Phi} (\phi, h_a). \end{eqnarray} The basis $\vec{\Phi}^T (\phi, h_a) = (\vert \Phi_0 \rangle, \vert \Phi_1 \rangle, \cdots)$, which does not necessarily consist of the instantaneous eigenfunctions of $\hat{H}_{\rm M}$, will be chosen to make the semiclassical and classical cosmology as simple as possible. We then apply the de Broglie-Bohm pilot-wave theory to the gravity part only, \begin{eqnarray} \Psi(h_a) = F (h_a) e^{i \frac{S(h_a)}{\hbar}}. \end{eqnarray} Now, in a semiclassical regime, the WDW equation is equivalent to the set of equations\cite{kim97} \begin{eqnarray} \frac{1}{2M} \bigl( \nabla S \bigr)^2 - M V_{\rm G} (h_a) + H_{nn} (h_a) - \frac{\hbar^2}{2M} \frac{\nabla^2 F}{F} - \frac{\hbar^2}{M} {\rm Re} \bigl( Q_{nn} \bigr) = 0, \label{quan ein eq} \\ \frac{1}{2} \nabla^2 S + \frac{\nabla F}{F} \cdot \nabla S + {\rm Im} \bigl( Q_{nn} \bigr) = 0, \label{con eq} \end{eqnarray} where $H_{nk} (h_a)$ is the matrix element of the inflaton Hamiltonian, $\vec{A}_{nk} (h_a)$ is the induced gauge potential due to the parametric interaction, $(\hbar^2/2M) (\nabla^2 F/F)$ is the quantum potential of Bohm and $Q_{nn}$ is the quantum back reaction of matter: \begin{eqnarray} H_{nk} (h_a) &=& \langle \Phi_n (\phi, h_a) \vert \hat{H}_{\rm M} \vert \Phi_k (\phi, h_a) \rangle, \nonumber\\ \vec{A}_{nk} (h_a) &=& i \langle \Phi_n (\phi, h_a) \vert \nabla\vert \Phi_k (\phi, h_a) \rangle, \nonumber\\ Q_{nn} (h_a) &=& \frac{\nabla F}{F} \cdot \Bigl( \frac{\nabla C_n}{C_n} - i \sum_{k} \vec{A}_{nk} \frac{C_k}{C_n} \Bigr). \end{eqnarray} The advancement of the de Broglie-Bohm quantum theory in Ref.~\refcite{kim97} is that the continuity equation (\ref{con eq}) may be integrated along the semiclassical trajectory and provides another quantum back reaction to the semiclassical Einstein equation (\ref{quan ein eq}).
To complete the transition from the quantum cosmology (\ref{wdw eq}) to the semiclassical cosmology, we introduce a cosmological time as the directional derivative along the semiclassical trajectory in the extended superspace \begin{eqnarray} \frac{\partial}{\partial \tau} = - \frac{1}{Ma} \frac{\partial S(a)}{\partial a} \frac{\partial}{\partial a}. \end{eqnarray} Equivalently, the cosmological time is obtained by solving $\partial a(\tau)/\partial \tau = - (1/Ma) (\partial S/ \partial a)$. Then the Heisenberg matrix equation takes the form\cite{kim97} \begin{eqnarray} i \hbar \frac{\partial C_n}{\partial \tau} = \sum_{k} \Bigl[ \bigl( H_{nk} - H_{nn} \delta_{nk} \bigr) - \hbar B_{nk} - \frac{\hbar^2}{2Ma} \bigl( D_{nk} - D_{nn} \delta_{nk} \bigr) \Bigr] C_k, \label{heis eq} \end{eqnarray} where $B_{nk}$ is the gauge potential $\vec{A}_{nk}$ measured along the $\tau$-flow and $D_{nk}$ is given by \begin{eqnarray} B_{nk} (a(\tau)) &=& i \langle \Phi_n \vert \frac{\partial}{\partial \tau} \vert \Phi_k \rangle, \nonumber\\ D_{nk} (a(\tau)) &=& - \frac{1}{\dot{a}^2} \Bigl[\Bigl(\frac{\partial^2}{\partial \tau^2} - \frac{\ddot{a}}{\dot{a}} \frac{\partial}{\partial \tau}\Bigr)\delta_{nk} - 2 i B_{nk} \frac{\partial}{\partial \tau} + \langle \Phi_n \vert \frac{\partial^2}{\partial \tau^2} - \frac{\ddot{a}}{\dot{a}} \frac{\partial}{\partial \tau} \vert \Phi_k \rangle \Bigr]. \end{eqnarray} Here and hereafter, the dots denote the derivatives with respect to the cosmological time $\tau$. We may use the freedom to select the basis such that $H_{nk} = \hbar B_{nk}$ for $n \neq k$.\cite{kim96} Then, the Heisenberg equation (\ref{heis eq}) reduces to the $\tau$-dependent Schr\"{o}dinger equation plus a correction of order $\hbar/M$, which comes from the relativistic theory. We can thereby show that the chaotic inflation model necessarily contains (higher) curvature terms \begin{eqnarray} \Bigl(\frac{\dot{a}}{a} \Bigr)^2 + \frac{k}{a^2} - \Lambda = \frac{8 \pi}{3m_P^2 a^3} \bigl( H_{nn} + \delta \rho_{nn} \bigr), \label{qc frw eq} \end{eqnarray} where the quantum correction to the energy density is given by \begin{eqnarray} \delta \rho_{nn} = - \frac{4 \pi \hbar^2}{3m_P^2 a \dot{a}} U_{nn} {\rm Re} \bigl(R_{nn} \bigr) + \frac{2 \pi \hbar^2}{3m_P^2 a } \Bigl( U_{nn}^2 + \frac{1}{\dot{a}} \dot{U}_{nn} \Bigr), \label{qc cor} \end{eqnarray} where \begin{eqnarray} R_{nn} &=& \frac{\dot{C}_n}{C_n} - i \sum_{k} B_{nk} \frac{C_k}{C_n}, \nonumber\\ U_{nn} &=& - \frac{1}{2} \frac{\dot{a}^2 + a \ddot{a}}{a \dot{a}^2 + \frac{4 \pi \hbar}{3m_P^2} {\rm Im} \bigl(R_{nn} \bigr)}. \end{eqnarray} Finally, the chaotic model with a massive scalar of mass $m$ has $R_{nn}^{(0)} = 0$ and $U_{nn}^{(0)} = -(1/2) (1/a+ \ddot{a}/\dot{a}^2)$, and the expectation value of the inflaton Hamiltonian is given by\cite{kim97} \begin{eqnarray} H_{nn} = \hbar a^3 \Bigl(n+ \frac{1}{2} \Bigr) \bigl(\dot{\varphi}^* \dot{\varphi} + m^2 \varphi^* \varphi \bigr), \end{eqnarray} which obeys the classical equation of motion \begin{eqnarray} \ddot{\varphi} + 3 \frac{\dot{a}}{a} \dot{\varphi} + m^2 \varphi = 0. \end{eqnarray} The gauge potential reads \begin{eqnarray} B_{nk} = \frac{n}{n+ \frac{1}{2}} H_{nn} \delta_{nk} + f \sqrt{(n+1)(n+2)} \delta_{n, k-2} + f^* \sqrt{(k+1)(k+2)} \delta_{n, k+2}, \end{eqnarray} where $f = - (\hbar a^3 /2) (\dot{\varphi}^2 + m^2 \varphi^2 )$. The solution of the Heisenberg equation (\ref{heis eq}) can be used to find systematically the quantum correction (\ref{qc cor}) and thus the corrections to the Friedmann equation (\ref{qc frw eq}).
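At leading order, i.e., for $n=0$, real $\varphi$, $k=\Lambda=0$ and $\delta\rho_{nn}$ dropped, Eq.~(\ref{qc frw eq}) together with the classical equation of motion closes into the standard chaotic-inflation background; a minimal numerical sketch in units $\hbar=m_P=1$, with purely illustrative initial data:
\begin{verbatim}
import numpy as np

m, dt = 1.0e-6, 1.0e3                 # inflaton mass, time step (m_P = 1)
a, phi, phidot = 1.0, 3.0, 0.0        # illustrative initial data
for _ in range(100000):
    # (a'/a)^2 = (8 pi/3)(phi'^2/2 + m^2 phi^2/2) at leading order
    H = np.sqrt(8*np.pi/3 * (0.5*phidot**2 + 0.5*m**2*phi**2))
    phiddot = -3.0*H*phidot - m**2*phi
    phi += dt*phidot
    phidot += dt*phiddot
    a *= np.exp(H*dt)
\end{verbatim}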
\section{Gauge Invariant Quantum Cosmology} Mukhanov and Sasaki obtained the gauge invariant formulation of density perturbations. Then a question can be raised whether one may find the Hamiltonian for scalar perturbations of the metric and a massive scalar field, which is gauge invariant and leads to the semiclassical equations for observational data. In fact, such a Hamiltonian was found\cite{pinho-pinto,PSS,CFMO,CMM} \begin{eqnarray} H = \bar{N}_0 \Bigl[H_0 + \sum_{\vec{n}, \pm} \breve{H}_2^{\vec{n}, \pm} \Bigr] + \sum_{\vec{n}, \pm} \breve{G}_2^{\vec{n}, \pm} \breve{H}_1^{\vec{n}, \pm} + \sum_{\vec{n}, \pm} \breve{K}_{\vec{n}, \pm} \tilde{H}_1^{\vec{n}, \pm}, \label{MS ham} \end{eqnarray} where $H_0$ is the unperturbed Hamiltonian for the FRW universe, $\breve{H}_2^{\vec{n}, \pm}$ is the quadratic Hamiltonian of scalar, vector and tensor perturbations for the inhomogeneities as well as the massive scalar field, and $\breve{H}_1^{\vec{n}, \pm}$ and $\tilde{H}_1^{\vec{n}, \pm}$ are inhomogeneous linear perturbations. Extending Sec.~\ref{semi-class cos} to quadratic perturbations, one may show that the semiclassical cosmology from the WDW equation with the Hamiltonian (\ref{MS ham}) provides the master equation for the power spectrum of primordial scalar (vector and tensor) perturbations. The semiclassical cosmology with (higher) curvatures via the de Broglie-Bohm pilot-wave theory and the Born-Oppenheimer idea may resurrect the chaotic inflation model with a massive scalar field. The detailed work in this direction will be addressed in a future publication. \section{Conclusion} In this paper we have studied the semiclassical gravity for the chaotic inflation model with a power-law greater than one. In classical gravity theory the chaotic inflation model with a convex power-law is highly likely to be excluded by current observational CMB data, such as Planck and WMAP. The Starobinsky model with a higher curvature term, however, seems to be a viable model. Higher curvature terms have a quantum origin due to fluctuations of the spacetime and/or a matter field in the curved spacetime. It is thus physically legitimate to investigate the chaotic inflation model in the framework of quantum cosmology, since the quantum cosmological model with a chaotic inflaton necessarily involves quantum gravity effects due to spacetime and inflaton fluctuations. In order to compare the predictions of the quantum cosmology with current observational data, the semiclassical cosmology should emerge from the quantum cosmology for the FRW universe minimally coupled to a chaotic inflaton, in particular, a massive inflaton. In fact, the de Broglie-Bohm pilot-wave theory together with the Born-Oppenheimer idea of separating the Planck mass scale from the inflaton mass scale leads to the semiclassical cosmology, in which quantum corrections to both the classical gravity and the matter field are included. It turns out that the semiclassical gravity equation indeed contains higher curvature terms for the FRW geometry. Further, the gauge invariant quantum cosmology using the Mukhanov-Sasaki Hamiltonian with a massive scalar field may yield the semiclassical chaotic inflation model, which may be easily compared with observational data. It would be interesting to study numerically the semiclassical gravity for the FRW universe with a massive scalar field and to see whether the quantum cosmology can resurrect the chaotic inflation model.
An alternative way to test the predictions of quantum cosmology is to simulate the evolution of the universe, in particular the quantum effects of an expanding universe, using laboratory experiments. It has been suggested that a static ion trap may simulate the quantum effects of an expanding universe.\cite{MOM} It has also been observed that the quantum cosmology for the FRW universe minimally coupled to a massive scalar field is equivalent to a spinless charged particle in a homogeneous time-dependent magnetic field along a fixed direction.\cite{kim14b} The infinite oscillations near the singularity can be simulated by time-dependent magnetic fields of over-critical strength. \section*{Acknowledgments} This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (15B15770630).
\section{Introduction} Dynamic hardware faults present a major impediment to the efficient implementation of digital circuits \cite{kim_inexact_2014}. They can occur, e.g., due to noise at reduced safety margins in ultra low-power electronics, process variations or circuit aging. The faults can result in bit-flips at memories or registers, impairing the correctness of an electronic circuit \cite{dodd_basic_2003}. Nevertheless, there exists a multitude of applications which do not depend on a completely exact result of the computation in the data path. Prominently, computer vision and mobile communication intrinsically have to deal with noisy data from the input sensors. Accordingly, artificial neural networks (ANN) present a promising solution for stochastic computation near the sensor {\cite{plastiras_edge_2018}}. Although exact data is thereby neither needed nor available, such stochastic computation requires a high level of robustness and energy efficiency. \medskip Further requirements have to be considered for space missions, where radiation can additionally cause run-time defects. Traditional countermeasures like triple modular redundancy (TMR), temporal redundancy or package shielding incur additional overhead with respect to the timing properties or payload weight {\cite{schmidt2017temporal}}. Moreover, upcoming space missions require an increasing capability of autonomy; e.g., the TRIPLE/nanoAUV initiative intends to explore subglacial lakes on extraterrestrial moons such as Jupiter's Europa, which are, however, highly affected by radiation \cite{waldmann2020triple} \cite{dachwald2020key}. While space operations are quite reluctant to adopt stochastic approaches, there is nevertheless a strong need for advancements, at least for the on-board scientific computation and the efficient processing of the sensor data. The satellite mission BIRD was one of the first to use an ANN for processing infrared images of the Earth's surface {\cite{halle2000thematic}}. Similarly, there is an ongoing trend toward image processing on board satellites \cite{yuhaniz2005embedded} \cite{gregorek2019fpga}. \medskip While there is empirical evidence of the intrinsic fault tolerance of ANNs to single event upsets (SEU) {\cite{velazco1999evidences}}, previous work generally deals with static stuck-at failures or additional hardware overhead {\cite{torres-huitzil_fault_2017}}. In \cite{kausar2016artificial} the authors claim: “[..a] major influence on the reliability of neural networks is the uniform distribution of information”, from which a redundancy-based approach could be derived. In \cite{johnson2017homeostatic}, the authors present a repair mechanism for spiking neural networks using additional tuning circuits. In \cite{su2016superior} the authors propose a fault injection-based genetic algorithm to construct a fault-tolerant ANN. Orthogonal to that, this work investigates a customized deep learning approach for improved dynamic fault tolerance using the unmodified topology of a baseline ANN. While the usual assumptions about the intrinsic error resiliency of ANNs typically concern static errors or additional hardware overhead, this work is not restricted to those assumptions. In particular, our paper has the following contributions: \begin{itemize} \item The hypothesis that ANNs can extrinsically learn the capability of dynamic fault tolerance. \item A training approach to generically improve the fault tolerance of deep artificial neural networks.
\item A theoretical use case, where we apply our fault tolerance learning approach to image compression by means of autoencoders. \end{itemize} As a potential outcome of our work, the benefits of a robust network, at a reduced safety margin and a lower power consumption, could be leveraged for near-sensor image processing. The remainder of this paper is organized as follows: Sec. II describes the training approach and the implemented models, Sec. III presents the results of our experiments and finally Sec. IV concludes our paper. \section{Approach and Implementation} \label{sec:impl} This section describes the training approach, the fault model and the architecture of the test network. Tab. \ref{tab:xrad} shows our general approach: During training and testing, the ANN processes input images $X$ and generates an output $Y$ depending on the configuration of the ANN. The baseline network is affected by dynamic faults during testing/deployment only. In this work, however, we present a particular approach for beneficially injecting dynamic faults already during the training of an ANN. \begin{table}[htb] \centering \caption{General Approach} \label{tab:xrad} \begin{tabular}{l||p{2.4cm}|p{2.4cm}} & Training & Testing/Deployment \\\hline\hline Baseline & \parbox[c]{1em}{\includegraphics[width=.12\textwidth]{base}} & \parbox[c]{1em}{\includegraphics[width=.12\textwidth]{xrad}} \\\hline This work & \parbox[c]{1em}{\includegraphics[width=.12\textwidth]{xrad}} & \parbox[c]{1em}{\includegraphics[width=.12\textwidth]{xrad}} \\ \end{tabular} \end{table} \subsection{Training Approach} We propose a demonstration-based approach to improve the fault tolerance of ANNs by means of mock fault injection during ANN training. The ANN is constituted by a directed acyclic graph $G$. The faults are injected at the weights and biases of the ANN nodes $N$. Each node $N_i$ corresponds to a particular layer $L_i$ of the network graph $G$. Algorithm \ref{alg:mock} shows how to perform a fault training epoch on a particular network graph $G$ using the training data $(X_{train}, Y_{train})$. A random node of the network is selected during the training of each batch. A parameter of the selected node is perturbed and the corresponding layer is frozen while the remaining network is trained. We therefore call our approach fault injection during training (FIT). Prior to the fault training epochs, one or more regular training epochs may be performed to adjust the initial values of the parameters. \begin{algorithm} \caption{Fault training approach} \label{alg:mock} \begin{algorithmic}[1] \Procedure {fault\_training\_epoch}{$G$, $X_{train}$, $Y_{train}$} \For{ each training-batch } \State randomly select node $N_i$ from $G$ \State disturb node $N_i$ and freeze layer $L_i$ \State perform regular training \State reset selected layer $L_i$ \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Fault Model} At the hardware level, the baseline ANN is assumed to be implemented with signed fixed-point numbers. While the ANN input values are typically normalized to range from -1 to +1, the internal values can be of larger magnitude inside the network. For practical reasons, we assume a fixed bit-width of \mbox{$W=1+I+F$} bits for every value, having one sign bit, $I$ integer bits and $F$ bits for the fractional part. \medskip While for the training we inject exactly one fault per batch, a variable number of faults is injected during testing.
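A minimal sketch of one fault-training epoch (Algorithm~\ref{alg:mock}) follows; since freezing a layer per batch is awkward with \texttt{model.fit()}, we use a custom TensorFlow training loop, and all names are ours (the Gaussian perturbation with $\sigma$ anticipates the training-time fault model used in Sec.~III):
\begin{verbatim}
import random
import numpy as np
import tensorflow as tf

def fault_training_epoch(model, dataset, optimizer, loss_fn, sigma=0.01):
    for x, y in dataset:
        layer = random.choice([l for l in model.layers if l.weights])
        w = layer.weights[0]                   # kernel of the chosen layer
        original = w.numpy()
        faulty = original.copy()               # perturb a single parameter
        idx = tuple(np.random.randint(s) for s in original.shape)
        faulty[idx] += np.random.normal(0.0, sigma)
        w.assign(faulty)
        trainable = [v for l in model.layers if l is not layer
                     for v in l.trainable_weights]  # freeze the faulty layer
        with tf.GradientTape() as tape:
            loss = loss_fn(y, model(x, training=True))
        grads = tape.gradient(loss, trainable)
        optimizer.apply_gradients(zip(grads, trainable))
        w.assign(original)                     # reset the selected layer
\end{verbatim}
We now return to the test-time fault model.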
To keep the computational cost (wall time) of the simulation low, we generate a random number $x_i$ during the evaluation of each test batch. This random number $x_i$ is taken as the number of faults in the network for that given batch. We choose a Poisson distribution to generate the number $x_i$ of independent random faults. Subsequently, a random node is selected for each of the faults. This approach avoids iterating over all nodes and simulating whether a fault has occurred at each of them. \medskip In our fixed-point model, any fault occurring at a node causes a bit-flip in the corresponding fixed-point value. The affected bit position is chosen from a uniform distribution. In addition to the bit-flip fixed-point model, a tamed fault model using a Gaussian distribution with zero mean and configurable standard deviation is also used. \subsection{Test Architecture} State-of-the-art convolutional and recurrent neural networks have a complex structure potentially consisting of many specialized layers and feedback paths. Due to its regular structure, we choose a deep fully-connected autoencoder as the test architecture. Fig. \ref{fig:ae} shows a sample architecture of the autoencoder (AE), which we use for image compression. The given sample AE consists of an input layer $L_x$, the center layer $L_c$ and an output layer $L_y$. The center layer thereby determines the number of features and hence the image compression ratio. During training, the input images are taken as the reference for the desired output of the AE, i.e. after training the output $Y$ should match the input $X$. \begin{figure}[htb] \centering \includegraphics[width=.36\textwidth]{ae} \caption{Sample autoencoder for image compression} \label{fig:ae} \end{figure} \begin{figure*}[htb] \centering \subfloat[Golden Reference: Mean test loss vs. number of features]{\includegraphics[width=.48\textwidth]{xrad_feat_mean_latest.pdf}\label{fig:eval:gold:feat}} \subfloat[Golden Reference: Mean test loss vs. number of epochs]{\includegraphics[width=.48\textwidth]{xrad_jpeg_latest.pdf}\label{fig:eval:gold:jpeg}} \vspace*{8mm} \subfloat[Mean test loss vs. fault rate]{\includegraphics[width=.48\textwidth]{xrad_xpow_mean_latest.pdf}\label{fig:eval:xrad:xpow}} \subfloat[Mean test loss vs. number of training epochs]{\includegraphics[width=.48\textwidth]{xrad_epoc_mean_latest.pdf}\label{fig:eval:xrad:epoc}} \caption{Experimental Results} \end{figure*} \section{Experimental Results} \label{sec:eval} The following section presents the experimental results with respect to the image compression use case and our presented training approach. We have used Tensorflow/Keras to implement the AE and the fault injection using callback functions. Throughout the evaluation, the MNIST dataset is used. Initially, an automatic hyperparameter tuning using Ray Tune \cite{liaw_tune_2018} has been performed to find reasonable starting values for the hyperparameters (e.g. batch size, activation functions, etc.). \subsection{Golden Reference} To compare the results of our approach, a golden reference model has first been developed. Fig.~\ref{fig:eval:gold:feat} shows the test loss vs. the number of features / compression ratio of the golden reference when using 100 training epochs. Further, the number of additional hidden layers is compared. Thereupon, 48 nodes at the central layer and 3 hidden layers have been chosen to achieve a reasonable trade-off between performance and complexity.
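Before comparing with JPEG, we give a minimal sketch of the test-time fault model described above (Poisson-distributed fault counts and fixed-point bit-flips; function names are ours, and the $W=16$, $F=12$ format matches the one used below):
\begin{verbatim}
import numpy as np

def flip_random_bit(value, W=16, F=12):
    # Bit-flip fault on a signed fixed-point value (W = 1 + I + F bits);
    # the affected bit position is drawn from a uniform distribution.
    code = int(round(value * (1 << F))) & ((1 << W) - 1)  # two's complement
    code ^= 1 << np.random.randint(W)
    if code >= 1 << (W - 1):                              # back to signed
        code -= 1 << W
    return code / float(1 << F)

def faults_in_batch(fault_rate, n_nodes, batch_size):
    # Poisson-distributed number x_i of faults for one test batch
    return np.random.poisson(fault_rate * n_nodes * batch_size)
\end{verbatim}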
\medskip Next, the performance of the AE is compared to the JPEG image compression standard. Our input images are stored as 8-bit TIFF files. For the evaluation of JPEG, we convert the TIFF files to JPEG and then back to TIFF. We compare the MSE between the true original TIFFs and the reconstructed TIFFs. For the compression ratio, we compare the JPEG file size versus the number of bits of the input images. For the evaluation of the AE, we compare the MSE between the input and the output images. The AE compression ratio is taken as the number of bits at the central layer (here 48x32) vs. the number of bits of the input image (28x28x8). In our results, the AE compresses the images to around 24.5\%, while JPEG achieves a compression of approx. 33.3\%. Fig.~\ref{fig:eval:gold:jpeg} shows the mean test loss of the AE in comparison to JPEG. After more than 30 training epochs, the AE outperforms JPEG regarding the MSE / compression quality. \subsection{Fault Effects} Finally, the impact of the dynamic hardware faults and of our proposed training approach is evaluated. Due to the stochastic nature of the hardware faults, our results are further averaged over multiple runs. For training we use a Gaussian fault model with $\sigma = 0.01$, and for testing the hardware-related fixed-point fault model. For the fixed-point format, a width of W=16 bits with F=12 bits for the fractional part has been set. Fig. \ref{fig:eval:xrad:xpow} shows the test loss vs. the rate of injected faults during testing. We thereby define the fault rate as the average number of faults per node per sample, i.e. a fault rate of \mbox{1e-7} roughly corresponds to 3.5 faults per 100 sample images. While the golden reference is not impaired by the faults, the baseline using default training and our FIT approach using mock fault injection during training show a linear dependence on the fault rate. An exceptionally good result is obtained for FIT at 100 training epochs and a small rate of injected errors. For 200 epochs, FIT generally has an around 2\% smaller test loss compared to the baseline. The improvement is quite small; however, it comes almost for free, since there is no need for additional hardware overhead. Additionally, Fig. \ref{fig:eval:xrad:epoc} shows the test loss vs. the number of training epochs at a fault injection rate of 3e-7, indicating a robust improvement of the test loss for FIT after around 50 training epochs. \section{Conclusion} \label{sec:conc} We presented a deep learning approach called fault injection during training (FIT) for artificial neural networks. The implemented fault model allows the injection of hardware faults using the design framework Tensorflow/Keras. The fault model uses bit-flips at fixed-point data types or Gaussian weight perturbation. Our reference AE has a superior performance for image compression compared to JPEG using images from the MNIST dataset. The presented FIT training approach has the general potential to reduce the test loss and to mitigate the impairment due to hardware faults. Thereby, it does not require additional hardware or software overhead during deployment of the neural network. \medskip This work presents an initial attempt to match the network training more closely to the underlying stochastic properties of the hardware. We plan a more in-depth analysis in the future. In particular, the detailed impact of the hardware faults needs to be better understood. According to the results, a further refinement of the FIT training approach should be possible.
Further, the required evaluation wall time needs to be reduced; due to the lengthy simulations, results for higher fault rates, for example, are still pending. Finally, the approach needs to be evaluated for other deep neural network architectures and use cases. \printbibliography \end{document}
\section{Introduction}\label{intro} One of the open questions in neutrino physics is the mass ordering of the neutrinos: whether they are ordered normally or inverted. Many experiments intend to determine the mass ordering, among which the 50 kton Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory is an ambitious one \cite{WP}. ICAL will be a magnetised iron calorimeter mainly sensitive to muons produced in the charged-current (CC) interactions of atmospheric muon neutrinos (and anti-neutrinos) with the iron target in the detector. It can distinguish CC muon-neutrino-induced events from anti-neutrino-induced ones, since the former interaction produces $\mu^-$ while the latter produces $\mu^+$ in the detector, and ICAL has excellent muon charge identification (cid) capability. The magnetisation is also crucial to determine precisely the momentum of the muons through their bending in the magnetic field. Since matter effects differ between neutrino and anti-neutrino propagation in the Earth, this feature can help resolve the neutrino mass ordering by determining the sign of the 2--3 mass-squared difference $\Delta m^2_{32} \equiv m^2_3-m^2_2$, with $m_i$, $i=1,2,3$, the neutrino mass eigenvalues \cite{imsc-hie-paper}. In addition, the matter effects improve the sensitivity to the magnitude $\vert \Delta m^2_{32} \vert$ of this mass-squared difference as well as to the 2--3 mixing angle, $\theta_{23}$, provided the across-generation mixing angle $\theta_{13}$ is rather well known, which is indeed the case \cite{double-chooz1,dayabay1,dayabay2,reno-results,double-chooz-results}. Many previous analyses have been reported, projecting the sensitivities of the ICAL detector to the oscillation parameters in the 2--3 sector \cite{TApre,TAhie,3dMMD,WP} as well as to the mass ordering. The sensitivity to the mass hierarchy depends directly on the value of $\theta_{13}$, which is quite precisely known \cite{dayabay-2015,dayabay-2016,reno-2016,double-chooz-2014,double-chooz-2015,nu-fit}. It also depends on the ability of ICAL to separate neutrino and anti-neutrino events, which is possible since ICAL is magnetised. While there is an uncertainty of about 20\% on the atmospheric neutrino fluxes themselves, the uncertainty on their {\em ratios} is much smaller, about 5\%, and was ignored in earlier analyses \cite{TApre,TAhie,3dMMD}. In this paper, we show that this smaller uncertainty on the ratio acts as a constraint that in turn significantly shrinks the allowed parameter space, especially for $\sin^2\theta_{23}$. For instance, we will see that the uncertainty on $\sin^2\theta_{23}$ decreases from 13\% to 9\% in a certain analysis mode when this constraint is included. This is generally true for all magnetised detectors. To our knowledge, such an effect has not been discussed in the literature earlier. The paper is structured as follows. All the results in this paper are based on detailed simulation studies of the physics processes at the ICAL detector. The main steps involved are neutrino event generation, inclusion of detector responses and efficiencies, inclusion of oscillations, binning in observables and the $\chi^2$ analysis. The procedure of neutrino event generation with the NUANCE neutrino generator \cite{nuance} and the implementation of oscillations are discussed in detail in Section~\ref{nu-evt}. The choice of observables and kinematic regions used in the analysis, along with the inclusion of detector responses, are discussed in detail in Section~\ref{obs}.
The effect of increasing the energy range of observed muons is also explained in this section. The detailed $\chi^2$ analysis and a discussion of the systematic errors that have been considered are presented in Section~\ref{chisq}. The results of the precision measurements and hierarchy sensitivity studies are shown in Section~\ref{results}. The impact of the additional pull in the $\nu/\overline{\nu}$ flux ratio implemented in this analysis is discussed in detail in Section~\ref{eff-11-pull}. The summary and conclusions are given in Section~\ref{summary}. \section{Neutrino event generation}\label{nu-evt} The interactions of interest in ICAL are the CC interactions of $\nu_\mu$ and $\overline{\nu}_\mu$ with the iron target in ICAL. These $\nu_\mu$ ($\overline{\nu}_\mu$) in ICAL come from both the $\nu_\mu$ and $\nu_e$ atmospheric fluxes via $\nu_\mu\rightarrow\nu_\mu$ and $\nu_e\rightarrow\nu_\mu$ oscillations. The first channel gives the number of $\nu_\mu$ events that have survived and the second, subdominant, one gives the number arising from oscillations of $\nu_e$ to $\nu_\mu$. The number of events ICAL sees is the sum of these two contributions. Thus, \begin{eqnarray} \nonumber \frac{d^2N}{d E_\mu d\cos\theta_\mu} & = & t \times {n_d}\times \int{d E_\nu d\cos\theta_\nu d\phi_\nu} \times \\ & & \hspace{0.5cm} \left[P_{\mu\mu} \frac{d^3\Phi_\mu}{d E_\nu d\cos\theta_\nu d\phi_\nu}+ P_{e\mu} \frac{d^3\Phi_e}{d E_\nu d\cos\theta_\nu d\phi_\nu} \right] \times \frac{d\sigma_\mu (E_\nu)}{d E_\mu d\cos\theta_\mu}~, \label{toteve} \end{eqnarray} where $n_d$ is the number of target nucleons in the detector, $\sigma_\mu$ is the differential neutrino interaction cross section in terms of the energy and direction of the CC lepton produced, $\Phi_\mu$ and $\Phi_e$ are the $\nu_\mu$ and $\nu_e$ fluxes, and $P_{\alpha\beta}$ is the oscillation probability of $\nu_\alpha\rightarrow\nu_\beta$. The number of {\em unoscillated} events over an exposure time $t$ in a bin of ($E_\mu,\cos\theta_\mu$) is obtained from the NUANCE neutrino generator using the Honda 3D atmospheric neutrino fluxes \cite{honda-paper}, neutrino-nucleus cross-sections, and a simplified ICAL detector geometry. While NUANCE lists the details of all the final state particles, including the muon and all the hadrons, ICAL will be optimised to determine accurately the energy and direction of the muons (seen as a clean track in the detector) and the summed energy of all the hadrons in the final state (since it cannot distinguish individual hadrons). Even though the analyses are done for a smaller number of years (say 10), a huge sample of NUANCE events for a very large number of years (here 1000 years) is generated and scaled down to the required number of years during the analysis. This is mainly done to reduce the effect of statistical (Monte Carlo) fluctuations on the sensitivity studies, which may otherwise alter the results. A detailed discussion of the effect of fluctuations on oscillation sensitivity studies is given in Appendix~\ref{fluct}. A sample of 1000 years of unoscillated events was generated using NUANCE. Two sets were generated: \begin{enumerate} \item CC muon events using the $\Phi_\mu$ flux and \item CC muon events obtained by swapping the $\Phi_e\leftrightarrow\Phi_\mu$ fluxes. \end{enumerate} This generates the so-called muon and swapped-muon events that correspond to the two terms in Eq.~\ref{toteve}. \subsection{Oscillation probabilities} \label{oscprob} These events are oscillated depending on the neutrino oscillation parameters being used.
The oscillation probabilities are calculated by considering the full three-flavour oscillations in the presence of matter effects. The Preliminary Reference Earth Model (PREM) profile \cite{prem} has been used to model the varying Earth matter densities encountered by the neutrinos during their travel through the Earth. A Runge-Kutta solver is used to calculate the oscillation probabilities \cite{imsc-par-paper} for various energies $E_\nu$ and distances $L$, or equivalently, $\cos\theta_\nu$ ($\theta_\nu$ being the zenith angle) of the neutrino. Further discussion of the oscillation probabilities and plots of a few sample curves are presented in the next section, after the kinematical range of interest is listed. The oscillation is applied event by event (for both muon and swapped-muon events) as discussed in detail in Ref.~\cite{3dMMD}; this is a time-consuming process since the actual sample contains 1000 years of events. The central values of the oscillation parameters are given in Table~\ref{osc-par-3sig} along with their known $3\sigma$ ranges. Note that $\delta_{CP}$ is currently unknown and its true value has been assumed to be $0^\circ$ for the purposes of this calculation. Furthermore, since ICAL is insensitive \cite{TAhie} to this parameter, it has been kept fixed in the calculation, along with the values of the 1--2 oscillation parameters $\Delta m^2_{21}$ and $\sin^2 \theta_{12}$, which also do not affect the results. \begin{table}[htp] \centering \begin{tabular}{|c|c|c|} \hline Parameter & True value & Marginalization range \\ \hline $\theta_{13}$ & 8.729$^\circ$ & [7.671$^\circ$, 9.685$^\circ$] \\ $\sin^{2}\theta_{23}$ & 0.5 & [0.36, 0.66] \\ $\Delta{m^2_{\rm eff}}$ & $\pm2.4\times10^{-3}~{\rm eV}^2$ & [2.1, 2.6]$\times10^{-3}~{\rm eV}^2$ (NH) \\ & & [-2.6, -2.1]$\times10^{-3}~{\rm eV}^2$ (IH) \\ $\sin^{2}\theta_{12}$ & 0.304 & Not marginalised \\ $\Delta{m^{2}_{21}}$ & $7.6\times10^{-5}~{\rm eV}^2$ & Not marginalised \\ $\delta_{CP}$ & 0$^\circ$ & Not marginalised \\ \hline \end{tabular} \caption{The main oscillation parameters used in the current analysis. The second column lists the true values of these parameters, i.e., the values at which the ``observed'' data set is simulated; more details are given in the main text. For the precision measurement of each parameter, all the other parameters are marginalised over in the analysis.} \label{osc-par-3sig} \end{table} It is convenient to define the effective mass-squared difference $\Delta m^2_{\rm eff}$, which is the measured quantity whose value is related to $\Delta m^2_{31}$ and $\Delta m^2_{21}$ as \cite{andre,nuno}: \begin{equation} \Delta m_{\rm eff}^2 = \Delta m_{31}^2-\Delta m_{21}^2 \left(\cos^2\theta_{12}-\cos\delta_{CP}\sin\theta_{13} \sin 2\theta_{12} \tan\theta_{23}\right)~. \label{dmeff2} \end{equation} When $\Delta m^{2}_{\rm eff}$ is varied within its 3$\sigma$ range, the mass-squared differences are determined according to \begin{eqnarray} \nonumber \Delta m^2_{31} & = & \Delta m^2_{\rm eff}+\Delta m_{21}^2 \left(\cos^2\theta_{12}-\cos\delta_{CP}\sin\theta_{13} \sin2\theta_{12}\tan\theta_{23}\right)~; \\ \Delta m^2_{32} & = & \Delta m^2_{31}-\Delta m^2_{21}~, \end{eqnarray} for normal hierarchy when $\Delta{m}_{\rm eff}^2 > 0$, with $\Delta m^2_{31} \leftrightarrow -\Delta m^2_{32}$ for inverted hierarchy when $\Delta{m}_{\rm eff}^2 < 0$.
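For concreteness, this mapping can be implemented in a few lines (a minimal Python sketch written for illustration; the central values follow Table~\ref{osc-par-3sig}, and the IH branch encodes one consistent reading of the $\Delta m^2_{31} \leftrightarrow -\Delta m^2_{32}$ interchange described above):
\begin{verbatim}
import numpy as np

def mass_squared_differences(dm2_eff, dm2_21=7.6e-5,
                             th12=np.arcsin(np.sqrt(0.304)),
                             th13=np.radians(8.729),
                             th23=np.arcsin(np.sqrt(0.5)),
                             delta_cp=0.0):
    """Map Delta m^2_eff (eV^2) to (Delta m^2_31, Delta m^2_32).

    dm2_eff > 0 corresponds to NH and dm2_eff < 0 to IH; the IH
    branch follows the swap prescription given in the text."""
    corr = dm2_21 * (np.cos(th12)**2
                     - np.cos(delta_cp) * np.sin(th13)
                     * np.sin(2.0 * th12) * np.tan(th23))
    if dm2_eff > 0:                    # normal hierarchy
        dm2_31 = dm2_eff + corr
        dm2_32 = dm2_31 - dm2_21
    else:                              # inverted: dm2_31 <-> -dm2_32
        dm2_32 = dm2_eff + corr
        dm2_31 = dm2_32 + dm2_21
    return dm2_31, dm2_32
\end{verbatim}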
A neater definition of the mass ordering can be obtained by defining the quantity \begin{eqnarray} \Delta m^2 \equiv m_3^2 - \frac{m_1^2+m_2^2}{2} & = & \Delta m^2_{32} + \frac{1}{2} \Delta m^2_{21}~. \end{eqnarray} Then, switching the ordering from normal to inverted is exactly equivalent to the interchange $\Delta m^2 \leftrightarrow - \Delta m^2$, with no change in its magnitude. However, since the marginalisation is to be done over the observed quantity $\Delta m^2_{\rm eff}$, we use this quantity, keeping in mind that $\Delta m^2_{32} \leftrightarrow -\Delta m^2_{31}$ when the ordering is flipped between NH and IH in this case. \section{Choice of Observables and Kinematic Regions}\label{obs} The expression in Eq.~\ref{toteve} is for the ideal case when the detector has perfect resolutions and 100\% efficiencies. In this analysis, realistic resolutions and efficiencies obtained from GEANT4-based simulation studies of ICAL \cite{mupaper1,peripheralmu,mthesis,hres1,hrest} have been incorporated; this not only reduces the overall number of events through the reconstruction efficiency factor but also smears out the final state (or observed) muon energy and direction as well as the hadron energy. \subsection{ICAL detection efficiencies}\label{eff} Detailed simulation analyses of the reconstruction efficiency and the direction and energy resolutions of muons in ICAL have been presented in Refs.~\cite{mupaper1,peripheralmu}. In addition, the relative cid (charge identification) efficiency of muons (the ability of ICAL to distinguish $\mu^-$ from $\mu^+$) has also been presented there. The detector has good direction reconstruction capability (better than about $1^\circ$ for few-GeV muons) and excellent cid efficiency (better than 99\% for few-GeV muons). The detailed simulation studies of the response of ICAL to hadrons have been presented in Refs.~\cite{hres1,hrest}: hadron hits are identified and calibrated to reconstruct the energy of hadrons in neutrino-induced interactions in ICAL. The present analysis has used these results to simulate the observed events in ICAL. Note that the efficiency for an event is taken to be the ability to see a muon, that is, to be able to reconstruct it; hence, when the hadron energy is added as the third observable, the efficiency for detecting an event remains the same. At the time this calculation was begun, the responses to both muons and hadrons in the peripheral parts of ICAL were not completely understood. Hence, instead of propagating the NUANCE events through the simulated ICAL detector in GEANT and obtaining a more realistic set of ``observed'' values of the energy and momentum of the final state particles, the true values of these variables were smeared according to the resolutions obtained in the earlier studies \cite{mupaper1,hres1}. It should be noted that instead of reconstructing the neutrino energy and direction using the muon and hadron information and then binning in neutrino energy, the analyses have been done by taking all the observables separately. This is because of the poor neutrino energy and direction resolution of the ICAL detector, which is driven by the detector response to hadrons, which is worse than that to muons. Still, the addition of the extra hadron information improves the sensitivity of ICAL to the oscillation parameters, as shown in Ref.~\cite{3dMMD}.
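Schematically, the smearing step described above amounts to drawing the observed kinematics from resolution functions centred on the true values, as in the following sketch (a minimal Python illustration; the Gaussian forms and the width parameters are placeholders, whereas the actual analysis uses the $(E,\cos\theta)$-dependent resolutions of Refs.~\cite{mupaper1,hres1}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def smear_event(E_mu, cth_mu, E_had, sig_E_rel=0.1, sig_th=0.01,
                a_had=0.85, b_had=0.3):
    """Smear one event's true observables; all widths are placeholders."""
    E_mu_obs = rng.normal(E_mu, sig_E_rel * E_mu)
    cth_obs = np.cos(rng.normal(np.arccos(cth_mu), sig_th))
    # a common calorimetric form sigma/E = sqrt(a^2/E + b^2) for hadrons
    sig_had = E_had * np.sqrt(a_had**2 / max(E_had, 1e-9) + b_had**2)
    E_had_obs = max(rng.normal(E_had, sig_had), 0.0)
    return E_mu_obs, cth_obs, E_had_obs
\end{verbatim}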
\subsection{Effect of extending the energy range of observed muons}\label{ext} The first highlight of this paper is the widening of the energy range of the observables, especially that of the observed muons. Since ICAL is optimised for muon detection, it is desirable to make use of all the available events in the oscillation analysis. As opposed to all the earlier ICAL studies \cite{TApre,TAhie,3dMMD,WP}, which restricted themselves to the range 1--11 GeV of the observed muon energy, the analysis we present here uses the range $E^{obs}_\mu$ = 0.5--25 GeV. It will be seen in Section~\ref{results} that the inclusion of the higher energy bins, beyond the upper limit of 11 GeV used in earlier studies, improves the results. The motivation for using the extended range of the observed muon energy is seen in Fig.~\ref{fig:pmumu}, where the dominant oscillation probability, $P_{\mu\mu}$, is shown as a function of the zenith angle $\cos\theta$ for different values of the neutrino energy, $E_\nu \ge 10$ GeV. With increasing energy, the curves smooth out (matter effects become small, so that $P_{\mu\mu} \sim \overline{P}_{\mu\mu}$) and correspondingly $P_{e\mu}$ becomes vanishingly small. Note also the vanishing of $P_{\mu\mu}$ at high energies, $E_\nu \gtrsim 20$ GeV, in the upward direction ($\cos\theta \to 1$). The sensitivity to $\Delta m^2_{\rm eff}$ is also shown in Fig.~\ref{fig:pmumu} for two different energies, $E_\nu = 10, 22$ GeV (representing the last energy bins of the previous analyses and the present one, respectively). The minimum moves to the left with increasing $\Delta m^2_{\rm eff}$, so that the solid (dashed) line corresponds to $\Delta m^2_{\rm eff} = 2.1~(2.6) \times 10^{-3}$ eV$^2$, the end-points of the presently allowed $3\sigma$ range. It can be seen that the position of the minimum of $P_{\mu\mu}$ is more sensitive to the value of $\Delta m^2_{\rm eff}$ at the larger energy, although the probability itself is not sensitive to the {\em sign} of this quantity at this energy. Hence the inclusion of the higher energy bins improves the sensitivity to these oscillation parameters, as we shall see. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{pmumu-5-Es.eps} \hfill \includegraphics[width=0.49\textwidth]{pmumu-10-22-GeV-2.1-2.6-eV2.eps} \caption{Left: Oscillation probability $P_{\mu\mu}$ as a function of the zenith angle for different values of the neutrino energy, $E_\nu = 10,12,15,20,25$ GeV, assuming the true hierarchy to be normal. Right: The same probability shown as solid (dashed) lines for two different energy values for $\Delta m^2_{\rm eff} = 2.1~(2.6) \times 10^{-3}$ eV$^2$, to show the sensitivity to this parameter at higher energies.} \label{fig:pmumu} \end{figure} \subsection{The binning scheme}\label{bin} This is similar to, and an extension of, the one used in the earlier analyses \cite{3dMMD}. The observables in the analysis are the observed (i.e., smeared) muon energy $E^{obs}_\mu$, the observed muon direction $\cos\theta^{obs}_\mu$ and the observed hadron energy $E'^{obs}_{had}$, where the true total hadron energy is defined as \cite{3dMMD} $E'_{had} \equiv E_\nu - E_\mu$.
There are two different analysis sets: one in which only the muon energy and direction, $(E^{obs}_\mu, \cos\theta^{obs}_\mu)$, are used, called the 2D (muon only) binning scheme, and the other in which all three observables $(E^{obs}_\mu, \cos\theta^{obs}_\mu, E'^{obs}_{had})$ are used, known as the 3D (or with-hadron) binning scheme. The details of the two binning schemes are shown in Table~\ref{mo-wh-bins}. \begin{table}[htb] \centering \begin{tabular}{|c|c|c|c|} \hline Observable & Range & Bin width & No. of bins \\ \hline & [0.5, 4] & 0.5 & 7 \\ & [4, 7] & 1 & 3 \\ $E^{obs}_{\mu}$ (GeV)& [7, 11] & 4 & 1 \\ {\color{blue} (15 bins)} & [11, 12.5] & 1.5 & 1 \\ & [12.5, 15] & 2.5 & 1 \\ & [15, 25] & 5 & 2 \\ \hline & [-1.0, 0.0] & 0.2 & 5 \\ $\cos\theta^{obs}_{\mu}$ & [0.0, 0.4] & 0.10 & 4 \\ {\color{blue} (21 bins)} & [0.4, 1.0] & 0.05 & 12 \\ \hline & [0, 2] & 1 & 2 \\ $E'^{obs}_{had}$ (GeV) & [2, 4] & 2 & 1\\ {\color{blue} (4 bins)} & [4, 15] & 11 & 1 \\ \hline \end{tabular} \caption{Bins of the three observables, the muon energy and direction and the hadron energy, used in the analysis.} \label{mo-wh-bins} \end{table} It should be noted that in the current analysis the direction $\cos\theta^{obs}_\mu = +1$ is taken as the up direction. Since atmospheric neutrino oscillations occur mainly in the up direction, more bins are assigned in this region than in the down direction. The $E^{obs}_\mu$ bins in the range 1--11 GeV are taken to be the same as those used in Ref.~\cite{3dMMD}. A bin of width 0.5 GeV is added in the lower energy range. In the higher range of $E^{obs}_\mu$, four bins are added: two with widths of 1.5 GeV and 2.5 GeV, respectively, and the last two with a width of 5 GeV each, making the total number of $E^{obs}_\mu$ bins 15. The $\cos\theta^{obs}_\mu$ and $E'^{obs}_{had}$ bins are kept the same as in Ref.~\cite{3dMMD}. Thus the same bins are used in the overlapping energy range, and the additional bin sizes were optimised to obtain a reasonable event rate as well as sensitivity to the oscillation parameters. As mentioned above, the number of hadron bins was retained as before, as was the energy range: no extension of hadron energies beyond $E'^{obs}_{had} = 15$ GeV was used, since this gave only a marginal improvement in $\chi^2$, while ICAL's sensitivity to hadrons at higher energies, in terms of the number of hits in the detector, tends to saturate \cite{hres1}. \subsection{Number of events}\label{nevt} The true number of oscillated events is given by: \begin{eqnarray} \nonumber N_{\mu^-} & = & N_{\mu^-}^{0}\times P_{\mu\mu} + N_{e^-}^{0}\times P_{e\mu}~, \\ \nonumber N_{\mu^+} & = & N_{\mu^+}^{0}\times \overline{P}_{\mu\mu} + N_{e^+}^{0}\times \overline{P}_{e\mu}~, \end{eqnarray} where $P_{\alpha\beta}$ is the oscillation probability of flavour $\alpha$ to flavour $\beta$, and $N_{\mu^{\pm}}^0$ and $N_{e^{\pm}}^0$ refer to the unoscillated muon and swapped-muon events generated by NUANCE, arising from atmospheric $\nu_\mu$ that survive ($\nu_\mu \to \nu_\mu$) and atmospheric $\nu_e$ that oscillate ($\nu_e \to \nu_\mu$) before interacting in ICAL, corresponding to the two terms in Eq.~\ref{toteve}.
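In pseudo-code, the per-bin combination above reads as follows (a Python sketch with bin-averaged probabilities for brevity; in the analysis itself the oscillation weights are applied event by event):
\begin{verbatim}
def oscillated_counts(N0_mu, N0_e, P_mumu, P_emu,
                      N0_mubar, N0_ebar, Pb_mumu, Pb_emu):
    """Combine unoscillated muon and swapped-muon counts (per bin)."""
    N_muminus = N0_mu * P_mumu + N0_e * P_emu          # nu_mu CC events
    N_muplus = N0_mubar * Pb_mumu + N0_ebar * Pb_emu   # nubar_mu CC events
    return N_muminus, N_muplus
\end{verbatim}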
The number of events per bin, including the charge-misidentified ones, is given as \begin{eqnarray} N^{tot}_{\mu^-}(E^{obs}_\mu,\cos\theta^{obs}_\mu) & = & N_{\mu^-} \epsilon_{rec} \epsilon_{cid} + N_{\mu^+} \epsilon_{rec} (1-\epsilon_{cid})~, \\ \nonumber N^{tot}_{\mu^+}(E^{obs}_\mu,\cos\theta^{obs}_\mu) & = & N_{\mu^+} \epsilon_{rec} \epsilon_{cid} + N_{\mu^-} \epsilon_{rec} (1-\epsilon_{cid})~, \end{eqnarray} where $N^{tot}_{\mu^-}$ ($N^{tot}_{\mu^+}$) is the total number of oscillated $\nu_\mu$ ($\overline{\nu}_\mu$) CC muon neutrino events observed in the bin $(E^{obs}_\mu,\cos\theta^{obs}_\mu)$. The quantity $\epsilon_{rec}$ is the reconstruction efficiency of muons with a given energy and direction and $\epsilon_{cid}$ is the relative charge identification efficiency of the same. The reconstruction and charge identification efficiencies for $\mu^-$ and $\mu^+$ have been taken to be the same; studies show \cite{mupaper1} that they are only marginally different in a few energy--$\cos\theta$ bins. Finally, a bin is included in the analysis only if it contains at least one event. Now this 1000 year sample, oscillated according to the central values of the oscillation parameters listed in Table~\ref{osc-par-3sig}, is scaled to the required number of years to generate the ``data''. The current precision analysis is done for 10 years of exposure of the 50 kton ICAL (500 kton year). In order to generate the ``theory'' for comparison with the ``data'' in the $\chi^2$ analysis, the oscillation parameters are varied within their $3\sigma$ ranges and the aforementioned procedure is repeated. \section{$\chi^2$ analysis}\label{chisq} Systematic uncertainties play a very important role in determining the sensitivity to oscillation parameters in any experiment. The inclusion of these uncertainties always gives a worse $\chi^2$ than the one obtained with no uncertainties at all. In this new analysis, an extra systematic uncertainty compared to the older analyses is included: the uncertainty on the neutrino--antineutrino flux ratio. This has been considered for the first time in such an analysis and will be seen to have a significant impact because ICAL is a magnetised detector that can separate $\mu^-$ and $\mu^+$ events. With the inclusion of this uncertainty, the $\chi^2$ can no longer be expressed as a sum of the separate contributions of neutrino and anti-neutrino events. When the systematic errors are implemented using the usual method of pulls \cite{kameda,ishitsuka,maltoni,kamland,sbnufact}, we have \begin{eqnarray} \chi^2_{11} & = & \min_{\xi_l^{\pm},\,\xi_6} \sum^{N_{E^{obs}_{\mu}}}_{i=1}\sum^{N_{\cos\theta^{obs}_{\mu}}}_{j=1} \left(\sum^{N_{E'^{obs}_{had}}}_{k=1}\right) 2\left[ \left(T^{+}_{ij(k)} - D^{+}_{ij(k)} \right) - D^{+}_{ij(k)} \ln \left( \frac{T^{+}_{ij(k)}}{D^{+}_{ij(k)}} \right) \right] + \nonumber \\ & & 2\left[\left(T^{-}_{ij(k)} - D^{-}_{ij(k)}\right) - D^{-}_{ij(k)} \ln\left(\frac{T^{-}_{ij(k)}}{D^{-}_{ij(k)}}\right)\right] + \sum^{5}_{l^{+}=1} \xi^{2}_{l^{+}} + \sum^{5}_{l^{-}=1} \xi^{2}_{l^{-}} + \xi^{2}_6~, \label{chisq-11p} \end{eqnarray} where $i,j,k$ run over the muon energy, muon angle and hadron energy bins, and the parenthesised sum over $k$ applies only to the 3D analysis.
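A minimal numerical transcription of Eq.~\ref{chisq-11p} is sketched below (Python; the flattened bin arrays, the shapes of the systematic coefficient matrices and the use of a generic minimiser for the pulls are our illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_poisson(T, D):
    """2[(T - D) - D ln(T/D)] summed over bins; D = 0 bins give 2T."""
    T = np.clip(T, 1e-9, None)
    term = 2.0 * (T - D)
    mask = D > 0
    term[mask] += 2.0 * D[mask] * np.log(D[mask] / T[mask])
    return term.sum()

def chi2_11(T0p, T0m, Dp, Dm, pi_nu, pi_nubar, pi6=0.025):
    """Minimise over the 11 pulls: 5 nu + 5 nubar + the flux-ratio pull.
    pi_nu, pi_nubar: (n_bins, 5); T0p, T0m, Dp, Dm: (n_bins,)."""
    def cost(xi):
        xp, xm, x6 = xi[:5], xi[5:10], xi[10]
        Tp = T0p * (1.0 + pi_nu @ xp + pi6 * x6)
        Tm = T0m * (1.0 + pi_nubar @ xm - pi6 * x6)
        return chi2_poisson(Tp, Dp) + chi2_poisson(Tm, Dm) + np.sum(xi**2)
    return minimize(cost, np.zeros(11), method="Nelder-Mead").fun
\end{verbatim}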
The number of theory (expected) events in each bin, with systematic errors, is given by \begin{eqnarray} T^{+}_{ij(k)} & = & T^{0+}_{ij(k)}\left(1+\sum^{5}_{l^{+}=1} \pi^{l^{+}}_{ij(k)}\xi_{l^{+}}+\pi_6\xi_6\right)~, \\ \nonumber T^{-}_{ij(k)} &= & T^{0-}_{ij(k)}\left(1+\sum^{5}_{l^{-}=1} \pi^{l^{-}}_{ij(k)}\xi_{l^{-}}-\pi_6\xi_6\right)~, \label{td-pi6xi6} \end{eqnarray} where $T^{0\pm}_{ij(k)}$ is the corresponding number of events without systematic errors, $D^{\pm}_{ij(k)}$ is the number of ``data'' (observed) events in each bin, and $\xi_{l^{\pm}}$ are the pulls corresponding to the same five systematic uncertainties, $l=1,\ldots,5$, for each of the neutrino and anti-neutrino contributions, as considered in the earlier analyses by the INO collaboration \cite{WP,TApre,TAhie,3dMMD}. The five systematic uncertainties comprise the flux normalisation uncertainty, the shape (spectral or energy dependence) uncertainty or ``tilt'', the zenith angle uncertainty, the cross-section uncertainty, and an overall systematic uncertainty due to the detector response (see Ref.~\cite{3dMMD} for details). The cross-section uncertainty is assumed to be process independent. At high energies, the cross-section is dominated by deep inelastic scattering (DIS), where the uncertainties are smaller; however, in the energy range of interest here, all processes (quasi-elastic (QE), resonance (RES) and DIS) contribute significantly, and so a common cross-section uncertainty is used. Here the values of $\pi^l$ are taken to be the same for neutrino and anti-neutrino events, i.e., $\pi^{l\pm} \equiv \pi^l,~l = 1, \ldots, 5$. The values used in this analysis are the same as those used in the earlier analyses by the INO collaboration \cite{TApre,TAhie,3dMMD,WP}: \begin{enumerate} \item $\pi_1=20$\% flux normalisation error, \item $\pi_2=10$\% cross section error, \item $\pi_3=5$\% tilt error, \item $\pi_4=5$\% zenith angle error, \item $\pi_5=5$\% overall systematics. \end{enumerate} In Eq.~\ref{chisq-11p}, $\xi_6$ is the 11th (additional) pull and $\pi_6$ is taken to be 2.5\%. The effect of the new pull can be understood by considering its contribution alone to the ratio of neutrino to anti-neutrino events: \begin{eqnarray} \frac{N^+}{N^-} & \simeq & \frac{T^{0+}}{T^{0-}} \frac{(1+\pi_6\xi_6)}{(1-\pi_6\xi_6)} \\ & \simeq & \frac{T^{0+}}{T^{0-}}(1+2\pi_6\xi_6)~. \label{pi6-xi6} \end{eqnarray} This pull therefore accounts for the uncertainty in the flux ratio: $2\pi_6$ corresponds to the 1$\sigma$ error (when $\xi_6=1$), giving a 1$\sigma$ error on the ratio of 5\%, consistent with Refs.~\cite{honda-paper,honda-gaisser}. In the earlier analyses with 10 pulls only, the pulls for $N^-$ and $N^+$ were independent, so that they could act in the same or opposite directions. The introduction of the 11th pull constrains the ratio and results in a (negative) correlation between the normalisations of the $T^+$ and $T^-$ events, as will become clear from the discussions presented in Sections~\ref{results} and \ref{eff-11-pull}. Without this pull, the total $\chi^2$ can simply be expressed as a sum of the $\mu^-$ and $\mu^+$ contributions: \begin{equation} \chi^2_{10} = \chi^2_{+}+\chi^2_{-}. \label{chisq-10p} \end{equation} Note that the observed muon events have contributions from both the $\Phi_\mu$ and $\Phi_e$ fluxes; here we have assumed the same systematic error on both the $\overline{\Phi}_\mu/\Phi_\mu$ and $\overline{\Phi}_e/\Phi_e$ ratios (although, in principle, they can be included separately).
We have also ignored the small differences due to a possible additional uncertainty in the $\Phi_\mu/\Phi_e$ flux ratio, since the contribution from the second term in Eq.~\ref{toteve}, i.e., from $\nu_e \to \nu_\mu$ oscillations, is subdominant. An 8\% prior at 1$\sigma$ is also added on $\sin^22\theta_{13}$, since this quantity is known to this accuracy \cite{dayabay1,dayabay2}. No prior is imposed on $\theta_{23}$ and $\Delta{m_{32}^2}$, since the precision measurements of these parameters are to be carried out with ICAL. The contribution to the $\chi^2$ due to the prior is defined as \begin{equation} \chi_{\rm prior}^2 = \left(\frac{\sin^22\theta_{13}- \sin^22\theta_{13}^{\rm true}} {\sigma(\sin^22\theta_{13})}\right)^2~, \label{chi2-prior} \end{equation} where $\sigma(\sin^22\theta_{13})$ = $0.08\times\sin^22\theta_{13}^{\rm true}$. Thus the total $\chi^2$ is defined as \begin{equation} \chi_{\rm ICAL}^{2} = \chi^2+\chi_{\rm prior}^2~, \label{chi2-ical} \end{equation} where $\chi^2$ corresponds to $\chi^2_{11}$ (the new analysis) or $\chi^2_{10}$ (the repeat of the older analysis with the extended energy range), as appropriate. During the $\chi^2$ minimisation, $\chi_{\rm ICAL}^{2}$ is first minimised with respect to the pull variables $\xi_l$ for a given set of oscillation parameters, and then marginalised over the ranges of the oscillation parameters $\sin^2\theta_{23}$, $\Delta m^2_{\rm eff}$ and $\sin^22\theta_{13}$ given in Table~\ref{osc-par-3sig}. The third column of the table shows the 3$\sigma$ ranges over which the parameter values are varied. These, along with the best fit values of $\theta_{12}$ and $\Delta{m_{21}^2}$, are obtained from the global fits in Refs.~\cite{nufitpage,nufit-2012,forero,gonzalezgarcia,capozzi}. As mentioned earlier, the parameter $\delta_{CP}$ is kept fixed at zero throughout this analysis. The relative precision achieved on a parameter $\lambda$ (here $\lambda$ being $\sin^2\theta_{23}$ or $|\Delta{m^2_{\rm eff}}|$) at 1$\sigma$ is expressed as \begin{equation} p(\lambda) = \frac{\lambda_{\hbox{max-}2\sigma}- \lambda_{\hbox{min-}2\sigma}}{4\lambda_{true}}~, \label{sig1-pre} \end{equation} where $\lambda_{\hbox{max-}2\sigma}$ and $\lambda_{\hbox{min-}2\sigma}$ are the maximum and minimum allowed values of $\lambda$ at 2$\sigma$ and $\lambda_{true}$ is the true choice. The statistical significance of a result is denoted by $n\sigma$, where $n=\sqrt{\Delta\chi^2}$, with \begin{equation} \Delta\chi^2(\lambda) = \chi^2_{ICAL}(\lambda)-\chi^2_0~, \end{equation} $\chi^2_0$ being the minimum value of $\chi^2_{ICAL}$ in the allowed parameter range. With no statistical fluctuations, $\chi^2_0 = 0$. \section{Results: precision measurement of parameters}\label{results} The precision measurement of the oscillation parameters in the atmospheric sector in the energy range 0.5--25 GeV is probed in the current analysis using a 500 kton year exposure of the ICAL detector. Comparisons with previous analyses in the 1--11 GeV energy range are also made, to illustrate the improvement in sensitivity with the new analysis. The sensitivities to the oscillation parameters $\theta_{23}$ and $|\Delta{m_{\rm eff}^2}|$ are found separately, with the other parameter and $\theta_{13}$ marginalised over their 3$\sigma$ ranges. Marginalisation is also done over the two possible mass hierarchies; since atmospheric neutrino events in ICAL are sensitive to the mass hierarchy (also discussed below), the wrong hierarchy always gives a worse value of $\chi^2$ during marginalisation.
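As an aside, the extraction of the precision defined in Eq.~\ref{sig1-pre} from a $\Delta\chi^2$ curve can be sketched as follows (a simple Python illustration on a hypothetical parameter grid):
\begin{verbatim}
import numpy as np

def one_sigma_precision(lam_grid, dchi2, lam_true):
    """Relative 1-sigma precision: the 2-sigma allowed range of lambda
    (Delta chi^2 <= 4 for one parameter) divided by 4*lam_true."""
    lam_grid = np.asarray(lam_grid)
    allowed = lam_grid[np.asarray(dchi2) <= 4.0]
    return (allowed.max() - allowed.min()) / (4.0 * lam_true)
\end{verbatim}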
Typically, normal hierarchy (NH) is taken to be the true hierarchy (a couple of results with inverted hierarchy (IH) are shown for completeness) and 500 kton years of exposure is used (10 years of running the experiment). \subsection{Precision measurement of $\sin^2\theta_{23}$}\label{pre-stt23} The relative 1$\sigma$ precision on $\sin^2\theta_{23}$ obtained from the different analyses, with normal hierarchy as the true hierarchy, is shown in Fig.~\ref{pre-parabolas-stt23} for the different cases, namely the combinations of energy ranges, binning schemes and numbers of pulls. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{tt23-2D-chi2-1.eps} \hfill \includegraphics[width=0.49\textwidth]{tt23-3D-chi2.eps} \caption{$\Delta\chi^2_{\rm ICAL}$ at different values of $\sin^2\theta_{23}$ with true $\sin^2\theta_{23}=0.5$ and with normal hierarchy as the true hierarchy. The left panel shows the results for the muon only (2D) analysis and the right panel those for the analysis with hadrons (3D), for all the combinations of energy ranges, pulls and binning schemes.} \label{pre-parabolas-stt23} \end{figure} The other parameters, $|\Delta{m^2_{\rm eff}}|$ and $\theta_{13}$, have been marginalised over their 3$\sigma$ ranges as given in Table~\ref{osc-par-3sig}. The percentage precisions on $\sin^2\theta_{23}$ at 1$\sigma$ obtained with the different analyses are shown in Table~\ref{pre-tab-stt23}. \begin{table}[htb]
\section{Introduction} \IEEEPARstart{F}{ifth} generation (5G) cellular communications technology for commercial use is currently being deployed in various countries. Meanwhile, research into sixth generation (6G) systems is already under way, as 6G is expected to meet the needs of ultra-high capacity, high reliability and low latency \cite{Rajatheva2020}. Among existing and future technologies, multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) will continue to play an important role in facilitating development and deployment. As an effective physical layer solution, OFDM offers a spectral efficiency (SE) superior to that of conventional single carrier systems, and it can combat inter-symbol interference (ISI) by transforming a frequency-selective fading channel into many parallel flat-fading subchannels. However, OFDM systems are highly sensitive to carrier frequency offset (CFO), which can destroy the important orthogonality between subcarriers and thus degrade the bit error rate (BER) performance. Therefore, accurate estimation of the CFO is crucial to OFDM systems. Meanwhile, symbol timing offset (STO) can result in ISI and in a phase rotation proportional to the subcarrier index at the FFT output of an OFDM receiver. The traditional approach towards the estimation of both the STO (also known as timing synchronization) and the CFO (also known as frequency synchronization) involves sending a preamble at the OFDM transmitters and processing the signals at the receivers. These signal processing techniques have been studied extensively, and many seminal articles have been published since the 1990s. P.H. Moose addressed the issue of receiver frequency synchronization by proposing an algorithm for the maximum likelihood estimate (MLE) of the CFO using the discrete Fourier transform (DFT) of a repeated symbol, and derived a lower bound for the signal-to-noise ratio (SNR) \cite{Moose1994}. In \cite{Schmidl1997}, a method for rapid and robust frequency and timing synchronization for OFDM was presented by Schmidl \textit{et al.} Then, an implementation of a MIMO-OFDM-based wireless local area network (WLAN) system was demonstrated in \cite{Zelst2004}, in which a simple MIMO extension of Schmidl's algorithm \cite{Schmidl1997}, proposed in \cite{Mody2001}, is deployed in a practical system. In \cite{1703862}, the authors address the problem of training design for frequency-selective channel and CFO estimation in single- and multiple-antenna systems under different energy-distribution constraints. In \cite{7582444} and \cite{7933261}, a framework referred to as sparse blind CFO estimation for interleaved uplink orthogonal frequency-division multiple access (OFDMA) and a sparse recovery assisted CFO estimator for the uplink OFDMA were respectively proposed. Timing and frequency synchronization, as well as channel estimation, can also be carried out jointly to achieve better performance: \cite{7982654} presents a novel preamble-aided method for the joint estimation of the timing, carrier frequency offset and channel parameters for OFDM, while \cite{8013752} considered the joint maximum likelihood estimator for the channel impulse response (CIR) and the CFO. In \cite{Nasir2016}, a comprehensive literature review and classification of the recent research progress in timing and carrier synchronization was presented.
However, owing to the effects of fading and thermal noise, errors nearly always remain in the estimates of the STO and CFO; these are known as the residual STO (RSTO) and residual CFO (RCFO). The influence of STO errors on channel interpolation is analyzed in \cite{Chang2008}. Even a small RCFO can result in amplitude and phase distortion as well as inter-carrier interference (ICI) among subcarriers. Traditionally, in order to mitigate the impact of the RCFO, channel tracking methods are employed, realized by inserting known pilots into specific subcarriers. However, this method reduces the system SE. As regards the RSTO, compensation methods for channel correction need to be used. To reduce the sensitivity to synchronization errors, \cite{6963465} develops conditions for the selection of appropriate Zadoff-Chu sequences. In recent years, challenges to the traditional methods have emerged from data-based approaches relying on machine learning \cite{sonal2016MachineLF, van2019deep, li2018carrier, he2020improved}. A popular direction is machine learning-based end-to-end communications systems. Based on the idea of the autoencoder, Dörner \textit{et al.} proposed a learning-based communication system in which the task of synchronization is addressed through a neural network \cite{Dorner2018}. Similarly, in \cite{Wu2019}, a sampling time synchronization model using a convolutional neural network (CNN) for end-to-end communications systems is introduced. In \cite{Qing2020}, an extreme learning machine (ELM)-based frame synchronization method for a burst-mode communication system was proposed.~Finally, \cite{Ninkovic2020}~investigates a deep neural network (DNN)-based solution for packet detection and CFO estimation. Although the above-mentioned machine learning-based schemes achieve better performance or robustness than traditional methods, their shortcomings lead to serious difficulties in practical implementation. We summarize the challenges and deficiencies of these schemes as follows. \begin{itemize} \item In previous machine learning-related works, the synchronization step is ignored and perfect synchronization is assumed. Since the trained parameters of the neural network depend strongly on the input data, these methods fail when the test signal contains STO and CFO. \item The mathematical theory of communication was exhaustively explored by Shannon in \cite{Shannon1948}, where the fundamental problem of communication is described as ``reproducing at one point either exactly or approximately a message selected at another point''. However, autoencoder-based methods \cite{Dorner2018,Wu2019} present a ``chicken and egg'' problem, because significant ``known information'' is actually necessary to train the autoencoder, making them impractical. \item Thus, most of the previous works fall into this ``chicken or the egg'' causality dilemma \cite{Dorner2018,Wu2019,Qing2020,Ninkovic2020}. Specifically, in the training stage, labeled data with the exact timing location and CFO under given channel models are necessary. Unfortunately, it is impossible to acquire such labeled data under real channel environments. \item The common disadvantage of most current learning-based techniques lies in their computational complexity, because they are based on DNNs. DNNs usually have many hidden layers, which entails prohibitive computational complexity.
Even when an ELM-based scheme with only one hidden layer is applied, as in \cite{Qing2020}, a significant amount of training data under a given channel realization is indispensable. \end{itemize} Motivated by the ``chicken or the egg'' causality dilemma and the prohibitive computational complexity of machine learning-based schemes, in this paper we propose a robust ELM-based fine timing and frequency synchronization scheme to deal with the challenges above. The main contributions of this paper are summarized as follows. \begin{itemize} \item For MIMO-OFDM, we combine an ELM with a traditional STO estimator. The fine timing synchronization can then be carried out by the ELM without the need for any prior information about the channel. \item We propose a robust ELM-based scheme to realize RCFO estimation without the need for additional prior information, where the ELM learns the mapping between the preamble corrupted by both RSTO and RCFO and the corresponding offset values. \item We provide a performance analysis of the proposed learning scheme in different cases. Specifically, computer simulation results show that the proposed scheme is superior to traditional STO and CFO estimation methods in terms of the mean squared error (MSE). Extensive simulation results and comparisons also demonstrate the robustness and (machine learning) generalization ability of the proposed scheme. Finally, we give a complexity analysis of the complex ELM and DNN methods. \end{itemize} The remainder of this paper is organized as follows. The signal model of the MIMO-OFDM system and the traditional timing and frequency synchronization for MIMO-OFDM are presented in Sections \ref{S2} and \ref{S3}, respectively. In Section \ref{S4}, we propose a scheme that incorporates the ELM into the traditional MIMO-OFDM system, in which the ELM is used to estimate the RSTO and RCFO. Numerical results and analysis evaluating the performance of the proposed scheme are provided in Section \ref{S5}, followed by conclusions in Section \ref{S6}. \textit{Notations:}~The notations adopted in the paper are as follows. We use boldface lowercase letters $\bf{x}$ and capital letters $\bf{X}$ to denote column vectors and matrices, respectively. The superscripts $^{-1}$,~$^*$,~$^T$,~$^H$ and~$^\dagger$ stand for the inverse, conjugate, transpose,~Hermitian transpose and Moore-Penrose generalized matrix inverse, respectively.~$\otimes$,~$ \odot $,~$\circledast$, ${E}\left\{\cdot\right\}$,~$ \lfloor \cdot \rfloor $~and $j=\sqrt{-1}$ denote the Kronecker product,~Hadamard product,~cyclic convolution, expectation operation, floor function and the imaginary unit, respectively.~Note that $ \angle \left( \cdot \right) $~returns the phase angle of a complex number. Finally, $ {\rm{repmat}}~({\bf{A}},m,n) $ returns an array containing $ m $ and $ n $ copies of $ {\bf{A}} $ in the column and row dimensions, respectively. \section{MIMO-OFDM Signal Model} \label{S2} Let us consider a MIMO-OFDM system with $ N_t $ transmit (TX) and $ N_r $ receive (RX) antennas, usually denoted as an~$ N_t \times N_r $~system.
Without loss of generality, we consider the frequency-domain MIMO-OFDM signal model, which is directly given as~\cite{Zelst2004} \begin{equation} {\bf{\tilde x}}\left( a \right) = {\bf{\tilde H\tilde s}}\left( a \right) + {\bf{\tilde n}}\left( a \right), \end{equation} where the $ N_rN_c $-dimensional complex vector $ {\bf{\tilde x}}\left( a \right) $ represents the frequency-domain received signal, $ {\bf{\tilde s}}\left( a \right) = {\left[ {{\bf{s}}{{\left( {0,a} \right)}^T} \cdots {\bf{s}}{{\left( {{N_c} - 1,a} \right)}^T}} \right]^T} \in \mathbb{C}{^{{N_tN_c} \times {1}}} $, and $ {\bf{s}}\left( {k,a} \right) $ represents the $ N_t $-dimensional complex vector transmitted on the $ k $th subcarrier of the $ a $th MIMO-OFDM symbol, whose $ p $th element $ {S_p}\left( {k,a} \right) $ is transmitted on the $ p $th TX antenna. $ {\bf{\tilde n}}\left( a \right) $ represents the frequency-domain noise vector, with i.i.d. zero-mean, complex Gaussian elements with variance $ 0.5\sigma _n^2 $ per dimension, and the channel frequency response is represented by the block diagonal matrix $ {{\bf{\tilde H}}} $ as follows: \begin{equation} {\bf{\tilde H}} = \left[ {\begin{array}{*{20}{c}} {{\bf{H}}\left( 0 \right)}&{}&0\\ {}& \ddots &{}\\ 0&{}&{{\bf{H}}\left( {{N_c} - 1} \right)} \end{array}} \right]. \end{equation} Here, $ {\bf{H}}\left( k \right) \in \mathbb{C}{^{{N_r} \times {N_t}}} $ represents the MIMO channel matrix for the $ k $th subcarrier and can be shown to be \begin{equation} {\bf{H}}\left( k \right) = \sum\limits_{l = 0}^{L - 1} {{\bf{G}}\left( l \right)\exp \left( { - j2\pi \frac{{kl}}{{{N_c}}}} \right)}, \end{equation} where $ {\bf{G}}\left( l \right) \in \mathbb{C}{^{{N_r} \times {N_t}}} $ is the $ l $th tap of the MIMO channel impulse response (CIR), whose $ \left( {q,p} \right) $th element is $ {{g_{q,p}}\left( l \right)} $. We assume that these taps are independent, zero-mean, complex Gaussian random variables with variance $ 0.5{P_l} $ per dimension.~The ensemble $ {P_l},~l = \left\{ {0, \cdots ,L - 1} \right\} $, is called the power delay profile (PDP) and its total power is assumed to be normalized to $ \sigma _c^2 = 1 $.~For each $ k $th subcarrier, the signal model can be written in its flat-fading form as \begin{equation} {\bf{x}}\left( {k,a} \right) = {\bf{H}}\left( k \right){\bf{s}}\left( {k,a} \right) + {\bf{n}}\left( {k,a} \right). \end{equation} \section{Traditional Timing and Frequency Synchronization for MIMO-OFDM} \label{S3} We consider the traditional preamble pattern \cite{Schmidl1997} and synchronization method \cite{Zelst2004} in this section. \begin{figure}[t] \centering \includegraphics[width=3.5in]{PreambleStructure.eps} \caption{Structure of a time orthogonal preamble for a $ 2\times2 $~MIMO-OFDM system.} \label{PreambleStructure} \end{figure} As shown in Fig. \ref{PreambleStructure}, a time orthogonal preamble is chosen in order to estimate the subchannels between the different TX and RX antennas. The length of the preamble for all the TX antennas is $ N_{\rm{train}}=N_g+N_c $, where $ N_g $ and $ N_c $ denote the lengths of the cyclic prefix (CP) and of one OFDM symbol, respectively. $ {\bf{c}}_{p,1} $ and $ {\bf{c}}_{p,2} $ are different pseudo-noise (PN) sequences transmitted by the $ p $th TX antenna. The first part of the preamble, $ \bm{c}_{p,1} $, comprises two identical halves in the time domain and is used for symbol timing and fractional CFO estimation.
This kind of time-domain identical structure can be obtained by transmitting a PN sequence only on the even frequencies while zeros are placed on the odd frequencies. The second part of the preamble, $ \bm{c}_{p,2} $, contains a PN sequence on its odd frequencies to measure these subchannels and another PN sequence on the even frequencies to help determine the frequency offset. \subsection{Timing Synchronization} Before the CFO is estimated, the STO~$\left( \tau \right)$~needs to be estimated.~The method for timing synchronization is given by \begin{equation} \hat \tau = \mathop {{\rm{argmax}}}\limits_d \frac{1}{N_g}\sum\limits_{m = 0}^{N_g - 1} {\left[ {\frac{{\sum\limits_{p = 1}^{{N_t}} {{{\left| {\Lambda \left( {{d_p} + m} \right)} \right|}^2}} }}{{\sum\limits_{p = 1}^{{N_t}} {P{{\left( {{d_p} + m} \right)}^2}} }}} \right]} , \end{equation} where ${d_p} = d - \left( {{N_t} - p} \right){N_{{\rm{train}}}}$ and $ d $ is a discrete time variable.~$\Lambda \left( d \right)$ is the complex correlation of the first part of the preamble,~$ \bf{c_1} $,~given by \begin{equation} \Lambda \left( d \right) = \sum\limits_{i = d - \left( {{N_c}/2 - 1} \right)}^d {\sum\limits_{q = 1}^{{N_r}} {r_q^*\left( {i - {N_c}/2} \right){r_q}\left( i \right)} } \end{equation} and $ r_q(i) $ is the $ i $th sample of the received signal on the $ q $th antenna. The received energy for the second half-symbol of $ \bf{c_1} $,~$P\left( d \right)$,~is defined by \begin{equation} P\left( d \right) = \sum\limits_{i = d - \left( {{N_c}/2 - 1} \right)}^d {\sum\limits_{q = 1}^{{N_r}} {r_q^*\left( i \right){r_q}\left( i \right)} } . \end{equation} Note that $ d $~is a time index corresponding to the first sample in a window of $ N_c $ samples, and $ \frac{1}{N_g}\sum\limits_{m = 0}^{N_g - 1} {\left( \cdot \right)} $ is an $N_g$-point moving average. \subsection{Frequency Synchronization} In this subsection, the CFO estimation method is based on \cite{Schmidl1997}. We define the normalized CFO,~$ \varepsilon $,~as the ratio of the CFO $ f_{\rm{offset}} $ to the subcarrier spacing $ \Delta f $, i.e., $ \varepsilon = f_{\rm{offset}}/ \Delta f$. Let $ \varepsilon_i $ and $ \varepsilon_f $ denote the integer and fractional parts of $ \varepsilon $, respectively, so that $ \varepsilon = \varepsilon_i + \varepsilon_f $, where $ \varepsilon_i = \lfloor \varepsilon \rfloor $.~If~$ \left| \varepsilon \right| \le 1 $, the CFO can be estimated directly as \begin{equation} \hat \varepsilon {\rm{ = }}\frac{{\hat \theta }}{\pi }{\rm{ = }}\frac{{\angle \left[ {\sum\limits_{p = 1}^{{N_t}} {\Lambda \left( {\hat \tau } \right)} } \right]}}{\pi }, \label{Estimation CFO} \end{equation} where $ \hat{\theta} $ denotes the phase of the summation of the complex correlations of the preambles originating from the different transmitters. When $ \left| \varepsilon \right| > 1 $, the PN sequence on the even frequencies of $ \bf{c}_2 $ is needed and the CFO is given by \begin{equation} \varepsilon {\rm{ = }}\frac{\theta }{\pi } + 2g, \end{equation} where $ g $ is an integer. By partially correcting the frequency offset, adjacent carrier interference can be avoided, and the remaining offset of $ 2g $ can then be found. In order to estimate $ g $, the received preambles at the $ q $th RX antenna from the $ p $th TX antenna, corresponding to $ {\bf{c}}_{p,1} $ and $ {\bf{c}}_{p,2} $, need to be frequency compensated by $ \hat{\theta} $ first and then transformed into the frequency domain as $ {\bf{x}}_{q,p,1} $ and $ {\bf{x}}_{q,p,2} $, respectively.
Then, $ g $ can be estimated by the difference correlation as follows: \begin{equation} \resizebox{.95\hsize}{!}{$ \hat g = \mathop {\arg \max }\limits_g \frac{{\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {{{\left| {\sum\limits_{k \in {{\rm X}_{{\rm{Even}}}}} {X_{q,p,1}^*\left[ {k + 2g} \right]v_p^*\left[ k \right]{X_{q,p,2}}\left[ {k + 2g} \right]} } \right|}^2}} } }}{{2\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {{{\left( {\sum\limits_{k \in {{\rm X}_{{\rm{Even}}}}} {{{\left| {{X_{q,p,2}}\left[ k \right]} \right|}^2}} } \right)}^2}} } }}, $} \end{equation} where $X_{\rm{Even}}$ represents the subset of even frequency indices and $ {v_p}\left[ k \right] = \sqrt 2 {c_{p,2}}\left[ k \right]/{c_{p,1}}\left[ k \right],~k \in X_{\rm{Even}} $. Finally, the estimate can be written as \begin{equation} \hat \varepsilon = \frac{{\hat \theta }}{\pi } + 2\hat g. \end{equation} \section{ELM-Based RSTO and RCFO Estimation} \label{S4} Due to the fading channel and thermal noise, small but significant RSTO and RCFO will always exist and degrade the performance of MIMO-OFDM systems. In order to perform synchronization more accurately, ELM-based RSTO and RCFO estimation methods are introduced in this section. \subsection{ELM-Based RSTO Estimation} Inspired by the idea that a neural network (NN) can learn from appropriate data, we use an NN to further exploit the implicit information inside the preamble to estimate the RSTO and RCFO. Compared with a DNN, an ELM has only a single hidden layer and thus a lower computational complexity, while still achieving excellent performance \cite{Liu2019}. For this reason, we choose to employ an ELM in this paper. Specifically, and most importantly, we expect that the relationship between the corrupted preamble signal and the synchronization offset can be ``learnt'' by the ELM. Therefore, it is necessary to first explain the effect of the STO. Depending on the location of the estimated starting point of an OFDM symbol, the effect of the STO can vary. Fig. \ref{EffectsOfSTO} shows four different cases of timing offset, in which the estimated starting point is perfectly accurate ({\bf{Case I}}), a little early ({\bf{Case II}}), too early ({\bf{Case III}}), or a little late compared to the exact timing ({\bf{Case IV}}). $ T_c $, $ T_g $ and $ \tau_{\rm{max}} $ represent the duration of the OFDM symbol, the CP and the maximum excess delay, respectively \cite{cho2010mimo}. \begin{figure}[t] \centering \includegraphics[width=3.6in]{EffectsOfSTO.eps} \caption{Four different cases of an OFDM symbol starting point subject to STO.} \label{EffectsOfSTO} \end{figure} In {\bf{Case II}}, the channel response to the $ (a-1) $th OFDM symbol does not overlap with the $ a $th OFDM symbol, so no ISI from the previous symbol is incurred. In this case, the received signal in the frequency domain is obtained by taking the FFT of the time domain received samples: \begin{equation} \label{CaseII} {\bf{x}}\left( {k,a} \right) = {\bf{H}}\left( k \right){\bf{s}}\left( {k,a} \right){e^{j2\pi k\tau /N_c}} + {\bf{n}}\left( {k,a} \right), \end{equation} where $ \tau $ denotes the STO. Equation (\ref{CaseII}) implies that the orthogonality among the subcarrier frequency components is completely preserved. However, there exists a phase offset that is proportional to the STO $ \tau $ and the subcarrier index $ k $, forcing the signal constellation to be rotated around the origin in the complex plane.
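The pure phase rotation in Eq.~(\ref{CaseII}) is easily verified numerically, as in the following single-antenna, noise-free sketch (Python; the sizes are hypothetical and the sign of the phase ramp depends on the sign convention adopted for $\tau$):
\begin{verbatim}
import numpy as np

N_c, N_g, tau = 64, 16, 3      # FFT size, CP length, early start (Case II)

s = np.exp(2j * np.pi * np.random.rand(N_c))  # unit-modulus symbols
x = np.fft.ifft(s)
x_cp = np.concatenate([x[-N_g:], x])          # prepend the cyclic prefix

# the FFT window opens tau samples early, i.e. inside the CP: no ISI
y = x_cp[N_g - tau : N_g - tau + N_c]
Y = np.fft.fft(y)

k = np.arange(N_c)
print(np.allclose(Y, s * np.exp(-2j * np.pi * k * tau / N_c)))  # True
\end{verbatim}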
In {\bf{Case III}} and {\bf{Case IV}}, the orthogonality among the subcarrier components is destroyed by the ISI from the previous and the succeeding OFDM symbols, respectively. In addition, ICI occurs. A quantitative analysis of the ISI and ICI resulting from the STO has been exhaustively presented in \cite{cho2010mimo}. Therefore, a natural idea is to use an ELM to learn the relationship between the received preamble (with ISI and ICI) and the RSTO $ {\tau _{\rm{R}}} $, where $ {\tau _{\rm{R}}}{\rm{ = }}\tau - \hat \tau $. Compared with a DNN, an ELM is a general form of single-hidden-layer feedforward neural network in which the input weights and hidden layer biases are randomly generated. In other words, the hidden layer outputs are always known. Hence, this structure allows the analytical calculation of the output weights during the training phase by means of least-squares solutions. As a result, the ELM has a competitive advantage in terms of computational complexity. \begin{figure}[t] \centering \includegraphics[width=3.5in]{ELM-basedRSTOestimator.eps} \caption{The structure of an ELM-based RSTO estimator.} \label{ELM-basedRSTOestimator} \end{figure} The structure of a complex ELM-based RSTO estimator is illustrated in Fig. \ref{ELM-basedRSTOestimator}.~The ELM-based RSTO estimator has $ 2N_rN_tN_c $ input neurons, $ \tilde N $ hidden neurons and $ N_o $ output neurons. The input, output and weights of the ELM can be fully complex.~$ {\hat X_{q,p,i}}\left[ k \right] $ is the input in the prediction stage, which denotes the equalized frequency-domain received signal at the $ q $th RX antenna from the $ p $th TX antenna corresponding to $ {{\bf{c}}_{p,i}} $. The data formats of the input in the training and prediction stages are given in \eqref{FFTandRemoveCP}-\eqref{Cp=repmat} and \eqref{X}, respectively.~The $ {\rm{real}}\left( \cdot \right) $ block returns the real part of the elements of the complex array and the $ \arg \max \left( \cdot \right) $ block returns the indices of the maximum values. The operation of the ELM-based RSTO estimator can be divided into two main stages: training and prediction. \subsubsection{Training Stage}In this stage, the training set $ {\bf{N}} = \left\{ {\left( {{{{\bf{\tilde X}}}_n},{{\bf{O}}_n}} \right)|n = 1, \cdots ,N} \right\} $ is first generated, where the $ n $th input of the training set, $ {{\bf{\tilde X}}_n} \in {{\mathbb{C}}^{2{N_r}{N_t}{N_c} \times 1}}$,~denotes the combined vector of the preamble signal in the frequency domain, obtained by taking the FFT of the time domain samples with the corresponding RSTO. Here,~``\rm{index}'' represents an index array containing the different values of the RSTO; in this paper, $ {\rm{index}} = [ - {N_g}, \cdots ,{N_g}] $.~The target output $ {{\bf{O}}_n} $ is a one-hot vector encoding the corresponding RSTO $ {\tau _{{\rm{R,}}n}} $. For example, $ {\left[ {1,0,\cdots,0} \right]^T} $ represents $ {\tau _{\rm{R}}} = - {N_g} $ and $ {\left[ {0,\cdots,0,1} \right]^T} $ represents $ {\tau _{\rm{R}}} = {N_g} $.
Specifically, \begin{equation} {{{\bf{\tilde X}}}_n} = {\mathop{\rm FFT}\nolimits} \left( {{\mathop{\rm Remove}\nolimits} {\mathop{\rm CP}\nolimits} \left( {{{\bf{X}}_n}} \right)} \right), \label{FFTandRemoveCP} \end{equation} \begin{equation} {{\bf{X}}_n} = {\bf{\tilde c}}\left[ k \right] \circledast \delta \left[ {k - {\tau _{{\rm{R}},n}}} \right], \label{DelayPreamble} \end{equation} \begin{equation} {\bf{\tilde c}} = {\left[ {{{{\bf{\tilde c}}}_1}^T, \cdots ,{{{\bf{\tilde c}}}_p}^T, \cdots ,{{{\bf{\tilde c}}}_{{N_t}}}^T} \right]^T}, \end{equation} \begin{equation} {{\bf{\tilde c}}_p} = {\rm{repmat}}\left( {{{\left[ {{\rm{CP}}_{{{\bf{c}}_{p,1}}}^T,{{\bf{c}}_{p,1}}^T,{\rm{CP}}_{{{\bf{c}}_{p,2}}}^T,{{\bf{c}}_{p,2}}^T} \right]}^T},{N_r},1} \right), \label{Cp=repmat} \end{equation} and \begin{equation} {\tau _{{\rm{R}},n}} = {\rm{index}}\left[ {\mathop {\arg \max }\limits_{1 \le i \le 2{N_g} + 1} \left( {{o_i}} \right)} \right], \end{equation} where $ {{\bf{O}}_n} = {\left[ {{o_1},{o_2}, \cdots ,{o_{2{N_g} + 1}}} \right]^T} $. In Equation (\ref{FFTandRemoveCP}), the pseudo-functions ``RemoveCP'' and ``FFT'' represent, respectively, removing all the CPs and taking the $ N_c $-point fast Fourier transform. Note that, in Equation (\ref{DelayPreamble}),~$ \delta \left[ {k - {\tau _{{\rm{R}},n}}} \right] $ denotes a delayed Kronecker delta function. The absent elements of $ {{\bf{\tilde c}}} $ are filled by zero padding. In {\bf{Step 1}} of Algorithm \ref{alg:1} (see below), the complex input weights $ {\bm{\alpha}_k} $ and complex biases $ b_k $ (see also Fig.~\ref{ELM-basedRSTOestimator}) are generated from the uniform distribution $ U\left( { - 0.1,0.1} \right) $, where ${\bm{\alpha}}_k\in{\mathbb{C}}^{2N_rN_tN_c \times 1}$ is the input weight vector connecting the input neurons to the $k$th hidden neuron, $ {\bm{\alpha }} = \left[ {{{\bm{\alpha }}_1},\cdots, {{\bm{\alpha }}_k}, \cdots ,{{\bm{\alpha }}_{\tilde N}}} \right] $ and $ {\bf{b}} = \left[ {{b_1}, \cdots, {b_k}, \cdots ,{b_{\tilde N}}} \right] $. Once the input weights and biases are chosen, the output of the hidden layer can be given by \begin{equation} {{\bf{D}}_{{\rm{Training}}}} = {g_c}\left( {{{\bm{\alpha }}^T}{\bm{{\rm \tilde X}}} + {\bf{b}}} \right), \end{equation} where $ {\bf{\tilde X}} = \left[ {\begin{array}{*{20}{c}} {{{\bf{\tilde X}}_1}}&{{{\bf{\tilde X}}_2}}& \cdots &{{{\bf{\tilde X}}_N}} \end{array}} \right] \in {{\mathbb{C}}^{2{N_r}{N_t}{N_c} \times N }} $. We expect the output of the ELM to be close to the target output $ \bf{O} $, so that \begin{equation} {\bm{\beta }}{{\bf{D}}_{{\rm{Training}}}} = {\bf{O}}. \end{equation} Generally,~$ {\bm{\beta }} = \left[ {{{\bm{\beta }}_1}, \cdots ,{{\bm{\beta }}_k}, \cdots ,{{\bm{\beta }}_{\tilde N}}} \right] \in {{\mathbb{C}}^{{N_o} \times \tilde N}} $ and $ {{\bm{\beta }}_k} = {\left[ {{\beta _{k1}},{\beta _{k2}}, \cdots ,{\beta _{k{N_o}}}} \right]^T} \in {{\mathbb{C}}^{{N_o} \times 1}} $, where $ {{\bm{\beta }}_k} $ denotes the output weight vector connecting the $ k $th hidden neuron and the output neurons and $ N_o $ denotes the number of output neurons. For the ELM-based RSTO estimator, $ N_o=2N_g+1$. Under the criterion of minimizing the squared errors, the least squares (LS) solution is given by \begin{equation} {\bm{\hat \beta }} = \arg\mathop {\min }\limits_{\bm{\beta }} \left\| {{\bm{\beta }}{{\bf{D}}_{{\rm{Training}}}} - {\bf{O}}} \right\| = {\bf{OD}}_{{\rm{Training}}}^\dag .
\end{equation} The training algorithm for the ELM-based RSTO estimator can be summarized as follows: \begin{algorithm}[H] \caption{The Training Algorithm for an ELM-based RSTO Estimator} \label{alg:1} \begin{algorithmic} \STATE We are given a training set ${\bf{N}} = \left\{ {\left( {{{{\bf{\tilde X}}}_n},{{\bf{O}}_n}} \right)|n = 1, \cdots ,N} \right\}$, a complex activation function $g_c\left( \cdot \right)$, and the hidden neuron number $\tilde N$. ${\bf{\tilde X}}_n \in {{\mathbb{C}}^{2{N_r}{N_t}{N_c} \times 1}}$ and the one-hot vector $ {{\bf{O}}_n} $ correspond to the input and desired output of the ELM, respectively. \STATE {\bf{Step 1:}} Randomly choose the values of the complex input weights ${\bm{\alpha}}_k$ and the complex biases $b_k$, $k=1,\cdots,\tilde N$. \STATE {\bf{Step 2:}} Calculate the complex hidden layer output matrix ${\bf{D_{\rm{Training}}}}$. \STATE {\bf{Step 3:}} Calculate the complex output weight ${\bm{\beta }}$ using $\bm{\hat \beta}={\bf{O}}{\bf{D}}^\dag_{\rm{Training}} $, where $ {\bf{O}} \in {{\mathbb{C}}^{{N_o} \times N}} $. \end{algorithmic} \end{algorithm} \subsubsection{Prediction Stage} For the ELM-based RSTO estimator, we assume that LS channel estimation is used or that perfect CSI is known. Thus the equalized preamble can be given by\footnote{A minimum mean-square error (MMSE) channel estimator cannot be deployed before the STO is estimated, because the STO can degrade the performance of the MMSE channel estimator \cite{Beek}. However, the simulation results in Section \ref{S5} still include the ELM-based STO estimator with MMSE channel estimation. The MMSE channel estimator is introduced in the next subsection.} \begin{equation} {\hat X_{q,p,i}}\left[ k \right] = {X_{q,p,i}}\left[ k \right]/{{\bf{H}}_{q,p,i}}\left[ k \right]. \end{equation} The output of the hidden layer can be calculated as \begin{equation} {{\bf{D}}_{{\rm{Prediction}}}} = {g_c}\left( {{{\bm{\alpha }}^T}{{\bf{\hat x}}} + {\bf{b}}} \right), \label{Prediction} \end{equation} where \begin{equation} {\bf{\hat x}} = {\left[ {{\bf{\hat x}}_{_{1,1,1}}^T,{\bf{\hat x}}_{_{1,1,2}}^T, \cdots ,{\bf{\hat x}}_{_{{N_t},{N_r} - 1,1}}^T,{\bf{\hat x}}_{_{{N_t},{N_r} - 1,2}}^T,{\bf{\hat x}}_{_{{N_t},{N_r},1}}^T,{\bf{\hat x}}_{_{{N_t},{N_r},2}}^T} \right]^T}. \label{X} \end{equation} Note that the input, output and all the weights and biases of the ELM in this paper are complex-valued, but the RSTO and RCFO are real-valued. Therefore, the operator ${\rm{real}}\left( \cdot \right)$ following the output of the ELM is necessary. Expressing $ {\rm{real}}\left( {\bm{\hat \beta} {{\bf{D}}_{{\rm{Prediction}}}}} \right) $ as $ {\rm{real}}\left( {\bm{\hat \beta} {{\bf{D}}_{{\rm{Prediction}}}}} \right) = {\left[ {{{\hat o}_1},{{\hat o}_2}, \cdots ,{{\hat o}_{2{N_g} + 1}}} \right]^T} $,~the RSTO estimate is given as \begin{equation} {\hat \tau _{\rm{R}}} = {\rm{index}}\left[ {\mathop {\arg \max }\limits_{1 \le i \le 2{N_g} + 1} \left( {{{\hat o}_i}} \right)} \right]. \end{equation} \subsection{ELM-Based RCFO Estimation} \begin{figure}[t] \centering \includegraphics[width=3.5in]{ELM-basedRCFOestimator.eps} \caption{The structure of an ELM-based RCFO estimator.} \label{ELM-basedRCFOestimator} \end{figure} As Fig. \ref{ELM-basedRCFOestimator} illustrates, the ELM-based RCFO estimator has $ 2N_rN_tN_c $ input neurons, $ \tilde N $ hidden neurons and only one output neuron.~Here, we use $ {\varepsilon _{\rm{R}}} $ to denote the RCFO, where $ {\varepsilon _{\rm{R}}}{\rm{ = }}\varepsilon - \hat \varepsilon $.
\subsubsection{Training Stage} Before the ELM can be deployed to estimate the RCFO, it has to learn prior knowledge from the training set. The training algorithm for an ELM-based RCFO estimator can be summarized as follows: \begin{algorithm}[H] \caption{The Training Algorithm for an ELM-based RCFO Estimator} \label{alg:2} \begin{algorithmic} \STATE We are given a training set ${\bf{N}} = \left\{ {\left( {{{{\bf{\tilde I}}}_n},{O_n}} \right)|n = 1, \cdots ,N} \right\}$, complex activation function $g_c\left( \cdot \right)$, and hidden neuron number $\tilde N$. ${\bf{\tilde I}}_n \in {{\mathbb{C}}^{2{N_r}{N_t}{N_c} \times 1}}$ and the real number $ {O_n} $ correspond to the input and desired output of the ELM, respectively. Here, $ O_n $ denotes a given RCFO. \STATE {\bf{Steps 1, 2 and 3:}} Refer to {\bf{Algorithm \ref{alg:1}}}. \end{algorithmic} \end{algorithm} Specifically,~the $ n $th input of the training set, $ {{\bf{\tilde I}}_n} $,~denotes the preamble with the corresponding RCFO~$ {\varepsilon _{\rm{R}}} = {O_n} $,~where \begin{equation} {{{\bf{\tilde I}}}_n} = {\mathop{\rm FFT}\nolimits} \left( {{\mathop{\rm Remove}\nolimits} {\mathop{\rm CP}\nolimits} \left( {{{\bf{I}}_n}} \right)} \right), \end{equation} \begin{equation} {{{\bf{I}}_n} = {{\left[ {{{{\bf{\tilde c}}}_1}^T, \cdots ,{{{\bf{\tilde c}}}_p}^T, \cdots ,{{{\bf{\tilde c}}}_{{N_t}}}^T} \right]}^T}\odot{{\left[ {{{{\bf{\tilde o}}}_1}^T, \cdots ,{{{\bf{\tilde o}}}_p}^T, \cdots ,{{{\bf{\tilde o}}}_{{N_t}}}^T} \right]}^T}}, \end{equation} \begin{equation} {{\bf{\tilde c}}_p} = {\rm{repmat}}\left( {{{\left[ {{\rm{CP}}_{{{\bf{c}}_{p,1}}}^T,{{\bf{c}}_{p,1}}^T,{\rm{CP}}_{{{\bf{c}}_{p,2}}}^T,{{\bf{c}}_{p,2}}^T} \right]}^T},{N_r},1} \right), \end{equation} \begin{equation} {{{{\bf{\tilde o}}}_p} = {\rm{repmat}}\left( {{{\bf{o}}_p},{N_r},1} \right)}, \end{equation} and \begin{equation} {{{\bf{o}}_p} = \left[ {\begin{array}{*{20}{c}} {{e^{2\pi j\left[ {1{\rm{ + 2}}\left( {p - 1} \right)\left( {{N_c} + {N_g}} \right)} \right]{O_n}/{N_c}}}}\\ \vdots \\ {{e^{2\pi j\left[ {2\left({N_c+{N_g}}\right) + 2 \left( {p - 1} \right)\left( {{N_c} + {N_g}} \right)} \right]{O_n}/{N_c}}}} \end{array}} \right]}. \end{equation} In {\bf{Step 1}} of {\bf{Algorithm \ref{alg:2}}}, the input weight $ {\bm{\alpha}_k} $ and bias $ b_k $ are generated in the same way as in {\bf{Step 1}} of {\bf{Algorithm \ref{alg:1}}}. The output of the hidden layer can be given by \begin{equation} {{\bf{D}}_{{\rm{Training}}}} = {g_c}\left( {{{\bm{\alpha }}^T}{\bm{{\rm \tilde I}}} + {\bf{b}}} \right) \end{equation} where $ {\bf{\tilde I}} = \left[ {\begin{array}{*{20}{c}} {{{\bf{\tilde I}}_1}}&{{{\bf{\tilde I}}_2}}& \cdots &{{{\bf{\tilde I}}_N}} \end{array}} \right] \in {{\mathbb{C}}^{2{N_r}{N_t}{N_c} \times N }} $. We expect the output of the ELM to be close to the target output $ \bf{O} $, so $ {\bm{\beta }}{{\bf{D}}_{{\rm{Training}}}} = {\bf{O}} $. For the ELM-based RCFO estimator, $ N_o=1$. The LS solution is then given by \begin{equation} {\bm{\hat \beta }} = {\bf{OD}}_{{\rm{Training}}}^\dag. \end{equation} \subsubsection{Prediction (Estimation) Stage} Since the received preambles have been corrupted by the fading channel, channel estimation and equalization need to be carried out.
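Before turning to equalization, note that the CFO-rotated training inputs ${\bf{\tilde I}}_n$ defined above can be synthesized entirely offline. The following is a minimal sketch; the preamble blocks passed in are placeholders, and the zero-based indexing is an implementation choice, not part of the system model.

\begin{verbatim}
import numpy as np

def rcfo_training_input(c_blocks, O_n, N_r, N_c, N_g):
    """Synthesize one training input I~_n for Algorithm 2.

    c_blocks : list of N_t arrays, each the CP-prefixed time-domain pair
               [CP_{c_{p,1}}, c_{p,1}, CP_{c_{p,2}}, c_{p,2}],
               of length 2*(N_c + N_g).
    O_n      : the (real) RCFO label of this sample.
    """
    blocks = []
    for p, c_p in enumerate(c_blocks):          # p = 0, ..., N_t-1 (0-based)
        n0 = 1 + 2 * p * (N_c + N_g)            # first sample index of block p
        n = n0 + np.arange(2 * (N_c + N_g))
        o_p = np.exp(2j * np.pi * n * O_n / N_c)     # CFO-induced phase ramp
        blocks.append(np.tile(c_p * o_p, N_r))       # 'repmat' over RX antennas
    I_n = np.concatenate(blocks)
    # RemoveCP, then an N_c-point FFT per OFDM symbol, stacked into one vector
    sym = I_n.reshape(-1, N_c + N_g)[:, N_g:]
    return np.fft.fft(sym, axis=1).reshape(-1)       # length 2*N_r*N_t*N_c
\end{verbatim}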
Since $ {\bf{c}}_{p,2} $ is known and fully occupies the subcarriers, the minimum mean-square error (MMSE) estimate of the channel frequency response \cite{Beek} from the $ p $th TX antenna to the $ q $th RX antenna is given by \begin{equation} {\left[ {{{{\rm{\hat H}}}_{q,p}}\left[ 0 \right], \cdots ,{{{\rm{\hat H}}}_{q,p}}\left[ {{N_c} - 1} \right]} \right]^T} = {\bf{F}}{{\bf{Q}}_{{\rm{MMSE}}}}{{\bf{F}}^H}{\bf{c}}_{q,p,2}^H{{\bf{x}}_{q,p,2}}, \end{equation} where ${\bf{Q}}_{\rm{MMSE}}$ can be shown to be \begin{equation} \begin{split} {\bf{Q}}_{\rm{MMSE}}= {\bf{R}}_{\rm{gg}} \left[\left({\bf{F}}^H {\bf{c}}_{q,p,2}^H {\bf{c}}_{q,p,2} {\bf{F}} \right)^{-1}{\sigma^2_n}+{\bf{R}}_{\rm{gg}} \right]^{-1}\\ \times \left({\bf{F}}^H{\bf{c}}_{q,p,2}^H {\bf{c}}_{q,p,2} {\bf{F}} \right)^{-1}. \end{split} \end{equation} ${\bf{R}}_{\rm{gg}}$ is the auto-covariance matrix of $ {\left[ {{g_{q,p}}\left( 0 \right), \cdots ,{g_{q,p}}\left( {L - 1} \right)} \right]^T} $ for any $ \left( {q,p} \right) $; we assume that the auto-covariance matrices of all the subchannels between the TX and RX antennas are the same, so the subscripts of ${\bf{R}}_{\rm{gg}}$ are omitted. $\sigma^2_n$ denotes the noise variance ${\rm{E}}\left\{\left|n_k\right|^2\right\}$. Then, the equalized preamble can be given by \begin{equation} {\hat X_{q,p,i}}\left[ k \right] = {X_{q,p,i}}\left[ k \right]/{{\bf{\hat H}}_{q,p,i}}\left[ k \right]. \end{equation} The output of the hidden layer and $ {\bf{\hat x}} $ are calculated as in Equations \eqref{Prediction} and \eqref{X}, respectively. Finally, the RCFO is estimated as \begin{equation} {\hat \varepsilon _{\rm{R}}} = {\mathop{\rm real}\nolimits} \left( {{\bm{\hat \beta }}{{\bf{D}}_{{\rm{Prediction}}}}} \right). \end{equation} \subsection{How Do We Deal with the ``Chicken or the Egg'' Causality Dilemma?} Most existing machine learning-based physical-layer techniques require labeled data with perfect CSI in the training stage. The issue raised in these studies is that it is impossible to acquire (estimate) perfect CSI in real channel scenarios due to thermal noise. In other words, which came first: the labeled data or the perfect CSI? The essential reason for this dilemma is that CSI is random and therefore cannot be estimated or predicted perfectly. Besides, the computational cost of learning the relationship between random variables and the target output is extremely high. Nevertheless, can we deal with this causality dilemma? In this paper, we generate training data by letting the preamble be corrupted by the RSTO and RCFO without the effects of the multipath fading channel and thermal noise. There are two main purposes for this. The first is to make the ELM learn the relationship between the corrupted preamble and its corresponding RSTO and RCFO. The second is to avoid the ``chicken or the egg'' causality dilemma. However, it should be noticed that the input of the ELM in the prediction stage is the preamble corrupted by the RSTO, the RCFO, the multipath fading channel, and thermal noise. Therefore, the only question is whether the proposed method can outperform the traditional method. The following section provides extensive simulation results and comparisons. \section{Simulation Results and Comparisons} \label{S5} \subsection{Simulation Setup} In this section, the performance of the proposed ELM-based RSTO and RCFO estimators is demonstrated.
In order to avoid the occurrence of {\bf{Case IV}} in Fig. \ref{EffectsOfSTO}, the timing point will be set $ N_g/4 $ points ahead of the estimated value from the traditional estimator in the case of a fading channel.\footnote{In order to avoid ISI, the FFT window start position has to be put in advance of the estimated point obtained by the coarse STO estimation algorithm \cite{Chang2008}.} For the estimation of the CFO, the performance of the traditional estimator and its Cramér-Rao lower bound (CRLB) will also be studied and used as benchmarks. Note that the CRLB is equal to the variance of the traditional CFO estimator \cite{Zelst2004} \begin{equation} {\mathop{\rm var}} \left( {\hat \varepsilon - \varepsilon } \right) = \frac{1}{{{\pi ^2}{N_t}{N_r}V\rho }} \label{CRLB} \end{equation} where $ V $ is the length of the identical halves in the first part of the preamble, $ \rho = \left( {P/{N_t}} \right)/\sigma _n^2 $ denotes the SNR per receive antenna when the preamble is transmitted, and $ P $ is the total transmit power.\footnote{\eqref{CRLB} can also be written as ${\rm{var}}\left( {\hat \varepsilon - \varepsilon } \right) = \frac{1}{{{\pi ^2}{N_r}V\rho '}}$ where the SNR per receive antenna is defined as $ \rho ' = P/\sigma _n^2 $.}~Note that \eqref{CRLB} is approximately accurate under the condition of small errors $ \left( {\hat \varepsilon - \varepsilon } \right) $ and high SNR; it is derived in the Appendix. We assume that timing synchronization is perfect~($ \hat{\tau}=\tau $)~when the performance of the traditional and the ELM-based CFO estimators is evaluated. For the numerical simulations, we set $ {N_c} = 64 $,~$ N_g=N_c/4 $ and the sampling frequency $ f_s=4 \times {10^6} $ Hz.~The wireless fading channel is modeled with an exponentially decaying power delay profile (PDP), and the channel is assumed quasi-static during each OFDM symbol.~The activation function $g_c$ is ${\rm{arcsinh}}\left(z\right)=\int_0^zdt/\left[\left(1+t^2\right)^{1/2}\right]$ \cite {Kim2003}, where $z \in \mathbb{C}$. Note that, for a fair comparison, we keep the total transmit power the same as in the single-input single-output (SISO) case. Therefore, the power per TX antenna is scaled down by a factor of $ N_t $. For the exponentially decaying PDP, the root mean square (RMS) delay spread is $ {\tau _{{\rm{RMS}}}}{\rm{ = 2}} \times {\rm{1}}{{\rm{0}}^{ - 6}}{\rm{s}} $, and the coherence bandwidth is $ {B_c} = 1/{\tau _{{\rm{RMS}}}} = {\rm{5}} \times {\rm{1}}{{\rm{0}}^5}{\rm{Hz}} $. The PDP is given by \begin{equation} {P_l} = \exp \left( {\frac{{ - {\rm{2}}\pi {B_c}{\tau _l}}}{{\sqrt 3 }}} \right) \end{equation} where the delay of the $ l $th path is set as $ {\tau _l} = l{T_s} $ and $ L=8 $. \subsection{The Performance of the ELM-based RSTO Estimator} As it is instructive to observe the bias as well as the MSE of an estimator (where $ {\rm{MSE}} = {\rm{variance}} + {\left( {{\rm{bias}}} \right)^2} $), we will examine both the bias and the MSE of the proposed estimator and compare them with those of the traditional estimator. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RSTOMean2X2.eps} \caption{Bias performance comparison between the traditional STO estimator and the proposed ELM-based STO estimator for a $ 2 \times 2 $ system with a multipath fading channel.} \label{RSTOMean2X2} \end{figure} In Fig. \ref{RSTOMean2X2}, the bias of the STO estimation is demonstrated as a function of the average SNR per receive antenna, where $ \tilde{N}=2^{14} $.
The results from Monte Carlo simulations averaged over $ {3\times10^5} $ channel realizations are shown. In the case of both an AWGN channel and a frequency-selective fading channel with perfect CSI, it can be seen that the proposed ELM-based estimator has a much smaller bias than the traditional estimator. In the cases of a frequency-selective fading channel with LS and MMSE channel estimation, the ELM-based STO estimator does not show a gain in terms of bias. Fortunately, without perfect knowledge about the fading channel, the ELM-based STO estimator still achieves a gain over the traditional STO estimator in terms of MSE. In Fig. \ref{RSTOMSE2X2}, the MSE of the STO estimation is demonstrated as a function of the average SNR per receive antenna, where $ \tilde{N}=2^{14} $. Even though the ELM-based estimator acquires only imperfect CSI from an LS or an MMSE channel estimate, the ELM-based STO estimator shows significant gains compared with the traditional STO estimator. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RSTOMSE2X2.eps} \caption{MSE performance comparison between the traditional STO estimator and the proposed ELM-based STO estimator for a $ 2 \times 2 $ system with a multipath fading channel.} \label{RSTOMSE2X2} \end{figure} \subsection{Performance of the ELM-based RCFO Estimator} The performance of an NN depends on the training set. Specifically, for an ELM-based RCFO estimator, the range and interval of the desired output in a training set should be chosen carefully. In this paper, the training sets for different MIMO systems are summarized in Table \ref{RCFOTrainingSets}. \begin{table}[t] \caption{Training Sets for Different MIMO Systems} \label{RCFOTrainingSets} \begin{tabular}{|l|c|c|} \hline \multicolumn{1}{|c|}{MIMO System} & Range of RCFO & Interval of RCFO \\ \hline $1\times1$ & $\left[ { - 0.0025,0.0025} \right]$ & $ 5.0 \times {{10}^{ - 6}} $ \\ \hline $2\times2$ & $\left[ { - 0.0025,0.0025} \right]$ & $ 2.5 \times {{10}^{ - 6}} $ \\ \hline $3\times3$ (Fading Channel) & $ \left[ { - 0.0030,0.0030} \right] $ & $ 5.0 \times {{10}^{ - 6}} $ \\ \hline $3\times3$ (AWGN Channel) & $ \left[ { - 0.05,0.05} \right] $ & $ 1.0 \times {{10}^{ - 4}} $ \\ \hline \end{tabular} \end{table} \subsubsection{MSE Performance} In Fig. \ref{RCFOMSE1X1}, the MSE of the CFO estimation is demonstrated as a function of the average SNR per receive antenna, where $ \tilde{N}=2^{11} $. The theoretical value from (\ref{CRLB}) is shown together with results from Monte Carlo simulations averaged over $ {10^5} $ channel realizations. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RCFOMSE1X1.eps} \caption{MSE performance comparison between the traditional CFO estimator and the proposed ELM-based CFO estimator for a $ 1 \times 1 $ system from theory and simulations with AWGN and a multipath fading channel ($ \varepsilon {\rm{ = }} - 0.05 $, $ {O_n} \in \left\{ {\left. {5.0 \times {{10}^{ - 6}}k} \right|k = - 500, \cdots ,500} \right\} $ and $ \tilde N = {2^{11}} $).} \label{RCFOMSE1X1} \end{figure} As seen from Fig. \ref{RCFOMSE1X1}, the MSE curve of the traditional method almost perfectly overlaps that of the CRLB. The theoretical value is a good estimate of the MSE for high SNR values but underestimates the MSE compared with simulation results for low SNR. Note that the CRLB expresses a lower bound on the variance of unbiased estimators of a deterministic parameter.
A biased approach can result in both a variance and an MSE that are below the unbiased CRLB. Specifically, in the case of AWGN, when $ \rm{SNR}=-3\rm{dB} $, the ELM obtains a slightly larger MSE than the traditional method. Fortunately, we can see that the performance improvement of the ELM over the traditional method increases with SNR, and when $ \rm{SNR}=21\rm{dB} $ the ELM achieves an SNR gain of about 9dB over the traditional method at an MSE value of $ 4.22\times10^{-6} $. This can be explained by the fact that the ELM can exploit the preamble exhaustively and learn the mapping between RCFOs and the corresponding corrupted preambles. In the case of a frequency-selective fading channel, the largest SNR gain over the traditional method, about 4.5dB, is achieved when $ \rm{SNR}=6\rm{dB} $. This gain becomes insignificant when $ {\rm{SNR > 27dB}} $. Besides, in order to show that the gain of the ELM is not just a result of channel estimation and equalization, the curve ``Traditional\&Eq.'' shows the MSE performance of the traditional method with channel estimation and equalization. Specifically, the method ``Traditional\&Eq.'' performs traditional CFO estimation twice. The first CFO estimation uses the traditional method. Then, channel estimation and equalization are performed by using the frequency-corrected preamble signal. Finally, the traditional CFO estimation method is performed again to estimate the RCFO by using the frequency-corrected and equalized preamble signal. It can be seen that its performance seriously degrades, which means that channel estimation and equalization cannot enhance the performance of the traditional CFO estimator. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RCFOMSE2X2.eps} \caption{MSE performance comparison between the traditional CFO estimator and the proposed ELM-based CFO estimators for a $ 2 \times 2 $ system from theory and simulations with AWGN and a multipath fading channel ($ \varepsilon {\rm{ = }} - 0.05 $, $ {O_n} \in \left\{ {\left. {2.5 \times {{10}^{ - 6}}k} \right|k = - 1000, \cdots ,1000} \right\} $ and $ \tilde N = {2^{14}} $).} \label{RCFOMSE2X2} \end{figure} Fig. \ref{RCFOMSE2X2} presents the MSE curves of the traditional and proposed ELM-based CFO estimators for a $ 2\times2 $ system, where $ \tilde{N}=2^{14} $. Similar to the observations in Fig. \ref{RCFOMSE1X1}, in the case of an AWGN channel, the ELM still outperforms the traditional method in terms of MSE. Using the ELM, when $ \rm{SNR}=18\rm{dB} $, about a 9dB SNR gain over the traditional method is achieved at $ \rm{MSE}=2.16\times10^{-6} $. In the case of a frequency-selective fading channel, by comparing Fig. \ref{RCFOMSE2X2} with Fig. \ref{RCFOMSE1X1}, we find that the gains of the ELM over the traditional method for a $ 2\times2 $ system (about 1.5dB) are lower than those for a $ 1\times1 $ system (about 1.5-4.5dB). We conjecture that this is because the accuracy of channel estimation limits the CFO estimation performance of the ELM. In order to verify this conjecture, we also simulate the ELM estimator with perfect CSI. In Fig. \ref{RCFOMSE2X2}, the curve ``Perfect-ELM'' illustrates the performance of the ELM with perfect CSI. It can be seen that, with perfect CSI, the MSE performance of the ELM under fading is closer to its AWGN performance than that of the ELM without perfect CSI.
The gain from perfect CSI increases with SNR, so we conjecture that the performance of the ELM-based scheme is highly dependent on the accuracy of the CSI. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RCFOMSE3X3.eps} \caption{MSE performance comparison between the traditional CFO estimator and the proposed ELM-based CFO estimator for a $ 3 \times 3 $ system from theory and simulations with AWGN and a multipath fading channel ($ \varepsilon {\rm{ = }} - 0.05 $, $ \tilde N = {2^{17}} $ and AWGN: $ {O_n} \in \left\{ {\left. {1.0 \times {{10}^{ - 4}}k} \right|k = - 500, \cdots ,500} \right\} $;~Fading: $ {O_n} \in \left\{ {\left. {5.0 \times {{10}^{ - 6}}k} \right|k = - 600, \cdots ,600} \right\} $).} \label{RCFOMSE3X3} \end{figure} Fig. \ref{RCFOMSE3X3} presents the MSE curves of the traditional and the proposed ELM-based CFO estimators for a $ 3\times3 $ MIMO system, where $ \tilde{N}=2^{17} $. Note that for a $ 3\times3 $ system, the ELM is trained on different training sets separately in order to achieve the best performance under AWGN and fading channel conditions. In the case of an AWGN channel, when $ {\rm{SNR = }} - {\rm{3dB}} $, the MSE performance of the ELM is only slightly better than the CRLB, but its MSE decreases rapidly with an increase of SNR. When $ {\rm{SNR = 12dB}} $, about an 18dB SNR gain over the traditional method is achieved by the ELM-based method. By comparing Fig. \ref{RCFOMSE3X3} with Fig. \ref{RCFOMSE2X2} and Fig. \ref{RCFOMSE1X1}, we find that the gain of the ELM-based method over the traditional method increases with the number of RX antennas; this is because the MSE performance of the ELM is related to the number of receive antennas. When ${\rm{SNR}}\ge21$dB, the gains are smaller than the peak gains. The reason is that the gain of the ELM comes from both noise suppression and exhaustively mining information from the preambles. In the case of a frequency-selective fading channel, the ELM can obtain about a 1.5dB gain in MSE when ${\rm{SNR}} \ge {\rm{3dB}}$, which is similar to the case of a $ 2\times2 $ system. \subsubsection{Robustness Analysis} In this section, we analyse the robustness of the proposed ELM scheme. Fig. \ref{RobustnessRCFO} shows the MSE of the ELM under various RCFOs when $ {\rm{SNR = 15dB}} $~and $ {\rm{30dB}} $. It can be seen that the MSE increases with an increase of the RCFO. By comparing Fig. \ref{RobustnessRCFO} with Fig. \ref{RCFOMSE2X2}, when $ {\rm{SNR = 30dB}} $ and $ {\rm{RCFO}}\ge 0.0012 $, the MSE of the ELM is higher than that in Fig. \ref{RCFOMSE2X2} ($ {\rm{MSE}}> {10^{ - 6}} $). However, when $ {\rm{SNR = 15dB}} $ and $ {\rm{RCFO}}\le 0.0024 $, the MSE of the ELM is lower than that in Fig. \ref{RCFOMSE2X2} ($ {\rm{MSE}}< {10^{ - 5}} $). This can be explained by the fact that the performance advantage and robustness of the ELM-based method are more significant at medium SNR. \begin{figure}[t] \centering \includegraphics[width=3.4in]{RobustnessRCFO.eps} \caption{MSE versus RCFO curves of the proposed ELM-based CFO estimator for a $ 2 \times 2 $ system ($ {O_n} \in \left\{ {\left. {2.5 \times {{10}^{ - 6}}k} \right|k = - 1000, \cdots ,1000} \right\} $ and $ \tilde N = {2^{14}} $).} \label{RobustnessRCFO} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.4in]{RobustnessChannel.eps} \caption{MSE versus the number of paths $ \left( L \right) $ of a fading channel for the proposed ELM-based CFO estimator for a $ 2 \times 2 $ system ($ {O_n} \in \left\{ {\left. {2.5 \times {{10}^{ - 6}}k} \right|k = - 1000, \cdots ,1000} \right\} $ and $ \tilde N = {2^{14}} $).} \label{RobustnessChannel} \end{figure} Fig. \ref{RobustnessChannel} shows the MSE of the ELM under different numbers of channel paths when $ {\rm{SNR = 15dB}} $~and $ {\rm{30dB}} $. It can be seen that, with an increase of $ L $, the MSE of the ELM increases only slightly, which means that the proposed method is robust enough to handle frequency-selective fading channels with different numbers of paths. \subsubsection{Generalization Error Analysis} Generalization is a term used to describe the ability of a model to react to new data. That is, after being trained on a training set, can a model ``digest'' new data and make accurate predictions? In this paper, we also use the MSE to evaluate the generalization ability of the ELM. Specifically, RCFOs not belonging to the training set are used to verify the generalization of a trained ELM-based RCFO estimator. The generalization ability of the ELM can be analyzed according to Fig. \ref{GeneralizationRCFO}, where we have used RCFOs that do not belong to the training set $ {O_n} \in \left\{ {\left. {2.5 \times {{10}^{ - 6}}k} \right|k = - 1000, \cdots ,1000} \right\} $. By comparing Fig. \ref{GeneralizationRCFO} with Fig. \ref{RCFOMSE2X2}, we see that the performance of the ELM-based RCFO estimator does not change significantly when it handles these unfamiliar RCFOs. It can be concluded that the ELM-based RCFO estimator shows excellent generalization when it processes an RCFO not belonging to the training set. \begin{figure}[t] \centering \includegraphics[width=3.4in]{GeneralizationRCFO.eps} \caption{MSE versus RCFOs not belonging to the training set of the proposed ELM-based CFO estimator for a $ 2 \times 2 $ system ($ {O_n} \in \left\{ {\left. {2.5 \times {{10}^{ - 6}}k} \right|k = - 1000, \cdots ,1000} \right\} $ and $ \tilde N = {2^{14}} $).} \label{GeneralizationRCFO} \end{figure} \subsection{Complexity Analysis} We use the number of complex multiplications (CMs) to measure the computational complexity. For simplicity, the number of CMs for calculating the Moore-Penrose generalized inverse of an $I \times O$ matrix is denoted as $C_{\rm{pinv}}\left(OI^2\right)$. We compare the proposed complex ELM-based method with a DNN-based method. We assume that the input dimension and output dimension of the complex ELM-based method are $I$ and $O$, respectively. For the DNN-based method, we split each complex number into a real part and an imaginary part; thus, its input and output dimensions are $2I$ and $2O$, respectively. In the DNN-based method, the multiplications are real-valued. When calculating the computational complexity, we consider four real-valued multiplications to be equivalent to one CM. The machine learning-based method has two phases, i.e., the training phase and the prediction phase, and we analyse the computational complexity of the two phases individually. As for the training phase, the calculation of the output weights of the complex ELM-based method requires $C_{\rm{pinv}}\left(N\tilde N^2\right)+N\left(\tilde N+O\right)$ CMs, where $\tilde N$ is the number of hidden neurons in the complex ELM and $N$ is the number of training samples. The training complexity of the DNN-based method is difficult to express in the number of CMs because it is trained iteratively with forward propagation (FP) and backpropagation (BP).
Generally, the training complexity of the DNN is considerably higher than that of the complex ELM, which makes it hard for the training time to satisfy the latency constraints of practical use. As for the prediction phase, the required numbers of CMs of the complex ELM and the DNN are $IO\tilde N$ and $ IO\sum\nolimits_{l = 1}^{{N_l}} {{n_l}{n_{l - 1}}} $, respectively. $N_l$ and $n_l$ denote the number of hidden layers and the number of neurons at the $l$th hidden layer, respectively. It can be seen that the prediction complexity of the DNN is higher than that of the complex ELM. \section{Conclusions and future work} \label{S6} In this paper, we have proposed an ELM-based fine timing and frequency synchronization scheme in order to improve the performance of existing estimators. The proposed scheme does not require an additional preamble, and the training process can be carried out fully offline without any prior information about the channels. Simulation results have shown that the proposed ELM-based synchronization scheme outperforms, or achieves performance comparable to, existing traditional synchronization algorithms in terms of MSE. In addition, the proposed scheme shows robustness under various channels with different parameters and a generalization ability towards RCFOs outside the training set. The simulation results have also shown that the performance of the ELM-based scheme is related to the accuracy of the CSI. Besides, it should be noticed that channel equalization can neutralize the effects of both the STO and the fading channel. This makes it difficult to obtain a received preamble signal affected by the STO alone. In other words, accurate CSI is still indispensable for the deployment of machine learning in communications systems. Therefore, jointly incorporating the ELM into transmitter design, synchronization, and channel estimation and equalization within system design is a promising future research direction. \appendices \section{Variance of the CFO Estimation for a MIMO System Under AWGN Channel~(See \eqref{CRLB})} We use the method in \cite{Moose1994} and \cite{schenk2008rf} to derive the variance of the CFO estimate for a MIMO system. According to \eqref{Estimation CFO}, for a given $ \varepsilon $, we subtract the corresponding phase, $ 2\pi\varepsilon $, from each product to obtain the tangent of the phase error \begin{equation} \tan \left[ {\pi \left( {\hat \varepsilon - \varepsilon } \right)} \right] = \frac{{\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{\rm{Im}}\left[ {r_q^*\left( {i - V} \right){r_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]} } } }}{{\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{\rm{Re}}\left[ {r_q^*\left( {i - V} \right){r_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]} } } }} \label{Appendix1} \end{equation} where ${d_p} = d - \left( {{N_t} - p} \right){N_{{\rm{train}}}}$,~$ V $ denotes the length of the identical halves in the first part of the preamble, and $ {r_q}\left( i \right) = {{\tilde r}_q}\left( i \right) + {n_q}\left( i \right) $. $ {n_q}\left( i \right) $ denotes the time domain noise of the $ i $th sample of the received signal on the $ q $th antenna.
For $ \left| {\hat \varepsilon - \varepsilon } \right| \ll 1/\pi $, the tangent can be approximated by its argument so that \begin{equation} \resizebox{.95\hsize}{!}{$ \hat \varepsilon - \varepsilon \approx \frac{{\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{\rm{Im}}\left\{ {\left[ {{{\tilde r}_q}\left( {i - V} \right) + {n_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]\left[ {\tilde r_q^*\left( i \right) + {n_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]} \right\}} } } }}{{\pi \sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{\rm{Re}}\left\{ {\left[ {{{\tilde r}_q}\left( {i - V} \right) + {n_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]\left[ {\tilde r_q^*\left( i \right) + {n_q}\left( i \right){e^{ - 2\pi j\varepsilon }}} \right]} \right\}} } } }}. $} \label{Appendix2} \end{equation} According to the method in \cite{Moose1994}, at high SNR, which is a condition compatible with successful communication signalling, \eqref{Appendix2} may be approximated by \begin{equation} \resizebox{.95\hsize}{!}{$ \hat \varepsilon - \varepsilon \approx \frac{{\left\{ {\sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{\rm{Im}}\left[ {{n_q}\left( i \right)\tilde r_q^*\left( {i - V} \right){e^{ - 2\pi j\varepsilon }} + {{\tilde r}_q}\left( {i - V} \right)n_q^*\left( {i - V} \right)} \right]} } } } \right\}}}{{\left\{ {\pi \sum\limits_{p = 1}^{{N_t}} {\sum\limits_{q = 1}^{{N_r}} {\sum\limits_{i = {d_p} - \left( {V - 1} \right)}^{{d_p}} {{{\left| {{{\tilde r}_q}\left( i \right)} \right|}^2}} } } } \right\}}}. $} \label{Appendix3} \end{equation} It is easy to show that \begin{equation} E\left[ {\left. {\hat \varepsilon - \varepsilon } \right|\varepsilon ,\left\{ {{{\tilde r}_q}} \right\}} \right] = 0. \label{Appendix4} \end{equation} Therefore, for small errors, the estimate is conditionally unbiased. Then, the conditional variance of the estimate is easily determined from \eqref{Appendix3} as \begin{equation} {\rm{Var}}\left[ {\left. {\hat \varepsilon } \right|\varepsilon ,\left\{ {{{\tilde r}_q}} \right\}} \right] = \frac{1}{{{\pi ^2}{N_t}{N_r}V\rho }}. \label{Appendix5} \end{equation} Finally,~note that in this paper, $ \rho = \sigma _{_{\tilde r,q}}^2/\sigma _n^2 = \left( {P/{N_t}} \right)/\sigma _n^2 $ denotes the SNR per receive antenna during preamble transmission, and $ P $ is the total transmit power. \section*{Acknowledgement} The authors would like to acknowledge the help received from Longguang Wang. In addition, Jun Liu gratefully acknowledges the financial support received from the China Scholarship Council (CSC) and the School of Electronic and Electrical Engineering, University of Leeds, UK. He also wants to thank, in particular, the inspiration and care received from Yanling (Julia) Zhu during the period of this COVID-19 pandemic. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} There is currently a lot of research, in both academia and industry, on 5G communication technology that uses millimeter-wave (mmWave) frequency bands up to $60$ GHz \cite{mmW_5g}. Apart from conventional cellular communication, diverse applications in the fields of wireless cognition, autonomous driving, positioning, augmented reality, etc., will need higher performance than what mmWave bands can offer \cite{rappaport}. In cellular networks and Internet of Things (IoT) applications, the increase in the number of wireless devices has put a huge strain on existing communication technologies. There is thus a need to explore high-frequency bands, such as those in the Terahertz (0.3-3 THz) and sub-Terahertz (60-300 GHz) range. These bands offer more flexibility and are also economically more viable than conventional backhaul infrastructures in difficult terrains; hence, they will be the frequency bands of choice for future backhaul networks \cite{thz_opt}. However, the THz band has its challenges and limitations. While the high-frequency bandwidths can offer much higher rates and lower latency than even 5G systems, signals in these bands suffer from severe path loss and atmospheric attenuation \cite{Kim2015,Kokkoniemi_2018, Wu2020,Sarieddeen2019}. The path loss is higher because of molecular absorption at such small wavelengths. The authors in \cite{Kim2015} have developed an experimental characterization of the THz channel, whereas in \cite{Kokkoniemi_2018}, the authors have examined the effect of scattering and absorption losses in the THz band. There is also considerable loss due to antenna misalignment and radio frequency (RF) hardware issues \cite{Boluda_2017,KOKKONIEMI2020}. Hence, the physical range of these systems is very limited. Cooperative communication systems are a practical solution to mitigate the impact of path loss and channel conditions \cite{Nosratinia2004}. This motivates us to analyze and explore the performance of such relayed THz systems for the dual-hop scenario. {\em Related Works:} Dual-hop relaying for THz and heterogeneous networks has also been discussed in the literature \cite{precoding_2020,nano_thz,invivo_thz,outage_thz,mixed_thz_rf,Pranay_2021_TVT}. In \cite{precoding_2020}, the authors derived a closed-form solution for hybrid precoding in two-way relayed THz multiple input multiple output (MIMO) wireless systems. The sum capacity and energy efficiency of the proposed system were better than those of existing solutions for the THz MIMO relay system. The authors in \cite{nano_thz} investigate the performance of a cooperative relay transmission system for nano-sensor networks in the THz band using both amplify-and-forward (AF) and decode-and-forward (DF) relaying, assuming a line of sight (LOS) channel in the THz band. Similarly, in \cite{invivo_thz}, the authors focused on the application of THz communication in in-vivo nano-devices. In \cite{outage_thz}, the authors analyze the outage performance of a dual-hop THz wireless system considering the effect of antenna misalignment error. \cite{mixed_thz_rf} considers a mixed dual-hop case, with an RF and a THz link, including misalignment error. The small-scale multi-path fading is considered to be a generalized $\alpha$-$\mu$ distribution, which includes Weibull, negative exponential, Nakagami-m, and Rayleigh fading distributions as special cases.
In \cite{Pranay_2021_TVT}, the authors analyzed the performance of a THz-RF wireless link over an $\alpha$-$\mu$ fading channel by deriving closed-form expressions for various performance metrics such as outage probability, average BER, and ergodic capacity. It is desirable to analyze a dual-hop cooperative communication system operating purely with THz links, which can be useful for upcoming generations of wireless systems. In this paper, we present the performance analysis of a dual-hop THz-THz wireless system by considering generalized $\alpha$-$\mu$ fading combined with misalignment errors using the AF protocol. Using the probability density function (PDF) and cumulative distribution function (CDF) for the dual-hop AF relay system, we derive exact closed-form expressions for the outage probability, average BER, average SNR, and a lower bound on the ergodic capacity of the relayed system. We also derive the diversity order of the system to provide better insight into the system performance at high SNR. We validate the derived analysis using Monte-Carlo simulations and demonstrate the performance of the THz-THz system for backhaul applications. {\em Notations:} Some notations that are used in the paper are as follows. $\Gamma(.)$, $\Gamma(.,.)$, and $\psi(.)$ denote the Gamma, upper incomplete Gamma, and digamma functions, respectively. $\mathcal{B}_{z} (.,.)$ denotes the Beta function and $ {}_2F_1 (.,.;.;.)$ denotes the Gaussian Hypergeometric function. Finally, $G_{p,q}^{m,n} \left( x \bigg| \begin{array}{c} a_1, \ldots a_n\\b_1, \ldots b_q\end{array}\right)$ denotes the Meijer's G-function. \section{System Model}\label{sec:system_model} We consider a dual-hop relay system in which the information is transmitted from the source (S) to the destination (D) via a relay (R) node using the AF protocol. The considered system may be well suited for backhaul transmissions in small-cell networks, cell-free wireless networks, etc., to transmit the information to the central processing unit (CPU). The THz band facilitates the communication on both hops, i.e., from source to relay and from relay to destination. We assume that direct transmission from the source to the destination is not possible due to obstacles between them. We consider generalized i.i.d.~$\alpha$-$\mu$ short-term fading along with zero boresight pointing errors. The received signal at the relay or destination can be represented as \begin{equation} y_{i} = h_{l_i} h_{pf}{x}_i + w_{i}, \end{equation} where $i=\{1,2\}$ denote the first hop (S-R) and second hop (R-D), respectively. Here, $ {x}_i $ and $w_i$ are the transmitted signal and additive white Gaussian noise at the $i$th link, and $h_{l_i}$ is the path-gain of the THz channel: \begin{equation} h_{l_i} = \frac{c\sqrt{G_{t_i}G_{r_i}}}{4\pi f d_i} \exp\left(-\frac{1}{2}k(f,T,\psi,p)d_i\right) \end{equation} where $k$ is the absorption coefficient \cite{Boulogeorgos_Analytical}. The random variable $h_{pf}$ combines the effects of fading and antenna misalignment, with PDF \cite{mixed_thz_rf}: \begin{equation}\label{eq:pdf_hpf} f_{|h_{pf}|}(x) = A x^{\phi-1} \Gamma \left(B, Cx^\alpha \right) \end{equation} where $\alpha$, $\mu$ are the fading parameters and $ A = \phi S_0^{-\phi} \frac{\mu^{\frac{\phi}{\alpha}}}{\Omega^\alpha \Gamma (\mu)} $, $ B = \frac{\alpha \mu - \phi}{\alpha} $, and $C = \frac{\mu}{\Omega^\alpha} S_0^{-\alpha}$.
Here, $\Omega$ is the $\alpha$-root mean value of the fading channel envelope, and $\phi$, $ S_0 $ are the pointing error parameters as presented in \cite{Farid2007}. Using \eqref{eq:pdf_hpf}, the PDF of the instantaneous SNR of the THz link can be represented as \begin{equation}\label{eq:pdf_thz} f_{\gamma_i}(\gamma) = \frac{A}{2\sqrt{\gamma \gamma_0}} \bigg (\sqrt{\frac{\gamma}{\gamma_0}}\bigg)^{\phi-1} \Gamma \bigg(B, C \bigg(\sqrt{\frac{\gamma}{\gamma_0}}\bigg)^\alpha \bigg) \end{equation} and the CDF is given as \cite{Pranay_2021_TVT} \begin{eqnarray} \label{eq:cdf_thz} &F_{\gamma_i}(\gamma)= \frac{A C^{-\frac{\phi}{\alpha}}}{\phi} \bigg[ \gamma\bigg(\mu,C\Big(\sqrt{{\gamma}/{\gamma_{0}}}\Big)^{\alpha}\bigg) \nonumber \\ & + C^{\frac{\phi}{\alpha}}\Big(\sqrt{{\gamma}/{\gamma_{0}}}\Big)^{\phi}\times \Gamma\Big(B,C\Big(\sqrt{{\gamma}/{\gamma_{0}}}\Big)^{\alpha}\Big) \bigg] \end{eqnarray} The instantaneous SNR of the THz link is denoted by $\gamma = \gamma_0 |h_{pf}|^2$, where $\gamma_0= {P_i |h_{l_i}|^2}/{\sigma_{w_i}^2}$ is the SNR term without channel fading for the THz link with transmit power $ P_i $. The end-to-end SNR for a channel state information (CSI) assisted AF relay is given by \cite{Hasna_2004_AF} \begin{eqnarray} \gamma= \frac{\gamma_1\gamma_2}{\gamma_1+\gamma_2+1} \end{eqnarray} \section{Performance Analysis} In this section, we derive the analytical expressions for the outage probability, average BER, ergodic capacity, and average SNR of the dual-hop relaying system. Since the exact analysis of the CSI-assisted AF relay demands the use of more complex mathematical functions such as Fox's H-function and rigorous computations, we use an upper bound for the analysis of the considered dual-hop system, for which the end-to-end SNR is given by \cite{papoulis_2002}: \begin{eqnarray} \gamma \leq \min\{\gamma_1,\gamma_2\} \end{eqnarray} Thus, the distribution functions are given by \begin{eqnarray} \label{eq:cdf_relay} F_{\gamma}(\gamma) = F_{\gamma_1}(\gamma)+F_{\gamma_2}(\gamma)-F_{\gamma_1}(\gamma)F_{\gamma_2}(\gamma) \end{eqnarray} \begin{eqnarray}\label{eq:pdf_relay} f_{\gamma}(\gamma) = f_{\gamma_1}(\gamma)+f_{\gamma_2}(\gamma)-f_{\gamma_1}(\gamma)F_{\gamma_2}(\gamma)-F_{\gamma_1}(\gamma)f_{\gamma_2}(\gamma) \end{eqnarray} where $f_{\gamma_1}(\gamma)$, $f_{\gamma_2}(\gamma)$ denote the PDFs of the first and second hops, respectively, and $F_{\gamma_1}(\gamma)$, $ F_{\gamma_2}(\gamma) $ denote the CDFs of the first and second hops, respectively. \subsection{Outage Probability}\label{sec:perf_anal} The outage probability is defined as the probability that the instantaneous SNR falls below a threshold value $\gamma_{th}$, i.e., $ P_{\rm out} = P(\gamma <\gamma_{th}) $. The outage probability can be evaluated by substituting $\gamma_{th}$ into \eqref{eq:cdf_relay}, where $ F(\gamma) $ is given in \eqref{eq:cdf_thz}.
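For reference, $P_{\rm out}$ is straightforward to evaluate numerically from \eqref{eq:cdf_thz} and \eqref{eq:cdf_relay}. The following is a minimal SciPy sketch, assuming $B=(\alpha\mu-\phi)/\alpha>0$ (so that the upper incomplete Gamma function has positive order) and $\Omega=1$; all parameter values in the usage line are placeholders.

\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammainc, gammaincc

def cdf_thz(g, g0, alpha, mu, phi, S0, Omega=1.0):
    """Per-hop CDF F_{gamma_i}(g) (label eq:cdf_thz); assumes B > 0."""
    A = phi * S0**(-phi) * mu**(phi / alpha) / (Omega**alpha * gamma(mu))
    B = (alpha * mu - phi) / alpha
    C = (mu / Omega**alpha) * S0**(-alpha)
    z = C * np.sqrt(g / g0)**alpha
    low = gamma(mu) * gammainc(mu, z)        # lower incomplete gamma
    up = gamma(B) * gammaincc(B, z)          # upper incomplete gamma
    return (A * C**(-phi / alpha) / phi) * (
        low + C**(phi / alpha) * np.sqrt(g / g0)**phi * up)

def p_out(g_th, g0_1, g0_2, **fad):
    """Outage of the bound min(gamma_1, gamma_2) (label eq:cdf_relay)."""
    F1 = cdf_thz(g_th, g0_1, **fad)
    F2 = cdf_thz(g_th, g0_2, **fad)
    return F1 + F2 - F1 * F2

# placeholder example: 4 dB threshold, alpha = 1, mu = 1.5, phi = 2.4
print(p_out(10**0.4, 1e2, 1e2, alpha=1.0, mu=1.5, phi=2.4, S0=0.1))
\end{verbatim}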
The diversity order of the relayed system can be obtained using the asymptotic analysis in high SNR regime \cite{Pranay_2021_TVT} \begin{eqnarray}\label{eq:diversity order} M = \min \bigg\{\frac{\alpha\mu}{2}, \frac{\phi}{2} \bigg\} \end{eqnarray} \begin{figure*} \small \begin{eqnarray} \label{eq:ber} &\bar{P}_e \approx \frac{\sqrt{2} \alpha^{(\frac{\alpha(B-1)+2p+\phi-3}{2})} \big(\frac{2q+1}{2}\big)^{-(\frac{\alpha(B-1)+2p+\phi-1}{2})} (C\gamma_0^{\frac{-\alpha}{2}})^{(B-1)}}{(2\pi)^\frac{\alpha}{2}} G_{\alpha,2}^{2,\alpha} \Bigg(\frac{4(C\gamma_0^{\frac{-\alpha}{2}}) \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{\alpha(B-1)+2p+\phi-1}{2}) \\ \Delta(2,0) \end{matrix} \Bigg) \Bigg[\Bigg(\frac{A C^{-\frac{\phi}{2}} q^p} {2\sqrt{2\pi}\phi \Gamma(p)}\Bigg) \nonumber \\ &\times \left(C^{-\frac{\phi}{2}} \gamma_0^{-\frac{\phi}{2}}\right)+ \left(\frac{A} {2\sqrt{2\pi }\phi \gamma_0^{\frac{\phi}{2}}}\right)^2 \frac{q^p}{\Gamma(p)} 2(C^{-\frac{\phi}{2}} \gamma_0^{-\frac{\phi}{2}}) \Bigg] +\Bigg(\frac{A C^{-\frac{\phi}{2}} q^p} {2\sqrt{2\pi}\phi \Gamma(p)}\Bigg) \Bigg[\Gamma(\mu) \sqrt{2\pi} \nonumber \\ & - \sum_{k=0}^{\mu-1} \frac{\Gamma(\mu)(C\gamma_0^{\frac{-\alpha}{2}})^k}{k!} \frac{\sqrt{2} \alpha^{(\frac{2p+\alpha k-3}{2})} \big(\frac{2q+1}{2}\big)^{-(\frac{2p+\alpha k-1}{2})}}{(2\pi)^\frac{\alpha}{2}} G_{\alpha,2}^{2,\alpha} \Bigg(\frac{4(C\gamma_0^{\frac{-\alpha}{2}})^2 \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{2p+\alpha k-1}{2}) \\ \Delta(2,0) \end{matrix} \Bigg) \Bigg] + \Bigg(\frac{A} {2\sqrt{2\pi }\phi \gamma_0^{\frac{\phi}{2}}}\Bigg)^2 \frac{q^p}{\Gamma(p)} \nonumber \\ & \hspace{-1mm} \Bigg[\left(C^{\hspace{-1mm}-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}\right)^2 (\Gamma(\mu))^2\sqrt{2\pi} \hspace{-1mm}+\hspace{-1mm} \frac{\sqrt{2} \alpha^{\frac{2\alpha(B-1)+2p+\phi-3}{2}} \big(\frac{2q+1}{2}\big)^{-\frac{2\alpha(B-1)+2p+\phi-1}{2}} (C\gamma_0^{\frac{-\alpha}{2}})^{2B-2}}{(2\pi)^\frac{\alpha}{2}} G_{\alpha,2}^{2,\alpha} \Bigg(\hspace{-1mm}\frac{16(C\gamma_0^{\frac{-\alpha}{2}})^2 \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{2\alpha(B-1)+2p+\phi-1}{2}) \\ \Delta(2,0) \end{matrix} \Bigg) \nonumber \\ & + \left(C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}\right)^2 \sum_{k_1=0}^{\mu-1} \sum_{k_2=0}^{\mu-1} \hspace{-1mm} \frac{\big(\Gamma(\mu)\big)^2 (C\gamma_0^{\frac{-\alpha}{2}})^{k_1+k_2}}{k_1!k_2!} \frac{\sqrt{2} \alpha^{(\frac{2p+\alpha k_1+ \alpha k_2-3}{2})} \big(\frac{2q+1}{2}\big)^{-(\frac{2p+\alpha k_1+ \alpha k_2-1}{2})}}{(2\pi)^\frac{\alpha}{2}}\nonumber \\ & G_{\alpha,2}^{2,\alpha} \Bigg(\frac{4(C\gamma_0^{\frac{-\alpha}{2}})^2 \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{2p+\alpha k_1+ \alpha k_2-1}{2}) \\ \Delta(2,0) \end{matrix} \Bigg) + 2\big(C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}\big)^2 \Gamma(\mu) \sum_{k=0}^{\mu-1} \frac{\Gamma(\mu)(C\gamma_0^{\frac{-\alpha}{2}})^k}{k!} \frac{\sqrt{2} \alpha^{(\frac{2p+\alpha k-3}{2})} \big(\frac{2q+1}{2}\big)^{-(\frac{2p+\alpha k-1}{2})}}{(2\pi)^\frac{\alpha}{2}} \nonumber \\ & G_{\alpha,2}^{2,\alpha} \Bigg(\frac{4(C\gamma_0^{\frac{-\alpha}{2}})^2 \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{2p+\alpha k-1}{2}) \\ \Delta(2,0) \end{matrix} \Bigg) - 2\left(C^{-\frac{\phi}{2}}\gamma_0^{\frac{\phi}{2}}\right) \nonumber\\ &\sum_{k=0}^{\mu-1}\frac{\Gamma(\mu)(C\gamma_0^{\frac{-\alpha}{2}})^{B+k-1}}{k!}\frac{\sqrt{2} 
\alpha^{(\frac{\alpha k+\alpha(B-1)+2p+\phi-3}{2})} \big(\frac{2q+1}{2}\big)^{-(\frac{\alpha k+\alpha(B-1)+2p+\phi-1}{2})} (C\gamma_0^{\frac{-\alpha}{2}})^{2(B-1)}}{(2\pi)^\frac{\alpha}{2}} \nonumber\\ & \times G_{\alpha,2}^{2,\alpha} \Bigg(\frac{16(C\gamma_0^{\frac{-\alpha}{2}})^2 \alpha^\alpha 2^{-\alpha}}{(2q+1)^\alpha} \Bigg| \begin{matrix} \Delta(\alpha, \frac{\alpha k+\alpha(B-1)+2p+\phi-1}{2})\\ \Delta(2,0)\end{matrix}\Bigg) \Bigg] \end{eqnarray} \hrule \end{figure*} \subsection{Average BER} The average BER of the presented system is given as \cite{Ansari2011} \begin{eqnarray} \label{eq:ber_eqn} \bar{P_e} = \frac{q^p}{2\Gamma(p)}\int_{0}^{\infty} \gamma^{p-1} {e^{{-q \gamma}}} F_{\gamma} (\gamma) d\gamma \end{eqnarray} where $p$ and $q$ are modulation parameters. \begin{my_lemma} If $\phi$ and $S_0$ are the pointing error parameters, and $\alpha$, $\mu$ are the fading parameters, then the average BER of the relay-assisted link is as given in \eqref{eq:ber}. \end{my_lemma} \begin{IEEEproof} See Appendix A for the proof. \end{IEEEproof} It is to be noted that the diversity order for the average BER is the same as that for the outage probability given in \cite{Pranay_2021_TVT}. \subsection{Ergodic Capacity}\label{sec:capacity} Using \eqref{eq:pdf_relay} and the inequality $\log_2(1+\gamma)\geq \log_2(\gamma)$, we define a lower bound on the ergodic capacity \begin{align}\label{eq:rate_eqn} \overline{C} =& \int_{0}^{\infty} \log_2 (\gamma) f_{\gamma} (\gamma) d\gamma \end{align} \begin{my_lemma} If $\phi$ and $S_0$ are the pointing error parameters, and $\alpha$, $\mu$ are the fading parameters, then the lower bound on the ergodic capacity of the relay-assisted link is given as \begin{align}\label{eq:rate} \overline{C} & = \bigg(\frac{4 A C^{\frac{-\phi}{\alpha}}\Gamma(\mu) \{-\alpha + \phi (\ln{\gamma_0}-\ln{C}+\psi(\mu))\}}{\alpha \phi^3 \ln{2}}\bigg) \nonumber \\ \times & (\phi - A C^{\frac{-\phi}{\alpha}}\Gamma(\mu)) + \frac{4 A^2 C^{\frac{-2\phi}{\alpha}} }{\alpha^{2} \phi \ln{2}} \nonumber \\ \times & \bigg \{ G_{2,1:1,2:2,2}^{0,2:2,0:1,2}\left(\begin{array}{c}1-\frac{\phi}{\alpha}-B,1-\frac{\phi}{\alpha} \\ \frac{-\phi}{\alpha} \end{array}\bigg| \begin{array}{c} 1\\ \mu, 0\\ \end{array}\bigg| \begin{array}{c} 1,1\\ 1,0\\ \end{array}\bigg| 1,\frac{\gamma_0}{C} \right) \nonumber \\ - G&_{2,1:1,2:2,2}^{0,2:2,0:1,2}\left( \begin{array}{c} 1-\frac{2\phi}{\alpha}-B,1-\frac{2\phi}{\alpha}\\ \frac{-2\phi}{\alpha} \end{array}\bigg| \begin{array}{c} 1\\ B, 0\\ \end{array}\bigg| \begin{array}{c} 1,1\\ 1,0\\ \end{array}\bigg| 1,\frac{\gamma_0}{C} \right) \bigg\} \end{align} \end{my_lemma} \begin{IEEEproof} We substitute \eqref{eq:pdf_thz} in \eqref{eq:rate_eqn}, and define $\overline{C} = \int_{0}^{\infty} \log_2 (\gamma) 2 (1 - F_{\gamma} (\gamma) )f_{\gamma} (\gamma) d\gamma$.
Further, substituting $\big(\sqrt{{\gamma}/{\gamma_0}}\big)^\alpha$= $t$ we get \begin{flalign}\label{eq:rate_int} &\overline{C} = \frac{4A}{\alpha^2} \Bigg[\int_{0}^{\infty} \log_2 (\gamma_0t) t^{\frac{\phi}{\alpha} - 1} \Gamma \bigg(B, C t\bigg) dt \nonumber \\ -& \frac{A C^{\frac{-\phi}{\alpha}}}{\phi} \Gamma(\mu) \int_{0}^{\infty}\log_2 (\gamma_0t) t^{\frac{\phi}{\alpha} - 1} \Gamma \big(B, C t\big) dt \nonumber \\ -& \frac{A C^{\frac{-\phi}{\alpha}}}{\phi} \int_{0}^{\infty} \log_2 (\gamma_0 ~t) t^{\big(\frac{2\phi}{\alpha} - 1\big)} \Big( \Gamma \big(B, C t\big) \Big)^2 dt \nonumber \\ + & \frac{A C^{\frac{-\phi}{\alpha}}}{\phi} \int_{0}^{\infty} \hspace{-2mm} \log_2 (\gamma_0 ~t) t^{\frac{\phi}{\alpha} - 1} \Gamma \big(B, C t\big) \Gamma \big(\mu, C t\big) dt \Bigg] \end{flalign} For the first and the second integral, we use the integration-by-parts method with $\log_2 (\gamma_0 t)$ being the first and $t^{\frac{\phi}{\alpha} - 1} \Gamma (B, C t)$ being the second term. For the third and the fourth integral, we once again make the transformation from $\log_2 (\gamma_0 ~t)$ to $\log_2 (1+ \gamma_0 ~t)$, and transform the $\log(.)$ and $\Gamma(.,.)$ functions to their Meijer's G equivalents \cite{meijer_equi}. Further, applying the identity \cite[07.34.21.0081.01]{meijer}, we get the closed form expression for these integrals. Finally, by substituting the solutions of integrals in \eqref{eq:rate_int}, we get the analytical expression for lower bound on ergodic capacity as given in \eqref{eq:rate}. \end{IEEEproof} \begin{figure*}[tp] \begin{center} \subfigure[Outage probability analysis for different $\phi$ and fading parameters $(\alpha,\mu)$.]{\includegraphics[scale = 0.35]{fig1a}} \subfigure[Average BER analysis for different $\phi$ and fading parameters $(\alpha,\mu)$.]{\includegraphics[scale = 0.35]{fig1b}} \caption{Outage probability and average BER performance of relay-assisted THz wireless system.} \label{fig:outage_ber} \end{center} \end{figure*} \subsection{Average SNR} The average SNR is the expected value of the instantaneous SNR. 
Using \eqref{eq:pdf_relay}, the average SNR of the dual-hop system is given as \begin{flalign}\label{eq:avg_snr_eqn} \bar{\gamma} =& \int_{0}^{\infty} \gamma f_{\gamma} (\gamma) d\gamma \end{flalign} \begin{my_lemma} If $\phi$ and $S_0$ are the pointing error parameters, and $\alpha$, $\mu$ are the fading parameters, then the average SNR of the relay-assisted link is given as \begin{eqnarray}\label{eq:avg_snr} &\bar{\gamma} = \frac{A \gamma_0 C^{\frac{-2(1+\phi)}{\alpha}}}{\phi (1+\phi)(2+\phi)} \nonumber \bigg[ {2(1+\phi)(C^{\frac{\phi}{\alpha}} - A \Gamma(\mu))\Gamma \left(\frac{2}{\alpha} + \mu \right)} \nonumber \\ & + A \phi \Gamma \big(\frac{2+\alpha \mu + \phi}{\alpha} \big) \Gamma \left(B \right) + A \Gamma \left(2\left( \frac{1}{\alpha} + \mu \right) \right) \nonumber \\ & \times \Big\{ 2 (-1^{-(\frac{2+\alpha \mu}{\alpha})} ) (1 +\phi) \mathcal{B}_{-1}\left[\frac{2}{\alpha} + \mu, 1 -2\left( \frac{1}{\alpha} + \mu \right) \right] \nonumber \\ & \times (-1^{(1-B)})\phi \mathcal{B}_{-1}\left[B, 1 -2\big( \frac{1}{\alpha} + \mu \big) \right] \nonumber \\ & - \left(\frac{\alpha (2+\phi) {}_2F_1 \big(2( \frac{1}{\alpha} + \mu ),\frac{2+\alpha \mu + \phi}{\alpha}; \frac{2+\alpha + \alpha \mu + \phi}{\alpha};-1 \big)}{2+\alpha \mu + \phi}\right) \bigg\} \bigg] \end{eqnarray} \end{my_lemma} \begin{IEEEproof} We substitute \eqref{eq:pdf_relay} in \eqref{eq:avg_snr_eqn}, and define $ \bar{\gamma} = \int_{0}^{\infty} 2 \gamma (1 - F_{\gamma} (\gamma)) f_{\gamma} (\gamma) d\gamma$. Further, substituting $\big(\sqrt{{\gamma}/{\gamma_0}}\big)^{\alpha} = t$, we get the average SNR of the relay-assisted link \begin{eqnarray} \label{eq:avg_snr_int} &\bar{\gamma} = \frac{2 \gamma_0 A}{\alpha}\bigg[\int_{0}^{\infty} ~t^{\frac{2+\phi}{\alpha}-1} \Gamma \big(B, C t\big) dt - \frac{A C^{\frac{-\phi}{\alpha}}}{\phi} \Gamma(\mu) \nonumber \\ & \int_{0}^{\infty} t^{\frac{2+\phi}{\alpha}-1} \Gamma \big(B, C t\big) dt -\frac{A}{\phi}\int_{0}^{\infty} t^{\frac{2(1+\phi)}{\alpha}-1} \big( \Gamma \big(B, C t\big) \big)^2 dt \nonumber \\ & + \frac{A C^{\frac{-\phi}{\alpha}}}{\phi} \Gamma(\mu) \int_{0}^{\infty} t^{\frac{2}{\alpha}} t^{\frac{\phi}{\alpha} - 1} \Gamma \big(B, C t\big) \Gamma \big(\mu, C t\big) \bigg] dt \end{eqnarray} The first and the second integrals can be solved using the identity \cite[6.455/1]{integrals}. The third integral is solved by applying the integration-by-parts method with $\Gamma \big(B, C t\big)$ as the first and $t^{\frac{2(1+\phi)}{\alpha}-1} \Gamma \big(B, C t\big)$ as the second term. We follow a similar procedure to solve the fourth integral. Finally, using the limits of the integrals in the following identity, we get the solution for both integrals. \begin{equation} \int t^{x-1} \Gamma(a,t) dt = \frac{t^x \Gamma(a,t) - \Gamma(a+x,t)}{x} \end{equation} Further, to simplify the expression, we use the following identity to represent hypergeometric functions, \begin{equation} \mathcal{B}_{z}(a,b) = \frac{z^a }{a} {}_2F_1 (a,1-b;a+1;z) \end{equation} By solving the integrals and substituting them into \eqref{eq:avg_snr_int}, we get the closed-form expression for the average SNR as given in \eqref{eq:avg_snr}.
\end{IEEEproof} \section{Simulation and Numerical Results}\label{sec:sim_results} \begin{figure*}[tp] \begin{center} \subfigure[Average SNR for different values of $\phi$, $\alpha=1$ and $\mu=1.5$.]{\includegraphics[scale = 0.35]{fig2a}} \subfigure[Total capacity for different link distances, $\phi=2.4$, $\alpha=1$ and $\mu=1.5$.]{\includegraphics[scale = 0.35]{fig2b}} \caption{Average SNR and data rate performance of relay-assisted THz wireless system.} \label{fig:snr_rate} \end{center} \end{figure*} In this section, we validate the derived analytical expressions with the help of Monte-Carlo simulations (averaged over $10^7$ channel realizations) using MATLAB software. We consider an operating frequency of $ 275 $ \mbox{GHz} with transmit and receive antenna gains of $45$ \mbox{dBi}. The path-loss is calculated for the THz link using the parameters given in \cite{Boulogeorgos_Analytical}. The values of the fading parameters $\alpha$ and $\mu$ are taken in the range $1$-$3$ to model the generalized $\alpha$-$\mu$ distribution. The pointing error parameters $\phi$ and $S_0$ are calculated using the values provided in \cite{Farid2007}. We take the noise PSD for THz to be $-174$ \mbox{dBm/Hz} and the channel bandwidth as $10$ \mbox{GHz} \cite{Sen_2020_Teranova}. In Fig. \ref{fig:outage_ber}a, we illustrate the outage probability performance of the relayed system for an SNR threshold of $4$ \mbox{dB}. It is evident from the figure that the outage probability decreases with an increase in the parameter $\phi$, for the same $\alpha$-$\mu$ fading parameters. We can also observe the impact of the fading and pointing error parameters on the diversity order for the outage performance: for $\phi=6.7$, the slope of the plots changes with the fading parameters $\alpha$ and $\mu$, which verifies the diversity order analysis. In Fig. \ref{fig:outage_ber}b, we present the average BER performance of the system. Similar to the outage probability, the average BER decreases when the parameter $\phi$ increases. When the channel parameters are $\alpha=1$ and $\mu=1.5$, the diversity order remains unchanged; thus, the slope of the BER plot does not change. However, when $\alpha$ and $\mu$ are increased to $2$ and $2.5$, respectively, the diversity order becomes dependent on $\phi$, and a change in the slope of the plots confirms the diversity order analysis. In Fig. \ref{fig:snr_rate}a, we demonstrate the average SNR performance of the relaying system. The average SNR is plotted versus the pointing error parameter $\phi$ for different transmit powers, with $\alpha = 1$ and $\mu = 1.5$. As $\phi$ is increased, the average SNR first increases and then becomes constant, which implies that there is not much improvement in the link performance beyond a certain value of $\phi$. We can also see a significant increase in the average SNR as the transmit power is increased. Finally, in Fig. \ref{fig:snr_rate}b, we analyze the ergodic capacity of the dual-hop system over a bandwidth of $10$ \mbox{GHz}. As expected, the capacity decreases when the link distance increases. Further, an increase in the transmit power from $0$ dBm to $30$ dBm increases the ergodic capacity by nearly $100$ Gbps. Fig. \ref{fig:snr_rate}b also provides valuable insights on the link distance versus data rate performance. A high data rate of $100$ Gbps is achievable even when the link distance is $40$ \mbox{m} with a transmit power of $30$ dBm.
Thus, different capacity requirements for backhaul applications can be met by configuring the dual-hop system with an appropriate transmit power and link distance. \section{Conclusion and Future Work}\label{sec:conc} In this paper, we investigated the performance of a dual-hop THz-THz wireless system. We considered a generalized i.i.d.~$\alpha$-$\mu$ fading channel model combined with statistical pointing errors. We derived exact analytical expressions for the outage probability, average BER, average SNR, and a lower bound on the ergodic capacity of the relay-assisted system. The derived results were verified with Monte-Carlo simulations under different channel conditions and system parameters. We also verified the diversity order of the system to provide insights on the system performance at high SNR. The ergodic capacity analysis demonstrates that data rates of up to several \mbox{Gbps} can be achieved with THz transmissions, which may fulfill the demands of future generation wireless systems. The work presented in this paper can be extended with an exact analysis of the performance of the CSI-assisted AF relay for the dual-hop THz-THz system with hardware impairments. \section*{Appendix A: Proof of Lemma 1} Using \eqref{eq:cdf_relay} in \eqref{eq:ber_eqn}, we define the average BER as $\bar{P_e} = \bar{P_e}_{I_1}+\bar{P_e}_{I_2}$, where $\bar{P_e}_{I_1} = \frac{2q^p}{\Gamma(p)} \int_{0}^{\infty} \gamma^{p-1} e^{-q\gamma} F_\gamma(\gamma) d\gamma$, which can be simplified as \begin{eqnarray} \label{eq:I_1_int} &\bar{P_e}_{I_1} = \left(\frac{A C^{-\frac{\phi}{2}} q^p} {2\sqrt{2\pi}\phi \Gamma(p)}\right) \bigg[\int_{0}^{\infty} \Gamma(\mu) e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\ &+ \Big({C^{-\frac{\phi}{2}} \gamma_0^{-\frac{\phi}{2}}}\Big) \int_{0}^{\infty} \Gamma\big(B,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big) e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\& - \int_{0}^{\infty} \Gamma\big(\mu,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big) e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \bigg] \end{eqnarray} The first integral is straightforward and can be solved easily. For the second integral, we use the approximation for $\Gamma\big(B,C {\gamma_{0}}^{\frac{-\alpha}{2}} \gamma^{\frac{\alpha}{2}} \big) $ using $\Gamma(a,x) \approx e^{-x}x^{a-1}$. Further, we apply the identity for the product of two Meijer's G-functions \cite{meijer} to get the solution. For the third integral, we use the series expansion of ${\Gamma\big(\mu, C {\gamma_{0}}^{\frac{-\alpha}{2}} \gamma^{\frac{\alpha}{2}}\big)}$ using $\Gamma(a,bx)$ =$(a-1)! e^{(-bx)}$ $\sum_{k=0}^{a-1}\frac{(bx)^k}{k!}$ and apply the identity \cite{meijer}.
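Both expansions used above, the large-argument approximation $\Gamma(a,x)\approx e^{-x}x^{a-1}$ and the finite series for integer order, are easy to sanity-check numerically. A small illustrative sketch follows; all values are placeholders.

\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc, factorial

x, a, p, b = 8.0, 2.3, 4, 1.5            # p must be a positive integer

# Gamma(a, x) ~ exp(-x) * x**(a - 1) for large x
print(gamma(a) * gammaincc(a, x), np.exp(-x) * x**(a - 1))

# Gamma(p, b*x) = (p-1)! exp(-b*x) sum_{k=0}^{p-1} (b*x)**k / k!
series = factorial(p - 1) * np.exp(-b * x) * sum(
    (b * x)**k / factorial(k) for k in range(p))
print(gamma(p) * gammaincc(p, b * x), series)   # identical up to rounding
\end{verbatim}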
We define $\bar{P_e}_{I_2} = \frac{q^p}{\Gamma(p)} \int_{0}^{\infty} \gamma^{p-1} e^{-q\gamma} \big(F_\gamma(\gamma)\big)^2 d\gamma$, which can be rewritten as \begin{flalign}\label{eq:I_2_int} &\bar{P_e}_{I_2}\hspace{-1mm} = \hspace{-1mm}\Bigg(\hspace{-1mm}\frac{A} {\sqrt{8\pi }\phi \gamma_0^{\frac{\phi}{2}}}\hspace{-1mm}\Bigg)^2 \hspace{-2mm}\frac{q^p}{\Gamma(p)} \Bigg[\hspace{-1mm}\Big(\hspace{-1mm}{C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}}\Big)^{\hspace{-1mm}2} \hspace{-2mm}\int_{0}^{\infty} \hspace{-4mm} {(\Gamma(\mu))}^2 e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\& + \int_{0}^{\infty} \bigg(\Gamma\big(B,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big)\bigg)^2 e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\ &+ \Big({C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}}\Big)^2 \int_{0}^{\infty} \bigg( \Gamma\big(\mu,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big)\bigg)^2 e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\ &+ 2\Big({C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}}\Big)^2 \int_{0}^{\infty} \Gamma(\mu) \Gamma\big(\mu,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big) e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\ &+ 2\Big({C^{-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}}\Big) \int_{0}^{\infty} \Gamma(\mu) \Gamma\big(B,C\big(\frac{\gamma}{\gamma_0}\big)^{\frac{\alpha}{2}}\big) e^{-\gamma \big(\frac{2q+1}{2}\big)} \gamma^{\frac{2p-3}{2}} d\gamma \nonumber \\ &-\hspace{-1mm} 2\Big(\hspace{-1mm}{C^{\hspace{-1mm}-\frac{\phi}{2}} \gamma_0^{\frac{\phi}{2}}}\Big) \hspace{-2mm}\int_{0}^{\infty} \hspace{-4mm} \Gamma\big(\mu,C\big(\frac{\gamma}{\gamma_0}\big)^{\hspace{-1mm}\frac{\alpha}{2}}\big) \Gamma\big(B,C\big(\frac{\gamma}{\gamma_0}\big)^{\hspace{-1mm}\frac{\alpha}{2}}\big) e^{\hspace{-1mm}-\gamma \big(\hspace{-1mm}\frac{2q+1}{2}\hspace{-1mm}\big)} \hspace{-1mm}\gamma^{\frac{2p-3}{2}} d\gamma \Bigg] \end{flalign} Following a similar procedure as applied for \eqref{eq:I_1_int}, we solve \eqref{eq:I_2_int}. The first integral is straightforward and can be solved easily. For the second integral, we approximate $\Gamma\big(B,C {\gamma_{0}}^{-\frac{\alpha}{2}} \gamma^{\frac{\alpha}{2}} \big)$ using $\Gamma(b,y) \approx e^{-y}y^{b-1}$. Further, we apply the identity \cite[07.34.21.0012.01]{meijer} to get the solution. For solving the third integral, we use the series expansion of ${\Gamma\big(\mu, C {\gamma_{0}}^{-\frac{\alpha}{2}} \gamma^{\frac{\alpha}{2}}\big)}$, namely $\Gamma(a,by) = (a-1)!\, e^{-by} \sum_{k=0}^{a-1}\frac{(by)^k}{k!}$, and apply the identity of \cite{meijer}. Likewise, following a similar procedure for the subsequent integrals and adding the solutions, we obtain \eqref{eq:ber}. \bibliographystyle{IEEEtran}
\section{Introduction} Many biological systems can be structurally described as random line networks. Typical examples are gels that self-organize by the polymerization and subsequent crosslinking of filamentous proteins, such as collagen or fibrin. A frequently used parameter to characterize the stochastic geometry of such systems is the average pore size $\overline{r}_{pore}$. It is determined by finding, for a representative fraction of the network pores, the largest sphere that can be fitted into each pore, and then computing the average of the radii of these maximal spheres. While this can be done numerically in a straightforward yet time-consuming way, this definition of pore size is not well suited for exact analytical calculations. Therefore, we suggest as an alternative measure the most probable nearest obstacle distance $\sigma$ for randomly chosen test points and show that it is directly related to the pore size. \begin{figure} \includegraphics[width=0.65\linewidth]{figs/fig0.pdf} \caption{\label{fig:fig0} Collagen gel with a concentration of 1.2 mg/ml (scale bar is 10 $\mu$m). Shown is the maximum intensity projection of a stack of 15 single confocal images recorded at a z-distance of 340 nm (total height = 5.1 $\mu$m). The fibers are straight on the scale of a typical pore size $\overline{r}_{pore}$ and their diameter is much smaller than $\overline{r}_{pore}$. Therefore, the structure of the system can be well approximated by a Mikado line network in the long fiber limit.} \end{figure} We start our investigations with a `Mikado'-like network model that has two parameters, the length $l$ of the line segments and the volume density $\rho$ of their centers. In this model, it is possible to compute the distribution $p(r_{no})$ of nearest obstacle distances analytically. In the limit of zero line length, the Mikado model contains the case of point networks. More interesting is the opposite limit, where $l$ is much larger than the average pore size. In this case, the Mikado model converges towards a more general model that represents any random line network with a large persistence length. Indeed, the single system parameter in this limiting case is the overall line density $\lambda$, i.e. the total line length per unit volume. This parameter only sets the spatial scale of the network, and no other details matter for the distribution $p(r_{no})$ of nearest obstacle distances or the distribution $W(r_{pore})$ of pore sizes. For example, a network composed of random circles with identical overall line density would yield the same universal Rayleigh distribution $p(r_{no})$ as the Mikado model, provided the radius of the circles is much larger than the average pore size. We compare these analytical results to a numerical simulation that is directly based on the exact analytic geometry of points and lines, thus avoiding any possible artifacts arising from voxelization. After demonstrating perfect agreement of the simulations with the analytic results, we use the simulations to determine the pore size distribution for line networks of various density parameters $\lambda$. As expected from scaling arguments, the average pore size $\overline{r}_{pore} = c \sigma$ is simply proportional to $\sigma$, allowing us to determine the conversion factor as $c\approx$1.86.
\section{Model and Theory} \subsection*{Distribution of nearest obstacle distances $p(r_{\rm{no}})$, accessible volume fraction $Q(r)$ and pore sizes} We consider random biphasic networks, in which every point of 3-dimensional space belongs either to phase 0 (pore, liquid) or to phase 1 (material, solid). In order to map out the stochastic geometry of the network, one can repeatedly choose a random point $\vec{R}_0=(x,y,z)$ within the 0-phase of the network and then find its `nearest obstacle distance' $r_{\rm{no}}(\vec{R}_0)$, defined as the Euclidean distance from that point $\vec{R}_0$ to the closest point of the 1-phase (compare Fig. \ref{fig:fig1}(a)). The network is then characterized by the distribution $p(r_{\rm{no}})$ of the nearest obstacle distances. Closely related to $p(r_{\rm{no}})$ is the `accessible volume fraction' $Q(r)$, defined as the fraction of the 0-phase in which a sphere of radius $r$ (from now on called an r-sphere) could be centered without overlapping the 1-phase (compare Fig. \ref{fig:fig1}(b)). In general, the dimensionless quantity $Q(r)$ has the value $Q(r\!=\!0)\!=\!1$ and decreases monotonically for all radii $r\!>\!0$. The complementary quantity $1-Q(r)$ is the fraction of the 0-phase for which an r-sphere overlaps the 1-phase. It corresponds to the probability that a random 0-phase point $\vec{R}_0$ has a nearest obstacle distance $r_{\rm{no}}$ smaller than $r$, or \begin{eqnarray} 1-Q(r) &=& \mbox{Prob}(r_{\rm{no}}<r)\nonumber\\ &=& \int_0^r p(r_{\rm{no}}) dr_{\rm{no}}. \end{eqnarray} Taking the derivative of this equation with respect to $r$ shows that $p(r_{\rm{no}})$ is just the negative derivative of $Q(r)$: \begin{equation} p(r=r_{\rm{no}}) = - \frac{d}{dr} Q(r). \end{equation} While both quantities carry the same information about the network, the cumulative $Q(r)$ is more convenient for analytical considerations, as will be demonstrated below. Another way to characterize the pores of a network is to find the maximum sphere that fits into each pore and to define the `pore size' $r_{\rm{pore}}$ as the radius of this maximum sphere. The concept is illustrated in Fig. \ref{fig:fig1}(c). We denote the distribution of pore sizes by $W(r_{\rm{pore}})$. \begin{figure} \includegraphics[width=0.95\linewidth]{figs/fig1.pdf} \caption{\label{fig:fig1} \textbf{2D illustration of various statistical measures used for networks of line segments.} (a) Nearest obstacle distances $r_{\rm{no}}$ (thin lines) for a few selected points (circles). (b) Accessible volume (shaded areas) for spheres of a given radius. (c) Maximum spheres fitting into network pores, thereby defining the pore sizes $r_{pore}$. (d) A homogeneous, isotropic random distribution of straight line segments. The segments have a prescribed length $l$ and their center points a spatial density of $\rho$. (e) Classification of line segments in the 2D Mikado model. 1-group (squares): Centers within r-sphere. 2-group (full circles): Centers outside r-sphere, yet with chance of overlap. 3-group (empty circles): Remote segments without chance of overlap. (f) 2D sketch of an r-sphere (green), a concentric spherical shell of radius $R$ (gray) and a specific point (red) within this shell.
Of all line segments centered at the red point, only those with orientations falling into a cone of apex angle $\omega$ can intersect the r-sphere.} \end{figure} \subsection*{Random Line Networks: The Mikado model} In the following we consider random networks in which the 1-phase consists of straight line segments of fixed length, with isotropic orientations and a homogeneous distribution throughout the 3D volume. We refer to this model as the Mikado model. Each individual line segment (LS) can be described by its center point and a unit direction vector. In order to avoid ambiguities, we require that all unit vectors have a positive z-component and thus `point upwards' (compare Fig. \ref{fig:fig1}(d)). The two parameters of the Mikado model are the length $l$ of the LSs and the volume density $\rho\!=\!\frac{N}{V}$ of their center points, where $N$ is the number of line segments within a volume $V$. Consider first the extreme case $l\rightarrow0$, where all LSs degenerate into their center points, and place an r-sphere randomly into the system. Note that the configuration of LS-centers throughout the volume is a spatial Poisson process with an `event rate' identical to the volume density $\rho$. On average, the r-sphere will contain a number of \begin{equation} n_{av,l\rightarrow0}(r)=\rho\frac{4}{3}\pi r^3 \end{equation} LS-centers. The probability $Q(r)$ that not a single LS-center lies within the r-sphere is given by the Poisson probability for $k\!=\!0$ events, which is \begin{equation} Q(r)=\mbox{Poisson}\left\{k=0,n_{av}=n_{av,l\rightarrow0}(r)\right\}=e^{-n_{av,l\rightarrow0}(r)}. \end{equation} Therefore, in the case of the random point network the accessible volume fraction is given by \begin{equation} Q(r)_{l\rightarrow0}=e^{-\frac{4\pi}{3}\rho r^3}. \end{equation} We now turn back to the general case $l>0$. As before, we can write \begin{equation} Q(r)=e^{-n_{av}(r)}. \end{equation} In order to compute $n_{av}(r)$, we note that with respect to a given r-sphere, the LSs can be classified into three groups (compare Fig. \ref{fig:fig1} (e)): \begin{itemize} \item 1-group with LS-centers inside the r-sphere. \item 2-group with LS-centers outside the r-sphere, yet with a possibility of intersecting the r-sphere. \item 3-group with LS-centers too far away to touch the r-sphere. \end{itemize} Only groups 1 and 2 contribute to $n_{\rm{av}}(r)$. The contribution of the 1-group is identical to the case of point networks above: \begin{equation} n_{av,1}(r)=\rho\frac{4}{3}\pi r^3. \end{equation} The 2-group consists of LSs with centers in a sphere of radius $r+(l/2)$ around the center of the r-sphere. We now consider in more detail the ones in an infinitesimal spherical shell of radius $R$ around the center of the r-sphere, with $r\!<\!R\!<\!r\!+\!(l/2)$. This R-shell contains a number of \begin{equation} dN^{\prime}=\rho dV=\rho 4\pi R^2dR \end{equation} candidates for intersection. Among them, only those LSs will actually overlap the r-sphere whose orientations lie within a certain cone (compare Fig. \ref{fig:fig1}(f)). This cone has an apex angle of $\omega=2\arcsin(r/R)$ and the corresponding solid angle is \begin{eqnarray} \Omega(R)&=&4\pi\sin^2(\omega/4)\nonumber\\ &=& 4\pi\left[ \sin\left(\frac{1}{2}\arcsin(r/R)\right)\right]^2\nonumber\\ &=&2\pi\left(1-\sqrt{1-(r/R)^2}\right).
\end{eqnarray} Since the total solid angle available for LS orientations is $\Omega_{tot}=2\pi$ (according to our convention that all unit direction vectors point upwards), the intersecting LSs amount to a fraction of $\Omega(R)/\Omega_{tot}=\left(1-\sqrt{1-(r/R)^2}\right)$. We conclude that the average number of actual intersections from LSs within the R-shell is \begin{eqnarray} dN(R)&=&dN^{\prime}\left(1-\sqrt{1-(r/R)^2}\right)\nonumber\\ &=&4\pi\rho \left(1-\sqrt{1-(r/R)^2}\right) R^2 dR. \end{eqnarray} The total contribution from all LSs of the 2-group is obtained by integration over the relevant R-shells: \begin{equation} n_{av,2}(r)=\int_{R=r}^{R=r+(l/2)} dN(R). \end{equation} This integral can be performed analytically. Using the abbreviation \begin{equation} f(s):= \frac{1}{3}\left[ s^3 - (s^2-1)^{3/2} \right], \end{equation} one obtains \begin{equation} n_{av,2}(r)= 4\pi\rho r^3 \left[ f(1+\frac{l}{2r}) - f(1) \right]. \end{equation} By adding the contributions of both relevant groups, $n_{\rm{av}}(r)=n_{av,1}(r)+n_{av,2}(r)$, and using $Q(r)=e^{-n_{av}(r)}$, we arrive at an analytic expression for the accessible volume fraction in the Mikado model. Defining another useful abbreviation \begin{equation} g(x) := 3 \left[ f(1+\frac{x}{2})-\frac{1}{3}\right], \end{equation} the result can be cast into the form \begin{equation} Q(r)= e^{-\frac{4\pi}{3}\rho r^3\left[ 1 + g(l/r)\right]}. \end{equation} It correctly contains the limit of point networks, since $g(l/r)\rightarrow 0$ for $l \rightarrow 0$. All differences between point and LS networks are contained in the `perturbation function' $g(l/r)$. From the accessible volume fraction $Q(r)$, we immediately obtain the distribution of nearest obstacle distances $p(r_{\rm{no}}\!=\!r) = - \frac{d}{dr} Q(r)$ in the Mikado model. With increasing $r_{\rm{no}}$, this distribution starts with $p(r_{\rm{no}}\!=\!0)\!=\!0$, develops a single peak and then decays exponentially for distances much larger than the average pore size $\overline{r}_{pore}$ of the network. \subsection*{Mikado model in the long fiber limit} We next consider the case $l \gg r$, where the LSs are much longer than the typical distances of interest. Since $p(r_{\rm{no}})$ is exponentially small for distances beyond the average pore size, this limit can also be interpreted as $l \gg \overline{r}_{\rm{pore}}$. Note that this is a typical situation for networks of semi-flexible fibers, such as collagen. It is straightforward to show that in this limit the perturbation function diverges as $g(l/r) \rightarrow \frac{3}{4}\frac{l}{r}$. One therefore obtains \begin{equation} Q_{l\gg r}(r)= e^{-(\pi\rho l) r^2} = e^{-\frac{1}{2}(r/\sigma)^2} \label{equ:qlim} \end{equation} which is the `right half' of a Gaussian bell curve with standard deviation \begin{equation} \sigma=1/\sqrt{2\pi\rho l}. \end{equation} The corresponding distribution of nearest obstacle distances is a Rayleigh distribution \begin{equation} p_{l\gg r}(r)= \frac{r}{\sigma^2} e^{-\frac{1}{2}(r/\sigma)^2}. \label{equ:pnodlim} \end{equation} The most probable nearest obstacle distance, i.e. the value of $r$ at which $p(r)$ is maximal, is given by $\sigma$. We note that the accessible volume fraction in Eq.(\ref{equ:qlim}) depends only on the ratio $r/\sigma$. Therefore, all nearest obstacle distance distributions $p_{l\gg r}(r)$ should collapse onto a universal distribution when the distance $r$ is measured in units of $\sigma$. In the long fiber limit, a dense and a dilute Mikado network cannot be distinguished from each other if the spatial scale is unknown.
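These closed-form expressions are easy to evaluate numerically. The following short Python sketch implements $f$, $g$ and $Q(r)$ as defined above, obtains $p(r_{\rm{no}})$ by numerical differentiation, and compares it with the Rayleigh limit; the chosen values of $\rho$ and $l$ are arbitrary examples.
\begin{verbatim}
import numpy as np

def f(s):
    # f(s) = [s^3 - (s^2 - 1)^(3/2)] / 3, for s >= 1
    return (s**3 - (s**2 - 1.0)**1.5) / 3.0

def g(x):
    # perturbation function g(l/r) = 3 * [f(1 + x/2) - 1/3]
    return 3.0 * (f(1.0 + x / 2.0) - 1.0 / 3.0)

def Q(r, rho, l):
    # accessible volume fraction of the Mikado model
    return np.exp(-(4.0 * np.pi / 3.0) * rho * r**3 * (1.0 + g(l / r)))

def p_no(r, rho, l, dr=1e-7):
    # nearest obstacle distance distribution p(r) = -dQ/dr
    return -(Q(r + dr, rho, l) - Q(r - dr, rho, l)) / (2.0 * dr)

rho, l = 100.0, 1.0                        # lambda = rho * l = 100
sigma = 1.0 / np.sqrt(2.0 * np.pi * rho * l)
r = np.linspace(1e-4, 5.0 * sigma, 500)
rayleigh = (r / sigma**2) * np.exp(-0.5 * (r / sigma)**2)
dev = np.max(np.abs(p_no(r, rho, l) - rayleigh)) / np.max(rayleigh)
print(dev)   # deviation shrinks further as l / sigma grows
\end{verbatim}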
\begin{figure} \includegraphics[width=0.6\linewidth]{figs/fig2.pdf} \caption{\label{fig:fig2} (a) Distribution of nearest obstacle distances in 3D networks of straight line segments, for three different density parameters $\lambda$. Analytical predictions of the Mikado model in the long fiber limit (dashed lines) are compared to numerical simulations (solid lines). The unit of length was set equal to the linear size $L$ of the simulation box, which in turn was equal to the length $l$ of the line segments. (b) Distribution of nearest obstacle distances $p(r)$ (line) and pore size distribution $W(r)$ (line with symbols) in a 3D network of straight line segments, for a density parameter of $\lambda\!=\!100\!/\!L^2$. (c) Most probable obstacle distance $\sigma$ (squares) and average pore size $\overline{r}_{pore}$ (circles) as a function of the density parameter $\lambda$. In the long fiber limit, the ratio is constant with $\overline{r}_{pore}/\sigma\approx1.86$.} \end{figure} \subsection*{Relating $\sigma$ to line density} It is remarkable that in the long fiber limit of the Mikado model, the properties of the network are completely determined by the parameter combination $\rho l$, which appears in the quantity $\sigma=1/\sqrt{2\pi\rho l}$. Remembering the definition of $\rho$ as the volume density of LS centers, we can write \begin{equation} \rho l = \frac{N}{V}l = \frac{L_{tot}}{V} =: \lambda, \end{equation} where $L_{tot}$ is the total length of all LSs. The new density parameter $\lambda$ corresponds to the {\em total `fiber' length per unit volume}. It follows that \begin{equation} \sigma=1/\sqrt{2\pi\lambda}. \label{equ:sigma} \end{equation} \subsection*{Numerical test of the Mikado model \label{sec:numTest}} In order to test the predictions of the Mikado model, we have simulated random line networks and compared the resulting numerical $p(r_{\rm{no}})$ with the analytical results above. In the simulation, each line segment (of constant length $l$) was numerically represented by its center coordinates and a unit direction vector, as depicted schematically in Fig. \ref{fig:fig1}(d). Initially, a list of $N$ such line objects was generated, with the center points distributed randomly throughout a cubic simulation box of linear dimension $L$ (with homogeneous density $\rho=\frac{N}{V}=\frac{N}{L^3}$) and with random, isotropic direction vectors \footnote{More precisely, in order to avoid boundary effects, we extended the simulation box on each side by $l/2$ and distributed a correspondingly larger number of $N^* = N (L+l)^3/L^3$ line centers within this extended box.}. The distribution $p(r_{\rm{no}})$ was determined by randomly choosing $K=10^5$ test points $\vec{R}_{k=1\ldots K}$ within the simulation box, finding the nearest obstacle distance $r_{\rm{no}}(\vec{R}_k)$ for each test point and then computing a histogram of these distances. The distance $r_{\rm{no}}(\vec{R}_k)$ is found by first computing the distances $r_{kn}$ between the test point $\vec{R}_k$ and all lines $n$ of the network and then taking the smallest of these values. Note that the distance $r_{kn}$ between a point and a line segment can be obtained exactly, without any `voxelization' required (see the sketch below).
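The core of this procedure can be condensed into a few lines. The following Python sketch generates a Mikado network with the extended box described in the footnote and samples nearest obstacle distances using the exact point-to-segment distance; the number of test points is reduced here for brevity, and the Rayleigh median $\sigma\sqrt{2\ln 2}$ serves as a quick consistency check.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_directions(n):
    # isotropic unit vectors, flipped to z > 0 by our convention
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[v[:, 2] < 0] *= -1.0
    return v

def nearest_obstacle_distance(p, centers, dirs, l):
    # exact point-to-segment distances: clamp the foot point to the segment
    t = np.clip(np.einsum('ij,ij->i', p - centers, dirs), -l / 2.0, l / 2.0)
    feet = centers + t[:, None] * dirs
    return np.min(np.linalg.norm(p - feet, axis=1))

L = l = 1.0
lam = 100.0                                # total line length per unit volume
N = int(lam * L**3 / l)                    # fibers in the nominal box
N_ext = int(N * (L + l)**3 / L**3)         # extended box (see footnote)
centers = rng.uniform(-l / 2.0, L + l / 2.0, size=(N_ext, 3))
dirs = random_directions(N_ext)

K = 2000                                   # 10^5 in the actual simulation
r_no = np.array([nearest_obstacle_distance(rng.uniform(0.0, L, size=3),
                                           centers, dirs, l)
                 for _ in range(K)])

sigma = 1.0 / np.sqrt(2.0 * np.pi * lam)
print(np.median(r_no), sigma * np.sqrt(2.0 * np.log(2.0)))  # Rayleigh median
\end{verbatim}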
For the numerical test of the Mikado model in the long fiber limit, we prescribed the density parameter $\lambda$, set $L=l=1$ and computed the required number of fibers as $N=\frac{\lambda L^3}{l}$. We found excellent agreement between the analytical prediction and the simulation (compare Fig. \ref{fig:fig2}(a)). \subsection*{Relation between the most probable nearest obstacle distance $\sigma$ and the average pore size $\overline{r}_{pore}$} For any concrete network, it is possible to compute the nearest obstacle distance $r_{\rm{no}}(\vec{R}_0)$ for each spatial point $\vec{R}_0$, resulting in a so-called `Euclidean distance map' (EDM). The pore centers of the network can then be defined as the positions $\vec{R}_0=\vec{R}_{max}^{(i)}$ of the local maxima of the EDM, and the pore size distribution $W(r_{pore})$ is the distribution of the distance values $r_{pore}^{(i)} = r_{\rm{no}}(\vec{R}_{max}^{(i)})$ taken at these local maxima. Based on our numerically exact simulation of random line networks, as described in Sect.~\ref{sec:numTest}, we have computed the pore size statistics $W(r_{pore})$ and compared it to the corresponding distribution $p(r_{\rm{no}})$ of nearest obstacle distances. As expected, $W(r_{pore})$ is peaked at a larger value than $p(r_{\rm{no}})$ (compare Fig. \ref{fig:fig2}(b)). In the long fiber limit, the ratio $\overline{r}_{pore}/\sigma$ between the average pore size and the most probable obstacle distance is a constant, i.e. independent of the density parameter $\lambda$ of the network. This follows from the fact that the distribution $p(r_{\rm{no}})$ is universal in length units of $\sigma$. To demonstrate the constant ratio, we have plotted $\overline{r}_{pore}(\lambda)$ and $\sigma(\lambda)$ double-logarithmically (compare Fig. \ref{fig:fig2}(c)). \section{Summary} In this paper we have theoretically investigated random line networks, modelled as isotropic and macroscopically homogeneous distributions of straight line segments in 3D space. In the limiting case where the line segments are much longer than the average pore size $\overline{r}_{pore}$, the distances $r_{\rm{no}}$ of random test points to the nearest line segment are distributed according to a Rayleigh distribution \begin{equation} p(r_{\rm{no}})= \frac{r_{\rm{no}}}{\sigma^2} e^{-\frac{1}{2}(r_{\rm{no}}/\sigma)^2}. \end{equation} The most probable distance $\sigma$ (peak position of the distribution) is determined by the overall line density $\lambda$, i.e. the total line length per unit volume, via \begin{equation} \sigma=\frac{1}{\sqrt{2\pi\lambda}}. \end{equation} The average pore size $\overline{r}_{pore}$, defined via the radii of the maximum spheres fitting into the pores, is proportional to $\sigma$, with $\overline{r}_{pore} \approx 1.86 \;\sigma$. \begin{acknowledgments} This work was supported by grants from Deutsche Forschungsgemeinschaft. \end{acknowledgments} \end{document}
\section{Introduction} \label{intro} Since the invention of transistors in 1948 \cite{Bardeen1948}, germanium has been used in a broad variety of applications, ranging from gamma-ray detection \cite{HPGeDet} to fiber optics \cite{friebeleGriscom:defect, ballato:glassclad} to the search for dark matter \cite{MJD:DM2017, cogent:APS, CDMS2020}. The state-of-the-art technology allows the production of detector blanks with lengths and diameters of 8-9 cm using the Czochralski method. With a level of impurities of the order of $10^{10}$ atoms/cm$^3$, such crystals can be converted into High Purity Germanium (HPGe) detectors. A HPGe detector is a semiconductor device. Two electrodes on the crystal surface are used to apply a bias voltage and extend the semiconductor junction throughout the full detector volume. When a gamma-ray or charged particle interacts within the detector, it creates a large number of charge carriers, i.e. electrons and holes. Charge carriers of the same sign drift together towards the electrodes as a cluster, following the electric field lines. Their motion induces a signal on the electrodes that is typically read out by a charge sensitive amplifier. Similar to a time projection chamber, the time structure of the read-out signal contains information on the topology of the event, i.e. on the number and location of the energy depositions. An important field of applications for germanium detectors is the search for neutrinoless $\beta\beta$ decay ($0\nu\beta\beta$), a nuclear transition predicted by many extensions of the Standard Model of particle physics in which two neutrons decay simultaneously into two protons and two electrons. For this search, detectors are fabricated from germanium material isotopically enriched to $\sim$90\% in the candidate double-beta decaying isotope $^{76}$Ge. Thus, the decay occurs inside the detector and the electrons are absorbed within $\mathcal{O}(mm)$, producing a point-like energy deposition. For $0\nu\beta\beta$~experiments it is hence of primary interest to discriminate single-site energy depositions (typical of the sought-after signal) from multiple-site energy depositions (typical of background events induced by multi-Compton scattering), as well as surface events (which, for geometrical reasons, are more likely to be external $\alpha$ or $\beta$ particles). The time development of the signal depends on the geometry of the detector, its electrode scheme, and its impurity concentration. Thus, an accurate modeling of the signal formation and evolution is an essential ingredient to design the detector and enhance the accuracy of the topology reconstruction and event discrimination. As an example, simulations have been extensively used in gamma-spectroscopy, such as in modeling the segmented detectors of AGATA and GRETA~\cite{agatagretinareview, agata:pulseshape}, while in $0\nu\beta\beta$~experiments they led to the Broad Energy Germanium (BEGe) and P-type Point Contact (PPC) detectors \cite{DusanBEGe, MJDExperienceWGermanium}. In the effort to increase the detector mass, new geometries such as the Inverted Coaxial (IC) \cite{RadfordCooperIC} have recently drawn increasing attention. In this new type of detector, the time needed to collect electrons and holes is much longer than in the aforementioned geometries. In this article we investigate the collective effects in a cluster of charge carriers and their impact on the signal formation in the detector geometries of interest for $0\nu\beta\beta$~searches.
Section \ref{sec:BasicModeling} summarizes the charge-carrier collection and signal formation for the detector geometries under consideration. Section \ref{sec:CollectiveEffects} describes collective effects in charge-carrier clusters, which include self-repulsion, thermal diffusion and velocity dispersion. Section \ref{sec:impact0nbb} discusses the impact of such effects on the signal and background discrimination in $0\nu\beta\beta$~searches, and Section \ref{sec:conclusion} finally discusses the results and puts them in the context of the future \textsc{Legend}~experiment. We performed comprehensive simulations of germanium detectors and validated them against the data acquired with a custom designed IC detector produced in collaboration with Baltic Scientific Instruments (BSI) and Helmholtz Research Center (Rossendorf). Its geometry is the one used as reference for this paper. Our work builds on the results of~\cite{Mertens}, which reports the first observation of such effects in PPC detectors and discusses how to accurately model them. Our simulations have been carried out with the \textsc{Mage}~\cite{MaGe} software framework based on \textsc{Geant-4}~\cite{geant4}, and a modified version of the SigGen software package~\cite{Radware}, which already included the modeling of the collective effects and was used in~\cite{Mertens}. More details on the simulations are given in \ref{app:sim}. \section{Charge-carrier collection and signal formation in germanium detectors} \label{sec:BasicModeling} \begin{figure*}[ht] \includegraphics[width=\textwidth]{fig1.png} \caption{Weighting field $E_\omega$ for a cross section of the three geometries used in current and future $0\nu\beta\beta$~experiments: (from left) PPC, BEGe and inverted coaxial. The thick black and gray lines are the p$^+$ and n$^+$ electrode, respectively. The yellow points are locations of an energy deposition, the white trajectories connecting them to the p$^+$ electrode are the drift paths of holes and those connecting them to the n$^+$ electrode are the drift paths of electrons.} \label{fig:Wfield} \end{figure*} When gamma-rays or charged particles interact within the germanium detector they release energy. About $10^6$ electron-hole pairs are created for each MeV released in the active detector volume. Once produced, the two kinds of carriers drift as two clusters in opposite directions following the electric field lines until they reach the electrodes. The signal induced by the motion of these charges can, to a first approximation, be modeled by the Shockley–Ramo theorem \cite{Shockley, Ramo}. The theorem states that the instantaneous current $I(t)$ induced at a given electrode by a drifting cluster of charge $q$ is given by \begin{equation} I(t) = q \, \vec{v} (\vec{x}(t)) \cdot \vec{E_\omega} (\vec{x}(t)) \label{eq:ShockleyRamo} \end{equation} where $\vec{v} (\vec{x}(t))$ is the instantaneous drift velocity and $\vec{E_\omega}(\vec{x}(t))$ is the weighting field at position $\vec{x}(t)$. The weighting field is defined as the electric field created by setting the considered electrode at 1 V, with all other electrodes grounded and all charges inside the device removed. Thus, the signal induced at the electrode is the product of the instantaneous drift velocity and the projection of the weighting field onto the direction of motion, weighted by the deposited charge.
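A minimal numerical sketch of Eq.~\ref{eq:ShockleyRamo} is given below, assuming a sampled drift path and a toy radial weighting field that roughly mimics a small central contact. The field function, the path and all numbers are illustrative placeholders, not the fields of the detectors studied here (those are computed by SigGen).
\begin{verbatim}
import numpy as np

def ramo_current(q, trajectory, dt, weighting_field):
    # I(t) = q * v(x(t)) . E_w(x(t)) along a sampled drift path
    v = np.gradient(trajectory, dt, axis=0)            # drift velocity
    e_w = np.apply_along_axis(weighting_field, 1, trajectory)
    return q * np.sum(v * e_w, axis=1)

def w_field(x, r0=1e-3):
    # toy weighting field ~ r0/|x|^2, peaked near a small central electrode
    r = np.linalg.norm(x)
    return r0 / r ** 2 * (x / r)

dt = 1e-9
path = np.linspace([0.02, 0.0, 0.0], [0.002, 0.0, 0.0], 200)  # toward contact
i_t = ramo_current(q=1.0, trajectory=path, dt=dt, weighting_field=w_field)
charge = np.cumsum(i_t) * dt   # integrated signal, as seen by a charge amplifier
# |i_t| grows sharply near the contact, giving the peak-like current structure
# (the overall sign depends on the adopted convention for carriers and readout)
\end{verbatim}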
Events induced by gamma-rays often result in multiple energy depositions that are well separated compared to the dimensions of the charge clusters. In this case, each cluster drifts independently of the others and the resulting signal is the superposition of the individual signals, each of them weighted by the charge in the corresponding cluster. Three illustrative HPGe detector geometries are analyzed in this article. These are the geometries used by the current and future $0\nu\beta\beta$~experiments: \textsc{Gerda}~\cite{Gerda:science}, \textsc{Majorana Demonstrator}~(\textsc{MJD}) \cite{MJD:PRL2019}, and \textsc{Legend}~\cite{Legend:Medex2017}. All of them are p-type detectors, with a Lithium-diffused n$^+$ electrode and a B-implanted p$^+$ electrode. The three detector types are shown in Fig.~\ref{fig:Wfield} along with the resulting weighting field and illustrative trajectories. The PPC detectors have a cylindrical shape and masses up to 1 kg. Their geometry is characterized by a small ($\sim$2 mm diameter) p$^+$ electrode on one of the flat surfaces, while the rest of that flat surface is passivated. The remaining surface of the detector is covered by the n$^+$ electrode. Electrons are collected on the n$^+$ electrode, which is kept at an operational voltage of a few kV, while holes are collected on the p$^+$ electrode, which is grounded and used to read out the signal. This geometry creates a weighting field that increases rapidly in the immediate vicinity of the p$^+$ electrode. This results in a characteristic peak-like structure in the current signal when the hole clusters approach the p$^+$ electrode. Compared to PPC detectors, the BEGe detectors are shorter but have a larger radius. The major difference between the two geometries is the structure of the electrodes: the p$^+$ electrode is larger for BEGe (up to $\sim$15 mm diameter) and surrounded by a passivated groove with typical depths of $\sim$3 mm. The BEGe detectors’ n$^+$ electrode extends down to the groove, wrapping around the crystal on all surfaces. This structure has a strong impact on the trajectories of the carriers, as it creates a \emph{funnel} effect \cite{Agostini_SignalModeling}: holes are pushed towards the center of the detector and then move to the p$^+$ electrode along a fixed path that is independent of their starting point (see central plot of Fig.~\ref{fig:Wfield}). Since that is the volume in which the weighting field is highest, according to Eq.~\ref{eq:ShockleyRamo}, the majority of the induced signals in a BEGe detector share the same maximum value of the current $I(t)$. The inverted coaxial detector has the same electrode structure as a BEGe, though it is about twice as long. In order to keep a high electric field throughout the whole volume, a hole is drilled on the side opposite to the p$^+$ electrode and constitutes part of the n$^+$ contact. It normally extends down to within 25-35 mm from the p$^+$ electrode. With the wrap-around n$^+$ electrode, the funneling is preserved and the trajectories converge in the region of high weighting field (see Fig.~\ref{fig:Wfield}). \section{Charge-carrier collective effects} \label{sec:CollectiveEffects} The modeling of the signal formation presented in the previous section does not account for the spatial extension of the cluster, which is $\mathcal{O}(mm)$ for a MeV energy deposition. It can, however, be extended to account for the finite dimensions of the cluster.
If we define $\vec{r}(t)$ as the distance of every charge in the cluster from the center of the distribution, the instantaneous signal induced at the electrode will be the integral of Eq.~\ref{eq:ShockleyRamo} over the spatial charge distribution $Q(\vec{r}(t))$ of the cluster: \begin{equation} \widetilde{I}(t) = \int d\vec{r} \, Q(\vec{r}(t)) I(t). \label{eq:extendedSR} \end{equation} If the electric field varies on scales similar to the cluster size, charges at opposite sides of the cluster will experience different forces (accelerations), leading to a deformation of the cluster during its drift towards the electrodes. Moreover, the stochastic diffusion and self-interaction of the charge carriers will progressively increase the size of the cluster during its motion. The diffusion consists of a random thermal motion of the carriers, while the self-interaction is the result of the Coulomb force. In this work, such processes are treated as collective effects~\cite{Radware}. This allows an analytical treatment and keeps the computational requirements at an affordable level. We compared this approximate collective description with a full multi-body simulation\footnote{We simulated the individual motion of 10000 charges in the field generated by the detector and the instantaneous configuration of the other charges.} and found that it does not introduce noticeable inaccuracies. In our collective treatment, we consider the effects of mutual repulsion and diffusion separately from those of acceleration, because the former act in all directions, while the latter breaks the spherical symmetry and acts exclusively in the direction of motion. The dynamics of drifting charges in the presence of mutual repulsion and diffusion can be treated assuming spherical symmetry, and is described by the continuity equation \cite{gatti}: \begin{equation} \frac{\partial^2 Q}{\partial r^2} - \frac{2}{r}\frac{\partial Q}{\partial r} - \frac{1}{D} \frac{\partial Q}{\partial t} - Q \frac{\partial Q}{\partial r}\frac{1}{V_T} \frac{1}{4\pi\epsilon r^2} = 0 \label{eq:contEq} \end{equation} where $D$ is the diffusion coefficient, $\epsilon$ the permittivity of germanium and $V_T$ the thermal voltage $V_T = k_B T/q$, with $q$ being the elementary charge. The general solution of Eq. \ref{eq:contEq} when the Coulomb repulsion term is neglected describes a Gaussian profile of the charge cluster, whose width is \begin{equation} \sigma_D = \sqrt{2Dt}. \label{eq:diff} \end{equation} When charges drift in an electric field, the diffusion coefficient $D$ has a longitudinal and a transverse component. Both are calculated in SigGen~\cite{Radware} in the respective direction, but only the longitudinal one is responsible for the deformation of the signal. As reported in~\cite{jacoboni}, this component decreases as the electric field strength increases. This implies that, with a sufficiently high impurity concentration, the effect of diffusion can be strongly limited (as stated also in~\cite{Mertens}). Neglecting the first two terms of Eq. \ref{eq:contEq} and considering only the Coulomb self-repulsion, we obtain a solution in which the charge distribution behaves like an expanding sphere of radius $\sigma_R$: \begin{equation} \sigma_R = \sqrt[3]{\frac{3 \mu q}{4\pi \epsilon} N t} \label{eq:rep} \end{equation} where $N$ is the number of charge carriers in the distribution and $\mu$ is the mobility of the carrier, which is related to the diffusion coefficient by the Einstein equation $D=\mu k_BT/q$. Both Eqs.
\ref{eq:diff} and \ref{eq:rep} describe a distribution which gets monotonically broader with time, with the difference that Eq. \ref{eq:diff} is completely determined by the detector properties, while Eq. \ref{eq:rep} depends on the deposited energy. The drift in the electric field of the detector, on the other hand, enlarges or shrinks the cluster, according to whether it experiences accelerations or decelerations. The modeling of this effect follows from basic kinematics and can be easily calculated for each time step $t_i$ as: \begin{equation} \sigma_A(t_{i+1}) = \sigma_A(t_{i}) \cdot \frac{v(t_{i+1})}{v(t_i)} \end{equation} It is clear that in the direction of motion there is a strong interplay between the three described effects, which can give rise to non-linear changes of the cluster size. \begin{figure*}[] \includegraphics[width=\textwidth]{fig2.pdf} \caption{Breakdown of the collective effects on a charge cluster. The top-left plot shows the drift velocity field of an IC detector with the drift path of the holes' cluster superimposed in brown, for an interaction location marked by the star. The cluster's drift velocity along the path is shown in the bottom-left plot. The evolution of the cluster's size and $\sigma_\tau$ is displayed in the top-right and bottom-right plot, respectively. The initial size of the cluster is 0.5 mm, the average for energy depositions of 1.6 MeV.} \label{fig:breakdown} \end{figure*} Fig. \ref{fig:breakdown} displays the contributions of the mentioned processes to the charge cluster deformation\footnote{The initial cluster size is given here as Full Width at Half Maximum, and it has been determined as a function of energy through Monte Carlo simulation. See details in~\ref{app:sim}.}. The top-left plot shows the drift velocity field on an IC detector cross section, with the trajectory of the holes superimposed in brown for an energy deposition at the position marked by the star. As the holes travel through the detector, they experience accelerations (decelerations) according to the electric field, stretching (shrinking) the cluster size in the direction of motion, as shown in the top-right panel (light blue curve). In the same plot, the broadening effects due to the described Coulomb and diffusion processes are shown by the yellow and green curves, respectively: as described by Eqs. \ref{eq:diff} and \ref{eq:rep}, their effect is a monotonic enlargement of the cluster size. Finally, the dark blue curve shows the evolution of the cluster dimensions when all effects act simultaneously. As anticipated, the total size is not just the simple sum of the three contributions, as they are not independent: an enlargement of the cluster size, for instance due to Coulomb or diffusion effects, emphasizes the differences in the drift velocity field between charges at the edges of the distribution, thus amplifying the effect of acceleration. This amplification effect has been tested with the full multi-body simulation mentioned above, in which we calculated the motion of every single charge induced by the field created by the detector, superimposed on the field created by the other charges in the cluster. That approach confirmed the evolution of the cluster size as modeled by the collective description presented above. In particular, it reproduces the amplification effect of acceleration and mutual repulsion, thus further confirming the modeling in SigGen.
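To make the interplay concrete, the following Python sketch evolves the cluster width along a toy velocity profile. It is a minimal combination scheme, not the SigGen implementation: per step, the acceleration term rescales the width by $v(t_{i+1})/v(t_i)$, diffusion adds $2D\,dt$ to $\sigma^2$ (Eq.~\ref{eq:diff}), and Coulomb repulsion adds its constant rate to $\sigma^3$ (Eq.~\ref{eq:rep}). The material constants are textbook-level estimates at liquid-nitrogen temperature, and the velocity profile is purely illustrative.
\begin{verbatim}
import numpy as np

V_T = 0.0067                  # thermal voltage k_B*T/q at ~78 K [V]
mu_h = 4.0e4                  # hole mobility [cm^2/(V s)], assumed constant
D = mu_h * V_T                # Einstein relation [cm^2/s]
eps = 16.0 * 8.85e-14         # permittivity of germanium [F/cm]
q_e = 1.6e-19                 # elementary charge [C]
N_c = 5.4e5                   # carriers for ~1.6 MeV (about 2.96 eV per pair)

def cluster_size(v, dt, sigma0):
    # combine acceleration, diffusion and self-repulsion step by step
    sigma, out = sigma0, [sigma0]
    rep_rate = 3.0 * mu_h * q_e * N_c / (4.0 * np.pi * eps)
    for i in range(len(v) - 1):
        sigma *= v[i + 1] / v[i]                           # stretch / shrink
        sigma = np.sqrt(sigma**2 + 2.0 * D * dt)           # diffusion
        sigma = (sigma**3 + rep_rate * dt) ** (1.0 / 3.0)  # Coulomb repulsion
        out.append(sigma)
    return np.array(out)

t = np.linspace(0.0, 1e-6, 500)                            # ~1 us drift [s]
v = 5e6 * (0.6 + 0.4 * np.cos(2.0 * np.pi * t / t[-1]))    # toy velocity [cm/s]
sigma_t = cluster_size(v, t[1] - t[0], sigma0=0.02)        # 0.02 cm ~ 0.5 mm FWHM
\end{verbatim}
Even this crude scheme reproduces the qualitative behavior discussed above: any broadening from diffusion or repulsion is subsequently amplified (or damped) by the velocity-ratio factor, so the total growth exceeds the sum of the individual contributions.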
The impact of the different collective effects on the signal formation can be characterized through the time spread of the cluster, which we define in the following as $\sigma_\tau(t)$. The evolution in time of this parameter is displayed in the bottom-right plot of Fig.~\ref{fig:breakdown}. The light blue curve shows that $\sigma_\tau$ is constant if only acceleration effects are considered. As the other effects are switched on, their interplay gives a total time spread which can be up to a factor of 5 larger than the initial value. The enlargement of the cluster size through the parameter $\sigma_\tau$ as a function of the interaction position is shown in Fig. \ref{fig:DeltaTau&Pulse} (top), separately for the three considered geometries. For PPC detectors, the maximum enlargement occurs for interactions in the corners, where $\sigma_\tau$ reaches about 15 ns. The corners are the part of the detector from which the hole drift path is the longest. For BEGe detectors the maximum is slightly larger, up to 20 ns for radii larger than 30 mm. For inverted coaxial detectors the effect is much stronger, up to a factor of 2, and it affects more than half of the detector volume. The impact on the signal shape is shown in the bottom row of Fig. \ref{fig:DeltaTau&Pulse}, where signals are shown with (light blue) and without (dark blue) the deformation caused by collective effects. The difference between the two cases is less than 0.5$\%$ of the signal amplitude in BEGe and PPC detectors (see green curve), but it is larger for inverted coaxials, where the maximum of the current signal is lowered by $\sim$2\% when collective effects are switched on. \begin{figure*} \includegraphics[width=\textwidth]{fig3.pdf} \caption{Top: values of the $\sigma_\tau$ parameter as a function of the interaction position, for the three geometries considered. Bottom: simulated signals for the interactions and drift paths indicated by the brown point and curve, with and without Collective Effects (CE). Higher values of $\sigma_\tau$, as in inverted coaxial detectors, imply lower values of the current $I(t)$.} \label{fig:DeltaTau&Pulse} \end{figure*} The collective effects described in this section are expected for all detector geometries. Their impact on the signal shape, however, will depend on the geometry and the impurity profile. In the second part of this paper, we will evaluate their impact on advanced event reconstruction techniques such as those used in $0\nu\beta\beta$~experiments. \section{Event discrimination in $0\nu\beta\beta$~experiments} \label{sec:impact0nbb} $0\nu\beta\beta$~experiments using HPGe detectors rely heavily on the analysis of the time structure of the signal in order to reconstruct the topology of the energy deposition and thus discriminate between $0\nu\beta\beta$~and background. This kind of analysis is commonly referred to as Pulse Shape Analysis (PSA). $0\nu\beta\beta$~events are characterized by a single energy deposition, while background can be generated by gamma-rays scattering multiple times within the detector, or by $\alpha$ and $\beta$ particles depositing energy next to the detector surface\footnote{These surface events generate peculiar pulse shapes, the recognition of which is beyond the scope of this work.}. PSA techniques are based on the recognition of a few specific features of the signal time evolution which allow for a discrimination between signal- and background-like events.
The effects discussed in the previous section have the net result of blurring these features and, consequently, of worsening the performance of any PSA technique. In this section we evaluate their impact on a particular PSA technique that is the standard in the field: the so-called $A/E$ method~\cite{DusanBEGe}. The $A/E$ technique is based on a single parameter, the maximum value of the current signal ($A$) normalized by the total deposited energy ($E$) (or $q$ in Eq. \ref{eq:ShockleyRamo}). In the case of a single energy deposition, the signal has a single-peak structure with amplitude $A$, which corresponds to the moment when the holes' cluster passes through the region of maximum weighting field. If the energy is deposited in multiple locations, multiple clusters are simultaneously created and the total signal is the superposition of the signals induced by the motion of each of them. Different clusters will reach the region of maximum weighting field at different times, creating a multiple-peak structure. Since the amplitude of each peak is proportional to the total charge in the cluster generating it, events with multiple energy depositions $E_i\propto q_i$ will, after normalization to the total charge $q$, have a lower $A/E$ value compared to single-site events in which all the energy is concentrated in a single cluster $E\propto \sum_i q_i$. More details are given in \ref{app:A/E}. The $A/E$ parameter is independent of the interaction position and its discrimination efficiency is constant throughout the whole detector volume. This is due to the fact that the holes approach the region of maximum weighting field along the same trajectory\footnote{This is true for BEGe and IC detectors. The funneling effect is not present in the PPCs, because for that geometry the weighting field at the p$^+$ electrode is spherical, hence the signal does not depend on the angle from which the holes arrive.}, independent of the original location where the cluster was created. Without considering the collective effects, the $A/E$ parameter is expected to have the same value for clusters with a given energy generated in most of the detector volume. The only exception is for interactions near the read-out electrode, for which the $A/E$ parameter is larger than usual because of the extra contribution of the electrons' cluster, which in this case moves in a region of strong electric and weighting field, so that its contribution to the signal shape is no longer negligible as in the rest of the detector. The uniformity of the $A/E$ parameter in the detector volume has been studied in detail in~\cite{Agostini_SignalModeling}. Collective effects depend on the interaction position (as shown by the $\sigma_\tau$ parameter in Fig. \ref{fig:DeltaTau&Pulse}), and this creates an $A/E$ dependence on the interaction position. Fig. \ref{fig:HPGe_ART} shows the value of the $A/E$ parameter for mono-energetic energy depositions simulated throughout the whole detector volume, considering the collective effects described in Sec.~\ref{sec:CollectiveEffects}. The $A/E$ value varies by a few percent between the corners and the center of the detector in the BEGe and PPC geometries. As already mentioned, the value is significantly amplified only in about 3\% of the detector volume around the p$^+$ electrode.
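The extraction of the $A/E$ parameter from a sampled charge trace reduces to a differentiation and a maximum search. The following Python sketch illustrates this, together with a toy comparison of a single-site and a multi-site pulse; the light smoothing mimics the finite bandwidth of the experimental analysis and is a common, not unique, choice, and the pulse shapes are idealized step functions rather than simulated detector signals.
\begin{verbatim}
import numpy as np

def a_over_e(charge_signal, dt, energy):
    # A/E: maximum of the current (differentiated, lightly smoothed charge
    # trace) normalized by the deposited energy
    current = np.gradient(charge_signal, dt)
    current = np.convolve(current, np.ones(5) / 5.0, mode="same")
    return np.max(current) / energy

# toy comparison: one step (single-site) vs. two separated half-steps
t = np.arange(0.0, 1e-6, 1e-9)
step = lambda t0: 1.0 / (1.0 + np.exp(-(t - t0) / 2e-8))
sse = step(5e-7)
mse = 0.5 * step(4e-7) + 0.5 * step(7e-7)
print(a_over_e(sse, 1e-9, 1.0) > a_over_e(mse, 1e-9, 1.0))  # True
\end{verbatim}
Splitting the deposited charge over two time-separated clusters halves each current peak while leaving the total energy unchanged, which is exactly why multi-site events fall below the single-site $A/E$ band.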
For inverted coaxial detectors, while the bottom half of the volume exhibits features similar to the BEGe geometry, the upper part shows a consistent 0.3\% reduction of the $A/E$ value. This reduction of the $A/E$ has been experimentally confirmed by studying the response of our prototype inverted coaxial detector to low-energy gamma-rays used to create well-localized energy depositions on different parts of the detector surface. Maximizing the detector volume is of primary importance for $0\nu\beta\beta$~experiments. However, the longer the collection path, the stronger the impact of these collective effects will be. In the following we evaluate the event-reconstruction performance of inverted coaxial detectors and discuss possible analysis techniques to correct for these collective effects. To quantify the performance, we focus on the acceptance of $0\nu\beta\beta$-like events and of typical backgrounds of the experiments. \begin{figure*} \includegraphics[width=\textwidth]{fig4.pdf} \caption{$A/E$ (top) and rise time (bottom) values for the three analyzed geometries. In PPC and BEGe detectors rise times range up to 600-800 ns, while for inverted coaxials they can be twice as large and saturate at high z-positions, where the threshold at 0.5\% is no longer a good approximation of the beginning of the charge collection. A correlation between $A/E$ and rise time is visible for the inverted coaxial detector. } \label{fig:HPGe_ART} \end{figure*} The event discrimination based on the $A/E$ parameter is calibrated using the Double Escape Peak (DEP) events from $^{208}$Tl as a proxy for $0\nu\beta\beta$~events, as both consist of a single energy deposition (for more details on the calibration of the analysis, we refer to \ref{app:A/E}). The $A/E$ distribution of DEP events is used to set a cut value which keeps 90\% of their total number. This value cannot be directly translated into a $0\nu\beta\beta$~acceptance, for two reasons. The first is that DEP and $0\nu\beta\beta$~events have a slightly different topology\footnote{DEP events consist of an electron and a positron sharing 1.6 MeV, while $0\nu\beta\beta$~events produce two electrons sharing 2 MeV. This changes the initial cluster size, as well as the bremsstrahlung probability.}. The second is that DEP events are concentrated in the corners, while $0\nu\beta\beta$~events are homogeneously distributed. In order to estimate the $0\nu\beta\beta$~acceptance, we performed a Monte Carlo simulation of the energy deposited in 300000 $0\nu\beta\beta$~and DEP events. The Monte Carlo simulation takes into account all the physical differences between the two classes of events and their spatial distribution within the detector. For each event, the total signal is computed using the modeling described in Secs.~\ref{sec:BasicModeling} and \ref{sec:CollectiveEffects} and analyzed to extract the $A/E$ parameter. From the $A/E$ distribution of DEP events, we set the cut value and applied it to the $0\nu\beta\beta$~population. This resulted in a final $0\nu\beta\beta$~acceptance of $(86.1\pm 0.1$(stat))\%, which is compatible with the typical values for BEGe detectors \cite{Gerda:science} (see Tab.~\ref{tab:SP}). Technical details on the Monte Carlo and pulse shape simulations, as well as on the signal processing, can be found in \ref{app:sim}.
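The calibration logic itself is a simple quantile computation, sketched below in Python under the simplifying assumption of Gaussian $A/E$ populations; the numbers are illustrative only and do not reproduce our simulated distributions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def dep_cut_and_acceptance(aoe_dep, aoe_signal, keep=0.90):
    # low-side cut at the (1 - keep) quantile of the DEP A/E distribution;
    # multi-site backgrounds populate low A/E values
    cut = np.quantile(aoe_dep, 1.0 - keep)
    return cut, np.mean(aoe_signal > cut)

# toy Gaussian A/E populations (illustrative numbers only)
aoe_dep = rng.normal(1.000, 0.005, 300_000)
aoe_0nbb = rng.normal(0.999, 0.005, 300_000)  # slightly shifted, as in the IC
cut, acc = dep_cut_and_acceptance(aoe_dep, aoe_0nbb)
print(f"cut = {cut:.4f}, acceptance = {acc:.3f}")
\end{verbatim}
With these toy numbers, the slightly shifted signal population already loses a few percent of acceptance relative to the 90\% kept for DEP, in qualitative agreement with the effect discussed above.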
From the Monte Carlo simulation of $^{208}$Tl, we also extracted the $A/E$ distributions of events from the $^{208}$Tl Full Energy Peak (FEP), the $^{208}$Tl Single Escape Peak (SEP), as well as from the Compton continuum (CC) of $^{208}$Tl and $^{214}$Bi, which constitute background at Q$_{\beta\beta}$. We applied the cut obtained from DEP events to these distributions and obtained survival fractions of $(5.1\pm0.3$)\% and $(7.4\pm0.1$)\% for SEP and FEP events, respectively, and of $(45.1\pm0.3$)\% and $(20.3\pm0.4$)\% for the Compton continuum at Q$_{\beta\beta}$ from $^{208}$Tl and $^{214}$Bi, respectively. These values, reported in Tab. \ref{tab:SP}, are in agreement with the typical theoretical values for BEGe detectors \cite{Agostini_SignalModeling}. \begin{table*} \begin{tabular}{cclclc|clclcl} \hline\noalign{\smallskip} & \multicolumn{5}{c}{\textbf{Simulations}} & \multicolumn{6}{c}{\textbf{Data}} \\ & \multicolumn{4}{c}{IC} & BEGe\cite{Agostini_SignalModeling, Gerda:science} & \multicolumn{4}{c}{IC (this work)} & \multicolumn{2}{c}{BEGe \cite{Agostini_SignalModeling, DusanBEGe} } \\ Event class & \multicolumn{2}{c}{\emph{Standard}} & \multicolumn{2}{c}{\emph{RT corr}} & \emph{Standard} & \multicolumn{2}{c}{\emph{Standard}} & \multicolumn{2}{c}{\emph{RT corr}}& \multicolumn{2}{c}{\emph{Standard}} \\ \noalign{\smallskip}\hline\noalign{\smallskip} $^{208}$Tl DEP & 90.00 & (8) & 90.08 & (8) & 90 \quad (1) & 90.1 & (8) & 90.1 & (8) & 90 & (1)\\ $^{208}$Tl SEP & 5.1 & (3) & 5.8 & (3) & 8 \,\ \quad (1) & 5.0 & (3) & 5.3 & (3) & 5.5 & (6) \\ $^{208}$Tl FEP & 7.4 & (1) & 8.1 & (1) & 12 \quad (2) & 7.64 & (5) & 7.92 & (5) & 7.3 & (4) \\ CC @Q$_{\beta\beta}$ ($^{208}$Tl)& 45.1 &(3) & 46.7 &(3) & 42 \quad (3) & 32.3 &(2)& 33.1 & (2) & 34 &(1)\\ CC @Q$_{\beta\beta}$ ($^{214}$Bi)& 20.3 &(4) & 21.8 &(4) & -- & \multicolumn{2}{c}{--} & \multicolumn{2}{c}{--} & 21 & (3) \\ \noalign{\smallskip}\hline\noalign{\smallskip} $0\nu\beta\beta$ & 86.07 & (6) & 85.47 &(6) & 88\quad (2) &\multicolumn{2}{c}{--} & \multicolumn{2}{c}{--} & \multicolumn{2}{c}{--} \\ \noalign{\smallskip}\hline \end{tabular} \caption{Percentage of events classified as single-site for different event samples and detectors, taken from simulations and experimental data. For inverted coaxial detectors, the results are given both before (\emph{Standard}) and after a correction based on the rise time (\emph{RT corr}).} \label{tab:SP} \end{table*} As pointed out above, the impact of the collective effects is correlated with the time needed to collect the hole cluster. Following the proposal of \cite{commDavid}, we tested a correction of the $A/E$ parameter based on the reconstructed collection time of the signals, in order to restore the position independence. In this work we reconstruct such a quantity by taking the time between two arbitrary thresholds on the signal, i.e. what is called the rise time\footnote{Normally, the thresholds are set on the signal which is experimentally accessible, i.e. the output of the charge sensitive pre-amplifier. That is the charge signal $V(t)$, which is the integral of the current signal $I(t)$.}. Noise conditions can prevent an accurate determination of the start time for thresholds below 0.5\% at the energies of interest for the $0\nu\beta\beta$~search.
Hence, for this work we refer to the rise time as the time between 0.5\% and 90\% of the signal development\footnote{Other techniques, based on the convolution of the signal with a well-tuned impulse response function, could allow lower thresholds, such as 0.1\% of the signal amplitude.}. A map of the mean rise time as a function of the interaction position within the detector is shown in Fig. \ref{fig:HPGe_ART} for the three geometries considered. The rise time and $A/E$ values are correlated in the inverted coaxial geometry. This is shown explicitly in Fig. \ref{fig:RTvsAoE} for DEP (\ref{subfig:RTvsA/E_DEP}) and $0\nu\beta\beta$~(\ref{subfig:RTvsA/E_0nbb}) events. Both plots suggest that a linear correction could be used to align the $A/E$ values in the bottom and top parts of the detector volume. This double-peak structure was first reported in \cite{DomulaHult, comellatoPisa}. Our work connects its origin to the collective effects and to the spatial distribution of DEP events within the detector. Indeed, the configuration of the inverted coaxial detector creates one region in the top and one in the bottom part of the detector in which rise time and $A/E$ saturate to a limiting value, which is therefore more strongly represented than the others. This effect is even more pronounced for DEP events, which are more likely to occur at the detector edges. \begin{figure*} \centering \subfloat[][DEP events from data (filled colored contour) and simulations (gray contour lines) ]{\includegraphics[width=.48\textwidth]{fig5a.pdf}\label{subfig:RTvsA/E_DEP}} \quad \subfloat[][$0\nu\beta\beta$~events from simulations ]{\includegraphics[width=.48\textwidth]{fig5b.pdf}\label{subfig:RTvsA/E_0nbb}} \caption{Distribution of the $A/E$ and rise time for (\ref{subfig:RTvsA/E_DEP}) DEP events and (\ref{subfig:RTvsA/E_0nbb}) $0\nu\beta\beta$~events. The distributions are shown for experimental data (color maps) and simulated data (contour lines).} \label{fig:RTvsAoE} \end{figure*} Motivated by the correlation shown in Fig. \ref{fig:RTvsAoE}, we explored the impact of a first-order linear correction of the $A/E$ value based on the rise time of each event. The $A/E$ maps before and after such a correction are shown in Fig. \ref{fig:AoE_w-wo_corr}. The linear correction reduces the difference among $A/E$ values: the volume that exhibits an $A/E$ value of $(1.000 \pm 0.002)$ increases from 71\% before correction to 89\% after. At the same time, it creates a bulk volume where the $A/E$ values are lowered by almost 0.5\%. This is due to the interplay between the collective effects, which combine in such a way that the cluster deformation (hence $A/E$) is not uniquely associated with the length of the drift path. \begin{figure*} \includegraphics[width=\textwidth]{fig6.pdf} \caption{$A/E$ maps from Monte Carlo $0\nu\beta\beta$~events. The left plot shows the values of $A/E$ normalized according to the energy correction (see \ref{app:A/E}) and the right plot shows the values after rise time correction.} \label{fig:AoE_w-wo_corr} \end{figure*} In order to determine whether applying the rise time correction is advantageous, we tested it on the simulations of $^{208}$Tl and $0\nu\beta\beta$. The results are reported in the second column of Tab. \ref{tab:SP}. The survival fraction of $0\nu\beta\beta$~events decreases after the rise time correction from $(86.1\pm 0.1$)\% to $(85.5\pm 0.1$)\%.
In terms of background, the rise time correction increases the survival fraction of events at Q$_{\beta\beta}$ by $(1.5\pm0.3)$\%. The correction does not improve the overall efficiencies, but it reduces the volume dependence of the PSA performance, possibly reducing the systematic uncertainties of the experiment. It might become more and more relevant as the detector volume keeps increasing. The distribution of the $A/E$ and rise time values from experimental data is shown as the coloured filled contour of Fig. \ref{subfig:RTvsA/E_DEP}, in comparison with simulations, represented by the gray contour lines. The 0.3\% displacement in $A/E$ between the two blobs is well reproduced by our work. This is not the case if collective effects are not included. The excess in data at low values of $A/E$ is expected, as DEP events cluster in the corners, where a fraction of the events occurs in a transition layer in which there is no electric field and the charge carriers move because of diffusion. This effect is not included in our simulation. The rise time is systematically underestimated by $\sim30$ ns in our simulation. This disagreement does not affect the conclusions of our work and could in principle be reduced by tuning the unknown parameters of the crystal, such as the impurity profile along the symmetry axis or the hole mobility. Experimental data for $^{208}$Tl have been collected using a $^{228}$Th source (a progenitor of $^{208}$Tl, details in \ref{app:Th}) and used to extract the survival fractions of the different classes of events, both before and after the rise time correction. The numbers, reported in Tab. \ref{tab:SP}, show an agreement within $0.5\%$ with the simulations for SEP and FEP events. Some tension appears when comparing the survival fractions of the Compton continuum at Q$_{\beta\beta}$. This can be traced back to inaccuracies in the positioning of the source: the distance between the radioactive source and the detector changes the fraction of multiple-site events from cascades of gammas (this was also observed in \cite{Agostini_SignalModeling}). This does not affect the populations of SEP and FEP events, since for them a statistical subtraction of the side-bands is performed (details in \ref{app:A/E}). The impact of the rise time correction on data, even if not statistically significant, reflects what is found with simulations, namely that it increases the acceptance of FEP and SEP events, as well as of the background at Q$_{\beta\beta}$. In summary, the modeling developed here reproduces the $A/E$ results within 0.2\%, and hence its systematic uncertainties are smaller than the impact of the collective effects that we wanted to study.
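For reference, the two ingredients of this correction can be sketched in a few lines of Python: a threshold-based rise time matching the definition above, and the removal of the linear $A/E$-versus-rise-time trend. The sketch assumes that the charge is fully collected at the end of the sampled window (used for the normalization) and keeps the mean $A/E$ unchanged; it is a minimal outline, not the exact procedure of our analysis pipeline.
\begin{verbatim}
import numpy as np

def rise_time(charge_signal, dt, lo=0.005, hi=0.90):
    # time between lo and hi fractions of the final amplitude; assumes the
    # charge is fully collected at the end of the sampled window
    s = charge_signal / charge_signal[-1]
    return (np.argmax(s >= hi) - np.argmax(s >= lo)) * dt

def rt_corrected_aoe(aoe, rt):
    # first-order correction: remove the linear A/E-vs-rise-time trend,
    # keeping the mean A/E unchanged
    slope, _ = np.polyfit(rt, aoe, 1)
    return aoe - slope * (rt - np.mean(rt))
\end{verbatim}
In practice the trend would be fitted on a calibration sample (e.g. DEP events) and then applied to all events, so that the correction does not absorb genuine multi-site features of the physics sample.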
This proved to be the case both using the standard $A/E$ analysis, and implementing a correction based on the reconstruction of the drift path. As detector volumes keep increasing, the impact of collective effects on $A/E$ might become stronger~\cite{comellatoPisa}. Moreover, the background composition at Q$_{\beta\beta}$ will change, too, for different detector geometries. Under such conditions, it is meaningful to compare detector performances at the same $0\nu\beta\beta$~acceptance. This could be used in the future to fix the $A/E$ cut on DEP events. A visual representation of the $0\nu\beta\beta$~acceptance as a function of the acceptance of DEP events is displayed in Fig.~\ref{fig:GWD6022_SPDEPvs0nbb}, both before and after rise time correction. No appreciable difference was observed when the true drift time (extracted from the simulations) was used for the correction. \begin{figure} \includegraphics[width=0.5\textwidth]{fig7.pdf} \caption{Acceptance of $0\nu\beta\beta$~events as a function of the DEP acceptance, in the case of no correction on $A/E$ (blue curve), or after rise time (green curve) and drift time (yellow curve) correction.} \label{fig:GWD6022_SPDEPvs0nbb} \end{figure} As planned by \textsc{Legend}, inverted coaxial detectors will be deployed in environments which are more challenging than a vacuum cryostat and exhibit different electronics noise conditions. In this work we explored the impact of a factor 5 higher noise level on the pulse shape discrimination performance. This yields (for a cut at 90\% DEP acceptance) an increase in the $0\nu\beta\beta$~acceptance of 3\%, but at the same time an increase of 5\% in the background events surviving the $A/E$ cut at Q$_{\beta\beta}$. This is compatible with values of other BEGe detectors already in use in \textsc{Gerda}~\cite{Gerda:science}. We also explored the performance of inverted coaxial detectors with lengths in the range $8-9$ cm and determined that it is still compatible with the results presented here. This fact, together with the other results of this work, confirms the inverted coaxial detectors as a high-performance design for the search for neutrinoless $\beta\beta$ decay. \begin{acknowledgements} We are very grateful to David Radford, who developed SigGen as an open source project. SigGen is the software that we used to model the HPGe detector signal, and it already included the modeling of the collective effects that we used to study the performance of our three detector geometries. We are also thankful to D. Radford for many suggestions and enlightening discussions during this work, as well as for his help during the preparation of this manuscript. We are also thankful to all the members of the GERDA and LEGEND collaborations for their valuable feedback. This work has been supported in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 786430 - GemX) and by the SFB1258 funded by the Deutsche Forschungsgemeinschaft (DFG). \end{acknowledgements}
\section{Introduction} Geometric frustration can prevent magnetic ordering despite the presence of a sizable magnetic exchange interaction. In insulating systems this leads to exotic ground states like spin ices \cite{Bramwell2001Spin}, spin glasses \cite{Mydosh1993Spin} or spin liquids \cite{Balents2010Spin}. The existing theoretical concepts to describe these states involve localized magnetic moments and usually their nearest and next-nearest neighbor exchange interactions, without considering the charge degrees of freedom \cite{Ramirez1994Strongly}. In metals the situation is different due to the presence of conduction electrons coupling to the local moments. Firstly, especially in $f$-electron systems, the indirect long-range Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction means that a description in terms of nearest and next-nearest neighbor exchange is in general not sufficient. In $4f$-electron systems partial frustration is found in a few materials, where one part of the magnetic moments forms long-range order while the other part remains disordered down to the lowest temperatures. Examples are systems like ${R}$InCu$_4$ with $R = $ Gd, Tb, Dy, Ho, Er and Tm \cite{Nakamura1993Anomalous,Fritsch2004Spin,Fritsch2006Correlation} or CePdAl to be presented here \cite{Doenni1996Geometrically}. Secondly, due to the Kondo effect local moments can be screened by the conduction electrons, leading to a non-magnetic ground state. Magnetic frustration and the Kondo effect can in this respect be regarded as antithetic: the Kondo effect, yielding a delocalization of the magnetic moments due to virtual transitions of $4f$ electrons to the Fermi level, is not beneficial for the formation of a frustrated state. On the other hand, magnetic exchange interactions between the local moments can result in a breakdown of Kondo screening \cite{Senthil2004Weak}. Hence, the frustration parameter, the ratio between Curie-Weiss and N\'{e}el temperature as defined for insulating systems, is no longer useful in the presence of the Kondo effect. CePdAl is a partially frustrated Kondo lattice with a Kondo temperature $T^{\ast} \approx 6\,\si{K}$ \cite{Doenni1996Geometrically,Goto2002Field,Woitschach2013Characteristic}. Therefore it provides the possibility to investigate the interplay between magnetic frustration and Kondo physics. Here we present measurements of the magnetization, the electrical resistivity and the specific heat, as well as experimental data from neutron scattering experiments, indicating that the frustrated moments in CePdAl remain fluctuating down to the lowest temperatures \cite{Oyamada2008Ordering} and affect the long-range order and magnetic excitations of the non-frustrated moments. \begin{figure} \begin{center} \includegraphics[width=14pc]{MdurchBvsT.pdf}\hspace{4pc} \begin{minipage}[b]{14pc} \caption{DC susceptibility $\chi = M/B$ versus temperature $T$ in an external magnetic field $B = 100$~mT aligned parallel (blue circles) and perpendicular (red triangles) to the crystallographic $c$ axis. The maximum of $\chi(T)$ indicates antiferromagnetic order, while the maximum of $d\chi/dT$ yields the N\'{e}el temperature $T_N$ \cite{Fisher1962Relation}. \label{fig:chi}} \end{minipage} \end{center} \end{figure} CePdAl crystallizes in the hexagonal ZrNiAl structure. The magnetic moments are located at the cerium sites, which form distorted kagom\'{e} planes stacked exactly on top of each other. At $T_{\rm N} = 2.7\,\si{K}$ antiferromagnetic order sets in.
It is an Ising system with the magnetic moments aligned along the $c$ axis, as shown by the susceptibility measurements presented in Fig.~\ref{fig:chi}. These data are in good agreement with previously published susceptibility data by Isikawa et al. \cite{Isikawa1996Magnetocrystalline}, who attributed the magnetocrystalline anisotropy to the effect of the crystal electric field. The magnetic structure derived from powder neutron diffraction with a wave vector $\bm{q} = \left(\frac{1}{2},0,\frac{1}{3}+\tau^{\ast}\right)$ \cite{Doenni1996Geometrically} is visualized in Fig.~\ref{fig:structure}. Within the basal plane, two thirds of the cerium moments form ferromagnetic chains, which are coupled antiferromagnetically and separated by the frustrated moments. Thus, in contrast to the crystallographic structure with a single cerium site, the magnetic structure of CePdAl below $T_{\rm N}$ features three inequivalent cerium sites. \begin{figure} \begin{center} \includegraphics[width=14pc]{frustPlanes.pdf}\hspace{4pc} \begin{minipage}[b]{14pc} \caption{\label{fig:structure} Magnetic structure of CePdAl: two thirds of the Ce moments form corrugated antiferromagnetic planes in the $ac$ plane with a sine-like modulation along the $c$ axis, which are separated by planes of frustrated moments (shown in yellow). The sketch neglects the small incommensuration $\tau^{\ast}$ along the $c$ direction.} \end{minipage} \end{center} \end{figure} Due to the geometrical frustration present in the $ab$ plane of this compound, one third of the magnetic moments does not participate in the long-range magnetic order \cite{Doenni1996Geometrically}. These moments form a rectangular lattice in the $ac$ plane (shown in yellow in Fig.~\ref{fig:structure}), separated from each other by antiferromagnetically ordered corrugated planes (grey shaded in Fig.~\ref{fig:structure}) \cite{Fritsch2014Approaching}. \section{Experimental Details} A single crystal of CePdAl was grown by the Czochralski method \cite{Abell1989Handbook,Ames}. A Physical Properties Measurement System (PPMS, Quantum Design) was used to obtain the specific heat and resistivity data in the temperature range between $350\,\si{mK}$ and room temperature. Specific-heat and resistivity measurements at lower temperatures were performed in a dilution refrigerator. The electrical resistivity was measured with a standard four-contact method employing an LR700 resistance bridge; for the specific-heat measurements a standard heat-pulse technique was used. The magnetization measurements at $T = 0.5$ and $5$~K were performed in a Magnetic Properties Measurement System (MPMS, Quantum Design) equipped with a SQUID magnetometer in magnetic fields up to $B = 7\,\si{T}$. The magnetization data at $T = 1.6$~K and the dc susceptibility between $1.6\,\si{K}$ and room temperature were measured in a Vibrating Sample Magnetometer (VSM, Oxford Instruments) in magnetic fields up to $B = 12$~T. Neutron scattering was performed at the instrument RESI (FRM2, Munich) with a neutron wavelength $\lambda =1.03\,\si{{\AA}}$ in a $^3$He cryostat. \section{Results and Discussion} \begin{figure} \begin{center} \includegraphics[width=14pc]{widerstand.pdf}\hspace{4pc}\includegraphics[width=14pc]{rhovsB.pdf} \caption{\label{fig:widerstand} (a) Resistivity $\rho$ vs. temperature $T$ of CePdAl with the current $i$ applied along the $c$ axis in zero field (blue symbols) and an external magnetic field aligned parallel to the $c$ axis with $B = 3\,\si{T}$ (purple symbols) and $B = 7\,\si{T}$ (green symbols).
The red line is a guide to the eye, showing the logarithmic increase of the resistivity related to the Kondo effect. (b) Field dependence of the resistivity at $T = 60$~mK with the magnetic field aligned parallel to the $c$ axis and the current parallel (red circles) and perpendicular (blue triangles) to the applied field.} \end{center} \end{figure} The electrical resistivity of CePdAl is shown in Fig.~\ref{fig:widerstand}~(a) in the temperature range between $40\,\si{mK}$ and $300\,\si{K}$ in zero field (blue symbols) and an external magnetic field applied parallel to the $c$ axis with $B = 3\,\si{T}$ (purple symbols) and $B = 7\,\si{T}$ (green symbols). In zero field a logarithmic increase of $\rho(T)$ (illustrated by the red line) is observed when lowering the temperature, characteristic of the Kondo effect. At $T_{max} \approx 4\,\si{K}$ the resistivity passes through a maximum and then drops steeply due to the onset of lattice coherence and antiferromagnetic order. A minimum around $T \approx 30\,\si{K}$ in the thermopower \cite{Huo2002Thermoelectric} confirms the presence of the Kondo effect in this material. An external magnetic field of $B = 3\,\si{T}$ lowers and broadens the maximum in $\rho(T)$ due to the lower N\'{e}el temperature $T_{\rm N} = 1.7\,\si{K}$, while the low-$T$ resistivity increases by roughly $30\%$ in $B = 3$~T. In an external magnetic field $B = 7\,\si{T}$ the resistivity at the lowest attainable temperature drops to $7.5\,\mu\Omega\,\si{cm}$, which together with the room-temperature resistivity of $126\,\mu\Omega\,\si{cm}$ yields a residual-resistivity ratio $RRR = 17$, indicating a satisfactory sample quality. Of course one could argue that magnetic impurities would be suppressed in an external magnetic field as well; however, this would usually happen at significantly lower fields. The non-monotonic low-$T$ $\rho(T,B)$ is confirmed when considering the field dependence of the isothermal resistivity at the lowest temperature, as presented in Fig.~\ref{fig:widerstand}~(b). The resistivity increases with increasing field, with peaks at $3.4$~T and $3.9$~T (the latter being a shoulder for $i//B$) indicating magnetic transitions, in good agreement with the $\rho(B)$ data by Goto {\it et al.} \cite{Goto2002Field}, who also suggested that these transitions indicate the successive alignment, with increasing magnetic field, of the Ce moments that are frustrated in zero field. \begin{figure} \begin{center} \includegraphics[width=\linewidth]{VF541CTundS.pdf} \end{center} \caption{\label{fig:spezHeat} (a) Specific heat of CePdAl in the representation $C/T$ vs $T$ in zero field (red circles) and in an external magnetic field along the easy axis $B = 7\,\si{T}$ (purple triangles) and $B = 12\,\si{T}$ (orange squares). The black line is a fit to the zero-field data as described in the text. The blue line is a fit with the Kondo model after ref.~\cite{Schotte1975Interpretation}, see text for details. (b)~Corresponding magnetic entropies $S(T)$. The blue line is a fit with the Kondo model as used in panel (a). (c)~Enlarged presentation of the zero-field specific heat and the corresponding fit. (d)~Temperatures $T_{max}$ of the maxima of the Schottky anomaly in CePdAl vs magnetic field $B$.} \end{figure} In order to obtain insight into the magnetic low-energy states, we analyzed the specific heat of CePdAl in zero field down to very low temperatures.
Figure~\ref{fig:spezHeat}~(a) shows the $4f$-electron contribution $C_{4f}$ to the specific heat of CePdAl in zero field, $B = 7\,\si{T}$ and $B = 12\,\si{T}$ with $B // c$. $C_{4f}$ was obtained by subtracting the specific heat of the non-magnetic parent compound LuPdAl as well as the nuclear contribution $\propto 1/T^2$ due to the electric quadrupolar moments of Pd and Al calculated from high-field data \cite{Sakai2016Signature}. The black line, better visible in panel~(c) of Fig.~\ref{fig:spezHeat}, is a fit of the zero-field data with \begin{equation} C_{4f} = \gamma T + b\cdot T^2 \exp\left(-\frac{\Delta}{k_B T}\right) \label{eq:Cfit}. \end{equation} Here $\gamma T$ represents the electronic linear contribution to the specific heat with $\gamma = 250\,\si{mJ / mol K^2}$, which we attribute to the frustrated Ce moments. The second term $b\cdot T^2 \exp\left(-\frac{\Delta}{k_B T}\right)$ suggests the presence of gapped spin waves with $\Delta/k_B = 920\,\si{mK}$ in addition to the low-energy excitations due to frustrated Ce moments also found in previous NMR measurements \cite{Oyamada2008Ordering}. The observed $T^2$ behavior is reminiscent of two-dimensional antiferromagnetic spin waves \cite{Ramirez1990Strong}, in agreement with the corrugated antiferromagnetic planes visualized in Fig.~\ref{fig:structure}. The energy levels of the Ce are split into three doublets due to the local $m2m$ symmetry of the Ce ion. The crystal field levels were found at $\Delta/k_{\mathrm B} = 244$ and $510\,\si{K}$ by inelastic neutron scattering \cite{Woitschach2013Characteristic}. The ground state wave function is an almost pure $\left|\pm\frac{5}{2}\right>$ doublet state, in line with the Ising anisotropy shown in the magnetization and susceptibility (see Figs.~\ref{fig:chi} and \ref{fig:magnetization}). Thus at low temperatures we can assume an effective spin-$\frac{1}{2}$ system with an enhanced $g$-factor $g= 4.2$ to account for $J = \frac{5}{2}$ \cite{Abragam2012Electron33}. The magnetic entropy calculated from the specific heat is shown in Fig.~\ref{fig:spezHeat}~(b). For the zero-field measurement, less than $50\%$ of the expected entropy of $R \ln 2$ is gained at the N\'{e}el temperature $T_{\rm N} = 2.7\,\si{K}$, which is another hint of the frustration present in CePdAl. \begin{figure} \begin{center} \includegraphics[width=14pc]{VF541M.pdf}\hspace{4pc} \begin{minipage}[b]{14pc} \caption{\label{fig:magnetization} Magnetization of CePdAl at $T = 1.6$~K with the magnetic field aligned parallel to the $\left[001\right]$, $\left[110\right]$ and $\left[1\overline{1}0\right]$ directions. The red curve is the polycrystalline average of all three directions. The black line shows the magnetization $M$ of CePdAl at $T = 0.5$~K with the magnetic field aligned parallel to the $\left[001\right]$ direction.} \end{minipage} \end{center} \end{figure} As demonstrated by the specific heat data in $B = 7$ and $12$~T, a Schottky anomaly evolves in magnetic fields. Fig.~\ref{fig:spezHeat}~(d) shows the temperature $T_{max}$ of the maxima in $C_{4f}$ of this Schottky anomaly in magnetic fields larger than $6$~T. The linear field dependence confirms that it is indeed a Schottky anomaly.
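For a two-level system the position of the Schottky maximum is tied linearly to the level splitting: writing $x = \Delta E/(k_B T)$, the molar specific heat is $C/R = x^2 e^x/(1+e^x)^2$, whose maximum lies at $x \approx 2.40$, i.e., $k_B\cdot T_{max} \approx 0.42\cdot \Delta E$. A minimal numerical check of this generic relation, which is employed below, reads:
\begin{verbatim}
import numpy as np

# Schottky specific heat of a two-level system, C/R as a function of
# x = dE / (k_B * T).
x = np.linspace(0.1, 10.0, 100001)
c = x**2 * np.exp(x) / (1.0 + np.exp(x))**2

x_max = x[np.argmax(c)]   # position of the maximum, x_max ~ 2.40
print(1.0 / x_max)        # k_B * T_max / dE ~ 0.417, i.e. the 0.42 used below
\end{verbatim}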
From the slope of the data we estimate the Zeeman splitting of the levels and the involved magnetic moment, the latter being $1.77\,\mu_{\mathrm{B}}$, which is in fair agreement with the ordered moment of $1.6\,\mu_{\mathrm{B}}$ found by D\"{o}nni {\it et al.} \cite{Doenni1996Geometrically} and compares well to $1.83\,\mu_\mathrm{B}$ found by Prok\v{e}s {\it et al.} \cite{Prokes2006Magnetic}. The specific heat in $B = 12\,\si{T}$ can be described very well by the single-ion resonance-level Kondo model \cite{Schotte1975Interpretation}. The solid blue lines in Fig.~\ref{fig:spezHeat}~(a) and (b) are fits with the following resulting parameters: Kondo temperature $T^{\ast} = 3.23 \pm 0.03\,\si{K}$ and Zeeman splitting $\Delta E/k_\mathrm{B} = 22.9\,\si{K}$, where the latter nearly perfectly agrees with $T_{max} = 9.5$~K when considering $k_B\cdot T_{max} = 0.42\cdot \Delta E$. In lower fields the emerging correlation effects compromise this fit by producing an additional background, which, however, does not shift $T_{max}$. Thus, although the magnetization data (see Fig.~\ref{fig:magnetization} and ref.~\cite{Goto2002Field}) suggest that in an external field $B = 7\,\si{T}$ aligned parallel to the $c$ axis the system is already in the paramagnetic state, the specific heat in $B = 7$~T cannot be described with the single-ion resonance-level Kondo model. The presence of correlations between the magnetic moments, even in regimes beyond the long-range ordered state, is in line with the recent results of Prok\v{e}s {\it et al.} \cite{Prokes2015Probing}, who found that the three magnetic cerium sites still carry different magnetic moments at $T = 100\,\si{mK}$, $p = 0.85\,\si{GPa}$ and in $B = 9\,\si{T}$. From the Sommerfeld coefficient $\gamma$ and the value of the magnetic susceptibility $\chi$ at low temperatures the Wilson ratio can be calculated \cite{Hewson1997Kondo}. The Sommerfeld coefficient was determined from the value of $C_{4f}/T$ extrapolated to zero temperature to be $\gamma = 250$, $134$ and $57$~mJ/mol~K$^2$ in $B = 0$, $7$ and $12$~T. The magnetization, as presented in Fig.~\ref{fig:magnetization}, was measured at $T = 1.6$~K in magnetic fields up to $B = 12$~T aligned along the easy axis $\left[001\right]$ and perpendicular along the hard axes $\left[110\right]$ and $\left[1\overline{1}0\right]$. The saturation moment at $B = 12\,\si{T}$ is $\mu = 1.44\,\mu_B/$f.u., roughly in line with the ordered moment $\mu \approx 1.6 \,\mu_B/$f.u. found in previous neutron diffraction experiments \cite{Doenni1996Geometrically} and our estimates from the specific heat data discussed above. In a high magnetic field with all moments aligned, a saturation moment $m = 2.1\,\mu_B$ is expected for free Ce$^{3+}$. The significant reduction of the measured moment is a further fingerprint of the Kondo screening effects in CePdAl. For $B \perp c$ no transitions are found and $M/B$ is constant in field. From ref.~\cite{Isikawa1996Magnetocrystalline} we know that $M/B$ is temperature-independent below $T \approx 3$~K as well, so we can assume $\chi = dM/dB = 0.00712\,\mu_B\si{/mol\,T}$ for all temperatures below $3$~K and fields applied along the hard axis. For $B = 0$ we use our data taken at $T = 0.5$~K, which are in agreement with the data reported previously by Goto {\it et al.} \cite{Goto2002Field}.
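Anticipating the polycrystalline averages combined in the next paragraph, the zero-field Wilson ratio can be checked with a minimal numerical sketch; it assumes the standard effective spin-$\frac{1}{2}$ form $R_W = 4\pi^2 k_B^2 \chi / (3 g^2 \mu_B^2 \gamma)$ with the enhanced $g$-factor $g = 4.2$ introduced above, which is our choice of convention rather than a prescription from the data.
\begin{verbatim}
import numpy as np
from scipy.constants import k as k_B, N_A, physical_constants

mu_B = physical_constants['Bohr magneton'][0]   # J/T

g = 4.2                     # enhanced g-factor of the effective spin-1/2
gamma = 0.250               # Sommerfeld coefficient in B = 0, J/(mol K^2)
chi = 0.039 * mu_B * N_A    # molar susceptibility in B = 0, J/(T^2 mol)

R_W = 4 * np.pi**2 * k_B**2 * chi / (3 * g**2 * mu_B**2 * gamma)
print(round(R_W, 2))        # ~1.44, the zero-field value quoted below
\end{verbatim}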
For the magnetic field aligned parallel to the easy axis the data at $T = 0.5$ and $1.6\,\si{K}$ converge above $B = 6$~T, thus we can use the data from our measurement at $T = 1.6$~K for $B = 7$ and $12$~T. For the calculation of the Wilson ratio we take the polycrystalline average of these data. The resulting values are $\chi = 0.039$, $0.019$ and $0.0078$~$\mu_B$/mol T in $B = 0$, $7$ and $12$~T. The Wilson ratios calculated therefrom are $R_W = 1.44$, $1.18$ and $1.17$ in $B = 0$, $7$ and $12$~T, which is close to the value for free electrons in all fields. In a phenomenological ``two-component model'', which was for example used previously for the analysis of CeCu$_{6-x}$Au$_x$ and CeAl$_2$, the specific heat at low temperatures is separated into two parts: on the one hand magnetic excitations of the antiferromagnetic state, rapidly vanishing with decreasing temperature, and on the other hand heavy fermion excitations \cite{Schlager1993Magnetic,Bredl1978Specific}. In analogy one could argue that in zero field only the one third of frustrated, disordered moments contributes to $\gamma$, which would result in a corrected Sommerfeld coefficient $\gamma' = 0.750$~J/mol~K$^2$ and a corrected Wilson ratio $R'_W = 4.32$. These values would qualify CePdAl as a strongly correlated heavy fermion system. In higher fields of $7$ and $12$~T this argument is not valid anymore; here the magnetic moments contribute equally to $\gamma$, resulting in a Wilson ratio as expected for normal metals. \begin{figure} \begin{center} \includegraphics[width=14pc]{cepdal_neutron_fig2.pdf}\hspace{4pc} \begin{minipage}[b]{14pc} \caption{\label{fig:neutrons} Scans along $\left[001\right]$ across the magnetic Bragg peak at $(0.5~0~0.35)$ at $T = 2.1$ and $0.4$~K.} \end{minipage} \end{center}\end{figure} In order to take a closer look at the magnetic ground state of CePdAl we performed elastic neutron scattering experiments. Figure~\ref{fig:neutrons} displays scans across a magnetic Bragg position of CePdAl at $T = 2.1$ and $0.4$~K. The width of the magnetic Bragg peak increases towards lower temperatures, even well below $T_{\rm N}$. This measured width corresponds to antiferromagnetic domains with a size of the order of 200 {\AA} at $T = 0.4$\,K. Such a behavior is in marked contrast to usual magnetically ordered systems with true long-range order at the lowest temperatures. There must exist an effect limiting the size of the antiferromagnetic domains towards lower temperatures. A possible origin of this behavior lies in the magnetically frustrated moments in CePdAl. Although no other long-range order is detected, one can assume that the frustrated moments, bearing a non-zero moment, couple to the ordered moments, perturbing the long-range ordered structure and resulting in a reduction of the domain size. \section{Conclusion} Our results show clearly that magnetic frustration and the Kondo effect are present in CePdAl simultaneously. Our data suggest a bipartite system: on the one hand the antiferromagnetically ordered part, on the other hand the frustrated part. However, these subsystems are not completely independent, as evidenced by the broadening of the magnetic Bragg peaks towards lower temperatures. Furthermore, the missing entropy in magnetic fields above the magnetic transitions points to the presence of magnetic correlations and possibly magnetic frustration in regions beyond the long-range ordered regime.
Overall, CePdAl offers an excellent opportunity to explore the interplay between magnetic frustration and Kondo physics. \section*{Acknowledgements} This work was supported by the Deutsche Forschungsgemeinschaft through FOR 960, the Helmholtz Association through VI 521 and a JSPS Postdoctoral Fellowship for Research Abroad. \section*{References} \providecommand{\newblock}{}
\section{Introduction} Future generations of cellular networks will provide groundbreaking network capacity, in conjunction with a significantly lower delay and ubiquitous coverage~\cite{Ericss_rep}. \gls{5g} and beyond deployments will support new mobile broadband use cases, e.g., Augmented Reality (AR) and \gls{vr}, and expand into new vertical markets, enabling an unprecedented degree of automation in industrial scenarios, \gls{v2x} communications, remote medical surgery and smart electrical grids. To this end, the \gls{3gpp} has introduced various technological advancements with the specifications of the \gls{5g} \gls{ran} and \gls{cn}, namely \gls{nr} and \gls{5gc}~\cite{3gpp_38_300}. In particular, \gls{nr} features a user and control plane split, a flexible \gls{ofdm} frame structure, and the support for \gls{mmwave} communications, while the \gls{cn} introduces virtualization and slicing~\cite{yousaf2017nfv}. Specifically, the \gls{mmwave} band, i.e., the portion of the spectrum between 30 GHz and 300 GHz, represents the major technological enabler toward the Gbit/s capacity target. These frequencies are characterized by the availability of vast chunks of contiguous and currently unused spectrum, in stark contrast with the crowded sub-6 GHz frequencies. However, \glspl{mmwave} exhibit unfavorable propagation characteristics such as high isotropic losses and a marked susceptibility to blockages and signal attenuation~\cite{khan2011mmwave, rangan2014millimeter}. These issues can be partially mitigated using beamforming through large antenna arrays, thanks to the small wavelengths and advances in low-power \gls{cmos} RF circuits~\cite{hemadeh2017millimeter}; nevertheless, their introduction alone is not enough to meet the high service availability requirement. Therefore, \gls{mmwave} networks also need densification, to decrease the average distance between mobile terminals and base stations and improve the average \gls{sinr}. The theoretical effectiveness of this technique is well understood~\cite{gomez2017capacity}; however, achieving dense \gls{5g} deployments is extremely challenging from a practical point of view. Specifically, providing a fiber backhaul among base stations and toward the \gls{cn} is deemed economically impractical, even more so in the initial \gls{5g} deployments~\cite{polese2020integrated}. Recently, wireless backhaul solutions for \gls{5g} networks have emerged as a viable strategy toward cost-effective, dense \gls{mmwave} deployments. Notably, the \gls{3gpp} has promoted \gls{iab}~\cite{3gpp_38_874}, i.e., a wireless backhaul architecture which dynamically splits the overall system bandwidth for backhaul and access purposes. \gls{iab} has been integrated in the latest release of the \gls{3gpp} \gls{nr} specifications. Prior research has highlighted that \gls{iab} represents a cost-performance trade-off~\cite{stoch_geom2, polese2020integrated}, as base stations need to multiplex access and backhaul resources, and as the wireless backhaul at \glspl{mmwave} is less reliable than a fiber connection. In particular, \gls{iab} networks may suffer from excessive buffering (and, consequently, high latency and low throughput) when a suboptimal partition of access and backhaul resources is selected, thus hampering the benefits that the high-bandwidth \gls{mmwave} links introduce~\cite{polese2020integrated,polese2018end}. Therefore, it is fundamental to solve these non-trivial challenges to enable a smooth integration of \gls{iab} in \gls{5g} and beyond deployments.
\subsection{Contributions} This article tackles the access and backhaul partitioning problem by proposing an optimal, semi-centralized resource allocation scheme for \gls{3gpp} \gls{iab} networks, based on the \gls{mwm} problem on graphs. It receives periodic \gls{kpi} reports from the nodes of the \gls{iab} deployment, constructs a spanning tree that represents the deployment, and uses a simplified, low-complexity version of the \gls{mwm} to partition the links between access and backhaul. After a feedback step, each node can then schedule the resources at a subframe level among the connected devices. To the best of our knowledge, this is the first \gls{mwm}-based resource allocation framework for \gls{3gpp} \gls{iab} networks at \glspl{mmwave}, designed with three goals, i.e., it is flexible, integrated with the \gls{3gpp} specifications, and has low complexity. The flexibility makes it possible to easily adapt the resource allocation strategy to different requirements, use cases, and classes of traffic for \gls{5g} networks. We achieve this by developing a generic optimization algorithm, which identifies with a configurable periodicity the access and backhaul partition that optimizes a certain utility function. The selection of the utility function prioritizes the optimization of different metrics, e.g., throughput or latency, which in turn can be mapped to different classes of traffic. To achieve the second goal, i.e., the compliance with the \gls{3gpp} \gls{iab} specifications, the resource allocation framework relies only on information that can actually be exchanged and reported in a \gls{3gpp} deployment. In this regard, we also review the latest updates related to the \gls{3gpp} \gls{iab} standardization activities. Finally, the algorithm operates with a low complexity, i.e., we propose a version of the \gls{mwm} algorithm that can be applied on spanning trees with linear complexity in the number of nodes in the network infrastructure, and demonstrate its equivalence to the generic (and more complex) \gls{mwm}. Additionally, the proposed framework also relies on a feedback exchange that is linear in the number of base stations, and is thus decoupled from the number of users. Furthermore, we evaluate the performance of the proposed scheme with an end-to-end, full-stack system-level simulation, using the ns-3 mmWave module~\cite{mezzavilla2018end} and its \gls{iab} extension~\cite{polese2018end}. This represents the first evaluation of an optimized resource allocation scheme for \gls{iab} with a simulator that is based on a \gls{3gpp}-compliant protocol stack, uses \gls{3gpp} channel models, and integrates realistic applications and transport protocols. The extended performance evaluation highlights how the proposed scheme improves the throughput of a diverse set of applications, with a 5-fold increase for the worst-case users, with different packet sizes and transport protocols, while decreasing the latency and buffering at intermediate nodes by up to 25 times for the smallest packet sizes. \subsection{State of the Art} This section reviews relevant research on resource allocation in multi-hop wireless networks, deployed through either \gls{iab} or other wireless mesh solutions~\cite{gambiroza2004end}. The literature adopts different approaches to model and solve the resource allocation problem. The first, discussed in~\cite{qualcomm1, qualcomm2, kulkarni2018max, lei2020deep, rasekh2015interference, bilal2014time, cruz2003optimal}, is based on conventional optimization techniques.
Specifically, the authors of~\cite{qualcomm1} present a simple and thus tractable system model and find the minimal number of \glspl{gnb} featuring a wired backhaul that are needed to sustain a given traffic load. Their work is further extended in~\cite{qualcomm2}, which provides an analysis of the performance benefits introduced by additional, fiber-less \glspl{gnb}. In~\cite{kulkarni2018max}, the mobile network is modeled as a noise-limited, $k$-ring deployment. This model is then used to obtain closed-form expressions for the max-min rates achieved by \glspl{ue} in the network. Moreover,~\cite{lei2020deep} proposes a system model which leads to an NP-hard optimization problem, even though it considers single-hop backhaul networks only, and uses deep \gls{rl} to reduce its computational complexity. In~\cite{rasekh2015interference}, the joint routing and resource allocation problem is tackled via a \gls{lp} technique. Notably, this work assumes that data can be transmitted (received) toward (from) multiple nodes at the same time. Similarly, the authors of~\cite{bilal2014time} formulate a \gls{tdd}, multi-hop resource allocation optimization problem which leverages the directionality of \gls{mmwave} antennas, albeit in the context of \gls{wpans}. Since this problem is also NP-hard, a sub-optimal solution is found. Finally,~\cite{cruz2003optimal} focuses on joint link scheduling, routing and power allocation in multi-hop wireless networks. As in the previous cases, the resulting optimization problem is not tractable; in this instance the obstacle is overcome by studying the dual problem via an iterative approach. The second approach relies on stochastic geometry to model \gls{iab} networks~\cite{stoch_geom1, stoch_geom2}. Specifically,~\cite{stoch_geom1} determines the rate coverage probability of \gls{iab} networks and compares different access/backhaul resource partitioning strategies. Similarly,~\cite{stoch_geom2} provides a comparison of orthogonal and integrated resource allocation policies, although limited to single-hop wireless networks. Another significant body of literature leverages \glspl{mc} to study \gls{iab} networks; some of these works can be interpreted as a direct application of such theory~\cite{singh2018throughput, ji2012throughput}, while others~\cite{vu2018path,garcia2015analysis, gomez2016optimal, gomez2019optimal} exploit a more complex framework. The papers which belong to the former class are based on the pioneering work of~\cite{tassiulas1990stability}, which inspects the stability of generic multi-hop wireless networks and formulates a throughput-maximizing algorithm known as \textit{back-pressure}. In particular,~\cite{singh2018throughput} focuses on the optimization of the timely throughput, i.e., it takes into account that packets usually have an arrival deadline. This problem is then addressed by formulating an \gls{mdp}, leading to a distributed resource allocation algorithm. Similarly,~\cite{ji2012throughput} proposes an algorithm that also targets throughput optimality but, contrary to the back-pressure algorithm, manages to avoid the need for per-flow information. On the other hand, the body of literature which belongs to the latter class uses the \gls{mc}-derived \gls{num} framework first introduced in~\cite{kelly1997charging} and~\cite{kelly1998rate}. Specifically, the authors of~\cite{vu2018path} focus on satisfying the \gls{urllc} \gls{qos} requirements by jointly optimizing routing and resource allocation.
Then, the problem is solved using both convex optimization and \gls{rl} techniques. In~\cite{garcia2015analysis}, an in-depth analysis of a \gls{mmwave}, multi-hop wireless system is presented, proposing and comparing three different interference frameworks, under the assumption of a dynamic \gls{tdd} system. This work is extended in~\cite{gomez2016optimal} and~\cite{gomez2019optimal}, which consider respectively a \gls{sdma} and a \gls{mu}-\gls{mimo} capable system. Finally, only a small portion of the literature~\cite{polese2018end, polese2018iab, polese2020integrated} analyzes the end-to-end performance of \gls{iab} networks. Specifically, the authors of~\cite{polese2018end} extend the ns-3 \gls{mmwave} module, introducing realistic \gls{iab} functionalities which are then used to characterize the benefit of deploying wireless relays in \gls{mmwave} networks. Their work is extended in~\cite{polese2018iab}, where path selection policies are formulated and their impact on the system performance is inspected. A further end-to-end analysis of \gls{iab} networks is carried out in~\cite{polese2020integrated}, providing insights into the potentials of this technology and the related open research challenges. In conclusion, the literature features algorithms relying on varying degrees of assumptions about the network topology and the knowledge of the system. Furthermore, most of the aforementioned studies lack an end-to-end, full-stack system-level analysis of the proposed solution. Conversely, this paper proposes a semi-centralized resource allocation scheme, which also has a low complexity, both computationally and in terms of required feedback. Moreover, we provide considerations on how our proposed solution can be implemented and deployed in standard-compliant \gls{3gpp} \gls{iab} networks, and compare such a solution to the state of the art with an end-to-end, realistic performance analysis. \subsection{Paper structure} The remainder of this paper is organized as follows. Sec.~\ref{Sec:Sys-model} describes our assumptions and the system model. Then, Sec.~\ref{Sec:scheme_main} presents a novel scheme for resource partitioning in \gls{mmwave} \gls{iab} networks, along with considerations on how it can be implemented in \gls{3gpp} NR. Finally, Sec.~\ref{Sec:perf_eval} describes the performance evaluation results and Sec.~\ref{Sec:conc} concludes this paper. \section{IAB networks} \label{Sec:Sys-model} The following paragraphs identify the characteristics and constraints of \gls{mmwave} \gls{iab}, according to the \gls{3gpp} design guidelines presented in~\cite{3gpp_38_874} and the specifications of~\cite{3gpp_38_174}. \subsection{Network topology} In general, an \gls{iab} network is a deployment where a percentage of \glspl{gnb} (i.e., the \gls{iab}-nodes) use wireless backhaul connections to connect to a few \glspl{gnb} (i.e., the \gls{iab}-donors) which feature a wired connection to the core network, as can be seen in Fig.~\ref{Fig:IAB_top_not}. Moreover, these deployments exhibit a \textit{multi-hop} topology where a strict parent-child relation is present. The parent can be either the \gls{iab}-donor itself or an \gls{iab}-node; the children can be either \glspl{ue} or downstream \gls{iab}-nodes. In~\cite{3gpp_38_874}, no a priori limit on the number of backhaul hops is introduced. As a consequence, \gls{3gpp} argues that \gls{iab} protocols should provide sufficient flexibility with respect to the number of backhaul hops.
Moreover, the \gls{si} on \gls{iab}~\cite{3gpp_38_874} highlights the support for both the topologies depicted in Fig.~\ref{Fig:IAB_topology}, i.e., \gls{st} and \gls{dag} \gls{iab}. Clearly, the former exhibits less complexity but, at the same time, poses some limits in terms of network performance: the possible presence of obstacles may result in a service interruption, due to the unique backhaul route available to the \glspl{ue}. On the other hand, a \gls{dag} topology offers routing redundancy, which can be used not only to decrease the probability of experiencing a ``topological blockage'', but also for load balancing purposes. \begin{figure}[tbp] \centering \subfloat[\gls{st} and \gls{dag} topologies.\label{Fig:IAB_topology}]{ \includegraphics[width=0.46\linewidth]{IAB_Topology.pdf}} \hfill \subfloat[System model notation.\label{Fig:IAB_notation}]{ \includegraphics[width=0.46\linewidth]{IAB_Notation.pdf}} \caption{Comparison of the \gls{iab} network topologies analyzed in~\cite{3gpp_38_874} and related notation.} \label{Fig:IAB_top_not} \vspace{-.6cm} \end{figure} \subsection{Multiple access schemes and scheduling} An in-band, dynamic partitioning of the access and backhaul spectrum resources is currently preferred by \gls{3gpp}~\cite{3gpp_38_874, 3gpp_38_174}, together with half-duplex operations of the \gls{iab}-nodes. Moreover, most of the literature suggests that \gls{5g} \gls{mmwave} systems will operate in a \gls{tdd} fashion~\cite{khan2011mmwave, dutta2017frame}. This choice is mainly driven by the stringent latency requirements which the next generation of mobile networks will be required to support, and by the usage of analog or hybrid beamforming. The usage of \gls{fdd}, in conjunction with the presence of large chunks of bandwidth, would lead to severe resource under-utilization and make channel estimation more difficult. Based on these considerations, the system model features a \gls{tdd}, \gls{tdma}-based scheduling where the access/backhaul interfaces are multiplexed in a half-duplex manner. It follows that, at any given time instant, no node of the \gls{iab} network can be simultaneously involved in more than one transmission or reception. In particular, \gls{iab}-nodes cannot schedule time and frequency resources which are already allocated by their parent for backhaul communications which involve them. Finally, the introduction of resource coordination mechanisms and related signaling is explicitly supported in the \gls{iab} specification drafts~\cite{3gpp_38_874, 3gpp_38_174}. Nevertheless, these solutions must reuse as much as possible the available NR specifications and require at most minimal changes to the Rel.15 \gls{5gc} and \gls{nr}. \subsection{System model} \label{Subsec:sys_model} According to these assumptions and referring to Fig.~\ref{Fig:IAB_notation}, a generic \gls{iab} network can be modeled as a directed graph $\mathcal{G} = \{\mathcal{N}, \mathcal{E} \}$, where the set of nodes $\mathcal{N} \equiv \{ n_1, \, n_2, \, \ldots \, n_{\vert \mathcal{N} \vert } \}$ comprises the \gls{iab}-donor, the various \gls{iab}-nodes and the \glspl{ue}.
Accordingly, the set of directed edges $\mathcal{E} \equiv \{ e_{1} , \, e_{2}, \, \ldots \, \allowbreak e_{\vert \mathcal{E} \vert} \} \equiv \{ e_{n_j \to n_k} \}_{j, k} $, where the edge $e_{n_j \to n_k}$ originates at the parent node $n_j$ and terminates at the child $n_k$, comprises all the active cell attachments, either of mobile terminals to a \gls{gnb} or of \gls{iab}-nodes towards their parent node. Since the goal of this paper is to study backhaul/access resource partitioning policies, this generic model can actually be simplified: in fact, all the \glspl{ue} connected to a given \gls{gnb} can be represented by a single node in $\mathcal{G}$ without any loss of generality. Similarly, the same holds true for their links toward the serving \gls{gnb}, which can then be represented by a single edge. Furthermore, this work focuses on \gls{st} topologies only. Nevertheless, the proposed framework can be easily extended to the case of a \gls{dag} \gls{iab} network: it suffices to introduce a preliminary routing step where an \gls{st} $\mathcal{G^{'}}$ is computed from the actual \gls{dag} network $\mathcal{G}$, for example by using the strategies presented in~\cite{polese2018iab}. This process can be repeated at each allocation instance, effectively removing any constraint on the network topology. We define as \textit{feasible schedule} any set of links $\mathcal{E'} \subseteq \mathcal{E}$ such that no two of them share a common vertex, i.e., for any pair of distinct edges $e_{n_j \to n_k}, \, e_{n_l \to n_m} \in \mathcal{E'}$ it holds that $\{ n_j, n_k \} \cap \{ n_l, n_m \} = \emptyset$. Let then $f_u$ be an \textit{additive} utility map, namely, a function such that the overall utility experienced by the system when scheduling edges $e_1$ and $e_2$ satisfies $f_u (e_1, e_2) = f_u (e_1) + f_u(e_2)$. Let also $\mathcal{W} \equiv \{ w_1, \, w_2, \, \ldots \, w_{\vert \mathcal{E} \vert } \}$ be the set of positive weights whose generic entry $w_j$ represents the utility which is obtained when scheduling the $j$-th edge, namely, $w_j \equiv f_u (e_j)$. Then, the overall utility of the system is $\mathcal{U} \equiv \sum_{e_k \, \in \, \mathcal{E'}} f_u(e_k) = \sum_{e_k \, \in \, \mathcal{E'}} w_k $. The goal is to find the feasible set $\mathcal{E'}^{*}$ which maximizes the overall utility, i.e., $\underset{\mathcal{E'}}{\mathrm{argmax}} \,\, \mathcal{U} $. In computer science, this task is typically referred to as the \textit{Maximum Weighted Matching} problem~\cite{Korte2002}. Finding the \gls{mwm} of a given graph is, in the general case, not trivial from a computational point of view. In fact, the fastest known \gls{mwm} algorithm for generic graphs has a complexity of \bigO{\vert V \vert \vert E \vert + \vert V \vert^2 \log{\vert V \vert}}~\cite{1990Gabow}, posing serious limitations to the suitability of such an algorithm to \gls{5g} and beyond networks, which target a connection density of 1 million devices per km\textsuperscript{2}. However, we argue that under the aforementioned assumptions on the system model, which restrict the network to an \gls{st} topology, it is possible to design an \gls{mwm}-based centralized resource partitioning framework which exhibits linear complexity with respect to the network size and which, as a result, is able to satisfy the scalability requirements highlighted by \gls{3gpp} in~\cite{3gpp_38_874}.
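Before presenting our tree-specific algorithm, it is useful to fix a reference solution. A minimal sketch is given below: it builds a toy \gls{st} (node $0$ is the \gls{iab}-donor, $1$ and $2$ are \gls{iab}-nodes, $3$--$6$ are the ``representative'' \gls{ue} nodes of the reduced topology) with purely illustrative weights, and computes its \gls{mwm} with the general-purpose, asymptotically more expensive solver shipped with the networkx library.
\begin{verbatim}
import networkx as nx

# Toy spanning tree with illustrative utilities as edge weights.
edges = [(0, 1, 5.0), (0, 2, 3.0), (1, 3, 4.0),
         (1, 4, 2.0), (2, 5, 6.0), (2, 6, 1.0)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# General-graph maximum weighted matching, used here as a baseline.
matching = nx.max_weight_matching(G)
utility = sum(G[u][v]["weight"] for u, v in matching)

print(sorted(tuple(sorted(e)) for e in matching), utility)
# -> [(0, 1), (2, 5)] 11.0
\end{verbatim}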
\section{Semi-centralized resource allocation scheme for IAB networks} \label{Sec:scheme_main} This section presents an \gls{mwm} algorithm for \gls{st} topologies (Sec.~\ref{Sec:T-MWM}), an efficient, \gls{mwm}-based centralized resource partitioning framework for \gls{iab} networks (Sec.~\ref{Sec:Cent-scheme}) and some considerations about its implementation (Sec.~\ref{Sec:ns3-impl}). \subsection{\gls{mwm} for \gls{st} graphs} \label{Sec:T-MWM} As the first of our contributions, we present an algorithm, henceforth called \texttt{T-MWM}, which computes the \gls{mwm} of an \gls{st} in linear time. In particular, \texttt{T-MWM} is a bottom-up algorithm which, upon receiving as input a weighted \gls{st} $\mathcal{G}$ described by its edge list $\mathbf{E}$ and the corresponding weight list $\mathbf{W}$, produces as output a set of active edges $\mathbf{E^*}$ which is an \gls{mwm} of $\mathcal{G}$. Furthermore, $\mathbf{E}$ is from now on assumed to exhibit the following invariant: each \gls{iab} parent precedes its children in the list, so that a single reverse scan of $\mathbf{E}$ visits all children before their parents and no recursion is needed. This ordering is automatically obtained, as each \gls{iab} child connects after its parent and is thus added to the list in a subsequent position. Nevertheless, this assumption can be easily relaxed, albeit at the cost of losing the recursion-free, bottom-up design as a side effect. As can be seen in Alg.~\ref{Alg:tmwm}, the \texttt{T-MWM} algorithm basically performs two traversals. During the first, bottom-up one, the utility yielded by the various nodes and their children is computed. Then, during the second, top-down traversal, this knowledge is used for computing an \gls{mwm} of the network. The correctness of this procedure can be easily proved. Consider the sub-tree of $\mathcal{G}$ whose root is represented by the generic node $n_k$. Let also $\mathbf{F}(n_k)$ be the utility yielded by an \gls{mwm} of such sub-tree which activates a link originating from $n_k$, and $\mathbf{G}(n_k)$, conversely, the utility provided when the \gls{mwm} contains no links which originate from such node. Then, the correctness of the first phase of the algorithm, namely the computation of the $\mathbf{F}$ and $\mathbf{G}$ vectors, follows directly from the following lemmas. \begin{algorithm}[tbp!] \small \caption{Tree-Maximum Weighted Matching} \label{Alg:tmwm} \hspace*{\algorithmicindent} \textbf{Input:} A weighted \gls{st} $\mathcal{G}$ encoded by a list $\mathbf{E}$, which associates each node in $\mathcal{G}$ to its edges, and the corresponding weights list $\mathbf{W}$. \\ \hspace*{\algorithmicindent} \textbf{Output:} An \gls{mwm} $\mathbf{E^*}$ of $\mathcal{G}$. \begin{algorithmic}[1] \Procedure{\texttt{T-MWM}}{$\mathbf{E}, \mathbf{W}$} \State $\mathbf{F} \gets \mathbf{0}$; $\mathbf{G} \gets \mathbf{0}$ \Comment{Initialize the utility vectors to zero vectors} \State $\mathbf{E^*} \gets \{ \}$ \Comment{Initialize the set of active edges as empty} \For{each node $n_k \in \mathbf{E}$} \Comment{\parbox[t]{.43\linewidth}{Iterate over the various nodes, in descending order w.r.t.
their depth in $\mathcal{G}$ (children before their parents)}} \State $maxUtil \gets 0$; $\mathbf{maxEdge}(n_k) \gets \{ \}$ \For{each edge $e_{k,j} \equiv (n_k, n_j)$} \Comment{Iterate over its edges} \State $\mathbf{G}(n_k) \gets \mathbf{G}(n_k) + \mathbf{F}(n_j)$ \State $currUtil \gets \mathbf{W}(e_{k, j}) + \mathbf{G}(n_j) - \mathbf{F}(n_j)$ \If{$currUtil > maxUtil$} \State $maxUtil \gets currUtil$; $\mathbf{maxEdge}(n_k) \gets e_{k, j}$ \EndIf \EndFor\label{edgesFor} \State $\mathbf{F}(n_k) \gets \mathbf{G}(n_k) + maxUtil$ \Comment{\parbox[t]{.3\linewidth}{Finalize $\mathbf{F}(n_k)$ once $\mathbf{G}(n_k)$ is complete}} \EndFor\label{nodesFor} \For{each node $n_k \in \mathbf{E}$} \Comment{\parbox[t]{.43\linewidth}{Iterate over the various nodes, in ascending order w.r.t. their depth in $\mathcal{G}$ (parents before their children)}} \If { $\mathbf{F}(n_k) \geq \mathbf{G}(n_k)$ \textbf{and} $\mathbf{maxEdge}(n_k) \neq \{ \}$} \State $\mathbf{E^*} \gets \mathbf{E^*} \cup \{ \mathbf{maxEdge}(n_k) \}$ \State $\mathbf{F}(n_m) \gets -1$, with $n_m$ the child of $\mathbf{maxEdge}(n_k)$ \Comment{\parbox[t]{.3\linewidth}{Ensure child does not get activated multiple times}} \EndIf\label{activeIf} \EndFor\label{edgesFor2} \State \textbf{return} $\mathbf{E^*}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{lem} \label{lemma_ch_par} Let $n_k$ be an arbitrary internal node of $\mathcal{G}$ and $\{ n_j \}_k$ be the set of its children. Then, any \gls{mwm} of $\mathcal{G}$ must contain an edge which has as one of its vertices either $n_k$ or an element of $\{ n_j \}_k$. \end{lem} \begin{IEEEproof} Suppose there exists an \gls{mwm} $\mathbf{E^*}$ of $\mathcal{G}$ which does not contain any such edge. Then the set $\hat{\mathbf{E}}^* \equiv \mathbf{E^*} \cup \{ e_{n_k \to n_m} \}$, where $ e_{n_k \to n_m} $ is the edge from $n_k$ to its (arbitrary) child $n_m$, is still a feasible activation set, since no edge in $\mathbf{E^*}$ shares such vertices. Furthermore, since the weights are positive we have that $f_u (\hat{\mathbf{E}}^*) \equiv f_u (\mathbf{E^*}) + \mathbf{W} (e_{n_k \to n_m}) > f_u (\mathbf{E^*}) $, which is clearly a contradiction. \end{IEEEproof} \begin{lem} \label{lemma_utils} For any internal node $n_k$: \[ \begin{cases} \mathbf{G}(n_k) = \sum\limits_{ \{ n_j \}_k} \mathbf{F}(n_j) \\ \mathbf{F}(n_k) = \sum\limits_{ \{ n_j \}_k } \mathbf{F}(n_j) + \underset{\{ n_j \}_k }{\max} \{ \mathbf{W} ( e_{n_k \to n_j} ) + \mathbf{G}(n_j) - \mathbf{F}(n_j)\} \end{cases} \] where the set $\{ n_j \}_k$ comprises all the children of $n_k$. Conversely, for leaf nodes $\mathbf{F}(n_l) \equiv \mathbf{G}(n_l) \equiv 0 $. \end{lem} \begin{IEEEproof} This lemma can be proved by induction over the height $h_k$ of the sub-tree corresponding to node $n_k$. The base case is $h_k = 0$, i.e., when $n_k$ is a leaf node; in this case both $\mathbf{F}(n_k)$ and $\mathbf{G}(n_k)$ are trivially zero, since no links exhibit $n_k$ as parent node and the sub-tree of $\mathcal{G}$ which originates in $n_k$ consists of $n_k$ only. Consider then the node $n_k$ having a sub-tree of height $h_k > 0$. From Lemma~\ref{lemma_ch_par} we know that either $n_k$ or (at least) one of its children must be included in any \gls{mwm}. If on the one hand we do not activate any edge which originates from $n_k$, then no constraints on the children's activation are introduced. Therefore, in this case the maximum utility is obtained when \textit{all} the children are active, hence $\mathbf{G}(n_k) = \sum_{ \{ n_j \}_k} \mathbf{F}(n_j)$. On the other hand, if we activate an edge from $n_k$ to one of its children $n_m$ then no additional edges which originate from the latter can be added to $\mathbf{E^*}$.
It follows that the utility obtained in this instance reads: \[ \sum_{ \{ n_j \neq n_m\}_k } \mathbf{F}(n_j) + \mathbf{W} ( e_{n_k \to n_m} ) + \mathbf{G}(n_m) \] and can be rewritten as: \[ \sum_{ \{ n_j \}_k } \mathbf{F}(n_j) + \mathbf{W} ( e_{n_k \to n_m} ) + \mathbf{G}(n_m) - \mathbf{F}(n_m) \] Such utility is maximized when $n_m$ is chosen as $ \underset{ \{ n_j \}_k }{\mathrm{argmax}} \, \{ \mathbf{W} ( e_{n_k \to n_j} ) + \mathbf{G}(n_j) - \mathbf{F}(n_j) \}$, yielding: \[ \mathbf{F}(n_k) = \sum\limits_{ \{ n_j \}_k } \mathbf{F}(n_j) + \underset{\{ n_j \}_k }{\max} \, \{ \mathbf{W} ( e_{n_k \to n_j} ) + \mathbf{G}(n_j) - \mathbf{F}(n_j)\} \qedhere \] \end{IEEEproof} Finally, the validity of the last phase of \texttt{T-MWM} follows from Lemma~\ref{lemma_active_edges}. \begin{lem} \label{lemma_active_edges} Given an \gls{st} $\mathcal{G}$, let $\mathcal{G}_k$ be its sub-tree of root $n_k$. Then, an \gls{mwm} of $\mathcal{G}$ can be computed by performing, in a recursive fashion and starting from the root, the following procedure: \begin{enumerate} \item If $ \, \mathbf{F}(n_k) \geq \mathbf{G}(n_k) $, add to $\mathbf{E^*}$ the edge from $n_k$ to $n_m$, where the latter is defined as $n_m \equiv \underset{ \{ n_j \}_k }{\mathrm{argmax}} \, \{ \mathbf{W} ( e_{n_k \to n_j} ) + \mathbf{G}(n_j) - \mathbf{F}(n_j) \}$. Then, repeat recursively on all the sub-trees corresponding to $n_k$'s children $\{ n_j\}_k \, \mathrm{s.t.} \, n_j \neq n_m$ and on the children of $n_m$ itself. \item If $ \, \mathbf{F}(n_k) < \mathbf{G}(n_k) $, repeat recursively on all the sub-trees which exhibit the children of $n_k$ as their root. \end{enumerate} \end{lem} \begin{IEEEproof} This Lemma follows directly from the definitions of $\mathbf{F}$ and $\mathbf{G}$ and the previous Lemmas. Specifically, the above procedure always yields a feasible activation, i.e., a matching of $\mathcal{G}$. In fact, in either option we never recurse on a node which has already been activated, hence no pair of edges $\in \mathbf{E^*}$ can share any vertices. Furthermore, due to the properties of $\mathbf{F}$ and $\mathbf{G}$ and their validity for each sub-tree in $\mathcal{G}$, the edges of $\mathbf{E^*}$ comprise a \textit{maximum-weight} matching, i.e., they yield the maximum possible utility among all the feasible schedules. \end{IEEEproof} Regarding the computational complexity of the proposed algorithm, it can be observed that during the first phase the main loop effectively scans each edge of $\mathcal{G}$, hence exhibiting a complexity \bigO{\vert E \vert}. Moreover, the second phase of \texttt{T-MWM} has complexity \bigO{\vert V \vert}, since it loops through all the network nodes. Therefore, we can conclude that the overall asymptotic complexity of the algorithm is \bigO{\vert V \vert + \vert E \vert}, or, equivalently, \bigO{\vert V \vert}, since in an \gls{st} the number of edges equals $\vert V \vert - 1$.
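For concreteness, a compact Python rendering of \texttt{T-MWM} is sketched below. It is our transcription of Alg.~\ref{Alg:tmwm}, not production code; it assumes, as above, an edge list in which every parent appears before its own children, and on the toy tree used earlier it returns the same matching as the general-purpose solver.
\begin{verbatim}
from collections import defaultdict

def t_mwm(edges):
    """MWM of a spanning tree in O(|V|) time.

    `edges` is a list of (parent, child, weight) tuples in which
    every parent appears before its own children.
    """
    children = defaultdict(list)
    order = []                    # internal nodes, root first
    for p, c, w in edges:
        if p not in children:
            order.append(p)
        children[p].append((c, w))

    F = defaultdict(float)  # utility of the sub-tree if n activates an edge
    G = defaultdict(float)  # utility of the sub-tree if n stays unmatched
    best = {}               # child reached by the best outgoing edge of n

    # First traversal: bottom-up sweep, children before their parents.
    for p in reversed(order):
        G[p] = sum(F[c] for c, _ in children[p])
        gain, arg = 0.0, None
        for c, w in children[p]:
            cur = w + G[c] - F[c]
            if cur > gain:
                gain, arg = cur, c
        F[p] = G[p] + gain
        best[p] = arg

    # Second traversal: top-down activation, parents before children.
    matched, active = set(), set()
    for p in order:
        if p not in matched and best[p] is not None:
            active.add((p, best[p]))
            matched.add(best[p])  # a matched child activates no edge
    return active

print(t_mwm([(0, 1, 5.0), (0, 2, 3.0), (1, 3, 4.0),
             (1, 4, 2.0), (2, 5, 6.0), (2, 6, 1.0)]))
# -> {(0, 1), (2, 5)}, i.e., the same matching found before
\end{verbatim}
\subsection{Semi-centralized resource partitioning scheme} \label{Sec:Cent-scheme} Based on the system model introduced in Sec.~\ref{Sec:Sys-model} and the \texttt{T-MWM} algorithm, we present a generic optimization framework which partially centralizes the backhaul/access resource partitioning process, in compliance with the guidelines of~\cite{3gpp_38_874}.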
The goal of this framework is to aid the distributed schedulers, adapting the number of \gls{ofdm} symbols allocated to the backhaul and access interfaces to the phenomena which exhibit a sufficiently slow evolution over time, i.e., large scale fading and local congestion. This optimization is undertaken with respect to a generic additive utility function $f_u$. An \gls{iab} network of arbitrary size is considered, composed of a single \gls{iab}-donor, multiple \gls{iab}-nodes and a (possibly time-varying) number of \glspl{ue} which connect to both types of \glspl{gnb}. Furthermore, let the topology of the \gls{iab} network be pre-computed, for instance by using the policies of~\cite{polese2018iab}, and assume that a central controller is installed on the \gls{iab}-donor. The proposed framework can be subdivided into the following phases, which are periodically repeated every $T_{alloc}$ subframes: \begin{enumerate} \item \label{Enum_frameword:item_one} \textbf{Initial setup}. This step consists of the computation of the simplified \gls{iab} network graph $\mathcal{G} \equiv \{ \mathcal{V}, \mathcal{E} \}$. Specifically, $\mathcal{V}$ is composed of the donor, the various \gls{iab}-nodes and, possibly, of additional nodes which represent the set of \glspl{ue} that are connected to a given \gls{gnb}. Accordingly, $\mathcal{E}$ contains the active cell associations of the aforementioned nodes. This process, depicted in Fig.~\ref{Fig:Phase0}, must be repeated every time the \gls{iab} topology changes, i.e., whenever a new \gls{ue} performs its \gls{ia} procedure or an \gls{iab}-node connects to a different parent due to a \gls{rlf}. \begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{Phase0.pdf} \caption{Creation of the \gls{iab} network graph. The original topology, exhibiting the actual cell attachments, is depicted on the left. Conversely, the reduced topology is shown on the right.} \label{Fig:Phase0} \vspace{-.6cm} \end{figure} \item \label{Enum_frameword:item_two} \textbf{Information collection}. During this phase, the various \gls{iab}-nodes send to the central controller a pre-established set of information for each of their children in $\mathcal{G}$. For instance, this feedback may consist of their congestion status and/or information regarding their channel quality. To this end, the implementation of this paper uses modified versions of pre-existing \gls{nr} Release 16 \glspl{ce}, as strongly recommended in the \gls{iab} SI~\cite{3gpp_38_874}. However, the scheme does not actually impose any limitations in this regard. \item \label{Enum_frameword:item_three} \textbf{Centralized scheduling indication}. Upon reception of the feedback information, the central controller calculates the set of weights $\mathcal{W}$ accordingly. Then, an \gls{mwm} of $\mathcal{G}$ is computed using the \texttt{T-MWM} algorithm, producing as output the activation set $\mathbf{E^*}$ which maximizes the overall utility of the system with respect to the chosen utility function. Subsequently, $\mathbf{E^*}$ is used to create a set of \textit{favored} downstream nodes, i.e., of children which will be served with the highest priority by their parent, as depicted in Fig.~\ref{Fig:Phase2}. Finally, these scheduling indications are forwarded to the various \gls{iab}-nodes which act as parents in the edges of $\mathbf{E^*}$.
\begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{Phase2.pdf} \caption{Computation of the \gls{mwm} and of the corresponding scheduling indications.} \label{Fig:Phase2} \vspace{-.6cm} \end{figure} \item \textbf{Distributed scheduling allocation}. During this phase, the various \gls{iab}-nodes make use of the indications received from the central controller, if available, in order to perform the actual scheduling (which is, therefore, predominantly distributed). Specifically, the favored nodes are served with the highest priority, while the remaining downstream nodes are scheduled if and only if the resource allocation of the former does not exhaust the available \gls{ofdm} symbols. \end{enumerate} It is important to note that since $\mathcal{G}$ contains only the \gls{iab}-nodes, the donor and at most one ``representative'' \gls{ue} per \gls{gnb}, the proposed scheme effectively performs only the backhaul/access resource partitioning in a centralized manner. On the other hand, the actual \gls{mac}-level scheduling is still undertaken in a distributed fashion, albeit leveraging the indications produced by the central controller. The major advantages of this two-tier design over a completely centralized solution are a relatively light signaling overhead and the ability to react promptly to fast channel variations, for instance those caused by small scale fading. \subsection{Implementation of centralized allocation schemes in mmWave \gls{iab} networks} \label{Sec:ns3-impl} The remainder of this section discusses how the proposed scheme can be implemented in \gls{iab} deployments, with references to how the \gls{3gpp} specifications can support it. The resource allocation framework requires (i) a central controller, which is installed on the \gls{iab}-donor, or could be deployed in a \gls{ric} following the O-RAN architecture~\cite{bonati2020open}; and (ii) a scheduler which exchanges resource coordination information with the former and computes the weights for the resource allocation. In particular, referring to the aforementioned phases of the proposed scheme, the following implementation considerations can be made. \subsubsection{Initial setup} The setup of the various centralized mechanisms is subdivided into two sub-phases: an initial configuration, where the relevant entities are initialized, and a periodic update of the topology information. The former takes place when the \gls{iab}-nodes are first connected to the network. During this phase, the controller acquires preliminary topology information by leveraging the configuration messages which are already exchanged during the usual Rel.16 \gls{ia} procedure, generating a map which associates each child--parent pair of global identifiers (from now on referred to as ``IDs'') with its depth in the \gls{iab} network. Since this phase takes place when no \gls{ue} has performed its \gls{ia} procedure yet, the exchanged topology information concerns the donor and \gls{iab}-nodes only. Moreover, the central controller is in charge of periodically updating the topology information. In order to minimize the signaling overhead, this process does not require any additional control information: in fact, the status information which is already collected in a periodic manner can be leveraged for this purpose. Specifically, the periodic information received from the various \gls{iab}-nodes, which carries a list of ID--value pairs, is analyzed. 
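A minimal sketch of this reconciliation is reported below (the data structures are hypothetical, and the parsing of the actual \glspl{ce} is omitted); the logic is spelled out next.
\begin{verbatim}
# Hypothetical sketch: reconciling the controller's topology map with
# the (child ID, parent ID) pairs carried by the periodic reports.
def reconcile_topology(topology, reported_pairs):
    # topology: dict mapping a child ID to its parent ID, as currently
    # known by the controller.
    for child, parent in reported_pairs:
        if topology.get(child) != parent:
            topology[child] = parent  # new node, or re-attachment (RLF)
\end{verbatim}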
The child-parent associations are thus compared with the ones known by the controller, which updates its map whenever the two differ. \subsubsection{Information collection} The generation of the feedback information is performed in a distributed manner by both the \gls{iab}-nodes and the \gls{iab}-donor. To this end, the current implementation features the forwarding of information on the channel quality and buffer status, in the form of \glspl{cqi} and \glspl{bsr} respectively. This choice is driven both by the aim of maximizing the reuse of the \gls{nr} Rel.16 specifications and by the goal of using \gls{mac}-level \glspl{ce} only, hence avoiding any constraint on the supported \gls{iab}-relaying architecture. In particular, the \gls{cqi} and \gls{bsr} information is generated by analyzing the corresponding \glspl{ce} received by the host \gls{gnb}, and checking whether the source \gls{rnti} belongs to an \gls{iab}-node or to a \gls{ue}. In the first case, the corresponding ID is retrieved and an entry carrying this identifier along with its \gls{cqi}/\gls{bsr} value is generated. The feedback information concerning the \glspl{ue}, instead, is averaged in the case of the \glspl{cqi} and added up for the \glspl{bsr}, to obtain a single value for each \gls{gnb}. It can be noted that both \glspl{cqi} and \glspl{bsr} are available to the scheduler: the UL buffer statuses are already periodically reported by the downstream nodes via their \glspl{bsr}, and the DL statuses can be easily retrieved by the scheduler itself, since the \gls{rlc} buffers reside on the same node, i.e., the \gls{gnb}. Referring to the \gls{3gpp} specifications of~\cite{3gpp_38_874, 3gpp_38_321, 3gpp_38_331}, the buffer occupancy can then be forwarded to the \gls{iab}-donor by introducing a periodic-only \gls{bsr} whose period is controlled by an ad hoc \gls{rrc} timer. Similarly, the channel qualities can be reported by the various \gls{iab}-nodes via additional periodic \glspl{cqi} which would carry only the \gls{cqi} index, neglecting the \gls{ri} and \gls{pmi}, since this information would generate unnecessary signaling overhead. These \glspl{ce} would preferably leverage pre-existing \gls{nr} measurements: the main novelty would be the introduction of their periodic reporting to the \gls{iab}-donor. To this end, the \gls{5g} \gls{cqi} and \gls{bsr} data-structures require an additional field which carries the ID, if the chosen \gls{iab}-relaying architecture does not feature an Adaptation Layer~\cite{3gpp_38_874}. Conversely, relaying solutions which support the latter can reuse the \gls{nr} \glspl{ce} and let this layer introduce an additional header. \subsubsection{Centralized scheduling indication} \label{Sec:cent_indic} Periodically, the controller located at the donor makes use of the feedback received from the \gls{iab}-nodes to first compute the weights of the various network links and then generate the centralized scheduling indications. We propose the following policies to compute the weights for the \gls{mwm} problem: \begin{enumerate} \item \textbf{\gls{msr}}. This policy maximizes the overall \gls{phy}-layer throughput, i.e., the utility function is \[ f_u^{\mathrm{MSR}} \equiv \sum_{e_{i \to k} \, \in \, \mathbf{E^*}} c_{i, \, k}, \] and the weight assigned to the edge from node $i$ to node $k$ reads $w_{i, \, k} \equiv c_{i, \, k}$, where $c_{i, \, k}$ is the capacity of the link $e_{i \to k}$. 
\item \textbf{\gls{ba}}. This resource partitioning strategy aims at avoiding congestion. Therefore, the system utility is: \[ f_u^{\mathrm{BA}} \equiv \sum_{e_{i \to k} \, \in \, \mathbf{E^*}} q_{i, \, k}, \] where the weight $w_{i, \, k}$ reads $q_{i, \, k}$, namely, the amount of buffered data which would reach its next hop in the \gls{iab} network by crossing the link $e_{i \to k}$. \item \textbf{\gls{mrba}}. This represents the most balanced option among the three, since it exploits favorable channel conditions while also preventing network congestion and favoring network fairness. The weight assigned to link $e_{i \to k}$ is: \[ w_{i, \, k} \equiv c_{i, \, k} + \eta \cdot q_{i, \, k} \cdot \left( \frac{\mu}{\mu_{thr}} \right)^{\kappa}, \] where $\eta$, $\, \mu_{thr}$ and $\kappa$ are tunable parameters (the exponent is written as $\kappa$ to avoid any clash with the node index $k$) and $\mu$ represents the number of subframes which have elapsed since the last time edge $e_{i \to k}$ has been marked as favored. \end{enumerate} Once the weights are computed, the controller obtains an \gls{mwm} of the network via an implementation of the aforementioned \texttt{T-MWM}. This function outputs the activation set $\mathbf{E^*}$, i.e., a map associating the ID of the parent \gls{gnb} to the one of its favored downstream node. Notably, this set does not necessarily contain scheduling indications for \textit{each} \gls{iab}-node in the network: an entry corresponding to a given \gls{gnb} is present if and only if that node is indeed active in the \gls{mwm}. First, this map is used by the controller in order to keep track of which links have not been favored and for how long; this information may then be used to introduce a weight prediction mechanism, improving the robustness of the scheme with respect to the information collection period. Finally, these scheduling indications are forwarded to the corresponding \gls{iab}-nodes. \subsubsection{Distributed scheduling allocation} The last phase of the resource allocation procedure consists of the distributed \gls{mac}-level scheduling. Before assigning the available resources, the various schedulers check whether any indication has been received from the controller. Based on this condition, the buffer occupancy information is then split into two groups. The first contains the \glspl{bsr} related to the favored \gls{rnti} (if any), with the caveat that if the latter indicates the cumulative access link, then this set contains the \glspl{bsr} of all the \glspl{ue} attached to the host \gls{gnb}; the other group comprises the remaining control information. The resource allocation process is then undertaken twice: first considering the set of favored \glspl{bsr} only, then the remainder of these \glspl{ce}, as sketched below. Thanks to this design, the favored link(s) is (are) scheduled with the highest priority, while the rest of the network only gets the remaining resources. In such a way, the information received from the controller is actually used as an \textbf{indication} and not as the eventual \textbf{resource allocation}. For instance, the \glspl{gnb} are free to override these indications whenever the buffer of the favored child is actually empty, due to discrepancies between its actual status and the related information available to the controller. In such a way, the unavoidable delay between the information collection and the reception of the scheduling indications does not lead to any resource underutilization. 
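As anticipated, the two-pass allocation can be summarized by the following Python sketch; this is a hypothetical illustration (all names are ours, not the ns-3 API) which omits the cumulative-access-link caveat. Note how the override of an empty favored buffer emerges naturally, since such a \gls{bsr} requests no resources.
\begin{verbatim}
# Hypothetical sketch of the two-pass MAC-level allocation: BSRs of
# the favored RNTI are served first; the remaining BSRs only receive
# the leftover OFDM symbols.
def allocate(bsrs, favored_rnti, symbols_available, symbols_needed):
    # bsrs: list of (rnti, buffered_bytes) pairs known to the scheduler;
    # favored_rnti: indication from the controller, or None;
    # symbols_needed: maps a buffer size to the OFDM symbols required.
    favored = [b for b in bsrs if b[0] == favored_rnti and b[1] > 0]
    others = [b for b in bsrs if b not in favored]
    grants, left = [], symbols_available
    for rnti, buffered in favored + others:
        request = min(symbols_needed(buffered), left)
        if request > 0:
            grants.append((rnti, request))
            left -= request
    return grants
\end{verbatim}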
Moreover, this behavior is achieved with minimal changes to state-of-the-art schedulers, making the proposed scheme relatively easy to implement and deploy in real-world networks. \section{Performance evaluation} \label{Sec:perf_eval} We implemented the proposed resource allocation scheme in the popular open source simulator ns-3, exploiting the \gls{mmwave} module~\cite{mezzavilla2018end} and its \gls{iab} extension~\cite{polese2018end}, to characterize the system-level performance of the proposed solution with realistic protocol stacks, scenarios, and user applications. The ns-3 \gls{mmwave} module is based on~\cite{baldo2011open} and introduces \gls{mmwave} channel models, including the \gls{3gpp} channel model for 5G evaluations~\cite{zugno2020implementation}, and a highly customizable \gls{phy} and \gls{mac} layer implementation, with an NR-like flexible \gls{ofdm} numerology and frame structure. Additionally, the \gls{iab} module~\cite{polese2018end} models wireless relaying functionalities which mimic the specifications presented in~\cite{3gpp_38_874}. Specifically, this module supports both single and multi-hop deployment scenarios, auto-configuration (within the network) of the \gls{iab}-nodes and a detailed \gls{3gpp} protocol stack, allowing wireless researchers to perform system-level analyses of \gls{iab} systems in ns-3. It is of particular relevance to understand how the scheduling operations are implemented in the \gls{iab} module, since they offer not only the baseline for the proposed scheme, but also valid guidelines for real-world deployments. The current ns-3 \gls{iab} schedulers exhibit a \gls{tdma}-based multiplexing between the access and backhaul interfaces. Moreover, scheduling decisions are undertaken in a distributed manner across the \gls{iab} network, i.e., each \gls{gnb} allocates the resources which its access interface offers (to both \glspl{ue} and \gls{iab}-nodes) independently of the other \glspl{gnb} in the network. In fact, in an \gls{iab} network these scheduling decisions are \textit{almost} independent of one another: if a parent node schedules the backhaul interface of a downstream node, clearly the latter will be constrained in its own scheduling decisions, as it will not be allowed to allocate the time resources which have already been scheduled for backhaul transmissions by its parent. Therefore, in a tree-based, multi-hop wireless network the various \glspl{gnb} need to know in advance the scheduling decisions performed by their upstream nodes: to solve this problem, the authors of the \gls{iab} module for ns-3 introduced a ``\textit{look-ahead backhaul-aware scheduling mechanism}''~\cite{polese2018end}. This mechanism features an exchange of \gls{dci} between the access and backhaul interfaces: in such a way, any time resources already scheduled by the parent for backhaul communications can be marked as such by the corresponding downstream node, preventing any overlap with other transmissions. Furthermore, the \textit{look-ahead} mechanism requires the schedulers of the various \glspl{gnb} to commit to their resource allocation for a given time $T$ at a time $T - k$, where $k - 1$ is the maximum distance (in terms of wireless hops) of any node from the donor. In such a way, the \glspl{dci} will have time to propagate across the \gls{iab} network and reach the farthest node at time $T - 1$, thus allowing its scheduler to perform the resource allocation process at least one radio subframe in advance. 
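As a concrete, hypothetical example of this timing: consider a branch in which the farthest node is three wireless hops away from the donor, so that $k-1=3$ and $k=4$. The allocation for subframe $T$ is then committed at subframe $T-4$; the corresponding \glspl{dci} propagate one hop per subframe and reach the farthest node at $(T-4)+3=T-1$, i.e., one subframe before the committed allocation takes effect.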
\subsection{Simulation scenario and parameters} The purpose of these simulations is to understand the performance of the proposed resource partitioning framework in the context of its target deployment, i.e., a multi-hop \gls{iab} network. As a consequence, the reference scenario consists of a dense urban deployment with a single \gls{iab}-donor and multiple \gls{iab}-nodes, as depicted in Fig.~\ref{Fig:Sim_scen}. In particular, the various \glspl{gnb} are distributed along an urban grid where the donor is located at the origin while the \gls{iab}-nodes are deployed along the street intersections, with a minimum inter-site distance of 100 m. The \gls{iab}-node attachments are computed using the so-called \textit{HQF} policy presented in~\cite{polese2018iab}; this choice does not introduce any loss of generality, since the policy is fixed across all the runs. A given number of \glspl{ue} are deployed in the surroundings of these base stations, with an initial position randomly sampled from circles of radius $\rho$ centered on the various \glspl{gnb}. \begin{figure}[tbp] \centering \subfloat[\label{Fig:Sim_scen}]{ \setlength\fwidth{0.38\columnwidth} \setlength\fheight{0.25\columnwidth} \raisebox{-.5\height}{\input{Figures/Sim_scenario.tex}} } \subfloat[\label{Tab:Sim_params}]{ \footnotesize \begin{tabular}{cc} \multicolumn{2}{c}{\textsc{Simulation parameters}}\\ \hline \textsc{Parameter} & \textsc{Value} \\ \hline Number of runs $N_{runs}$ & 25 \\ \rowcolor{gray!15} Simulation time $T_{sim}$ & 3 s \\ \gls{mwm} period $T_{alloc}$ & $\{ 1, 2, 4\}$ subframes \\ \rowcolor{gray!15} Layer 4 protocol & $\{ $UDP, TCP$ \}$ \\ UDP packet size $s_{UDP}$ & $\{50, 100, 200, 500 \}$ B \\ \rowcolor{gray!15} Weight policy $f_u$ & $\{$\gls{msr}, \gls{ba}, \gls{mrba}$\}$ \\ \hline \end{tabular} } \label{Float:Sim_scen_and_params} \caption{On the left, a realization of the simulation scenario is depicted; the dotted lines represent the cell-attachments of the \gls{iab}-nodes. On the right, a brief list of simulation parameters is provided.} \vspace{-.6cm} \end{figure} Both the \gls{iab}-donor and the \gls{iab}-nodes are equipped with a phased array featuring 64 antenna elements, and transmit with a power of 33 dBm; conversely, the \glspl{ue} are equipped with 16 antenna elements and their transmission power is restricted to 23 dBm. Notably, the presence of additional antenna elements at the \glspl{gnb} is a key (but reasonable) assumption, as it allows base stations to achieve a high beamforming gain. In turn, it is possible to achieve a high capacity, which is fundamental to avoid performance bottlenecks, given the absence of a fiber backhaul. The \glspl{ue} download data which originates from sources installed on a remote host; both \gls{udp} and \gls{tcp} are used. For the \gls{udp} simulations, the rate of the sources is varied from 4 to 40 Mbps to introduce different degrees of saturation in the network; in these simulations, only DL traffic is considered. Finally, the performance of the proposed policies is compared with the baseline of~\cite{polese2018end}, indicated as ``Dist.'', by examining end-to-end throughput, latency, and a network congestion metric. \subsection{Throughput} The first metric inspected in this analysis is the end-to-end throughput at the application layer. As a consequence, only the packets which are correctly received at the uppermost layer of the destination node in the network are taken into account. 
In particular, for each \gls{ue} and each simulation run, the long-term average throughput is computed as follows: \[ S^{\mathrm{APP}}_{k, n} \equiv \frac{B(T_{sim}, k, n)}{T_{sim}} \] where $B(t, k, n)$ is the cumulative number of bits received up to time $t$ by the $k$-th \gls{ue} during the $n$-th simulation run. Then, the distribution of $\mathbf{S}^{\mathrm{APP}}$, namely, the vector containing the collection of the $S^{\mathrm{APP}}_{k, n}$ values across the different runs and \glspl{ue}, is analyzed. Figs.~\ref{Fig:Thr_ECDF_100} and~\ref{Fig:Thr_ECDF_500} report the \gls{ecdf} of $\mathbf{S}^{\mathrm{APP}}$, for a \gls{udp} packet size of 100 and 500 bytes, respectively, and the policies introduced in Sec.~\ref{Sec:ns3-impl}. In the former, we can notice that the introduction of the centralized framework increases by up to 15\% the percentage of \glspl{ue} whose throughput matches the rate of the \gls{udp} sources. Moreover, by focusing on the leftmost portion of Fig.~\ref{Fig:Thr_ECDF_100} we can observe another interesting result, concerning the throughput experienced by the \glspl{ue} which do not fulfill their \gls{qos} requirements. In fact, with respect to the first quartile the distributed scheduler achieves the worst performance, with 25\% of the \glspl{ue} obtaining a throughput smaller than 3.3~Mbps. The centralized framework significantly improves these results, even though the extent of such improvements varies quite dramatically across the different policies. \begin{figure}[tbp] \centering \subfloat[$s_{UDP}$ = 100 B, i.e., \gls{udp} rate of 8 Mbps.\label{Fig:Thr_ECDF_100}]{ \setlength\fwidth{0.4\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_throughput_ECDF_packet_size_100.tex} } \hfill \subfloat[$s_{UDP}$ = 500 B, i.e., \gls{udp} rate of 40 Mbps.\label{Fig:Thr_ECDF_500}]{ \setlength\fwidth{0.4\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_throughput_ECDF_packet_size_500.tex} } \caption{Per-\gls{ue} end-to-end throughput \glspl{ecdf}. The dashed line represents the rate of the \gls{udp} sources.} \label{Fig:thr_ECDF} \vspace{-.6cm} \end{figure} Compared with the distributed case, the \gls{msr} policy achieves a higher throughput with respect to all the percentiles, albeit exhibiting the same high variance of the former. Instead, the \gls{ba} and \gls{mrba} policies have a dramatic impact on the system performance, introducing a 5-fold increase of the worst case throughput coupled with a significantly lower variance. These results can be explained as follows: since a \gls{udp} packet size of 100 bytes does not saturate the capacity of the access links, the main performance bottleneck of this configuration is represented by the buffering of the aggregated traffic on the intermediate backhaul links. Therefore, the \gls{msr} policy provides only minimal improvements compared to the performance of the distributed scheduler, since it simply favors the links which exhibit a higher \gls{sinr}. Conversely, the prioritization of the most congested links which is introduced by the other two strategies successfully tackles the former problem. In particular, the \gls{ba} policy exhibits the highest worst case throughput, albeit at the cost of the \gls{qos} satisfaction of approximately 20\% of the \glspl{ue}. 
On the other hand, the bias towards high \gls{sinr} channels introduced by the \gls{mrba} strategy has the opposite effect, improving mostly the higher percentiles while also outperforming \gls{msr} and the baseline in the lower percentiles. By increasing the \gls{udp} packet size to 500 bytes, the network becomes noticeably saturated, as depicted in Fig.~\ref{Fig:Thr_ECDF_500}; in fact, in this instance only a minority of the \glspl{ue} achieves a throughput which is comparable to the source rate. With this configuration, the \gls{ba} strategy achieves the worst performance, providing a significantly lower throughput across all the percentiles. On the other hand, the differences among the behavior of the remaining strategies are less evident. In particular, the \gls{msr} policy exhibits only a slight improvement over the distributed solution, albeit noticeable across the whole \gls{ecdf}. The \gls{mrba}, conversely, introduces performance benefits which affect mostly the bottom percentiles. However, with this strategy only a limited portion of the \glspl{ue} achieves the target throughput of 40 Mbps. As a consequence, we can conclude that with the configuration depicted in Fig.~\ref{Fig:Thr_ECDF_500} the network is approaching the capacity of the \gls{mmwave} channels. Therefore, buffering phenomena are likely occurring at each intermediate \gls{iab}-node. Moreover, we can say that in a saturated network the congestion is so severe that prioritizing the bottleneck links is not enough: we also need to take into account the channel conditions and prioritize the links which not only are congested, but also have the ``biggest chance'' of getting rid of the buffered data thanks to their temporarily better channel quality. \begin{figure}[tbp] \subfloat[First quartile.\label{Fig:Throughput_first_quartile}]{ \setlength\fwidth{0.37\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_first_quartile_throughput.tex} } \hfill \subfloat[Third quartile.\label{Fig:Throughput_third_quartile}]{ \setlength\fwidth{0.37\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_third_quartile_throughput.tex} } \vspace*{-3mm} \caption{End-to-end throughput quartiles, for $s_{UDP}$ $\in$ $\{50, 100, 200, 500 \}$ B.} \label{Fig:Throughput_quartiles} \vspace{-.6cm} \end{figure} Finally, Fig.~\ref{Fig:Throughput_quartiles} presents the first and third quartiles of $\mathbf{S}^{\mathrm{APP}}$ as a function of the \gls{udp} packet size $s_{UDP}$. It can be noted that, with respect to the first quartile, the \gls{mrba} outperforms all the other policies by delivering a throughput which is up to 90\% higher than the one obtained by the distributed scheduler. In fact, Fig.~\ref{Fig:Throughput_third_quartile} shows how \gls{mrba} also achieves the best third quartile, although the improvement over the distributed solution is less dramatic. Furthermore, we can observe how the positive impact of the \gls{ba} strategy is inversely proportional to the saturation in the network. We can then conclude that the bias it introduces loses its effectiveness as the buffering phenomena start to affect the majority of the \gls{iab}-nodes. \subsection{Latency} Just like the aforementioned metric, the latency is measured end-to-end at the application layer. Thanks to this choice, the resulting delay accurately represents the system-level performance, as it includes the latency which is introduced at each hop in the \gls{iab} network. 
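Both this metric and the previous one are reported as \glspl{ecdf}. As a purely illustrative aid, the following hypothetical helper (not part of the simulator) shows how such curves can be computed from the collected samples.
\begin{verbatim}
# Hypothetical sketch: empirical CDF of a vector of samples, e.g.
# per-UE throughputs or per-packet delays collected across the runs.
import numpy as np

def ecdf(samples):
    x = np.sort(np.asarray(samples, dtype=float))
    y = np.arange(1, x.size + 1) / x.size   # estimate of P[X <= x]
    return x, y
\end{verbatim}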
\begin{figure}[tbp] \centering \subfloat[ECDF, for $s_{UDP}$ = 100 B.\label{Fig:Del_ECDF_100}]{ \setlength\fwidth{0.48\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_delay_ECDF_packet_size_100.tex} } \hfill \subfloat[Third quartile, for $s_{UDP}$ $\in$ $\{50, 100, 200, 500 \}$ B.\label{Fig:Delay_third_quartile}]{ \setlength\fwidth{0.24\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/E2E_third_quartile_delay.tex} } \vspace*{-3mm} \caption{Per-\gls{ue} end-to-end delay statistics.} \label{Fig:Delay_stats} \vspace{-.6cm} \end{figure} In particular, for each packet correctly received at the uppermost layer of its final destination, the following quantity is traced: \[ D^{\mathrm{APP}}_{i} \equiv \sum_{l_k \, \in \, \mathcal{E}_{i}} D^{l_k}_i \] where $\mathcal{E}_{i}$ comprises the links in the \gls{iab} network that are crossed by the $i$-th packet, while the term $D^{l_k}_i$ indicates its point-to-point latency over the path link $l_k$. Finally, these values are collected for each of the various runs into the vector $\mathbf{D}^{\mathrm{APP}}$ and its statistical properties are inspected. Fig.~\ref{Fig:Del_ECDF_100} shows the \gls{ecdf} of $\mathbf{D}^{\mathrm{APP}}$ for a packet size of 100 bytes. It can be noticed that, in this case, the 90th percentile achieved by the \gls{ba} and \gls{mrba} policies is approximately 20\% smaller than the one obtained by the distributed scheduler. Moreover, these strategies manage to dramatically reduce the number of packets received with extremely high delay, i.e., in the order of seconds, showing the dramatic impact of buffering in the baseline configuration. Conversely, the \gls{msr} policy provides the best performance with respect to the best case delay only, although it still significantly outperforms the distributed strategy. These trends are accentuated in Fig.~\ref{Fig:Delay_third_quartile}, which shows the third quartile of $\mathbf{D}^{\mathrm{APP}}$ as a function of the \gls{udp} packet size $s_{UDP}$. In fact, we can notice that the effectiveness of the \gls{ba} policy is inversely proportional to the network saturation; the opposite holds true with respect to the \gls{msr} strategy. It follows that, for \gls{udp} rates in the order of 5 to 10 Mbps, the network is mainly plagued by local congestion which causes the onset of buffering in some of the nodes. Conversely, as the rate of the \gls{udp} sources increases the system shifts to a capacity-limited regime, a phenomenon which explains the dominance of the \gls{msr} and \gls{mrba} policies. \subsection{Network congestion} The network congestion is measured by collecting, every $T_{alloc}$ subframes, the \gls{rlc} buffer status of the various nodes into the vector $\mathbf{B}^{\mathrm{RLC}}$. It must be noted that, since \gls{rlc} \gls{am} is used, these values indicate data related to both new packets and possible retransmissions. Figs.~\ref{Fig:BSR_median_ue} and~\ref{Fig:BSR_median_node} show the median of $\mathbf{B}^{\mathrm{RLC}}$, for traffic flows whose next hop in the network is represented by either \glspl{ue} or \gls{iab}-nodes, respectively. The \gls{ba} strategy achieves the worst performance in this metric, leading to unstable systems in the cases of $s_{UDP}$ = $\{$200, 500$\}$ B. A reason for this behavior can be found in the ``locality'' of the \gls{ba} policy criteria and in the lack of influence of the past allocations on the weights. 
These characteristics may lead to favoring the same link in a repeated manner, hence offering little remedy to the end-to-end congestion. \begin{figure}[tbp] \centering \subfloat[Medians, toward \glspl{ue}.\label{Fig:BSR_median_ue}]{ \setlength\fwidth{0.19\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/Median_BSR_UE.tex} } \hfill \subfloat[Medians, toward \gls{iab}-nodes.\label{Fig:BSR_median_node}]{ \setlength\fwidth{0.19\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/Median_BSR_node.tex} } \hfill \subfloat[Third quartile vs. depth in the \gls{iab} network, for $s_{UDP}$ = 200 B.\label{Fig:BSR_third_quartile_depth}]{ \setlength\fwidth{0.23\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/Third_quartile_BSR_vs_depth.tex} } \vspace*{-3mm} \caption{Buffer occupancy statistics, for $s_{UDP}$ $\in$ $\{50, 100, 200, 500 \}$~B.} \label{Fig:BSR_median} \vspace{-.6cm} \end{figure} On the other hand, the buffer occupancy achieved by the \gls{msr} strategy depicts a system behavior which, in accordance with previous observations, is extremely similar to that of the distributed case. Interestingly, with these configurations the network congestion occurs primarily at the donor and, in general, on the backhaul links towards \gls{iab}-nodes. This phenomenon can be explained as follows: even though the backhaul links experience, on average, a better \gls{sinr}, the maximum number of such links which can be concurrently activated is limited, due to the \gls{tdd} configuration. Therefore, the \gls{msr} policy may introduce a bias towards the access links instead, since their activation yields the highest sum capacity, despite their lower channel quality. Finally, the \gls{mrba} policy achieves the lowest amount of \gls{rlc} buffering. Specifically, Fig.~\ref{Fig:BSR_median_node} shows that, compared to the \gls{msr} and distributed strategies, the median buffer occupancy among backhaul links is up to 60\% smaller, albeit at the cost of slightly more congested \gls{ue} buffers. Lastly, Fig.~\ref{Fig:BSR_third_quartile_depth} depicts the third quartiles of $\mathbf{B}^{\mathrm{RLC}}$ as a function of the depth of the corresponding \gls{gnb} in the \gls{iab} network. It is possible to notice that, regardless of the policy in use, the amount of buffering at the various \glspl{gnb} generally decreases as their distance from the donor increases. This follows from the fact that nodes which have a lower depth exhibit, on average, a larger subtending tree; therefore, the amount of traffic which makes use of their backhaul links is significantly higher. \subsection{Performance with \gls{tcp} traffic} This subsection extends the aforementioned analysis by inspecting the performance of the proposed scheme in the case of \gls{tcp} traffic. Specifically, a \gls{tcp} full-buffer source model is used, and the various centralized resource allocation policies are compared against the distributed scheduler. 
\begin{figure}[tbp] \centering \subfloat[Delay \gls{ecdf}.\label{Fig:TCP_delay_ECDF_256}]{ \setlength\fwidth{0.36\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/TCP_E2E_delay_ECDF_packet_size_256.tex} } \hfill \subfloat[Throughput \gls{ecdf}.\label{Fig:TCP_throughput_ECDF_256}]{ \setlength\fwidth{0.36\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/TCP_E2E_throughput_ECDF_packet_size_256.tex} } \caption{End-to-end delay and throughput statistics, for \gls{tcp} layer 4 protocol.} \label{Fig:TCP_ECDFs} \vspace{-.6cm} \end{figure} Fig.~\ref{Fig:TCP_delay_ECDF_256} shows the \gls{ecdf} of the end-to-end delay experienced by the successfully received packets. Similarly to the \gls{udp} case, the distributed scheduler exhibits the worst performance in this regard. However, the behavior of the centralized policies is remarkably different. In particular, with this configuration the \gls{msr} policy provides the best results, followed quite closely by the \gls{mrba} and \gls{ba} strategies. Fig.~\ref{Fig:TCP_throughput_ECDF_256}, which depicts the statistics of the end-to-end throughput achieved by the various \glspl{ue}, helps explain these results. The net effect of the \gls{ba} and \gls{msr} policies is, approximately, a 20\% increase of the peak throughput. Conversely, the \gls{mrba} strategy causes a redistribution of the achieved data rate, massively improving the lower percentiles (up to the 80th), albeit at the expense of the maximum throughput. Therefore, we can conclude that regardless of the specific policy used, the proposed scheme improves the system performance by limiting the onset of local buffering, aiding the end-to-end congestion control mechanism offered by \gls{tcp}. Furthermore, it can be noted that prioritizing either the most congested links or the channels featuring a higher quality results in performance benefits in the average case, although it also causes a decrease of the network fairness. On the other hand, the \gls{mrba} policy manages to optimize the backhaul/access resource partitioning while, at the same time, increasing the throughput fairness. \subsection{Further considerations} It is of particular relevance to analyze the performance of the centralized policies when relaxing the most restrictive hypothesis, i.e., the capability of reliably exchanging feedback information in a timely manner, and to understand how restrictive such an assumption actually is. To this end, Fig.~\ref{Fig:Alloc_period_impact} shows the performance of the proposed framework as a function of the centralized allocation period $T_{alloc}$. In particular, each of the depicted points represents the joint end-to-end throughput and delay achieved with the different configurations. 
\begin{figure}[tbp] \subfloat[Combined per \gls{ue} end-to-end throughput first quartile and end-to-end delay third quartile, as a function of the centralized allocation period $T_{alloc}$.\label{Fig:Alloc_period_impact}]{ \setlength\fwidth{0.45\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/Perf_vs_alloc_period.tex} } \hfill \subfloat[\gls{mwm} runtime as a function of the number of \gls{iab}-nodes in the network.\label{Fig:MWM_runtime}]{ \setlength\fwidth{0.3\columnwidth} \setlength\fheight{0.2\columnwidth} \input{Figures/Sim_results/MWM_runtime_vs_nodes.tex} } \caption{Considerations on the formulated assumptions.} \label{Fig:Assump} \vspace{-.6cm} \end{figure} As expected, in general the effectiveness of the various centralized policies progressively deteriorates as the frequency of the scheduling indications decreases. Interestingly, the \gls{ba} policy exhibits the lowest performance degradation with respect to an increase of the allocation period, which suggests that congestion evolves more slowly over time than the channel quality. Nevertheless, the key takeaway is that all of the proposed allocation strategies outperform the distributed solution, across both metrics. However, the trend depicted by Fig.~\ref{Fig:Alloc_period_impact} also suggests that there exists a threshold value of $T_{alloc}$ beyond which the proposed framework brings only marginal performance benefits. Additionally, the running time of the \gls{mwm} algorithm presented in Sec.~\ref{Sec:T-MWM} was analyzed, in order to understand whether it may partially invalidate the timely feedback assumption. Specifically, Fig.~\ref{Fig:MWM_runtime} presents the statistics of the various \gls{mwm} execution times, obtained on a machine equipped with an i7-6700 4-core processor clocked at 3.4~{GHz}. The first observation which can be made is that this empirical analysis confirms the previously estimated asymptotic complexity, depicting a running time which exhibits a linear dependence on the number of \glspl{gnb} in the network. Furthermore, it can be noted that the runtime of the \gls{mwm} algorithm does not exceed 6~$\mu$s, even for a significant number of \gls{iab}-nodes connected to the same \gls{iab}-donor. As a consequence, we can conclude that the execution times of the centralized allocation process do not pose any threat to the timely feedback assumption, since they are well below the duration of the minimum centralized allocation period. \section{Conclusions} \label{Sec:conc} In this paper we proposed a centralized resource partitioning scheme for \gls{5g} and beyond \gls{iab} networks, coupled with a set of allocation policies. We showed that even this lightweight resource allocation coordination dramatically improves the end-to-end throughput and delay achieved by the system, preventing (or at the very least limiting) the onset of network congestion on the backhaul links. Specifically, the \gls{mrba} policy exhibits the most promising results, offering up to a 5-fold increase in the worst-case throughput and approximately a 50\% smaller worst-case latency, compared to the distributed scheduler. On the other hand, the effectiveness of the \gls{ba} and \gls{msr} policies varies quite significantly with the specific system configuration and the inspected metric. We also provided considerations on the implementation of a centralized resource allocation controller in real-world deployments. 
In particular, we acknowledged that the proposed scheme relies on the assumption that \gls{iab}-nodes are capable of exchanging timely feedback information with the \gls{iab}-donor. Even though the amount of signaling data which the proposed solution requires is quite low, and its performance is quite robust with respect to an increase of the central allocation period, we argue that this remains a significant constraint. Moreover, such a drawback is exacerbated by the unfavorable \gls{mmwave} propagation characteristics. As a consequence, we deem that centralized solutions which rely on the timely exchange of control information with the \gls{iab}-donor are likely to require dedicated control channels, possibly at sub-6~{GHz}, in order to grant the utmost priority and reliability to the feedback information. Therefore, we can conclude that the aforementioned framework can bring dramatic performance benefits to \gls{iab} networks, although its introduction in \gls{5g} and beyond deployments requires additional research efforts. For this reason, as part of our future work we plan to design machine-learning algorithms which predict the network evolution at the \gls{iab}-donor. This improvement will allow us to relax the timely feedback assumption, by increasing the minimum centralized allocation period which leads to performance benefits over distributed strategies. Moreover, we plan to implement mechanisms which adapt the parameters of the \gls{mrba} policy to the system load and configuration, as well as additional resource partitioning strategies. Finally, the generalization of the proposed framework to \gls{sdma} systems will be studied. The use of such a multiple access scheme should significantly improve the performance of \gls{mmwave} wireless backhauling by introducing the possibility of concurrently serving multiple terminals, provided that they are sufficiently far apart. \glsresetall \glsunset{nr} \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The behaviour of photons in a local gravitational field is well described by General Relativity (GR), and observational data have confirmed without exception that solutions to the general field equations are exact when applied to static or rotating localised gravitational masses. To the early successes of the precession of the perihelion of Mercury and the gravitational bending of star light during a solar eclipse have been added many other confirmatory observations. These include gravitational redshift \citep{2002hrxs.confE..10C}, the production of Einstein rings by dark matter (DM) halos \citep{2015ApJ...811..115W}, X-ray emission data in the neighbourhood of black holes \citep{2004A&A...413..861M,2009ApJ...706..925B,2014MNRAS.441.3656R}, and the Sunyaev-Zeldovich effect \citep{2012Crowell,2016A&A...594A..24P}. The first postulate of special relativity (SR) dictates that the velocity of light is constant for all observers in their local reference frame. Geometrically, this may be represented by the locus of a logarithmic spiral to generate a curve of constant angle to the local time axes (Fig.~\ref{fig:galaxies}). This geodesic of SR may be illustrated as a hyperbolic curve crossing diverging imaginary time axes, and is independent of the spatial curvature, which is allowed to be flat, spherical or hyperbolic \citep{2016IJMPC..2750055M}. The Friedmann--Lema\^{i}tre--Robertson--Walker (FLRW) equation, however, only allows this expansion curvature of SR to be reintroduced through the hyperbolic curvature of space as the combined mass-energy of space $\rightarrow0$, which contrasts with observations that show space to be essentially flat. In GR, curvature occurs by the distortion of space by gravitational energy, and these gravitational effects on the curvature of the Universe will increase in significance as look-back time extends and temperature and energy densities increase towards the CMB radiation and the early universe at $z\simeq1089$ \citep{2016A&A...594A..13P}. This loss of an innate hyperbolic curvature of expansion may be mimicked in GR by introducing extra mass as DM and dark energy as a variable acceleration component, with both components being required and arbitrarily adjusted to match current cosmological observations. \begin{figure} \centering \includegraphics[height=8.5cm, width=6cm]{fig1.pdf} \caption{Geodesic for a photon traversing mass-free space, from frame $\Sigma_e$ moving at velocity $V$ relative to an observer $\Sigma_0$, with a small element of the geodesic $\delta{}S$ for reference frames $V$ and $V-\delta{V}$ rotated through $\delta\psi$. The photon path (red line) is then a logarithmic spiral $1+z=\exp\psi$ ($c=1\equiv{45}^\circ $) across diverging galaxies on the complex plane. Redshifts referenced to $\Sigma_0$.} \label{fig:galaxies} \vspace*{8pt} \end{figure} This paper presents an extension to the geometrical analysis of the spacetime manifold of GR to suggest that space is fundamentally curved by its own expansion, in addition to the curvature produced by the enclosed mass-energy density. 
In the standard model, this expansion curvature is only required for the dust model and is generated by letting space adopt a hyperbolic curvature, but by here adopting an additional curvature term to accommodate the motion of photons across an expanding space \citep{2016IJMPC..2750055M}, the Einstein equations retain the standard components of GR, yet reduce to the equations of SR as $\Omega_m\rightarrow 0$ without requiring a change in $\Omega_k$, the spatial curvature term. The model is tested by comparing its predictions for luminosity distance with the extensive apparent magnitude data of supernovae type 1a (SNe 1a), and a wide range of recently published angular diameter distances out to $z=2.36$. On both measures, it is comparable to the best $w$-CDM model. \section{The FLRW metric} \label{sec:FLRW} A model geometry of the evolving Universe may be constructed as a simply connected smooth Riemannian manifold $R_m$ with metric $g_{\mu\nu}$. It is taken as axiomatic that the Universe is homogeneous and isotropic in space, but not in time. Of the eight Thurston 3-manifold Riemannian geometries, only three fulfil the criteria of homogeneity and isotropy for the observable Universe: the 3-sphere $S^3$, the 3-D Euclidean space $E^3$, and the 3-D hyperbolic space $H^3$. Finite volume manifolds with $E^3$ geometry are all compact, and have the structure of a Seifert fibre space, remaining invariant under Ricci flow. $S^3$ manifolds are exactly the closed 3-manifolds with finite fundamental group, and under Ricci flow such manifolds collapse to a point in finite time. In contrast, manifolds with $H^3$ hyperbolic geometry are open and expand under Ricci flow \citep{2003Milnor}. Using a Lie group acting on the metric to compute the Ricci tensor $R_{\mu\nu}$, these manifolds are deformed by Ricci flow as a function of time $t$, and we may then define the geometric evolution equation, $\partial_t g_{ij}=-2R_{ij}$, with normalised Ricci flow given by \citep{2008Perelman}: \begin{equation}\label{eq:Ricci_flow} \partial_t g_{ij}=-2R_{ij}+\frac{2}{3}R g_{ij}\,. \end{equation} This is equivalent to a Universe that can be foliated into space-like slices, and spacetime itself may therefore be represented by $\Gamma\times\mathbb{R}^3$, where $\Gamma$ represents the time direction, with the general form $\textnormal{\,d}{s^2}=g_{\mu\nu}\textnormal{\,d}{x^\mu}\textnormal{\,d}{x^\nu}$ in the standard notation. $\mathbb{R}^3$ must be a maximally symmetric space to conform to a homogeneous and isotropic three-manifold, with metric $\textnormal{\,d}\sigma^2=\gamma_{ij}\textnormal{\,d}{x}^i\textnormal{\,d}{x}^j$. By scaling $t$ such that $g_{00}=-1$ with $\textnormal{c}=1$, we may write the metric as: \begin{equation} \label{eq:metric2} \textnormal{\,d}{s^2}=-\textnormal{\,d}{t^2}+a(t)^2\gamma_{ij}(x)\textnormal{\,d}{x^i}\textnormal{\,d}{x^j}\,, \end{equation} where $\gamma_{ij}$, $x^i$, $x^j$ are the co-moving co-ordinates. In cosmology, homogeneity and isotropy imply that $\mathbb{R}^3$ has the maximum number of Killing vectors, and with the additional constraint of the metric being torsion-free (the Levi-Civita connection), $\gamma_{ij}$ is the maximally symmetric metric of $\mathbb{R}^3$. 
This yields the general solution to Einstein's equation \citep{1970Misner,1993Peebles,2004Carroll}, which may be stated in polar coordinates (Eq.~\ref{eq:FLRW2}): \begin{equation}\label{eq:FLRW2} \textnormal{\,d}{s^2}=-\textnormal{\,d}{t^2}+a(t)^2\left[\textnormal{\,d}{r^2}+ S_k^2(r)\textnormal{\,d}\Omega^2\right]\,, \end{equation} \begin{equation}\label{eq:Sk} \mbox{where~~~}S_k^2(r) \equiv \left\{ \begin{array}{lcl} \Re_0^2\sin^2(r/\Re_0) & \mbox{for} & \Re_0>0 \\ r^2 & \mbox{for} & \Re_0=\infty \\ \Re_0^2\sinh^2(r/\Re_0) & \mbox{for} & \Re_0<0 \,, \end{array}\right. \end{equation} \begin{equation}\label{eq:Sk2} \mbox{or~~~} S_k(r) \equiv \frac{1}{\surd{K}}\sin\left(r\sqrt{K}\right)\,, \end{equation} and $K=\textnormal{sgn}(\Re_0)/\Re_0^2$ is the curvature. With $\chi$ as a third angular coordinate, $r=\Re_0\chi$ is the radial distance along the surface of the manifold, $\Re_0$ is the comoving 4-space radius of $\mathbb{R}^3$ at the present epoch, and $\textnormal{\,d}\Omega^2=\textnormal{\,d}\theta^2+\sin^2\theta\textnormal{\,d}\phi^2$ is the angular separation. The signature {\bf{diag}}$(-,+,+,+)$ defines this as a Pseudo-Riemannian manifold with metric $g_{\mu\nu}$ and spatial metric $g_{ij}$, and $a(t)$ is the scale factor at proper time $t$. The actual form of $a(t)$ is set by the curvature of the manifold and the energy tensor of Einstein's field equations; both the curvature $K$ (or radius $\Re$) and the scale factor $a(t)$ remain to be determined. Intrinsic curvature is a mathematical construct describing the deviation of otherwise parallel lines towards or away from each other, and does not require higher dimensions. However, to understand physical reality we may invoke geometrical representations, with intrinsic curvature equivalent to embedding in higher dimensions, but this geometric dimensionality is distinct from other attempts to introduce extra physical dimensions into GR by quantum gravity or string theory \citep{PhysRevLett.116.071102}. It should be emphasised that this manifold is a curved 3-D volume embedded in Euclidean 4-space, just as the surface of a sphere is a curved 2-D manifold embedded in Euclidean 3-space. Measurements on the surface of a 2-D sphere involve a distance and an angle, with the third dimension the implicit radius of the sphere. For the 3-D volume, $\chi$ is a third angular measure, with the implicit radius $\Re$ now the fourth dimension \citep{2009Komissarov}. For an expanding 2-D manifold in 3-D space, time is geometrically a fourth dimension, and---by extension---for the expanding 3-D volume the time axis represents a fifth dimension. The curvature or shape of the homogeneous hyper-surfaces are defined by the spatial 3-metric $\gamma_{ij}\textnormal{\,d}{x}^i\textnormal{\,d}{x}^j$ of Eq.~\ref{eq:metric2}, but the whole dynamics of the Universe are embodied only in the expansion factor, $a(t)$ \citep{1970Misner}. With $r$ as the radial coordinate, radial distances are Euclidean but angular distances are not. If we are only interested in photon redshift distances, $\textnormal{\,d}\Omega=0$ and Eq.~\ref{eq:FLRW2} is the more useful form of the metric. Setting $\textnormal{\,d}{s}^2=0$ and $g_{\theta\theta}=g_{\phi\phi}=0$, $\textnormal{\,d}{r}$ now represents a radial photon distance from the era of emission $t_e$ to the present epoch at $t_0$, with: \begin{equation} R_\gamma = \int\!\textnormal{\,d}{r} = \int_{t_e}^{t_0}\!\frac{\textnormal{\,d}{t}}{a(t)}\,. 
\label{eq:R0a} \end{equation} $R_\gamma$ is a function of $a(t)$ only, and may be independent of the curvature of the spatial manifold. Symmetry ensures that proper time for standard clocks at rest relative to the spatial grid runs at the same rate as the cosmological time $(t)$, making the interval $\textnormal{\,d}{t}$ Lorentzian. Any coordinate system in which the line element has this form is said to be synchronous, because the coordinate time $t$ measures proper time along the lines of constant $x^i$ \citep{1970Misner}. The substitution $\chi=\sin(r/\Re)$, $\chi=r/\Re$, or $\chi=\sinh(r/\Re)$ into $S_k(r)$ in Eq.~\ref{eq:FLRW2} makes $\chi$ a radial coordinate with $\Re$ absorbed into $a(t)$, and now angular distances are Euclidean but radial distances are not (Eq.~\ref{eq:FLRW1}): \begin{equation}\label{eq:FLRW1} \textnormal{\,d}{s^2}=-\textnormal{\,d}{t^2}+\Re(t)^2\left[\frac{\textnormal{\,d}{\chi^2}}{1-k\chi^2}+\chi^2\textnormal{\,d}\Omega^2 \right]\,. \end{equation} This form is useful for measuring angular distances on a shell of fixed radius ($g_{\chi\chi}=(1-k\chi^2)^{-1}\,,~\textnormal{\,d}{\chi}=0$), such as the proper diameters of clusters or spatial volume for galaxy counts. \section{The expanding Universe as geometry} \label{sec:geometry} Milne described a dust universe expanding with constant relative velocity assigned to each galaxy, and with a mass-energy density sufficiently low that any deceleration could be neglected \citep{1935Milne}. Such a universe does not have to be spatially flat, but it does have the property that $\dot{a}(t)= $~constant, and hence $a(t)=a_0 (t/t_0)$, where $a_0$ is the scale factor at the current epoch $t_0$, defined to be $a_0=1$. Taking Eq.~\ref{eq:FLRW2} to be the FLRW metric for the photon path, we may set $\textnormal{\,d}\theta=\textnormal{\,d}\phi=0$, hence $\textnormal{\,d}\Omega=0$, and consider only the radial coordinate $\textnormal{\,d}{r}$. This modified Milne model (M3) is therefore independent of the space curvature: this may be an expanding 3-sphere, a flat 3-sheet, or a 3-saddle. What M3 does demand is that the time-like foliation of these 3-spaces is linear; the space itself may be infinite or closed, but will maintain its initial curvature signature whether expanding forever or contracting. Einstein's first postulate in a system of non-accelerating inertial frames may be summarised as: the velocity of light is constant for any observer, independent of the velocity of the source. Interpreting the time coordinate as the imaginary axis has become deprecated, but doing so forces the proper time axis to be a radius of length $\tau=ict$ and allows a graphical interpretation of the interval $S$ as unvarying under rotation, providing a geometric visualisation of this postulate \citep{2016IJMPC..2750055M}. In Figure~\ref{fig:galaxies}, the infinitesimal geodesic is extended to illustrate the path of photons between galaxies in the uniformly expanding homogeneous, isotropic universe of M3. This geometrical figure is generated by assuming that: (1) observed redshifts represent a true relative motion (whatever the underlying cause); (2) galaxies are moving apart with a velocity that is constant over the time period of the observations, generating a set of diverging inertial reference frames in space; (3) photons traverse these reference frames at constant velocity $c$ to all local observers, in their local Minkowski space under a Lorentzian transformation; (4) this is a `dust' Universe, with no gravitational effects. 
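As a worked consequence of these assumptions (a check, using only relations already introduced): substituting the linear M3 scale factor $a(t)=t/t_0$ into Eq.~(\ref{eq:R0a}) gives
\[ R_\gamma=\int_{t_e}^{t_0}\frac{\textnormal{\,d}{t}}{a(t)}=t_0\int_{t_e}^{t_0}\frac{\textnormal{\,d}{t}}{t}=t_0\ln\!\left(\frac{t_0}{t_e}\right)\,, \]
so that, once the redshift relation of Eq.~(\ref{eq:z}) below is established, the comoving photon distance in M3 takes the simple form $R_\gamma=t_0\ln(1+z)$.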
Any individual volume of space, such as a specific galaxy, may be considered stationary within its own reference frame. Let us define this reference frame as $\Sigma_0$ for our own local galactic space (Fig.~\ref{fig:galaxies}). This neglects small-scale local movements, and is a simple representation and first-order approximation of an idealised world line for a particle in space: the components of $v$ are assumed to relate only to local motions that are generally much less than the recessional velocity, and are taken to be zero in most theoretical models of the Universe. The relative motion of two inertial frames, $\Sigma_0$ and $\Sigma_e$, diverging from a common origin with velocity $v$ may then be viewed as a hyperbolic rotation $\psi$ (the rapidity) of the spacetime coordinates on the imaginary plane (Fig.~\ref{fig:galaxies}), a Lorentz boost with a rotational 4-matrix {$\Lambda_{\,\nu\,'}^{\mu}$}: \begin{equation} x^\mu=\Lambda^{\mu}_{\nu\,'}x^{\nu\,'} \end{equation} \begin{center} $\Lambda^{\mu}_{\nu\,'} = \left( \begin{array}{cccc} \cosh{\psi} & \sinh{\psi} & 0 & 0 \\ \sinh{\psi} & \cosh{\psi} & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) $ \end{center} where $\cosh\psi=(1-v^2/c^2)^{-1/2}=\gamma$, $\tanh\psi=\beta=v/c$, and $\sinh\psi=\beta\gamma$, in the standard notation, with {\bf{det}}$\Lambda=+1$. Now consider a volume of space receding from us with velocity $v$ as defined by its redshift, with a proper radial distance $\Re_e$ at the time of emission. The photon path can now be represented geometrically as a logarithmic spiral on the complex plane ($PQ$ in Fig.~\ref{fig:galaxies}). It will be noted that $\psi$ is a hyperbolic angle, so the geometry allows $\psi>360^{\circ}$, because $v/c =\tanh{\psi}\rightarrow 1$ as $v\rightarrow c$ and $\psi\rightarrow \infty$, whereas local velocities are represented by real angles with trigonometric functions. The scale is chosen by convention such that $\alpha=45^\circ$ with $c=1$; hence the maximum angle in the local frame of reference corresponds to the standard light cone, with $\arctan(1)=45^\circ$. Although the spatial component of the M3 model may have curvature, M3 has no matter density and Fig.~\ref{fig:galaxies} is therefore geometrically flat as a consequence of the linear relationship between the radial and time axes. For a photon, $\delta{}S=0$ (null geodesic). It then follows that $\delta{}\Re^2=c^2\delta t^2$, or $\delta{}\Re=\pm c\delta{}t$, where the sign represents an incoming or outgoing photon. But $\delta{}\Re=ct\delta{}\psi$, thus $\delta{}t/t=\mp \delta\psi$. Using $-\delta\psi$ for the incoming photon and integrating: \begin{equation} \int_{t_e}^{t_0}\frac{\textnormal{d}{t}}{t}=\int_\psi^0-\textnormal{d}\psi\,. \label{eq:Int_dt} \end{equation} \begin{equation} \textnormal{i.e. } \ln{(t_0/{t_e})}=\psi \textnormal{ or } t_0/{t_e}=e^{\psi}\,. \label{eq:log_t} \end{equation} Although all diverging world lines are equivalent and will ``see'' photons intercepting and leaving them at velocity $c$, the source lines are Doppler red-shifted, with a wavelength of emission $\lambda_e$ in $\Sigma_e$ and a wavelength at observation $\lambda_0$. Redshift is defined as: \begin{equation} z=\frac{\lambda_0-\lambda_e}{\lambda_e}=\lambda_0/\lambda_e-1 \label{eq:Doppler} \end{equation} and setting $\lambda_e=\Delta{t_e}$, $\lambda_0=\Delta{t_0}$, it is easy to show that \begin{equation} 1+z = \Delta{t_0}/\Delta{t_e} = t_0/t_e = e^\psi \,. 
\label{eq:z} \end{equation} But $e^\psi=\cosh\psi+\sinh\psi$, hence \begin{equation} 1+z=\gamma+\gamma\beta=\gamma(1+\beta)\,, \label{eq:gammabeta} \end{equation} which is the relativistic Doppler shift in SR, with $z\rightarrow{\infty}$ as $v\rightarrow{c}$. We may perform a topological transform of the Milne model of Fig.~\ref{fig:galaxies} into an imaginary 4-cone (Fig.~\ref{fig:MilneCone}) without loss of generality. From Eq.~(\ref{eq:z}), $\psi=\log(1+z)$, and the three galaxies represented in Fig.~\ref{fig:MilneCone}, with redshifts of 0.5, 1.0 and 1.5, have corresponding hyperbolic angles of $\psi=23.2^\circ, 39.7^\circ,$ and $52.5^\circ$ respectively. \begin{figure} \centering \includegraphics[width=6cm]{fig2.pdf} \caption{The Milne manifold of Fig.~\ref{fig:galaxies} as a 3-D cone for two photons crossing expanding space, originating at redshift $z = 1.5$ and crossing the paths of galaxies at redshift 1.0 and 0.5 at constant ($45^\circ$) angle. The increase in Doppler wavelength ($\Delta\tau_e$ to $\Delta\tau_0$, equivalent to $\lambda_e$ to $\lambda_0$) is visualised in this exaggerated plot.} \label{fig:MilneCone} \end{figure} Despite the appearance of curvature, there is no acceleration ($\dot{a}=$~constant; $\ddot{a}=0$) and this remains a geometrically flat figure. The imaginary proper time axes (e.g.~$\tau_0$ and $\tau_e$) are straight lines that diverge linearly. Likewise, the radii of curvature round the vertical axis are proportional to $a(t)$, the radial distances on the manifold at constant cosmological (proper) times (e.g. $\Re_0$ and $\Re_e$) are orthogonal functions of $a(t)$ only, and the locus of each photon track is a line of constant angle. \section{GR as geometry} The presence of mass-energy in the Universe introduces a non-linear component to $a(t)$, with consequent curvature of the time axis and an additional curvature to the path of the photon. This cannot be displayed on a flat 2-D diagram, but can be demonstrated using the topological transform of Fig.~\ref{fig:MilneCone}. The presence of acceleration introduces curvature to the imaginary $\tau$ coordinate (Fig.~\ref{fig:GRCone}), representing accelerations from gravitational or dark mass and dark energy that may be attractive/positive or repulsive/negative, respectively. \begin{figure} \centering \includegraphics[width=6cm]{fig3.pdf} \caption{The cone manifold of Fig.~\ref{fig:MilneCone} with curvature of the imaginary time axis by the presence of matter, and two photons crossing expanding space at constant angle.} \label{fig:GRCone} \end{figure} The manifold of a sphere in 3-space is sufficiently described as a curved two-dimensional surface. Similarly, the extra dimensions required to visualise the geometry of expanding curved spacetime do not represent real dimensions, but are a helpful aid to geometrical visualisation of the manifold. Because 3-space with curvature requires a 4-dimensional space and the curved time coordinate occupies a further dimension, space-time now exists in 5-space, compacted in Fig.~\ref{fig:GRCone} to a 2-manifold in 3-space. Integration of the photon path across this surface may be represented by considering a thin wedge or petal of the time-space manifold in GR (Fig.~\ref{fig:petal}), with the imaginary surface curved by mass-energy as well as by expansion. The new radius of curvature is $R(\tau)=1/(\textnormal{\,d}\beta/\textnormal{\,d}{\tau})$, and this is independent of the spatial curvature, $K$. 
In the Milne model, the manifold is flat with $\textnormal{\,d}\beta/\textnormal{\,d}\tau=0$ and $R=\infty$, and the cone base angle, $\beta_0$, can take any arbitrary value, with $\beta_0=\pi/2$ for Fig.~\ref{fig:galaxies}. Referring to Fig.~\ref{fig:petal}, the lines of longitude are the imaginary time axis, with $\textnormal{\,d}\tau = i\textnormal{\,d}{t}$, whilst the lines of latitude represent the spatial component defined by $\textnormal{\,d}{L}^2=\gamma_{ij}(x)\textnormal{\,d}{x^i}\textnormal{\,d}{x^j}$ (Eq.~\ref{eq:metric2}); $\Delta{L_0}$ is the comoving distance; $\Delta{L}=a(t)\Delta{L_0}$ is the proper distance at time $t$; and the curvature $1/R^2=f(\ddot{a}\,)$ is the acceleration. It may be noted that, in contrast to a general radius~vs.~time plot with $t$ as the vertical axis, the time axis is here embedded in the manifold. Unlike Fig.~\ref{fig:MilneCone}, the apex of this cone does not converge onto the vertical axis, but curls round itself as $R\rightarrow0$ and $\ddot{a}\rightarrow\infty$. The model therefore still requires an inflationary scenario to close the gap and ensure causal connectedness. \begin{figure} \centering \includegraphics[width=6cm]{fig4.pdf} \caption{Thin slice of curved GR manifold, $\Delta{L}$ vs. $\tau$. $\Delta\beta=\Delta\tau/R$ is the rate of change of expansion; $\beta=\sinh^{-1}(a/\tau)=f(H)$. The mass-energy radius of curvature, $R$, is considerably foreshortened in this exaggerated plot. (Imaginary values are shown in red.)} \label{fig:petal} \vspace*{8pt} \end{figure} \section{Geometry with curvature} Geometrically, redshift is observed when otherwise parallel photon paths diverge from each other, as evidenced in the flat Minkowski Milne model of Fig.~\ref{fig:galaxies}. The modified GR model is an attempt to present the geometrical curvature of diverging (redshifted) photons as a clear but separate curvature superimposed on both the secondary curvature of spacetime through gravitational mass and any intrinsic primary curvature of space itself. Standard vectors are restricted in the presence of curvature on the spacetime manifold, but we may use Cartan vectors as operators \citep{2009Komissarov}. Assign to each particle in the Universe the set of observer-dependent coordinates $x^\mu$. The invariant line element, with proper time $x^0=t$ and space coordinates $x^i=x^i(t)$, has its spacetime geometry encoded in the metric $\textnormal{\,d} s^2=g_{\mu\nu}\textnormal{\,d} x^{\mu}\textnormal{\,d} x^{\nu}$. Free particles then move along curved geodesics, with 4-velocity \begin{equation}\label{eq:Umu} U^\mu=\frac{\textnormal{\,d}{x}^\mu}{\textnormal{\,d}{s}}\,. \end{equation} With $t$ as a parameter, the spatial derivatives are the velocity components $U^i=\textnormal{\,d}{x^i}/\textnormal{\,d}{t}$, and we may introduce the differential operator $\textnormal{\,d}/\textnormal{\,d}{t}=U^i \partial/\partial{x^i}$, which is the directional derivative along the curve \citep{2009Komissarov}. The components $U^i$ of the operator now form the local coordinate basis, \begin{equation} \label{eq:U} \overrightarrow{U}=\frac{\textnormal{\,d}}{\textnormal{\,d}{t}}\,;~~\overrightarrow{e_i}=\frac{\partial}{\partial{x^i}}\,, \end{equation} and the basis vectors $\overrightarrow{U}=U^i~\overrightarrow{e_i}$ define the parametrised vector space associated with the point $x^\mu$.
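The operator statement $\textnormal{\,d}/\textnormal{\,d}{t}=U^i \partial/\partial{x^i}$ is simply the chain rule applied along the curve. As a purely illustrative aid (not part of the derivation), the following short Python sketch checks the identity symbolically for a sample scalar field:
\begin{verbatim}
import sympy as sp

# Directional derivative check: d(phi)/dt = U^i d(phi)/dx^i along a curve.
t, X, Y = sp.symbols('t X Y')
x, y = sp.Function('x')(t), sp.Function('y')(t)   # arbitrary curve (x(t), y(t))

Phi = X**2 * Y + sp.sin(Y)                        # sample scalar field phi
partials = {v: sp.diff(Phi, v) for v in (X, Y)}   # partial derivatives of phi

lhs = sp.diff(Phi.subs({X: x, Y: y}), t)          # d(phi)/dt along the curve
rhs = (sp.diff(x, t) * partials[X].subs({X: x, Y: y})
       + sp.diff(y, t) * partials[Y].subs({X: x, Y: y}))
print(sp.simplify(lhs - rhs))                     # prints 0
\end{verbatim}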
Acceleration may be expressed in terms of Eq.~\ref{eq:Umu}: \begin{equation} \label{eq:dU} \frac{\textnormal{\,d} U^{\mu}}{\textnormal{\,d} s}=\frac{\partial U^{\mu}}{\partial x^{\alpha}}\frac{\textnormal{\,d} x^{\alpha}}{\textnormal{\,d} s}=U^{\alpha}\frac{\partial U^{\mu}}{\partial x^{\alpha}}\,. \end{equation} The motion of free particles is then described by the geodesic equation: \begin{equation} \frac{\textnormal{\,d}{U^\mu}}{\textnormal{\,d}{s}}+\Gamma^\mu_{\alpha\beta}U^\alpha U^\beta=0\,, \end{equation} \begin{equation} \mathrm{i.e.~~} U^{\alpha}\left(\frac{\partial U^{\mu}}{\partial x^{\alpha}}+\Gamma^{\mu}_{\alpha\beta}U^{\beta}\right)\equiv U^{\alpha}\nabla_{\alpha} U^{\mu}=0\,, \end{equation} where $\Gamma^{\,\mu}_{\alpha\beta}$ are the Christoffel symbols, defined by: \begin{equation} \Gamma^{\,\mu}_{\alpha\beta} = \frac{1}{2}g^{\mu\lambda}(\partial_\alpha g_{\beta\lambda}+\partial_\beta g_{\alpha\lambda}-\partial_\lambda g_{\alpha\beta}) \,. \end{equation} \subsection{Curvature of space from the velocity vector} Parallel transport of a vector is path-dependent. For redshift observations, we are interested in the parallel transport of photons across an expanding space whose rate of expansion changes with time and distance. The standard FLRW metric is generally written as a symmetrical function (Eq.~\ref{eq:metric2}), with $\mu,\nu=0\cdots 3$. However, as demonstrated in Section~\ref{sec:FLRW}, a further curvature term representing the divergence of space may be added to the $R$-axis as a consequence of its expansion. This requires an additional dimension represented by $z'=\tau\cos{i\psi}=\tau\cosh\psi$ on the imaginary plane, with divergent angle $\psi$, and $\mu,\nu=0\cdots 4$. Because $\psi$ is a hyperbolic angle, this geometry allows $\psi>360^{\circ}$, unlike local velocities, which are represented by real angles with trigonometric functions. In contrast to SR, this divergence velocity is not a physical separation velocity in a static space, but an observational velocity defined by the redshift of space itself, and this introduces a new component $\gamma_{\psi\psi}=\tau^2\sinh^2\psi$ to the metric (Eq.~\ref{eq:metric3}): \begin{equation}\label{eq:metric3} \textnormal{\,d}{s^2}=-\textnormal{\,d}{t^2}+\gamma_{ij}(x)\textnormal{\,d}{x^i}\textnormal{\,d}{x^j}+\tau^2\sinh^2\psi\textnormal{\,d}{\psi}^2\,. \end{equation} The time component is $-\textnormal{\,d}{t}^2$; the spatial component is $a(t)^2\left[\textnormal{\,d}{r^2}+ S_k^2(r)\textnormal{\,d}\Omega^2\right]$; and the expansion component is $\tau^2\sinh^2\psi\textnormal{\,d}{\psi}^2$. The corresponding metric $g_{\mu\nu}$ is: \begin{equation} \label{eq:g_mn2} \left[ \begin{array}{ccccc} -1 & 0 & 0 & 0 & 0 \\ 0 & a(t)^2 & 0 & 0 & 0\\ 0 & 0 & a(t)^2S_k(r)^2 & 0 & 0 \\ 0 & 0 & 0 & a(t)^2S_k(r)^2\sin^2\theta & 0 \\ 0 & 0 & 0 & 0 & \tau^2\sinh^2\psi \\ \end{array} \right] \end{equation} \newline \subsection{Christoffel symbols and Ricci curvature} This new curvature term introduces an extra component to Eqs.~\ref{eq:Umu} and \ref{eq:dU}, with $\textnormal{\,d}{U^\psi}/\textnormal{\,d}{s}$ the time rate of change of the curvature of expansion. The new non-zero Christoffel symbols arising from Eq.~\ref{eq:metric3} are then given by: \begin{equation}\label{eq:Christoffel} \Gamma^{t}_{\psi \psi} = \tau \dot{\tau} \sinh^2\psi; ~~~ \Gamma^{\psi}_{t \psi} = \Gamma^{\psi}_{\psi t} = \dot{\tau}/\tau; ~~~\Gamma^{\psi}_{\psi \psi} = 1/\tanh\psi\,.
\end{equation} The non-zero components of the Ricci tensor are now: \begin{eqnarray} &R_{00}= -3\frac{\ddot{a}}{a} \\ &R_{ij}= \left[\frac{\ddot{a}}{a}+2\left(\frac{\dot{a}}{a}\right)^2+2\frac{K}{a^2}+\frac{\dot{a}}{a}\frac{\dot{\tau}}{\tau}\right]g_{ij} \\ &R_{\psi\psi}= 3\left(\frac{\dot{a}}{a}\right)\tau\dot{\tau}\sinh^2\psi \end{eqnarray} and the Ricci curvature is: \begin{equation}\label{eq:Rc} R = 6\left[\frac{\ddot{a}}{a} + \left(\frac{\dot{a}}{a}\right)^2 + \frac{K}{a^2} + \frac{\dot{a}}{a}\frac{\dot{\tau}}{\tau}\right]\,. \end{equation} A consequence of these new non-zero Christoffel symbols (Eq.~\ref{eq:Christoffel}) is discussed in Section~\ref{sec:Discussion}. \subsection{The Einstein equation and mass-density tensor} The Einstein field equation describes gravity as a manifestation of the curvature of spacetime: the curvature is directly related to the stress--energy tensor through the field equation (Eq.~\ref{eq:Einstein}): \begin{equation}\label{eq:Einstein} R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=\frac{8\pi G}{c^4}T_{\mu\nu}-\frac{\Lambda}{c^2}g_{\mu\nu}\,, \end{equation} where $R_{\mu\nu}$ and $R$ are functions of $g_{\mu\nu}$ and its first two derivatives, and $T_{\mu\nu}$ and $\Lambda$ are the stress-energy tensor and the cosmological expansion parameter respectively \citep{2013RAA....13.1409G}. It may be noted that in the standard solution, the source of curvature is attributed entirely to matter, including dark matter and the mass equivalent of dark energy. Here, we are introducing an additional curvature term that corresponds directly to the expansion of the Universe. For an ideal fluid with mass per unit volume $\rho$ and pressure $P$, the stress-energy tensor in the rest frame of the fluid is \begin{equation} T^{\mu}_{~~\nu}=(\rho+P)U^\mu U_\nu+P\delta^\mu_\nu, \end{equation} \begin{equation} \textnormal{or:~~~}T_{\mu\nu}=(\rho+P)U_\mu U_\nu+Pg_{\mu\nu}\,. \end{equation} By assuming symmetry with all off-diagonal components equal to zero, setting $c=1$, and using $\textnormal{\,d}{a}/\textnormal{\,d}\tau=a/\tau$ (Fig.~\ref{fig:petal}) and $\tau^2=-t^2$, we may solve Eq.~\ref{eq:Einstein} in terms of $\dot{a}/a$ and $\ddot{a}/a$: \begin{equation} \label{eq:FLRW_3} \left(\frac{\dot{a}}{a}\right)^2 +\frac{K}{a^2}-\frac{1}{t^2}= \frac{8}{3}\pi G\rho+\frac{\Lambda}{3} \end{equation} \begin{equation} \label{eq:FLRW_4} \left(\frac{\dot{a}}{a}\right)^2+2\left(\frac{\ddot{a}}{a}\right)+\frac{K}{a^2}-\frac{2}{t^2}=-8\pi G P+\Lambda\,, \end{equation} or, eliminating $\dot{a}/a$ from Eqs.~\ref{eq:FLRW_3} and \ref{eq:FLRW_4}, \begin{equation}\label{eq:FLRW_a2a} H(t)^2=\frac{8}{3}\pi G\rho-\frac{K}{a^2}+\frac{1}{t^2}+\frac{\Lambda}{3} \end{equation} \begin{equation}\label{eq:FLRW_a2} \frac{\ddot{a}}{a}=-\frac{4\pi G}{3}(\rho+3P)+\frac{1}{2t^2}+\frac{\Lambda}{3}\,.
\vspace*{8pt} \end{equation} Defining $\rho_c\equiv3H_0^2/8\pi G$ as the critical density of the Universe, and setting Eq.~\ref{eq:FLRW_a2a} to the present epoch with $H(t)=H_0$, $a_0=1$, and $t=T_0$, \begin{eqnarray}\label{eq:rho_0} \rho_c=\rho_0-\frac{3K_0}{8\pi G}+\frac{3}{8\pi GT_0^2}+\frac{\Lambda_0}{8\pi G}\,, \end{eqnarray} \begin{eqnarray*}\label{eq:omegas} \textnormal{and defining:~~~}\Omega_m\equiv\frac{8\pi G \rho_0}{3H_0^2}\quad \Omega_K\equiv-\frac{K}{H_0^2}& \\ \Omega_C\equiv\frac{1}{H_0^2T_0^2}\quad \Omega_\Lambda\equiv\frac{\Lambda}{3H_0^2}&\,, \end{eqnarray*} Eq.~\ref{eq:rho_0} may now be rewritten as $1=\Omega_m+\Omega_K+\Omega_C+\Omega_\Lambda$. Using $a/a_0=1/(1+z)$, $\dot{a}/a=-\dot{z}/(1+z)$, $\rho=\rho_0 (a_0/a)^{3}$, and the defined density parameters, we may write \citep{1993Peebles}: \begin{equation}\label{eq:d_c_int} d_C=\int_{t_e}^{t_0}\frac{dt}{a(t)}=\int_0^z\left(\frac{a}{\dot{a}}\right)dz=\int_0^z\frac{dz}{H_0 E(z)} \end{equation} where $d_C$ is the comoving distance, $\dot{a}/a=H_0E(z)$, and \begin{equation} E(z)=[\Omega_m(1+z)^3+\Omega_K(1+z)^2+\Omega_C(1+z)^2+\Omega_\Lambda]^{1/2}\,. \end{equation} \subsection{Solutions} \label{sec:solutions} Letting $\Omega_\Lambda=\Omega_P=0$, and assuming a flat Euclidean Universe with $\Omega_K=0$, we may state $\Omega_C=1-\Omega_m$. This has an analytical solution in $z$ (Eq.~\ref{eq:soln_z}), \begin{equation}\label{eq:soln_z} \begin{split} d_C=&\frac{c}{H_0}\frac{1}{\sqrt{1-\Omega_m}} \\ &\times \log\left(\frac{(1+z)\left((1-0.5\Omega_m)+\sqrt{1-\Omega_m}\,\right)}{1+0.5\Omega_m(z-1)+\sqrt{(1-\Omega_m)\,(1+\Omega_m z)}}\right) \end{split} \end{equation} which reduces to $d_C=(c/H_0)\ln(1+z)$ in the Milne limit $\Omega_m\rightarrow0$. This new derivation for $d_C$ is compared with luminosity distance measures (Section~\ref{sec:dL}) and the recently extended angular diameter distance measures (Section~\ref{sec:dA}). In both cases the modified GR model gives a better fit to the data than the standard $\Lambda$CDM model, and is comparable to the best $w$-CDM models. \section{Luminosity Distance} \label{sec:dL} \begin{figure*} \centering \includegraphics[width=0.9\linewidth]{fig5.pdf} \caption{Hubble diagram of the combined sample of 920 SNe\,Ia with the observed peak magnitudes in rest frame $B$-band ($m^*_{\,B}$) \citep{2014A&A...568A..22B}. Overlain are the weighted RMS-minimisation fit for the modified GR model (solid line) and the best-fit $\Lambda$CDM cosmology with $H_0=70$~km~s$^{-1}$~Mpc$^{-1}$ and $\Omega_m=0.295$, $\Omega_\Lambda=0.705$ (dashed line). Redshifts are corrected to the CMB background. } \label{fig:Betoule} \vspace*{8pt} \end{figure*} Correlation between the distance moduli derived from the standard $\Lambda$CDM and modified GR models was assessed using the extensive type Ia supernovae (SNe Ia) observations of \citep{2014A&A...568A..22B}. These include SNe Ia data for 740 sources (Table F.3 of \citep{2014A&A...568A..22B}) covering the redshift range $0.01\le z\le 1.3$, with data from: the Supernova Legacy Survey (SNLS) \citep{2006A&A...447...31A}; the SDSS SNe survey \citep{2014arXiv1401.3317S}; the compilation comprising SNe from SNLS, HST and several nearby experiments \citep{2011ApJS..192....1C}; photometry of 14 very high redshift ($0.7<z<1.3$) SNe Ia from space-based observations with the HST \citep{2007ApJ...659...98R}; and low-z ($z<0.08$) SNe from the photometric data acquired by the Harvard-Smithsonian Center for Astrophysics (CfA3) \citep{2009ApJ...700..331H}.
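The model curves used in these comparisons follow from Eq.~(\ref{eq:soln_z}). As a minimal illustration (a sketch only, not the fitting code used for the analysis), the following Python fragment evaluates $d_C$ both by direct numerical integration of Eq.~(\ref{eq:d_c_int}) and from the closed form of Eq.~(\ref{eq:soln_z}); the two agree to numerical precision, and setting $\Omega_m=0$ recovers the Milne limit $d_C=(c/H_0)\ln(1+z)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458   # speed of light [km/s]
H0 = 67.6            # Hubble constant [km/s/Mpc] (value fitted in Section 6)
OM = 0.04            # baryonic matter density Omega_m
# flat case with Omega_K = Omega_Lambda = 0, so Omega_C = 1 - Omega_m

def E(z):
    return np.sqrt(OM*(1 + z)**3 + (1 - OM)*(1 + z)**2)

def dC_numeric(z):
    """Comoving distance [Mpc] from the integral in Eq. (d_c_int)."""
    return (C_KMS/H0)*quad(lambda zp: 1.0/E(zp), 0.0, z)[0]

def dC_closed(z):
    """Comoving distance [Mpc] from the closed form, Eq. (soln_z)."""
    k = np.sqrt(1.0 - OM)
    num = (1 + z)*((1 - 0.5*OM) + k)
    den = 1 + 0.5*OM*(z - 1) + np.sqrt((1 - OM)*(1 + OM*z))
    return (C_KMS/H0)*np.log(num/den)/k

for z in (0.1, 0.5, 1.0, 1.3):
    print(z, dC_numeric(z), dC_closed(z))   # the two columns agree
\end{verbatim}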
The corrected apparent brightness parameter $m^*_{\,B}$ for each SN Ia was plotted against its CMB-corrected redshift ($z_{CMB}$) to create the Hubble diagram of Fig.~\ref{fig:Betoule}. Normalisation depends on the assumed absolute magnitude of the SNe and $H_0$; varying either is equivalent to sliding the curves vertically. \cite{2014A&A...568A..22B} fitted a $\Lambda$CDM cosmology to the SNe measurements by assuming an unperturbed FLRW geometry \citep{1988ARA&A..26..561S}, using a fixed fiducial value of $H_0 = 70$~km s$^{-1}$ Mpc$^{-1}$ ($M_B=-19.12\pm0.05$) to obtain a best fit value for $\Omega_m$ of $0.295\pm0.034$, with $\Omega_\Lambda = 0.705$ (dashed line). The modified GR model curve (solid line) was fitted by weighted RMS-minimisation to the full data set assuming $\Omega_m=0.04$ as the best current assessment of the mean total observed baryonic density of the Universe; the fit quality is comparable to that of the $\Lambda$CDM model (weighted RMS $\pm0.016$ and $\pm0.017$ respectively). Their $\Lambda$CDM model is 0.15~mag fainter than the modified GR model at $z_{cmb}=1.0$, and the two curves differ by $^{+0.11}_{-0.15}~m^*_{\,B}$~mag over the range $0.01<z<1.3$. \cite{2014A&A...568A..22B} made a substantial effort to correct the distance modulus for each individual SN, using a parameter ($X_1$) for time stretching of the light-curve, and a colour-correction parameter ($C$) for the supernova colour at maximum brightness \citep{1998A&A...331..815T}. Using a corrected distance modulus $\mu_B=m^*_{\,B}-(M^*_{\,B}-\alpha X_1+\beta C)$, the resultant plots had less scatter than the raw $m^*_B$ data and became progressively fainter than the $\Lambda$CDM curve with increasing $z_{cmb}$ (Fig.~\ref{fig:Binned}). To correct for this, they considered three alternatives to the basic $\Lambda$CDM model: (a) a non-zero spatial curvature, $\Omega_k$; (b) a $w$-CDM model with an arbitrary constant equation of state for the dark energy, with the parameter $w$ equivalent to the jerk parameter of \cite{2004ApJ...607..665R}; (c) a time-dependent equation of state with a third-order term equivalent to the snap parameter, $w'$ \citep{2004ApJ...607..665R}. They concluded that the best overall fit was to a flat universe with typical $\Omega_k\simeq 0.002\pm0.003$ and a $w$-CDM model with $w=-1.018\pm0.057$ (stat+sys); with these corrections their $w$-CDM curve overlays the binned plots at the faint end (Fig.~\ref{fig:Binned}). The modified GR model was normalised to the standard model at $z = 0.01$. The overall unweighted RMS errors remain comparable for the $w$-CDM and modified GR models, being $\pm0.151$ and $\pm0.136$~$\mu_B$~mag respectively, differing by $^{+0.00}_{-0.24}~\mu_B$~mag over the range $z_{cmb}=0.01-1.3$. \section{Angular Diameter Distance} \label{sec:dA} Angular diameter distance $d_A$ is defined for an object of known proper size $D$, which subtends an angle $\phi$ at the observer, such that \begin{equation}\label{eq:16} d_A=D/\phi \,. \end{equation} If a suitable measuring rod can be found that is independent of galactic evolution, then the points of $D$ are fixed in space and lie on the surface of the space-like sphere defined by the proper radius $\Re_e$ of Figs.~\ref{fig:MilneCone} and \ref{fig:GRCone}, where we may identify $\Re_e$ with the angular size distance. This may be used with the standard expression for $d_A$ \citep{2000astro.ph..1419H,2006ApJ...647...25B}, in terms of $d_C$ from equation (\ref{eq:d_c_int}): \begin{equation}\label{eq:dA} d_A=\frac{d_C}{(1+z)}\,.
\end{equation} Experimental verification for curves of this type is notoriously difficult because of the unknown evolution of galaxies, clusters and quasars \citep{2006ApJ...647...25B,doi...2004,2015ApJS..216...27B}, but recent work using the phenomenon of baryon acoustic oscillations (BAO) has enabled measurements of $d_A$ with considerable accuracy. \subsection{Baryon acoustic oscillations} \label{sec:BAO} \begin{figure} \centering \includegraphics[width=\linewidth]{fig6.pdf} \caption{ Hubble diagram of 920 SNe\,Ia binned logarithmically in $z_{cmb}$, with corrected distance moduli $\mu_B$. Overlain are the unweighted least-squares fit for the modified GR model (solid line; RMS error $\pm0.136 \mu_B$~mag) and the best-fit $w$-CDM cosmology with $\Omega_m=0.305$, $\Omega_\Lambda=0.695$ (dashed line; RMS error $\pm0.151 \mu_B$~mag). Data from Table F.1 of \citep{2014A&A...568A..22B}. } \label{fig:Binned} \end{figure} The BAO signal is one of the key modern methods for measuring the expansion history. BAO arose because the coupling of baryons and photons in the early Universe allowed acoustic oscillations to develop, which led to anisotropies of the cosmic microwave background (CMB) radiation and a rich structure in the distribution of matter \citep{2005ApJ...631....1G,2012MNRAS.427.3435A}. The acoustic scale length ($r_S$) can be computed as the comoving distance that the sound waves could travel from the Big Bang until recombination. The imprint left by these sound waves provides a feature of known size in the late-time clustering of matter and galaxies, and by measuring this acoustic scale at a variety of redshifts, one can infer $d_A(z)$ and $H(z)$. Determination of $r_S$ comes from the matter-to-radiation ratio and the baryon-to-photon ratio, both of which are well measured by the relative heights of the acoustic peaks in the CMB anisotropy power spectrum \citep{1998ApJ...504L..57E,2013PhR...530...87W}. Both cosmological perturbation theory and numerical simulations suggest that this feature is stable to better than 1\% accuracy, making it an excellent standard ruler. The propagation distance of the acoustic waves becomes a characteristic comoving scale fixed by the recombination time of the Universe, approximately 379,000 years after the Big Bang, at a redshift of $z = 1089$ \citep{1970ApJ...162..815P,1970Ap&SS...7....3S,1978SvA....22..523D}. \cite{2007ApJ...664..660E} provides a discussion of the acoustic signal in configuration space, and BAO as a probe of dark energy is reviewed in \citep{2008PhT....61d..44E}. The acoustic scale is expressed in absolute units (Mpc) rather than $h^{-1}$~Mpc, and is imprinted on very large scales ($\sim150$~Mpc), making it relatively insensitive to small-scale astrophysical processes and BAO experiments correspondingly less prone to this type of systematic error \citep{2013PhR...530...87W}. \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{fig7.pdf} \caption{ A plot of the distance-redshift relation $D_V(z)(r_{s,fid}/r_d)$ (Table~\ref{table1}) from the spectroscopic BAO measurements and the quasar Lyman-$\alpha$ BOSS data. Overlain are the modified GR model, fitted by weighted RMS-minimisation to $H_0=67.6\pm0.25$ with $\Omega_m=0.04$, $\Omega_C=0.96$ (red solid line), and the best-fitting flat $\Lambda$CDM 1-$\sigma$ prediction from WMAP under the assumption of a flat universe with a cosmological constant ($\Omega_m=0.308$; $\Omega_\Lambda=0.692$) (dashed line) \citep{2011ApJS..192...18K,2012MNRAS.427.3435A}.
} \label{fig:BAO} \end{figure*} Figure \ref{fig:BAO} combines the BAO results from a number of sources using spectroscopic data sets, and the quasar Lyman-$\alpha$ results from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). The volume-averaged distance $D_V(z)$ corresponds to the peak position for an isotropic distribution of galaxy pairs with 2-point isotropic clustering strength $\xi(z)$, and is computed using Eq.~(\ref{eq:DV}) to convert the line-of-sight distance into an equivalent transverse length scale, where $d_A$ is the angular diameter distance and $H(z)$ is the Hubble parameter in the appropriate model. As the BAO method actually measures $D_V/r_d$, this quantity was multiplied by the fiducial scale length $r_{s,fid}$ to restore a distance \citep{2012MNRAS.427.3435A,2005ApJ...633..560E}. \begin{equation} D_V\equiv\left[d_A^{\,2}\times \frac{cz}{H(z)}(1+z)^2\right]^{1/3} \label{eq:DV} \end{equation} Included are the acoustic peak detection from the 6dF Galaxy Survey at $z = 0.106$ \citep{2011MNRAS.416.3017B}; the MGS survey at $z=0.15$ \citep{2015MNRAS.449..835R}; a combination of Sloan Digital Sky Survey (SDSS)-II DR7 LRG and main sample galaxies combined with the 2dF data (B1) at $z=0.275$ \citep{2010MNRAS.401.2148P}; the BOSS CMASS measurements at $z=0.32$ and $z=0.57$ \citep{2014MNRAS.441...24A,2016MNRAS.457.1770C}; the SDSS-II LRG (B2) measurement at $z = 0.35$ using reconstruction to sharpen the precision of the BAO measurement \citep{2012MNRAS.427.2132P,2013MNRAS.431.2834X}; and the WiggleZ measurement of three partially covariant data sets at $z = 0.44$, $0.6$, and $0.73$ \citep{2014MNRAS.441.3524K}. The published values for $D_V(z)$ are presented in Table~\ref{table:BAO}. \cite{2014JCAP...05..027F} measured the large-scale cross-correlation of quasars with the Lyman-$\alpha$ forest absorption, using over 164,000 quasars from DR11 of the SDSS-III BOSS. Their result was an absolute measure of $d_A=1590\pm60$~Mpc at $z=2.36$, which translates to $D_V=6474\pm163$~($r_{s,fid}/r_d$)~Mpc, with $r_d = 147.49$~Mpc. \begin{table} \caption{Detailed parameters from the BAO surveys} \label{table1} \begin{center} \begin{tabular}{l c c c} \hline \noalign{\smallskip} Survey&$z$&$D_V\,(r_{s,fid}/r_d)$&Ref \\ & & (Mpc)& \\ \hline \noalign{\smallskip} 6dFGS&0.106 & 456$\pm$27&[1] \\ MGS&0.15&664$\pm$25&[2] \\ BOSS (B1)&0.275&1104$\pm$30&[3] \\ BOSS LowZ&0.32 &1264$\pm$25&[4,5] \\ BOSS (B2)&0.35&1356$\pm$25&[6,7] \\ WiggleZ (W1)&0.44&1716$\pm$83&[8] \\ CMASS&0.57&2056$\pm$20&[4,5] \\ WiggleZ (W2)&0.6&2221$\pm$101&[8] \\ WiggleZ&0.73&2516$\pm$86&[8] \\ Lyman-$\alpha$ forest&2.36& 6474$\pm$163&[9] \\ \hline \end{tabular} \end{center} [1]~\cite{2011MNRAS.416.3017B}; [2]~\cite{2015MNRAS.449..835R}; [3]~\cite{2010MNRAS.401.2148P}; [4]~\cite{2014MNRAS.441...24A}; [5]~\cite{2016MNRAS.457.1770C}; [6]~\cite{2012MNRAS.427.2132P}; [7]~\cite{2013MNRAS.431.2834X}; [8]~\cite{2014MNRAS.441.3524K}; [9]~\cite{2014JCAP...05..027F} \label{table:BAO} \end{table} The data of Fig.~\ref{fig:BAO} are overlain with the best-fit curves for the two models. The solid curve is the modified GR model with $\Omega_m=0.04$, $\Omega_C=0.96$, and the dashed line is the flat $\Lambda$CDM prediction for a universe with a cosmological constant, using Planck Collaboration data ($\Omega_m=0.308\pm0.012$; $\Omega_\Lambda=0.692\pm0.012$; $\Omega_K=0$) \citep{2016A&A...594A..13P}.
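To make the conversion in Eq.~(\ref{eq:DV}) concrete, the short sketch below (illustrative only, not the fitting code; note that no fiducial $r_{s,fid}/r_d$ rescaling is applied) evaluates $D_V$ for the modified GR model with the $H_0$ and $\Omega_m$ values quoted above, giving values of the same order as the tabulated entries:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS, H0, OM = 299792.458, 67.6, 0.04    # c [km/s]; H0 [km/s/Mpc]; Omega_m
E = lambda z: (1 + z)*np.sqrt(1 + OM*z)   # E(z) with Omega_K = Omega_Lambda = 0

def DV(z):
    """Volume-averaged BAO distance of Eq. (DV), in Mpc."""
    dC = (C_KMS/H0)*quad(lambda zp: 1.0/E(zp), 0.0, z)[0]  # comoving distance
    dA = dC/(1 + z)                                        # Eq. (dA)
    Hz = H0*E(z)                                           # Hubble parameter
    return (dA**2*(C_KMS*z/Hz)*(1 + z)**2)**(1.0/3.0)

for z in (0.106, 0.35, 0.57):
    print(z, round(DV(z)))   # D_V in Mpc, without the r_d rescaling
\end{verbatim}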
As in Section~\ref{sec:dL}, changing $H_0$ slides the curves up or down the vertical axis but does not alter their shapes. The curves were fitted by weighted RMS~minimisation against the combined BAO samples of Table~\ref{table:BAO} to give $H_0=67.6\pm0.25$, with weighted RMS errors of $\pm0.034$~Mpc for the modified GR model, in good concordance with the most recent Planck results of $H_0=66.93\pm0.62$ \citep{2016A&A...596A.107P} rather than the high value of Riess ($H_0=73.24\pm1.7$) \citep{2018ApJ...861..126R}. For the $\Lambda$CDM model, $H_0=70.0\pm0.25$ with weighted RMS errors of $\pm0.085$~Mpc, which is intermediate between the two extremes. The uncertainties in the two lines come largely from uncertainties in $\Omega_m h^2$, but as with the luminosity distance measures, the standard model can be improved with non-linear parameters added to $\Omega_\Lambda$ in a $w'$-CDM model. \section{Discussion} \label{sec:Discussion} A central tenet of GR is that it is always valid to choose a coordinate system that is locally Minkowskian. This was developed further by \cite{2011MNRAS.413..585C}, who suggested that the frequency shift admits a decomposition into a Doppler (kinematic) component and a gravitational one, and by \cite{2014MNRAS.438.2456K}, who suggested that even where gravitational redshift dominates, the redshift can always be formally expressed using the Doppler formula, such that the observed cosmological redshift can be interpreted either as a gravitational redshift or as a kinematic redshift built up by the integration of infinitesimal Doppler shifts. Performing such a transport along the null geodesic of photons arriving from a receding galaxy, they considered the frequency shift to be purely kinematic, corresponding to a family of comoving observers, and hence the more natural interpretation. The modified GR model presented in this paper incorporates both kinematic and gravitational components, as $\Omega_C$ and $\Omega_m$ respectively, with parallel transport along the photon path and rotation across curved diverging time lines. When GR is considered as a geometrical manifold with an imaginary time axis, adjacent photon paths trace out a thin ribbon that everywhere subtends an angle of $45^\circ$ to the expanding time axes, this being the locally Minkowskian metric. In a static universe with no relative velocity between emitter and receiver, this is a plane Euclidean quadrilateral with parallel photon paths and time-lines, and it retains this form when wrapped round a cylinder. In the SR model this ribbon becomes curved and stretched, both by the relative velocity and by any intrinsic curvature of space, to produce the observed redshift. This curvature, however, can still be wrapped round a uniform cone (Figs.~\ref{fig:galaxies}, \ref{fig:MilneCone}). In the Milne SR model, redshift is accounted for solely by the relative velocities. In contrast, the presence of mass-energy represented by $\rho_0$ and $P$ generates an additional curvature and twist in the ribbon (Figs.~\ref{fig:GRCone} and \ref{fig:petal}), which require Einstein's equations and are generally treated using the standard FLRW metric. Assuming spatial curvature to be zero, the observed matter in the Universe is insufficient to account for the measured redshifts and requires the inclusion of an additional dark-matter component, while to conform to the more detailed SNe Ia measurements an additional dark-energy $\Lambda$ term is included, mathematically equivalent to a gravitationally repulsive negative mass \citep{1999ApJ...517..565P}.
Deeper and more detailed SNe Ia measurements \citep{2014A&A...568A..22B,2004ApJ...607..665R} have required second and third order refinements to $\Lambda$, with jerk ($w$) and snap ($w'$) parameters. While the nature of dark matter and dark energy remains elusive, several alternative theories to standard GR have emerged, such as that of \cite{2017ApJ...834..194M}, which explores scale invariance as an alternative to dark energy. However, recently published work following the observation of gravitational waves from the binary neutron star merger GW170817 \citep{2017arXiv171106843T} has determined $c_g$ with sufficient accuracy to suggest that $|c_g-c|/c\lesssim10^{-15}$. This has eliminated some alternative scenarios proposed to account for the unobserved dark energy fraction of Einstein's equation \citep{2017PhRvL.119y1303S,2017PhLB..765..382L}, and several gravitational theories that predict an anomalous $c_g$ propagation speed, such as some MOND-like gravities and TeVeS, Hern\'{a}ndez forms, Einstein-Aether, Generalised Proca, and Ho\v{r}ava gravity \citep{2017arXiv171005901M}. Non-zero Christoffel symbols are imposed by any acceleration, whether caused by a gravitational field, by the action of fields other than those associated with gravitational mass, or by curvilinear motion \citep{1972Weinberg}. The emergence of new non-zero Christoffel symbols (Eq.~\ref{eq:Christoffel}) supports the presence of curvilinear motion imposed on the red-shifted photons by the expansion of space, distinct from the curvature of space due to the presence of mass. The two curvature terms, $\Omega_K$ and $\Omega_C$, are derived from quite different principles, the former being an intrinsic curvature within space itself, while the latter emerges from the Hubble flow. Nevertheless, it will be noted that there is a mathematical correspondence between the two that may imply an identity with $K=-\,(cT_0)^{-2}$. If so, this identity may arise at a fundamental level, implying that expanding space is not Euclidean with $\Omega_K=0$ but hyperbolically curved by the expansion itself, modified by the presence of its contained mass-energy. The extension to GR proposed in this paper can accommodate a scenario in which the observations of SNe Ia and BAO do not require additional parameters from DM or accelerating dark energy. The introduction of an additional curvature term into Einstein's equation follows directly from the geometry of the Hubble expansion, and is a logical extension to the geometrical model of the photon path as a logarithmic spiral in expanding space \citep{2016IJMPC..2750055M}. The use of $\Omega_C$ as a Hubble curvature allows a smooth transition to the Einstein equation for full GR as density increases from zero, without requiring a discontinuity in the curvature parameter, $\Omega_K$. The introduction of $\Omega_C$ generates a magnitude-redshift curve that matches current SNe Ia observations well out to $z=1.3$, assuming only that $\rho_m$ represents observable baryonic mass. BAO measurements for angular diameter distances also give an excellent fit from low-$z$ out to $z=2.36$, without requiring additional arbitrary parameters. Weighted RMS-minimisation fitting to the combined BAO samples of Table~\ref{table:BAO} gave $H_0=67.6\pm0.25$, in good agreement with the recent Planck results \citep{2016A&A...596A.107P}. \section*{Acknowledgments} I am grateful to Professor Serguei Komissarov of Leeds University for his encouragement and unique insight into the geometry of GR. \bibliographystyle{2from}
\section{Introduction} The AdS/CFT correspondence has played a pivotal role in modern theoretical physics for more than a decade because of its conceptual importance and its broad applications \cite{Maldacena1999-AdSCFT}. It conjectures that for certain gravitational theories on a $D$-dimensional spacetime of the form $\text{AdS}_d \times \text{M}_{D-d}$, where M is a compact space of $D-d$ dimensions ($D=10$ for string theory and $D=11$ for M-theory), there exists a mathematically equivalent $(d-1)$-dimensional conformal field theory (CFT). As a consequence, solutions to the vacuum Einstein equations in $d$ dimensions have particular importance, since they furnish both ground states for string theory and background manifolds for the dual CFT. An interesting case in point is the class of generalizations of the Eguchi-Hanson (EH) metric. EH metrics are solutions of the four-dimensional vacuum Euclidean Einstein equations, and can be regarded as a special case of the Atiyah-Hitchin metric. This latter metric can be embedded in M-theory to furnish new M2- and M5-brane solutions \cite{Ghezelbash2004-AtiyahHitchin}. Odd-dimensional generalizations of the original 4-dimensional EH metric were discovered a few years ago in Einstein gravity and are referred to as Eguchi-Hanson solitons. Originally obtained in 5 dimensions upon taking a particular limit of Taub-NUT space \cite{Clarkson2006-5DSoliton}, they can also be obtained from a consideration of static regular bubbles with AdS asymptotic behaviour \cite{Copsey2007-Bubbles2}, and they generalize to any odd dimensionality \cite{Clarkson2006-OddSoliton}. These latter cases can be derived from a set of inhomogeneous Einstein metrics on sphere bundles fibred over Einstein-K\"{a}hler spaces \cite{Page1987-Inhomogeneous,Lu2004-Inhomogeneous}. EH solitons are asymptotic to AdS$_d$/$\mathbb{Z}_p$ ($p \ge 3$) and have Lorentzian signature. As such they furnish interesting non-simply connected background manifolds for the CFT boundary theory. Perturbatively, the EH soliton has the lowest energy in its topological class, suggesting that it is either a new ground state in string theory or that some as yet unknown ground state exists. Quantum gravitational corrections generally induce higher-order curvature terms in the low-energy effective action \cite{Ogawa2011-EGBAdS}. When Einsteinian gravity is considered in higher dimensions $n \ge 4$, the theory remains applicable, but it is not sufficient to realize the free-field dynamics involved \cite{Dadhich2005-GB}. Consequently, much attention to this end has concentrated on Lovelock gravity \cite{Lovelock1971-LG1,Lovelock1970-LG2,Lovelock1975-LG3}, since its field equations are always of second order, even though the associated action is non-linear in the curvature tensor. The most commonly studied special case of Lovelock gravity is Einstein-Gauss-Bonnet (EGB) gravity, which contains only curvature-squared corrections; the implications of incorporating this term for AdS/CFT have recently been investigated \cite{Ogawa2011-EGBAdS}. In this paper we investigate the existence of EH soliton solutions in EGB gravity in five dimensions. We find that generalizations of the 5-dimensional EH soliton \cite{Clarkson2006-5DSoliton} do indeed exist. We construct such solutions via semi-analytic and numerical methods. These solutions smoothly approach the Einsteinian EH soliton solutions in the limit that the Gauss-Bonnet parameter $\alpha\to 0$. The outline of our paper is as follows.
In Section \ref{sec:EH_soliton}, we introduce the reader to the EH soliton. In Section \ref{sec:EGB_gravity}, the action of EGB gravity is introduced and the EGB field equations are presented in index notation. Using a notation similar to that of \cite{Clarkson2006-5DSoliton,Clarkson2006-OddSoliton} for the EH soliton, we transform the metric to a more convenient form in Section \ref{sec:EGB_gravity}, and also present the explicit EGB field equations scaled in dimensionless units. We then compute large-$r$ and near-$r_0$ power-series solutions to the equations in Section \ref{sec:semianalytical_results}, and discuss the metric regularity conditions required for a soliton solution. The full numerical solutions are given in Section \ref{sec:numerical_results}. Finally, Section \ref{sec:conclusions_and_discussions} contains a discussion of our results. Throughout this paper, we will use both Greek and Latin characters for the upper and lower indices of tensors, reserving Latin characters for `dummy' (summed) indices. These indices run over $n$ values, where $n=5$ is the dimensionality of the spacetime. \section{The Eguchi-Hanson Soliton} \label{sec:EH_soliton} In this section we briefly review the structure and properties of the EH soliton. Its metric has the form \cite{Clarkson2006-5DSoliton} \begin{equation} ds^2 = -g(r) dt^2 + \frac{r^2f(r)}{4} [ d\psi + \cos \theta d\phi ]^2 + \frac{dr^2}{f(r)g(r)} + \frac{r^2}{4} d\Omega_2^2 \label{eqn:EH5D_metric} \end{equation} where $d\Omega_2^2 = [d\theta^2 + \sin^2\theta d\phi^2]$ is the metric of the unit 2-sphere and \begin{equation} \begin{array}{ccc} g(r) = 1+\frac{r^2}{\ell^2}, & f(r) = 1-\frac{a^4}{r^4}, & \Lambda = -\frac{6}{\ell^2} \end{array} \label{eqn:EH5D_metric_fg} \end{equation} The EH soliton is an exact solution to the 5-dimensional Einstein equations. In the $\ell \rightarrow \infty$ limit, the 5D EH soliton \eqref{eqn:EH5D_metric} recovers the EH metric \begin{equation} ds^2 = \frac{r^2f(r)}{4} [ d\psi + \cos \theta d\phi ]^2 + \frac{dr^2}{f(r)} + \frac{r^2}{4} d\Omega_2^2 \label{eqn:EH4D_metric} \end{equation} on a constant $t$ hypersurface. In general the metric \eqref{eqn:EH5D_metric} will not be regular unless certain conditions are imposed on its parameters. As $r\rightarrow a $, regularity in the $\left( r,\psi \right) $ section implies that $\psi $ has period $2\pi/\sqrt{g(a)}$. Removing string singularities at $\left( \theta =0,\pi \right) $ implies that an integer multiple of this quantity must equal $4\pi $: setting $p\times 2\pi /\sqrt{g(a)}=4\pi $ gives $g(a)=1+a^{2}/\ell ^{2}=p^{2}/4$, and consequently \begin{equation} a^{2}=\ell ^{2}\left( \frac{p^{2}}{4}-1\right) \label{EHmatch} \end{equation}% where $p\geq 3$ is an integer, implying that $a>\ell $. The metric \eqref{eqn:EH5D_metric} can be rewritten as \begin{equation} ds^{2}=-\left( 1+\frac{r^{2}}{\ell ^{2}}\right) dt^{2}+\frac{r^{2}}{p^{2}}% f(r)\left[ d\psi +\frac{p}{2}\cos (\theta )d\phi \right] ^{2}+\frac{dr^{2}}{% \left( 1+\frac{r^{2}}{\ell ^{2}}\right) f(r)}+\frac{r^{2}}{4}d\Omega _{2}^{2}~~~ \label{mtrcreg} \end{equation}% where now \begin{equation*} f(r)=1-\frac{\ell ^{4}}{r^{4}}\left( \frac{p^{2}}{4}-1\right) ^{2} \end{equation*} and we have rescaled $\psi$ so as to have period $2\pi$. The regularity condition (\ref{EHmatch}) implies that the metric \eqref{mtrcreg} is asymptotic to AdS$_5$/$\mathbb{Z}_p$ where $p\geq 3$.
The AdS/CFT correspondence conjecture states that string theory on spacetimes that asymptotically approach AdS$_{5}$\ $\times $ $S^{5}$ is equivalent to a conformal field theory ($\mathcal{N}=4$\ super Yang-Mills $U(N)$ gauge theory) on its boundary $\left( S^{3}\times \mathbb{R}\right) \times S^{5}$. Here the EH soliton is asymptotic to a quotient of AdS, implying the existence of extra light states in the gauge theory. The density of low energy states is not affected even though the volume of $S^{3}$ has been reduced to $S^{3}/\mathbb{Z}_p$. As noted in the introduction, the EH soliton perturbatively has the lowest energy in its topological class. In the context of string theory, the implications are either \cite{Clarkson2006-OddSoliton}: \begin{enumerate} \item If the soliton is non-perturbatively stable, then it is a new ground state of string theory; \item Otherwise, if the soliton is non-perturbatively unstable, then there exists another, as yet unknown, ground state of string theory. \end{enumerate} Moreover, perturbations of the EH soliton that remain asymptotic to AdS$_5$/$\mathbb{Z}_p$ ($p \ge 3$) have higher energies than the soliton itself, suggesting that it sits at a local minimum of the energy. \section{Einstein-Gauss-Bonnet Gravity} \label{sec:EGB_gravity} The unique combination of curvature-squared terms contributed by Gauss-Bonnet gravity to the Einstein-Hilbert action is \begin{equation} \mathcal{L}_{\text{GB}} = R_{abcd}R^{abcd} - 4R_{ab}R^{ab} + R^2 \label{eqn:GB_action_terms} \end{equation} so that the total action $\mathcal{I}_{\text{EGB}}$ becomes \begin{equation} \mathcal{I}_{\text{EGB}} = \frac{1}{16 \pi \text{G}} \int d^n x \sqrt{-g} \left[ -2 \Lambda + R + \alpha \mathcal{L}_{\text{GB}} \right] \label{eqn:EGB_action} \end{equation} where $n$ is the dimensionality of the spacetime and $\alpha$ is the Gauss-Bonnet coefficient controlling the relative magnitude of the curvature-squared contributions. Varying the action \eqref{eqn:EGB_action} with respect to the metric (and neglecting boundary terms), we obtain the EGB field equations \begin{equation} \begin{array}{lcll} \Bigl[ R_{\mu \nu} - \frac{1}{2}Rg_{\mu \nu} + \Lambda g_{\mu \nu} \Bigr] & - & \alpha \Bigl[ \ \frac{1}{2}g_{\mu \nu}(R_{abcd}R^{abcd} - 4R_{ab}R^{ab} + R^2) - 2RR_{\mu \nu} & \\ & & \ \ \ \ \ + 4R_{\mu a}R^{a}_{\nu} + 4R^{a b}R_{\mu a \nu b} - 2R^{abc}_{\mu}R_{\nu a b c} \ \ \ \ \ \ \ \ \ \Bigr] = 0 & \end{array} \label{eqn:EGB_fieldequations} \end{equation} Eqn \eqref{eqn:EGB_fieldequations} shows explicitly the contribution of the Gauss-Bonnet terms to the pure Einsteinian field equations. \noindent In order to search for EH-type soliton solutions to the equations \eqref{eqn:EGB_fieldequations}, we employ the ansatz \begin{equation} ds^2 = - \frac{r^2}{\ell^2} g(r) dt^2 + \frac{r^2f(r)}{4} [ d\psi + \cos \theta d\phi ]^2 + \frac{\ell^2}{r^2} \frac{dr^2}{f(r)h(r)} + \frac{r^2}{4} d\Omega_2^2 \label{eqn:EH5D_our_metric} \end{equation} and require that $\{ f(r),g(r),h(r) \}$ each approach $1$ asymptotically. In the limit $\alpha \to 0$, \begin{equation} \begin{array}{cccc} g(r) = 1+\frac{\ell^2}{r^2}, & h(r) = 1+\frac{\ell^2}{r^2}, & f(r) = 1-\frac{a^4}{r^4}, & \Lambda = -\frac{6}{\ell^2} \end{array} \label{eqn:EH5D_our_metric_fgh} \end{equation} recovering the metric \eqref{eqn:EH5D_metric}. The EGB field equations \eqref{eqn:EGB_fieldequations} can be written explicitly as a set of ordinary differential equations using \eqref{eqn:EH5D_our_metric}.
We obtain 7 non-trivial field equations $\{ \mathcal{E}_{tt},\mathcal{E}_{rr},\mathcal{E}_{\psi\psi},\mathcal{E}_{\theta\theta},\mathcal{E}_{\phi\phi},\mathcal{E}_{\psi\phi},\mathcal{E}_{\phi\psi} \}$, which in turn can be reduced to 3 independent field equations $\{ \mathcal{E}_{rr},\mathcal{E}_{\psi\psi},\mathcal{E}_{\theta\theta} \}$. Rescaling the parameters and variables in the field equations via \begin{equation} \begin{array}{ c c c c c } x = \frac{r}{\ell}, & f_{xx}(x) = \ell^2f_{rr}(r), & f_x(x) = \ell f_r(r), & \Lambda_{*} = \ell^2 \Lambda, & \alpha_{*} = \frac{\alpha}{\ell^2} \end{array} \label{eqn:dimensionless_parameters} \end{equation} where the subscript denotes the derivative with respect to the relevant variable, we find \begin{equation} \begin{array}{l} \Big[ \ 12f^2h + 12x^4f_xh_xfh - 64fh - 32h_xf + 8xhf^2 - 128f_xh + 24x^2f_x^2h - 16x^2f_xh_x \\ \ \ - 32x^2f_{xx}h + 8x^4f_x^2h^2 + 48x^2f^2h^2 + 24x^2f_{xx}fh + 12x^2f_xh_xf + 80xf_xfh \\ \ \ + 8x^4f_{xx}fh^2 + 64x^3f_xfh^2 + 24x^3h_xf^2h \Big] \alpha_* \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \\ \Big[ \ 16 - 24x^2fh - 16x^3f_xh - x^4f_xh_x - 6x^3h_xf - 2x^4f_{xx}h - 4f - 4\Lambda_*x^2 \Big] = 0 \end{array} \label{eqn:EGB_fieldequation_tt} \end{equation} \begin{equation} \begin{array}{l} \Big[ \ - 48f^2g^2h + 32xfh_xg^2 - 24xf^2h_xg^2 - 16x^2fg_x^2h + 32xf_xg^2h - 48x^2f^2g^2h^2 \\ \ \ + 64fg^2h + 4x^4f^2g_x^2h^2 - 24xff_xg^2h - 24x^3ff_xg^2h^2 + 12x^2f^2g_x^2h + 32x^2fg_{xx}gh \\ \ \ - 72xf^2g_xgh + 96xfg_xgh - 8x^4f^2g_{xx}gh^2 + 16x^2f_xg_xgh - 40x^3f^2g_xgh^2 \\ \ \ - 24x^3f^2h_xg^2h - 12x^2f^2g_xh_xg + 16x^2fg_xh_xg - 12x^2ff_xg_xgh - 12x^4ff_xg_xgh^2 \\ \ \ - 12x^4f^2g_xh_xgh - 24x^2f^2g_{xx}gh \Big] \alpha_* \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \\ \Big[ \ 4x^2\Lambda_*g^2 + 12fg^2 - x^4fg_x^2h + 2x^4fg_{xx}gh + 24x^2fg^2h + 6x^3f_xg^2h \\ \ \ + 6x^3fh_xg^2 + x^4f_xg_xgh + x^4fg_xh_xg + 10x^3fg_xgh - 16g^2 \Big] = 0 \end{array} \label{eqn:EGB_fieldequation_psispi} \end{equation} \begin{equation} \begin{array}{l} \Big[ \ 16f^2g^2h - 4x^2f^2g_x^2h - 48x^2f^2g^2h^2 + 8xf^2h_xg^2 + 4x^4f^2g_x^2h^2 + 8xff_xg^2h \\ \ \ + 8x^2f^2g_{xx}gh + 24xf^2g_xgh - 8x^4f^2g_{xx}gh^2 - 64x^3ff_xg^2h^2 - 24x^3f^2h_xg^2h \\ \ \ + 4x^2f^2g_xh_xg + 4x^2ff_xg_xgh - 40x^4ff_xg_xgh^2 - 12x^4f^2g_xh_xgh - 8x^4f_{xx}fg^2h^2 \\ \ \ - 4x^5f_x^2g_xgh^2 + 2x^5f_xg_x^2fh^2 - 4x^5g_{xx}f_xfgh^2 - 4x^5f_{xx}g_xfgh^2 \\ \ \ - 12x^4f_xh_xfg^2h - 8x^4f_x^2g^2h^2 - 40x^3f^2g_xgh^2 - 6x^5f_xg_xh_xfgh \Big] \alpha_* \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \\ \Big[ \ 4x^2\Lambda_*g^2 - 4fg^2 + 6x^3fh_xg^2 + 24x^2fg^2h + 16x^3f_xg^2h - x^4fg_x^2h + 10x^3fg_xgh \\ \ \ + x^4f_xh_xg^2 + 2x^4f_xh_xgh + x^4fg_xh_xg + 2x^4fg_{xx}gh + 2x^4f_{xx}g^2h \Big] = 0 \end{array} \label{eqn:EGB_fieldequation_thetatheta} \end{equation} \section{Series Solutions} \label{sec:semianalytical_results} We were unable to obtain an analytic solution to the EGB field equations for the ansatz \eqref{eqn:EH5D_our_metric}. We therefore turn to power-series expansions and numerical techniques to find soliton solutions in EGB gravity. \subsection{Power-series solutions} There are two particularly interesting power-series expansions for the soliton: near infinity (the large-$r$ expansion) and near $r=r_0$ (the near-$r_0$ expansion). While not sufficient for demonstrating a full soliton solution, these series are useful guides as to the behaviour of the solution in interesting physical limits.
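An elementary consistency check on the lengthy field equations above is available in the Einsteinian limit: the following short symbolic sketch (illustrative only, not part of the derivation) confirms that the solution \eqref{eqn:EH5D_our_metric_fgh} annihilates the $\alpha_*$-independent bracket of Eq.~\eqref{eqn:EGB_fieldequation_tt} identically, for any value of the soliton parameter $a$; the $\psi\psi$ and $\theta\theta$ brackets can be checked in the same way.
\begin{verbatim}
import sympy as sp

x, A = sp.symbols('x A', positive=True)   # x = r/l, A = a^4/l^4
Lam = -6                                  # Lambda_* = l^2 Lambda

f = 1 - A/x**4                            # Einsteinian f
h = 1 + 1/x**2                            # Einsteinian h (= g)
fx, fxx, hx = sp.diff(f, x), sp.diff(f, x, 2), sp.diff(h, x)

# alpha_*-independent bracket of the first field equation
E0 = (16 - 24*x**2*f*h - 16*x**3*fx*h - x**4*fx*hx
      - 6*x**3*hx*f - 2*x**4*fxx*h - 4*f - 4*Lam*x**2)
print(sp.simplify(E0))                    # prints 0 for any A
\end{verbatim}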
\noindent For large-$r$ we find that \begin{equation} \begin{array}{llcl} {} & f(r) &=& 1 + \frac{a_4}{r^4} - \frac{a_4 (4\alpha a_4 - b_4 \ell^2)}{2(4\alpha-\ell^2)r^8} - \frac{a_4 \ell^2(12\alpha a_4 - 48\alpha b_4 + 5 b_4\ell^2)}{15(4\alpha-\ell^2)r^{10}} + {\cal O}\left(\frac{1}{r^{12}}\right) \\ {} & g(r) &=& 1 + \frac{\ell^2}{r^2} + \frac{b_4}{r^4} - \frac{b_4 (4\alpha b_4 - \ell^2 a_4)}{2(4 \alpha - \ell^2)r^8} + \frac{a_4 \ell^2 (252\alpha a_4 - 5 b_4 \ell^2 + 48 \alpha b_4)}{45(4\alpha - \ell^2)r^{10}} + {\cal O}\left(\frac{1}{r^{12}}\right) \\ {} & h(r) &=& 1 + \frac{\ell^2}{r^2} + \frac{b_4}{r^4} - \frac{b_4 (-7\ell^2 a_4 + 12 \alpha b_4)}{6(4 \alpha - \ell^2)r^8} + \frac{a_4 \ell^2 (756\alpha a_4 - 5 b_4 \ell^2 + 336 \alpha b_4)}{45(4\alpha - \ell^2)r^{10}} + {\cal O}\left(\frac{1}{r^{12}}\right) \\ \end{array} \label{eqn:EGB_EH_fgh_pow_large_r} \end{equation} where $4\alpha-\ell^2 \neq 0$ ($4\alpha_* - 1 \neq 0$). There are two free parameters $(a_4,b_4)$ in the solution, governing the falloff rates of the metric functions, which suggests that the mass of the soliton, $\mathbb{M}$, is governed by these two quantities. Using the conformal formalism based on the electric Weyl tensor \cite{Ashtekar2000-ConservedQuantities,Das2000-MoreConservedQuantities} to compute the conserved quantities, we find that the conformal mass in Einstein gravity is given by \begin{equation} \mathbb{M} = -\frac{\pi(3b_4-a_4)}{8\ell^2\text{G}p} \label{eqn:mass_soliton_einstein} \end{equation} Setting $a_4=-r_0^4$ and $b_4=0$ in \eqref{eqn:mass_soliton_einstein}, we recover the soliton mass from the counterterm subtraction method \cite{Clarkson2006-5DSoliton} and from the Hamiltonian formalism \cite{Copsey2007-Bubbles2}. To obtain the near-$r_0$ power-series solution, we impose the conditions $f(r_0)=0$, $g(r_0) \neq 0$, and $h(r_0) \neq 0$ at the bubble edge $r_0$. We then find \begin{equation} \begin{array}{lcrl} f(r) &=& {} &A_1 (r-r_0) + A_2 (r-r_0)^2 + A_3 (r-r_0)^3 + \mathcal{O}\left((r-r_0)^4\right) \\ \\ g(r) &=& B_0 \ \ + &B_1 (r-r_0) + B_2 (r-r_0)^2 + B_3 (r-r_0)^3 + \mathcal{O}\left((r-r_0)^4\right) \\ \\ h(r) &=& C_0 \ \ + &C_1 (r-r_0) + C_2 (r-r_0)^2 + C_3 (r-r_0)^3 + \mathcal{O}\left((r-r_0)^4\right) \end{array} \label{eqn:EGB_EH_fgh_pow_near_r0_ansatz} \end{equation} where $\Lambda = -\frac{6}{\ell^2} + \frac{12\alpha}{\ell^4}$, with $B_0 \neq 0$ and $C_0 \neq 0$. The coefficients in the preceding expression are extremely cumbersome and we relegate them to Appendix \ref{app:large_r_pow_series_coeff}. Before moving to a numerical solution of the equations, we revisit the issue of metric regularity for the EGB case.
The Kretschmann scalar $\mathcal{K} = R^{abcd}R_{abcd}$ is given by \begin{eqnarray} \mathcal{K} &=& \frac{1}{4 \ell^4 g r^4} \big( 4g^4r^8f_{rr}^2h^2-4r^8gh^2f^2g_{rr}g_r^2+ 32g^4r^7f_rh^2f_{rr}+ 4r^8g^2h^2fg_{rr}g_rf_r \nonumber \\ && + 4r^8g^2hf^2g_{rr}g_rh_r+ 16g^4r^6h^2ff_{rr}+ 16r^6g^4h_r^2f^2+r^8h^2f^2g_r^4+88r^6g^4f_r^2h^2 \nonumber \\ && + 160r^4g^4h^2f^2+16r^6g^3h^2f^2g_{rr}+ 8r^7g^3h^2fg_{rr}f_r+ 4r^8g^2h^2f^2g_{rr}^2 \nonumber \\ && + 24r^7g^2h^2f^2g_rg_{rr}+ 8r^7g^3f_r^2h^2g_r+4r^7g^3h_r^2f^2g_r+ 8r^7g^3hf^2g_{rr}h_r \nonumber \\ && + 8g^4r^7f_{rr}hh_rf+ r^8g^2g_r^2h_r^2f^2+ 16g^4r^7f_r^2hh_r+ 48f_r^2h\ell^2g^4r^4- 12r^7gh^2f^2g_r^3 \nonumber \\ && + 8r^7g^3h_rfg_rf_rh+ 2r^8g^2g_r^2h_rff_rh+ 4g^4r^8f_{rr}hh_rf_r+ 4g^4r^7h_r^2ff_r \nonumber \\ && - 128g^4\ell^2r^2fh+ 32g^4r^2f^2h\ell^2+ 2r^8g^2g_r^2f_r^2h^2+ 40r^6g^2h^2f^2g_r^2+ 64r^5g^4hf^2h_r \nonumber \\ && + 160r^5g^4h^2ff_r+ 96r^5g^3h^2f^2g_r+ 176g^4\ell^4f^2- 384g^4f\ell^4+ g^4r^8h_r^2f_r^2 \nonumber \\ &&+ 256\ell^4g^4 - 2r^8hf^2g_r^3gh_r+ 8r^7g^2hf^2g_r^2h_r+ 12r^7g^2h^2fg_r^2f_r \nonumber \\ &&+ 64r^6g^4h_rff_rh + 48r^6g^3h^2fg_rf_r- 32g^4r^3f_rhf\ell^2-2r^8h^2fg_r^3gf_r \nonumber \\ && + 32r^6g^3hf^2g_rh_r \big) \label{eqn:kretschmann_scalar_general} \end{eqnarray} and will be finite for all $r \geq r_0$ provided $g(r) \neq 0$ for all $r > r_0$. The only remaining possible singularities will be conical singularities at $r = r_0$ and string singularities. The former will not be present provided $\psi$ has a period of $2\pi/\mathcal{P}$ where \begin{equation} \mathcal{P}^2 \equiv \frac{r_0^4 A_1^2 C_0}{16 \ell^2} \label{eqn:special_P} \end{equation} String singularities will be eliminated at the north and south poles if $\psi$ has a period of $4\pi/p$ where $p \in \mathbb{Z} \setminus \{0\}$. Together these conditions imply \begin{equation} A_1 = \sqrt{\frac{4 \ell^2 p^2}{r_0^4 C_0}} \label{eqn:metric_reg_condition} \end{equation} \noindent This is a generalization of the regularity condition for the Einsteinian case \cite{Clarkson2006-OddSoliton}. For $\alpha=0$ it is straightforward to show that $A_1 = \frac{4}{r_0}$ and $C_0 = 1+\frac{\ell^2}{r_0^2}$, giving \begin{equation} r_0^2 = \ell^2\left(\frac{p^2}{4}-1\right) \label{eqn:metric_reg_condition_einstein} \end{equation} which is the regularity condition (\ref{EHmatch}), with $a=r_0$. \section{Numerical Results} \label{sec:numerical_results} In this section we present the numerical solutions of the EGB field equations. The power-series solutions from Section \ref{sec:semianalytical_results} provide a useful guide for approximating the initial conditions (hereafter ICs) for the large-$r$ and near-$r_0$ integrations of the field equations. Both positive and negative values of the cosmological constant are possible, depending on the choice of Gauss-Bonnet coefficient $\alpha_*$. We have compared the full numerical solutions with their respective power-series expansions as a cross-check on our numerical work. As shown more fully in Appendix \ref{app:powerseries_expansion}, the large-$r$ numerical solution agrees very well with the corresponding power series solution when expanded to order $1/r^{10}$, even for reasonably small $r$. The near-$r_0$ power series agrees well with the numerical solution when expanded up to order $(r-r_0)^3$, but can quickly deviate from the numerical solution when $r$ is appreciably larger than $r_0$. We use the power series expansions given in Section \ref{sec:semianalytical_results} for the initial conditions.
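As a quick numerical cross-check of the Einsteinian limit quoted in Eq.~\eqref{eqn:metric_reg_condition_einstein}, the following short sketch (illustrative only) confirms that $A_1=4/r_0$ and $C_0=1+\ell^2/r_0^2$ satisfy the EGB regularity condition \eqref{eqn:metric_reg_condition} exactly when $r_0^2=\ell^2(p^2/4-1)$:
\begin{verbatim}
import numpy as np

# alpha -> 0 check of the regularity condition (eqn:metric_reg_condition)
ell = 1.0
for p in (3, 4, 5):
    r0 = ell*np.sqrt(p**2/4 - 1)   # Eq. (eqn:metric_reg_condition_einstein)
    A1 = 4.0/r0                    # f'(r0) in the Einsteinian case
    C0 = 1 + ell**2/r0**2          # h(r0) in the Einsteinian case
    ok = np.isclose(A1, np.sqrt(4*ell**2*p**2/(r0**4*C0)))
    print(p, ok)                   # True for each p
\end{verbatim}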
With consistent results from the power-series approximations, we now present the full numerical solutions below. \subsection{Initial Conditions and Numerical Procedure} \label{subsec:initial_conditions} \noindent We find that there is numerical instability if the values of $|\alpha_*|$ are too large. From the series solutions \eqref{eqn:EGB_EH_fgh_pow_large_r} we see that $\alpha_* \neq 1/4$, so we shall henceforth consider only small values of $|\alpha_*|$, with $\alpha_* < 1/4$. The parameter $p$, a positive integer appearing in the metric regularity condition \eqref{eqn:metric_reg_condition}, may be chosen freely. In Einstein gravity, we have the bound $p \ge 3$ from the regularity condition \eqref{eqn:metric_reg_condition_einstein}. The analogous lower bound on $p$ in EGB gravity cannot be read off directly from condition \eqref{eqn:metric_reg_condition}, since the coefficients $A_1$ and $C_0$ depend on $r_0$ and $\ell$, which we do not know a priori. For sufficiently small $\alpha_*$ we expect that $p=5$ will be sufficient, and we shall make this choice throughout the paper. It is a straightforward (albeit tedious) exercise to numerically obtain solutions for larger $p$. All other parameters are constrained by the shooting method, a trial-and-error method of tuning the values of chosen IC parameters that are not known a priori, so that when the equations are numerically integrated away from the starting point, the solution satisfies the desired asymptotic properties. The IC parameters $\{a^*_4,b^*_4\}$ and $\{B_0,C_0\}$ will be used to tune the desired properties of the solution for the large-$r$ and near-$r_0$ integrations, respectively. For numerical integration beginning near $r_0$, we use the shooting method to require $\{f,g,h\}$ to follow the required large-$r$ asymptotic conditions. Conversely, for large-$r$ numerical integration, the starting point is at $r=\infty$, and the shooting method is used to set the value of the bubble edge at $r=r_0$. For convenience, we list here the various parameters we employ for both large-$r$ and near-$r_0$ numerical integration of the field equations. \begin{itemize}[] \item {\bf \ $\alpha_*$ } \\ This is the dimensionless Gauss-Bonnet coefficient, chosen to be small to avoid issues of numerical instability as noted above. \item {\bf \ $p$ } \\ This is a non-zero integer that emerges from the metric regularity condition \eqref{eqn:metric_reg_condition}, whose choice selects one of a countably infinite set of EH soliton solutions in EGB gravity. The 5D Einsteinian EH soliton required that $p \ge 3$ \cite{Clarkson2006-5DSoliton}. For practical purposes, we will use $p=5$ for our numerical results. \item \ $B_0$ \\ This is the value of $g(r=r_0)$; it must be positive and converge to $1 + \ell^2 / r_0^2$ in the $\alpha \rightarrow 0$ limit. The near-$r_0$ shooting method will constrain this parameter. \item \ $C_0$ \\ Similar to $B_0$, this is the value of $h(r=r_0)$; it must be positive and converge to $1 + \ell^2 / r_0^2$ in the $\alpha \rightarrow 0$ limit. The near-$r_0$ shooting method will constrain this parameter. \item \ $x_0$ \\ This is the dimensionless radius of the EH soliton bubble edge $r_0$ in units of $\ell$, defined by $x_0 \equiv r_0/\ell$. The spacetime exists only for $x \ge x_0$ (equivalently $r \ge r_0$). The large-$r$ shooting method will constrain this parameter. \item {\bf \ $a_4$ } \\ This parameter contributes to the mass of the soliton.
In the Einsteinian limit $\alpha \to 0$, $a_4 = - r_0^4$, where $r_0$ is the soliton radius. This interpretation will not necessarily hold for nonzero $\alpha$ and $b_4$. The large-$r$ shooting method will constrain this parameter, whose rescaled version is $a^*_4 = a_4 / \ell^4$. \item {\bf \ $b_4$ } \\ The $b_4$ parameter also contributes to the mass of the soliton; in Einstein gravity the relevant expression is that of Eqn \eqref{eqn:mass_soliton_einstein}. We will consider both $b_4 = 0$ and $b_4 \neq 0$ cases. The large-$r$ shooting method will constrain this parameter, whose rescaled version is $b^*_4 = b_4 / \ell^4$. \item \ $s_0$ \\ This is the reciprocal dimensionless radius of the EH soliton bubble edge, defined by $s_0 = \ell/r_0$. To numerically analyze the $r \rightarrow \infty$ limit, we make the variable change $r \mapsto s = \ell/r$ and analyze the $s \rightarrow 0$ behavior. Note that the spacetime is valid for $s \le s_0$ only. \end{itemize} \noindent The outline of the numerical procedure is as follows. For a given small $\alpha_*$, we choose $p=5$. We then make an initial choice of the IC parameters $\{a^*_4,b^*_4\}$ (large-$r$) or $\{B_0,C_0\}$ (near-$r_0$). The other parameters in the respective power-series solutions are then fully determined from this guess. We then apply the Runge-Kutta-Fehlberg method to integrate the field equations $\{\mathcal{E}_{rr},\mathcal{E}_{\psi\psi},\mathcal{E}_{\theta\theta}\}$, with the starting point at $r=r_0$ (near-$r_0$) or at $r=\infty$ (large-$r$). If the required asymptotic behaviour of the soliton (when starting at $r=r_0$) or bubble-edge behaviour (when starting at $r=\infty$) is satisfied, we are done and have obtained the numerical solution. If it is not satisfied within a given tolerance (we use $\epsilon = 10^{-4}$), then we repeat the process with improved guesses for the ICs and recalculate the numerical solution; a minimal sketch of this loop is given below. \subsection{Large-$r$ behavior} We employ the parameter $s \equiv \ell/r < 1$ to investigate the large-$r$ behavior of our solutions. Our choice of $p=5$ implies that $s^{\alpha=0}_0 = 1/x^{\alpha=0}_0 = \left(p^2/4 -1\right)^{-1/2} \mathop{\approx}_{p=5} 0.4364$ in the Einsteinian case. In Einstein gravity, we also have $a_4^{*,\alpha=0} = -x_0^4 \mathop{=}_{p=5} -27.5625$ and $b_4^{*,\alpha=0} = 0$. We find that the field equations for large-$r$ are numerically more difficult to integrate than those near $r_0$, so we only display results that we numerically trust. The free parameters chosen for our large-$r$ numerical results are given in Table~\ref{tab:numerical_values_large_r}. We present our results for the parameter set $\mu \in \{a^*_4,b^*_4,s_0\}$ in terms of their deviations $\Delta \mu = \mu - \mu^{\alpha=0}$ relative to their values in Einstein gravity.
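To make the shooting procedure concrete, the following self-contained sketch (illustrative only, not our production code; \texttt{scipy}'s adaptive Runge-Kutta integrator stands in for RKF, and \texttt{brentq} automates the IC tuning) applies the loop to the $\alpha_* \to 0$ limit of Eq.~\eqref{eqn:EGB_fieldequation_tt}, with $h$ frozen to its Einsteinian form $1+1/x^2$, where the known answer is $a_4^* = -x_0^4 \approx -27.5625$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

p    = 5
x0   = np.sqrt(p**2/4 - 1)   # bubble edge x0 = r0/l (alpha = 0)
Lam  = -6.0                  # Lambda_* in the alpha -> 0 limit
xfar = 20.0                  # stand-in for x -> infinity

def rhs(x, y):
    # alpha_* = 0 limit of the first field equation, solved for f_xx
    f, fx = y
    h, hx = 1 + 1/x**2, -2/x**3
    fxx = (16 - 24*x**2*f*h - 16*x**3*fx*h - x**4*fx*hx
           - 6*x**3*hx*f - 4*f - 4*Lam*x**2)/(2*x**4*h)
    return [fx, fxx]

def residual(a4):
    # seed with the truncated large-r series, integrate inward,
    # and report how far f(x0) is from the bubble-edge condition f = 0
    y0 = [1 + a4/xfar**4, -4*a4/xfar**5]
    sol = solve_ivp(rhs, (xfar, x0), y0, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

a4 = brentq(residual, -40.0, -10.0)   # tune the IC parameter a4*
print(a4, -x0**4)                     # both approximately -27.5625
\end{verbatim}
A full treatment integrates the three-function system $\{\mathcal{E}_{rr},\mathcal{E}_{\psi\psi},\mathcal{E}_{\theta\theta}\}$ with both IC sets; the sketch is intended only to illustrate the structure of the loop.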
\begin{table}[H] \centering \begin{tabular}{ C{0.5cm} C{2.0cm} | C{0.9cm} | C{2.4cm} | C{2.4cm} | C{2.4cm} | C{2.4cm} | } \cline{3-7} {} & {} & \multicolumn{5}{|c|}{\textbf{Numerical Parameters -- large-$r$}} \\ \cline{3-7} {} & {} & \multicolumn{1}{|c|}{$\alpha_*=0$} & \multicolumn{2}{|c|}{$\alpha_*=-0.04$} & \multicolumn{2}{|c|}{$\alpha_*=0.002$} \\ \hline \multicolumn{2}{|c|}{\bf \texttt{Figure Number}} & {\bf \texttt{\#\ref{fig:fgh_sol_large_r_num_alpha0_1}}} & {\bf \texttt{\#\ref{fig:fgh_sol_large_r_num_alpha-001_1}}} & {\bf \texttt{\#\ref{fig:fgh_sol_large_r_num_alpha-001_2}}} & {\bf \texttt{\#\ref{fig:fgh_sol_large_r_num_alpha0001_1}}} & {\bf \texttt{\#\ref{fig:fgh_sol_large_r_num_alpha0001_2}}} \\ \hline \multicolumn{1}{|c|}{$p$} & Free & $5$ & $5$ & $5$ & $5$ & $5$ \\ \hline \multicolumn{1}{|c|}{$\Delta a_4^*$} & Shooting method & $0$ & $7.80$ & $7.80$ & $-0.20$ & $-6.30$ \\ \hline \multicolumn{1}{|c|}{$\Delta b_4^*$} & Shooting method & $0$ & $0$ & $-2.96$ & $0$ & $0.200$ \\ \hline \multicolumn{1}{|c|}{$\Delta s_0$} & Calculated & $0$ & $5.17 \times 10^{-2}$ & $2.00 \times 10^{-2}$ & $2.20 \times 10^{-3}$ & $-2.11 \times 10^{-2}$ \\ \hline \multicolumn{1}{|c|}{$\Delta \mathbb{M}^*$} & Calculated & $ 0$ & $1.56$ & ${3.34}$ & ${-0.04}$ & ${-1.38}$ \\ \hline \end{tabular} \caption{ Numerical values for the large-$r$ numerical solution. The symbols used are $s_0 = 1/x_0$, $s^{\alpha=0}_0 \approx 0.436436$, $a_4^{*,\alpha=0} = -27.5625$, $b_4^{*,\alpha=0} = 0$, and $\mathbb{M}^{*,\alpha=0} = -5.51$. Deviations from the $\alpha=0$ values are defined by $\Delta a_4^* = a_4^* - a_4^{*,\alpha=0}$, $\Delta b_4^* = b_4^* - b_4^{*,\alpha=0}$, and $\Delta s_0 = s_0 - s_0^{\alpha=0}$. The mass deviation parameter $\Delta \mathbb{M}^* = \mathbb{M}^* - \mathbb{M}^{*,\alpha=0}$ is in units of $\pi \ell^2/8G$ (cf. Eqn \eqref{eqn:mass_soliton_einstein}).} \label{tab:numerical_values_large_r} \end{table} \noindent We find that the numerical integration is much more sensitive to the choice of $b^*_4$ than of $a^*_4$, and so $b^*_4$ requires more fine-tuning. Figures \ref{fig:fgh_sol_large_r_num_alpha0_1}--\ref{fig:fgh_sol_large_r_num_alpha0001_2} depict the solutions $\{f(s),g(s),h(s)\}$, where $s=\ell/r$, for $\alpha_* = 0,-0.04,0.002$ from Table \ref{tab:numerical_values_large_r}, respectively. We see that all metric functions approach $1$ in the $s \rightarrow 0$ limit (i.e. $r \rightarrow \infty$) as required. Our solutions are valid all the way from infinity to the edge of the soliton, whose location is given in Table \ref{tab:numerical_values_large_r} for the various cases. We also see from Fig. \ref{fig:fgh_sol_large_r_num_alpha-001_2} that a nonzero value for $b_4$ can have a dramatic effect relative to the $b_4=0$, $\alpha_*=-0.04$ solutions shown in Fig. \ref{fig:fgh_sol_large_r_num_alpha-001_1}. This effect is less pronounced for the $\alpha_*=0.002$ solutions, as shown in Fig. \ref{fig:fgh_sol_large_r_num_alpha0001_1} and Fig. \ref{fig:fgh_sol_large_r_num_alpha0001_2}, possibly due to the smaller magnitude of $\alpha_*$. \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{larger_alpha0__1.pdf} \end{center} \vspace{-20pt} \caption{Large-$r$ numerical solutions $\{f,g,h\}$ for $\alpha_* = 0$ where $s=\frac{\ell}{r}$. We have $a_4^* = a_4^{*,\alpha=0}$, $b^*_4 = 0$ and $s_0 = s_0^{\alpha=0}$ here.
The color coding is $f(s) = \text{blue}$, $g(s) = \text{red}$, and $h(s) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_large_r_num_alpha0_1} \end{figure} \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{larger_alphan004_b40__2.pdf} \end{center} \vspace{-20pt} \caption{Large-$r$ numerical solutions $\{f,g,h\}$ for $\alpha_* = -0.04$ where $s=\frac{\ell}{r}$. We have $a_4^* = a_4^{*,\alpha=0} + 7.80$, $b^*_4 = 0$, $s_0 = s_0^{\alpha=0} + 0.0517$ here. The color coding is $f(s) = \text{blue}$, $g(s) = \text{red}$, and $h(s) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_large_r_num_alpha-001_1} \end{figure} \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{larger_alphan004_b4__3.pdf} \end{center} \vspace{-20pt} \caption{Large-$r$ numerical solutions $\{f,g,h\}$ for $\alpha_* = -0.04$ where $s=\frac{\ell}{r}$. We have $a_4^* = a_4^{*,\alpha=0} + 7.80$, $b^*_4 = -2.96$, $s_0 = s_0^{\alpha=0} + 0.02$ here. The color coding is $f(s) = \text{blue}$, $g(s) = \text{red}$, and $h(s) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_large_r_num_alpha-001_2} \end{figure} \begin{figure}[H] \begin{center} \vspace{-10pt} \includegraphics[scale=0.45]{larger_alphap0002_b40__4.pdf} \end{center} \vspace{-20pt} \caption{Large-$r$ numerical solutions $\{f,g,h\}$ for $\alpha_* = 0.002$ where $s=\frac{\ell}{r}$. We have $a_4^* = a_4^{*,\alpha=0} - 0.20$, $b^*_4 = 0$, $s_0 = s_0^{\alpha=0} - 0.0022$ here. The color coding is $f(s) = \text{blue}$, $g(s) = \text{red}$, and $h(s) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_large_r_num_alpha0001_1} \end{figure} \begin{figure}[H] \begin{center} \vspace{-10pt} \includegraphics[scale=0.45]{larger_alphap0002_b4__5.pdf} \end{center} \vspace{-20pt} \caption{Large-$r$ numerical solutions $\{f,g,h\}$ for $\alpha_* = 0.002$ where $s=\frac{\ell}{r}$. We have $a_4^* = a_4^{*,\alpha=0} - 6.30$, $b^*_4 = 0.20$, $s_0 = s_0^{\alpha=0} - 0.0211$ here. The color coding is $f(s) = \text{blue}$, $g(s) = \text{red}$, and $h(s) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_large_r_num_alpha0001_2} \end{figure} \subsection{Near-$r_0$ behavior} The free and calculated parameters with their numerical values for the near-$r_0$ solution are shown in Table \ref{tab:numerical_values_near_r0}. In the Einsteinian case, $B_0^{\alpha=0} = C_0^{\alpha=0} = 1 + (1/x^{\alpha=0}_0)^2 = 1 + \left( p^2/4-1 \right)^{-1} \mathop{\approx}_{p=5} 1.1905$ where $x^{\alpha=0}_0 = \left(p^2/4-1\right)^{1/2}\mathop{\approx}_{p=5} 2.2913$. As in the large-$r$ case, we express our results in terms of deviations from Einstein gravity. 
\begin{table}[H] \centering \begin{tabular}{ C{1.0cm} C{3.5cm} | C{1.6cm} | C{2.7cm} | C{2.7cm} | } \cline{3-5} {} & {} & \multicolumn{3}{|c|}{\textbf{Numerical Parameters -- near-$r_0$}} \\ \cline{3-5} {} & {} & $\alpha_*=0$ & $\alpha_*=-0.04$ & $\alpha_*=+0.002 $ \\ \hline \multicolumn{2}{|c|}{\bf \texttt{Figure Number} } & {\bf \texttt{\#\ref{fig:fgh_sol_near_r0_num_alpha0_1}}} & {\bf \texttt{\#\ref{fig:fgh_sol_near_r0_num_alpha-004_1}}} & \bf{ \texttt{\#\ref{fig:fgh_sol_near_r0_num_alpha0002_1}}} \\ \hline \multicolumn{1}{|c|}{$p$} & Free Parameter & $5$ & $5$ & $5$ \\ \hline \multicolumn{1}{|c|}{$\Delta B_0$} & Shooting method & $0$ & $-0.1423$ & $0.0087$ \\ \hline \multicolumn{1}{|c|}{$\Delta C_0$} & Shooting method & $0$ & $-0.1782$ & $0.0065$ \\ \hline \multicolumn{1}{|c|}{$\Delta x_0$} & Calculated & $0$ & $-0.2442$ & $0.0116$ \\ \hline \end{tabular} \caption{ Numerical values for the near-$r_0$ solution, where $B^{\alpha=0}_0 = C^{\alpha=0}_0 \approx 1.1905$ and $x^{\alpha=0}_0 \approx 2.2913$. Deviations from the $\alpha=0$ values are defined by $\Delta B_0 = B_0 - B_0^{\alpha=0}$, $\Delta C_0 = C_0 - C_0^{\alpha=0}$, and $\Delta x_0 = x_0 - x_0^{\alpha=0}$. } \label{tab:numerical_values_near_r0} \end{table} Apart from $\alpha_*$, the only free parameter in the near-$r_0$ solution is $p$; all other parameters are obtained numerically via the shooting method and from solving the constraint equations. \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{near-r0-numerical-fgh-alpha0_1__6.pdf} \end{center} \vspace{-20pt} \caption{Near-$r_0$ numerical solutions of $\{f,g,h\}$ for $\alpha_* = 0$, $p=5$, $B_0 = B_0^{\alpha=0}$, $C_0 = C_0^{\alpha=0}$, and $x_0 = x_0^{\alpha=0}$. The color coding is $f(x) = \text{blue}$, $g(x) = \text{red}$, and $h(x) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_near_r0_num_alpha0_1} \end{figure} \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{near-r0-numerical-fgh-alpha-004_1__7.pdf} \end{center} \vspace{-20pt} \caption{Near-$r_0$ numerical solutions $\{f,g,h\}$ for $\alpha_* = -0.04$, $p=5$, $B_0 = B_0^{\alpha=0} - 0.1423$, $C_0 = C_0^{\alpha=0} - 0.1782$, and $x_0 = x_0^{\alpha=0} - 0.2442$. The color coding is $f(x) = \text{blue}$, $g(x) = \text{red}$, and $h(x) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_near_r0_num_alpha-004_1} \end{figure} \begin{figure}[H] \vspace{-10pt} \begin{center} \includegraphics[scale=0.45]{near-r0-numerical-fgh-alpha0002_1__8.pdf} \end{center} \vspace{-20pt} \caption{Near-$r_0$ numerical solutions $\{f,g,h\}$ for $\alpha_* = +0.002$, $p=5$, $B_0 = B_0^{\alpha=0} + 0.0087$, $C_0 = C_0^{\alpha=0} + 0.0065$, and $x_0 = x_0^{\alpha=0} + 0.0116$. The color coding is $f(x) = \text{blue}$, $g(x) = \text{red}$, and $h(x) = \text{green}$.} \vspace{-10pt} \label{fig:fgh_sol_near_r0_num_alpha0002_1} \end{figure} \section{Conclusions} \label{sec:conclusions_and_discussions} We have semi-analytically and numerically found five-dimensional Eguchi-Hanson soliton solutions in Einstein-Gauss-Bonnet (EGB) gravity. We have illustrated the typical form of the metric functions for small positive and negative values of $\alpha$, integrating both from large $r$ to the edge of the soliton and from the edge of the soliton to infinity. These numerical solutions are fully consistent with the large-$r$ and near-$r_0$ power-series solutions, which have two free parameters $\alpha$ and $p$. We have found numerical evidence (in the context of the large-$r$ problem) that $b_4$ can be non-vanishing in EGB gravity.
This indicates a broader class of soliton solutions, in which both $a_4$ and $b_4$ can be nonzero. A full exploration of these solutions and a study of a broader range of $\alpha_*$ (going beyond the choice we believe falls within a safe range of stability) remain interesting subjects for future investigation. \bigskip \noindent \textbf{Acknowledgements} \\ We would like to acknowledge helpful discussions with K. Copsey and D. Kubiz\v{n}\'{a}k. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. \section{Appendix}
\section{Introduction} Virasoro conformal block functions $\mathcal{F}(x|\Delta, \tilde \Delta,c)$ \cite{Belavin:1984vu} are not known in closed form for general values of the conformal dimensions $\Delta, \tilde \Delta$ and the central charge $c$. On the other hand, the AdS/CFT correspondence motivates the study of the regime when the central charge tends to infinity, $c\to \infty$. If the external and intermediate conformal dimensions $\Delta, \tilde \Delta$ are {\it heavy}, i.e. they scale linearly with large $c$, then the original conformal block takes a simpler, exponential form \cite{Zamolodchikov1986}. However, such large-$c$ conformal blocks with heavy operators are still quite complicated functions. In the bulk, the Brown-Henneaux relation \cite{Brown:1986nw} implies that the large-$c$ blocks can be reproduced from the three-dimensional quantum gravity path integral evaluated in the semiclassical approximation. Further simplification can be achieved by considering the so-called heavy-light expansion \cite{Fitzpatrick:2014vua}, in which a number of the original heavy primary operators form a background for other heavy primary operators, i.e. $\Delta_p/\Delta_b \ll 1$, where $\Delta_b$ and $\Delta_p$ are the dimensions of the background and perturbative operators, respectively. The resulting perturbative conformal blocks are much simpler than the original large-$c$ blocks. From the holographic perspective, the perturbative blocks are calculated by lengths of geodesic trees stretched in the bulk space created by the background heavy operators \cite{Asplund:2014coa,Fitzpatrick:2014vua,Hijano:2015rla,Fitzpatrick:2015zha,Alkalaev:2015wia,Hijano:2015qja,Banerjee:2016qca,Alkalaev:2016rjl}. Let L$^k$H$^{n-k}$ denote the $n$-point perturbative conformal block with $n-k$ background heavy operators and $k$ perturbative heavy operators. The most studied cases include the 4-point LLHH blocks \cite{Asplund:2014coa,Fitzpatrick:2014vua,Hijano:2015rla,Fitzpatrick:2015zha,Hijano:2015qja,Fitzpatrick:2016mtp}, the 5-point LLLHH blocks \cite{Alkalaev:2015wia,Alkalaev:2015lca,Alkalaev:2015fbw,Belavin:2017atm}, the $n$-point L$^{n-2}$HH blocks \cite{Banerjee:2016qca,Alkalaev:2016rjl,Alkalaev:2018nik}, and the 4-point LHHH block \cite{Alkalaev:2019zhs}. In this paper, we continue the study of L$^k$H$^{n-k}$ perturbative conformal blocks by revealing a previously hidden structure that underlies the heavy-light expansion. By that we mean that the perturbative blocks admit a very special parameterization that we call {\it holographic variables}. Since the $n$ coordinates of an $n$-point perturbative block naturally split into two parts, one can transform the coordinates of the perturbative operators by means of a particular mapping function, while keeping the coordinates of the background operators intact. The mapping function can be explicitly defined by using solutions to the auxiliary Fuchsian equation. It is parameterized by the coordinates of the background operators. Such a transformation allows one to reorganize the original coordinate dependence of the perturbative blocks so that they depend on the holographic variables only. In fact, the holographic variables realize the general observation of \cite{Fitzpatrick:2015zha} that, owing to the fact that the stress tensor is not primary, the dependence on the background operators can be absorbed by performing a particular conformal transformation.
A remarkable consequence of using the holographic variables is the {\it uniformization} of perturbative conformal blocks, already discussed in \cite{Fitzpatrick:2015zha,Fitzpatrick:2016mtp,Anous:2019yku} in the case of two background operators. For $n$-point blocks, it may be formulated as follows: the perturbative blocks of L$^k$H$^{n-k}$ and L$^k$H$^{m-k}$ types, when represented in terms of the holographic variables, have the same form even for $m\neq n$. From the holographic perspective, the uniformization is quite natural. Indeed, the background operators define the bulk space while the perturbative operators are realized via dual geodesic trees. The shape of the geodesic trees is defined by the perturbative operators only and not by the background operators. The outline of this paper is as follows. In Section \bref{sec:mon} we discuss the monodromy method and formulate the heavy-light expansion, which finally defines the $n$-point L$^k$H$^{n-k}$ perturbative blocks. In Section \bref{sec:HoloV} we introduce the holographic variables and formulate the uniformization property of the perturbative blocks. Section \bref{sec:E} contains examples of LLHH and LLHHH blocks which demonstrate the use of the holographic variables. In Section \bref{sec:bulk} the holographic variables are explicitly related to the construction of the dual three-dimensional geometry created by the background operators. Here, using the holographic variables, we identify a dual geodesic tree whose length calculates the perturbative LLHHH block. Section \bref{sec:sum} summarizes our findings. \section{Classical conformal blocks and heavy-light expansion} \label{sec:mon} We consider the holomorphic Virasoro $n$-point conformal block $\mathcal{F}(x|\Delta, \tilde \Delta,c)$ in a given OPE channel \cite{Belavin:1984vu}. Here, $x = \{x_1,.., x_n\}$ denotes coordinates of primary operators with holomorphic conformal dimensions $\Delta$, intermediate holomorphic conformal dimensions are denoted by $\tilde \Delta$, and $c$ is the central charge. Let all external and intermediate conformal dimensions be heavy, i.e. grow linearly with the central charge, $\Delta = \mathcal{O}(c)$ and $\tilde \Delta = \mathcal{O}(c)$. In the large-$c$ regime the conformal block behaves exponentially \cite{Zamolodchikov1986,Besken:2019jyw} \begin{equation} \label{classical} \mathcal{F}(x| \Delta, \tilde \Delta,c) \,\Big |_{c\to\infty} \;\rightarrow\;\; \exp\big[\,\frac{c}{6}f(x| \epsilon, \tilde \epsilon)\,\big]\;, \quad \epsilon_i = \frac{6\Delta_i}{c}\;,\quad \tilde \epsilon = \frac{6\tilde\Delta}{c}\;, \end{equation} where $f(x| \epsilon, \tilde \epsilon)$ is the classical conformal block, which depends on the central charge only through the classical dimensions $\epsilon, \tilde \epsilon$. A convenient way to calculate large-$c$ conformal blocks is the monodromy method.\footnote{For a review and recent studies of the monodromy method see e.g. \cite{Harlow:2011ny,Hartman:2013mia,Fitzpatrick:2014vua,Hijano:2015rla,Alkalaev:2015lca,Anous:2016kss,Alkalaev:2016rjl,Anous:2017tza,Kusuki:2018nms}.} To this end, one considers an auxiliary $(n + 1)$-point conformal block with an additional degenerate operator of light conformal dimension $\mathcal{O}(c^0)$.
Due to the fusion rules, the auxiliary block in the large-$c$ regime factorizes as \begin{equation} \psi(y|x) \exp\big[\,\frac{c}{6}f(x| \epsilon, \tilde \epsilon)\,\big]\;, \end{equation} where $f(x|\epsilon,\tilde\epsilon)$ is the $n$-point classical block \eqref{classical} and $\psi(y|x)$ stands for the large-$c$ contribution of the degenerate operator. Imposing the BPZ condition one obtains the Fuchsian-type equation \cite{Belavin:1984vu} \begin{equation} \label{BPZ} \left[\frac{d^2}{dy^2} + T(y|x)\right]\psi(y|x) = 0\;, \qquad T(y|x) = \sum_{m=1}^n \frac{\epsilon_m}{(y-x_m)^2} + \frac{c_m}{y-x_m}\;, \quad c_m = \frac{\partial f(x| \epsilon, \tilde \epsilon)}{\partial x_m} \;, \end{equation} with $n$ singular points given by the positions of the original primary operators. Here, the function $T(y|x)$ is the stress tensor, and the gradients $c_m$ are the accessory parameters, which can be found by studying the monodromy properties of the Fuchsian equation \eqref{BPZ} (see below). Note that there are three constraints \begin{equation} \label{linear} \sum_{m=1}^n c_m = 0\;, \qquad \sum_{m=1}^n (c_m x_m+ \epsilon_m) = 0\;, \qquad \sum_{m=1}^n (c_m x^2_m+ 2\epsilon_m x_m) = 0\;, \end{equation} ensuring that $T(y|x)$ has no singularity at $y\to\infty$: expanding the stress tensor at large $y$, the three sums above are precisely the coefficients of $y^{-1}$, $y^{-2}$, $y^{-3}$, so that $T(y|x)$ decays as $y^{-4}$. Knowing all the accessory parameters one can integrate the gradient equations to obtain the classical block. \paragraph{Heavy-light expansion.} Finding classical blocks can be drastically simplified by employing the so-called heavy-light expansion \cite{Fitzpatrick:2014vua}. Suppose now that $n-k$ heavy operators with classical dimensions $\epsilon_j$ are much heavier than the other $k$ heavy operators, \begin{equation} \label{perl} \epsilon_i \ll \epsilon_j\;, \qquad i = 1,..,k \;,\qquad j = k+1,..,n \;. \end{equation} Then, the positions of all operators can be split into two subsets, the perturbative sector and the background sector, $x = \{ z \,, \,{\bf z}\} \equiv \{z_1,.., z_k, {\bf z}_{k+1},.., {\bf z}_{n}\}$. Now, we implement the heavy-light expansion \begin{equation} \label{decos} \begin{gathered} \psi(y| z , {\bf z}) = \psi^{(0)}(y|{\bf z}) + \psi^{(1)}(y| z , {\bf z})+...\,, \qquad T(y|z, {\bf z}) = T^{(0)}(y|{\bf z}) + T^{(1)}(y|z , {\bf z})+...\,, \\ f(z, {\bf z}|\epsilon, \tilde{\epsilon}) = f^{(0)}({\bf z}|\epsilon, \tilde{\epsilon}) + f^{(1)}(z, {\bf z}|\epsilon, \tilde{\epsilon}) + ...\,, \qquad c_{m}(z , {\bf z}|\epsilon, \tilde{\epsilon}) = c_{m}^{(0)}({\bf z}|\epsilon,\tilde{\epsilon}) + c_{m}^{(1)}(z , {\bf z}|\epsilon, \tilde{\epsilon}) + ...\,, \end{gathered} \end{equation} where $m = 1,...,n$. By construction, the zeroth-order accessory parameters of the perturbative operators are zero, $c_{i}^{(0)} = 0$, $i=1,...,k$. The constraints \eqref{linear} can be expanded similarly as \begin{equation} \label{linear0} \sum_{j=k+1}^n c^{(0)}_j = 0\;, \qquad \sum_{j=k+1}^n (c^{(0)}_j {\bf z}_j+ \epsilon_j) = 0\;, \qquad \sum_{j=k+1}^n (c^{(0)}_j {\bf z}^2_j+ 2\epsilon_j {\bf z}_j) = 0\;, \end{equation} \begin{equation} \label{linear1} \sum_{m=1}^n c^{(1)}_m = 0\;, \qquad \sum_{m=1}^n c^{(1)}_m z_m+ \sum_{i=1}^k\epsilon_i = 0\;, \qquad \sum_{m=1}^n c^{(1)}_m z^2_m+ \sum_{i=1}^k 2\epsilon_i z_i = 0\;.
\end{equation} \vspace{1mm} \paragraph{Zeroth-order solutions.} In the zeroth order, the Fuchsian equation \eqref{BPZ} takes the form \begin{equation} \label{fuchs0} \left[\frac{d^2}{dy^2} + T^{(0)}(y|{\bf z})\right]\psi(y|{\bf z}) = 0\;, \quad \text{where} \quad T^{(0)}(y|{\bf z}) = \sum_{j=k+1}^n \frac{\epsilon_j}{(y-{\bf z}_j)^2} + \frac{c^{(0)}_j}{y-{\bf z}_j}\;, \end{equation} and its solutions are given by two independent branches \begin{equation} \label{sol0} \psi^{(0)}_{\pm} = \psi^{(0)}_{\pm}(y|{\bf z},\epsilon, c^{(0)})\;. \end{equation} Here, $c_j^{(0)}$, $j = k+1,...,n$, are independent parameters that can be found by solving the constraints \eqref{linear0} and the gradient equations \begin{equation} c_j^{(0)} = \frac{\partial f^{(0)}({\bf z}|\epsilon, \tilde{\epsilon}) }{\partial {\bf z}_j}\;, \qquad j = k+1,...,n\;. \end{equation} Since the background conformal block is assumed to be known, $c_j^{(0)}$ can be found explicitly and substituted back into \eqref{sol0} to obtain $\psi^{(0)}_{\pm} = \psi^{(0)}_{\pm}(y|{\bf z},\epsilon)$.\footnote{\label{foot} In this form, the solutions $\psi^{(0)}_{\pm}$ are explicitly known for two \cite{Fitzpatrick:2014vua} and three \cite{Alkalaev:2019zhs} background operators, see also Section \bref{sec:E}.} Usually, the linear constraints are solved at the very beginning to isolate $n-3$ independent accessory parameters. However, one can equally keep all $n$ accessory parameters independent and solve the three constraints \eqref{linear} at later stages. In that case, solutions to the Fuchsian equation at zeroth order are explicitly parameterized by the background accessory parameters \eqref{sol0}. Two comments are in order. First, the solution \eqref{sol0} near the singular points ${\bf z}_j$ behaves as \begin{equation} \label{sing} \psi^{(0)}_{\pm} \sim (y-{\bf z}_j)^{\frac{1\pm \alpha_j}{2}}\;, \qquad \alpha_j = \sqrt{1-4 \epsilon_j}\;, \qquad j=k+1,...\,,n\;, \end{equation} which follows from the fact that the leading asymptotics are determined by the most singular terms in \eqref{fuchs0}. The exponents are restricted as\footnote{\label{foot1} For two background operators with dimension $\epsilon_{n-1} =\epsilon_n \equiv \epsilon_h$ the range \eqref{alpha} corresponds to conical singularities in $AdS_3$, whereas for $\epsilon_h \geq \frac14$ we have a (threshold) BTZ black hole \cite{Fitzpatrick:2014vua, Asplund:2014coa}. In this case, the solutions \eqref{sing} are analytically continued to purely imaginary $\alpha$. For three and more background operators, one would expect that the range $\epsilon_j \geq \frac14$ would correspond to multi BTZ-like solutions, though their explicit form and general properties are as yet unknown (see, however, discussions in \cite{Brill:1995yc,Coussaert:1994if,Barbot:2005qk,Mansson:2000sj}). Also, for many background operators, it is possible to consider a combination of conical singularities and BTZ black holes.} \begin{equation} \label{alpha} 0 < \epsilon_j < \frac14\;, \qquad 0<\alpha_j <1\;. \end{equation} Second, the zeroth-order solutions are hard to find for a general number of heavy background insertions, except for two and three background operators, in which cases the Fuchsian equation can be solved explicitly (see the footnote \bref{foot}). More than three background operators require the knowledge of higher-point classical conformal blocks $f^{(0)}({\bf z}|\epsilon, \tilde{\epsilon})$, which can be calculated only as power series in the coordinates ${\bf z}$.
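To make the origin of the exponents in \eqref{sing} explicit, one can substitute the ansatz $\psi^{(0)} \sim (y-{\bf z}_j)^{s}$ into \eqref{fuchs0} and keep only the double-pole term of $T^{(0)}$, which yields the indicial equation
\begin{equation*}
s(s-1) + \epsilon_j = 0 \qquad \Longrightarrow \qquad s = \frac{1 \pm \sqrt{1-4\epsilon_j}}{2} = \frac{1\pm\alpha_j}{2}\;,
\end{equation*}
in agreement with \eqref{sing}; in particular, the two branches collide at $\epsilon_j = \frac14$ and the exponents become complex for $\epsilon_j > \frac14$, cf. \eqref{alpha} and the footnote above.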
\paragraph{First-order solutions.} In the first order, equation \eqref{BPZ} reduces to \begin{equation} \label{T1} \begin{array}{c} \displaystyle \left[\frac{d^2}{dy^2} + T^{(0)}(y|{\bf z})\right]\psi^{(1)}(y|z, {\bf z}) = - T^{(1)}(y|z, {\bf z}) \psi^{(0)}(y|{\bf z})\;, \\ \\ \displaystyle T^{(1)}(y|z, {\bf z}) = \sum_{i=1}^{k} \left(\frac{\epsilon_i}{(y-z_i)^2} + \frac{c^{(1)}_i}{y-z_i} \right) + \sum_{j=k+1}^n \frac{c^{(1)}_j}{y-{\bf z}_j}\;. \end{array} \end{equation} The solution is given in terms of the zeroth-order solutions \eqref{sol0} as \begin{equation} \label{sol1} \begin{array}{l} \displaystyle \psi^{(1)}_{\pm}(y|z, {\bf z}) = \frac{1}{W({\bf z})}\left(\psi^{(0)}_{+}(y|{\bf z}) \int dy \; \psi^{(0)}_{-}(y|{\bf z}) T^{(1)}(y|z, {\bf z})\psi^{(0)}_{\pm}(y|{\bf z}) \right. \\ \\ \displaystyle \hspace{50mm}\left.-\psi^{(0)}_{-}(y|{\bf z})\int dy \; \psi^{(0)}_{+}(y|{\bf z}) T^{(1)}(y|z, {\bf z})\psi^{(0)}_{\pm}(y|{\bf z})\right)\;, \end{array} \end{equation} where the Wronskian is \begin{equation} \label{wronskian0} W({\bf z}) \equiv - \psi^{(0)}_{+}(y|{\bf z}) \frac{d \psi^{(0)}_{-}(y|{\bf z})}{dy} + \psi^{(0)}_{-}(y|{\bf z}) \frac{d \psi^{(0)}_{+}(y|{\bf z})}{dy}\;. \end{equation} The Wronskian is independent of $y$, which is a general property of equations of the form \eqref{BPZ}: both solutions obey $\psi'' = -T^{(0)}\psi$, whence $dW/dy = 0$. \paragraph{Monodromy analysis.} The monodromy method consists of comparing the monodromy of solutions to the Fuchsian equation against that of the original correlation function. This yields a system of algebraic equations on the accessory parameters. In principle, the system can be solved, and then the problem of finding the classical block reduces to solving the gradient equations in \eqref{BPZ}. To this end, let us consider contours $\Gamma_p$ encircling the points $\{z_1,..., z_{k}, {\bf z}_{k+1},..., {\bf z}_{p+1}\}$, where $p = 1,...,n-3$. The monodromy matrices along $\Gamma_p$ are defined as \begin{equation} \psi_a(\Gamma_p \circ y|z,{\bf z}) = M_{ab}(\Gamma_p|z,{\bf z}) \psi_b(y|z,{\bf z})\;, \qquad a,b = \pm\;, \end{equation} and, within the heavy-light expansion, the monodromy matrices can be decomposed as \begin{equation} \label{dec} M_{ab}(\Gamma_p|z,{\bf z}) = M_{ab}^{(0)} (\Gamma_p|{\bf z}) + M_{ab}^{(1)}(\Gamma_p|z,{\bf z}) +...\;, \end{equation} where $M_{ab}^{(0)} (\Gamma_p|{\bf z})$ is defined by the zeroth-order solution \eqref{sol0} and $M_{ab}^{(1)}(\Gamma_p|z,{\bf z})$ is defined by the first-order solution \eqref{sol1}. Due to the form of \eqref{sol1} the first-order correction factorizes as \begin{equation} \label{mf1} M_{ab}^{(1)}(\Gamma_p|z,{\bf z}) = - M_{ac}^{(0)} (\Gamma_p|{\bf z}) I_{cb}(\Gamma_p|z,{\bf z})\;, \qquad I_{cb} = \begin{pmatrix} I^{(p)}_{++}\;\;& I^{(p)}_{+-}\\ I^{(p)}_{-+}\;\;& I^{(p)}_{--} \end{pmatrix} \,, \end{equation} where \begin{equation} \label{int} \begin{array}{c} \displaystyle I^{(p)}_{+\pm}(z, {\bf z})= \frac{1}{W({\bf z})}\int_{\Gamma_p} dy \; \psi^{(0)}_{+}(y|{\bf z}) T^{(1)}(y|z, {\bf z}) \psi^{(0)}_{\mp}(y|{\bf z})\;, \\ \\ \displaystyle I^{(p)}_{-\mp}(z, {\bf z}) = -\frac{1}{W({\bf z})}\int_{\Gamma_p} dy \; \psi^{(0)}_{\pm}(y|{\bf z}) T^{(1)}(y|z, {\bf z}) \psi^{(0)}_{-}(y|{\bf z})\;.
\end{array} \end{equation} The above integrals are straightforward to calculate since the stress tensor $T^{(1)}$ \eqref{T1} has a simple pole structure, and \begin{equation} \label{assim} \psi^{(0)}_{-}\psi^{(0)}_{-} \sim (y-{\bf z}_j)^{1-\alpha_j}\;, \qquad \psi^{(0)}_{+}\psi^{(0)}_{+} \sim (y-{\bf z}_j)^{1+\alpha_j}\;, \qquad \psi^{(0)}_{-}\psi^{(0)}_{+} \sim (y-{\bf z}_j)\;, \end{equation} with the exponents satisfying \eqref{alpha}. On the other hand, transporting the light degenerate operator $V_{{(2,1)}}(y)$ in the original $(n+1)$-point correlation function along the contours $\Gamma_p$, we find the respective monodromy matrices \begin{equation} \label{pres} \widetilde{M}_p = - \begin{pmatrix} e^{i \pi \gamma_p}& 0\\ 0& e^{-i \pi \gamma_p} \end{pmatrix}\,,\qquad \gamma_p = \sqrt{1- 4\tilde{\epsilon}_p}\;, \qquad p = 1,..., n - 3\;. \end{equation} Equating the eigenvalues of these matrices with those of \eqref{mf1} yields a system of $n-3$ algebraic equations on the perturbative accessory parameters. Recalling that there are three additional constraints \eqref{linear1}, we conclude that in total there are $n$ equations on $n$ accessory parameters. \section{Holographic variables} \label{sec:HoloV} Let us consider {\it the holographic function} and its derivative, defined as \begin{equation} \label{phv} w(y|{\bf z}) = \frac{\psi_{+}^{(0)}(y|{\bf z})}{\psi_{-}^{(0)}(y|{\bf z})}\;, \qquad w'(y|{\bf z}) = \frac{W({\bf z})}{\left(\psi_{-}^{(0)}(y| {\bf z})\right)^2}\;, \end{equation} where $\psi_{\pm}^{(0)}(y|{\bf z})$ are solutions to the zeroth-order Fuchsian equation \eqref{sol0} and the prime denotes a derivative with respect to $y$. The second relation follows from the first one by virtue of \eqref{wronskian0}. Note that the function $w(y|{\bf z})$ is determined up to a M\"obius transformation, since in \eqref{phv} we could equally take linear combinations of the solutions. Recalling \eqref{sing} we find that the functions \eqref{phv} behave near the singular points ${\bf z}_j$ as \begin{equation} \label{sing_w} w(y|{\bf z}) \sim (y-{\bf z}_j)^{\alpha_j}\;, \qquad w'(y|{\bf z}) \sim (y-{\bf z}_j)^{\alpha_j-1}\;, \end{equation} where the exponents are restricted by \eqref{alpha}. Now, we consider a partial conformal map such that the coordinates of the perturbative operators are replaced by values of the holographic function $w(y|{\bf z})$, i.e. \begin{equation} \label{partial} \{z_1,..., z_k, {\bf z}_{k+1},..., {\bf z}_{n}\}\; \rightarrow \;\; \{w(z_1|{\bf z}),...,w(z_k|{\bf z}), {\bf z}_{k+1},..., {\bf z}_{n}\}\;. \end{equation} We leave the coordinates of the background operators intact; note that they would simply be mapped to zero, $w({\bf z}_j|{\bf z}) = 0$, $j=k+1,...,n$, due to the singular behaviour \eqref{sing_w}. Evaluating the functions \eqref{phv} at $y=z_i$ we denote \begin{equation} \label{hv} w_{i} \equiv w(z_i|{\bf z})\;, \qquad\;\; i = 1,... ,k\;. \end{equation} The values $w_i$ can be called {\it holographic coordinates} because of the special role they play in the dual bulk geometry (see Section \bref{sec:GT}). Equivalently, the holographic function \eqref{phv} defines a map of $k$-dimensional complex spaces $\mathbb{C}^k \to \mathbb{C}^k$, which is parameterized by ${\bf z}$. This map is invertible. Indeed, the Jacobi matrix is diagonal, $J_{ij} = w'_i \,\delta_{ij}$, where $w'_i=w'(z_i|{\bf z})$ are the derivatives \eqref{phv} evaluated at $y=z_i$. Since $w'(y|{\bf z})$ can have zeros/poles only at the points $y={\bf z}_j$ \eqref{sing_w}, the Jacobi matrix is non-degenerate.
\paragraph{Monodromy integrals.} Using the holographic variables, the monodromy integrals along the contours $\Gamma_p$ \eqref{int} can be represented as \begin{equation} \begin{array}{c} \label{HI} \displaystyle I^{(p)}_{+-}(z, {\bf z}) = \int_{\Gamma_p} dy\, \frac{ w^2(y|{\bf z}) }{w'(y|{\bf z})}\,T^{(1)}(y|z, {\bf z})\;, \qquad I^{(p)}_{-+}(z, {\bf z}) = -\int_{\Gamma_p} dy \, \frac{1}{w'(y|{\bf z})}\,T^{(1)}(y|z, {\bf z})\;, \\ \\ \displaystyle I^{(p)}_{++}(z, {\bf z}) = -I^{(p)}_{--}(z, {\bf z})= \int_{\Gamma_p} dy\, \frac{ w(y|{\bf z}) }{w'(y|{\bf z})}\,T^{(1)}(y|z, {\bf z})\;, \end{array} \end{equation} and explicitly calculated by means of the residue theorem, \begin{equation} \begin{array}{l} \label{GS} \displaystyle \frac{I^{(p)}_{++}}{2 \pi i} = \sum^{\min\{p+1,k\}}_{i=1} \left(X_i w_i+\epsilon_i\right)\,, \\ \\ \displaystyle \frac{I^{(p)}_{-+}}{2 \pi i} = -\sum^{\min\{p+1,k\}}_{i=1} X_i \,, \\ \\ \displaystyle \frac{I^{(p)}_{+-}}{2 \pi i} = \sum^{\min\{p+1,k\}}_{i=1} \left(X_i w^2_i+ 2 \epsilon_i w_i\right)\,, \end{array} \end{equation} where instead of the original first-order accessory parameters we introduced \begin{equation} \label{def_par} X_i = \frac{1}{w'_i}\left(c^{(1)}_i- \epsilon_i \frac{w''_i}{w'_i}\right)\;, \quad i = 1,...,k\;, \qquad \;\; Y_j = c^{(1)}_{j}\;, \quad j=k+1,...,n\;. \end{equation} A few comments are in order. Firstly, it is crucial that the upper summation limit $\min\{p+1,k\}$ implies that all integrals over the contours $\Gamma_p$ with $p \geq k - 1$ are equal to $I^{(k-1)}_{\pm\pm}$. This is why the integrals are independent of the accessory parameters $Y_j$ \eqref{def_par}. Secondly, the monodromy integrals explicitly depend on the holographic variables only, while the dependence on $z_i$ and ${\bf z}_j$ is implicit. Thirdly, the integrals are simple linear functions of the new parameters $X_i$ and remarkably mimic the linear constraints \eqref{linear}. \paragraph{Zeroth-order monodromy.} Now, comparing the eigenvalues of the monodromy matrices \eqref{mf1} and \eqref{pres} at zeroth order yields the conditions \begin{equation} \label{fusion} \epsilon_{j+1} = \tilde{\epsilon}_{j}\;, \qquad j = k+1,...,n-3 \;. \end{equation} This means that the heavy-light expansion is possible only if all pairs of adjacent external and intermediate dimensions in the background part of the original $n$-point classical block $f(z, {\bf z}|\epsilon, \tilde{\epsilon})$ are equated, so that the background block $f^{(0)}({\bf z}|\epsilon, \tilde{\epsilon})$ is not general. Equivalently, some of the expansion coefficients of $f(z, {\bf z}|\epsilon, \tilde{\epsilon})$ in the coordinates contain poles in the perturbative dimensions $\epsilon_i$ with prefactors $(\epsilon_{j+1} - \tilde{\epsilon}_j)$. \paragraph{First-order monodromy.} In the first order, the monodromy equations take the form \begin{equation} \label{mon1} I^{(p)}_{++}I^{(p)}_{++} + I^{(p)}_{+-} I^{(p)}_{-+} = - 4\pi^2 \tilde{\epsilon}^2_p\;, \qquad p = 1,...,k-1\;, \end{equation} \begin{equation} \label{mon2} I^{(p)}_{++} = I^{(k-1)}_{++}=0\;, \qquad p = k,...,n-3\;. \end{equation} We see that the equations in \eqref{mon2} are linearly dependent, so that in total there are $k$ independent equations in \eqref{mon1}, \eqref{mon2} for the $k$ variables $X_i$, $i=1,...,k$. It follows that the monodromy equations allow one to find the accessory parameters of the perturbative operators only, i.e. $X_i = X_i(w|\epsilon, \tilde{\epsilon})$ as functions of the holographic variables.
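To illustrate how the parameters \eqref{def_par} emerge in \eqref{GS}, consider $I^{(p)}_{-+}$ in \eqref{HI}: near a perturbative point $y = z_i$ the factor $1/w'(y|{\bf z})$ is regular, while $T^{(1)}$ \eqref{T1} has a double and a simple pole, so that
\begin{equation*}
\mathop{\rm Res}_{y=z_i}\, \frac{T^{(1)}(y|z, {\bf z})}{w'(y|{\bf z})} \,=\, \frac{c^{(1)}_i}{w'_i} + \epsilon_i \,\frac{d}{dy}\frac{1}{w'(y|{\bf z})}\bigg|_{y=z_i} \,=\, \frac{1}{w'_i}\left(c^{(1)}_i - \epsilon_i \frac{w''_i}{w'_i}\right) = X_i\;,
\end{equation*}
which is precisely the combination \eqref{def_par}. The background points ${\bf z}_j$ contribute no residues, in accordance with \eqref{assim} and with the absence of the $Y_j$ in \eqref{GS}.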
\paragraph{Perturbative blocks.} Since the accessory parameters depend on the holographic variables only, it follows that the $n$-point perturbative block that solves the gradient equations \eqref{BPZ} in the sector of the perturbative operators depends only on the holographic variables, $f^{(1)}= f^{(1)}(w, w'|\epsilon, \tilde{\epsilon})$, where the functions $w,w'$ are defined in \eqref{phv}. On the other hand, the conformal transformation \eqref{partial} acts on the perturbative conformal block as \begin{equation} \label{con} f^{(1)}(w, w'|\epsilon, \tilde{\epsilon}) = f^{(1)}(w|\epsilon, \tilde{\epsilon}) + \sum^{k}_{i=1} \epsilon_i \log w'_i\;, \end{equation} where the block on the left-hand side is given in the original $z$-coordinates, while the block on the right-hand side is given in the new $w$-coordinates. Since the accessory parameters are the gradients of the conformal block \eqref{BPZ}, they transform as \begin{equation} \begin{array}{c} \displaystyle c^{(1)}_i \to \frac{1}{w'_i}\left(c^{(1)}_i- \epsilon_i \frac{w''_i}{w'_i}\right), \quad i=1,...,k\;, \\ \\ \displaystyle c^{(1)}_j \to c^{(1)}_j\;, \quad j = k+1,..., n\;, \end{array} \end{equation} which is exactly the definition \eqref{def_par}. Indeed, the prefactor in $c_i^{(1)}$ is the Jacobian, while the second term in the brackets is the derivative of $\epsilon_i\log w'_i$ in \eqref{con}. The gradient equations in the sector of the perturbative operators now read \begin{equation} \label{efb} \frac{\partial f^{(1)}(w|\epsilon, \tilde{\epsilon})}{\partial w_i} = X_i( w|\epsilon, \tilde{\epsilon})\;, \qquad i = 1,..., k\;. \end{equation} Thus, the conformal block function depends on the $n$ independent variables $x = \{z, {\bf z}\}$ only through the $k<n$ holographic coordinates $w_i$, $i=1,...,k$ \eqref{hv}. \paragraph{Two and more background operators.} To proceed, we recall that up to now all coordinates of the primary operators were kept arbitrary. Now, let the last three coordinates be fixed, $x_{_{\hspace{-0.5mm}fix}} = (\hat x_{n-2}, \hat x_{n-1}, \hat x_n)$. Since we always have two or more heavy background operators, either $x_{_{\hspace{-0.5mm}fix}} = (\hat z_{n-2}, \hat {\bf z}_{n-1}, \hat {\bf z}_{n})$ or $x_{_{\hspace{-0.5mm}fix}} = (\hat {\bf z}_{n-2}, \hat {\bf z}_{n-1}, \hat {\bf z}_{n})$. In a given OPE channel the coordinates $x_m$ with $m\leq n-3$ must be separated from $x_{_{\hspace{-0.5mm}fix}}$ through a particular OPE ordering. Supplementing the monodromy equations \eqref{mon1}, \eqref{mon2} with the linear constraints \eqref{linear1}, we obtain a system of $k+3$ independent conditions for the $n$ accessory parameters. It follows that the first-order accessory parameters of the background operators remain unfixed by the monodromy equations. In this respect, let us consider two different situations: \vspace{1mm} \noindent $\bullet$ Two background operators, i.e. $k=n-2$ perturbative operators. In this case, the equations \eqref{mon2} are absent and we have $n-3$ equations \eqref{mon1} for the $n-2$ variables $X_i$, $i=1,...,n-2$. Adding the three constraints \eqref{linear1} along with the two accessory parameters of the background operators $Y_{n-1}, Y_{n}$, we obtain in total $n$ equations for $n$ accessory parameters. The three constraints \eqref{linear1} can be solved for the three accessory parameters $X_{n-2}$, $Y_{n-1}$, $Y_n$ \eqref{def_par} of the operators located at $x_{_{\hspace{-0.5mm}fix}} = (\hat z_{n-2}, \hat {\bf z}_{n-1}, \hat {\bf z}_{n}) = (z_{n-2},1,\infty)$.
We note that the three constraints then depend only on the coordinates $z_i$, $i=1,...,n-2$. On the other hand, since the holographic map is invertible (see our comments below \eqref{hv}), we can introduce inverse functions $z_i = z_i(w|{\bf z})$ such that $w_i \circ z_i = 1$. Then, the three constraints can be rewritten in terms of the holographic variables, hence the accessory parameters still depend on the holographic variables only. \vspace{1mm} \noindent $\bullet$ Three or more background operators, i.e. $k\leq n-3$ perturbative operators. In this case, there are exactly $k$ equations \eqref{mon1}, \eqref{mon2} for the $k$ variables $X_i$, $i=1,...,k$. The three constraints \eqref{linear1} can be solved for the three accessory parameters $Y_m$ \eqref{def_par} with $m = n-2,n-1,n$ of the operators located at $x_{_{\hspace{-0.5mm}fix}} = (\hat {\bf z}_{n-2}, \hat {\bf z}_{n-1}, \hat {\bf z}_{n}) = (0,1,\infty)$. The other parameters $X_i$, $i=1,...,k$ and $Y_j$, $j=k+1,..., n-3$ remain independent. Then, recalling that the holographic variables \eqref{phv} are functions of the coordinates of the background insertions ${\bf z}_j$ and using \eqref{def_par}, we can evaluate the first derivatives of the perturbative block function to find the first-order accessory parameters of the background operators, \begin{equation} \label{rem_a_p} Y_j(w, w',{\bf z}|\epsilon, \tilde{\epsilon}) = \frac{\partial f^{(1)}(w|\epsilon, \tilde{\epsilon})}{\partial {\bf z}_j} \;, \qquad j = k+1,..., n-3\;. \end{equation} \paragraph{Uniformization property.} To summarize this section, we can formulate {\it the uniformization property} of L$^{k}$H$^{n-k}$ perturbative conformal blocks: in the holographic variables, the form of the $n$-point block function $f^{(1)}(w|\epsilon, \tilde{\epsilon})$ is determined by the $k$ perturbative operators only. In particular, using the holographic parameterization, instead of $n$ equations on the accessory parameters we essentially have $k<n$ equations. So, for instance, the calculation of the (known) 4-point LLHH block and of the $n$-point LLH$^{n-2}$ block is essentially the same and gives the same expression in the holographic variables, see Sections \bref{sec:G43B} and \bref{sec:G5B}. The only difference is that the holographic function \eqref{phv} is different for different numbers of the background operators, so that the perturbative block functions in the $z$-parameterization will be different as well. \section{Examples with two and three background operators} \label{sec:E} In this section, we utilize the holographic variables to work out a few examples of 4-point and 5-point conformal blocks with two and three background operators. In such cases, the background blocks are the known 2-point and 3-point functions of the operators located at $(1,\infty)$ and $(0,1,\infty)$. Hence, the heavy-light expansion can be elaborated in detail and the perturbative blocks $f^{(1)}(x|\epsilon, \tilde \epsilon)$ can be found explicitly.
The zeroth-order solutions to the Fuchsian equation are known for two background operators ($\epsilon_3 = \epsilon_4$ and $({\bf z}_3, {\bf z}_4) = (1,\infty)$) \cite{Fitzpatrick:2014vua} \begin{equation} \label{psi_2} \psi_{\pm}^{(0)}(y|{\bf z}) = (1-y)^{\frac{1\pm \alpha}{2}}\;, \end{equation} where $\alpha = \sqrt{1 - 4 \epsilon_4}$, and for three background operators ($\epsilon_3 \neq \epsilon_4 = \epsilon_5$ and $({\bf z}_3, {\bf z}_4, {\bf z}_5) =(0,1,\infty)$) \cite{Alkalaev:2019zhs} \begin{equation} \label{psi_3} \psi_{\pm}^{(0)}(y|{\bf z}) = (1-y)^{\frac{1+\alpha}{2}} y^{\frac{1\pm\beta}{2}}\, {}_2F_1\left(\frac{1\pm\beta}{2},\frac{1\pm\beta}{2}+ \alpha, 1\pm\beta, y\right)\;, \end{equation} where $\alpha = \sqrt{1 - 4 \epsilon_4}$ and $\beta = \sqrt{1 - 4 \epsilon_3}$. The respective holographic functions can be found explicitly, \begin{equation} \label{w_2} w(y|{\bf z}) = (1-y)^{\alpha}\;, \end{equation} \begin{equation} \label{w_3} w(y|{\bf z}) = y^{\beta}\, \frac{\, {}_2F_1\left(\frac{1+\beta}{2},\frac{1+\beta}{2}+ \alpha, 1+\beta, y\right)}{\, {}_2F_1\left(\frac{1-\beta}{2},\frac{1-\beta}{2}+ \alpha, 1-\beta, y\right)}\,. \end{equation} \subsection{4-point LLHH conformal block} \label{sec:G43B} Here, $x = (z_1, z_2, 1, \infty)$. In this case, the block is determined by two holographic variables $w_1 = w(z_1)$, $w_2 = w(z_2)$ \eqref{hv}, where $w(y)$ is given by \eqref{w_2}, and two accessory parameters $X_1, X_2$. The monodromy integrals \eqref{GS} read \begin{equation} \begin{array}{c} \displaystyle \frac{I^{(1)}_{++}}{2 \pi i} = X_1+ X_2 +\epsilon_1 w_1+ \epsilon_2 w_2 \,, \qquad \frac{I^{(1)}_{-+}}{2 \pi i} = -(X_1 + X_2) \,, \\ \\ \displaystyle \frac{I^{(1)}_{+-}}{2 \pi i} = X_1 w^2_1 + X_2 w^2_2 + 2 \epsilon_1 w_1 + 2 \epsilon_2 w_2\,. \end{array} \end{equation} The only monodromy equation \eqref{mon1} in this case reads \begin{equation} \label{m1} I^{(1)}_{++}I^{(1)}_{++} + I^{(1)}_{+-} I^{(1)}_{-+} = - 4\pi^2 \tilde{\epsilon}_1^2\;. \end{equation} Solving the constraints \eqref{linear1} yields a relation which can be rewritten in the form \begin{equation} \label{m2} I^{(1)}_{++} = 0\;. \end{equation} Thus we have two equations \eqref{m1} and \eqref{m2} for $X_1$ and $X_2$, which are solved by \begin{equation} \label{X4} \begin{array}{c} \displaystyle X_1 = \frac{\epsilon_1 + \epsilon_2}{w_2-w_1} + \frac{\epsilon_2 - \epsilon_1}{2 w_1} + \frac{\sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1-w_2)^2 + 4 \tilde{\epsilon}^2_1 w_1 w_2}}{2 w_1(w_2-w_1)}\;, \\ \\ \displaystyle X_2 = \frac{\epsilon_1 + \epsilon_2}{w_1-w_2} + \frac{\epsilon_2 - \epsilon_1}{2 w_2} - \frac{\sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1-w_2)^2 + 4 \tilde{\epsilon}^2_1 w_1 w_2}}{2 w_2(w_1-w_2)}\;, \end{array} \end{equation} where the sign of the radical term is fixed by the asymptotic behaviour of the resulting conformal block.
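The relations of Section \bref{sec:HoloV} are simple enough in the two-background case to be verified symbolically. The following sympy sketch (our illustration, not part of the original derivation) checks that both branches \eqref{psi_2} solve the zeroth-order equation with the stress tensor reduced to its double pole $\epsilon_4/(1-y)^2$ (an assumption of this sketch for the insertions at $(1,\infty)$), that the Wronskian \eqref{wronskian0} is $y$-independent, and that $w = (1-y)^{\alpha}$ \eqref{w_2} obeys the second relation in \eqref{phv}:
\begin{verbatim}
import sympy as sp

y, a = sp.symbols('y alpha', positive=True)

# Zeroth-order solutions for two background operators, Eq. (psi_2)
psi_p = (1 - y) ** ((1 + a) / 2)
psi_m = (1 - y) ** ((1 - a) / 2)

# Both branches solve psi'' + T0*psi = 0 with the double-pole stress
# tensor T0 = eps/(1-y)**2, where eps = (1 - alpha**2)/4, cf. (sing).
eps = (1 - a ** 2) / 4
for psi in (psi_p, psi_m):
    print(sp.simplify(sp.diff(psi, y, 2) + eps / (1 - y) ** 2 * psi))  # 0

# The Wronskian (wronskian0) is y-independent ...
W = -psi_p * sp.diff(psi_m, y) + psi_m * sp.diff(psi_p, y)
print(sp.simplify(W))                                # -> -alpha

# ... and w = psi_p/psi_m = (1-y)**alpha obeys w' = W/psi_m**2, Eq. (phv)
w = psi_p / psi_m
print(sp.simplify(sp.diff(w, y) - W / psi_m ** 2))   # -> 0
\end{verbatim}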
Integrating the gradient equations \eqref{efb} we find the perturbative 4-point LLHH block function \begin{equation} \label{pb4} \begin{array}{c} \displaystyle f^{(1)}(w| \epsilon, \tilde{\epsilon}) = - (\epsilon_1 + \epsilon_2) \log (w_1 - w_2) \\ \\ + (\epsilon_1 - \epsilon_2)\log \left( (\epsilon_1 - \epsilon_2)(w_1 - w_2) + \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_2 - w_1)^2 + 4 \tilde{\epsilon}^2_1 w_2 w_1 }\right) \\ \\ \displaystyle -\frac{\tilde\epsilon_1}{2}\log\left[\frac{\tilde \epsilon_1 (w_1 + w_2) + \sqrt {(\epsilon_1 - \epsilon_2)^2 (w_1 -w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2 }}{\tilde \epsilon_1 (w_1 + w_2) - \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1 -w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2}}\right]\;. \end{array} \end{equation} At $\epsilon_1 = \epsilon_2$ it reproduces the block function in the $w$-parameterization found in \cite{Anous:2019yku}. In particular, the identity block (by definition, $\tilde \epsilon_1 = 0$, whence $\epsilon_1 = \epsilon_2$) reads \begin{equation} \label{pbv4} \displaystyle f^{(1)}(w|\epsilon, 0) = - 2 \epsilon_1 \log (w_1 - w_2) \;. \end{equation} Going back to the $z$-parameterization by performing the conformal transformation \eqref{con}, we reproduce the 4-point block functions found in \cite{Fitzpatrick:2014vua,Hijano:2015rla}. E.g., the identity block \eqref{pbv4} will be given by \begin{equation} \label{pbv40} \displaystyle f^{(1)}(z|\epsilon, 0) = \log\left[\frac{w'(z_1) w'(z_2)}{(w(z_1) - w(z_2))^2}\right]^{\epsilon_1}, \end{equation} with the holographic function \eqref{w_2} (the same expression was obtained in \cite{Fitzpatrick:2016mtp} by a different method). \subsection{4-point LHHH conformal block} \label{sec:G4B} Here, $x= (z_1, {\bf z}_2, {\bf z}_3, {\bf z}_4)$. The block is determined by one holographic variable $w_1 = w(z_1)$ \eqref{hv}, where $w(y)$ is given by \eqref{w_3}, and one accessory parameter $X_1$. The only monodromy equation \eqref{mon2} and its solution read \begin{equation} \label{m4} I^{(1)}_{++} \equiv X_1 w_1 + \epsilon_1 = 0\;, \qquad\; X_1 = - \frac{\epsilon_1}{w_1}\;. \end{equation} Also, the background external and intermediate dimensions are restricted by \eqref{fusion}: $\tilde \epsilon_1 = \epsilon_2$. The respective block is found by integrating \eqref{efb}, \begin{equation} \label{4V} f^{(1)}(w_1|\epsilon, \tilde\epsilon) = - \epsilon_1 \log w_1\;. \end{equation} Going back to the $z$-parameterization by using \eqref{w_3}, the above function reproduces the 4-point LHHH block found in \cite{Alkalaev:2019zhs}. \subsection{5-point LLHHH conformal block} \label{sec:G5B} Here, $x = (z_1, z_2,{\bf z}_3, {\bf z}_4, {\bf z}_5)$. The block is determined by two holographic variables $w_1 = w(z_1)$, $w_2 = w(z_2)$ \eqref{hv}, where $w(y)$ is now given by \eqref{w_3}, and two accessory parameters $X_1, X_2$. The monodromy integrals \eqref{GS} read \begin{equation} \label{S5} \begin{array}{c} \displaystyle \frac{I^{(1)}_{++}}{2 \pi i} = X_1+ X_2 +\epsilon_1 w_1+ \epsilon_2 w_2 \,, \qquad \frac{I^{(1)}_{-+}}{2 \pi i} = -(X_1 + X_2) \,, \\ \\ \displaystyle \frac{I^{(1)}_{+-}}{2 \pi i} = X_1 w^2_1 + X_2 w^2_2 + 2 \epsilon_1 w_1 + 2 \epsilon_2 w_2\,. \end{array} \end{equation} The monodromy equations \eqref{mon1} and \eqref{mon2} are given by \begin{equation} \label{m5} I^{(1)}_{++}I^{(1)}_{++} + I^{(1)}_{+-} I^{(1)}_{-+} = - 4\pi^2 \tilde{\epsilon}_1^2\;, \qquad I^{(1)}_{++} =0\;. \end{equation} The background external and intermediate dimensions are restricted by \eqref{fusion}: $\tilde \epsilon_2 = \epsilon_3$.
The solution to \eqref{m5} is given by \begin{equation} \label{X5} \begin{array}{c} \displaystyle X_1 = \frac{\epsilon_1 + \epsilon_2}{w_2-w_1} + \frac{\epsilon_2 - \epsilon_1}{2 w_1} + \frac{\sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1-w_2)^2 + 4 \tilde{\epsilon}^2_1 w_1 w_2}}{2 w_1(w_2-w_1)}\;, \\ \\ \displaystyle X_2 = \frac{\epsilon_1 + \epsilon_2}{w_1-w_2} + \frac{\epsilon_2 - \epsilon_1}{2 w_2} - \frac{\sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1-w_2)^2 + 4 \tilde{\epsilon}^2_1 w_1 w_2}}{2 w_2(w_1-w_2)}\;, \end{array} \end{equation} which literally coincides with \eqref{X4}, again up to the sign of the radical term fixed by the asymptotic behaviour. Integrating the gradient equations \eqref{efb} we find the perturbative 5-point LLHHH block function \begin{equation} \label{pb5} \begin{array}{c} \displaystyle f^{(1)}(w| \epsilon, \tilde{\epsilon}) = - (\epsilon_1 + \epsilon_2) \log (w_1 - w_2) \\ \\ + (\epsilon_1 - \epsilon_2)\log \left( (\epsilon_1 - \epsilon_2)(w_1 - w_2) + \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_2 - w_1)^2 + 4 \tilde{\epsilon}^2_1 w_2 w_1 }\right) \\ \\ \displaystyle -\frac{\tilde\epsilon_1}{2}\log\left[\frac{\tilde \epsilon_1 (w_1 + w_2) + \sqrt {(\epsilon_1 - \epsilon_2)^2 (w_1 -w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2 }}{\tilde \epsilon_1 (w_1 + w_2) - \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1 -w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2}}\right]\;. \end{array} \end{equation} In particular, the identity block (by definition, $\tilde \epsilon_1 = 0$, whence $\epsilon_1 = \epsilon_2$) reads \begin{equation} \label{pbv5} \displaystyle f^{(1)}(w|\epsilon, 0) = - 2 \epsilon_1 \log (w_1 - w_2) \;. \end{equation} Note that the above monodromy equations are exactly the same as those in the LLHH case \eqref{m1} and \eqref{m2}; hence, the accessory parameters \eqref{X4} and \eqref{X5} are also the same. In this way, we demonstrate the uniformization property formulated at the end of Section \bref{sec:HoloV}: in the holographic variables, the LLHH block \eqref{pb4} and the LLHHH block \eqref{pb5} have the same form. On the other hand, substituting the functions \eqref{w_2} and \eqref{w_3} we will obtain, of course, different block functions in $z$-coordinates. \section{Conformal blocks as geodesic trees} \label{sec:bulk} The holographic function introduced earlier to describe the perturbative blocks also occurs when describing the dual bulk geometry. It allows one to identify the dual space as the three-dimensional AdS$_3[n-k]$ space with $n-k$ conical singularities created by the background heavy operators. We explicitly show that the 5-point LLHHH perturbative block is calculated by the length of a particular geodesic tree in AdS$_3[3]$. The geodesic tree is the same as for the 4-point LLHH perturbative block but in AdS$_3[2]$. \subsection{Dual geometry} \label{sec:BM} Let us consider the three-dimensional metric in the Ba\~{n}ados form \cite{Banados:1998gg} \begin{equation} \label{Banados} ds^2 = -H(z) dz^2 -\bar{H}(\bar{z})d\bar{z}^2 + \frac{u^2}{4}\, H(z) \bar{H}(\bar{z}) \, dz d\bar z+ \frac{du^2 + dz d\bar{z}}{u^2}\;, \end{equation} where $u\geq 0$, $z, \bar z \in \mathbb{C}$, $H, \bar H$ are (anti)holomorphic functions on $\mathbb{C}$, and the AdS radius is set to one. In the context of the AdS$_3$/CFT$_2$ correspondence, the function $H(z)$ is related to the stress tensor of background operators in $CFT_2$ by \begin{equation} \label{TH} T(z) = \frac{c}{6} H(z)\;, \end{equation} where the central charge $c = 3R/(2 G_N)$ \cite{Brown:1986nw,Balasubramanian:1999re}.
Under the boundary conformal transformations $z \rightarrow w(z)$ the stress tensor changes as \begin{equation} \label{trans} H(z) = \left(w^\prime\right)^2 H(w) + \frac{1}{2}\, \{w,z\}\;, \qquad \text{where} \quad \{w,z\} = \frac{w'''}{w'} - \frac{3}{2}\left(\frac{w''}{w'} \right)^{2}\;. \end{equation} The Ba\~{n}ados metric \eqref{Banados} can be cast into the Poincare form \begin{equation} \label{PP} ds^2 = \frac{dv^2+dq d\bar{q}}{v^2}\;, \end{equation} with $v\geq 0$, $q, \bar q \in \mathbb{C}$, by changing the coordinates as follows \cite{Roberts:2012aq} \begin{equation} \label{coot} \begin{gathered} q(z,\bar z,u) = w(z) - \frac{2 u^2 w^\prime(z)^2 \bar w^{\prime\prime}(\bar z)}{4w^\prime(z)\bar w^{\prime}(\bar z)+u^2 w^{\prime\prime}(z)\bar w^{\prime\prime}(\bar z)} \;, \\ \bar q(z,\bar z,u) = \bar w(\bar z) - \frac{2 u^2 \bar w^\prime(\bar z)^2 w^{\prime\prime}(z)}{4w^\prime(z)\bar w^{\prime}(\bar z)+u^2 w^{\prime\prime}(z)\bar w^{\prime\prime}(\bar z)} \;, \\ v(z,\bar z,u) = u\, \frac{4\left( w^\prime(z)\bar w^\prime(\bar z)\right)^{3/2}}{4w^\prime(z)\bar w^{\prime}(\bar z)+u^2 w^{\prime\prime}(z)\bar w^{\prime\prime}(\bar z)}\;, \end{gathered} \end{equation} where the function $w(z)$ solves the equation (see \cite{Asplund:2014coa,Fitzpatrick:2015zha,Cresswell:2018mpj,Alkalaev:2019zhs} for more details) \begin{equation} \label{c_sch} \{w,z\} = \frac{1}{2} H(z)\;. \end{equation} It is remarkable that the solution $w(z)$ can be constructed by means of two independent solutions to the auxiliary Fuchsian equation $\psi''(z)+ H(z)\psi(z) =0$ as \begin{equation} \label{w_dual} w(z) = \frac{A \psi_1(z) + B \psi_2(z)}{C \psi_1(z) + D \psi_2(z)}\;, \qquad A D - B C \neq 0\;. \end{equation} Identifying the stress tensor of the background operators with the metric-defining function $H(z)$ according to \eqref{TH}, we immediately conclude that $\psi_{1,2}(z)$ can be considered as solutions \eqref{sol0} to the auxiliary Fuchsian equation of the monodromy method at zeroth order \eqref{fuchs0}. It follows that the mapping function \eqref{w_dual} is exactly the holographic function \eqref{phv}: its values at the positions of the boundary primary operators define the holographic variables \eqref{hv}. Finally, the length of a geodesic stretched between two points $(q_1, \bar{q}_1, v_1)$ and $(q_2, \bar{q}_2, v_2)$ is given by \begin{equation} \label{bg} \mathcal{L} = \text{arccosh}\;P = \log (P + \sqrt{P^2 -1})\;, \qquad P = \frac{(q_1 - q_2)(\bar q_1 - \bar q_2)+ v_1^2 +v_2^2}{2v_1 v_2}\;. \end{equation} Here $\mathcal{L}$ is a real-valued function of the endpoint coordinates $(q_i, \bar{q}_i, v_i)$, $i =1, 2$. In the sequel, we will consider geodesic graphs composed of several geodesic segments lying on the surface $q\bar{q} + v^2 = 1$. In the global coordinates this surface is mapped onto the fixed-time slice.\footnote{Such a condition is convenient, but not necessary. See also Fig. \bref{figure}.} Thus, the total length $\mathcal{L}$ can be expressed in terms of local coordinates $(\eta,\bar{\eta})$ on this 2-dimensional surface and factorized into the sum of holomorphic $\mathcal{L}(\eta)$ and antiholomorphic $\bar{\mathcal{L}}(\bar{\eta})$ functions, \begin{equation} \label{hl} 2\mathcal{L} = \log \left(X(\eta)\bar{X}(\bar{\eta}) \right) \equiv \mathcal{L}(\eta) + \bar{\mathcal{L}}(\bar{\eta})\;, \quad \text{where} \quad\sqrt{X\bar{X} } \equiv P + \sqrt{P^2 - 1}\;.
\end{equation} Finally, note that for a given geodesic graph with a number of boundary attachments the (anti-)holomorphic lengths are functions of the boundary endpoint coordinates. \subsection{Geodesic trees} \label{sec:GT} In this section we consider the geometry AdS$_3[3]$ created by three background operators, and the geodesic trees dual to the LHHH and LLHHH perturbative blocks. Since the zeroth-order stress tensor \eqref{TH} has three singular points, in the Ba\~{n}ados coordinates $(z, \bar{z}, u)$ these operators create three singular lines: $(0, 0, u), (1, 1, u)$ and $(\infty, \infty, u)$ stretched along $u\geq 0$. In the Poincare coordinates, the geometry is completely determined by the properties of the function $w(z)$ given by \begin{equation} \label{dw} \displaystyle w(z) = z^{\beta}\, \frac{_2F_1\left(\frac{1+\beta}{2},\frac{1+\beta}{2}+ \alpha, 1+\beta, z\right)}{_2F_1\left(\frac{1-\beta}{2},\frac{1-\beta}{2}+ \alpha, 1-\beta, z\right)}\;. \end{equation} By construction, this is the same function as \eqref{w_3}. Near the singular points $(0, 1, \infty)$ it behaves as \begin{equation} \label{asymptotics} \begin{aligned} z\to 0: \qquad\;& w(z) \sim z^{\beta}(1+ \mathcal{O}(z))\;,\\[2pt] z\to 1: \qquad\;& w(z) \sim (1-z)^{-\alpha}(1+ \mathcal{O}(1-z))\;,\\[2pt] z\to \infty: \qquad & w(z) \sim z^{-\alpha}(1+ \mathcal{O}(1/z))\;, \\[2pt] \end{aligned} \end{equation} where $\sim$ means that the coefficients in the Laurent series near these points are omitted. The function \eqref{dw} is known as the Schwarz triangle function, which maps the complex plane $(z, \bar{z})$ onto a curvilinear Schwarz triangle on the plane $(w, \bar{w})$ with vertices at the points $w(0), w(1), w(\infty)$ \cite{nehari}, \begin{equation} \label{values} w(0) = 0\;, \qquad w(1) = \infty\;, \qquad w(\infty) = e^{i \pi \beta} \; \frac{\Gamma(1+\beta) \; \Gamma(\frac{1-\beta}{2} + \alpha) \; \Gamma(\frac{1-\beta}{2})}{\Gamma(1-\beta) \; \Gamma(\frac{1+\beta}{2} + \alpha)\; \Gamma(\frac{1+\beta}{2})}\;. \end{equation} The asymptotic behaviour \eqref{asymptotics} suggests that near the singular points corresponding to the background operators the Schwarz triangle describes angle excesses/deficits: angle deficits $\beta$ and $\alpha$ at $0$ and $\infty$, and an angle excess $-\alpha$ at $1$. Let us now consider the singular lines of the background operators in the Poincare coordinates $(q, \bar{q}, v)$. From the asymptotics \eqref{asymptotics} we find that \begin{equation} \begin{array}{l} z\to 0\;\;:\qquad v(z,\bar z,u) \sim \; u^{-1}(z \bar z)^{\frac{1+\beta}{2}}(1+\mathcal{O}(z \bar z))\;, \vspace{2mm} \\ z\to 1\;\;\;:\qquad v(z,\bar z,u) \sim \; u^{-1}\left[(1-z)(1-\bar z)\right]^{\frac{1-\alpha}{2}}(1+\mathcal{O}((1-z)(1-\bar z)))\;, \vspace{2mm} \\ z\to \infty\;:\qquad v(z,\bar z,u) \sim u\; (z\bar z)^{-\frac{1+\alpha}{2}}(1+\mathcal{O}(1/(z\bar z)))\;. \end{array} \end{equation} Hence, the singular lines in the Ba\~{n}ados coordinates are mapped into the boundary ($v=0$) points $w(0), w(1), w(\infty)$ \eqref{values} in the Poincare coordinates, which correspond to the vertices of the Schwarz triangle (see \cite{Alkalaev:2019zhs} for more details).
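As a quick numerical sanity check of the leading asymptotics \eqref{asymptotics} (an illustration of ours, with arbitrarily chosen sample values of $\alpha$ and $\beta$ in the range \eqref{alpha}), one can evaluate \eqref{dw} directly with mpmath:
\begin{verbatim}
from mpmath import hyp2f1, mpf

alpha, beta = mpf('0.3'), mpf('0.4')   # arbitrary sample values

def w(z):
    # Schwarz triangle function, Eq. (dw)
    num = hyp2f1((1 + beta) / 2, (1 + beta) / 2 + alpha, 1 + beta, z)
    den = hyp2f1((1 - beta) / 2, (1 - beta) / 2 + alpha, 1 - beta, z)
    return z ** beta * num / den

for z in (mpf('1e-3'), mpf('1e-6')):
    print(w(z) / z ** beta)   # -> 1 + O(z): w(z) ~ z**beta near z = 0
\end{verbatim}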
The general claim of the AdS$_3$/CFT$_2$ correspondence in the large-$c$ regime within the heavy-light expansion reduces to the correspondence formula that relates $n$-point L$^k$H$^{n-k}$ perturbative blocks and holomorphic geodesic lengths \begin{equation} \label{correspondence} f_{(k, n-k)}^{(1)}(w|\epsilon, \tilde{\epsilon})\, = \, -\mathcal{L}_{_{\hspace{-0.5mm}AdS_{_3}[n-k]}}(w|\epsilon, \tilde{\epsilon})\;, \end{equation} where a geodesic tree is stretched in the AdS$_3[n-k]$ space with $n-k$ singularities created by the $n-k$ background heavy operators. The uniformization of perturbative blocks suggests that the form of the geodesic trees depends only on the number of perturbative operators. In the bulk, the holographic variables $w_i$, $i=1,...,k$ appear as coordinates of the boundary attachments of the perturbative operators. \paragraph{LHHH block.} The corresponding geodesic tree is a line connecting a boundary point $(w_1, \bar w_1, \varepsilon)$, where the cut-off $\varepsilon \to 0$, and the selected bulk point $(0, 0, 1)$ \cite{Alkalaev:2019zhs}. The latter is the intersection point of the surface $q \bar{q} + v^2 = 1$ and the line $(0,0,v)$. This geometrical construction is most manifest in global coordinates, where AdS$_3$ is a cylinder. The 2-surface is mapped to a fixed-time slice while the line $(0,0,v)$ is mapped to a line going through the center of the cylinder along the time direction (the vertical red line $(0,0, \tau)$ in Fig. \bref{figure}). Such a line can be visualized as one of the legs of the 3-vertex of background operators that created the background geometry. \begin{figure}[H] \centering \begin{minipage}[h]{0.35\linewidth} \includegraphics[width=1\linewidth]{block} \end{minipage} \qquad\qquad \centering \begin{minipage}[h]{0.15\linewidth} \includegraphics[width=1\linewidth]{cylinder} \end{minipage} \caption{The 4-point LHHH block and its holographically dual realization in the three-dimensional bulk (a rigid cylinder) in global coordinates $\tau \in (-\infty, +\infty)$, $\phi \in [0,2\pi)$, $\rho \in [0, \pi/2)$. The red lines inside the cylinder visualize the 3-point function $\langle \mathcal{O}_H \mathcal{O}_H \mathcal{O}_H\rangle$ of heavy operators that created this conical defect geometry. The wavy blue line denotes the perturbative operator $\mathcal{O}_L$ propagating in the background. Here $\theta \equiv \tau+i \phi = w_1$. The surface $q\bar q + v^2 = A^2,\; A > 0$ in Poincare coordinates is realized in global coordinates as a fixed-$\tau$ slice, $\tau = 2 \log A$. } \label{figure} \end{figure} Expanding \eqref{bg} in the cut-off parameter, we find that, according to the (anti-)holomorphic representation \eqref{hl}, the length function is given by \begin{equation} 2\mathcal{L} \equiv \mathcal{L}_{_{\hspace{-0.5mm}AdS_{_3}[3]}}(w|\epsilon) + \bar \mathcal{L}_{_{\hspace{-0.5mm}AdS_{_3}[3]}}(\bar w|\epsilon) = \epsilon_1 \log w_1 + \epsilon_1 \log \bar w_1 \;. \end{equation} Its holomorphic part coincides, via \eqref{correspondence}, with the 4-point LHHH perturbative block \eqref{4V}. \paragraph{LLHHH block.} Let us first consider a geodesic arc stretched between two boundary points $(w_1,\bar w_1, \varepsilon)$ and $(w_2,\bar w_2, \varepsilon)$, where the cut-off $\varepsilon \rightarrow 0$. Expanding the length function \eqref{bg} in the cut-off, we find the weighted length of the arc \begin{equation} \mathcal{L}_{_{\hspace{-0.5mm}AdS_{_3}[3]}}(w|\epsilon) + \bar \mathcal{L}_{_{\hspace{-0.5mm}AdS_{_3}[3]}}(\bar w|\epsilon) = 2\epsilon_1 \log (w_2 - w_1)+2\epsilon_1 \log (\bar w_2 - \bar w_1)\;.
\end{equation} This function (its holomorphic part) coincides with the identity 5-point LLHHH perturbative block given by \eqref{pbv5}. Now, we consider a geodesic tree with a single trivalent vertex connecting three edges. Two edges are attached to the conformal boundary at $(w_1, \bar{w}_1, \varepsilon)$ and $(w_2, \bar{w}_2, \varepsilon)$, where the cut-off $\varepsilon \rightarrow 0$, while the third edge ends at the selected point $(0,0,1)$ in the bulk. The vertex is the Fermat–Torricelli point $(q, \bar{q}, v)$ which minimizes the corresponding weighted length function. Then, using \eqref{bg} and the condition $q\bar{q} + v^2 = 1$ we compose the lengths of the three geodesic segments as \begin{equation} \label{length} \begin{array}{c} \displaystyle 2 \mathcal{L} = \sum_{i=1}^2\epsilon_i\log \frac{(q - w_i)(\bar{q} - \bar{w}_i)}{1 - q\bar{q}} + \tilde{\epsilon}_1 \log \frac{1 + \sqrt{q\bar{q}}}{1 - \sqrt{q\bar{q}}}\;. \end{array} \end{equation} Representing $q = t \exp[i \phi]$ and minimizing \eqref{length} with respect to $(t, \phi)$ we find that the Fermat–Torricelli point is given by \begin{equation} t = \frac{\left(4 (w_1 w_2)^{\frac{1}{2}} - (\epsilon_1 - \epsilon_2)^2 (w_2 - w_1)^2\right)^{\frac{1}{2}} - a_1 (w_2 - w_1)}{ w_2 + w_1 + a_2 (w_2 - w_1)}\;, \qquad \cos \phi = \frac{a_3t^2 +2t + a_3}{t^2+2 a_3 t+ 1}\;, \end{equation} where \begin{equation} a_1 = (\epsilon_1 + \epsilon_2) a_2 \;, \qquad a_2 = \left(\frac{\tilde{\epsilon}^2_1 - (\epsilon_1 - \epsilon_2)^2}{\tilde{\epsilon}^2_1 - (\epsilon_1 + \epsilon_2)^2}\right)^{\frac{1}{2}}, \qquad a_3 = \frac{\epsilon^2_2-\epsilon^2_1 - \tilde{\epsilon}^2_1}{\epsilon_1 \tilde{\epsilon}_1}\;. \end{equation} Substituting these expressions into \eqref{length} we obtain the holomorphic part of the length function \begin{equation} \begin{array}{c} \label{GBWL} \displaystyle \cL_{_{\hspace{-0.5mm}AdS_{_3}[3]}}(w|\epsilon, \tilde \epsilon) = (\epsilon_1 + \epsilon_2) \log (w_1 - w_2) \\ \\ - (\epsilon_1 - \epsilon_2)\log \left( (\epsilon_1 - \epsilon_2)(w_1 - w_2) + \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1 - w_2)^2 + 4 \tilde{\epsilon}^2_1 w_2 w_1 }\right)\\ \\ \displaystyle +\frac{ \tilde\epsilon_1}{2}\log\left[\frac{\tilde \epsilon_1 (w_1 + w_2) + \sqrt {(\epsilon_1 - \epsilon_2)^2 (w_1 -w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2 }}{\tilde \epsilon_1(w_1 + w_2) - \sqrt{(\epsilon_1 - \epsilon_2)^2 (w_1 - w_2)^2 + 4\tilde{\epsilon}^2_1 w_1 w_2}}\right]\;, \end{array} \end{equation} which reproduces the 5-point LLHHH perturbative block \eqref{pb5}.
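As a cross-check of the algebra above, the Fermat–Torricelli vertex can also be located by direct numerical minimization of the weighted length \eqref{length}. The sketch below assumes {\tt scipy}; the weights are illustrative values chosen to satisfy $|\epsilon_1-\epsilon_2| < \tilde\epsilon_1 < \epsilon_1+\epsilon_2$, and the boundary attachments are placed away from the unit disk so that the functional stays finite. The resulting minimum can then be compared against the closed-form expression \eqref{GBWL}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

eps1, eps2, teps1 = 0.05, 0.03, 0.06   # epsilon_1, epsilon_2, tilde-epsilon_1
w1, w2 = 1.5 + 0.5j, 2.0 - 0.8j        # boundary attachments (illustrative)

def two_L(p):
    t, phi = p
    q = t*np.exp(1j*phi)               # vertex on the surface q qbar + v^2 = 1
    L = (eps1*np.log(abs(q - w1)**2/(1.0 - t*t))
         + eps2*np.log(abs(q - w2)**2/(1.0 - t*t)))
    return L + teps1*np.log((1.0 + t)/(1.0 - t))

res = minimize(two_L, x0=[0.5, 0.3],
               bounds=[(1e-6, 1.0 - 1e-6), (-np.pi, np.pi)])
print(res.x, res.fun)   # (t, phi) of the Fermat-Torricelli point and 2L there
\end{verbatim}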
\section{Summary} \label{sec:sum} We showed that using the holographic variables allows one to formulate the uniformization property of $n$-point L$^k$H$^{n-k}$ perturbative blocks, which states that their form essentially depends on the number of perturbative operators $k$ and not on the background operators. In other words, the perturbative conformal block function can be reorganized so that all coordinates are packed into $k$ functions of the original coordinates. In this new parameterization, the $n$-point block function has the same form for any given number $k$. The uniformization property for large-$c$ blocks was originally established for 4-point LLHH blocks \cite{Fitzpatrick:2015zha}, where the coordinate transformation eliminating the dependence on the background operators was understood using the standard CFT$_2$ technique in which conformal blocks are represented through matrix elements of Virasoro states.\footnote{In particular, this resulted in the method to calculate large-$c$ perturbative blocks starting from the known $sl(2)$ global blocks \cite{Fitzpatrick:2015zha} (see also the case of 5-point LLLHH blocks \cite{Alkalaev:2015fbw}).} The uniformizing coordinate transformation in the framework of the monodromy method was considered in \cite{Anous:2019yku} in the case of two background operators. Unlike the discussion in Section \bref{sec:HoloV}, the $w$-coordinate transformation in \cite{Anous:2019yku} is implemented already at the level of the Fuchsian equation, so that the monodromy problem is reduced to studying the regularity of the new stress tensor on the $w$-plane. In our case, we follow the standard monodromy analysis, noticing that the $z$-dependence can be packed into some new functions and their derivatives \eqref{phv}. A true coordinate change is performed only at the final stage, when the complete set of algebraic equations for the accessory parameters is formulated, see \eqref{con}--\eqref{efb}. In practice, it would be useful to relate the two approaches. The remarkable form of the monodromy integrals \eqref{GS} suggests that they can be related to the regularity of the stress tensor in the new parameterization (see also our comments below \eqref{GS}). The holographic variables are indeed holographic, as they reappear in the bulk analysis as the boundary coordinates of the perturbative operators in the three-dimensional space AdS$_3[n-k]$ with $n-k$ conical singularities produced by the background operators. From this perspective, the uniformization property is more obvious, because it is quite natural that $k$ perturbative operators produce the same geodesic tree no matter how many background operators created the bulk space. In fact, the background operators with dimensions $\Delta < c/24$ (cf. \eqref{alpha}) produce conical singularities, so that AdS$_3[n-k]$ is locally AdS$_3$. By casting the original Ba\~{n}ados metric into the Poincare form, all dependence on the positions of the background operators is hidden inside the mapping function and its domain of definition. It turns out that the same function defines the holographic coordinates in the boundary CFT$_2$, because the same Fuchsian equation underlies both the bulk and boundary calculations. Here, the Schwarz triangle function \eqref{dw}, which is the mapping function in AdS$_3[3]$, and the holographic function \eqref{w_3} in CFT$_2$ clearly illustrate all the details. We have explicitly demonstrated this machinery for LLHH and LLHHH perturbative blocks. Going beyond three background operators faces the problem that explicit expressions are lacking for the higher-point conformal blocks\footnote{For recent studies of higher-point conformal blocks, see \cite{Rosenhaus:2018zqn,Parikh:2019ygo,Jepsen:2019svc,Parikh:2019dvm}.} that define the background part H$^{n-k}$ of the original $n$-point large-$c$ block function. Nonetheless, the uniformization property claims that the perturbative L$^k$ part will be the same. The large-$c$ multi-point Virasoro blocks considered in the present paper can be used in the study of many interesting physics problems.
Here, the main up-to-date application can be found in analyzing the entanglement entropy, which according to \cite{Calabrese:2004eu,Calabrese:2009qy} basically reduces to calculating higher-point correlators of heavy operators ($\Delta = \mathcal{O}(c)$). For recent studies of large-$c$ conformal blocks in this context see, e.g., \cite{Hartman:2013mia,Banerjee:2016qca, Anous:2019yku}. Another possible application is related to quantum chaos and its characterization known as out-of-time ordered correlators (OTOC) (see, e.g., the recent \cite{Kusuki:2019gjs} and references therein). Also, let us note that the monodromy approach in terms of holographic variables can be applied to systems enjoying a large-$N$ expansion and symmetries other than Virasoro. For example, it would be interesting to identify holographic variables for BMS$_3$ blocks considered in the large-$c$ regime in the context of flat-space holography (see e.g. \cite{Bagchi:2016geg,Hijano:2019qmi,Hijano:2018nhq}). \vspace{4mm} \noindent \textbf{Acknowledgements.} The work was supported by the RFBR grant No 18-02-01024 and by the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”.
\section{Introduction}\label{sec:introduction} One of the most important experimental findings reported by the H1 and ZEUS collaborations at the HERA collider, operating at a center-of-mass energy of about $\sqrt{s} = 318$~GeV, is the observation of a significant fraction, about 8--10\%, of large rapidity gap events in diffractive deep inelastic scattering (DIS) processes~\cite{ZEUS:2008xhs,H1:2006zyl,ZEUS:2009uxs,H1:2006zxb,H1:2007oqt}. Such diffractive DIS processes allow us to define non-perturbative diffractive parton distribution functions (diffractive PDFs) that can be extracted from a QCD analysis of the relevant data~\cite{H1:2006zyl,ZEUS:2009uxs}. According to the factorization theorem, the diffractive cross-section can be expressed as a convolution of the diffractive PDFs and the partonic hard-scattering cross-sections of the subprocess, which are calculable within perturbative QCD. The diffractive PDFs have properties very similar to the standard PDFs; in particular, they obey the same standard DGLAP evolution equations~\cite{Dokshitzer:1977sg,Gribov:1972ri,Lipatov:1974qm,Altarelli:1977zs}. However, they carry an additional constraint due to the presence of a leading proton (LP) in the final state of diffractive processes, $\ell (k) + p(P) \rightarrow \ell (k^{\prime}) + p(P^{\prime}) + X(p_{X})$. Considering the diffractive factorization theorem, the diffractive PDFs can be extracted by a QCD fit from the reduced cross-sections of inclusive diffractive DIS data. It should be noted that if the factorization theorem were violated in hadron-hadron scattering, there would be no universality, for example for diffractive jet production in hadron-hadron collisions~\cite{Collins:1992cv,Wusthoff:1999cr}. Starting from perturbative QCD, in the first approximation, diffractive DIS is described in the dipole framework and is formed by the quark-antiquark (${q} \bar{{q}}$) and quark-antiquark-gluon (${q} \bar{q}g$) systems. To date, several groups have extracted the diffractive PDFs from QCD analyses of the diffractive DIS data at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) accuracy in perturbative QCD~\cite{Goharipour:2018yov,Khanpour:2019pzq,ZEUS:2009uxs,Ceccopieri:2016rga,H1:2006zyl,Maktoubian:2019ppi}. In Ref.~\cite{Goharipour:2018yov}, the authors presented the first NLO determination of the diffractive PDFs and their uncertainties within the {\tt xFitter} framework~\cite{Alekhin:2014irh,xFitterDevelopersTeam:2022koz}. Ref.~\cite{Khanpour:2019pzq} presented the first NNLO determination of the diffractive PDFs, using the framework of fracture functions in the QCD analysis~\cite{Trentadue:1993ka,deFlorian:1998rj,Ceccopieri:2016rga}. Some of these studies, such as ZEUS-2010-dPDFs~\cite{ZEUS:2009uxs}, also include dijet cross-section measurements in order to better constrain the gluon PDF. In this paper, we report a new QCD fit of diffractive PDFs to the HERA inclusive diffractive DIS data at NLO in perturbative QCD within the {\tt xFitter} framework~\cite{Alekhin:2014irh}. The present fit also includes the high-precision H1/ZEUS combined measurements of the diffractive DIS cross-section~\cite{H1:2012xlc}. The inclusion of the most recent HERA combined data, together with the twist-4 corrections from longitudinal virtual photons and the contribution of subleading Reggeon exchanges to the structure function, provides a well-established set of diffractive PDFs.
We show that these corrections, in particular the twist-4 contribution, allow us to explore the high-$\beta$ region, and that the inclusion of the subleading Reggeons gives the best description of the diffractive DIS data. In addition, by considering such corrections, one can relax the kinematical cuts that need to be applied to the data. This paper is organized in the following way: In Sec.~\ref{Theoretical-Framework} we discuss the theoretical framework of the {\tt SKMHS22} diffractive PDFs determination, including the computation of the diffractive DIS cross-section, the evolution of the diffractive PDFs, and the corresponding factorization theorem. This section also includes our choice of physical parameters and the heavy quark contributions to the diffractive DIS processes. The higher-twist contribution considered in the {\tt SKMHS22} QCD analysis is also discussed in detail in this section. In Sec.~\ref{global-QCD-analysis} we present the details of the {\tt SKMHS22} diffractive PDFs global QCD analysis and the fitting methodology. Specifically, we focus on the {\tt SKMHS22} parametrization, the minimization strategy, and the method of uncertainty estimation. We also present the diffractive data sets used in the {\tt SKMHS22} analysis, along with the corresponding observables and the kinematic cuts applied to the data samples. In Sec.~\ref{sec:results} we present the {\tt SKMHS22} sets in detail. The perturbative convergence upon inclusion of the higher-twist corrections, the fit quality, and the theory/data comparison are presented and discussed in this section as well. Finally, in Sec.~\ref{Conclusion} we summarize our findings and outline possible future developments. \section{Theoretical Framework}\label{Theoretical-Framework} In the following section, we describe the standard perturbative QCD (pQCD) framework for a typical event with a large rapidity gap (LRG) in diffractive DIS processes. We discuss in detail the calculation of the reduced diffractive DIS cross-section, the relevant factorization theorem, and our approach to the heavy flavor contributions. We also provide the details of the diffractive structure function (diffractive SF), taking the twist-4 and Reggeon corrections into account. \subsection{Diffractive DIS cross section}\label{sec:Diffractive-DIS-cross-section} In Fig.~\ref{fig:Feynman}, we display the Feynman diagram for diffractive DIS in the single-photon approximation. In the neutral current (NC) diffractive DIS process $ep \rightarrow ep{X}$, an incoming positron or electron with four-momentum $k$ scatters off an incoming proton with four-momentum $P$. As one can see from the Feynman diagram, in the final state the proton with four-momentum $P^\prime$ remains intact, and there is a rapidity gap between the final-state proton and the diffractive system $X$; the outgoing lepton carries four-momentum $k^\prime$. In order to calculate the reduced diffractive cross-section for such a process, one needs to introduce the standard set of kinematical variables \begin{equation} \label{eq:kinematic-variable} Q^{2} = -q^{2} = -({k} - {k}^{\prime})^{2}\,, \quad {y} = \frac{P \cdot q}{P \cdot k}\,, \quad {x} = \frac{-q^{2}}{2 P \cdot q}\,, \end{equation} which are the photon virtuality $Q^{2}$, the inelasticity ${y}$, and the Bjorken variable $x$, respectively.
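These invariants are straightforward to evaluate from the four-momenta. A minimal sketch follows (the numerical four-momenta are illustrative HERA-like values in the massless limit, not actual event data):
\begin{verbatim}
import numpy as np

def mdot(a, b):                          # Minkowski product, metric (+,-,-,-)
    return a[0]*b[0] - np.dot(a[1:], b[1:])

k  = np.array([27.6, 0.0, 0.0,  27.6])   # incoming lepton (GeV)
kp = np.array([25.0, 4.0, 0.0,  24.68])  # scattered lepton, illustrative
P  = np.array([920.0, 0.0, 0.0, -920.0]) # incoming proton, massless limit

q  = k - kp
Q2 = -mdot(q, q)                         # photon virtuality
y  = mdot(P, q)/mdot(P, k)               # inelasticity
x  = Q2/(2.0*mdot(P, q))                 # Bjorken variable
print(Q2, y, x)
\end{verbatim}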
In the case of diffractive DIS, one needs to introduce an additional kinematical variable ${\beta}$, defined as the momentum fraction carried by the struck parton with respect to the diffractive exchange. The kinematical variable ${\beta}$ is given by \begin{equation} \label{eq:beta} {\beta} = \frac{Q^{2}} {2 (P - P^{\prime}) \cdot q} = \frac{Q^{2}} {M_X^{2} + Q^{2} - t} \,, \end{equation} where $M_{X}$ is the invariant mass of the diffractive final state, produced by the diffractive dissociation of the exchanged virtual photon, and the variable $t = ({P} - {P}^{\prime})^{2}$ is the squared four-momentum transferred at the proton vertex. The experimental diffractive DIS data sets are provided by the H1 and ZEUS collaborations at HERA in the form of the so-called reduced cross-section $\sigma_r^{D(3)}(\beta, Q^{2}; {x}_{\pom})$, where $x_{\pom}$ is defined as the longitudinal momentum fraction lost by the incoming proton; it satisfies the relation $x = {\beta} x_{\pom}$. The $t$-integrated differential cross-section for the diffractive DIS process can be written in terms of the reduced cross-section as \begin{equation} \label{eq:cross-section} \frac{d\sigma^{e p \rightarrow e p {X}}} {d{\beta} dQ^{2} dx_{\pom}} = \frac{{2} {\pi} \alpha^{2}}{{\beta} Q^{4}} \left[1 + (1 - {y})^{2} \right] \sigma_r^{D(3)}({\beta}, Q^{2}; {x_{\pom}})\,. \end{equation} In the one-photon approximation, the reduced diffractive cross-section can be written in terms of two diffractive structure functions~\cite{H1:2006zyl,ZEUS:2009uxs,Goharipour:2018yov}. This reads \begin{align} \label{eq:reduced cross-section} \sigma_r^{D(3)}({\beta}, Q^2; x_{\pom})=& F_2^{D(3)}({\beta}, {Q}^{2}; {x_{\pom}}) \\ \nonumber & -\frac{y^{2}}{1 + (1 - {y})^{2}} F_{L}^{D(3)} ({\beta}, {Q}^{2}; {x_{\pom}}). \end{align} It should be noted that for $y$ not close to unity, the contribution of the longitudinal structure function $F_{L}^{D(3)}$ to the reduced cross-section can be neglected. Nevertheless, since we use the HERA diffractive DIS data for the reduced cross-section, we follow the recent {\tt GKG18} study~\cite{Goharipour:2018yov} and keep this contribution in our QCD analysis. \begin{figure}[t!] \vspace{0.20cm} \resizebox{0.450\textwidth}{!}{\includegraphics{Feynman_New.jpg}} \begin{center} \caption{{\small Diagram for diffractive DIS $ep \rightarrow ep {X}$. The four-momenta are indicated as well (in the round brackets). The diffractively scattered proton is distinguished from the diffractive system ${X}$.} \label{fig:Feynman}} \end{center} \end{figure}
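For orientation, the kinematic map \eqref{eq:beta} and the photon-flux prefactor in Eq.~\eqref{eq:cross-section} can be transcribed directly. In the sketch below (assuming {\tt numpy}) the inelasticity is reconstructed as $y = Q^{2}/(x s)$ in the massless limit, and all numerical inputs are illustrative:
\begin{verbatim}
import numpy as np

alpha_em = 1.0/137.0                 # fine-structure constant (assumption)
s = 318.0**2                         # HERA c.m. energy squared, GeV^2

def beta_of(Q2, MX2, t):             # Eq. (eq:beta)
    return Q2/(MX2 + Q2 - t)

def differential_xsec(sigma_r, beta, Q2, xpom):
    x = beta*xpom                    # Bjorken x
    y = Q2/(x*s)                     # inelasticity, massless limit
    return 2.0*np.pi*alpha_em**2/(beta*Q2**2)*(1.0 + (1.0 - y)**2)*sigma_r

b = beta_of(Q2=10.0, MX2=25.0, t=-0.2)
print(b, differential_xsec(0.03, b, 10.0, 0.003))
\end{verbatim}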
\subsection{Factorization theorem for the diffractive DIS}\label{Factorization theorem } The main idea of diffractive DIS was proposed for the first time by Ingelman and Schlein~\cite{Ingelman:1984ns}. According to the Ingelman-Schlein (IS) model, diffractive processes in DIS are interpreted in terms of the exchange of the leading Regge trajectory. The diffractive process includes two steps: the emission of a pomeron from the proton and the subsequent hard scattering of the virtual photon on partons in the pomeron. Therefore, the pomeron is considered to have a partonic structure, as do hadrons. Hence, the diffractive structure functions factorize into a pomeron flux and a pomeron structure function~\cite{Collins:1996fb,Collins:2001ga,Collins:1997sr}. In analogy to inclusive DIS, the diffractive structure functions can be written as a convolution of the non-perturbative diffractive PDFs, which satisfy the standard DGLAP evolution equations~\cite{Dokshitzer:1977sg,Gribov:1972ri,Lipatov:1974qm,Altarelli:1977zs}, with the hard-scattering coefficient functions. It is given by \begin{align} \label{eq:diffractive-structure-function} F&_{2/L}^{D(4)}({\beta}, Q^{2}; x_{\pom}, {t})= \nonumber \\ &\sum_{i} \int_{\beta}^{1} \frac{dz}{z} {C}_{{2}/{L}, i} \left(\frac{{\beta}}{z}\right) {f}_{i}^{D}(z, Q^{2}; {x}_{\pom}, {t}), \end{align} where the sum runs over all parton flavors (gluon, $d$-quark, $u$-quark, etc.). Here the long-distance quantity $f_{i}^D(z, Q^{2}; x_{\pom}, {t})$ denotes the non-perturbative part, which can be determined by a global QCD analysis of the available diffractive experimental data sets. The Wilson coefficient functions $C_{{2}/{L}, {i}}$ in the above equation describe the hard scattering of the virtual photon on a parton ${i}$; they are the same as the coefficient functions known from inclusive DIS and are calculable in perturbative QCD~\cite{Vermaseren:2005qc}. It has been shown that the description of the experimental data is very good when factorization is assumed~\cite{H1:2006zyl,ZEUS:2009uxs,Goharipour:2018yov}. Vertex factorization states that the diffractive PDFs factorize into a product of two terms, one depending on $x_{\pom}$ and $t$, and the other being a function of ${\beta}$ and $Q^{2}$. Hence, the diffractive PDFs $f_{i/p}^D({\beta}, Q^{2}; x_{\pom}, {t})$ are given by \begin{align} \label{eq:DPDFs} f_{i/p}^D({\beta}, Q^{2}; x_{\pom}, {t})= & f_{\pom/p}(x_{\pom}, {t}) f_{i/\pom}({\beta}, Q^{2}) \nonumber\\ +&f_{\reg/p}(x_{\pom}, {t}) f_{i/\reg}^{\reg}({\beta}, Q^{2}), \end{align} where $f_{\pom/p}(x_{\pom}, {t})$ and $f_{\reg/p}(x_{\pom}, {t})$ are the Pomeron and Reggeon flux factors, respectively. These describe the emission of the Pomeron and Reggeon from the proton target. The Pomeron and Reggeon partonic structures are given by the parton distributions $f_{{i}/\pom}({\beta}, {Q}^{2})$ and $f_{i/\reg}^{\reg}({\beta}, Q^{2})$. The parametrization and determination of these functions will be discussed in detail in Sec.~\ref{global-QCD-analysis}. \subsection{Heavy flavour contributions}\label{subsec:Heavy flavour} The study of the heavy quark flavor contributions to DIS processes enables precise tests of QCD and the strong interactions. In this respect, their contributions have an important impact on the PDFs~\cite{Ball:2021leu,Boroun:2021ckh,Harland-Lang:2014zoa,Ball:2022hsh}, FFs~\cite{Salajegheh:2019nea,Salajegheh:2019ach,Soleymaninia:2022qjf}, and diffractive PDFs~\cite{Maktoubian:2019ppi,Khanpour:2019pzq,Goharipour:2018yov} extracted from global QCD analyses. Generally speaking, there are two regimes for the treatment of heavy quark production. The first regime is $Q^{2} \sim m_{h}^{2}$, where $m_{h}$ is the heavy quark mass. The massive quarks are produced in the final state and are not treated as active partons within the nucleon. This regime is described by the ``Fixed Flavour Number Scheme'' (FFNS). In this scheme, the light quarks are considered to be the active partons inside the nucleon, and the number of flavors is fixed to $n_{f} = 3$. The FFNS is not accurate and reliable for scales much greater than the heavy quark mass threshold $m_{h}^{2}$.
At higher energy scales, ${Q}^{2} \gg m_{h}^{2}$, the heavy quarks behave as massless partons within the hadron. In that case, logarithmic terms $\sim \ln(Q^{2}/m_h^{2})$ are automatically resummed through the solutions of the DGLAP evolution equations for the heavy quark distributions. The simplest approach describing the ${Q}^{2}/m_h^{2} \rightarrow \infty$ limit is the ``Zero Mass Variable Flavor Number Scheme'' (ZM-VFNS), which ignores all $\mathcal{O}(m_{h}^{2}/{Q}^{2})$ corrections. In summary, at very large scales, where the resummation of large logarithms is pertinent, the ZM-VFNS is more precise, while in the region close to the heavy quark mass threshold ${m}_{h}$ the FFNS works well enough. In order to interpolate correctly between these two limits, $Q^2\leq m_h^2$ and $Q^2\gg m_h^2$, one needs to employ the ``General Mass Variable Flavor Number Scheme'' (GM-VFNS)~\cite{Thorne:2012az}. In this scheme, the DIS structure function can be written as follows, \begin{equation} \label{eq:GM-VFNS} F(x, Q^{2}) = C_{j}^{{\text {GMVFN}}, n_{f} + m} (Q^{2}/m_{h}^{2}) \otimes f_{j}^{n_{f} + {m}}(Q^{2}), \end{equation} where ${n}_{f}$ is the number of active light quark flavors and ${m}$ is the number of heavy quarks. Unlike the ZM-VFNS, the hard-scattering coefficient functions ${C}_{k}^{{\text {FF}}, n_{f}}$ depend on $(Q^2/m_h^2)$ but reduce to the zero-mass expressions as $Q^{2}/m_h^{2} \rightarrow \infty$. Considering the transition from ${n}_{f}$ active quarks to ${n}_{f} + 1$, one can write \begin{align} \label{eq:GM-VFNS1} F(x, {Q}^{2})=&{C}_{j}^{{\text {GMVFN}}, n_{f} + 1}(Q^{2}/m_{h}^{2}) \otimes f_{j}^{n_{f} + 1}(Q^{2}) \nonumber \\ =&C_{j}^{{\text {GMVFN}}, n_{f} + {1}}(Q^{2}/m_{h}^{2}) \otimes A_{jk}(Q^{2}/m_{h}^{2}) \nonumber \\ & \qquad\qquad \qquad \qquad \quad \quad \otimes {f}_{k}^{n_{f}}(Q^{2}) \nonumber\\ \equiv& C_k^{FF, {n}_{f}}(Q^{2}/{m}_{h}^{2}) \otimes f_{k}^{n_{f}}(Q^{2}), \end{align} where the matrix elements $A_{jk}(Q^{2}/m_{h}^{2})$ are calculated and presented in Ref.~\cite{Buza:1996wv}. The $C_{k}^{{\text {FF}}, {n}_{f}}(Q^2/m_{h}^{2})$ coefficient in the above equation is therefore given by \begin{equation} \label{eq:GM-VFNS2} C_{k}^{{\text {FF}}, n_{f}}(Q^{2}/m_{h}^{2}) \equiv C_{j}^{{\text {GMVFN}}, n_{f} + 1}(Q^{2}/m_{h}^{2}) \otimes A_{jk}(Q^2/m_{h}^{2}). \end{equation} As mentioned before, the coefficient functions must reduce to the massless limit as ${Q}^{2}/m_h^{2} \rightarrow {\infty}$, as given by Eq.~\eqref{eq:GM-VFNS2}. The analysis presented in this work is based on the Thorne-Roberts (TR) GM-VFNS, which interpolates smoothly between the FFNS at low ${Q}^{2}$ and the ZM-VFNS description at high ${Q}^{2}$; in this regard, our choice gives the best description of the heavy flavor effects on the diffractive structure functions. In this study, following the MMHT14 analysis, we adopt the heavy quark masses ${m}_{c} = 1.40\,$GeV and ${m}_{b} = 4.75\,$GeV~\cite{Harland-Lang:2015qea}. The strong coupling constant is fixed to ${\alpha}_s(M_Z^2) = 0.1185$~\cite{ParticleDataGroup:2018ovx}. \subsection{Twist-4 correction}\label{subsec:twist4} In this section, we describe in detail the twist-4 correction considered in the {\tt SKMHS22} QCD analysis. Generally speaking, the higher-twist contributions are proportional to powers of ${1}/Q^{2}$ and hence are strongly suppressed at high energy scales ${Q}^{2}$ in the DIS process.
Nonetheless, in the color dipole framework this term dominates over the twist-2 contribution for ${M_X} \rightarrow 0$, i.e.~${\beta} \rightarrow {1}$. Hence, the dipole picture provides a powerful framework in which QCD-based saturation models can be used to investigate the diffractive DIS data. For the interaction of the color dipole with the proton, one can consider two different components: the quark dipole (${q} \bar{q}$) and the gluon dipole (${q} \bar{q}g$) systems. We refer the reader to Ref.~\cite{Golec-Biernat:2008mko} for a clear review. As mentioned earlier, the virtual photon in the diffractive cross-section can be transversely or longitudinally polarized; accordingly, one can decompose the cross-section into a transverse and a longitudinal part. It turns out that, at leading order in ${Q}^{2}$ for ${\beta} \rightarrow {1}$, the ${q} \bar{q}$ and ${q} \bar{q}g$ contributions from transverse virtual photons vanish proportionally to $({1} - {\beta})$, whereas the longitudinal part, which is of higher twist, gives a finite contribution. In diffractive DIS, for ${M_X^2} \ll {Q}^{2}$, the ${q} \bar{q}$ contribution to the final state dominates over the ${q} \bar{q} {g}$ component of the photon wave function. Thus, the ${L}{q} \bar{q}$ component is not negligible at higher values of ${\beta}$ and gives an important contribution to the longitudinal diffractive structure function. The ${F}_{Lq \bar{q}}^{D}$ can be written in terms of the Bessel functions ${J}_{0}$ and ${K}_{0}$ and the dipole cross-section $\hat{\sigma}({x}_{\pom}, {r})$~\cite{Golec-Biernat:2007mao,Golec-Biernat:2008mko}, \begin{align} \label{twist-4} F_{Lq\bar{q}}^{D} = &\frac{3} {16 {\pi}^{4} x_{\pom}} e^{-B_{D} \vert t \vert}\sum_{f} e_{f}^{2}\frac{{\beta}^3} {({1}-{\beta})^{4}} \nonumber \\ &\times \int_{0}^{Q^2(1 - {\beta})/(4 {\beta})} dk^{2} \frac{k^{2}/Q^{2}} {\sqrt{{1} - \frac{{4} {\beta}} {{1} - {\beta}} \frac{k^{2}}{Q^2}}} \nonumber \\ &\times \left( k^{2} \int_{0}^{\infty} dr r {K}_{0} \left(\sqrt{\frac{{\beta}}{{1} - {\beta}}}kr\right) {J}_{0}(kr) \hat{\sigma}({x}_{\pom}, {r}) \right)^{2}\,. \end{align} The main idea of the dipole approach is that the photon splits into a quark-antiquark pair (dipole), which then scatters off the proton target. Following the study presented in Ref.~\cite{Golec-Biernat:1998zce}, the dipole cross-section is taken to have the following simple form in our QCD analysis, \begin{equation} \hat{\sigma}({x}_{\pom}, {r}) = \sigma_{0} \{{1} - \exp(-r^{2} {Q}_{s}^{2}/{4})\}, \end{equation} where ${r}$ denotes the separation between the quark and the antiquark. The saturation momentum $Q_{s}^{2}=(x_{\pom}/x_{0})^{-\lambda}$ GeV$^{2}$ is responsible for the transition to the saturation regime. The parameters $\sigma_{0} = 29$~mb, $x_{0} = 4\times10^{-5}$, and $\lambda = 0.28$ are taken from Ref.~\cite{Golec-Biernat:1998zce}. This form of the dipole-proton cross-section provides a good description of the inclusive HERA diffractive DIS data. Our QCD analysis including the twist-4 correction is called {\tt SKMHS22-tw2-tw4}. The effect of this correction on the extracted diffractive PDFs is discussed in Sec.~\ref{sec:results}.
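Equation~\eqref{twist-4} can be evaluated numerically with standard quadrature. The sketch below (assuming {\tt scipy}) is a direct transcription; the diffractive slope $B_{D}$, the value of $t$, and the mb$\,\to\,$GeV$^{-2}$ conversion are assumptions made for illustration only, and the integrable square-root singularity at the upper limit of the $k^{2}$ integral is simply cut off:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k0

sigma0 = 29.0*2.568      # GBW normalization: 29 mb converted to GeV^-2
x0, lam = 4e-5, 0.28     # GBW parameters
sum_ef2 = 6.0/9.0        # sum of e_f^2 over three light flavours
B_D = 6.0                # illustrative diffractive slope in GeV^-2 (assumption)

def sigma_dip(xpom, r):  # GBW dipole cross-section
    Qs2 = (xpom/x0)**(-lam)
    return sigma0*(1.0 - np.exp(-r*r*Qs2/4.0))

def FL_qqbar(beta, Q2, xpom, t=0.0):
    a = np.sqrt(beta/(1.0 - beta))
    pref = (3.0/(16.0*np.pi**4*xpom))*np.exp(-B_D*abs(t)) \
           *sum_ef2*beta**3/(1.0 - beta)**4
    def bracket(k):      # k^2 * int dr r K0(a k r) J0(k r) sigma_dip
        I, _ = quad(lambda r: r*k0(a*k*r)*j0(k*r)*sigma_dip(xpom, r),
                    0.0, np.inf, limit=200)
        return k*k*I
    def integrand(k2):
        frac = 4.0*beta*k2/((1.0 - beta)*Q2)
        return (k2/Q2)/np.sqrt(1.0 - frac)*bracket(np.sqrt(k2))**2
    k2max = Q2*(1.0 - beta)/(4.0*beta)
    val, _ = quad(integrand, 0.0, k2max*(1.0 - 1e-6), limit=200)
    return pref*val

print(FL_qqbar(beta=0.8, Q2=10.0, xpom=0.003))
\end{verbatim}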
\subsection{Reggeon contribution}\label{subsec:Reggeon} At higher values of ${x}_{\pom}$, the diffractive DIS data include contributions that decrease with energy. To properly describe this effect, one can include the contributions of subleading Reggeons, which break the factorization of the diffractive structure function. In order to take such contributions into account, we add the following Reggeon term to the diffractive structure function $F_{2}^{D}$~\cite{Golec-Biernat:1996bax,Golec-Biernat:1997vbr} \begin{equation}\label{eq:ReggeonSF} \frac{dF_{2}^{R}}{d {x}_{\pom} dt} ({x}, Q^{2}, x_{\pom}, {t}) = f^R(x_{\pom}, {t}) \, F_{2}^{R}({\beta}, {Q}^{2})\,, \end{equation} where ${f}^{R}(x_{\pom}, {t})$ is the Reggeon flux and ${F}_{2}^{R}({\beta}, Q^{2})$ is the Reggeon structure function. In principle, one should consider different Regge pole contributions, sum over them, and include interference terms. To a good approximation, one can neglect the interference terms between the Reggeons and the Pomeron as well as between different Reggeons. Therefore, for the Reggeon flux we consider the following formulas: \begin{equation} \label{eq:Reggeonsum} f^{R}({x}_{\pom}, {t}) = \sum_{R_{i}} f^{R_{i}} (x_{\pom}, {t})\,, \end{equation} and \begin{equation}\label{eq:Reggeonflux} f^{R_{i}}(x_{\pom}, {t})= \frac{F^{2}_{i}(0)} {{8} \pi} e^{-\vert t \vert /{\lambda}_{i}^{2}} {C}_{i}(t) {x}_{\pom}^{{1} - {2} \alpha_{i}(t)}\,. \end{equation} Here, $C_i(t) = {4} \cos^2[\pi {\alpha}_i(t)/2]$ and $C_i(t) = {4} \sin^2[\pi{\alpha}_i(t)/2]$ are the signature factors for the even Reggeon ($f_{2}$) and the odd Reggeon (${\omega}$), respectively. The Reggeon trajectory is given by ${\alpha}_{i}(t) = 0.5475 + (1.0$ GeV$^{-2})t$. From Ref.~\cite{Golec-Biernat:1997vbr}, $F^{2}_{f_{2}}(0) = 194$~GeV$^{-2}$ and $F^{2}_{\omega}(0)=52$~GeV$^{-2}$, which denote the Reggeon couplings to the proton, and $\lambda_{i}^{2} = 0.65$~GeV$^{2}$ is known from Reggeon phenomenology in hadronic reactions. The analysis of the isoscalar Reggeons $f_{2}, {\omega}$ shows that the Reggeon contribution to the diffractive structure function becomes important for ${x}_{\pom} > 0.01$~\cite{H1:2006uea}. Our assumption for the Reggeon structure functions is that they are the same for all Reggeons; ${F}_{2}^{R}({\beta}, {Q}^{2})$ is related to the parton distributions in the Reggeon in the conventional way and can be determined from a fit to the diffractive DIS data from HERA~\cite{H1:2012xlc,H1:2012pbl,H1:2011jpo}. For the Reggeon structure function, we consider the following parametrization form, \begin{equation} \label{eq:Reggeonstructure} {F}_{2}^{R}({\beta}) = w_{1} {\beta}^{w_{2}} (1 - {\beta})^{w_3} (1 + w_4 \sqrt{{\beta}} + w_{5} {\beta}^2)~, \end{equation} where $w_1, \ldots, w_5$ are fit parameters to be determined from the global QCD analysis. The parameter $w_2$ controls the shape of $F_2^R({\beta})$ in the low-$\beta$ region, while $w_3$ controls the high-$\beta$ region. The parameters $w_4$ and $w_5$ are kept fixed at zero, as the current diffractive DIS data do not have enough power to constrain all the shape parameters. Our QCD analysis including the Reggeon contribution is called {\tt SKMHS22-tw2-tw4-RC}; the effect arising from the inclusion of this correction is discussed in Sec.~\ref{sec:results}.
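The Reggeon pieces, Eqs.~\eqref{eq:Reggeonflux} and \eqref{eq:Reggeonstructure}, are simple closed-form expressions. A minimal transcription follows, with the quoted couplings; the $w_{i}$ defaults anticipate the {\tt SKMHS22-tw2-tw4-RC} best-fit values given later in Table~\ref{tab-dpdf-all}:
\begin{verbatim}
import numpy as np

F2_f2, F2_omega = 194.0, 52.0   # Reggeon-proton couplings F_i^2(0), GeV^-2
lam2 = 0.65                     # slope parameter lambda_i^2, GeV^2

def alpha_R(t):                 # Reggeon trajectory, alpha' = 1.0 GeV^-2
    return 0.5475 + 1.0*t

def reggeon_flux(xpom, t, even=True):
    F2i = F2_f2 if even else F2_omega
    Ci = (4.0*np.cos(np.pi*alpha_R(t)/2.0)**2 if even
          else 4.0*np.sin(np.pi*alpha_R(t)/2.0)**2)
    return F2i/(8.0*np.pi)*np.exp(-abs(t)/lam2)*Ci*xpom**(1.0 - 2.0*alpha_R(t))

def F2_R(beta, w=(0.23, 3.79, 14.9, 0.0, 0.0)):   # Eq. (Reggeonstructure)
    w1, w2, w3, w4, w5 = w
    return w1*beta**w2*(1.0 - beta)**w3*(1.0 + w4*np.sqrt(beta) + w5*beta**2)

# total flux = sum of the even (f_2) and odd (omega) exchanges
print(reggeon_flux(0.02, -0.2) + reggeon_flux(0.02, -0.2, even=False),
      F2_R(0.5))
\end{verbatim}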
\section{Diffractive PDFs global QCD analysis details}\label{global-QCD-analysis} The analysis of diffractive PDFs is a QCD optimization problem and can be seen as having four elements. The first and most important element is of course the experimental data and their uncertainties. The second element is the theoretical model used to describe the phenomenon. For the analysis of non-perturbative objects such as diffractive PDFs, this theoretical framework has adjustable parameters as well as a number of constraints (say, for ensuring factorization). Another part of the problem is the objective function to be minimized; here, this is a $\chi^{2}$ function that is defined by the theoretical framework and includes the data uncertainties. The last part of the problem is the optimization method, which is the essential step in finding the solution. In this section, we present the methodology of our global QCD analysis to obtain the diffractive PDFs and their uncertainties at NLO accuracy in perturbative QCD. We describe the data sets used in the fit in Sec.~\ref{subsec:data}, the parametrization of the diffractive PDFs in Sec.~\ref{subsec:param}, and the minimization methodology and uncertainties of our diffractive PDFs in Sec.~\ref{subsec:uncertainies}. \subsection{Experimental data sets}\label{subsec:data} \begin{table*} \caption{\small List of all diffractive DIS data points with their properties used in the {\tt SKMHS22} global QCD analysis. For each data set we provide the kinematical coverage of ${\beta}$, ${{x}_{\pom}}$, and ${Q}^{2}$. The number of data points is displayed as well. The details of the kinematical cuts applied to these data sets are explained in the text. } \label{tab:DDISdata} \begin{tabular}{l | c c c c c c } \hline\hline {\text{Experiment}} & {\text{Observable}} & [$\beta^{{\text{min}}}, {\beta}^{{\text{max}}}$] & [${x_{\pom}}^{\rm{min}}, {x_{\pom}}^{\rm{max}}$] & ${Q}^{2}\,[{\text{GeV}}^2]$ & \# of points & {\text{Reference}} \tabularnewline \hline\hline {\text{H1-LRG-11}} $\sqrt{s} = {225}$ GeV & ${\sigma}_{r}^{D(3)}$ & [$0.033$--$0.88$] & [$5{\times} 10^{-4}$ -- $3{\times} 10^{-3}$] & 4--44 & \textbf{22} & \cite{H1:2011jpo} \\ {\text{H1-LRG-11}} $\sqrt{s} = {252}$ GeV & ${\sigma}_{r}^{D(3)}$ & [$0.033$--$0.88$] & [$5{\times} 10^{-4}$ -- $3{\times} 10^{-3}$] & 4--44 & \textbf{21} & \cite{H1:2011jpo} \\ {\text{H1-LRG-11}} $\sqrt{s} = {319}$ GeV & ${\sigma}_{r}^{D(3)}$ & [$0.089$--$0.88$] & [$5{\times} 10^{-4}$ -- $3{\times} 10^{-3}$] & 11.5--44 & \textbf{14} & \cite{H1:2011jpo} \\ {\text{H1-LRG-12}} & ${\sigma}_{r}^{D(3)}$ & [$0.0017$--$0.80$] & [$3{\times} 10^{-4}$ -- $3{\times} 10^{-2}$] & 3.5--1600 & \textbf{277} & \cite{H1:2012pbl} \\ {\text{H1/ZEUS combined}} & ${\sigma}_{r}^{D(3)}$ & [$0.0018$--$0.816$] & [$3{\times} 10^{-4}$ -- $9 {\times} 10^{-2}$] & 2.5--200 & \textbf{192} & \cite{H1:2012xlc} \\ \hline \hline \multicolumn{1}{c}{\textbf{Total data}} ~~&~~ &~~ &~~& ~~&~~\textbf{526} \\ \hline \hline \end{tabular} \end{table*} In this section, we describe all available inclusive diffractive DIS data sets in detail. However, one needs to apply kinematical cuts in order to avoid non-perturbative effects and to ensure that only data for which the available perturbative QCD treatment is adequate are included in the QCD analysis. We attempt to include more data; by including the twist-4 and Reggeon contributions, one can relax some of the kinematical cuts that need to be applied. The kinematic coverage in the (${\beta}$, ${x_{\pom}}$, {Q}$^2$) plane of the complete {\tt SKMHS22} data set is displayed in Fig.~\ref{fig:databetaxQ2}. The data points are classified by the different experiments at HERA.
As customary, some kinematic cuts need to be applied to the diffractive DIS cross-section measurements. \begin{figure*} \vspace{0.20cm} \resizebox{0.90\textwidth}{!}{\includegraphics{data.pdf}} \begin{center} \caption{{\small The kinematic coverage in the ($\beta$, ${x_{\pom}}$, Q$^2$) plane of the {\tt SKMHS22} data set. The data points are classified by different experiments at HERA. } \label{fig:databetaxQ2}} \end{center} \end{figure*} The list of diffractive DIS data sets and their properties used in the {\tt SKMHS22} global analysis is shown in Table~\ref{tab:DDISdata}. All the measurements are presented in terms of the $t$-integrated reduced diffractive DIS cross-section ${\sigma}_{r}^{D(3)}(ep \rightarrow ep{X})$. In the {\tt SKMHS22} QCD analysis, we also analyze the combined measurement of the inclusive diffractive cross-section presented by the H1 and ZEUS Collaborations at HERA (H1/ZEUS combined)~\cite{H1:2012xlc}. They used samples of diffractive DIS $ep$ data at a center-of-mass energy of $\sqrt{s} = {318}$~GeV at the HERA collider, where the leading protons are detected by dedicated spectrometers. This high-precision measurement combines all the previous H1 FPS HERA I~\cite{H1:2006uea}, H1 FPS HERA II~\cite{Aaron:2010aa}, ZEUS LPS 1~\cite{ZEUS:2004luu}, and ZEUS LPS 2~\cite{ZEUS:2008xhs} data sets. These measurements cover the photon virtuality interval $2.5 <{Q}^{2}< 200$~GeV$^{2}$, $3.5 {\times} 10^{-4}< {x_{\pom}} < 0.09$ in the proton fractional momentum loss, $0.09< |t| < 0.55$~GeV$^{2}$ in the squared four-momentum transfer at the proton vertex, and $1.8 {\times} 10^{-3} < {\beta} < 0.816$. We should highlight that all H1-LRG data are published for the range $\vert t \vert<1$~GeV$^{2}$, while the recent combined H1/ZEUS data are restricted to the range $0.09 < \vert t \vert < 0.55$~GeV$^{2}$; hence, one needs a global normalization factor to extrapolate from $0.09 < \vert t \vert < 0.55$~GeV$^{2}$ to $\vert t \vert<1$~GeV$^{2}$~\cite{Goharipour:2018yov}. Therefore, the combined H1/ZEUS data are corrected to the region $\vert t \vert<1$~GeV$^{2}$. Another data set used in our QCD analysis is the large rapidity gap (LRG) data from H1-LRG-11, measured by the H1 detector in 2006 and 2007. These data were taken at three different center-of-mass energies, $\sqrt{s}={225}$, ${252}$ and ${319}$~GeV~\cite{H1:2011jpo}. The H1 collaboration measured the reduced cross-section in the photon virtuality range $4\,$GeV$^{2} {\leq} Q^{2} {\leq} 44\,$GeV$^{2}$ for $\sqrt{s}={225}$, ${252}$~GeV, and $11.5$~GeV$^{2} {\leq} Q^2 {\leq} 44$~GeV$^2$ for $\sqrt{s}={319}$~GeV. The diffractive final-state masses and the squared momentum transfer at the proton vertex are in the ranges $1.25<{M}_{X}<10.84$~GeV and $\vert t \vert<1$~GeV$^{2}$, respectively. The diffractive variables are in the range $5 {\times} 10^{-4} < {x_{\pom}} < 3 {\times} 10^{-3}$, with $0.033 < {\beta} < 0.88$ for $\sqrt{s}=225$, $252$~GeV, and $0.089< {\beta} <0.88$ for $\sqrt{s}=319$~GeV. Finally, the last data set employed is H1-LRG-12~\cite{H1:2012pbl}. These data for the process $e p \rightarrow e{X}{Y}$ were taken by the H1 experiment at HERA and cover the range $3.5 < {Q}^{2} < 1600\,$GeV$^{2}$, $0.0003 \leq {x_{\pom}} \leq 0.03$, and $0.0017 {\leq} {\beta} {\leq} 0.8$. We applied kinematical cuts on ${\beta}$, $M_{X}$, and $Q^{2}$.
The kinematical cuts we apply are similar to those used in Refs.~\cite{Goharipour:2018yov,ZEUS:2009uxs}, except for the case of ${\beta}$. By considering the twist-4 and Reggeon corrections, we can relax the cuts and include more data than were used in Ref.~\cite{Goharipour:2018yov}. For the extraction of the diffractive PDFs we apply $\beta \leq 0.90$ and ${M}_{X}>2$~GeV to all data sets used in this analysis. The sensitivity of the data sets to ${Q}^2$ was tested in Refs.~\cite{Goharipour:2018yov,ZEUS:2009uxs}, where the cut on $Q^{2}$ was finalized by a $\chi^2$ scan. In Ref.~\cite{Maktoubian:2019ppi}, the authors used standard higher-twist corrections to the structure functions and obtained the best $\chi^2$ with $Q^2_{\text{min}}= 6.5$~GeV$^2$, while in Ref.~\cite{Goharipour:2018yov} the best cut for the extraction of the diffractive PDFs was found to be $Q^2_{\text{min}}=9$~GeV$^2$. In this work, after scanning the $\chi^2$, we find that the best agreement between theory and data is achieved by taking $Q^2_{\text{min}}=9$~GeV$^2$. After applying these kinematical cuts, the total number of data points is reduced to 302. \subsection{{\tt SKMHS22} Diffractive PDFs parametrization form}\label{subsec:param} Diffractive PDFs are non-perturbative quantities and cannot be calculated in perturbative QCD. Therefore, a parametric form with unknown parameters should be adopted for their functional dependence at the input scale. Due to the lack of diffractive DIS experimental data, we consider all light quark and antiquark densities to be equal, ${f}_{u} = {f}_{d} = {f}_{s} = f_{\bar{u}} = f_{\bar{d}} = f_{\bar{s}}$. The scale dependence of the quark and gluon distributions ${f}_{q, g}({\beta}, {Q}^{2})$ is determined by the standard DGLAP evolution equations. We fit the diffractive quark and gluon distributions at the starting scale ${Q}_{0}^{2} = {1.8}$~GeV$^2$, which is below the charm threshold ($m_c^2=1.96$~GeV$^2$). Since the diffractive DIS data can only constrain the sum of the diffractive quark PDFs, and due to the limited amount of inclusive diffractive DIS data, one needs to consider a less flexible parametrization form for the diffractive PDFs. We fit the diffractive parton distributions at the initial scale $Q_0^2 = 1.8$ GeV$^2$ with the following pomeron parton distributions, which have been used in several analyses~\cite{Goharipour:2018yov,Maktoubian:2019ppi,H1:2006zyl}: \begin{align} zf_{q}(z, {Q}_{0}^{2}) = & {\alpha}_q z^{{\beta}_q}({1} - {z})^ {{\gamma}_q}({1} + {\eta}_q\sqrt{z} + {\xi}_q z^2), \label{eq:parametrizationformq} \\ zf_{g}(z, {Q}_0^2) = & \alpha_g z^{{\beta}_{g}}(1 - {z})^ {{\gamma}_g}({1} + {\eta}_g\sqrt{z} + {\xi}_{g} z^2), \label{eq:parametrizationformg} \end{align} where ${z}$ is the longitudinal momentum fraction of the struck parton with respect to the diffractive exchange. At lowest order, $z = {\beta}$, but at higher orders this parameter differs from ${\beta}$, leading to ${0} < {\beta} < {z}$. In order to give the parameters ${\gamma}_q$ and ${\gamma}_g$ enough freedom to take negative or positive values in the QCD fit, we follow the QCD analyses available in the literature and add an additional factor ${e}^{- \frac{0.001}{{1} - {z}}}$ to ensure that the distributions vanish for ${z} \rightarrow {1}$.
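For concreteness, the input parametrization including the damping factor can be written out as follows; the numerical parameters anticipate the {\tt SKMHS22-tw2-tw4-RC} best-fit values of Table~\ref{tab-dpdf-all} below, and the sketch assumes {\tt numpy}:
\begin{verbatim}
import numpy as np

def zf(z, alpha, beta, gamma, eta=0.0, xi=0.0):
    damp = np.exp(-0.001/(1.0 - z))   # enforces vanishing at z -> 1
    return (alpha*z**beta*(1.0 - z)**gamma
            *(1.0 + eta*np.sqrt(z) + xi*z**2)*damp)

# tw2-tw4-RC best-fit values at the input scale Q0^2 = 1.8 GeV^2
zf_gluon = lambda z: zf(z, 1.43, 0.447, 0.37)
zf_quark = lambda z: zf(z, 0.727, 2.149, 1.137)
print(zf_gluon(0.5), zf_quark(0.5))
\end{verbatim}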
It turned out that the four parameters ${\eta}_q$, ${\eta}_g$, ${\xi}_q$ and ${\xi}_g$ are not well constrained by the experimental data, and hence we set them to zero. The heavy flavors are generated through the DGLAP evolution equations at scales ${Q}^{2} > m_{c, b}^{2}$. As mentioned above, to account for the heavy flavor contributions we apply the Thorne-Roberts GM-VFNS. The $x_{\pom}$ dependence of the diffractive PDFs $f_i^D({\beta}, Q^{2}; x_{\pom}, {t})$ in Eq.~\eqref{eq:DPDFs} is determined by the Pomeron and Reggeon fluxes with linear trajectories, $\alpha_{{\pom}, {\reg}}(t) = \alpha_{{\pom}, {\reg}}(0) + \alpha^\prime_{{\pom}, {\reg}}\,t$~\cite{Goharipour:2018yov,Maktoubian:2019ppi,H1:2006zyl} \begin{equation} \label{eq:pomeronReggeonflux} {f}_{{\pom}, {\reg}}({x}_{\pom}, t) = {A}_{{\pom}, {\reg}} \frac{e^{B_{{\pom}, {\reg}}\,t}} {{x}_{\pom}^ {{2} \alpha_{{\pom}, {\reg}}(t)-{1}}}, \end{equation} where the Reggeon normalization factor ${A}_{\reg}$ and the Pomeron and Reggeon intercepts, ${\alpha}_{\pom}(0)$ and ${\alpha}_{\reg}(0)$, are free parameters determined from the fit to the diffractive DIS data. We note that, according to Eq.~\eqref{eq:DPDFs}, the parameter ${A}_{\pom}$ is absorbed into ${\alpha}_{q}$ and ${\alpha}_{g}$. The remaining parameters appearing in Eq.~\eqref{eq:pomeronReggeonflux} are taken from~\cite{Goharipour:2018yov}. In total we have twelve free fit parameters: six from Eqs.~\eqref{eq:parametrizationformq} and \eqref{eq:parametrizationformg}, three from Eq.~\eqref{eq:pomeronReggeonflux}, and three from the Reggeon structure function in Eq.~\eqref{eq:Reggeonstructure}. \subsection{Minimization and diffractive PDF uncertainty method}\label{subsec:uncertainies} In this section, we present two important parts of the {\tt SKMHS22} QCD analysis: first the optimization method, and then the way the data uncertainties are propagated to the final results. In order to estimate the free parameters of the diffractive PDFs from the experimental data of Sec.~\ref{subsec:data}, we apply the maximum log-likelihood method. If one assumes that the data follow a Gaussian distribution, as is usually done, this method coincides with minimizing the $\chi^2$ estimator. We adopt the following form of the $\chi^{2}$ function~\cite{H1:2012qti,Goharipour:2018yov}, \begin{eqnarray} \label{eq:chi2} &&\chi^{2} (\{\zeta_k\})\nonumber\\&& = \displaystyle\sum_{i} \frac{\left[ \mathcal{E}_{i} - \mathcal{T}_{i} \left(\{\zeta\}\right) \left(1 - \sum_{j} \gamma_{j}^{i} {b}_{j}\right) \right]^{2} } {\delta^{2}_{{i}, \mathrm{unc}} \mathcal{T}_{i}^{2}(\{\zeta\})+ {\delta}^{2}_{i, \mathrm{stat}} \mathcal{E}_{i} \mathcal{T}_{i} (\{\zeta\}) \left(1 - \sum_{j} \gamma^{i}_{j} {b}_{j}\right)} \nonumber \\ && + \sum_{i} \ln \frac{\delta^{2}_{{i}, \mathrm{unc}} \mathcal{T}_{i}^{2} (\{\zeta\}) + \delta^{2}_{{i}, \mathrm{stat}} \mathcal{E}_{i} \mathcal{T}_{i} (\{\zeta\}) } { \delta^{2}_{{i}, \mathrm{unc}} \mathcal{E}_{i}^{2} + \delta^{2}_{{i}, \mathrm{stat}} \mathcal{E}^{2}_{i}} + \sum_{j} b_{j}^{2}\,, \end{eqnarray} where $\mathcal{E}_{i}$ is the measured experimental value and $\mathcal{T}_{i}$ is the theoretical prediction depending on the fit parameters $\{\mathbf{\zeta}_k\}$. The quantities ${\delta}_{i,\mathrm{stat}}$, $\delta_{i,\mathrm{unc}}$, and $\gamma^{i}_{j}$ denote the relative statistical, uncorrelated systematic, and correlated systematic uncertainties, respectively.
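A minimal transcription of Eq.~\eqref{eq:chi2} for a single data set (assuming {\tt numpy}; all numerical inputs in the usage lines are toy values) reads:
\begin{verbatim}
import numpy as np

def chi2(E, T, d_unc, d_stat, gamma, b):
    """Eq. (chi2): E, T are data/theory arrays; gamma is the (N_data x N_syst)
    matrix of correlated uncertainties; b are the nuisance parameters."""
    shift = 1.0 - gamma @ b                  # multiplicative nuisance shifts
    num = (E - T*shift)**2
    den = d_unc**2*T**2 + d_stat**2*E*T*shift
    logpen = np.log((d_unc**2*T**2 + d_stat**2*E*T)
                    /((d_unc**2 + d_stat**2)*E**2))
    return np.sum(num/den) + np.sum(logpen) + np.sum(b**2)

E, T = np.array([1.05, 0.98]), np.array([1.00, 1.00])
print(chi2(E, T, 0.02*np.ones(2), 0.03*np.ones(2),
           np.array([[0.01], [0.01]]), np.zeros(1)))
\end{verbatim}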
The nuisance parameters ${b}_{j}$ are associated with the correlated systematic uncertainties and are determined simultaneously with the $\{\zeta_k\}$ parameters in the QCD fit. The above ${\chi}^{2}$ function is incorporated in the {\tt xFitter} framework, in conjunction with the other tools required for a perturbative QCD analysis of diffractive PDFs. This package performs all the essential operations: the DGLAP evolution up to NNLO accuracy, the theoretical calculation of the relevant observables, the determination of the optimal parameter values, and the estimation of their uncertainties. As specified above, finding the optimal fit parameter values requires minimizing the ${\chi}^{2}$ function; this is achieved by utilizing the {\tt MINUIT} CERN package~\cite{James:1975dr}, which determines the parameter uncertainties from the behaviour of the $\chi^{2}$ function around its minimum. {\tt MINUIT} has five minimization algorithms; here we choose to work with \texttt{MIGRAD}, which is the most commonly used method of minimization. In practice, we need to propagate the parameter uncertainties to the observables and to other quantities such as the diffractive PDFs themselves. For this purpose, a set of eigenvector PDF sets is formed along with the central values of the diffractive PDFs; for our fit, which has 9 free parameters, the total number of PDF sets is 19. Each member of the error set is derived by either increasing or decreasing one of the parameters by its uncertainty. Then, the uncertainty of a quantity ${\cal O}$ due to its dependence on the PDFs is given as in Ref.~\cite{Nadolsky:2008zw}, \begin{eqnarray} \Delta {\cal O} = \frac{1}{2} \sqrt{\sum_{{i}={1}}^{N} \left({\cal O}_{i}^{(+)} - {\cal O}_{i}^{(-)} \right)^{2}}\,, \end{eqnarray} where ${\cal O}_{i}^{(\pm)}$ refer to the values of ${\cal O}$ calculated from the PDF sets of the $i$-th parameter along the $\pm$ directions. In the derivation of this relation, it is assumed that the variation of ${\cal O}$ can be approximated by the linear terms of its Taylor series, with the gradient approximated by finite differences, which produces the above result.
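The propagation formula above amounts to a half quadrature sum over the paired eigenvector sets. A minimal sketch with toy values of an observable on the $2\times 9$ eigenvector sets:
\begin{verbatim}
import numpy as np

def delta_O(O_plus, O_minus):   # half quadrature sum over eigen-directions
    d = np.asarray(O_plus) - np.asarray(O_minus)
    return 0.5*np.sqrt(np.sum(d**2))

# toy values of an observable on the 9 paired eigenvector sets
O_plus  = np.array([1.02, 0.99, 1.01, 1.00, 1.03, 0.98, 1.00, 1.01, 0.99])
O_minus = np.array([0.98, 1.01, 0.99, 1.00, 0.97, 1.02, 1.00, 0.99, 1.01])
print(delta_O(O_plus, O_minus))
\end{verbatim}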
In the next section, we present the main results and findings of the {\tt SKMHS22} diffractive PDFs together with selected observables, in order to demonstrate the quality of the analysis. \section{Fit results}\label{sec:results} This section includes the main results of the {\tt SKMHS22} diffractive PDFs analysis. As discussed in detail earlier, in this QCD analysis we perform three different fits to determine the diffractive PDFs: {\tt SKMHS22-tw2}, {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC}. In this section we present and discuss these results in turn. The similarities and differences of these results over different kinematical ranges will be highlighted, and the stability of the results upon the inclusion of the higher-twist corrections will be discussed in detail. This section also includes a detailed comparison with the diffractive DIS data analyzed in this work. The best-fit parameters for the three sets of {\tt SKMHS22} diffractive PDFs are shown in Table~\ref{tab-dpdf-all} along with their experimental errors. Considering the numbers presented in this table, some comments are in order. The parameters $\eta_g$ and $\eta_q$ are kept fixed at zero, as the present diffractive DIS data do not have enough power to constrain all shape parameters of the distributions. The parameters \{$w_i$\} of the Reggeon structure function in Eq.~\eqref{eq:Reggeonstructure} for the {\tt SKMHS22-tw2-tw4-RC} analysis are determined along with the fit parameters and then kept fixed at their best-fit values. The parameters $w_4$ and $w_5$ are kept fixed at zero. The values of the strong coupling constant $\alpha_s(M_Z^2)$ and of the charm and bottom quark masses are also shown in the table. \begin{table*}[ht] \begin{center} \caption{\small Best fit parameters obtained with the {\tt SKMHS22-tw2}, {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC} fits at the initial scale of $Q_{0}^{2} = 1.8 \, {\text{GeV}}^2$ along with their experimental uncertainties. The values marked with the ({*}) are fixed in the fit. } \begin{tabular}{ c | c | c | c } \hline \hline Parameters & {\tt SKMHS22-tw2} & {\tt SKMHS22-tw2-tw4} & {\tt SKMHS22-tw2-tw4-RC} \\ \hline \hline $\alpha_g$ & $1.00 \pm 0.16$ & $1.07 \pm 0.17$ & $1.43 \pm 0.23$ \\ $\beta_g$ & $0.226 \pm 0.066$ & $0.332 \pm 0.070$ & $0.447 \pm 0.070$ \\ $\gamma_g$ & $0.27 \pm 0.15$ & $0.19 \pm 0.14$ & $0.37 \pm 0.14$\\ $\eta_g$ & $0.0^*$ & $0.0^*$ & $0.0^*$\\ $\alpha_q$ & $0.305 \pm 0.022$ & $0.517 \pm 0.041$ & $0.727 \pm 0.059$ \\ $\beta_q$ & $1.474 \pm 0.069$ & $1.887 \pm 0.081$ & $2.149 \pm 0.0584$\\ $\gamma_q$ & $0.509 \pm 0.034$ & $0.980 \pm 0.0948$ & $1.137 \pm 0.050$\\ $\eta_q$ & $0.0^*$ & $0.0^*$ & $0.0^*$ \\ $\alpha_{\pom}(0)$ & $1.0934 \pm 0.0032$ & $1.1021 \pm 0.0037$ & $1.0965 \pm 0.0037$ \\ $\alpha_{\reg}(0)$ & $0.316 \pm 0.053$ & $0.400 \pm 0.053$ & $0.418 \pm 0.054$ \\ $A_{\reg}$ & $21.7 \pm 5.7$ & $15.0 \pm 3.9$ & $13.2 \pm 3.5$ \\ $w_1$ & $0.0^*$ & $0.0^*$ & $0.23^*$ \\ $w_2$ & $0.0^*$ & $0.0^*$ & $3.79^*$ \\ $w_3$ & $0.0^*$ & $0.0^*$ & $14.9^*$ \\ $w_4$ & $0.0^*$ & $0.0^*$ & $0.0^*$ \\ $w_5$ & $0.0^*$ & $0.0^*$ & $0.0^*$ \\ \hline $\alpha_s(M_Z^2)$ & $0.1185^*$ & $0.1185^*$ & $0.1185^*$ \\ $m_c$ & $1.40^*$ & $1.40^*$ & $1.40^*$ \\ $m_b$ & $4.75^*$ & $4.75^*$ & $4.75^*$ \\ \hline \hline \end{tabular} \label{tab-dpdf-all} \end{center} \end{table*} In the following, we turn our attention to a detailed comparison of the three {\tt SKMHS22} diffractive PDF sets. We display the {\tt SKMHS22-tw2}, {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC} diffractive PDFs parameterized in our QCD fits, Eqs.~\eqref{eq:parametrizationformq} and \eqref{eq:parametrizationformg}, along with their uncertainty bands in Fig.~\ref{fig:DPDF-Q0} at the input scale $Q_0^2 = 1.8$ GeV$^2$. The results at the higher scales of $Q^2 = 6$ and 20~GeV$^2$ are shown in Figs.~\ref{fig:DPDF-Q6} and \ref{fig:DPDF-Q20}, respectively. The lower panels show the ratios of {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC} to {\tt SKMHS22-tw2}. In Fig.~\ref{fig:DPDF-c-b}, we display the perturbatively generated {\tt SKMHS22} diffractive PDFs for the charm and bottom quark densities along with their error bands at the scales of $Q^2 = 60$~GeV$^2$ and 200 GeV$^2$. All three diffractive PDF sets are shown for comparison. A remarkable feature of the {\tt SKMHS22} diffractive gluon and quark PDFs shown in these figures is the difference both in shape and in the error bands, which reflects the effect of the inclusion of the higher-twist corrections. As can be seen, in all cases the inclusion of the twist-4 and Reggeon corrections leads to slightly smaller error bands. This is consistent with the ${\chi^{2}}$ values presented in Table~\ref{tab:chi2-all}.
The inclusion of the twist-4 corrections and Reggeon contributions leads to an enhancement of the diffractive gluon PDF at high $\beta$, and to a reduction of the singlet PDFs as well. These corrections also affect the small-$\beta$ region, where they reduce the central values of all distributions. These findings indicate that, for the given kinematical range of the diffractive DIS data from the ZEUS and H1 Collaborations, the higher-twist corrections are of crucial importance. For the gluon PDF, we observe a very large error band in the small-$\beta$ region, namely $\beta < 0.01$, which indicates that the available diffractive DIS data do not have enough power to constrain the low-$\beta$ gluon density. To better constrain the gluon PDF, diffractive dijet production data need to be taken into account~\cite{H1:2014pjf,H1:2015okx}. In terms of future work, it would be very interesting to repeat the QCD analysis described here to study the impact of diffractive dijet production data on the diffractive gluon PDF and its uncertainty. \begin{figure}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{1-DPDF.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{1-DPDF-Ratio.jpg}} \begin{center} \caption{ \small The {\tt SKMHS22} diffractive PDFs at the input scale of $Q_0^2 = 1.8$ GeV$^2$. All three diffractive PDF sets are shown for comparison. The lower panels represent the ratios of {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC} to {\tt SKMHS22-tw2}.} \label{fig:DPDF-Q0} \end{center} \end{figure} \begin{figure}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{2-DPDF.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{2-DPDF-Ratio.jpg}} \begin{center} \caption{ \small Same as in Fig.~\ref{fig:DPDF-Q0} but for the higher energy scale of 6 GeV$^2$. } \label{fig:DPDF-Q6} \end{center} \end{figure} \begin{figure}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{3-DPDF.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{3-DPDF-Ratio.jpg}} \begin{center} \caption{ \small Same as in Fig.~\ref{fig:DPDF-Q0} but for the higher energy scale of 20 GeV$^2$. } \label{fig:DPDF-Q20} \end{center} \end{figure} The same discussion also holds for the heavy quark diffractive PDFs. As one can see from Fig.~\ref{fig:DPDF-c-b}, the {\tt SKMHS22-tw2-tw4-RC} charm and bottom quark densities lie below the other results for all regions of $\beta$. \begin{figure}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{cb60.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{cb200.jpg}} \begin{center} \caption{ \small The perturbatively generated {\tt SKMHS22} diffractive PDFs for the charm and bottom quark densities along with their error bands at the scales of $Q^2 = 60$ and 200~GeV$^2$. All three diffractive PDF sets are shown for comparison. } \label{fig:DPDF-c-b} \end{center} \end{figure} In Table~\ref{tab:chi2-all}, we present the values of $\chi^2$ per data point for the individual and total diffractive DIS data sets included in the {\tt SKMHS22} analysis. The values are shown for all three sets, {\tt SKMHS22-tw2}, {\tt SKMHS22-tw2-tw4} and {\tt SKMHS22-tw2-tw4-RC}. Concerning the fit quality of the total diffractive DIS data set, the most noticeable feature is the improvement upon the inclusion of the Reggeon contribution.
The improvement of the individual $\chi^2$ per data point is particularly pronounced for H1-LRG-12 when the Reggeon contribution is considered in the analysis. This finding demonstrates that the inclusion of the higher-twist corrections improves the description of the diffractive DIS data. \begin{table*}[htb] \caption{ \small The values of $\chi^2/N_{\text{pts}}$ for the data sets included in the {\tt SKMHS22} global fits. } \label{tab:chi2-all} \begin{tabular}{l c c c c } \hline \hline & ~~{\tt SKMHS22-tw2}~~ & ~~{\tt SKMHS22-tw2-tw4}~~ & ~~{\tt SKMHS22-tw2-tw4-RC}~~ \\ \hline \text{Experiment} & ${\chi}^2/{N}_{\text{pts}}$ & ${\chi}^{2}/{N}_{\text{pts}}$ & ${\chi}^2/{N}_{\text{pts}}$ \tabularnewline \hline \text{H1-LRG-11} $\sqrt{s} = {225}$~GeV~\cite{H1:2011jpo} & 11/13 & 11/13 & 12/13 \\ \text{H1-LRG-11} $\sqrt{s} = {252}$~GeV~\cite{H1:2011jpo} & 20/12 & 21/12 & 19/12\\ \text{H1-LRG-11} $\sqrt{s} = {319}$~GeV~\cite{H1:2011jpo} & 6.6/12 & 3.7/12 & 4.6/12\\ \text{H1-LRG-12}~\cite{H1:2012pbl} & 136/165 & 143/165 & 124/165 \\ \text{H1/ZEUS combined}~\cite{H1:2012xlc} & 129/100 & 124/100 & 125/100 \\ \hline \text{Correlated} ${\chi}^{2}$ & 11 & 16 & 19 \\ \hline \text{Log penalty} ${\chi}^{2}$ & +11 & +22 & +15 \\ \hline \hline \multicolumn{1}{c}{~\textbf{${\chi}^{2}/{\text{dof}}$}~} & $~324/293=1.10~$ & $~319/293=1.16~$ & $~319/293=1.08~$ \\ \hline \end{tabular} \end{table*} To further illustrate the {\tt SKMHS22} analysis, we present in the following a detailed comparison of the diffractive DIS data analyzed in this work with the corresponding NLO theoretical predictions obtained using the NLO diffractive PDFs from all three sets. As we will show, overall good agreement between the diffractive data and the theoretical predictions is achieved for all H1 and ZEUS data. We start with the comparison to the H1-LRG-11 $\sqrt{s} = 225$, 252 and 319~GeV data. In Fig.~\ref{fig:S225}, the {\tt SKMHS22} theory predictions are compared with selected data points. The comparisons are shown as a function of $\beta$ for the selected bins $x_{\pom}=0.0005$ and 0.003. As one can see, very good agreement is achieved for all regions of $\beta$. \begin{figure*}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{S225-1.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{S225-2.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{S225-3.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{S225-4.jpg}} \begin{center} \caption{ \small Comparison between the diffractive DIS data sets from the H1-LRG-11 $\sqrt{s} = 225$, 252 and 319~GeV measurements and the corresponding NLO theoretical predictions using all three diffractive PDF sets. We show both the absolute distributions and the data/theory ratios. } \label{fig:S225} \end{center} \end{figure*} The same comparisons are shown in Fig.~\ref{fig:H1-LRG-12} for the H1-LRG-12 data for selected bins of $x_{\pom}$, namely 0.01, 0.03, and 0.003. In these plots, the {\tt SKMHS22} theory predictions for all three sets are compared with the H1-LRG-12 diffractive DIS data, as a function of $\beta$ and for different values of Q$^2$. As one can see, very good agreement between the data and the theory predictions is achieved, consistent with the total and individual $\chi^{2}$ values presented in Table~\ref{tab:chi2-all}.
\begin{figure*}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-1.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-2.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-3.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-4.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-5.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{H1-LRG-12-6.jpg}} \begin{center} \caption{ \small Same as Fig.~\ref{fig:S225} but this time for the H1-LRG-12 data. } \label{fig:H1-LRG-12} \end{center} \end{figure*}
Finally, in Fig.~\ref{fig:combined}, we display a detailed comparison of the theory predictions using the three different sets of {\tt SKMHS22} and the H1/ZEUS combined data. The plots are presented as a function of $x_{\pom}$ and for different values of $\beta$ and $Q^2$. As one can see, very good agreement is achieved.
\begin{figure*}[htb] \vspace{0.50cm} \resizebox{0.480\textwidth}{!}{\includegraphics{com-1.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{com-2.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{com-3.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{com-4.jpg}} \resizebox{0.480\textwidth}{!}{\includegraphics{com-5.jpg}} \begin{center} \caption{ \small Same as Fig.~\ref{fig:S225} but this time for the H1/ZEUS combined data. } \label{fig:combined} \end{center} \end{figure*}
As one can see from Figs.~\ref{fig:S225}, \ref{fig:H1-LRG-12} and \ref{fig:combined}, in general, overall good agreement between the diffractive DIS data and the NLO theoretical predictions is achieved for all experiments, which is consistent with the individual and total $\chi^2$ values reported in Table~\ref{tab:chi2-all}. Remarkably, the NLO theoretical predictions and the data are in reasonable agreement in the small- and large-$\beta$ regions, and over the whole range of $x_{\pom}$.
\section{Discussion and Conclusion}\label{Conclusion} In this work, we performed fits of the diffractive PDFs at NLO accuracy in perturbative QCD, called {\tt SKMHS22}, using all available and up-to-date diffractive DIS data sets from the H1 and ZEUS Collaborations at HERA. We presented a mutually consistent set of diffractive PDFs by adding the high-precision diffractive DIS data from the H1/ZEUS combined inclusive diffractive cross-section measurements to the data sample. In addition to the standard twist-2 contributions, we also considered the twist-4 correction and the Reggeon contribution to the diffractive structure function, which dominate in the region of large $\beta$. The effect of such contributions on the diffractive PDFs and cross sections was carefully examined and discussed. The twist-4 correction and the Reggeon contribution lead to a gluon distribution which is more strongly peaked at high $\beta$ than in the case of the standard twist-2 QCD fit. The well-established {\tt xFitter} fitting methodology, widely used to determine the unpolarised PDFs, was modified to incorporate the higher-twist contributions. This fitting methodology is specifically designed to provide faithful representations of the experimental uncertainties on the PDFs. The QCD analysis presented in this work represents the first step of a broader program. A number of updates and improvements are foreseen for future studies. An important limitation of the {\tt SKMHS22} QCD analysis is that it is based solely on inclusive diffractive DIS cross-section measurements.
Despite the fact that diffractive DIS is the cleanest process for the extraction of diffractive PDFs, it is only weakly sensitive to the gluon density. For the near future, our main aim is to include the very recent diffractive dijet production data~\cite{H1:2014pjf,H1:2015okx}, which we expect to provide a good constraint on the determination of the gluon diffractive PDFs. This will require the numerical implementation of the corresponding observables at NLO and NNLO accuracy in perturbative QCD in the {\tt xFitter} package. A further improvement for future {\tt SKMHS22} analyses, as a long-term project, is the inclusion of other observables from hadron colliders which could carry some information on flavor separation. A {\tt FORTRAN} subroutine, which evaluates the three sets of {\tt SKMHS22} NLO diffractive PDFs presented in this work for given values of $\beta$, $x_{\pom}$ and $Q^2$, can be obtained from the authors upon request. These diffractive PDF sets are also available in the standard {\tt LHAPDF} format. \begin{acknowledgments} Hamzeh Khanpour, Hadi Hashamipour and Maryam Soleymaninia thank the School of Particles and Accelerators, Institute for Research in Fundamental Sciences (IPM) for financial support of this project. Hamzeh Khanpour is also thankful to the Physics Department of the University of Udine and the University of Science and Technology of Mazandaran for the financial support provided for this research. This work was also supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 ``Symmetries and the Emergence of Structure in QCD'' (DFG Project-ID 196253076 - TRR 110). The work of UGM was supported in part by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) (Grant No. 2018DM0034) and by VolkswagenStiftung (Grant No. 93562). \end{acknowledgments} \clearpage
\section*{Introduction} Recent work of \citet{PS16} and the author \citep{K16:monster} has suggested that the $FSZ$ properties for a (perfect) group are completely determined by the $FSZ$ properties of its Sylow subgroups. A suggestive point of investigation is to consider this connection for the case of the alternating groups $A_n$, or the symmetric groups $S_n$. These groups were established to be $FSZ$ by \citet{IMM}. The Sylow $p$-subgroups of $S_n$ are well-known to be described by direct products of iterated regular wreath products of $\BZ_p$ with itself; see \citep{Rot99} for example. More generally, the Sylow $p$-subgroups of classical simple groups are often given by iterated wreath products with $\BZ_p$ \citep{Weir:ClassicalSylow}. This paper was born from efforts to understand how well-behaved the $FSZ$ properties of $p$-groups are with respect to regular wreath products with $\BZ_p$, with the goal of determining if the Sylow subgroups of $S_n$ are always $FSZ$. While this remains an open question in general, we are able to provide some partial results. In \cref{cor:symm-syl-high} we show that $S_{p^j}$ has an $FSZ_{p^{j-1}}$ Sylow $p$-subgroup, and in \cref{thm:Sp3} we show that the Sylow $p$-subgroup of $S_{p^3}$ is $FSZ$. Along the way, several examples of bad behavior are discovered, including some new constructions of non-$FSZ$ groups. We also provide in \cref{prop:wreath-condition} a sufficient condition for the $FSZ$ properties to hold for regular wreath products of the form $D\wr_r\BZ_p$ with $D$ a $p$-group. Our work on such wreath products culminates in \cref{thm:fsz-not-plus}, which provides, for any prime $p>3$, an $FSZ$ $p$-group of order $p^{(p+1)^2}$ that is not $FSZ^+$. The paper is structured as follows. In \cref{sec:background} we detail the necessary background material, definitions, and notation. Then in \cref{sec:central} we establish a connection between the $FSZ$ properties of central products with abelian groups and quotients by central subgroups. In \cref{sec:family} we construct examples of $FSZ_{p^j}$ groups with non-$FSZ_{p^j}$ central quotients and central products. \Cref{sec:main} is the main section of the paper, and investigates regular wreath products $D\wr_r\BZ_p$ with $D$ a $p$-group. These investigations are applied in \cref{sec:not-plus} to construct examples of $FSZ$ groups which are not $FSZ^+$, using the groups constructed in the previous sections. The proof of the $FSZ$ property for these examples is then adapted to show that the Sylow $p$-subgroup of $S_{p^3}$ is $FSZ$. The appendix enumerates all isomorphism classes of non-$FSZ$ groups of order $5^7$, as determined with \citet{GAP4.8.4}. \section{Background and notation}\label{sec:background} All groups are finite. For a group $G$ and $x\in G$ we let \[ o(x) = \mbox{order of the element } x. \] We define $\BN=\{1,2,3,...\}$ to be the set of positive integers. Given groups $G,H$, subgroups $A\subseteq Z(G)$ and $B\subseteq Z(H)$, and an isomorphism $\phi\colon A\to B$, let $N=\{(a,\phi(a\inv))\in G\times H \ : \ a\in A\}$. Then we define the central product $G\ast H$ as the quotient group $(G\times H)/N$. The direct product arises as the special case where $A,B$ are trivial. In general the isomorphism class of $G\ast H$ depends on $A,B,\phi$. The specific choices will either always be made explicit, or will not be important, hence our choice to omit these dependencies from the notation.
In an obvious fashion there is an equivalent definition of central product using generators and relations, which we will also use whenever convenient. For our purposes, we will be primarily interested in the case where $A,B,$ and $H$ are all cyclic. In this case, to define $G\ast H$ it suffices to state a relation $x^m=z$ where $H=\cyc{x}$, $z\in Z(G)$, and $m\in\BN$. Given a group $G$ and a prime $p$, the regular wreath product $G\wr_r\BZ_p$ is defined as the semidirect product $G^p\rtimes \BZ_p$, where $\BZ_p$ acts by cyclically permuting the factors of $G^p$. The $FSZ$ properties for groups were introduced by \citet{IMM}, and were inspired by invariants of representation categories of semisimple Hopf algebras \citep{KSZ2,NS07a,NS07b}. These invariants and their generalizations have proven extremely useful in a wide range of areas; see \citep{NegNg16} for a detailed discussion and references. While multiple characterizations of the $FSZ$ properties exist in the literature \citep{IMM,PS16}, for our purposes these properties are concerned with the following sets. \begin{df} For a group $G$, $m\in\BN$, and $u,g\in G$ we define \[ G_m(u,g) = \{ a\in G \ : \ a^m = (au)^m = g\}.\] \end{df} \begin{rem} We note that in the original definitions for these sets, based on the original formulas for the Frobenius-Schur indicators of the Drinfeld double of $G$ \citep{KSZ2,IMM}, one uses $u\inv$ instead of $u$. In this setting an irreducible representation of this Hopf algebra is parameterized by a pair $(u,\eta)$, with $u\in G$ and $\eta$ an irreducible character of $C_G(u)$ \citep{DPR}. The isomorphism class of such a representation depends only on the conjugacy class of $u$, and the isomorphism class of $\eta$. The $m$-th indicator of $(u,\eta)$ is then calculated, in our definition, using the sets $G_m(u\inv,g)$. In principle, then, there is a difference between using $u$ or $u\inv$ in the definition, and either one can be argued to be the more convenient choice. However, there is a bijection $G_m(u,g)\to G_m(u\inv,g)$ for all $u,g\in G$ and $m\in\BN$ given by $a\mapsto au$. Since the $m$-th indicator values depend only on the cardinalities of the sets $G_m(u,g)$ \citep{KSZ2}, rather than the sets themselves, there is in fact no loss of convenience in either setting. \end{rem} We then define the $FSZ$ properties as follows. \begin{df} Let $m\in\BN$. We say the group $G$ is $FSZ_m$ if for any $n\in \BN$ with $(n,|G|)=1$ we have $|G_m(u,g)|=|G_m(u,g^n)|$ for all $u,g\in G$. We say $G$ is $FSZ$ if it is $FSZ_m$ for all $m$. Furthermore, we say that $G$ is $FSZ_m^+$ if $C_G(g)$ is $FSZ_m$ for all $g\in G$. We say $G$ is $FSZ^+$ if it is $FSZ_m^+$ for all $m$. \end{df} The fundamental facts of $FSZ$ and $FSZ^+$ groups can be found in \citep{IMM}, as well as many examples and classes of $FSZ^+$ groups. We record the facts we will use frequently throughout the paper in the following. \begin{lem}(\citet{IMM}) \begin{enumerate} \item $G_m(u,g)=\emptyset$ if $u\not\in C_G(g)$, and $G_m(u,g)\subseteq C_G(g)$. \item Every group is $FSZ_m$ for $m\in \{1,2,3,4,6\}$. \item If $G=H\times K$ is a direct product of groups, then \[ G_m((u_H,u_K),(g_H,g_K)) = H_m(u_H,g_H)\times K_m(u_K,g_K).\] In particular, $G$ is $FSZ_m$ if and only if both $H$ and $K$ are $FSZ_m$. \item Every regular $p$-group is $FSZ^+$. \item The irregular $p$-group $\BZ_p\wr_r\BZ_p$ is $FSZ^+$. \item There are 32 isomorphism classes of (necessarily irregular) non-$FSZ$ groups of order $5^6$.
They are all non-$FSZ_5$ and necessarily have exponent 25, maximal class, and a center of size 5. \item The symmetric and alternating groups are all $FSZ^+$. \item A group is $FSZ_m^+$ if and only if for all $n\in\BN$ with $(n,|G|)=1$ and $u,g\in G$ with $[u,g]=1$ the sets $G_m(u,g)$ and $G_m(u,g^n)$ are isomorphic permutation modules for $C_G(u,g)$. \end{enumerate} \end{lem} Several additional examples of $FSZ^+$ and non-$FSZ$ groups can be found in \citep{K16:p-examples,K16:monster,K,K2,PS16,Etingof:SymFSZ}. It has been an open question as to whether or not there exists an $FSZ$ group which is not $FSZ^+$. We will show that such groups exist in \cref{thm:fsz-not-plus}. \begin{rem} We wish to note an important contrast between the class of regular $p$-groups and its parent class of $FSZ$ $p$-groups. It is well-known that a direct product of regular $p$-groups is not necessarily regular, whereas the $FSZ$ properties are preserved by direct products. On the other hand, every subquotient of a regular $p$-group is again regular, whereas there seems to be little necessary connection between the $FSZ$ properties of a group and its subquotients. That $S_n$ is $FSZ^+$ for all $n$ and the existence of non-$FSZ$ groups show that an $FSZ^+$ group can have many non-$FSZ$ subgroups. Furthermore, any proper subquotient of a non-$FSZ$ group of order $p^{p+1}$ is necessarily $FSZ^+$. In \cref{sec:central} we will show that an $FSZ$ (indeed, $FSZ^+$ by \cref{cor:fp1-plus}) group can have a non-$FSZ$ quotient, even if we restrict attention to quotients by central subgroups. This reflects the general difficulty, both theoretical and computational, of working with the $FSZ$ properties. \end{rem} \section{Connecting central products and central quotients}\label{sec:central} Given that the class of $FSZ_m$ groups and the class of $FSZ_m^+$ groups are both closed under direct products for every $m\in\BN$, one might hope that they are also closed under central products, or at least central products with abelian groups. By \citet{IMM} the classes of $FSZ_m$ and $FSZ_m^+$ groups are trivially closed under central products for any $m\in\{1,2,3,4,6\}$, since these are the same as the class of all groups. Unfortunately, this closure property turns out to fail for prime powers $m=p^j$ for any prime $p>3$ and $j\in\BN$, and it is our goal to illuminate the process by which we can construct examples demonstrating this failure. By using iterated central products it is easy to see that if we want to prove any sort of closure of $FSZ$ groups with respect to central products with abelian groups, then it suffices to assume the abelian group is cyclic. Indeed, for $H=G\ast C$ with $C$ cyclic it suffices to suppose that $[H:G]=p$ is a prime. We can also observe that, by the isomorphism theorems, to prove any sort of closure of $FSZ$ groups with respect to quotients by central subgroups it would suffice to consider the case where the central subgroup is cyclic of prime order. We will soon see that the following sets are intimately connected to determining these closure properties, and the failure thereof. \begin{df}\label{df:trip-sets} Let $G$ be a group, and let $u,g\in G$ and $z\in Z(G)$. For any $m\in\BN$ we define \[ G_m(u,g,z) = \{ a\in G \ : \ a^m = z(au)^m = g\}.\] \end{df} We have the trivial equality $G_m(u,g,1)=G_m(u,g)$ in the usual sense, and we easily see that $G_m(u,g,z)\subseteq C_G(g)$ and that $G_m(u,g,z)=\emptyset$ if $[u,g]\neq 1$.
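For small groups these sets can be computed in GAP by brute force. The following short function (a sketch; the name \verb"GmSet" is ours and is not part of any package) implements \cref{df:trip-sets} directly:
\begin{lstlisting}
# Brute-force computation of G_m(u,g,z); only practical for small |G|.
GmSet := function(G, m, u, g, z)
  return Filtered(Elements(G), a -> a^m = g and z*(a*u)^m = g);
end;;
\end{lstlisting}
Taking $z$ to be the identity recovers the sets $G_m(u,g)$.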
We note that the restriction $z\in Z(G)$ can be relaxed to $z\in C_G(g)$ without changing these properties, but for our purposes we will only be interested in the case of $z\in Z(G)$. We begin by considering quotients by cyclic central subgroups. \begin{thm}\label{thm:cent-quot-decomp} Let $G$ be a group with $1\neq z\in Z(G)$. Define $A=\cyc{z}$ and $H=G/A$ with $\pi\colon G\to H$ the canonical epimorphism. For fixed $m\in\BN$, $u,g\in G$ and any $n\in\BN$ with $(n,|G|)=1$ define \[ X_m(u,g,n)= \bigcup_{i=1}^{o(z)}\bigcup_{j=1}^{o(z)} G_m(u,z^{ni}g^n,z^j).\] Then $H_m(uA,g^nA)=\pi(X_m(u,g,n))$ and $X_m(u,g,n)A = X_m(u,g,n)$. As a consequence, $H$ is $FSZ_m$ if and only if for all $u,g\in G$ and $n\in\BN$ with $(n,|G|)=1$ there is a bijection $X_m(u,g,1)\to X_m(u,g,n)$. \end{thm} \begin{proof} By definitions we have \[ aA\in H_m(uA,g^nA) \iff \forall a_1\in A \, \exists a_2,a_3\in A \left( a^m a_1 = (au)^m a_2 = g^n a_3\right).\] Multiplying this last identity by $a_1\inv$ and using that $A=\cyc{z}$ and $(n,|G|)=1$ shows that \[aA\in H_m(uA,g^nA)\iff a\in X_m(u,g,n).\] This establishes the first claim. It then immediately follows that $X_m(u,g,n)$ is a union of $A$ cosets. Therefore $|X_m(u,g,n)| = |A| |H_m(uA,g^nA)|$, and the final equivalence is then immediate. \end{proof} \begin{rem} Observe that the unions defining $X_m(u,g,n)$ are in fact disjoint unions. In other words, $G_m(u,z^{ni}g^n,z^j)\cap G_m(u,z^{ni'}g^n,z^{j'}) = \emptyset$ for $i\not\equiv i' \bmod o(z)$ or $j\not\equiv j'\bmod o(z)$. \end{rem} We can also exhibit the following connection to the sets $G_m(u,g,z)$. \begin{lem}\label{lem:cent-prod-lem} Let $G$ be a group, and $K=G\ast C$ where $C=\cyc{x}$ and the central product is given by the identification $x^m=z\in Z(G)$ for some $m\in\BN$. Then for this $m$, any $u,g\in G$, any $n\in\BN$ with $(n,|K|)=1$ and $0\leq j<m$ we have a bijection \[ K_m(ux^j,g^n) \to \bigsqcup_{i=0}^{m-1} G_m(u,z^{ni}g^n,z^{j}).\] In particular, \[|K_m(ux^j,g^n)| = \sum_{i=0}^{m-1} |G_m(u,z^{ni}g^n,z^{j})|.\] \end{lem} \begin{proof} By definitions we have for $0\leq i < m$ and $a\in G$ that \begin{align*} ax^{-ni} \in K_m(ux^j,g^n) &\iff z^{-ni} a^m = z^{-ni}z^{j} (au)^m = g^n\\ &\iff a^m = z^{j} (au)^m = z^{ni}g^{n}\\ &\iff a\in G_m(u,z^{ni}g^n,z^{j}). \end{align*} The claims now follow. \end{proof} \begin{cor}\label{cor:cent-prod} Let assumptions be as in the preceding lemma. If also $o(z)\geq m$ then we have a bijection \[ K_m(ux^j,g^n) \to \bigcup_{i=0}^{m-1} G_m(u,z^{ni}g^n,z^{j}).\] \end{cor} \begin{proof} The assumption on $o(z)$ guarantees, by definition, that the sets in the union are all pairwise disjoint. \end{proof} So we see there is a strong connection between the $FSZ$ properties for central products and quotients by suitable central subgroups. \begin{thm}\label{thm:quot-to-prod} Let $G$ be a group with $1\neq z\in Z(G)$. Set $A=\cyc{z}$, $H=G/A$. Let $m\in \BN$ be divisible by $o(z)$. Define $K=G\ast C$, where $C=\cyc{x}$ and the central product is defined by $x^m=z$. Then $H$ is $FSZ_m$ if and only if for all $u,g\in G$ and $n\in\BN$ with $(n,|K|)=1$ there is a bijection \[\bigcup_{j=0}^{m-1} K_m(ux^j,g)\to \bigcup_{j=0}^{m-1} K_m(ux^j,g^n).\] As a consequence, if $K$ is $FSZ_m$ then $H$ is $FSZ_m$. Equivalently, if $H$ is not $FSZ_m$ then $K$ is not $FSZ_m$. \end{thm} \begin{proof} In the case that $o(z)=m$ the claims follow from \cref{thm:cent-quot-decomp,cor:cent-prod}.
When $o(z)$ divides $m$ every set in the disjoint union from \cref{lem:cent-prod-lem} occurs the same number of times, and so when applying this bijection to $\bigcup_{j=0}^{m-1} K_m(ux^j,g)$ we see that we obtain every term in $X_m(u,g,1)$ exactly $m/o(z)$ times. So the result again follows from \cref{thm:cent-quot-decomp}. \end{proof} Thus we can show that the class of $FSZ_m$ groups is not closed under central products by showing that it is not closed under quotients by central subgroups. We know there are non-$FSZ_5$ groups of order $5^6$ by \citet{IMM}, and \citet{GAP4.8.4} has the groups of order $5^7$ in its \verb"SmallGroups" library. By eliminating cases where central quotients are necessarily $FSZ_5$, we can use GAP to show that of the \numprint{34297} groups of order $5^7$, there are exactly 83 (isomorphism classes of) $FSZ$ groups of order $5^7$ admitting a non-trivial cyclic central subgroup $A$ such that $G/A$ is not $FSZ_5$. Indeed, all such groups are also $FSZ^+$. The ids---the value $n$ in \verb"SmallGroup"($5^7$,n)---for these groups are given in \cref{tab:id-nums}. \begin{table}[ht] \centering \caption{ID numbers of $FSZ$ groups of order $5^7$ with non-$FSZ$ central quotients.} \label{tab:id-nums} \begin{tabular}{CCCCCCCC} 348& 350& 352& 354& 383& 388& 391& 394\\ 397& 453& 458& 463& 467& 472& 477& 530\\ 571& 573& 577& 584& 585& 595& 619& 626\\ 638& 639& 640& 641& 644& 645& 646& 647\\ 654& 660& 667& 700& 701& 702& 703& 709\\ 718& 724& 834& 835& 836& 837& 838& 842\\ 843& 844& 845& 846& 850& 851& 852& 853\\ 854& 858& 859& 860& 861& 862& 866& 867\\ 868& 869& 870& 875& 876& 877& 878& 881\\ 884& 887& 890& 893& 896& 911& 916& 921\\ 926& 931& 936& \end{tabular} \end{table} The following is a specific example that demonstrates this. \begin{thm}\label{thm:specific} Let $G=\verb"SmallGroup"(5^7,348)$ in GAP. Then $G$ is $FSZ^+$ and there exists $A\subseteq Z(G)$ with $A\cong\BZ_5$ such that $G/A$ is not $FSZ_5$. There also exists a cyclic group $C$ of order $25$ and a central product $K=G\ast C$ such that $K$ is not $FSZ_5$. \end{thm} \begin{proof} The last claim follows from \cref{thm:quot-to-prod}, though it can also be constructed and tested in GAP directly. We will use \citet{GAP4.8.4} and the \verb"FSZtest" function of \citet{PS16} to establish the claims.
\begin{lstlisting}
G := SmallGroup(5^7,348);
Center(G);
A := Subgroup(G,[G.6*G.7]);
H := G/A;
FSZtest(G);
FSZtest(H);
\end{lstlisting}
The \verb"FSZtest" function demonstrates that $G$ is $FSZ$ as desired, and that $H$ is not $FSZ$, and so must necessarily be non-$FSZ_5$. That $G$ is in fact $FSZ^+$ can be obtained by running \verb"FSZtest" over all suitable centralizers in $G$.
\begin{lstlisting}
cl := RationalClasses(G);;
cl := List(cl,Representative);;
cl := Filtered(cl,x->not x in Center(G));;
ForAll(cl,x->FSZtest(Centralizer(G,x)));
\end{lstlisting}
This returns true, which completes the proof. \end{proof} \section{A family of \texorpdfstring{$FSZ$}{FSZ} groups with non-\texorpdfstring{$FSZ$}{FSZ} central quotients}\label{sec:family} In this section we show how to obtain an infinite family of examples similar to the one from \cref{thm:specific}. For this, we will use the groups $S(p,j)$ from \citep{K16:p-examples}. It was shown in \citep[Theorem 1.9]{K16:p-examples} that $S(p,j)$ is not $FSZ_{p^j}$ for any prime $p>3$ and $j\in\BN$. We will construct an $FSZ_{p^j}$ group which has $S(p,j)$ as one of its quotients by a central subgroup. This construction will strongly mirror the one for $S(p,j)$.
So fix a prime $p$ and $j\in\BN$. Consider the abelian $p$-group \[ Q_{p,j} = \BZ_{p^{j+1}} \times \BZ_p^{p^j-1}.\] Note for $p>2$ that $Q_{p,j}=P_{p,j}\times \BZ_p$ as defined in \citep{K16:p-examples}; $P_{p,j}$ was not explicitly defined for $p=2$ for technical reasons, but otherwise makes sense with $P_{2,1}=\BZ_4$. Let $Q_{p,j}$ have generators $a_1,...,a_{p^j}$, where $a_1$ has order $p^{j+1}$ and the rest have order $p$. We define an endomorphism $b_{p,j}$ of $Q_{p,j}$ by \begin{align*} a_1 &\mapsto a_1 a_2\inv\\ a_k &\mapsto a_k a_{k+1}, \ 2\leq k < p^j\\ a_{p^j} &\mapsto a_{p^j}. \end{align*} We can equivalently write $b_{p,j}$ as a $p^j\times p^j$ lower triangular matrix $B_{p,j}$ which acts from the left on $Q_{p,j}$ in the obvious fashion. The entries in the first row of $B_{p,j}$ are taken modulo $p^{j+1}$, while all other entries are taken modulo $p$. We have \begin{align*} B_{p,j} &= \begin{pmatrix} 1 & 0 & 0 &\cdots & 0 & 0 &0\\ -1 & 1 & 0 &\cdots & 0 & 0 &0\\ 0& 1 & 1 & \cdots & 0 & 0 & 0\\ &\vdots&&&&\vdots&\\ 0&0&0&\cdots & 1&1&0\\ 0&0&0&\cdots&0&1&1 \end{pmatrix}. \end{align*} The entries of $B_{p,j}^k$ for $1\leq k < p^j$ are then naturally described by binomial coefficients, or equivalently the entries of Pascal's triangle. \begin{lem}\label{lem:b-is-aut} The endomorphism $b_{p,j}$ is an automorphism of order $p^j$. \end{lem} \begin{proof} The proof is nearly identical to \citep[Lemma 1.1]{K16:p-examples}, so we only sketch it. By writing $B_{p,j}=I+S$ where $I$ is the identity matrix, we can use the binomial theorem to expand \[(I+S)^{p^j} = I + S^{p^j} + \sum_{k=1}^{p^{j}-1} \binom{p^j}{k} S^k.\] The powers of $S$ are easily computed, and $S$ is readily observed to be nilpotent with $S^{p^j}=0$. Moreover, since each binomial coefficient in the summation is a multiple of $p$ and the first row of $S$ is the zero vector, we conclude that every term in the summation is the zero matrix. Finally, the first column of every power $B_{p,j}^i$ for $1\leq i < p^j$ contains an entry of $-1$, and $B_{p,j}^i$ is therefore not the identity matrix. This completes the proof. \end{proof} \begin{df} Let $p$ be a prime and $j\in\BN$. Define a group of order $p^{p^j+2j}$ by \[ F(p,j) = Q_{p,j}\rtimes \cyc{b_{p,j}}.\] \end{df} By considering the (right) $1$-eigenvectors of $B_{p,j}$ it is not difficult to see that $Z(F(p,j)) = \cyc{a_1^{p}, a_{p^j}}\cong \BZ_{p^j}\times\BZ_p$. We identify $Q_{p,j}$ and $\cyc{b_{p,j}}$ as subgroups of $F(p,j)$ in the usual fashion. We then relate the group $F(p,j)$ to $S(p,j)$ in the following manner. \begin{prop}\label{prop:has-bad-quot} Suppose $p$ is an odd prime, and let $G=F(p,j)$ be as above. Define $A=\cyc{a_1^{p^j} a_{p^j}}\subseteq Z(G)$. Then $G/A\cong S(p,j)$, as defined in \citep{K16:p-examples}. \end{prop} \begin{proof} Taking $Q_{p,j}$ modulo $A$ yields $P_{p,j}$ in a natural way, and rewriting the definitions of our $b_{p,j}$ modulo $A$ yields exactly the automorphism of $P_{p,j}$ that defines $S(p,j)$ (in the same natural way). \end{proof} From now on, whenever convenient we will omit the subscripts from $b_{p,j}$, $Q_{p,j}$, and $B_{p,j}$. To describe $p^j$-th powers in $F(p,j)$, we need to consider the sums over all elements of $\cyc{B^{p^l}}$ for any $0\leq l \leq j$ and their action on $Q$. By definition this matrix sum is independent of the choice of generator of the cyclic group in question.
Explicitly, we define for $0\leq l \leq j$ \begin{align} X_{p,j}(p^l) = \sum_{t=1}^{p^{j-l}} B^{tp^l}.\end{align} Note that $X_{p,j}(p^j)$ is the identity matrix. \begin{rem} The $X_{p,j}(p^l)$ play the same role that the $Y_{p,j}(p^l)$ played in \citep{K16:p-examples}. The next few lemmas mirror the corresponding lemmas for the $Y_{p,j}(p^l)$. \end{rem} \begin{lem}\label{lem:x-1} The matrix $X_{p,j}(1)$ is given by \[ X_{p,j}(1).\prod_i a_i^{n_i} = a_1^{p^j n_1} a_{p^j}^{-n_1}.\] \end{lem} \begin{proof} Let $X$ be shorthand for $X_{p,j}(1)$. Now $X$ satisfies the identity $BX=X$, meaning that the columns of $X$ are 1-eigenvectors of $B$. These are easily seen to be multiples of $(1,0,...,0,-1)$. It therefore suffices to show that all columns but the first of $X$ must be zero. For this, note that $X$ also satisfies $XB=X$, so that the row vectors of $X$ are left 1-eigenvectors of $B$. These are easily seen to all be multiples of $(1,0,...,0)$, which gives the desired claim. \end{proof} \begin{lem}\label{lem:x-L} The matrices $X_{p,j}(p^l)$ for $1\leq l \leq j$ all satisfy \[ p^l X_{p,j}(p^l).\prod_i a_i^{n_i} = a_1^{p^j n_1}.\] \end{lem} \begin{proof} Let $X$ be shorthand for $X_{p,j}(p^l)$. Since $l>0$ the multiplication by $p^l$ guarantees that all entries of $p^l X$, except possibly those in the first row, are zero. The (1,1) entry of every matrix in the sum defining $X$, on the other hand, is $1$. Since there are $p^{j-l}$ terms in the summation, the $(1,1)$ entry of $p^l X$ is therefore $p^j$. Since the only non-zero entry in the first row of $B$ is the (1,1) entry, every other entry on the first row in every power of $B$ is automatically 0. This completes the proof. \end{proof} We can then describe the $p^j$-th powers in $F(p,j)$ by \begin{align}\label{eq:gen-powers} \left( a_1^{n_1}\cdots a_{p^j}^{n_{p^j}} b^{p^l r} \right)^{p^j} &= p^{l}X_{p,j}(p^l). \left(a_1^{n_1}\cdots a_{p^j}^{n_{p^j}}\right), \end{align} where $p\nmid r$. \begin{lem}\label{lem:pj-pows} For $a=a_1^{n_1}\cdots a_{p^j}^{n_{p^j}} b^t\in F(p,j)$ we have \begin{align*} a^{p^j} &= \begin{cases} a_1^{n_1 p^j},& p\mid t\\ a_1^{n_1 p^j} a_{p^j}^{-n_1},& p\nmid t \end{cases}. \end{align*} \end{lem} \begin{proof} Follows directly from \cref{lem:x-1,lem:x-L} applied to \cref{eq:gen-powers}. \end{proof} \begin{cor}\label{cor:exp} $F(p,j)$ has exponent $p^{j+1}$. \end{cor} \begin{thm}\label{thm:fpj-main} Let $F(p,j)$ be as above. Then the following all hold. \begin{enumerate} \item $F(p,j)$ is $FSZ_{p^j}$. \item If $p>3$, then $F(p,j)$ has a central subgroup $A\cong \BZ_p$ such that $F(p,j)/A$ is not $FSZ_{p^j}$. \item If $p>3$, then for a cyclic group $C$ of order $p^{j+1}$ there exists a central product $H=F(p,j)\ast C$ such that $H$ is not $FSZ_{p^j}$. \end{enumerate} \end{thm} \begin{proof} Set $G=F(p,j)$. The last two parts follow immediately from \citep[Theorem 1.9]{K16:p-examples}, \cref{prop:has-bad-quot,thm:quot-to-prod}. For the first part, let $1\neq g=a_1^{k_1}\cdots a_{p^j}^{k_{p^j}} b^r$, $u = a_1^{j_1}\cdots a_{p^j}^{j_{p^j}}b^s$, and $n\in\BN$ with $p\nmid n$. From \cref{lem:pj-pows} we see that \[ G_{p^j}(u,g) = \emptyset\] if $g\not\in \cyc{a_1^{p^j},a_{p^j}}\subseteq Z(G)$. Let $V$ be the subgroup of $Q$ generated by $a_2,...,a_{p^j}$. By \cref{lem:pj-pows} the $p^j$-th powers do not depend on elements in $V$. As such we are free to suppress all elements of $V$ when taking $p^j$-th powers.
Letting $a= a_1^{n_1}v b^t$ for some $v\in V$, we then have \[ a^{p^j} = (a_1^{n_1} b^t)^{p^j},\] and \[ (au)^{p^j} = (a_1^{n_1+j_1} b^{t+s})^{p^j}.\] By \cref{lem:pj-pows} we see that for equality to hold with $1\neq g$ we must have that either $t$ and $t+s$ are both invertible modulo $p$, or both are divisible by $p$. From this it is then easy to see that $|G_{p^j}(u,g)| = |G_{p^j}(u,g^n)|$ for any $p\nmid n$. This shows that $F(p,j)$ is $FSZ_{p^j}$ as desired, and so completes the proof. \end{proof} \begin{example} The group $F(5,1)$ is exactly \verb"SmallGroup"($5^7$,654) in GAP. This group was the basis for the above construction, in much the same fashion that $S(5,1)=$ \verb"SmallGroup"($5^6$,632) was the motivating example in \citep{K16:p-examples}. \end{example} \begin{example} Since $p^{p+1}$ is the smallest possible order for any non-$FSZ$ $p$-group, for $p>3$ we see that $F(p,1)$ has the smallest possible order of any $FSZ$ $p$-group with a quotient that is not $FSZ$. We do not know if $S(p,j)$ has minimal order amongst the non-$FSZ_{p^j}$ $p$-groups. We therefore do not know if $F(p,j)$ has the smallest possible order for an $FSZ_{p^j}$ $p$-group with a (central) quotient that is not $FSZ_{p^j}$. The minimal orders for, and even the existence of, non-$FSZ$ 2-groups and 3-groups remain open questions. \end{example} \begin{rem} The groups $F(p,j)$, much like the groups $S(p,j)$, were designed so that there was a simple formula for $p^j$-th powers. However, lower powers are in general more complex. In particular, these will depend on the $t$ in $b^t$, rather than just the order of $b^t$. As our ultimate goal of finding an $FSZ$ group which is not $FSZ^+$ only requires us to consider $F(p,1)$, we will make no attempt here to understand the $FSZ_{p^l}$ properties of $F(p,j)$ for $l<j$. \end{rem} \section{Regular wreath products with \texorpdfstring{$\BZ_p$}{Z/pZ}}\label{sec:main} Our goal now is to use the results of the preceding section to help us construct, for every prime $p>3$, a group of order $p^{(p+1)^2}$ which is $FSZ$ but not $FSZ_p^+$. These will yield the first known examples demonstrating that the $FSZ^+$ properties are strictly stronger than the $FSZ$ properties in general. The idea is to look for a group $G$ constructed from $F(p,1)$. We desire that the centralizers in $G$ are constructed from those in $F(p,1)$ in a nice fashion, and in particular that for some $g\in G$ the centralizer $C_G(g)\cong F(p,1)\ast \BZ_{p^2}$ is not $FSZ_p$, but $G_p(u,g^n)=\emptyset$ for all $u\in G$ and $(n,|G|)=1$. We will see that the regular wreath product $F(p,1)\wr_r \BZ_p$ gives exactly what we want. First, we will need to review such regular wreath products, and investigate how they behave with respect to the $FSZ$ properties. For wreath products of the form $D\wr_r \BZ_p$ we write the typical element as \[(d_0,...,d_{p-1},i),\] where $d_0,...,d_{p-1}\in D$ and $i$ is an integer. The last coordinate and the indices for the first $p$ coordinates are all taken modulo $p$. \begin{lem}\label{lem:cents} Let $D$ be a group, $p$ a prime, and set $G=D\wr_r \BZ_p$. Then the following hold. \begin{lemenum} \item \label{lem-part:cents-1}If $i\not\equiv 0\bmod p$, then any element of the form $(d_0,...,d_{p-1},i)$ is conjugate to $(x,1,...,1,i)$ where $x$ is any element of the conjugacy class of $d_0\cdots d_{p-1}$ in $D$.
Letting $\Delta C_D(x)$ be the image of the diagonal embedding of $C_D(x)$ into $G$, we have \begin{align*} C_G((x,1,...,1,i)) &= \cyc{\Delta C_D(x),(x,1,...,1,i)}\\ &= \Delta C_D(x)\ast\cyc{(x,1,...,1,i)} \end{align*} is a central product with respect to the central subgroup $\cyc{x}$. In particular, $\Delta C_D(x)$ is a normal subgroup of index $p$, whose coset representatives can be taken as the first $p$ powers of $(x,1,...,1,i)$. Indeed, $\Delta C_D(x)$ commutes elementwise with these representatives. \item \label{lem-part:cents-2}Any element of the form $(d_0,...,d_{p-1},0)$ such that the $d_i$ are all conjugate to each other is conjugate to $(d,...,d,0)$ where $d$ is any element conjugate to $d_0$ in $D$. In this case, we have \[ C_G((d,...,d,0)) = C_D(d)\wr_r \BZ_p.\] \item \label{lem-part:cents-3}Any element of the form $(d_0,...,d_{p-1},0)$ such that $d_i$ and $d_j$ are not conjugate for some $i,j$ is conjugate to $(x_0,...,x_{p-1},0)$, where $x_i$ is any element conjugate to $d_i$ in $D$. In this case, we have \[ C_G((d_0,...,d_{p-1},0)) = C_D(d_0)\times\cdots\times C_D(d_{p-1}).\] \end{lemenum} Moreover, these enumerate all of the conjugacy classes of $G$. In particular, no element satisfying one of the cases is conjugate to any element from one of the other cases, or to an element with a different last coordinate. \end{lem} \begin{proof} All claims are proven by standard manipulations in wreath products, so we sketch only the proof of the first item. In all cases the result mirrors the determination of the conjugacy classes for iterated wreath products of cyclic groups given by \citet{Orellana2004}. Let $i\not\equiv 0\bmod p$, and let $g=(d_0,...,d_{p-1},i)\in G$. Set $x=d_0\cdots d_{p-1}$ and define $h_0,...,h_{p-1}$ by \[ h_{ki} = \Big(\prod_{l=k}^{p-1} d_{li}\Big)\inv, \ 0 < k < p\] and $h_0=1$. Then for $h=(h_0,...,h_{p-1},0)$ we have $g^h = (x,1,...,1,i)$, as desired. It is clear that $\cyc{\Delta C_D(x),(x,1,...,1,i)}\subseteq C_G((x,1,...,1,i))$. A straightforward check of the commutation relation $g^y=g$ for $y=(y_0,...,y_{p-1},j)$ then gives the reverse inclusion. \end{proof} We will also use the following standard lemma, the proof of which is elementary. \begin{lem}\label{lem:order-irrev} Let $G$ be any group and suppose $g_0,...,g_n\in G$ are such that $g_0,...,g_n\in C_G(g_0\cdots g_n)$. Then $g_0\cdots g_n = g_{i}\cdots g_{n+i}$ for all $i\in\BZ$, where indices are taken modulo $n+1$. Equivalently, the product is invariant under a cyclic permutation of the terms. \end{lem} For notational convenience, when investigating the $FSZ$ properties for regular wreath products $D\wr_r \BZ_p$ we will sometimes write a typical element $(x_0,...,x_{p-1},j)$ in the short-hand form $(x_l,j)$. As the main example, if $x_l = \prod_{s=0}^{p-1} y_{l+sj}$ for some $j$ and all $l$, instead of writing $(\prod_{s=0}^{p-1} y_{sj},\prod_{s=0}^{p-1} y_{1+sj},...,k)$, we write simply $(\prod_{s=0}^{p-1} y_{l+sj},k)$. \begin{thm}\label{prop:wreath-condition} Let $D$ be an $FSZ_{p^t}$ and $FSZ_{p^{t-1}}$ group for some $t\in\BN$. If for any $d,u_0\in D$ the number of elements $(x_0,...,x_{p-1})\in D^p$ satisfying \begin{align}\label{eq:wreath} x_l^{p^t} = \Big(u_{0} x_1 x_2\cdots x_{p-1} x_0\Big)^{p^{t-1}} = d^n, \ \text{for all } l, \end{align} does not depend on $n$ when $p\nmid n$, then $D\wr_r\BZ_p$ is $FSZ_{p^t}$. \end{thm} \begin{proof} Write $G=D\wr_r\BZ_p$. We have that $\exp(G)=p\cdot \exp(D)$, so it suffices to consider the $FSZ_{p^j}$ properties for $p^j\mid \exp(D)$.
It is clear that all $p$-th powers in $G$ are elements of $D^p$, whence $G_{p^t}(u,g)=\emptyset$ whenever $g\not\in D^p$. By assumptions on $D$ it then follows from \cref{lem:cents} that we need only consider the case where $g=(d,...,d,0)$ for some $d\in D$. Fix $n\in\BN$ with $p\nmid n$. We now proceed to investigate the identities $a^{p^t} = (au)^{p^t} = g^n = (d^n,0)$. Let $a=(x_l,i)$, $u = (u_l,j)$. Then \begin{align}\label{eq:ab-pt-a-pow} a^{p^t} &= \begin{cases} ( \Big(\prod_s x_{l+si}\Big)^{p^{t-1}},0),& p\nmid i\\ (x_l^{p^t},0),& p\mid i \end{cases},\\ (au)^{p^t} &= \begin{cases} ( \left(\prod_s x_{l+s(i+j)} u_{l+i+(i+j)s}\right)^{p^{t-1}},0),& p\nmid i+j\\ ( (x_l u_{l+i})^{p^t},0),& p\mid i+j \end{cases}.\label{eq:ab-pt-au-pow} \end{align} So we have a number of natural cases to consider. Suppose first that $p\mid i$ and $p\mid i+j$ (so that $p\mid j$). Then the equations take the form \[ x_l^{p^t} = (x_l u_l)^{p^t} = d^n\] for all $l$. The number of solutions to this is independent of $n$ for all $l$ since $D$ is $FSZ_{p^t}$ by assumption. Next suppose that $p\nmid i$ and $p\nmid i+j$. Then the equations take the form \[ \left(\prod_s x_{l+si}\right)^{p^{t-1}} = \left(\prod_s x_{l+s(i+j)} u_{l+i+s(i+j)}\right)^{p^{t-1}} = d^n.\] So suppose we have any arbitrary elements $x_1,...,x_{p-1}$, and then rewrite the above for $l=0$ as \[ (x_0 b)^{p^{t-1}} = (x_0 b v)^{p^{t-1}} = d^n,\] with \begin{align*} b = & \prod_{k=1}^{p-1} x_{ki}\\ v = & b\inv u_{i}\prod_{k=1}^{p-1} x_{(i+j)k} u_{i+(i+j)k}. \end{align*} It follows from \cref{lem:order-irrev} that we have a bijection from $D_{p^{t-1}}(v,d^n)$ to the set of $x_0$ values of those $(x_0,...,x_{p-1},i)\in G_{p^t}(u,g^n)$ with the given $x_1,...,x_{p-1}$ and $i$ via $a\mapsto ab\inv$. Since $x_1,...,x_{p-1}$ were arbitrary and $u$ is fixed, $v$ does not depend on $n$ and $|D_{p^{t-1}}(v,d^n)|$ does not depend on $n$ by assumptions. Therefore the number of solutions in this case is also independent of $n$. Note that the cases so far combine to completely cover the case where $p\mid j$. So in the remainder of the cases we have $p\nmid j$. Now $u\in C_G(g) = C_D(d)\wr_r\BZ_p$ and by \citep[Proposition 7.2]{KSZ2} we always have a bijection $G_m(x,y)\to G_m(x^h,y^h)$ given by $a\mapsto a^h$ for any group $G$, $m\in\BN$ and $x,y,h\in G$. Therefore by \cref{lem-part:cents-1} we may suppose without loss of generality that $u=(u_0,1,...,1,j)$. For the third case, we suppose that $p\mid i$ and $p\nmid i+j$ (and so $p\nmid j$). Then the equations take the form \[ x_l^{p^t} = \left(\prod_s x_{l+sj} u_{l+sj}\right)^{p^{t-1}} = d^n\] for all $l$. By \cref{lem:order-irrev} and assumptions on $u$ the middle term is necessarily always equal to \[\Big(u_0\prod_{s=1}^p x_s \Big)^{p^{t-1}},\] or equivalently it is equal to \[\left(\Big(\prod_{s=1}^{p} x_s\Big)u_0\right)^{p^{t-1}}.\] Therefore these equations are of the form given in the statement of the result. So we continue on to the next case. So consider the last case: $p\nmid i$ and $p\mid i+j$. Then the equations take the form \[ \left(\prod_s x_{l+si}\right)^{p^{t-1}} = (x_l u_{l+i})^{p^t} = d^n.\] Making the substitution $y_l = x_l u_{l+i}$ and $v_l = u_{l+i}\inv$, the equations become \[ y_l^{p^t} = \left(\prod_s y_{l+si} v_{l+si}\right)^{p^{t-1}} = d^n,\] which is the same form as the equations from the previous case.
Defining $v = (v_l,-j)$, these equations in fact come from exactly the previous case when replacing $u$ by $v$ and the $x_l$ by the $y_l$, so the same manipulations and simplifications hold to get them into exactly the form from the statement. This completes the proof. \end{proof} \begin{rem} If, for fixed $d^n$, the numbers of solutions to the third and fourth cases in the preceding proof are equal, then we have the converse statement: $D\wr_r\BZ_p$ is $FSZ_{p^t}$ implies that the number of solutions to \cref{eq:wreath} does not depend on $n$ when $p\nmid n$. Moreover, we can easily see that in the final case of the proof we have $v\inv=u$, and the bijection $G_m(u,g^n)\to G_m(u\inv,g^n)$ given by $a\mapsto au$ preserves the first two cases but swaps the last two cases. Nevertheless, there seems to be no clear reason for the third and fourth cases to always have the same number of solutions. In particular, it seems possible that $D$ and $D\wr_r\BZ_p$ are both $FSZ$ but nevertheless fail to satisfy the theorem. Similarly, it seems possible for $D$ to be $FSZ$ but for $D\wr_r\BZ_p$ to be non-$FSZ$. The author was unable to find any examples demonstrating either behavior, however. This is in large part due to the prohibitively large number of elements to check even when $D$ has relatively small order. All such $D$ and $p$ with $|D\wr_r\BZ_p|\leq 50,000$ were verified to be $FSZ$ with \citet{GAP4.8.4}; most of these are 2-groups with $|D|=128$, or are trivially $FSZ^+$ by the exponent criterion of \citep[Corollary 5.3]{IMM}. \end{rem} \begin{cor}\label{cor:wreath} Let $D$ be a $p$-group. If $D\wr_r\BZ_p$ is $FSZ_{p^j}$ then $D$ is $FSZ_{p^j}$. If we suppose $\exp(D)=p^j$ then $D\wr_r\BZ_p$ is $FSZ_{p^j}$ if and only if $D$ is $FSZ_{p^{j-1}}$. \end{cor} \begin{proof} The first statement follows from \cref{lem:cents}, while the second follows from \cref{prop:wreath-condition}. \end{proof} This suffices to obtain a partial result on the $FSZ$ properties of the Sylow subgroups of symmetric groups. \begin{thm}\label{cor:symm-syl-high} Let $j\in\BN$, and let $P$ be a Sylow $p$-subgroup of $S_{p^j}$. Then $P$ is $FSZ_{p^{j-1}}$. \end{thm} \begin{proof} We proceed by induction on $j$. The case $j=1$ has $P\cong \BZ_p$ and the case $j=2$ has $P\cong \BZ_p\wr_r\BZ_p$, both of which are $FSZ^+$ \citep{IMM}. Suppose the result holds for some $j\geq 2$. For the case $j+1$ we have $P\cong T\wr_r\BZ_p$ where $T$ is (isomorphic to) a Sylow $p$-subgroup of $S_{p^{j}}$. Now $T$ has exponent $p^{j}$, so the inductive hypothesis and \cref{cor:wreath} show that $P$ is $FSZ_{p^{j}}$, as desired. This completes the proof. \end{proof} It seems difficult to establish the condition from \cref{prop:wreath-condition} in general, as this appears to require \textit{ad hoc} methods for each choice of $D$. However, the case where $D$ is an abelian $p$-group is relatively simple. The following result generalizes \citep[Example 4.4]{IMM}. \begin{thm}\label{thm:ab-wreath} Let $A$ be an abelian $p$-group. Then $G= A\wr_r\BZ_p$ is $FSZ^+$. \end{thm} \begin{proof} Since all abelian groups are $FSZ^+$, to show $G$ is $FSZ$ we need only show that the conditions from \cref{prop:wreath-condition} hold. Moreover, by \cref{lem:cents}, and the fact that central products of (two) abelian groups are again abelian, the only centralizers in $G$ which are not already $FSZ$ by these observations are those equal to $G$. So the result follows as soon as we show that $G$ is $FSZ$. Let $A=\bigoplus_i \cyc{a_i}$. Fix $t\in\BN$ as in \cref{prop:wreath-condition}.
For $x_l,d,u_{0}\in A$, since $d$ is a $p^t$-th power in $A$ we may write \begin{align*} x_l &= \prod_k a_k^{n_{l,k}};\\ d&=\prod_k a_k^{d_k p^t};\\ u_{0}&= \prod_k a_k^{m_k}. \end{align*} Since $A$ is abelian \cref{eq:wreath} for $n=1$ becomes \[ x_l^{p^t} = \Big( \prod x_s^{p^{t-1}}\Big) u_0^{p^{t-1}} = d.\] Since $A$ is abelian, it suffices to consider the power on a single generator $a_k$ at a time. Indeed, without loss of generality we may assume that $a_k$ has order greater than $p^t$. The equality $x_l^{p^t}= d$ then becomes $p^t n_{l,k} \equiv p^t d_k \bmod o(a_k)$, which is equivalent to $n_{l,k}\equiv d_k \bmod o(a_k)/p^t$. So we may write $n_{l,k} = d_k + y_{l,k}\cdot o(a_k)/p^t$ for some integer $y_{l,k}$. We can do this for all $l$ and any such $k$. The equality \[ \Big(\prod_s x_s^{p^{t-1}}\Big) u_0^{p^{t-1}} = d \] becomes \[ p^{t-1}m_k+\sum_l y_{l,k}\cdot o(a_k)/p \equiv 0 \bmod o(a_k)\] for the given (but arbitrary) $k$. It follows that the map \[ n_{l,k}=d_k + y_{l,k}\cdot o(a_k)/p^t\mapsto n d_k + y_{l,k}\cdot o(a_k)/p^t\] yields the necessary bijection between solutions for $n=1$ and solutions for any $n\in\BN$ with $p\nmid n$. This completes the proof. \end{proof} \section{An \texorpdfstring{$FSZ$}{FSZ} but not \texorpdfstring{$FSZ^+$}{FSZ+} group}\label{sec:not-plus} We are now prepared to investigate the $FSZ$ properties of $F(p,1)\wr_r \BZ_p$. By \cref{lem:cents} we will need to know the centralizers of $F(p,1)$. \begin{lem}\label{lem:Fp1-cents} Let $G=F(p,1)$. Then the centralizers of $G$ are given as follows. \begin{enumerate} \item For any $g\in Z(G)$, $C_G(g) = G$. \item For $g\in Q$ with $g\not\in Z(G)$, $C_G(g)=Q$. \item For all other $g\in G$, $C_G(g) = \cyc{g, a_1^p, a_p} =\cyc{g, Z(G)}$. \end{enumerate} In particular, the centralizer of every non-central element in $G$ is abelian. \end{lem} \begin{proof} The first statement is trivial. For the second, clearly $Q\subseteq C_G(g)$. Moreover, the only elements of $Q$ centralized by a non-trivial power of $b$ are the elements of $Z(G)$, which gives the reverse inclusion. For the final statement, it suffices to consider the case with $g=bq$ for some $q\in Q$. Note that for every $s$ with $p\nmid s$ we have $g^s = b^s q'$ for some $q'\in Q$. So suppose that $b^s r\in C_G(g)$ for some $r\in Q$ and any fixed $s$. Then $bqb^s r = b^sr b q$ if and only if $(B^s q)q\inv = (Br)r\inv$. Now for any $s$ (even $p\mid s$), the map $x\mapsto (B^s x)x\inv$ is a group endomorphism of $Q$, and the kernel of this map for $p\nmid s$ is easily seen to be $Z(G)$. We conclude that $b^s r = (bq)^s \cdot z$ for some $z\in Z(G)$, from which the desired claim follows. \end{proof} The result is false for $F(p,j)$ with $j>1$, as then there are non-central elements with non-abelian centralizers. In particular, the centralizer of $b^p$ will be non-abelian. \begin{cor}\label{cor:fp1-plus} $F(p,1)$ is $FSZ^+$ for any odd prime $p$. \end{cor} \begin{proof} By \cref{lem:Fp1-cents,thm:fpj-main} all centralizers are $FSZ$, which means $F(p,1)$ is $FSZ^+$ by definition. \end{proof} We can now completely determine the $FSZ$ properties of $F(p,1)\wr_r\BZ_p$. \begin{thm}\label{thm:higher-wreath-FSZ} Let $G=F(p,1)\wr_r\BZ_p$. Then $G$ is $FSZ_{p^t}^+$ for all $t>1$. \end{thm} \begin{proof} Let $H=F(p,1)$. We have that $\exp(H)=p^2$, so that $\exp(G)=p^3$. So we need only consider the case $t=2$. Since $H$ is $FSZ$, by \cref{cor:wreath} we conclude that $G$ is $FSZ_{p^2}$.
Moreover, by \cref{lem:Fp1-cents,lem:cents,thm:ab-wreath} we conclude that all proper centralizers that are not described by central products are $FSZ_{p^2}$. By \cref{lem:Fp1-cents,cor:exp} the centralizers that are described by central products have exponent $p^2$ and so are also $FSZ_{p^2}$. This completes the proof. \end{proof} \begin{thm}\label{thm:fsz-not-plus} Let $G=F(p,1)\wr_r\BZ_p$. Then $G$ is $FSZ_p$, and is $FSZ_p^+$ if and only if $p\leq 3$. As a consequence, for $p>3$ $G$ is an $FSZ$ group which is not $FSZ^+$. \end{thm} \begin{proof} Let $H=F(p,1)$. By \citep[Corollary 5.4]{IMM} every group is $FSZ_2^+$ and $FSZ_3^+$, so it suffices to consider the case $p>3$. To see why the $FSZ_p^+$ condition necessarily fails, we note that by \cref{lem-part:cents-1,thm:fpj-main} there exists $d\in Z(H)$ such that $C_G((d,1,...,1,i))\cong H\ast \BZ_{p^2}$ is not $FSZ_p$ if $p>3$. The final claim follows from the first claim combined with \cref{thm:higher-wreath-FSZ}, so we need only prove the $FSZ_p$ property. We will do so by using \cref{prop:wreath-condition}, from which it suffices to solve \begin{align}\label{eq:mixed-eq} x_l^p = \Big(\prod_{s=1}^{p} x_{s}\Big)u_0 = d^n, \ \text{for all } l, \end{align} and show the number of solutions is independent of $n$ when $p\nmid n$. By \cref{lem:pj-pows} we may suppose that, for some $d_1\in\BN$ with $p\nmid d_1$, either $d= a_1^{p d_1}$ or $d= a_1^{p d_1}a_p^{-d_1}$. Now when $d=a_1^{p d_1}$, by \cref{lem:pj-pows} we must have $x_l\in Q$ for all $l$ and $u_0 \in Q$. Whence in this case the calculation is reduced to establishing the condition of \cref{prop:wreath-condition} in $Q\wr_r\BZ_p$, which follows from \cref{thm:ab-wreath}. So we may suppose that $d=a_1^{p d_1} a_p^{-d_1}$ with $p\nmid d_1$, and that $x_l\not\in Q$ for all $l$. At this point it will be helpful to use additive and vector notation for $Q$; so instead of writing a typical element as $a_1^{n_1}\cdots a_p^{n_p}$, we write it as $(n_1,...,n_p)$. Let $X=\sum_{t=0}^{p-1} B^t$. By \cref{lem:x-1} $X(n_1,...,n_p) = (pn_1,0,...,0,-n_1)$, and so in particular the left action of $X$ depends only on the first coordinate modulo $p$. Note that for all $t\in\BN$ $B^t(n_1,...,n_p)=(n_1,n_2',...,n_p')$ for some $n_2',...,n_p'$ which are $\BZ_p$-linear combinations of the $n_1,...,n_p$ (taken modulo $p$). Now since $d=(pd_1,0,...,0,-d_1)$, any solution $(x_0,...,x_{p-1})$ to \cref{eq:mixed-eq} must have $x_i = q_i b^{-t_i}$ for some $p\nmid t_i$ and $q_i\in Q$. Moreover, writing $u_0 = b^s v_0$ with $v_0\in Q$, by \cref{lem:order-irrev} we see that \cref{eq:mixed-eq} can be equivalently rewritten as the three equations \begin{align} X(q_i) &= d,\qquad 0\leq i< p;\label{eq:mixed-eq-1}\\ d-v_0&= \sum_{l=0}^{p-1} B^{\sum_{i=0}^{l-1} t_i} q_l;\label{eq:mixed-eq-2}\\ \sum_{i=0}^{p-1} t_i &\equiv s \bmod p.\label{eq:mixed-eq-3} \end{align} For fixed $1\leq t_0,...,t_{p-1}<p$ satisfying \cref{eq:mixed-eq-3} we then consider the group homomorphism $F\colon Q^p\to Q^{p+1}$ defined by \[(y_0,...,y_{p-1})\mapsto (X(y_0),...,X(y_{p-1}),\sum_{l=0}^{p-1} B^{\sum_{i=0}^{l-1} t_i} y_l).\] If a solution exists to \cref{eq:mixed-eq} for some $n$ with $p\nmid n$ and with the given $t_i$, then to establish the desired bijection it suffices to show that $(d,...,d)$ is in the image of $F$. By bijectivity of the powers of $B$, given $y_0,...,y_{p-1}\in Q$ we can (uniquely) define $r_0,...,r_{p-1}\in Q$ by $r_l = B^{\sum_{i=0}^{l-1} t_i} y_l$ for $0\leq l <p$, and conversely we can obtain the $y_l$ (uniquely) from given $r_l$.
Note that under these relations $X(y_i)=X(r_i)$ for all $i$. Moreover, we have \[F(y_0,...,y_{p-1}) = (X(r_0),...,X(r_{p-1}),\sum_i r_i).\] So taking $r_0 = (d_1,0,...,0,-d_1)$ and $r_i = (d_1,0,...,0,0)$ for $1\leq i< p$ we get $(X(r_0),...,X(r_{p-1}),\sum_i r_i) = (d,...,d)$, as desired. So we may apply \cref{prop:wreath-condition} as desired to conclude that $G$ is $FSZ_p$, which completes the proof. \end{proof} The proof of \cref{thm:fsz-not-plus} yields a reasonably general method for investigating the $FSZ_p$ property of $(A\rtimes\BZ_p)\wr_r\BZ_p$ when $A$ is an (elementary) abelian $p$-group and $A\rtimes\BZ_p$ is $FSZ_p$. Note that $A\rtimes\BZ_p$ can be non-$FSZ_p$ for some choices of $A$ and the action on it \citep{K16:p-examples}. We demonstrate this with the following result. \begin{thm}\label{thm:Sp3} Let $p$ be a prime. Then $(\BZ_p\wr_r\BZ_p)\wr_r\BZ_p$ is $FSZ$. In particular, every Sylow $p$-subgroup of $S_{p^3}$ is $FSZ$. \end{thm} \begin{proof} The structure of the Sylow subgroups of symmetric groups is well-known; see, for example, \citep{Rot99}. In the case of $S_{p^3}$ the Sylow $p$-subgroup is isomorphic to $(\BZ_p\wr_r\BZ_p)\wr_r \BZ_p$. So let $P=(\BZ_p\wr_r\BZ_p)\wr_r\BZ_p$. Since $P$ has exponent $p^3$ and \cref{cor:symm-syl-high} implies $P$ is $FSZ_{p^2}$, we need only show that $P$ is $FSZ_p$. In the notation of \cref{prop:wreath-condition}, we have $D=\BZ_p\wr_r\BZ_p = \BZ_p^p\rtimes\BZ_p$. We let $Q=\BZ_p^p$, written in vector notation (as a $\BZ_p$ vector space, in particular). Let $b$ be an element of $D$ which cyclically permutes the factors of $Q$, and let $B$ be the matrix, acting on $Q$ from the left, which describes right conjugation by $b$. We have a group endomorphism $J\colon Q\to Q$ given by $J(n_1,...,n_p)=(\sum_i n_i,...,\sum_i n_i)$. By \citep[Example 4.4]{IMM} this endomorphism describes $p$-th powers of elements $x\in Q\rtimes \BZ_p=\BZ_p\wr_r\BZ_p$ with $x\not\in Q$; all other elements have order dividing $p$. Now consider \cref{eq:wreath} for $t=1$. Then for any solutions to exist for $d\neq 1$ we must have $x_l\not\in Q$ for all $l$. Moreover, by the properties of $J$ we must have that $d=(c,...,c)\in Q$ for some $c\in \BZ_p$. So write $x_l = (n_{l,1},...,n_{l,p})b^{-t_l}$ for each $l$, with $p\nmid t_l$. Let $u_0=b^s q_0$, with $q_0\in Q$. We fix $t_1,...,t_p$ with $\sum_i t_i \equiv s \bmod p$ arbitrarily. As in the proof of the preceding theorem, we are naturally led to consider the group homomorphism $F\colon Q^p\to Q^{p+1}$ defined by \[(q_1,...,q_p)\mapsto (J(q_1),...,J(q_p), \sum_{l=1}^{p} B^{\sum_{i=1}^{l-1} t_i}q_l).\] We can exploit the bijectivity of the powers of $B$, exactly as in the preceding proof, to find $r_1,...,r_p$ for given $q_1,...,q_p$ (or conversely) such that \[ F(q_1,...,q_p)= (J(r_1),...,J(r_p),\sum_i r_i).\] As in the previous proof, the desired bijection will follow if we show that there exists $(r_1,...,r_p)\in Q^p$ such that $(J(r_1),...,J(r_p),\sum_i r_i)=(d,...,d)$. Since we have noted that $d= (c,...,c)$, writing $r_i = (m_{i,1},...,m_{i,p})$ the necessary and sufficient conditions for $(d,...,d)$ to be in the image of $F$ are \begin{align*} \sum_k m_{i,k} = c , \ \mbox{ for all } i;\\ \sum_k m_{k,i} = c, \ \mbox{ for all } i. \end{align*} So defining $m_{i,j} = c\, \delta_{i,j}$, where $\delta_{i,j}$ is the Kronecker delta, we see that $(d,...,d)$ is in the image, as desired. Thus $P$ is $FSZ_p$ by \cref{prop:wreath-condition}, and this completes the proof. \end{proof}
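As a sanity check for small cases, the groups in question can be constructed and examined directly in GAP, where \verb"WreathProduct" applied to a regular permutation representation of $\BZ_p$ realizes the regular wreath product. A minimal sketch for $p=2$ (assuming the \verb"FSZtest" function of \citet{PS16} has been loaded):
\begin{lstlisting}
p := 2;;
C := CyclicGroup(IsPermGroup, p);;  # Z_p acting regularly on p points
W := WreathProduct(WreathProduct(C, C), C);;
Size(W) = p^(p^2 + p + 1);  # true; |W| = 2^7 = 128, a Sylow 2-subgroup of S_8
FSZtest(W);
\end{lstlisting}
For odd primes the group orders grow very quickly, so such direct verifications rapidly become impractical, which is what motivates the general arguments above.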
\section{Introduction} Modern theories of particle physics generally rely on two types of symmetries. One class of them is global space-time symmetries like Lorentz invariance. The other class is gauge symmetries and all fundamental interactions of particle physics are based on gauge theories. Gravity is both similar and different. On one hand, it gauges local space-time symmetries like translations and rotations and it is therefore based again on the gauge principle. On the other, the nature of the gauge symmetry is rather different from gauge theories. It has long been claimed that both Lorentz invariance and gauge symmetry are IR ``accidents", \cite{nielsen}. The claims on Lorentz invariance have been analyzed on different occasions and in different contexts, especially recently, \cite{CG}-\cite{lorentz}. An emergent IR Lorentz invariance remains a possibility, although it seems ``fine tuning" is needed. On the other hand, a gauge symmetry that emerges in the IR is a radical departure from the paradigm that developed in the last 50 years, crowning the success of the Standard Model (SM), where the gauge principle emerged as the fundamental principle. It is believed by most in the field that the gauge principle is central to particle interactions, {\em at all scales}. However, slowly and sporadically, there were attempts to investigate alternative routes, in which gauge invariance appears or emerges in the IR. In the condensed matter literature the notion of emergence is less of a taboo, and the issue of emergent gauge invariance has been entertained on various occasions. An early appearance of emergent gauge fields is in Resonating Valence Bond (RVB) models of antiferromagnetism, \cite{RVB}. It has been advocated in lattice models that due to string-net condensation emergent gauge invariance and photons can appear, \cite{wen}. This has also been generalized to emergent gravity, \cite{wen2}. The notion of emergence from strings is also interesting. Although it is rarely viewed in this light, (fundamental) strings provide emergent gauge bosons and gravitons. The input in NS-R string theory is the two-dimensional theory of scalars and fermions on the world-volume. The output is space-time gauge bosons and gravitons, among others. In condensed matter another instance of emergence of gauge invariance appeared in the context of fractionalization phenomena and the emergence of fractionalized quasiparticles, \cite{frac}. This culminated in the so-called ``deconfined quantum critical points", \cite{Senthil}. Such points are second order quantum phase transitions where the critical theory contains an emergent gauge field and ``deconfined degrees of freedom" associated with the fractionalisation of order parameters. Motivated by the AdS/CFT correspondence, novel condensed matter systems were designed and studied, portraying the emergence phenomenon, \cite{Lee1},\cite{Lee2}. In the high energy community, efforts for describing photons as emergent particles were initially motivated by ideas on superconductivity, transferred to the particle physics realm by Nambu and Jona-Lasinio, \cite{NJL}. Following the same lead, it was argued that the four-fermion Heisenberg theory with current-current interactions leads to an emergent photon, \cite{Bjorken1,Bjorken2,BB,BZ}. This theory however, being four-dimensional and non-renormalizable, did not allow the study to go far.
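The mechanism at work in these constructions can be sketched schematically (a heuristic, classical rewriting; we use a mostly-minus metric for illustration). The current-current theory
\begin{equation}
{\cal L} = \bar\psi\, i\gamma^\mu\partial_\mu\psi - G\, J_\mu J^\mu\,,\qquad J_\mu=\bar\psi\gamma_\mu\psi\,,
\end{equation}
is classically equivalent to a theory with an auxiliary vector field $A_\mu$,
\begin{equation}
{\cal L}' = \bar\psi\, i\gamma^\mu\partial_\mu\psi + A_\mu J^\mu + {1\over 4G}\, A_\mu A^\mu\,,
\end{equation}
since the algebraic equation of motion sets $A_\mu=-2G\,J_\mu$, and substituting it back reproduces ${\cal L}$. The question of emergence is then whether quantum effects generate a transverse kinetic term for $A_\mu$ with a propagating (massless) pole, a question that the four-dimensional, non-renormalizable setting did not allow to be settled.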
The same phenomenon has, however, been studied in two dimensions in the context of the $CP^{N-1}$ $\sigma$-model, \cite{cpn,wcpn,Polyakov}. In that case, the emergence of a (propagating) vector particle was established beyond doubt. Another form of emergence is at work in the analysis by Seiberg of the phases of N=1 superQCD, \cite{seiberg}. In \cite{Komar} the analogues of the $\rho$-mesons of the strongly coupled IR theory were interpreted as the dual, weakly coupled magnetic gauge bosons of Seiberg, providing another interesting example of (non-abelian) emergence. That said, there already existed various models in the context of high energy physics, such as the $\mathbb{CP}^N$ model, in which a global symmetry becomes effectively gauged at low energies, and which have a close affinity to the models studied traditionally in condensed matter physics. The AdS/CFT correspondence, \cite{malda}, and its generalizations (holography), have given an extra boost, and a ``new dimension", to the concept of emergence. In holographic duals, what is a (gauge-invariant) bound-state of generalized gluons becomes, in the gravitational description, a graviton, a photon, or any other bulk particle. Gauge invariance and diffeomorphism invariance are emergent (similarly to how this happens in string theory). The emergence of gauge bosons and related constraints was studied in \cite{Harlow}, where the connection with the \emph{weak gravity conjecture} was also highlighted. In \cite{SMGRAV} a program was outlined, motivated by the holographic correspondence, on how to obtain gravity and other long range interactions from ``hidden" holographic theories, coupled to the Standard Model (SM) weakly in the IR. The emergence of axions was studied in detail in \cite{axion} and new forms of axionic dynamics have been uncovered. {The effective action for the emergent vector has many similarities with the actions studied in \cite{KT}. The context studied there is of vector bosons (and gravitons) that are Goldstone bosons of broken Lorentz invariance.} In composite approaches to gauge interactions, the major difficulties lie in producing a theory with a sensible strong-coupling structure and appropriate long-distance dynamics. The interest in making composite gravitons (and gauge bosons) has also motivated the well-known Weinberg-Witten (WW) theorem, \cite{WW}, that provides strong constraints on composite gravitons (and composite gauge bosons). Under a set of assumptions that include Lorentz invariance, well-defined particle states, a conserved covariant energy-momentum tensor and Lorentz-covariant, gauge-invariant conserved global currents, the theorem excludes $\bullet$ massless particles of spin $s>{1\over 2}$ that can couple to a conserved current, and $\bullet$ massless particles with spin $s>1$ that can couple to the energy-momentum tensor. Its assumptions however allow for several loop-holes that help evading the theorem in known cases. In particular, it does not exclude Yang-Mills theory, as in that case the global conserved currents are not gauge-invariant. It also does not exclude a ``fundamental" graviton coupled to matter, as the Lorentz-covariant energy-momentum tensor is not conserved (but only covariantly conserved).
Another counterexample is presented by the massless $\rho$-mesons at the lower end of the conformal window of N=1 sQCD\footnote{{There is a long history of attempts to describe (composite) $\rho$-mesons in terms of a low-energy gauge theory, coming under the name of ``hidden symmetry", \cite{hidden}. Its proper realization was explained in the context of the AdS/CFT correspondence, \cite{hh}. The situation in sQCD however is distinct, as it has further ingredients that allow the $\rho$-mesons to become light/massless and weakly interacting.}}, \cite{Komar}. They evade the WW theorem because of the emergent gauge invariance associated with them. In any relativistic quantum field theory with a global U(1) symmetry, there is at least one state with the quantum numbers of a U(1) gauge boson. It is the state generated out of the vacuum by the action of the (conserved) current. In weakly-coupled theories this state is unique, while in strongly coupled theories there may be several such states generated by the action of the current. If a theory possesses a large-N, strong coupling limit, then the width of such states vanishes and there is an infinite number (or a continuum) of them. In weakly-coupled theories a state generated by the current is a multi-particle state and therefore its effective interactions are expected to be non-local. In the opposite case, where the interactions are strong, we expect such a state to be tightly bound. If its ``size" is $L$, then we might hope that at distances $\gg L$ the effective interactions of such a state may generate a vector interaction, plausibly a gauge theory. In particular, in a theory with an infinite coupling, we expect to have an emergent photon with a point-like structure. If the theory is not conformal, and has a gap, then we expect a discrete spectrum of such states, associated with the (generically complex) poles of the two-point function of the U(1) current. In a generic strongly-coupled theory, most such states will be unstable. In YM, to pick a concrete example, such states are generated by a vector composite operator (that is not a conserved current) and are in the trajectory of the $1^{+-}$ glueball. If we consider QCD instead of YM, then we have conserved currents and novel massive vectors associated with them. If instead the strongly-coupled theory is conformal, then the spectrum of vector bound-states forms a continuum. In the context of holography, the masslessness of the higher-dimensional gauge bosons is explained by the conservation of the associated global current of the dual QFT. Once this conservation is violated, either explicitly by a source boundary condition, or spontaneously (a vev boundary condition), the gauge field in the bulk obtains a mass, and the associated current an anomalous dimension. Therefore the ``masslessness" of the higher-dimensional vector is the avatar of the global invariance of the dual QFT and the conservation of the associated boundary current. This does not however imply that the four-dimensional (emergent) vector is massless. Indeed, in N=4 sYM defined on Minkowski space we obtain a continuum of spin one states starting at zero mass\footnote{This is the reason this theory evades the WW theorem, which assumes among other things an isolated bound-state. The WW theorem involves a subtle limit to define the helicity amplitudes that determine the couplings of massless states to the stress tensor or a local current. This limiting procedure is not valid in theories where the states form a continuum.}.
If instead we define the theory on $R\times S^3$, then the vector spectrum is discrete, but the theory has lost Lorentz invariance. We therefore learn from the holographic duality that, $\bullet$ Strong coupling in QFT makes emergent/composite vectors tightly-bound states. $\bullet$ Large N makes emergent/composite vectors weakly interacting. Both properties are essential in obtaining a semiclassical and local theory of (composite) vector interactions. The AdS/CFT intuition therefore suggests that semiclassical effective vector interactions with composite vectors are expected to emerge from holographic quantum field theories, {where the semiclassical nature of the interaction is related to the large-N limit}. If we are to describe ``dark photons" as emergent vectors coupled to the SM, then we must seek their emergence in a hidden holographic theory. The simplest way\footnote{There may be more exotic variations on this theme. The SM could be part of the semiclassical holographic theory, and its elementary fields could be composites of more elementary fields. Or it could be that parts of the SM are composite and others elementary. Crude holographic translations of these possibilities have been considered in the past in the context of the RS realizations of the SM, \cite{csaki}.} is to postulate that, \cite{SMGRAV}, $\bullet$ The whole of physics is described by a four-dimensional QFT. $\bullet$ The total UV QFT contains a holographic part that is distinct from the (UV limit of the) SM. We shall call it the ``hidden" (holographic) theory. $\bullet$ This holographic part is coupled in the UV to the standard model via a messenger sector. It consists of fields transforming as bi-fundamentals under the gauge group of the ``hidden" theory and the gauge group of the SM. We shall call these fields the messengers. Their mass $M$ shall be assumed to be much larger than any of the SM scales. $\bullet$ At energies $\ll M$ we may integrate out the messengers and we end up with an effective theory consisting of the SM coupled to the hidden holographic theory via irrelevant interactions\footnote{There is one possible exception to this statement and it is connected to the gauge hierarchy problem.}. $\bullet$ Although all operators of the hidden theory are coupled weakly at low energies to the SM, the SM quantum corrections generate ${\cal O}(M)$ masses for all of them, with a few notable exceptions that are protected by symmetries: the graviton, the universal axion, \cite{axion}, and exactly conserved global currents, \cite{u1}. Some of the relevant issues of this rather general setup have been discussed in \cite{SMGRAV}. In this paper we undertake a closer look at the emergence of vector interactions. The case of emergent gravity will be treated in a companion paper, \cite{grav}. \subsection{Results and Outlook} Our results are as follows. \begin{itemize} \item We define and establish the dynamics of emergent/composite vectors in a given QFT using an appropriately defined effective action for an abelian global U(1) current that is conserved. We show that the Schwinger functional can be defined so that it is locally gauge invariant. The effective action involves a dynamical vector and has no gauge invariance. It summarizes the propagation and interactions of composites generated by the global U(1) current. \item In the presence of charged sources the emergent U(1) vector is effectively massive, and in gapped theories one can expand the effective action in a derivative expansion.
\item In the absence of charged sources, the interactions of the emergent vector are non-local. They can be described however using an auxiliary antisymmetric two-tensor that turns out to be massive. {The reason for the non-locality is not the effects of massless modes. Rather, it is the effect of an emergent gauge invariance, which produces zero modes and renders the two-point function of the global current non-invertible. To state it simply, in the absence of charged sources, the U(1) current is exactly conserved and its two-point function non-invertible.} \item When two independent QFTs are coupled via a current-current interaction, a linearized analysis indicates that the presence of the coupling induces a vector-mediated interaction of currents in each theory. From the point of view of one of the theories (the ``visible" theory), this interaction is mediated by an emergent vector field and its propagator is the inverse of the current-current correlator of the ``hidden" theory. This interaction is always repulsive among identical sources if the theories are unitary. Isolated poles in the two-point function of the current in the hidden theory amount to massive vector exchange, while continuous spectra in the correlator provide other non-standard types of behavior for the emergent vector interaction. \item The linearized analysis can be complemented with a fully non-linear formulation that describes the dynamics of the emergent vector field in the visible theory. There are several distinct cases of coupling the two theories that give rise to distinct symmetries and dynamics, which we enumerate below. \begin{enumerate} \item A hidden theory with a global U(1) symmetry and a current-current coupling to a SM global (non-anomalous) symmetry. An example could be $B-L$ in the SM. In that case, the result is that the hidden global symmetry will generate a (generically massive) vector boson that will couple to the $B-L$ charges of the SM. In such a case, the combined theory still has two independent U(1) symmetries, but only one of them is visible in the SM (the hidden symmetry is not visible from the point of view of the SM). \item A hidden theory with a global U(1) symmetry and a coupling between a charged operator in the hidden theory and a charged operator of the SM under a SM global (non-anomalous) symmetry. Such a coupling breaks the two U(1)'s to a single diagonal U(1). This leftover U(1) couples to the emergent vector boson. \item In the two cases above, the U(1) global symmetry of the SM may also be an anomalous global symmetry, like baryon or lepton number. Although some of the properties of the new vector interaction remain similar to what was described above, there are new features that are related to the anomaly of the global SM symmetry. In such a case, we expect to have similarities with the anomalous U(1) vector bosons of string theory\footnote{Anomalous U(1) symmetries abound in string theory, \cite{review}. In SM realizations in orientifold vacua, the SM stacks of branes always contain two anomalous U(1) symmetries and generically three, \cite{akt,ADKS}. Their role in the effective theory can be important, as they are almost always the lightest of the non-standard model fields, due to the fact that their masses are effectively one-loop effects, \cite{KA,AKR}. A typical such effective theory is analyzed in detail in \cite{CIK}.}. \item A hidden theory with a global U(1) symmetry and a current-current coupling to the (gauge-invariant) hypercharge current.
In this case both the hypercharge gauge field and the emergent vector couple to the hypercharge current. By a (generically non-local) rotation of the two vector fields, a linear combination will become the new hypercharge gauge field while the other will couple to $|H|^2$, where $H$ is the SM Higgs. \item A hidden theory with a global U(1) symmetry, whose current is $J_{\mu}$, and a coupling with the hypercharge field strength of the SM of the form \begin{equation} S_{int}={1\over m^2}\int d^4 x F^{\mu\nu}(\partial_{\mu}J_{\nu}-\partial_{\nu}J_{\mu}) \end{equation} where the scale $m$ is of the same order as the messenger mass scale $M$. By an integration by parts this interaction is equivalent to the previous case, using the equations of motion for hypercharge. \end{enumerate} In all of the above we have an emergent U(1) vector boson that plays the role of a dark photon, and in this context the single most dangerous coupling to the standard model is the leading dark photon portal: a kinetic mixing with the hypercharge, \cite{ship}. We can make estimates of such dangerous couplings using (weak coupling) field theory dynamics, but it is also interesting to make such estimates using dual string theory information at strong coupling. Such a study is underway, \cite{u1}. \item The case where the hidden theory is a holographic theory (large-N, strong coupling) holds special interest. In such a case, the hidden theory is described by a bulk gravitational theory, which is coupled to the visible theory at a special bulk scale. The gravitational picture is that of a brane embedded in a holographic bulk, as in earlier brane-world setups. The global U(1) current in the (hidden) holographic theory is now dual to a bulk gauge field that couples to charged states on the brane. A brane-localized kinetic term is induced on the brane due to SM quantum corrections. In the ultimate IR, the effective vector coupling $g_{IR}$ and mass $m_{IR}$ for such an emergent photon are given by \begin{equation} {1\over g_{IR}^2}\simeq {d_2\over d_0^2} {1\over mg_5^2}+{1\over g_4^2} \;\;\;,\;\;\; {m_{IR}^2\over g_{IR}^2}\simeq {m^2\over d_0(mg_5^2)}+{m_0^2\over g_4^2} \label{i26}\end{equation} In the formulae above, the first contribution on the right-hand side of each relation is due to the bulk theory and the second contribution is due to the SM. In particular, $m$ is the dynamical scale of the hidden theory, $g_5$ the bulk gauge coupling constant, and $g_4$ is the dimensionless induced gauge coupling constant on the brane, due to the SM quantum corrections. $m_0$ is a possible mass generated on the brane, if spontaneous symmetry breaking occurs. Finally, the coefficients $d_0,d_2$ are dimensionless coefficients appearing in the bulk-to-bulk propagator of the gauge boson. \item In the holographic context, the coupling controlling the interactions of the emergent vector is naturally weak in the large N limit. The same applies to the bulk contributions to its mass, similarly to what happens to gravitons in a similar setting, \cite{self}. \item Unlike fundamental dark photons, the emergent photons described here, especially in the holographic context, have propagators that at intermediate energies behave differently from those of fundamental photons, and can therefore have different phenomenology and different constraints from standard elementary dark photons.
The same was found recently for emergent axions in \cite{axion}. \item The most dangerous coupling of emergent vectors to the SM is via the hypercharge portal, \cite{ship}, which is a kinetic mixing with the field strength of the hypercharge. Such a coupling is severely constrained by data. This coupling is naturally suppressed in our context (large N), but a detailed study of this issue will appear in the near future, \cite{u1}. It turns out that when the hidden theory is at strong coupling, there is an additional suppression of the hypercharge mixing term. \end{itemize} The structure of this paper is as follows. In section \ref{Generalsetup} we describe our general setup concerning the required properties of the hidden theory and its coupling to the SM. In section \ref{effe} we describe the definition and properties of the effective action for a global U(1) current in a QFT. In section \ref{JJinteractionlinearised} we describe the coupling of a hidden and a visible theory, and the induced vector interaction in the visible theory in the linearized approximation. In section \ref{couplednonlinear} the non-linear theory for the emergent photon is formulated. In section \ref{Hologaxion} the holographic theory of emergence is treated. In section \ref{singleNL} we analyze the non-local effective action for the emergent vector in the absence of sources. Finally, section \ref{Dis} contains a discussion of the results. In appendix \ref{Examples} we present the current two-point function in free bosonic and fermionic theories. Appendix \ref{Spectralrepresentation} contains a review of the K\"all\'en-Lehmann representation of the current two-point function. In appendix \ref{effectivestaticpotential} we survey the long-distance behavior of the static potential mediated by the emergent vector. In appendix \ref{multiplefields} we analyze the effective action of the current vev in the presence of multiple charged sources. In appendix \ref{high} we analyze the structure of the higher derivative terms in the effective action and the issue of emergent gauge invariance. In appendix \ref{complete} we derive the full effective action by Legendre-transforming also the charged sources. Finally, in appendix \ref{gau12} we derive the vector bulk propagators relevant for the holographic case. \section{The general setup}\label{Generalsetup} { The starting point of our analysis is described in \cite{SMGRAV}, namely a UV-complete theory that in the IR splits into two weakly interacting IR sectors\footnote{There are several similarities between these models and what have been called ``hidden valley" models in the literature, \cite{stras}.}. One of these sectors will be identified with the ``visible" theory, which for all practical purposes is the Standard Model and some of its direct extensions. The other, which we call the ``hidden sector" and denote by $\h{QFT}$, is a theory that in the IR is weakly interacting with the SM. The complete theory is defined on a flat (non-dynamical) space-time background $g_{\mu\nu}\equiv \eta_{\mu\nu}$. From the IR point of view, the two IR sectors are connected by irrelevant interactions. These interactions have a characteristic scale $M$, that is assumed to be well above all scales of the SM and the IR $\h{QFT}$. A way to think about the origin of such a coupling is to consider two distinct gauge theories that are coupled together with a set of bi-fundamental fields (we call them the messengers) with mass $M$, in such a way that the total theory is UV-complete.
This amounts to the fact that the UV limit of the total theory is well defined and is given by a four-dimensional CFT. The messenger mass $M$ controls the strength of the interactions between the two IR sectors, the SM and the hidden theory, $\h{QFT}$. At energy scales much smaller than the messenger mass $M$, the messengers can be integrated out, leaving the hidden $\h{QFT}$ interacting with the visible one via a series of non-renormalizable interactions. In this paper, we focus in particular on studying the effective induced interactions that are related to global $U(1)$ symmetries. The study of other types of symmetries is undertaken in \cite{grav,axion} and results in effective theories of emergent gravity and axions. We now give a few more details and review the precise setup originally described in~\cite{SMGRAV,grav}. \subsection{The assumptions} Having described the idea, we now make explicit our assumptions on the class of theories we consider. } Our starting point is a local relativistic quantum field theory. We assume that this quantum field theory has the following features: \begin{itemize} \item[$(a)$] It possesses a large scale $M$ and all the other characteristic mass scales $m_i \ll M$. \item[$(b)$] At energies $E\gg M$ the dynamics is described by a well-defined ultraviolet theory. For example, this could be a UV fixed point described by a four-dimensional conformal field theory.\footnote{We could also envisage a more exotic UV behavior involving higher-dimensional QFTs or some form of string theory.} \item[$(c)$] At energies $E\ll M$ there is an effective description of the low energy dynamics in terms of two separate sets of distinct quantum field theories communicating with each other via irrelevant interactions. We shall call the first quantum field theory the {\it visible} QFT and shall denote all quantities associated with that theory with normal font notation. We call the second quantum field theory the {\it hidden} $\h{QFT}$ and shall denote all its quantities with a hat notation. Schematically, we have the following low energy description in terms of an effective action \begin{equation} \label{setupaa} S_{IR} = S_{visible}(\Phi) + S_{hidden}(\h{\Phi}) + S_{int}(\Phi,\h{\Phi}) ~, \end{equation} where $\Phi$ are collectively the fields of the visible QFT and $\h{\Phi}$ the fields of the hidden QFT. The interaction term $S_{int}$ can be formally described by a sum of irrelevant interactions of increasing scaling dimension \begin{equation} \label{setupab} S_{int} = \sum_i \int d^4 x\, \lambda_i \, \vis{O}_i(x) \h{O}_i (x) ~, \end{equation} where $\vis{O}_i$ are general operators of the visible QFT and $\h{O}_i$ are general operators of the hidden QFT. $S_{int}$ arises by integrating out massive messenger degrees of freedom of the UV QFT with characteristic mass scale $M$. This scale then defines a natural UV cutoff of the effective description. It is hence a physical scale determining the point in energy where the theory splits into two sectors, weakly interacting with each other at low energies. \item[$(d)$] If we further assume that the hidden $\h{QFT}$ is a theory with mass gap $m$, at energies $E$ in the range $m\ll E \ll M$ we can employ the description \eqref{setupaa} to describe a general process involving both visible and hidden degrees of freedom. For energies $E\ll m \ll M$, on the other hand, it is more natural to integrate out the hidden degrees of freedom and obtain an effective field theory in terms of the visible degrees of freedom only.
\end{itemize} Our main focus then will be the low energy $(E\ll M)$ behaviour of observables {\it defined exclusively in terms of elementary or composite fields in the visible QFT}, relevant for observers who have access only to visible QFT fields. In addition, we focus on an effective description of global $U(1)$ symmetries and the possibility that these symmetries appear as gauged symmetries in the low energy effective description. More explicitly, we consider the generating functional of correlation functions (Schwinger functional) for the visible QFT defined as \begin{equation} \label{setupac} e^{- W({\cal J})} = \int [D\Phi] [D\h{\Phi}] \, e^{-S_{visible}(\Phi,{\cal J}) - S_{hidden}(\h{\Phi}) - S_{int}(\vis{O}_i, \h{O}_i)} ~. \end{equation} We use a Euclidean signature convention (that can be rotated to Lorentzian), where ${\cal J}$ is collective notation that denotes the addition of arbitrary sources in the visible QFT. This path integral is a Wilsonian effective action below the UV cutoff scale $M$. By integrating out the hidden sector fields $\h{\Phi}$ we obtain \begin{equation} \label{setupad} e^{- W({\cal J})} = \int [D\Phi] \, e^{-S_{visible}(\Phi,{\cal J}) - {\cal W} (\vis{O}_i) } ~, \end{equation} where ${\cal W}$ is the generating functional in the hidden QFT, \begin{equation} e^{-{\cal W}(\h{J})}\equiv \int [D\h{\Phi}] \, e^{- S_{hidden}(\h{\Phi}) - \int \h{O}\h{J}} ~. \end{equation} We first observe that, from the point of view of the hidden QFT, the visible operators $\vis{O}_i$ appearing in the interaction term $S_{int}$ in \eqref{setupaa} and \eqref{setupab} are dynamical sources. {In \eqref{setupad}, an observer in the visible sector registers a formal series of increasingly irrelevant interactions. We would like to understand when it is possible to reformulate these interactions by integrating-in a set of (semi-)classical fields. We focus on the case of $U(1)$ symmetries and we shall try to understand under which conditions the effective action for these fields is a sensible $U(1)$ vector theory.} \section{The effective action for a global conserved current\label{effe}} In this section we consider a simpler framework for the emergence of gauge invariance. We consider a single QFT with an exact U(1) global symmetry. Our analysis is general, but we also present an explicit example by expanding the Schwinger functional in an IR derivative expansion in section \ref{Quadraticexample}. Other global internal symmetries can be treated in a similar fashion. Anomalies in global symmetries can also be treated, but we shall not do this here. Some elements in this direction were given in \cite{lorentz}. In appendix~\ref{Effectiveactionexamples} we present more general effective actions, for example in the case of multiple fields or in the presence of higher derivative corrections. \subsection{The effective action for a U(1) symmetric theory}\label{effectiveactionsingletheory} Consider a theory with a global U(1) symmetry, and an associated conserved current, $J_{\mu}$. We define and study the Schwinger source functional for the current as well as for charged operators in that theory. To simplify matters, we include a single complex scalar operator $O(x)$, charged with charge one under the global symmetry current $J_{\mu}$, as this is enough to illustrate the relevant issues.
We construct the extended source functional by adding the appropriate sources to the action of the theory \begin{equation} S(\phi,A,\Phi)=S(\phi)+\int d^4x\left[ J^{\mu}(x)A_{\mu}(x)+\Phi(x) O^*(x)+\Phi^*(x) O(x)\right]+\cdots \label{A18p}\end{equation} where $\phi$ denotes collectively the quantum fields of the theory. The Schwinger source functional is then defined as \begin{equation} e^{-W(A,\Phi)}\equiv \int {\cal D}\phi~e^{-S(\phi,A,\Phi)} \label{A19p}\end{equation} It is well known that the Schwinger functional of a source gauge field $A_{\mu}$ coupled to a conserved global symmetry current $J_{\mu}$ has a local gauge invariance if defined properly (see \cite{lorentz} for example). This remains true if the global symmetry has the usual triangle anomalies, \cite{lorentz}. The ellipsis in the formula (\ref{A18p}) contains possible terms that have to be added to restore local gauge invariance. The functional $W(A,\Phi)$ is therefore locally gauge-invariant under the U(1) gauge transformations \begin{equation} A_{\mu}\to A_{\mu}+\partial_{\mu}\epsilon(x) \;\;\;,\;\;\; \Phi\to \Phi~e^{-i\epsilon(x)} \label{A1}\end{equation} This is equivalent to the standard Ward identity \begin{equation} \partial_{\mu}{\delta W\over \delta A_{\mu}}+i\left(\Phi{\delta W\over \delta \Phi}-\Phi^*{\delta W\over \delta \Phi^*}\right)=0 \label{A20p}\end{equation} The gauge-invariant completion of the Schwinger functional is not unique. Like in the case of translational symmetries, the conserved current can be modified by adding topological\footnote{By ``topological" we mean currents that are identically conserved, independent of the dynamics of the theory. Standard topological currents that are associated with topological invariants, as for example the Chern-Simons current, are special cases of our definition here. For example, for any antisymmetric local gauge-invariant operator $O_{\mu\nu}$, $J_{\nu}\equiv \partial^{\mu}O_{\mu\nu}$ is conserved identically. This example is of course the tip of the iceberg. For example, the current $J^{\mu}={\delta W\over \delta A_{\mu}}$ with $W(A)$ an arbitrary local gauge-invariant functional, without minimally charged sources, is identically conserved.} currents (currents that are conserved identically, without using the equations of motion). This is equivalent to adding local gauge-invariant terms to $W$. As an example, the following addition \begin{equation} W'=W+{1\over 4}\int d^4x ~Z(|\Phi|^2)F_{\mu\nu}F^{\mu\nu} \label{a2}\end{equation} modifies the current by \begin{equation} J'_{\mu}\equiv {\delta W'\over \delta A^{\mu}}=J_{\mu}-\delta J_{\mu}\;\;\;,\;\;\; \delta J_{\mu}= \partial^{\nu}\left[Z(|\Phi|^2)F_{\nu\mu}\right] \label{a3}\end{equation} As is obvious, $\delta J_{\mu}$ is identically conserved, \begin{equation} \partial^{\mu}\delta J_{\mu}=0\;, \end{equation} without using the equations of motion. To fix this ambiguity we define the gauge-invariant extension of the Schwinger functional by minimal substitution, which in the U(1) case is a unique prescription, as covariant derivatives commute. Generically, defining the U(1) current in the presence of arbitrary gauge fields in this way fixes the enormous scheme ambiguity of the Schwinger functional. It should also be mentioned that the gauge invariance of the Schwinger functional is explained by the fact that the operator $\partial^{\mu}J_{\mu}$ is zero on-shell, and is therefore a redundant operator of the theory. Perturbing the theory with it has no effect.
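The identical conservation of $\delta J_{\mu}$ in (\ref{a3}) follows purely from the antisymmetry of $F_{\nu\mu}$, with no input from the dynamics. As a quick cross-check, the following minimal sympy sketch (our own illustration, not part of the original derivation) verifies this for arbitrary functions $Z(x)$ and $A_\mu(x)$, using Euclidean indices so that raised and lowered indices coincide:
\begin{verbatim}
import sympy as sp

# Check off-shell conservation of dJ_mu = partial^nu [ Z F_{nu mu} ]
# for arbitrary Z(x) and A_mu(x); no equations of motion are used.
xs = sp.symbols('x0:4', real=True)
A = [sp.Function('A%d' % m)(*xs) for m in range(4)]
Z = sp.Function('Z')(*xs)

F = lambda n, m: sp.diff(A[m], xs[n]) - sp.diff(A[n], xs[m])
dJ = [sum(sp.diff(Z * F(n, m), xs[n]) for n in range(4)) for m in range(4)]
div = sum(sp.diff(dJ[m], xs[m]) for m in range(4))
print(sp.simplify(div))   # prints 0: conservation holds identically
\end{verbatim}
The same antisymmetry argument shows that any current of the form $\partial^{\nu}O_{\nu\mu}$ with antisymmetric $O_{\nu\mu}$ is conserved off-shell, which is the content of the footnote above.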
It should be stressed that the gauge invariance of the Schwinger functional always reflects an underlying U(1) global symmetry. However, in the presence of arbitrary non-trivial charged sources, the U(1) current obtained from the Schwinger functional is no longer conserved. Its divergence is a linear combination of charged sources and ``equations of motion". This is explicitly visible in the Ward identity (\ref{A20p}). In the special case where the Schwinger functional is extremal with respect to the charged sources ($\Phi$ in our case), the current is conserved. This amounts to setting ${\delta W\over \delta \Phi}={\delta W\over \delta \Phi^*}=0$ in (\ref{A20p}). Extremality of the functional with respect to the charged sources is equivalent to the absence of charged expectation values, as expected. Having fixed this ambiguity, we should remember that, as usual in QFT, $W(A)$ is UV divergent and requires regularization and renormalization. We shall not delve further into this direction, as it has been studied for decades; suffice it to say that this can be done even for chiral non-anomalous symmetries, without breaking the global symmetry in the regularized theory. Even when the global symmetry is broken at finite cutoff, techniques exist to provide a finite renormalized Schwinger functional that is gauge-invariant, with a few extra parameters associated with the standard scheme dependence of renormalization. We now assume that we have in hand a concrete renormalized Schwinger functional with U(1) gauge invariance. We shall define the effective action for the current by Legendre-transforming $W$ with respect to the gauge field only. Since the fields $\Phi$ could also be sources with no dynamical equations of motion, we need an appropriate definition that holds both for dynamical and non-dynamical $\Phi$. The Legendre transform should also respect the original gauge invariance exemplified by (\ref{A1}) and (\ref{A20p}). We now define the Legendre transform with respect to an arbitrary original background $\mathbf{A}_\mu$, which could also be trivial. The Legendre transform is \begin{equation} \Gamma(\tilde V,\Phi,\mathbf{A}_\mu)=\int d^4 x\left[-\tilde V^{\mu}(A_{\mu}-\mathbf{A}_\mu) \right] + W(A,\Phi) \, . \label{B22p}\end{equation} The expectation value of the current is \begin{equation} \langle \tilde V_{\mu} \rangle \equiv {\delta W\over \delta A^{\mu}} \Bigg|_{A_\mu = \mathbf{A}_\mu} \, . \label{B21p}\end{equation} The conservation law (\ref{A20p}) hence becomes \begin{equation} \partial^{\mu} \langle \tilde V_{\mu} \rangle + i\left(\Phi{\delta W\over \delta \Phi}-\Phi^*{\delta W\over \delta \Phi^*}\right) \Bigg|_{A_\mu = \mathbf{A}_\mu} = 0 \label{aa3}\end{equation} With these definitions, we maintain gauge invariance explicitly, since both the background and the gauge field transform in the same way, while $\tilde{V}$ is invariant. The effective action $\Gamma$ is constructed to have the property that it is extremal with respect to $\tilde V_{\mu}$.
To show this, we first obtain by direct functional differentiation \begin{equation} {\delta \Gamma\over \delta \tilde V^{\mu}} = (\mathbf{A}_\mu - A_\mu) -{\delta A^{\nu}\over \delta \tilde V^{\mu}}\tilde V_{\nu}+{\delta W\over \delta \tilde V^{\mu}} \label{a4}\end{equation} and by using the chain rule \begin{equation} {\delta W\over \delta \tilde V^{\mu}}={\delta W\over \delta A^{\nu}}{\delta A^{\nu}\over \delta \tilde V^{\mu}}=\tilde V_{\nu}{\delta A^{\nu}\over \delta \tilde V^{\mu}} \, , \label{a5}\end{equation} the variation with respect to the emergent vector field $\tilde{V}_\mu$ is \begin{equation} \frac{\delta \Gamma (\tilde V,\Phi,\mathbf{A}_\mu)}{\delta \tilde{V}_\mu} = (\mathbf{A}_\mu - A_\mu) \, . \end{equation} We therefore find that it is extremal on the background solution \begin{equation} \frac{\delta \Gamma (\tilde V,\Phi,\mathbf{A}_\mu)}{\delta \tilde{V}_\mu} \Bigg|_{A_\mu = \mathbf{A}_\mu} = 0 \end{equation} In addition, with this definition we find \begin{equation} \frac{\delta \Gamma (\tilde V,\Phi,\mathbf{A}_\mu)}{\delta \Phi} = (\mathbf{A}_\mu - A_\mu) \frac{\delta \tilde{V}_\mu}{\delta \Phi} + \frac{\delta W}{\delta \Phi} \end{equation} So on the background, the result is \begin{equation}\label{B20} \frac{\delta \Gamma (\tilde V,\Phi,\mathbf{A}_\mu)}{\delta \Phi} \Bigg|_{A_\mu = \mathbf{A}_\mu} = \frac{\delta W}{\delta \Phi} \Bigg|_{A_\mu = \mathbf{A}_\mu} = 0 \end{equation} where in the last equality we used the on-shell condition on $W$ from the EOMs of $\Phi$. In case $\Phi$ is a non-dynamical source, equation (\ref{B20}) determines the expectation values of the charged sources. The original conservation law (\ref{aa3}) is now written as \begin{equation} \partial_{\mu}\tilde V^{\mu}+i\left(\Phi{\delta \Gamma \over \delta \Phi}-\Phi^*{\delta \Gamma \over \delta \Phi^*}\right)\Bigg|_{A_\mu = \mathbf{A}_\mu}=0 \end{equation} We therefore find that the two formulations of the problem are equivalent on the background $\mathbf{A}_\mu$, regardless of whether the fields $\Phi$ are dynamical or not. The effective action $\Gamma(V_{\mu})$ describes the complete quantum dynamics of the state generated out of the vacuum by the global U(1) current $J^{\mu}$. The poles of the current two-point function become, by construction, the zeros of the quadratic part of the effective action, $\Gamma(V_{\mu})$, and therefore determine the mass and coupling constant of the emergent vector. \subsection{An explicit example}\label{Quadraticexample} We would now like to investigate the structure of the vector theory described by the action $\Gamma$, and whether it is a gauge theory in disguise. To analyze this in detail, we use a large-distance expansion parametrization of the Schwinger functional, valid in theories with a mass gap\footnote{The mass gap may not be explicit, but generated by appropriate non-trivial sources.}. We assume, beyond the gauge field source $A_{\mu}$, the presence of the source $\Phi$, coupled to an operator with non-trivial U(1) global charge. In the Schwinger functional, $\Phi$ is minimally charged under the gauge field $A_{\mu}$. We expand the Schwinger functional in a long-distance (derivative) expansion, \begin{equation} W(A,\Phi)=\int d^4x\left(W_0(|\Phi|^2)+{W_1(|\Phi|^2)\over 4}F_A^2+{W_2(|\Phi|^2)\over 2}|D\Phi|^2+{\cal O}(\partial^4)\right) \label{A24p}\end{equation} where $F_A=d A$, \begin{equation} D_{\mu}\Phi=(\partial_{\mu}+iA_{\mu})\Phi\;\;\;,\;\;\; D_{\mu}\Phi^*=(\partial_{\mu}-iA_{\mu})\Phi^*\;.
\label{A25p}\end{equation} From \eqref{A24p} we can compute the current as \begin{equation} \tilde V_{\nu}\equiv {\delta W\over \delta A^{\nu}}=-\partial^{\mu}(W_1F^A_{\mu\nu})-i{W_2\over 2}(\Phi^*\partial_{\nu}\Phi-\Phi\partial_{\nu}\Phi^*)+W_2A_{\nu}|\Phi|^2\,+\,{\cal O}(\partial^3) \label{A26p}\end{equation} We now invert the previous expression and compute $A_{\mu}$ as a function of $\tilde V_{\mu}$ in a derivative expansion. In particular, we obtain an expansion of the form $A_\mu = \sum_{i=0}^\infty A_\mu^{(i)}$, where $A_\mu^{(i)}$ contains $i$ derivatives. The result for the first few terms is \begin{equation}\label{A31p} A_\nu^{(0)} = \hat{V}_{\nu} \, , \end{equation} \begin{equation} A_\nu^{(1)} = \frac{i}{2}\partial_{\nu}\log{\Phi\over \Phi^*} \, , \end{equation} \begin{equation} A_\nu^{(2)} = \frac{1}{W_2\,|\Phi|^2}\,\partial^\mu\left(W_1\,F^{\hat V}_{\mu\nu}\right)\;, \end{equation} where \begin{equation} \hat{V}_\mu\,\equiv\,{\tilde V_{\mu}\over W_2|\Phi|^2} \label{a6}\end{equation} and $F^{\hat{V}}_{\mu\nu}=\partial_{\mu} \hat{V}_{\nu}-\partial_{\nu} \hat{V}_{\mu}$. We shall truncate our expansion at two derivatives, since our original functional (\ref{A24p}) was also valid up to two-derivative terms. Note from (\ref{A31p}) and (\ref{a6}) that $\hat{V}_{\mu}$ is gauge-invariant under the original gauge transformation (\ref{A1}). We may rewrite the equation in (\ref{A26p}) on the background as \begin{equation}\label{B21} \frac{1}{W_2\,|\Phi|^2}\partial^\mu\left(W_1\,F^{\hat V}_{\mu\nu}\right)+~\left(\hat V_{\nu} +\frac{i}{2}\partial_{\nu}\log{\Phi\over \Phi^*}\right)+\cdots = {\bf A}_\nu \end{equation} Interestingly, this equation is gauge-invariant under a different gauge transformation as well: \begin{equation} \hat V_{\mu}\to \hat V_{\mu}+\partial_{\mu}\lambda \;\;\;,\;\;\; \Phi\to \Phi~e^{ i\lambda} \, . \end{equation} This is however an artifact of the first orders in the derivative expansion, as shown in appendix \ref{high}. We now proceed to derive explicit expressions for the functionals in a derivative expansion. Using the definition (\ref{B22p}) and the original functional (\ref{A24p}), we compute the effective action $\Gamma$ first in terms of $A_\mu$ \begin{eqnarray} &\Gamma(A_\mu,\Phi,\mathbf{A}_\mu) = \int d^4x\left(W_0(|\Phi|^2)+{W_1(|\Phi|^2)\over 4}F_A^2+{W_2(|\Phi|^2)\over 2}|D\Phi|^2+\cdots\right) + \nonumber \\ &+ \int d^4 x (\mathbf{A}^\nu - A^\nu) \left(-\partial^{\mu}(W_1F^A_{\mu\nu})-i{W_2\over 2}(\Phi^*\partial_{\nu}\Phi-\Phi\partial_{\nu}\Phi^*)+W_2A_{\nu}|\Phi|^2\,+\,\dots \right) \nonumber \\ \end{eqnarray} and by using (\ref{B21}) and keeping terms up to two derivatives we find \begin{eqnarray}\label{B22} \Gamma(\hat{V}_\mu,\Phi,\mathbf{A}_\mu) &=& \int d^4x\left[W_0 - {1\over 4}W_1 (F^{\hat V})^2+{W_2\over 2}|\partial\Phi|^2-{W_2\over 2}|\Phi|^2\left(\hat{V}_{\mu}+\frac{i}{2}\partial_{\mu}\log{\Phi\over \Phi^*}\right)^2 \right] + \nonumber \\ &+& \int d^4x \left[ W_2 |\Phi|^2 \mathbf{A}^\mu \hat{V}_\mu +\cdots\right] \end{eqnarray} or equivalently \begin{eqnarray}\label{B22b} \Gamma(\hat{V}_\mu,\Phi,\mathbf{A}_\mu) &=& \int d^4x\left[W_0 - {1\over 4}W_1 (F^{\hat V})^2+{W_2\over 2}\left(\partial|\Phi| \right)^2-{W_2\over 2}|\Phi|^2 \hat{V}_{\mu} \hat{V}^{\mu}\right]+ \nonumber \\ &+& \int d^4x \left[ W_2 |\Phi|^2 \hat{V}_\mu \left( \mathbf{A}^\mu - \frac{i}{2}\partial_{\mu}\log{\Phi\over \Phi^*} \right) +\cdots\right]\;.
\end{eqnarray} Splitting into radial and phase components, $\Phi = R e^{- i \Theta}$, \begin{eqnarray}\label{B22c} \Gamma(\hat{V}_\mu,R, \Theta,\mathbf{A}_\mu) &=& \int d^4x\left[W_0 - {1\over 4}W_1 (F^{\hat V})^2+{W_2\over 2}\left(\partial R \right)^2-{W_2\over 2} R^2 \hat{V}_{\mu} \hat{V}^{\mu} \right]+ \nonumber \\ &+& \int d^4x \left[ W_2 R^2 \hat{V}_\mu \left( \mathbf{A}^\mu -\partial_\mu \Theta \right) +\cdots\right] \end{eqnarray} Some remarks are in order regarding the EOMs and gauge invariance. In particular, the EOMs are now given by (\ref{B21}), where the gauge field on the right-hand side must be replaced by the background gauge field ${\bf A}_{\nu}$. The equations of motion in (\ref{B21}) (as well as the functionals (\ref{B22}), (\ref{B22b}), (\ref{B22c})) are gauge-invariant with respect to the original gauge symmetry \begin{equation} A_\mu \rightarrow A_\mu + \partial_\mu \epsilon \, , \qquad \Phi \rightarrow \Phi e^{-i \epsilon} \, . \label{g1}\end{equation} We can improve on (\ref{B22c}) by absorbing the non-dynamical term $\partial_\mu \Theta$ into the background by shifting $\mathbf{A}_\mu \rightarrow \mathbf{A}_\mu + \partial_\mu \Theta$ to obtain \begin{eqnarray}\label{B24} \Gamma(\hat{V}_\mu,R, \Theta,\mathbf{A}_\mu) &=& \int d^4x\left[W_0 - {1\over 4}W_1 (F^{\hat V})^2+{W_2\over 2}\left(\partial R \right)^2-{W_2\over 2} R^2 \hat{V}_{\mu} \hat{V}^{\mu} \right] \nonumber \\ &+& \int d^4x \left[W_2 R^2 \hat{V}_{\mu} \mathbf{A}^\mu +\cdots\right] \end{eqnarray} We end up with a dynamical theory for the vector $V_{\mu}$ that has the structure of a vector theory without gauge invariance. The gauge degree of freedom of the Schwinger functional has disappeared. In the single-field case studied here, this degree of freedom corresponds to the phase of $\Phi$. In the multi-charged-field case that is worked out in appendix \ref{multiplefields}, what is removed is the overall gauge degree of freedom. The end result is that the effective action involves the vector $V_{\mu}$ and gauge-invariant (i.e.~chargeless) combinations of the charged fields. The structure of the action in (\ref{B24}) is that of a single vector with a standard kinetic term, coupled to a real (uncharged) field $R=|\Phi|$. It was shown by Coleman that a massive U(1) theory without gauge invariance has a sensible quantum theory, \cite{Coleman}. The vector here has a mass term that is a function of $R$. This is generic in the presence of charged sources. The uncharged case will be analyzed later, in section \ref{singleNL}. The class of theories we are considering here are generalizations of the above. Among other things, their dynamics contains a general effective potential for the emergent vector.
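Before moving on to coupled theories, a quick consistency check of the inversion (\ref{A31p})-(\ref{a6}) is useful: the zeroth- and first-derivative terms in (\ref{A26p}) must combine so that $\tilde V_\nu = W_2|\Phi|^2 \hat V_\nu$ holds, i.e.\ the contribution $W_2|\Phi|^2 A^{(1)}_\nu$ has to cancel the explicit $\Phi^*\partial_\nu\Phi-\Phi\partial_\nu\Phi^*$ term (while $A^{(1)}_\nu$, being a pure gradient, drops out of $F^A_{\mu\nu}$). The following minimal sympy sketch, included purely as an illustration, verifies this cancellation:
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x', real=True)
R = sp.Function('R', positive=True)(t, x)     # radial part of Phi
Th = sp.Function('Theta', real=True)(t, x)    # phase of Phi
Phi = R * sp.exp(-sp.I * Th)
Phis = R * sp.exp(sp.I * Th)                  # Phi^*

# check that W2 |Phi|^2 A^{(1)}_nu cancels the explicit current term
# -i (W2/2)(Phi^* d_nu Phi - Phi d_nu Phi^*); W2 drops out of the check
for v in (t, x):
    A1 = sp.I/2 * sp.diff(sp.log(Phi/Phis), v)
    Jterm = -sp.I/2 * (Phis*sp.diff(Phi, v) - Phi*sp.diff(Phis, v))
    print(sp.simplify(Phi*Phis*A1 + Jterm))   # -> 0 for both components
\end{verbatim}
This cancellation is the algebraic reason why only the phase-invariant combination $R=|\Phi|$ survives in the effective action (\ref{B24}).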
\section{Emergent coupled U(1)'s: the linearized theory}\label{JJinteractionlinearised} So far our discussion concerned the case of a single theory. We shall now move our discussion to the case of a system of coupled QFTs, focusing only on the current operators in the two theories. The effective description of the various interactions, discussed in section~\ref{Generalsetup}, much below the messenger mass scale $M$, is in terms of an effective current-current interaction $\lambda J \hat{J}$. Later we also discuss the case where the interaction between the two sectors is mediated by charged operators, with a term $\lambda \mathcal{O} \widehat{\mathcal{O}}$. This more general possibility is discussed in section~\ref{couplednonlinear}. In particular, we now study the case of two QFTs coupled by the $J \hat{J}$ interaction via (we now work directly in mostly-plus Lorentzian signature) \begin{equation} \label{linearaac} S_{int} = \lambda \, \int d^d x\, J^\mu(x) {\hat J}_\mu(x) ~,\end{equation} where $J^\mu$ and ${\hat J}^\mu$ are conserved abelian U(1) currents of the visible and hidden QFTs. The parameter scales as $\lambda \sim 1/M^{2}$. For all dimensions above two, this is an irrelevant interaction. In two dimensions it is marginal to leading order and has been studied widely in the past. A similar setup for this deformation, analogous to how the $TT$ interactions are currently treated, was advanced in \cite{gk} in the marginal case, and more recently in \cite{sf} for the relevant case. In the presence of \eqref{linearaac}, the generating functional of correlation functions in the visible QFT is \begin{align} \begin{split} \label{linearab} e^{i W({\cal J})} &= \int [D\Phi] [D\h{\Phi}] \, e^{i S_{vis}(\Phi,{\cal J}) + i S_{hid}(\h{\Phi}) + i \lambda \int d^4 x\, J^\mu(x) {\hat J}_\mu(x)} \\ &= \int [D\Phi] [D\h{\Phi}] \, e^{i S_{vis}(\Phi,{\cal J}) + i S_{hid}(\h{\Phi})} \bigg[ 1 + i \lambda \int d^4 x\, J^\mu(x) {\hat J}_\mu(x) \\ &\hspace{1.5cm} - \frac{1}{2} \lambda^2 \int d^4 x_1 d^4 x_2 \, J^\mu(x_1) {\hat J}_\mu(x_1) J^\nu(x_2) {\hat J}_\nu(x_2) +{\cal O}(\lambda^3) \bigg] ~, \end{split} \end{align} where in the second equality we expanded the path integral perturbatively in $\lambda$ up to second order. The second term on the second line involves the one-point function of the current ${\hat J}_\mu$ in the undeformed hidden theory, and the term in the third line its two-point function. We also assume that in the absence of the interaction \eqref{linearaac}, ${\hat J}_\mu$ is the conserved current of a Lorentz-invariant QFT. This means that the one-point function of the current operators in the vacuum is taken to be zero. Recall now the standard derivation of the Ward identities associated with the global U(1) symmetry in the hidden QFT \begin{eqnarray} \label{genpertuae} 0 = \int {\cal D} {\hat \Phi} \, e^{i \int d^4 x {\hat {\cal L}}} \bigg\{ -i \int d^4 x \, \partial_\mu \theta(x) \bigg[ {\hat J}^\mu(x) {\hat J}^\nu(y) \bigg] + \delta_\theta {\hat J}^\nu (y) \bigg\} ~.\end{eqnarray} Using $$\delta_\theta {\hat J}^\nu = -i \partial^\rho \theta \, {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho}$$ and dividing by the partition function $Z$ we obtain the Ward identity \begin{equation} \label{genpertuaf} \langle \partial_\mu {\hat J}^\mu (x) {\hat J}^\nu (y) \rangle = - \partial^\rho \left(\delta(x-y) \langle {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho} \rangle \right) ~. \end{equation} Integrating both sides with $\int d^d x \, e^{-ik x}$ converts this to the momentum space expression \begin{equation} \label{genpertuag} k_\mu \langle {\hat J}^\mu (k) {\hat J}^\nu (-k) \rangle = - k^\rho \langle {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho}\rangle ~. \end{equation} The 1-point function on the RHS of this equation does not necessarily vanish. Typically, the operator ${\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho}$ is non-zero and its 1-point function will not vanish if the operator mixes with the identity. Such mixing is possible in theories with intrinsic scales, for example if the hidden theory has a mass gap $m$.
From Lorentz invariance we therefore expect \begin{equation} \label{genpertuai} \langle {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho}(y) \rangle = i {\cal A}\, {\delta^\nu}_\rho \end{equation} where ${\cal A}$ is a dimensionful constant. Its dimension arises from the mass scale $m$ of the hidden QFT. Then, \begin{equation} \label{genpertuaj} k_\mu \langle {\hat J}^\mu (k) {\hat J}^\nu (-k) \rangle = - i {\cal A} ~k^\nu ~. \end{equation} As explicit examples, in appendix~\ref{Examples}, we consider the case of free massive bosons and fermions. For free massive bosons $\varphi$ we have \begin{equation} {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\mu} \propto \varphi \varphi^* \, {\delta^{\nu}}_\mu\;, \end{equation} and the vev $\langle (\varphi \varphi^*)(x) \rangle$ is indeed non-vanishing at non-vanishing mass $m$ (as a simple perturbative computation in $m$ reveals). We have verified this mixing by a straightforward computation of the 2-point function $\langle {\hat J}^\mu (k) {\hat J}^\nu (-k) \rangle$ in appendix \ref{Examples}. On the other hand, for free massive fermions, ${\hat {\mathfrak J}}^{\mu}~\hskip-5pt_{\nu}=0$ identically. As a result, in this case we do not expect a contact-term violation of the classical Ward identity, as can again be verified by explicit computation. Since we have coupled the hidden theory to the visible sector, we now use the upper index $(0)$ and the lower index $hid$ to denote that such expectation values are to be computed in the undeformed hidden theory. In particular, the undeformed one- and two-point functions that we shall use are \begin{equation} \label{genpertuai2} \langle {\hat {\mathfrak J}}^{\nu}~\hskip-5pt_{\rho} \rangle^{(0)}_{hid} = i {\cal A}\, {\delta^\nu}_\rho \, , \end{equation} \begin{equation} \label{genpertuad} i {\hat G}_{\mu\nu}(k) = \langle {\hat J}_\mu (k) {\hat J}_\nu(-k) \rangle^{(0)}_{hid} ~,\end{equation} where ${\hat G}_{\mu\nu}(k)$ is the momentum space propagator. A spectral representation of the two-point function for a general hidden theory can be found in appendix~\ref{Spectralrepresentation}. Finally, denoting the partition function of the undeformed hidden theory as $e^{i W^{(0)}_{hid}}$ and assuming that the currents are conserved in the corresponding undeformed theories (so that we can use the results of the Ward identity), we can recast \eqref{linearab} up to ${\cal O}(\lambda^3)$ as \begin{align} \begin{split} \label{linearad} e^{i W({\cal J})} & = e^{i W^{(0)}_{hid}} \int [ D \Phi ]\, e^{i S_{vis}(\Phi, {\cal J})} \bigg[ 1 - \frac{i}{2}\lambda^2 \int d^4 x_1 d^4 x_2 \, J^\mu (x_1)\, J^\nu (x_2) {\hat G}_{\mu\nu}(x_1 - x_2) \bigg] ~. \end{split} \end{align} In this expression, the one-point function of the hidden current is taken to be zero in the Lorentz-invariant vacuum. In addition, this expression reveals that, from the point of view of the visible theory, the interaction \eqref{linearaac} with the hidden theory has induced effective interactions for the visible current. Working up to quadratic order in $\lambda$, we can exponentiate these interactions in an effective action of the form \begin{equation} \label{linearaf} \delta S_{vis} = - \frac{1}{2} \lambda^2 \int d^4 x_1 d^4 x_2 \, J^\mu (x_1)\, J^\nu (x_2)\,{\hat G}^{c}_{\mu\nu}(x_1 - x_2) ~. \end{equation} In this last equation we have also used a superscript $c$ to denote the connected part of this two-point function, since it is this connected part that appears in the exponent and obeys the Ward identity \eqref{genpertuaf}.
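The tensor structure allowed by the Ward identity \eqref{genpertuaj} is used repeatedly below: a term ${\cal B}(k^2)\left(k^\mu k^\nu - k^2\eta^{\mu\nu}\right)$ is transverse, while the contact piece $-{\cal A}\,\eta^{\mu\nu}$ reproduces the right-hand side of \eqref{genpertuaj}. As a small sanity check (our own illustration, not part of the derivation), the following sympy sketch contracts this decomposition with $k_\mu$ in mostly-plus signature:
\begin{verbatim}
import sympy as sp

# eta_{mu nu} = diag(-1,1,1,1), mostly-plus signature
eta = sp.diag(-1, 1, 1, 1)
k = sp.Matrix(sp.symbols('k0:4', real=True))   # contravariant k^mu
ksq = (k.T * eta * k)[0]                       # k^2 = k^mu k_mu
calA = sp.Symbol('A')
calB = sp.Function('B')(ksq)                   # arbitrary form factor B(k^2)

# G^{mu nu}(k) = -A eta^{mu nu} + B(k^2) (k^mu k^nu - k^2 eta^{mu nu})
G = -calA*eta.inv() + calB*(k*k.T - ksq*eta.inv())

lhs = (eta*k).T * G                            # k_mu G^{mu nu}
print(sp.simplify(lhs + calA*k.T))             # -> zero row: k_mu G^{mu nu} = -A k^nu
\end{verbatim}
The transverse part drops out of the contraction identically, for any form factor ${\cal B}(k^2)$, which is why only the contact coefficient ${\cal A}$ enters the Ward identity.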
We observe the emergence of a quadratic visible current-current interaction. In the most general case, we can use the spectral representation of the current two-point function (analysed in appendix~\ref{Spectralrepresentation}) \begin{equation}\label{spectralpositionmain} {\hat G}^{c}_{\mu\nu}(x_1 - x_2) = - \int_0^\infty d \mu^2 \int \frac{d^4 k}{(2\pi)^4} \frac{e^{- i k (x_1 - x_2)}}{k^2 + \mu^2 - i \epsilon} \rho_{\mu \nu}(k, \, \mu) \, , \end{equation} where the spectral weight, $\rho$, is split into longitudinal and transverse parts \begin{equation} \rho_{\mu \nu}(k, \, \mu) = \left( \eta_{\mu \nu} - \frac{k_\mu k_\nu}{\mu^2} \right){\cal B}( \mu) + \eta_{\mu\nu} \, {\cal A}( \mu)\;. \label{splitmain}\end{equation} The current-current interaction can also be expressed in momentum space as \begin{equation} \label{linearafa} \delta S^{JJ}_{vis} \equiv - \frac{\lambda^2}{2} \int \frac{d^4 k }{(2\pi)^4} \, J^\mu(-k)\, J^\nu(k)\, {\hat G}^{c}_{\mu\nu}(k) ~. \end{equation} This part can be reformulated as an interaction with a classical spin-1 field $A_{\mu}$. At quadratic order, the effective action of an emergent vector field $A_\mu$ reads \begin{equation} \label{genpertuab} S_{eff} = \int d^d k \bigg[ A_\mu (-k) J^\mu (k) - \frac{1}{2} {\cal P}^{\mu\nu}(k) A_\mu(-k) A_\nu(k) \bigg] ~.\end{equation} In this action, the tensor ${\cal P}^{\mu\nu}$ is proportional to the inverse of the hidden current-current 2-point function \begin{equation} \label{genpertuac} \left( {\cal P}^{-1} \right)_{\mu\nu} (k) = - \lambda^2 {\hat G}^{(c)}_{\mu\nu}(k) ~.\end{equation} The 2-point function is evaluated in the undeformed, $\lambda=0$, theory. The Ward identity \eqref{genpertuaj} (together with \eqref{spectralpositionmain}) implies that \begin{equation} \label{genpertuak} {\hat G}^{(c)\,\mu\nu}(k) = - {\cal A} \, \eta^{\mu\nu} + {\cal B}(k^2) \left( k^\mu k^\nu - k^2 \eta^{\mu\nu} \right) ~,\end{equation} so that \begin{equation} \label{genpertual} \left( {\cal P}^{-1} \right)^{\mu\nu}(k) = \lambda^2 \left( {\cal A} \, \eta^{\mu\nu} - {\cal B}(k^2) \left( k^\mu k^\nu - k^2 \eta^{\mu\nu} \right) \right) ~.\end{equation} There are now two distinct possibilities. If the constant ${\cal A} \neq 0$, then the inversion is straightforward and gives \begin{equation}\label{genpertuam} {\cal P}^{\mu\nu} ={1 \over \lambda^2 {\cal A}}\left(\eta^{\mu\nu}+{{\cal B}\over {\cal A}+ {\cal B} k^2}(k^{\mu}k^{\nu}-k^2 \eta^{\mu\nu})\right) \, . \end{equation} We therefore find, up to quadratic order in the momentum expansion, \begin{equation} \label{genpertuam2} {\cal P}^{\mu\nu} = \lambda^{-2} {\cal A}^{-1} \left( \eta^{\mu\nu} + \frac{{\cal B}(0)}{{\cal A}} (k^\mu k^\nu - k^2 \eta^{\mu\nu} ) \right) + {\cal O}(k^4) \end{equation} The effective action \eqref{genpertuab} takes the form (in real space) \begin{equation} \label{genpertuan} S_{eff} = \int d^d x \bigg[ A_\mu J^\mu - \frac{1}{2} \lambda^{-2} {\cal A}^{-1} A_\mu A^\mu - \frac{1}{4} \lambda^{-2} {\cal A}^{-2} {\cal B}(0) F_{\mu\nu}F^{\mu\nu} + {\cal O}(\partial^4) \bigg] ~. \end{equation} This is an example of a massive-photon action, with its mass arising from a Higgs effect due to the non-vanishing vacuum expectation value ${\cal A} \neq 0$. On the other hand, if the constant ${\cal A}=0$ (as in the case of free fermions), then the inversion of ${\cal P}^{-1}$ can be performed in two different ways.
One possibility is to form a \emph{non-local} effective action \eqref{genpertuab} (properly taking into account gauge-fixing conditions, etc.). This will be explored further in section \ref{singleNL}. The other possibility is to add and subtract a contact term in \eqref{genpertuak}, so that the inversion is possible. Of course this second possibility is ambiguous, and reflects the fact that the true effective interaction between visible sector currents is contained in the action \eqref{linearafa} and not in the IR expansion of \eqref{genpertuab}. Using therefore \eqref{genpertuan} as an effective action can sometimes be misleading, since it can truncate important degrees of freedom\footnote{By expanding a propagator and its inverse in momenta, we mis-estimate the mass term that is relevant for the interaction, and it can obtain admixtures from contact terms that are irrelevant.}. We conclude that the effective interaction (\ref{linearafa}) for the visible theory current is unambiguous, whereas the resolved dynamical action in (\ref{genpertuan}) is scheme dependent. Some specific examples of the effective action in the cases where the hidden sector fields are free bosons or fermions are given in appendix~\ref{Examples}. In this case the spectral weight $\rho_{\mu \nu}(\mu)$ has a mass gap $m$, above which there is a continuum of states. For a more general theory, we can use the general spectral representation for the hidden theory current correlator provided in appendix~\ref{Spectralrepresentation}. In particular, for a strongly coupled hidden theory with a discrete spectrum, we expect the appearance of poles in the spectral function. Near such poles one finds a massive photon state, as shown in appendix~\ref{Spectralrepresentation}. In particular, the effective current interaction in the visible sector takes the following form near such poles \begin{equation} \label{linearafapoles} \delta S^{JJ}_{vis} \equiv \frac{\lambda^2}{2} \int \frac{d^4 k }{(2\pi)^4} \, J^\mu(-k)\, J^\nu(k)\, \frac{R(m_i)}{k^2 + m_i^2} \left(\eta_{\mu \nu} - \frac{k_\mu k_\nu}{m_i^2} \right) ~, \end{equation} where $R(m_i)>0$ is the positive spectral weight residue near the pole. This interaction can be resolved with a standard Proca field. The effective action is similar to \eqref{genpertuan}, with the difference that the mass term is now governed by the location of the pole $m_i$, and the coupling to the emergent vector field by the residue $R(m_i)$. The interaction between two charged sources of equal sign is repulsive, as expected. In appendix \ref{effectivestaticpotential} we survey the long-distance behavior of the emergent vector interaction, as a function of the structure of the spectral density of the vector two-point function. In the case of a discrete spectrum with zero widths and masses $m_i$ we obtain for the static potential a sum of Yukawa interactions, \begin{equation}\label{s9} \Phi(r) \sim \frac{1}{4 \pi r} \sum_i e^{- m_i r} \, . \end{equation} The force between two equal charges is then repulsive, as expected from the exchange of a massive vector boson. The sign is fixed due to the positivity of the spectral weight. If the isolated pole is at zero momentum, we obtain a long-range potential due to the exchange of a massless photon-like state \begin{equation} \Phi(r) \sim \frac{1}{4 \pi r} \, .
\end{equation} {This case is not obviously excluded by the WW theorem, \cite{WW}, as one of the assumptions is that the massless pole should correspond to a charged state under the current whereas the states generated by a U(1) currets are chargeless.} A continuum spectral density starting above a mass $M$ as $\rho^{(1)}(\mu)\sim (\mu^2-M^2)^a$ gives a static potential that at large distances behaves as \begin{equation} \Phi(r) \sim {e^{-Mr}\over r^{a+2}} \label{s10}\end{equation} If the continuum starts at $M=0$ then (\ref{s10}) is modified to \begin{equation} \Phi(r) \sim {1\over r^{2a+3}} \label{s11} \end{equation} It is clear that in both cases, (\ref{s10}) and (\ref{s11}), as the spectral density must be integrable, the exponent in the denominator, is allowed to approach 1, but cannot reach it as in that limit a logarithmic divergence appears in the density of states. Finally for a conserved current in a CFT, we obtain \begin{equation} \Phi(r) \sim {1\over r^{5}} \end{equation} in agreement with (\ref{i22}). We conclude by mentioning that the form of the IR expansion \eqref{genpertuan} is \emph{universal} for all the possible choices of a hidden theory, and is dictated solely by the $U(1)$ symmetry and the associated Ward identity that leads to \eqref{genpertuaj} and \eqref{genpertuak}. \section{The non-linear theory of the coupled system}\label{couplednonlinear} In this section we present, in general terms, the non-linear extension of the mechanism explained in the previous section in the two-theory case. In particular, we shall make use of the global symmetries of the system, to advocate for the emergence of a dynamical vector field and its consistent dynamics. We first assume that both the visible and hidden theory, when uncoupled, have an independent U(1) global invariance, therefore they have a total $U(1) \times \widehat{U(1)}$ symmetry. We start by defining the generating functional of the correlation functions of the visible theory. We write down this Schwinger functional in terms of external vector potentials $A_\mu, \hat{A}_\mu$ that can couple to the visible/hidden sector respectively. It is straightforward to generalise this functional with the addition of scalar and other types of sources, but we refrain from doing so, in order to keep our equations as transparent and compact as possible. The full theory is also defined on a flat and not dynamical geometric background $g_{\mu\nu}\equiv \eta_{\mu\nu}$. When we wish to set the external sources to their background values, we will use a bold notation i.e ${\bf A}_\mu, {\bf \hat{A}}_\mu$. Normally these are taken to be zero in a Lorentz invariant vacuum. The Schwinger functional is therefore given by \begin{equation} e^{-W(A, \hat{A})}\,=\,\int\,\left[D \Phi\right] [D \h \Phi]\,e^{-S_{visible}\left(\Phi,A\right)-S_{hidden}\left(\h \Phi,\hat{A}\right)\,-\,S_{int}\left(\mathcal{O}_i,\h {\mathcal{O}}_i \right)} \label{fun} \end{equation} where $\Phi^i$ and $\h \Phi^i$ are respectively the fields of the visible QFT and the hidden $\h{QFT}$ and the interacting part is defined as: \begin{equation} S_{int}\,=\,\int\,d^4x\,\sum_i\,\lambda_i\,\mathcal{O}_i(x)\,\h{\mathcal{O}}_i(x) \label{12}\end{equation} where $\mathcal{O}_i$ are operators of the visible QFT, $\h{\mathcal{O}}_i$ operators of the hidden $\h{QFT}$ and the $\lambda_i$ are generic couplings. There are now, two different possibilities. 
The first is that the operators appearing in \eqref{12} are uncharged under the independent global symmetries; the second is that some of them are charged. The first possibility is also the one analysed at the linearised level in section~\ref{JJinteractionlinearised}. For the second possibility, the operators in \eqref{12} are chosen to be charged under the visible and hidden U(1) as follows\footnote{It is clear that this case can be easily generalized to more complicated cases, but we shall refrain from doing so here.}:
\begin{equation} \mathcal{Q}\left(\mathcal{O}_i\right)\,=\,1\,,\quad \h{\mathcal{Q}}(\h{\mathcal{O}}_i)\,=\,-1\,. \end{equation}
This means that the two independent U(1) global symmetries are broken to the diagonal subgroup
\begin{equation} U(1)\,\times\,\h{U(1)}\,\,\rightarrow\,\,U(1)_{diag}\label{sym} \end{equation}
which corresponds to the U(1) invariance of the total functional defined in \eqref{fun}. Had we chosen the first possibility, the functional would simply have retained the two separate global symmetries. The difference can be summarised in the following statement: in the presence of only a single global symmetry, we may identify $A_\mu \equiv \hat{A}_\mu$, since there is a common background field for the single $U(1)_{diag}$, and this is the field for which we expect a gauge invariance. We now remark that the theory, as written in \eqref{fun}, has a natural cutoff set by the mass $M$ of the messenger fields. Since we are interested in visible-theory observables at energies below the cutoff scale $M$, we integrate out the hidden $\h{QFT}$ to obtain
\begin{align} e^{-W(A, \hat{A})} \, &= \,\int\,[D \Phi] [D\h \Phi]\,e^{-S_{visible}\left(\Phi,A\right)-S_{hidden}\left(\h \Phi,\hat A\right)\,-\,S_{int}}\,\nonumber\\&=\,\int\,[D \Phi] \,e^{-S_{visible}\left(\Phi,A\right)\,-\,\mathcal{W}\left(\mathcal{O}_i, \hat{A}\right)} \label{6} \end{align}
where $\mathcal{W}\left(\mathcal{O}_i,\hat{A} \right)$ is the generating functional for the hidden theory with external sources given by the operators $\mathcal{O}_i$ of the visible theory.\\ The low energy dynamics of the visible theory is now described by the total action:
\begin{equation} S_{total}=S_{visible}+\mathcal{W} \, . \label{3} \end{equation}
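As a check of conventions, expanding $\mathcal{W}$ to second order in the couplings reproduces the linearised result of the previous section. For an uncharged coupling of currents, $S_{int}=\lambda\int d^4x\, J^\mu\, \h{J}_\mu$ (a sketch, with the conventions used above), one finds
\begin{equation}
\mathcal{W} \,=\, \lambda \int d^4x\, J^\mu(x)\,\langle \h{J}_\mu(x)\rangle \,-\, \frac{\lambda^2}{2}\int d^4x_1\, d^4x_2\; J^\mu(x_1)\, {\hat G}^{c}_{\mu\nu}(x_1-x_2)\, J^\nu(x_2) \,+\, {\cal O}(\lambda^3)\;,
\end{equation}
whose quadratic term is precisely \eqref{linearafa}, while the linear term vanishes in a Lorentz-invariant vacuum where $\langle \h{J}_\mu\rangle=0$.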
We define the expectation value of the current of the hidden theory (including the interaction)
\begin{equation} \tilde{V}_{\mu}\,\equiv\,\frac{\delta \mathcal{W}\left(\mathcal{O}_i,\hat{A}\right)}{\delta \hat A^{\mu}}\,=\,\langle \h {J_{\mu}}\rangle\label{defJ2} \end{equation}
with the idea that such an object could act as an emergent vector field for the visible theory. More precisely, the functional derivative appearing in \eqref{defJ2} must be computed at $\hat A^\mu={\bf \hat A^\mu}$. In the case of an uncharged coupling both currents are independently conserved
\begin{equation} \partial^{\mu} \, \tilde{V}_\mu \, = \, 0 \, \qquad \partial^\mu \, J_\mu^{visible} \, =\,0 \label{v3}\end{equation}
In this case, we defined the current of the visible theory as:
\begin{equation} J_{\mu}^{visible}\,\equiv\,\frac{\delta S_{visible}\left(\Phi,A\right)}{\delta A^{\mu}}\Big|_{A={\bf A}}\label{vis}\;. \end{equation}
In the case of a single $U(1)$ invariance of the full theory \eqref{sym}, defined when $A_{\mu} = \hat{A}_{\mu}$, only the total current is conserved:
\begin{equation} \partial^{\mu}\left(\tilde{V}_\mu\,+\,J_\mu^{visible}\right)\,=\,0\label{conserv} \end{equation}
At this point, we invert equation \eqref{defJ2}:
\begin{equation} \hat A^\mu\,=\, \hat A^\mu(\tilde{V},\mathcal{O}_i) \, . \end{equation}
To proceed, we define the Legendre-transformed functional with arbitrary background sources ${\bf A}, {\bf \hat{A}}$ as
\begin{align} \Gamma\left( \tilde{V},\mathcal{O}_i , {\bf A} , {\bf \hat{A}}\right)\,\equiv \,\int d^4x\,\tilde{V}_\mu\, \left( \hat{A}^\mu(\tilde{V},\mathcal{O}_i) - {\bf \hat{A}}^\mu \right)\,-\,\mathcal{W}\left(\mathcal{O}_i,\hat{A}^\mu(\tilde{V},\mathcal{O}_i)\right)\label{leg} \end{align}
Using this we define an ``effective action'' $S_{eff}(\tilde{V},\Phi)$ which contains the same information as the original functional, but acts as an action for the visible sector's fields coupled to an induced dynamical vector field $\tilde{V}_\mu$
\begin{equation}\label{leg2} S_{eff}(\Phi, \tilde{V}, {\bf A} , {\bf \hat{A}} )\,=\,S_{visible}(\Phi, {\bf A})\,-\,\Gamma\left(\tilde{V},\mathcal{O}_i, {\bf A} , {\bf \hat{A}} \right)= \end{equation}
$$ =\,S_{visible}(\Phi, {\bf A})\,-\,\int d^4x\,\,\tilde{V}_\mu\, \left( \hat A^\mu(\tilde{V},\mathcal{O}_i) - {\bf \hat{A}}^\mu \right) \,+\,\mathcal{W}\left(\mathcal{O}_i,\hat A^\mu(\tilde{V},\mathcal{O}_i)\right) $$
We shall prove that the effective action defined in \eqref{leg2}, once extremised with respect to the emergent vector field $\tilde{V}^\mu$, has the following desired feature
\begin{equation} S_{eff}(\tilde{V}^\star,\Phi)\,=\,S_{visible}(\Phi,A= {\bf A})\,+\,\mathcal{W}(\mathcal{O}_i,\hat{A} = {\bf \hat A})\,\equiv\,S_{total}\vert_{{\bf A}, {\bf \hat{A} }} \end{equation}
where $\tilde{V}=\tilde{V}^\star$ is the solution that extremises the effective action $S_{eff}(\tilde{V},\Phi)$.\\ In order to achieve this task, we start by computing the variation of the Legendre-transformed functional \eqref{leg} with respect to the vector field $\tilde{V}^\mu$:
\begin{equation} \frac{\delta \Gamma\left(\tilde{V},\mathcal{O}_i, {\bf A} , {\bf \hat{A}}\right)}{\delta \tilde{V}_\mu}\,= \hat A_\mu \, - {\bf \hat{A}}_\mu \, , \end{equation}
where we used the definition of the emergent vector field \eqref{defJ2}.
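For completeness, the intermediate step is the chain rule
\begin{equation}
\frac{\delta \Gamma}{\delta \tilde{V}_\mu(x)}\,=\, \hat A^\mu(x) - {\bf \hat{A}}^\mu(x)\,+\,\int d^4y\,\left[\tilde{V}_\nu(y)\,-\,\frac{\delta \mathcal{W}}{\delta \hat A^\nu(y)}\right]\frac{\delta \hat A^\nu(y)}{\delta \tilde{V}_\mu(x)}\;,
\end{equation}
in which the square bracket vanishes identically, for any configuration $\tilde{V}_\mu$, by the definition \eqref{defJ2}.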
{By setting the source gauge field $\hat A^\mu$ to its background value, we obtain the simple expression
\begin{equation} \frac{\delta \Gamma\left(\tilde{V},\mathcal{O}_i\right)}{\delta \tilde{V}_\mu}\Big|_{\hat A = {\bf \hat{A}}}\,=\,0 \, .\label{dynm} \end{equation}
Therefore, the Legendre-transformed functional, as we defined it, is extremal on the background. Using \eqref{dynm} we also conclude that $S_{eff}$ is extremal with respect to $\tilde{V}^\mu$ on the hidden source background $\hat{A}_\mu = {\bf \hat{A}}_\mu$.} \\ What is left to show is that the effective action $S_{eff}$, once evaluated on the solution of equation \eqref{defJ2}, reduces to the original induced action of the original theory \eqref{3}. Let us denote by $\tilde{V}_\mu^\star$ the solution of equation \eqref{defJ2}. By construction it corresponds to the vev of the current of the hidden theory
\begin{equation} \tilde{V}_{\mu}^\star\,=\,\frac{\delta \mathcal{W}\left(\mathcal{O}_i,\hat A\right)}{\delta \hat A^{\mu}}\Big|_{\hat A= {\bf \hat{A}}}\,=\,\langle \h {J_{\mu}}\rangle \end{equation}
From \eqref{leg} we can evaluate the effective action at $\tilde{V}_\mu=\tilde{V}_\mu^\star$ (which coincides with $\hat A_\mu= {\bf \hat{A}}_\mu$) and we obtain indeed the already advertised result
\begin{equation} S_{eff}(\tilde{V}^\star,\Phi)\,=\,S_{visible}(\Phi,A= {\bf A})\,+\,\mathcal{W}(\mathcal{O}_i,\hat{A} = {\bf \hat A})\,\equiv\,S_{total}\vert_{{\bf A}, {\bf \hat{A} }} \, . \end{equation}
This is the description in the case of a $U(1) \times \widehat{U(1)}$ global invariance that is translated into a $U(1)_{global} \times \widehat{U(1)}_{local}$ invariance of the effective action. The whole procedure can be repeated with almost no differences in the case of a single $U(1)_{diag}$: one simply replaces ${\bf A} = {\hat{\bf A}}$ in the formulae above, and the final effective action has only a single $U(1)_{diag}$ local invariance. To conclude, we have shown that the imprints of the hidden theory on the visible theory can quite generically be reformulated as the visible theory being coupled to an emergent dynamical vector field (denoted by $\tilde{V}^\mu$). The dynamics of this emergent vector field encodes the effects that the hidden sector has on the visible one. In the next subsection, we shall perform a low-energy derivative expansion of the various functionals, in order to make the features of the induced interactions more explicit.
\subsection{The low-energy U(1) dynamics}
In this section, we shall employ the generic procedure described above for a simple choice of the functional \eqref{6}, dictated by symmetry and an IR derivative expansion. This hence assumes the presence of a mass gap for the hidden theory, so that the derivative expansion is organised in inverse powers of the mass gap. For simplicity, we also focus on the case ${\bf A}_\mu = {\bf \hat{A}}_\mu = 0$ with a single $U(1)$ symmetry for the total system and a Lorentz-invariant vacuum. We hence assume the low-energy dynamics of the hidden theory plus interactions to be described by the effective Schwinger functional
\begin{equation} \mathcal{W}\left(\Phi^i, A^\mu\right)\,=\,\int d^4x\,\left(Z_0(|\Phi|^2)-\frac{Z_1(|\Phi|^2)}{4}F^2\,+\,\frac{Z_{ij}(|\Phi|^2)}{2}(D_\mu \Phi^i)(D^\mu \Phi^j)^*\,+\,\dots\right)\label{effC} \end{equation}
We also consider the presence of a set of complex operators $\Phi^i$ of equal charge, that belong to the visible QFT\footnote{In general the theory contains many charged operators. The relevant analysis in this more general setup is treated in appendix~\ref{multiplefields}.}. $\Phi_i$ denote here what we called ${\mathcal O}_i$ in the previous section. The notation $|\Phi|^2$ in the potential functions that appear in (\ref{effC}) is schematic and stands for gauge-invariant combinations of the scalar sources without derivatives, e.g.~the hermitian combinations of $\Phi^i\,{\Phi^j}^*$.
Furthermore, the covariant derivative is defined as
\begin{equation} D_\mu\,\equiv\,\partial_\mu\,+\,i\,q\,A_\mu\;, \end{equation}
where we keep the charge $q$ generic and the field strength is given by $F=dA$.\\ The ellipsis in (\ref{effC}) is there to remind us that the effective action \eqref{effC} is a low-energy description where higher-derivative terms, \textit{i.e.}~terms with more than two derivatives, are neglected.\\ Following the generic procedure explained in the previous section, we define the emergent vector field:
\begin{equation} \tilde{V}^\mu\,\equiv\,\frac{\delta \mathcal{W}\left(\Phi_i,A^\mu\right)}{\delta A_\mu}\equiv \langle \h{J}^\mu \rangle\label{defj3} \end{equation}
namely the expectation value of the U(1) current of the Schwinger functional (that includes cross interactions) of the hidden $\h{QFT}$.\\ More explicitly we obtain:
\begin{equation} \tilde{V}^\mu\,=\,\partial_\nu \left(Z_1\,F^{\mu\nu}\right)\,+\,q^2\,A^\mu\,Z_{ij}\,\Phi^i\,{\Phi^j}^*\,-\,\frac{i\,q}{2}\,Z_{ij}\left(\Phi^i \partial^\mu {\Phi^j}^*\,-\,{\Phi^i}^*\,\partial^\mu \Phi^j\right) +\cdots \label{lala} \end{equation}
At this stage we want to invert the previous expression \eqref{lala}:
\begin{equation} A^\mu=A^\mu\left(\tilde{V},\Phi\right)\label{inv} \end{equation}
and identify $\tilde{V}^\mu$ as the new emergent and dynamical degree of freedom. We perform such a task within a perturbative derivative expansion.\\ At zeroth order in derivatives, we find:
\begin{equation} A^\mu\,=\,\frac{\tilde{V}^\mu}{q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*}\equiv \,V^\mu \end{equation}
where, for simplicity, we have rescaled the original current $\tilde{V}^\mu$ appearing in \eqref{defj3}.\\ Up to two derivatives we find the result:
\begin{equation} A^\mu \,=\,V^\mu\,+\,\frac{i\,Z_{ij}}{2\,q\,Z_{kl}\,\Phi^k\,{\Phi^l}^*}\,\left(\Phi^i \partial^\mu {\Phi^j}^*\,-\,{\Phi^i}^*\,\partial^\mu \Phi^j\right)\,-\,\frac{1}{q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*}\,\partial_\nu\left(Z_1\,F_V^{\mu\nu}\right)\,+\,\mathcal{O}(\partial^3)\label{inva} \end{equation}
where $F_V=d V$ is the field strength of the emergent vector field. We can manipulate this expression to bring it to the following form:
\begin{equation} q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*\,A^\mu\,=\,q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*\,V^\mu\,+\,Z_{ij}\,(J_\Phi^{ij})^\mu-\,\partial_\nu\left(Z_1\,F_V^{\mu\nu}\right)+\cdots \label{pp} \end{equation}
where we have defined the current associated to the field $\Phi$ as
\begin{equation} (J_\Phi^{ij})^\mu \equiv \frac{i\,q}{2}\left(\Phi^i \partial^\mu {\Phi^j}^*\,-\,{\Phi^i}^*\,\partial^\mu \Phi^j\right) \;. \end{equation}
We can now set the external source to zero, ${\bf A}^\mu=0$, and rewrite our result, eq.~\eqref{pp}, as a Maxwell equation for the vector field $V^\mu$:
\begin{equation} {\partial_\nu\left(Z_1\,F_V^{\mu\nu}\right)\,=\,q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*\,V^\mu\,+\,Z_{ij}\,(J_\Phi^{ij})^\mu +\cdots \,}\label{max1} \end{equation}
This equation exhibits a ``dual'' gauge invariance
\begin{equation} V_{\mu}\to V_{\mu}+\partial_{\mu}\lambda \;\;\;,\;\;\; \Phi_i \to \Phi_i ~e^{- i q \lambda} \, . \label{dual}\end{equation}
As in the single-theory case of section \ref{effe}, this invariance is an artifact of the low order of the derivative expansion, as explained in appendix \ref{high}.
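To make \eqref{max1} concrete, consider for illustration a single charged operator $\Phi$, so that $Z_{ij}$ reduces to a single function $Z_2$ (an assumption made purely to exhibit the structure). Then \eqref{max1} becomes
\begin{equation}
\partial_\nu\left(Z_1\,F_V^{\mu\nu}\right)\,=\,q^2\,Z_2\,|\Phi|^2\,V^\mu\,+\,Z_2\,J_\Phi^\mu\;,\qquad
J_\Phi^\mu\,=\,\frac{i\,q}{2}\left(\Phi\,\partial^\mu \Phi^*\,-\,\Phi^*\,\partial^\mu \Phi\right),
\end{equation}
the equation of a vector field with effective mass $m_V^2 = q^2 Z_2 |\Phi|^2/Z_1$, sourced by the scalar current, in direct analogy with scalar electrodynamics in a symmetry-breaking background.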
Defining the original current of the visible theory as:
\begin{equation} J_{visible}^\mu\,=\,\frac{\delta S_{visible}(A^\mu)}{\delta A^\mu}\Big|_{{\bf A}=0}\,, \end{equation}
two conservation laws follow (one of which is redundant and reflects our specific choice of variables). The first physical conservation law is represented by the Ward identity for the total action:
\begin{equation} S_{total}\,=\,S_{visible}\,+\,\mathcal{W} \end{equation}
with respect to the preserved diagonal U(1) global group. This takes the form of the conservation of the total current:
\begin{equation} \partial_\mu\,\left(\frac{\delta S_{total}(A^\mu,\Phi,\dots)}{\delta A_\mu}\right)\,=\,0 \quad \rightarrow \quad\partial_\mu\,\left[\tilde{V}^\mu \,+\, J^\mu_{visible}\right]\,=\,0\label{cons1} \end{equation}
The second ``conservation law'' comes directly from the specific definition of the emergent vector field that led to eq.~\eqref{max1}. It implies (on the background)
\begin{equation} \partial_\mu\,\left[q^2\,Z_{ij}\,\Phi^i\,{\Phi^j}^*\,V^\mu\,+\,Z_{ij}\,(J_\Phi^{ij})^\mu \right]\,=\,0\label{cons2} \end{equation}
The two conservation laws \eqref{cons1} and \eqref{cons2} can also be combined into
\begin{equation}\label{cons2n} \partial_\mu\,\left[J_{visible}^\mu\, -\,Z_{ij}\,( J_\Phi^{ij})^\mu \right]\,=\,0 \end{equation}
Equations \eqref{max1} together with \eqref{cons2n} represent the main result of this section. The dynamical equation \eqref{max1} indicates that the low-energy effects of the hidden sector on the visible one can be captured by a Maxwell equation for a dynamical emergent vector field $V_\mu$. The total current is conserved, as in \eqref{cons2n}, but it is split into the visible-sector piece labelled by $J_{visible}^\mu$ and a piece from the hidden sector including interactions, $Z_{ij}\,( J_\Phi^{ij})^\mu$. As we discussed in section~\ref{effectiveactionsingletheory}, we may add to the various functionals ``improvement terms'' that shift the definition of the various currents by identically conserved quantities. A similar ambiguity is also reflected in the splitting $S_{total}\,=\,S_{visible}\,+\,\mathcal{W}$; it is a form of scheme dependence. Nevertheless, once a particular scheme is chosen, the procedure described above follows consistently and one obtains unambiguous results.
\section{The holographic emergent photon}\label{Hologaxion}
We now investigate the special case where the hidden theory $\widehat{QFT}$ is a large-$N$ holographic theory. In this case $\widehat{QFT}$ has a gravity dual that we shall assume to be five-dimensional. The general action can be written as
\begin{equation} S=\hat S+S_{int}+S_{SM} \label{b19}\end{equation}
where the interaction term $S_{int}$ has been defined in (\ref{linearaac}), $\hat S$ is the action of the holographic theory, and $S_{SM}$ the action of the SM. Applying the holographic correspondence, we can write\footnote{{For a conserved U(1) current $\hat J_{\mu}$ of dimension $\Delta=3$, dual to a U(1) gauge field $A_M(x,z)$, the asymptotic behaviour near the boundary is $A_{\mu}(x,z)\to B_{\mu}(x)$, while the radial component can be gauged away. $B_{\mu}(x)$ is the source that couples to $\hat J_{\mu}$ in the $\h{QFT}$ action. In our example $B_{\mu}=J_{\mu}$, the SM current. It should be stressed that there are, in general, many other couplings of hidden-theory operators to SM operators. These will generate further couplings between the bulk gravitational theory and the SM.
We neglected them here, but they can be readily included.}}
\begin{equation} \langle e^{iS_{int}}\rangle_{\widehat{QFT}}=\int_{\lim_{z\to 0}A_{\mu}(x,z)= J_{\mu}(x)} {\cal D}A_{\mu}~e^{iS_{\rm bulk}[A_{\mu}]} \label{e1}\end{equation}
where $A_{\mu}$ is a bulk (five-dimensional) gauge field dual to the global current $\hat J_{\mu}$ of $\widehat{QFT}$. $S_{\rm bulk}[A_{\mu}]$ is the bulk gravity action, $z$ is the holographic coordinate, and the gravitational path integral has boundary conditions for $A_{\mu}$ to asymptote to the SM current $J_{\mu}$ near the AdS boundary. We have also neglected the other bulk fields. By inserting a functional $\delta$-function we may rewrite (\ref{e1}) as
\begin{equation} \langle e^{iS_{int}}\rangle=\int_{\lim_{z\to 0}A_{\mu}(x,z)= B_{\mu}(x)} {\cal D}A_{\mu}(x,z){\cal D}B_{\mu}(x){\cal D}C_{\mu}(x)~e^{iS_{\rm bulk}[A_{\mu}]+i\int C^{\mu}(x)(B_{\mu}(x)-J_{\mu}(x))} \label{e2}\end{equation}
If we now integrate over $B_{\mu}(x)$ first in the path integral, we obtain the Legendre transform of the Schwinger functional of the bulk gauge field, which becomes the bulk effective action. This corresponds in holography to switching the boundary conditions at the AdS boundary from Dirichlet to Neumann, with $C_{\mu}(x)$ becoming the expectation value of the operator $\hat J_{\mu}$. We finally obtain
\begin{equation} \langle e^{iS_{int}}\rangle=\int_{\lim_{z\to 0}A_{\mu}(x,z)=A^{(0)}_{\mu}(x)+z^2 C_{\mu}(x)+\cdots} {\cal D}A_{\mu}(x,z)\,{\cal D}C_{\mu}(x)~e^{iS_N[A_{\mu}]-i\int C_{\mu}(x)J^{\mu}(x)} \label{e22}\end{equation}
where $S_N[A_{\mu}]$ denotes the bulk action appropriate to Neumann boundary conditions. This analysis is valid when the SM action is coupled to the holographic theory at the UV (the shifted boundary). When, however, the coupling is at a cutoff scale, the SM must be positioned as a brane at the appropriate radial position, giving rise to the brane-world coupling. We therefore imagine the SM action as coupled at the radial scale $z_0\sim 1/M$ to the bulk action. Following holographic renormalization \cite{Bianchi:2001kw, Bianchi:2001de}, we may then rewrite the full bulk+brane action of the emergent vector field as
\begin{equation} S_{total}=S_{bulk}+S_{brane} \label{e3}\end{equation}
\begin{equation} S_{bulk}=M_P^3\int d^5x\sqrt{g}\left[Z~F^2+{\cal O}(F^4)\right] \label{e4}\end{equation}
\begin{equation} S_{brane}=\delta(z-z_0)\int d^4x\sqrt{\gamma}\left[M_4^2(\hat F)^2+\hat A_{\mu}J^{\mu}+\cdots\right] \label{b5}\end{equation}
where $\hat F_{\mu\nu}(x)\equiv F_{\mu\nu}(z_0,x)$ is the induced field strength on the brane and we are working in the axial gauge $A_5=0$. As we shall be interested in energies $E\ll M$, we can ignore higher-derivative terms like $\hat F^4$ on the brane. Here the U(1) gauge invariance is intact on the brane, as the induced gauge field transforms under the bulk gauge transformations restricted to the brane. In the boundary action (\ref{b5}), $\gamma$ is the induced four-dimensional metric. We have suppressed the metric and other bulk fields. The kinetic coefficient $Z$ depends in general on scalar bulk fields. On the brane, we have suppressed the standard model fields, some of which are charged under the gauge field $A$. There are also localized terms for other bulk fields that we have suppressed. All the localized kinetic terms of the bulk fields on the brane, like the $\hat F^2$ term, are due to the quantum corrections of the SM fields. The graviton also couples to the SM action and provides emergent gravity, \cite{grav}. Importantly, the gauge symmetry is unbroken in the bulk, and so it is on the brane, as long as charged fields have no vev.
However, as we shall see, and in agreement with our earlier analysis, the dark-photon exchange on the brane is not massless. Finally, the boundary conditions for the bulk action are Neumann. It should be noted that what we have here is a close analogue of the DGP mechanism, \cite{DGP}, with two differences: here we have a vector field, and the bulk data are non-trivial, as in the setup of \cite{self}. The main difference in the physics of an emergent vector field originating in a holographic theory is that, due to the strong coupling effects, there is an infinity of vector-like resonances coupled to the SM charged fields. They correspond to the poles of the two-point function of the global current $\hat J^{\mu}$ of the ``hidden'' holographic theory. If the holographic theory is gapless, then there is a continuum of modes and, as mentioned earlier, in such a case the induced vector interaction is non-local. If the theory has a gap and a discrete spectrum (like QCD) then there is a tower of nearly stable states at large $N$ that are essentially the vector meson trajectories, and act as the KK modes of the bulk gauge field. To investigate these interactions we should analyze the propagator of the gauge field on the SM brane. For this we introduce a $\delta$-function source for the vector on the brane and we solve the bulk+brane equations in the linearized approximation, assuming a trivial profile for the bulk gauge field\footnote{This will be the case when the hidden QFT is at zero (global) charge density.} while the metric and other scalars have the holographic RG flow profile of a Lorentz-invariant QFT, namely
\begin{equation} ds^2=dz^2+e^{2A(z)}dx_{\mu}dx^{\mu}\;\;\;,\;\;\; Z(\Phi_i(z)) \label{b6}\end{equation}
In a transverse gauge, the bulk fluctuation equation is given by the bulk Laplacian, plus corrections that come from the brane couplings. We factor out the space-time index dependence (this is taken into account in appendix \ref{gau12}). The equation reads
\begin{equation} M_P^3Z\left[\partial_z^2+\left({Z'\over Z}+4A'\right)\partial_z +e^{-2A}\square_4\right]G(x,z)+ \label{b7}\end{equation}
$$ +\delta(z-z_0)G_b~G(x,z)=\delta(z-z_0)\delta^{(4)}(x) $$
where $G(x,z)$ is the bulk-to-bulk gauge field propagator with Neumann boundary conditions and $G_b$ is the two-point function of the brane current. We work in Euclidean 4d space along the brane, and primes stand for derivatives with respect to $z$. A mass term for the gauge field on the brane may also be present, in case there are non-trivial charged vevs on the brane; it appears as $m_0^2$ in the expansion (\ref{b8a}) below. The two terms on the brane originate in the IR expansion of the two-point function of the brane current that couples to the bulk gauge field. We Fourier transform along the four space-time dimensions to obtain
\begin{equation} M_P^3Z\left[\partial_z^2+\left({Z'\over Z}+4A'\right)\partial_z -e^{-2A}p^2\right]G(p,z)=\delta(z-z_0)- \label{b8}\end{equation}
$$ -\delta(z-z_0)G_b(p)~G(p,z) $$
where $p^2=p^ip^i$ is the (Euclidean) momentum squared. Later on we also use $p=\sqrt{p^2}$. We have also substituted the low-energy expansion of the current two-point function on the brane\footnote{The presence of the $p^2\log p^2$ terms is associated with the logarithmic RG running of the coefficient of the $F^2$ term in four dimensions. It is the first non-analytic term in the two-point function of currents.}
\begin{equation} G_b(p)=M_4^2(p^2+m_0^2)+{\cal O}(p^4) \label{b8a}\end{equation}
and we also added a brane mass $m_0$, in case symmetry breaking on the brane generates one.
One can also add the logarithmic running of the brane coupling constant, due to the brane quantum corrections, in which case equation (\ref{b8a}) is modified to
\begin{equation} G_b(p)=M_4^2\left(p^2+b_0 p^2\log{p^2\over m_{e}^2}+m_0^2\right)+{\cal O}(p^4) \label{b8ab}\end{equation}
To solve (\ref{b8}), we must first solve this equation for $z>z_0$ and for $z<z_0$, obtaining two branches of the bulk propagator, $G_{IR}(p,z)$ and $G_{UV}(p,z)$ respectively. The IR part, $G_{IR}(p,z)$, depends on a single multiplicative integration constant, as the regularity constraints in the interior of the bulk holographic geometry fix the extra integration constant. $G_{UV}(p,z)$ is defined with Neumann boundary conditions at the AdS boundary and depends on two integration constants. In the absence of sources and fluctuations on the SM brane, the propagator is continuous with a discontinuous $z$-derivative at the SM brane\footnote{For Randall-Sundrum branes this condition is replaced by $G_{UV}(p,z-z_0)=G_{IR}(p,z_0-z)$, which identifies the UV side with the IR side. This corresponds to a cutoff holographic QFT in the bulk.}
\begin{equation} G_{UV}(p,z_0;z_0)=G_{IR}(p,z_0;z_0)\;\;\;,\;\;\; \partial_zG_{IR}(p,z_0;z_0)-\partial_zG_{UV}(p,z_0;z_0)={1\over Z~M_P^3} \label{b9}\end{equation}
where $M_P$ is the five-dimensional Planck scale in (\ref{e4}). In this case there is a single multiplicative integration constant left, and the standard AdS/CFT procedure extracts from this solution the two-point function of the global current of the hidden QFT. We denote the bulk gauge field propagator in the absence of the brane as $G_0(p,z;z_0)$, which satisfies
\begin{equation} M_P^3Z\left[\partial_z^2+\left({Z'\over Z}+4A'\right)\partial_z -e^{-2A}p^2\right]G_0(p,z;z_0)=\delta(z-z_0) \label{b10}\end{equation}
In our case the presence of an induced action on the SM brane changes the matching conditions to
\begin{equation} G_{UV}(p,z_0)=G_{IR}(p,z_0) \label{b11}\end{equation}
\begin{equation} \label{b11a} \partial_zG_{IR}(p,z_0)-\partial_zG_{UV}(p,z_0)={1+G_b(p)~G_{IR}(p,z_0)\over Z~M_P^3} \end{equation}
The general solution can be written in terms of the bulk propagator $G_0$ with Neumann boundary conditions at the boundary as follows\footnote{Recall that $G(p,z;z_0)$ and $G_0(p,z;z_0)$ are bulk propagators in coordinate space in the radial/holographic direction $z$ and in Fourier space $p^\mu$ for the remaining directions $x^\mu$.} \cite{self}
\begin{equation} G(p,z;z_0)={G_0(p,z;z_0)\over 1+G_b(p)~G_{0}(p,z_0;z_0)} \label{b12}\end{equation}
The propagator on the brane is obtained by setting $z=z_0$ and becomes
\begin{equation} G(p,z_0;z_0)={1\over {G_0(p,z_0;z_0)}^{-1}+G_b(p)}={G_0(p,z_0;z_0)\over 1+G_b(p)~G_{0}(p,z_0;z_0)} \label{b13}\end{equation}
The general structure of the bulk propagator $G_0$ is derived in appendix \ref{gau12} and parallels that of the scalar bulk propagator, \cite{self}. As in our manipulations in the appendix, we perform a scale transformation in order to bring the induced metric on the brane to $\eta_{\mu\nu}$. Then $G_0$ in (\ref{b13}) is replaced by $\bar G_0$ given in (\ref{i8}) and (\ref{i9a}). The brane current correlator $G_b$ is also computed in the metric $\eta_{\mu\nu}$. We assume that the bulk holographic QFT has a single dynamical scale\footnote{The case where the bulk theory has several such scales can be treated in a similar manner, albeit having more regimes in the energy scale to analyse.}, which we shall denote by $m$.
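As a quick consistency check of \eqref{b12} (with the sign conventions used here), act with the bulk operator of (\ref{b8}): since the denominator in \eqref{b12} is $z$-independent, (\ref{b10}) gives
\begin{equation}
M_P^3Z\left[\partial_z^2+\left({Z'\over Z}+4A'\right)\partial_z -e^{-2A}p^2\right]G(p,z)\,=\,{\delta(z-z_0)\over 1+G_b(p)\,G_{0}(p,z_0;z_0)}\,=\,\delta(z-z_0)\left[1-G_b(p)\,G(p,z_0;z_0)\right]\,,
\end{equation}
which is precisely equation (\ref{b8}).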
Another scale in the problem is the position of the brane, $z_0$. In cases where this is determined dynamically, as in \cite{self}, it is of the same order as $m$. But there can also be cases where it is hierarchically different, \cite{self}. Assuming that $z_0\sim R_0$, we obtain
\begin{equation} \bar G_0(p,z_0;z_0)={1\over 2ZM_P^3} \left\{ \begin{array}{lll} \displaystyle {1\over ~p}, &\phantom{aa} &p\gg m\\ \\ \displaystyle {1\over m}\left[d_0-\left(d_2+d_2'\log\left({p^2\over m^2}\right)\right){p^2\over m^2}+{\cal O}(p^4)\right],&\phantom{aa}& p\ll m. \end{array}\right. \label{b14} \end{equation}
The IR expansion above is valid for all holographic RG flows. The first non-analytic terms appear at order $p^2\log p^2$. The expansion coefficients can be determined either analytically or numerically from the bulk holographic RG flow solution. Analytic formulae for them in terms of the bulk solution were given in \cite{self}. The dimensionless coefficients $d_i$ above are functions of $mz_0$. Their size is typically of order one, unless $z_0$ is very different from $1/m$. The UV expansion in (\ref{b14}) is given, as expected, by the flat-space result. Using (\ref{b14}), we now investigate the interaction induced by the vector on the SM brane from (\ref{b13}). It is known that $\bar G_0(p,z_0;z_0)$ is monotonic as a function of $p$, vanishes at large $p$, and attains its maximum at $p=0$, compatibly with (\ref{b14}). On the other hand, $G_b(p)$, which captures the two-point function of the brane current, diverges at large $p$ as $p^2\log p^2$ and asymptotes to a constant in the IR. Therefore the function $G_b(p)\bar G_0(p,z_0,z_0)$ that appears in the denominator of (\ref{b13}) starts from zero or a constant in the IR, and asymptotes to $+\infty$ in the UV. We denote by $E_t$ the transition scale at which it reaches the value one:
\begin{equation} G_b(E_t)\bar G_0(E_t,z_0,z_0)\equiv 1 \label{b20}\end{equation}
This is the analogue of the DGP scale, \cite{DGP}, for the vector field. We may therefore write
\begin{equation} \bar G(p,z_0;z_0)= \left\{ \begin{array}{lll} \displaystyle \bar G_0(p,z_0;z_0), &\phantom{aa} &p\ll E_t\\ \\ \displaystyle {1\over G_b(p)},&\phantom{aa}& p\gg E_t. \end{array}\right. \label{b21} \end{equation}
When $E_{t}\gg m$ then
\begin{equation} \bar G(p,z_0;z_0)= \left\{ \begin{array}{lll} \displaystyle {1\over 2ZM_P^3m}\left[d_0-\left(d_2+d_2'\log\left({p^2\over m^2}\right)\right){p^2\over m^2}+{\cal O}(p^4)\right], &\phantom{aa} &p\ll m\\ \\ \displaystyle {1\over 2ZM_P^3}{1\over ~p}+{\cal O}(p),&\phantom{aa}& m\ll p\ll E_t\;, \\\\ \displaystyle {1\over G_b}= {1\over M_4^2}{1\over p^2}+{\cal O}(p^4) &\phantom{aa}& p\gg E_t. \end{array}\right. \label{b22} \end{equation}
Up to now we used the five-dimensional definitions for the gauge field, which is dimensionless. Now we pass to four-dimensional QFT language by $A_{\mu}\to {A_{\mu}\over M_P}$ and define
\begin{equation} {g_5^2}={1\over 2ZM_P}\;\;\;,\;\;\; g_4^2={M_P^2\over M_4^2} \label{b23}\end{equation}
where $g_5^2$ has dimension of length and $g_4$ is dimensionless. With these normalizations, the vector interaction on the brane in (\ref{b22}) becomes
\begin{equation} \bar G(p,z_0;z_0)={1\over M_P^2} \left\{ \begin{array}{lll} \displaystyle {g_{IR}^2\over m_{IR}^2+\left(1+{d_2'\over d_2}\log{p^2\over m^2}\right)p^2}+{\cal O}(p^4), &\phantom{aa} &p\ll m\\ \\ \displaystyle {g_5^2\over ~p}+{\cal O}(p),&\phantom{aa}& m\ll p\ll E_t\;, \\\\ \displaystyle {1\over G_b}= {g_4^2\over p^2}+{\cal O}(p^4) &\phantom{aa}& p\gg E_t.
\end{array}\right. \label{b24} \end{equation}
with
\begin{equation} g_{IR}^2\equiv (m g_{5}^2){d_0^2\over d_2} \;\;\;,\;\;\; m_{IR}^2={d_0\over d_2}m^2 \label{b25}\end{equation}
Therefore, at short enough distances, $p\to\infty$, the interaction mediated by the vector is four-dimensional and is controlled by the dimensionless coupling constant $g_4$. At intermediate distances, $m\ll p\ll E_t$, the induced interaction becomes five-dimensional due to the coupling of the KK modes, and the respective five-dimensional coupling constant $g_5^2$ has dimension of length. At large enough distances, $p\ll m$, the interaction is determined by the bulk dynamics and is that of a massive photon with (dimensionless) coupling constant $g_{IR}$ and mass $m_{IR}$. We therefore find that the emergent (dark) photon is always massive (like the graviton in \cite{self}), and its coupling constant is small, $g_{IR}\sim {1\over N}$. More precisely, the emergent photon is a resonance of the associated two-point function, produced by the interplay between the bulk theory and the brane dynamics. Moreover, a similar analysis as in \cite{self} indicates that the ratio ${m_{IR}/ M_{p}}$, where $M_{p}$ is the emergent four-dimensional Planck scale, scales as $N^{-{1\over 3}}$ and can be made arbitrarily small at large enough $N$. Turning on a vector mass $m_0$ on the brane and keeping all IR contributions, the IR parameters in (\ref{b25}) become
\begin{equation} {1\over g_{IR}^2}\simeq {1\over g_4^2}+{d_2\over d_0^2} {1\over mg_5^2} \;\;\;,\;\;\; {m_{IR}^2\over g_{IR}^2}\simeq {m^2\over d_0(mg_5^2)}+{m_0^2\over g_4^2} \label{b26}\end{equation}
The effective IR coupling constant for the vector interactions, $g_{IR}$, receives contributions from the bulk and the brane. The weaker of the two interactions dominates and determines $g_{IR}$. Typically, this can be the bulk interaction, as it behaves as $g_{5}\sim {1\over N}$. At hierarchically large $N$, it will dominate the vector interactions on the brane as well, and the associated coupling can be arbitrarily small. A similar argument indicates that for a generic bulk holographic theory the vector boson mass $m_{IR}$ is determined by the bulk physics and is of order ${\cal O}(m)$. For special bulk theories, like theories where the coupling runs slowly at intermediate scales (walking theories\footnote{Holographic walking theories have been discussed in \cite{Nu}-\cite{KJ}.}), the ratio $d_0/d_2$ can become hierarchically small and the vector mass can be $\ll ~m$. There are other parameter ranges in (\ref{b26}) that offer more phenomenologically interesting windows, but we shall not pursue this analysis here. Depending on the parameters of the bulk and the brane theory, we could have a different ordering of scales, i.e.~$E_t\ll m$. In such a case, there is no intermediate five-dimensional regime for the vector-mediated interaction. We conclude this section by observing that when the hidden theory is holographic, the setup generates an emergent dark photon coupled to the visible theory. It is always massive, but both its coupling and its mass can be made arbitrarily small by taking $N$ to be sufficiently large.
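As a consistency check of (\ref{b26}), one may combine the exact relation $\bar G^{-1}=\bar G_0^{-1}+G_b$ from (\ref{b13}) with the IR expansions (\ref{b14}) and (\ref{b8a}), dropping the logarithms for brevity and using $1/(2ZM_P^3)=g_5^2/M_P^2$ from (\ref{b23}):
\begin{equation}
\bar G^{-1}(p)\,\simeq\,{M_P^2\, m\over g_5^2\, d_0}\left(1+{d_2\over d_0}{p^2\over m^2}\right)+{M_P^2\over g_4^2}\left(p^2+m_0^2\right)\,=\,{M_P^2\over g_{IR}^2}\left(p^2+m_{IR}^2\right),
\end{equation}
with $1/g_{IR}^2$ and $m_{IR}^2/g_{IR}^2$ given exactly by (\ref{b26}).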
\section{A non-local emergent gauge theory}\label{singleNL}
In section \ref{effe} we have analyzed the effective action of a global current in the presence of charged sources and have shown how this can describe an emergent ``photon'' that is massive because of the presence of the charged sources. In the presence of a mass gap, that theory admits a local IR expansion. In this section, we analyze a similar effective action in the absence of charged sources\footnote{The treatment of the effective action in theories with massless degrees of freedom has pitfalls that are catalogued on page 7 of \cite{IS}. Although several issues mentioned there are not problems here, one should keep them always in mind.}. We start with the simplest setup possible. Consider a theory with a global U(1) symmetry, and an associated conserved current, $\partial_{\mu}J^{\mu}=0$. The variation of the action of the theory under a space-time dependent U(1) parameter $\epsilon(x)$ is
\begin{equation} \delta S=\int d^4 x \, \partial_{\mu}\epsilon \, J^{\mu} \label{C1}\end{equation}
Integrating by parts and demanding invariance when $\epsilon={\rm constant}$, \textit{i.e.}~a global transformation, gives the conservation of the current advertised before. Consider now coupling the current to a background gauge field source and improving this so that the coupling is fully (locally) invariant
\begin{equation} S\to S(A)=S+\int d^4 x \, J^{\mu}A_{\mu}+\cdots \, . \label{A2}\end{equation}
where the ellipsis denotes subleading terms that may be important for gauge invariance, namely $S(A+d\epsilon)=S(A)$ provided all charged fields are appropriately transformed. We now consider the Schwinger functional
\begin{equation} e^{-W(A)}\equiv \int {\cal D}\phi~e^{-S(\phi,A)} \label{A3}\end{equation}
where $\phi$ denotes collectively the quantum fields of the theory. The functional $W(A)$ is locally gauge invariant. This is also equivalent to the standard Ward identity
\begin{equation} \partial_{\mu}{\delta W\over \delta A_{\mu}}=0 \label{A4}\end{equation}
We now Legendre-transform
\begin{equation} \Gamma(V, {\bf A})=\int d^4 x\,V^{\mu} (A_{\mu} - {\bf A}_\mu) -W(A) \label{A6}\end{equation}
by defining the current vev on the background ${\bf A}_\mu$ as usual
\begin{equation} V_{\mu}\equiv {\delta W\over \delta A_{\mu}} \Big|_{A_\mu = {\bf A}_\mu} \label{A5}\end{equation}
To vary $\Gamma(V, {\bf A})$ with respect to $V$ we must be careful, as the $V_{\mu}$ are not independent variables but satisfy the constraint $\partial^{\mu}V_{\mu}=0$. In the presence of the charged sources analysed in section \ref{effe}, the inversion procedure was local and straightforward. In the present case we introduce a Lagrange multiplier function $\varphi$ and consider the modified effective action
\begin{equation} \Gamma_{\varphi}(V, {\bf A})=\Gamma(V, {\bf A})-\int d^4 x ~\varphi\partial_{\mu}V^{\mu}= \Gamma(V, {\bf A})+\int d^4 x ~\partial_{\mu}\varphi V^{\mu} \, . \label{A7}\end{equation}
A consistency check of the inversion procedure is that the degree of freedom corresponding to the Lagrange multiplier should decouple and that the constraint $\partial^{\mu}V_{\mu}=0$ will be automatically satisfied in the effective action for $V_\mu$ that we derive. We now have to vary $\Gamma_{\varphi}(V, {\bf A})$ both with respect to $V_{\mu}$ and $\varphi$.
We find
\begin{equation} {\delta \Gamma \over \delta V_{\mu}}= A^{\mu} - {\bf A}^\mu +V^{\nu}{\delta A_{\nu}\over \delta V_{\mu}}-{\delta W\over \delta V_{\mu}} \label{A8}\end{equation}
Using
\begin{equation} {\delta W\over \delta V_{\mu}}={\delta W\over \delta A^{\nu}}{\delta A_{\nu}\over \delta V_{\mu}}=V^{\nu}{\delta A_{\nu}\over \delta V_{\mu}} \label{A9}\end{equation}
and substituting above we obtain
\begin{equation} {\delta \Gamma_{\varphi} \over \delta V_{\mu}}=A^{\mu}- {\bf A}^\mu+\partial^{\mu}\varphi\;\;\;,\;\;\; {\delta \Gamma_{\varphi} \over \delta \varphi}=\partial^{\mu}V_{\mu}=0 \label{A10}\end{equation}
These are the two equations that fully describe the dynamics of the current vev, $V_{\mu}$. The Lagrange multiplier has become a gauge parameter. The original theory defined on the background $A_{\mu}= {\bf A}_\mu$ is also equivalent to the one on the gauge-transformed background $A_{\mu}={\bf A}_\mu - \partial_{\mu}\epsilon$, because in this case $\epsilon$ couples to the redundant operator $\partial^\mu V_\mu$. Consider now a low-energy expansion for the functional $W(A)$,
\begin{equation} W=W_0+\int d^4 x \left[{W_1\over 4}~F^2+{W_3\over 8}(F^2)^2+\right. \label{A11}\end{equation}
$$ \left.+{W_4\over 8}F_{\mu\nu}F^{\nu\rho}F_{\rho\sigma}F^{\sigma\mu}+{W_5\over 4}F_{\mu\nu}\square F^{\mu\nu}+{\cal O}(\partial^6)\right] $$
with $W_i$ constants\footnote{More generally, $W_i$ may depend on neutral sources.}. Using the definition \eqref{A5} we obtain
\begin{equation} V_{\nu}=-W_1\partial^{\mu}F_{\mu\nu}-W_3\partial^{\mu}(F^2 F_{\mu\nu})-W_4\partial^{\mu}(F^3)_{\nu\mu}-W_5\square \partial^{\mu}F_{\mu\nu} +{\cal O}(\partial^6) \label{A12}\end{equation}
where\footnote{The seemingly independent term $ (\partial^{\rho}F_{\rho\mu})(\partial^{\sigma}{F_{\sigma}}^{\mu})$ is, upon integration by parts and use of the Bianchi identity, proportional to $F_{\mu\nu}\square F^{\mu\nu}$, and is therefore not included separately.}
\begin{equation} F^2_{\mu\nu}\equiv F_{\mu\rho} {F^{\rho}}_{\nu}\;\;\;,\;\;\; F^{3}_{\mu\nu}= F_{\mu\rho} F^{\rho\sigma}F_{\sigma\nu} \label{A13}\end{equation}
We have the following identities
\begin{equation} \partial^{\mu}\partial^{\nu}F_{\mu\nu}=0\;\;\;,\;\;\; \partial^{\mu}\partial^{\nu}(F^2 F_{\mu\nu})=0\;\;\;,\;\;\; \partial^{\mu}\partial^{\nu}F^3_{\mu\nu}=0 \label{A14}\end{equation}
Notice that $V_{\mu}$ defined in \eqref{A12} automatically satisfies $\partial^{\mu}V_{\mu}=0$. From (\ref{A12}) we can calculate the two-point function of the currents as
\begin{equation} \langle J^{\mu}(x)J^{\nu}(y)\rangle={\delta V^{\mu}(x)\over \delta A_{\nu}(y)}\Big|_{A=0}=-W_1\left[\eta^{\mu\nu}\square -\partial^{\mu}\partial^{\nu}\right]\delta^{(4)}(x-y)- \label{A17}\end{equation}
$$ -W_5\square\left[\eta^{\mu\nu}\square -\partial^{\mu}\partial^{\nu}\right]\delta^{(4)}(x-y)+{\cal O}(\partial^6) $$
It is clear that, in the derivative expansion, the most general two-point function of conserved currents consists only of contact terms and is of the form
\begin{equation} \langle J^{\mu}(x)J^{\nu}(y)\rangle=f(\square)\left[\eta^{\mu\nu}\square -\partial^{\mu}\partial^{\nu}\right]\delta^{(4)}(x-y) \label{A18}\end{equation}
where $f(x)$ is an arbitrary function with a regular expansion around $x=0$. Note also that the two-point function of a conserved current does not have an inverse, as it is annihilated by $\partial_{\mu}$; this is a priori why the quadratic terms in the effective action (that are typically given by the inverse two-point function) are ill-defined.
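In momentum space (with $\square\to -k^2$), the expansion \eqref{A18} reads
\begin{equation}
\langle J^{\mu}(k)J^{\nu}(-k)\rangle \,=\, f(-k^2)\left( k^{\mu}k^{\nu}-k^2\,\eta^{\mu\nu}\right) ,
\end{equation}
which is purely transverse. In the language of section~\ref{JJinteractionlinearised}, this corresponds to ${\cal A}=0$ with an analytic ${\cal B}$, which is precisely the situation in which the inversion was found to be problematic.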
We can also calculate
\begin{equation} {\cal F}_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}=-W_1~\square F_{\mu\nu}-W_5~\square^2 F_{\mu\nu}- \label{A15}\end{equation}
$$ -W_3\left(F^2\square F_{\mu\nu}+(\partial^{\rho}F^2)\partial_{\rho}F_{\mu\nu}+\partial_{\mu}F^2\partial^{\rho}F_{\rho\nu}- \partial_{\nu}F^2\partial^{\rho}F_{\rho\mu}+\partial_{\mu}\partial^{\rho}F^2 F_{\rho\nu}-\partial_{\nu}\partial^{\rho}F^2 F_{\rho\mu}\right)+ $$
$$ +W_4\left(\partial_{\mu}\partial^{\rho}(F^3_{\rho\nu})-\partial_{\nu}\partial^{\rho}(F^3_{\rho\mu})\right)+{\cal O}(\partial^6) $$
where we have also used the Bianchi identity. This can be inverted to
\begin{equation} F_{\mu\nu}=-{1\over W_1}\square^{-1}{\cal F}_{\mu\nu}+{W_5\over W_1^2}{\cal F}_{\mu\nu}+{\cal O}(\partial^4) \label{A19}\end{equation}
Notice that, as anticipated, the inversion is non-local. We define the action of the inverse Laplacian as
\begin{equation} \square^{-1}_x f(x^{\mu})\equiv \int d^4x'~G(x;x')f(x') \label{A20}\end{equation}
where $G(x,x')$ is the appropriate Green's function of the Laplacian, $\square_x G=\delta(x-x')$. With this definition, ``integration by parts'' works trivially on $\square^{-1}$. Of course there are boundary conditions implicit in the Green's function that should be correlated with the absence of zero modes. Here, we shall be cavalier about these issues. Using the previous result we can now compute
\begin{equation} \Gamma_{\varphi}=\int d^4x\,V_{\mu} (A^{\mu} - {\bf A}^\mu)-W+\int d^4 x ~\partial_{\mu}\varphi V^{\mu}= \label{A16}\end{equation}
$$=-W_0+{W_1\over 4}\int d^4 x F^2+ {W_5\over 4}\int d^4 x ~F\square F+ \int d^4 x ~(\partial_{\mu}\varphi - {\bf A}_\mu) V^{\mu}+{\cal O}(\partial^6)= $$
$$ = -W_0+{1\over 4W_1}\int d^4 x \left[{\cal F}_{\mu\nu}\square^{-2}{\cal F}^{\mu\nu}-{W_5\over W_1}{\cal F}_{\mu\nu}\square^{-1}{\cal F}^{\mu\nu}\right]+\int d^4 x ~(\partial_{\mu}\varphi - {\bf A}_\mu) V^{\mu}+{\cal O}({\cal F}^3) $$
where we kept only terms quadratic in the field strength for simplicity. We now vary this effective action to verify that we obtain (\ref{A10})
\begin{equation} -{1\over W_1}\square^{-2}\partial^{\mu}{\cal F}_{\mu\nu}+{W_5\over W_1^2}\square^{-1}\partial^{\mu}{\cal F}_{\mu\nu}+\partial_{\nu}\varphi+{\cal O}(\partial^4)= {\bf A}_\nu \;\;\;,\;\;\; \partial_{\mu}V^{\mu}=0 \label{aa10} \end{equation}
Notice that the previous equation automatically implies
\begin{equation} \Box\,\varphi=\partial^\nu {\bf A}_\nu\;. \end{equation}
There is a ``massless'' degree of freedom that decouples and can be subsumed in the background ${\bf A}_\nu$. Using (\ref{A15}), (\ref{aa10}) can be translated to
\begin{equation} \square^{-1}\partial^{\mu}F_{\mu\nu}+\partial_{\nu}\varphi - {\bf A}_\nu +{\cal O}(\partial^4)= A_{\nu}- {\bf A}_\nu+\partial_{\nu}\left[\varphi-\square^{-1}\partial^{\mu}A_{\mu}\right]+{\cal O}(\partial^4)=0 \label{A21}\end{equation}
Equation (\ref{aa10}) implies that, modulo the subtleties of defining $\square^{-1}$, the effective equation is gauge-invariant under $V_{\mu}$ gauge transformations. On the other hand, it is non-local. The invariance of this action can be shown beyond the derivative expansion. Our assumptions imply that, since there are no minimally-charged fields, the dependence of $W(A_{\mu})$ on $A_{\mu}$ is via $F_{\mu\nu}$ only, and this includes all possible multipole couplings. We parametrize
\begin{equation} W(A_{\mu})=\int d^4x~{\cal L}(F_{\mu\nu}) \label{A22}\end{equation}
where we dropped the dependence on other fields.
Then the current is given by
\begin{equation} V_{\mu}={\delta W\over \delta A^{\mu}}=-4\partial^{\nu}{\delta {\cal L}\over \delta F^{\nu\mu}}\;\;\;,\;\;\; \partial^{\mu}V_{\mu}\equiv 0 \label{def1}\end{equation}
The right-hand side is a functional of the field strength of $A_{\mu}$ only, which implies upon inversion that $F_{\mu\nu}$ is a non-local functional of ${\cal F}_{\mu\nu}$, proving the emergent gauge invariance. In the following, we propose a different set of dynamical variables that renders the effective description quasi-local. We start again with the functional $\Gamma[V]$ defined in \eqref{A6}
\begin{equation} \Gamma(V, {\bf A})=\int d^4 x\,V^{\mu}(A_{\mu}- {\bf A}_\mu) -W(A) \label{A23}\end{equation}
and we change variables to a two-form, $B_{\mu\nu}$
\begin{equation} V_{\mu}={1\over 2}\epsilon_{\mu\nu\rho\sigma}\partial^{\nu}B^{\rho\sigma}={1\over 3!}\epsilon_{\mu\nu\rho\sigma}H^{\nu\rho\sigma}\;\;\;,\;\;\; H_{\mu\nu\rho}\equiv \partial_{\mu}B_{\nu\rho}+\partial_{\nu}B_{\rho\mu}+\partial_{\rho}B_{\mu\nu} \label{A244}\end{equation}
so that the constraint
\begin{equation} \partial^{\mu}V_{\mu}=0 \label{A255}\end{equation}
is obeyed identically. This dual substitution solves the constraint explicitly, but introduces a dual gauge invariance with an unconstrained gauge parameter $\Lambda_{\mu}$
\begin{equation} B'_{\mu\nu}=B_{\mu\nu}+\partial_{\mu}\Lambda_{\nu}-\partial_{\nu}\Lambda_{\mu} \label{A266}\end{equation}
We compute
\begin{equation} {\cal F}_{\mu\nu}={1\over 3!}(\epsilon_{\nu\rho\sigma\theta}\partial_{\mu}H^{\rho\sigma\theta}-\epsilon_{\mu\rho\sigma\theta}\partial_{\nu}H^{\rho\sigma\theta}) \label{A277}\end{equation}
or equivalently, in form notation,
\begin{equation} {\cal F}=d\,{}^*dB \label{A288}\end{equation}
We also have, consistently with (\ref{A377}) below,
\begin{equation} (^*{\cal F})_{\sigma\theta}=-\partial^{\mu}H_{\mu\sigma\theta} \label{29}\end{equation}
Integrating by parts, we can use the identity
\begin{equation} \int d^4 x ~{\cal F}_{\mu\nu}{\cal F}^{\mu\nu}={1\over 3}\int d^4x~H_{\mu\nu\rho}\square H^{\mu\nu\rho} \label{30}\end{equation}
The action (\ref{A16}) can therefore be written as
\begin{equation} \Gamma[V, {\bf A}]=\int d^4x\left[ -W_0 - {1\over 6}\epsilon_{\mu\nu\rho\sigma}H^{\nu\rho\sigma} {\bf A}^\mu +{1\over 12W_1}\left[H_{\mu\nu\rho}\square^{-1} H^{\mu\nu\rho}-{W_5\over W_1}H_{\mu\nu\rho}H^{\mu\nu\rho}\right]+{\cal O}(\partial^4)\right] \label{A311}\end{equation}
The coupling to the background term can also be written as the standard anomaly term
\begin{equation} {1\over 6}\int \epsilon_{\mu\nu\rho\sigma}H^{\nu\rho\sigma} {\bf A}^\mu= {1\over 6}\int \epsilon_{\mu\nu\rho\sigma}B^{\rho\sigma} F_{\mu\nu}({\bf A}) \label{A322}\end{equation}
From now on we set the background field ${\bf A}_\mu$ to zero, since it only acts as an external source for the dynamics of the emergent degree of freedom described by $B_{\mu \nu}$.
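The identity \eqref{30} follows in two elementary steps (a sketch, with the conventions above): integration by parts together with the identity $\partial^{\mu}V_{\mu}=0$ gives
\begin{equation}
\int d^4 x ~{\cal F}_{\mu\nu}{\cal F}^{\mu\nu}\,=\,-2\int d^4x~ V_{\mu}\square V^{\mu}\;,
\end{equation}
while the contraction $\epsilon_{\mu\nu\rho\sigma}\epsilon^{\mu\alpha\beta\gamma}=-\delta^{\alpha\beta\gamma}_{\nu\rho\sigma}$, appropriate to Lorentzian signature, gives $V_{\mu}\square V^{\mu}=-{1\over 6}H_{\mu\nu\rho}\square H^{\mu\nu\rho}$; combining the two reproduces \eqref{30}.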
To compute variations we need
\begin{equation} \delta{\cal F}_{\mu\nu}={1\over 2}(\epsilon_{\nu\rho\sigma\theta}\partial_{\mu}\partial^{\rho}\delta B^{\sigma\theta}-\epsilon_{\mu\rho\sigma\theta}\partial_{\nu}\partial^{\rho}\delta B^{\sigma\theta}) \label{A333}\end{equation}
Therefore, after some integrations by parts, we obtain
\begin{equation} \delta\Gamma[V, {\bf A}=0]=\int d^4x\left[{1\over 2W_1}\left[\delta{\cal F}_{\mu\nu}\square^{-2}{\cal F}^{\mu\nu}-{W_5\over W_1}\delta{\cal F}_{\mu\nu}\square^{-1}{\cal F}^{\mu\nu}\right]+{\cal O}(\partial^4)\right]= \label{A344}\end{equation}
$$ =\epsilon_{\nu\rho\sigma\theta}\int d^4x~{\delta B^{\sigma\theta}\over 2W_1}\left[ \partial^{\rho}\left(\square^{-2}\partial_{\mu}{\cal F}^{\mu\nu}-{W_5\over W_1}\square^{-1}\partial_{\mu}{\cal F}^{\mu\nu}\right)+{\cal O}(\partial^4)\right] $$
We now use the Bianchi identity for ${\cal F}$
\begin{equation} \epsilon_{\nu\rho\sigma\theta}\partial^{\rho}\partial_{\mu}{\cal F}^{\mu\nu}=\epsilon_{\nu\rho\sigma\theta}\square {\cal F}^{\rho\nu} \label{A355}\end{equation}
to rewrite
\begin{equation} \delta\Gamma[V, {\bf A}=0]=\epsilon_{\nu\rho\sigma\theta}\int d^4x~\left[{\delta B^{\sigma\theta}\over 2W_1}\left( \square^{-1}{\cal F}^{\nu\rho}-{W_5\over W_1}{\cal F}^{\nu\rho}\right)+{\cal O}(\partial^4)\right] \label{A366}\end{equation}
We also have
\begin{equation} \epsilon_{\nu\rho\sigma\theta}{\cal F}^{\nu\rho}=-2\partial_{\mu}H^{\mu\sigma\theta} \label{A377}\end{equation}
so that the equations of motion are
\begin{equation} \left[\square^{-1}-{W_5\over W_1}\right]\partial_{\mu}H^{\mu\sigma\theta}+{\cal O}(\partial^2)=0 \label{A388}\end{equation}
and they appear to be non-local. Nevertheless, if we fix the analogue of the ``Lorentz gauge'', $\partial^{\mu}B_{\mu\nu}=0$, then the equation above becomes local,
\begin{equation} \left[\square-{W_1\over W_5}\right]B^{\sigma\theta}+{\cal O}(\partial^4)=0 \label{A399}\end{equation}
and is indeed a free field equation for a massive two-form. This gives three propagating degrees of freedom, which is the correct number carried by a conserved vector $V_{\mu}$ satisfying $\partial^\mu V_\mu=0$. In the Lorentz gauge, equation (\ref{A377}) can be written as
\begin{equation} \square^{-1}{\cal F}=-{1\over 3!}~^* B \label{Bia}\end{equation}
which indicates that in this gauge the theory is local. Indeed, the Bianchi identity for ${\cal F}$ is equivalent, via (\ref{Bia}), to the Lorentz gauge condition for $B$. In conclusion, this theory is an interacting theory of a massive two-form in four dimensions. The two-form theory, including the coupling to the external source in (\ref{A322}), is reminiscent of the tensor theory in \cite{QT} that was inspired by the ideas in \cite{JT}.
\subsection{A different non-local case}
In this section, we consider a separate case. More specifically, we assume the presence of a single global U(1) symmetry within the hidden $\h{QFT}$. By contrast, the visible theory has no U(1) global symmetry of its own, and will inherit one from the hidden sector through the procedure we explain in what follows. Moreover, we assume the absence of any charged interaction between the visible QFT and the hidden $\h{QFT}$. As a consequence of that choice, the emergent theory will become non-local and will have to be treated in a different way. To proceed, we let the two theories interact via a coupling involving a set of uncharged fields.
In such a way the original global U(1) symmetry of the hidden sector is preserved, and the symmetry pattern can be summarized as:
\begin{equation} \h{U(1)}\,\times\,\bullet \quad \rightarrow \quad U(1) \label{A400}\end{equation}
where $\bullet$ stands for ``no global symmetry''. In particular, no operator of the visible theory is charged under the emergent vector field, and quantum corrections coupling the two theories can only generate multipole couplings between the emergent vector field and the visible fields. In order to describe such a setup we assume the Schwinger functional to take the form
\begin{equation} W\left(A^\mu,\chi^I\right)=\int d^4x\Big[Y^{(0)}(\chi^I)-\frac{Y^{(1)}(\chi^I)}{4}F^2\,\,+\,\frac{Y^{(2)}_{IJ}}{2}\,\partial_\mu \chi^I\,\partial^\mu \chi^J\,+\,\mathcal{L}_{int}+{\cal O}(\partial^4)\,\Big]\label{act2} \end{equation}
where the $\chi^I$ are a collection of neutral scalar fields and the leading interaction term takes the ``dipole'' form:
\begin{equation} \mathcal{L}_{int}\,=\,\frac{Y^{(int)}_{IJ}(\chi^I)}{2}\,F^{\mu\nu}\,\partial_{[\mu} \chi^I\,\partial_{\nu]} \chi^J \label{A43}\end{equation}
The ${\cal O}(\partial^4)$ stands for higher-derivative terms, namely terms with more than three derivatives. From the low-energy functional \eqref{act2}, the emergent vector field defined via the relation
\begin{equation} \mathcal{V}^\mu\,\equiv\,\frac{\delta W(A,\chi^i)}{\delta A_\mu}\Big|_{A= {\bf A}}\;, \label{A44}\end{equation}
takes the form
\begin{equation} \mathcal{V}^\mu\,=\,\partial_\nu \left(Y^{(1)}\,F^{\nu\mu}\,-\,Y^{(int)}_{IJ}\,\partial^{[\nu} \chi^I\,\partial^{\mu]} \chi^J\right)+{\cal O}(\partial^4) \label{j2} \end{equation}
where the brackets indicate the antisymmetrized part. In the absence of minimally charged sources, the constraint $\partial_\mu \mathcal{V}^\mu=0$ is trivially satisfied. If we now try to invert the previous expression as
\begin{equation} A^\mu\,=\,A^\mu(\mathcal{V},\chi^I)\;, \label{A45}\end{equation}
the same issue as before appears. In particular, it is clear that, because of the structure of \eqref{j2}, the inversion cannot be performed in a local way. As a consequence, the effective action for the emergent vector field $\mathcal{V}^\mu$ has a non-local formulation. This example is the non-linear manifestation of the same problem we encountered in the single-theory setup in the absence of charged operators. The first step is to construct the field strength of the vector field $\mathcal{V}^\mu$ as
\begin{equation} \mathcal{F}^{\mu\nu}\,\equiv\,\partial^\mu \mathcal{V}^\nu\,-\,\partial^\nu \mathcal{V}^\mu\label{F1} \end{equation}
From equation \eqref{j2} we can compute such a quantity and we obtain
\begin{equation} \mathcal{F}^{\mu\nu}\,=\,\Box\,\left(\bar{F}^{\mu\nu}\,-\,\mathcal{K}^{\mu\nu}\right)\,+{\cal O}(\partial^5)\label{NL1} \end{equation}
where we have defined the antisymmetric two-form
\begin{equation} \mathcal{K}^{\mu\nu}\,\equiv\,2\,Y^{(int)}_{IJ}\,\partial^{[\mu}\chi^I \partial^{\nu]}\chi^J\label{twoF} \end{equation}
and
\begin{equation} \bar{F}^{\mu\nu}\,\equiv\,2\,Y^{(1)}\,F^{\mu\nu} \label{def}\end{equation}
The previous equation can be inverted into the non-local form
\begin{equation} \bar{F}^{\mu\nu}\,=\,\Box^{-1}\left(\mathcal{F}^{\mu\nu}\right)\,+\,\mathcal{K}^{\mu\nu}+{\cal O}(\partial^3)\;. \label{A46}\end{equation}
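Note that for constant $Y^{(int)}_{IJ}$ (an assumption also adopted below) the two-form $\mathcal{K}$ is exact,
\begin{equation}
\mathcal{K}\,=\,d\left(Y^{(int)}_{IJ}\,\chi^I\, d\chi^J\right)\qquad\Longrightarrow\qquad \partial_{[\rho}\mathcal{K}_{\mu\nu]}\,=\,0\;,
\end{equation}
so it obeys the same Bianchi identity as $\bar F$. It is this property that allows the field strength of \eqref{j2} to collapse to the compact form \eqref{NL1}; for field-dependent $Y^{(int)}_{IJ}$, the extra terms are of higher order in derivatives.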
\label{A46}\end{equation} We can then compute the gauge-invariant density \begin{equation} \frac {1}{4 \,{Y^{(1)}}^2}\,\bar{F}_{\mu\nu}\bar{F}^{\mu\nu}\,=\,\frac{1}{4 \,{Y^{(1)}}^2} \,\Box^{-1}\, \mathcal{F}_{\mu\nu}\,\Box^{-1}\,\mathcal{F}^{\mu\nu}\,+\,\frac{1}{4 \,{Y^{(1)}}^2}\,\mathcal{K}_{\mu\nu}\,\mathcal{K}^{\mu\nu} \, + \label{A47}\end{equation} $$ +\, \frac{1}{2 \,{Y^{(1)}}^2}\,\mathcal{K}_{\mu\nu}\,\Box^{-1}\,\mathcal{F}^{\mu\nu}+{\cal O}(\partial^5) $$ in terms of the field strength $\mathcal{F}$ of the emergent vector field defined in \eqref{F1} and the antisymmetric two-form $\mathcal{K}$ \eqref{twoF}. Furthermore, the interacting part becomes \begin{equation} \mathcal{L}_{int}\,=\,\mathcal{K}_{\mu\nu}\,\frac{\bar{F}^{\mu\nu}}{8 \,Y^{(1)}}\,=\,\frac{1}{8 \,Y^{(1)}}\,\mathcal{K}_{\mu\nu}\,\Box^{-1}\mathcal{F}^{\mu\nu}\,+\,\frac{\mathcal{K}^2}{8 \,Y^{(1)}}+{\cal O}(\partial^5) \label{A48}\end{equation} where we defined $\mathcal{K}^2\equiv \mathcal{K}_{\mu\nu}\mathcal{K}^{\mu\nu}$. It is then easy to see that the Schwinger functional is rewritten in the new variables as \begin{equation} W = \int d^4x\Big[Y^{(0)} \, -\frac{1}{16 Y^{(1)}} \,\Box^{-1}\mathcal{F}_{\mu\nu}\,\Box^{-1}\,\mathcal{F}^{\mu\nu}\, + \frac{1}{16 Y^{(1)}} \mathcal{K}^2 \, + \frac{Y_{IJ}^{(2)}}{2} \partial_\mu \chi^I \partial^\mu \chi^J \, +{\cal O}(\partial^5)\,\Big] \label{A49}\end{equation} and the effective action via the Legendre transform becomes \begin{equation} \Gamma(\mathcal{V}_\mu,\chi_I,\mathbf{A}_\mu) =\int d^4 x\left[-\mathcal{V}^{\mu}(A_{\mu}-\mathbf{A}_\mu) \right] \, + \, W \, \label{A50}\end{equation} $$ = \int d^4 x~\frac{1}{8 Y^{(1)}} \left[ \Box^{-1}\mathcal{F}_{\mu\nu}\,\Box^{-1}\,\mathcal{F}^{\mu\nu}+\mathcal{F}_{\mu\nu}\,\Box^{-1}\mathcal{K}^{\mu\nu}\right]\,+\mathcal{V}^{\mu}\mathbf{A}_\mu+ \, W $$ $$ =\int d^4x\Big[Y^{(0)} \, +\frac{1}{16 Y^{(1)}}\left( \Box^{-1}\mathcal{F}_{\mu\nu}+ \mathcal{K}_{\mu\nu}\right)\left(\Box^{-1}\mathcal{F}^{\mu\nu} +\mathcal{K}^{\mu\nu}\right) + \frac{Y_{IJ}^{(2)}}{2} \partial_\mu \chi^I \partial^\mu \chi^J \, +{\cal O}(\partial^5)\,\Big] $$ We observe that, as in the previous section, and unlike the case where there are minimally charged fields, here the action for $\mathcal{V}_{\mu}$ is invariant under \begin{equation} \mathcal{V}_{\mu}\to \mathcal{V}_{\mu}+\partial_{\mu}\epsilon \label{A51}\end{equation} and this invariance can be shown to exist to all orders. To summarize, we obtained a gauge-invariant effective description in terms of the emergent gauge field $\mathcal{V}^\mu$ which appears to be non-local due to the presence of the inverse Laplacian in the effective action.
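As a quick check of the invariance \eqref{A51} at the level of the field strength: since mixed partial derivatives commute, $\mathcal{F}_{\mu\nu}$ is blind to the shift $\mathcal{V}_\mu\to\mathcal{V}_\mu+\partial_\mu\epsilon$. A minimal symbolic verification (our own illustrative sketch, not part of the original derivation):

```python
import sympy as sp

# Two coordinates suffice to test one component of F_{mu nu}:
# F_{01} = d_0 V_1 - d_1 V_0 must be unchanged under V -> V + d(eps).
t, x = sp.symbols('t x')
coords = (t, x)
V = [sp.Function(f'V{i}')(t, x) for i in range(2)]
eps = sp.Function('eps')(t, x)

def field_strength(A, mu, nu):
    return sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])

V_shifted = [V[i] + sp.diff(eps, coords[i]) for i in range(2)]
print(sp.simplify(field_strength(V_shifted, 0, 1) - field_strength(V, 0, 1)))  # -> 0
```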
The non-local structure we obtain is analogous to what we already found in the single-theory case in section \ref{singleNL}, and one way to proceed is to follow the same method presented there.\\ To exhibit the structure of the next-order terms, however, we must supplement the effective functional \eqref{act2} with the leading higher-derivative corrections, namely the terms containing four derivatives, \begin{equation} W_4(A^\mu,\chi^I)=\int d^4x\,\Big(\frac{Y^{(3)}}{8}(F^2)^2\,+\,\frac{Y^{(4)}}{8}F_{\mu\nu}F^{\nu\rho}F_{\rho\sigma}F^{\sigma \mu}\,+\,\frac{Y^{(5)}}{4}F_{\mu\nu}\Box F^{\mu\nu}\,+ \label{higher1}\end{equation} $$ +\,\frac{Y^{(6)}_{IJKL}}{4}\,\partial_\mu \chi^I \partial^\mu \chi^J\,\partial_\nu \chi^K \partial^\nu \chi^L\,+ \frac{Y^{(8)}_{IJKL}}{4} \chi^I \Box \chi^J\, \chi^K \Box \chi^L\,+ \frac{Y^{(10)}_{IJKL}}{4} \chi^I \Box \chi^J\, \partial_\nu \chi^K \partial^\nu \chi^L\,+\mathcal{O}(\partial^5)\Big) $$ along with \begin{equation} \mathcal{L}_{int}^{4}(A^\mu,\chi^I)=\int d^4x\,\left(\frac{Y_{IJ}^{(7)}}{4}F^2\,\partial_\mu \chi^I \partial^\mu \chi^J\,+\frac{Y_{IJ}^{(9)}}{4}F^{\mu\rho}\,\partial_\mu \chi^I F_{\rho\nu} \partial^\nu \chi^J\,+\,\mathcal{O}(\partial^5)\right)\label{higher2} \end{equation} For simplicity, we assume in the following that all the $Y^{(n)}$ couplings are constant, independent of the neutral fields $\chi^I$. Furthermore, from \eqref{higher1}, \eqref{higher2} we keep only the corrections which are at most quadratic in the field strength $F^{\mu\nu}$ and in the derivatives of the neutral scalars $\partial \chi$, neglecting the terms proportional to $Y^{(3)},Y^{(4)},Y^{(6)},Y^{(8)},Y^{(9)},Y^{(10)}$. Such a simplified setup will be enough for the scope of this section, which is to demonstrate the propagating degrees of freedom.
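As a reference point for that counting, the standard tally for a free massive two-form in four dimensions (a textbook count, independent of the interactions considered here) goes as follows: the antisymmetric $\mathcal{B}_{\mu\nu}$ has $6$ components, while the Lorentz-type condition $\partial^\mu \mathcal{B}_{\mu\nu}=0$ imposes $4$ constraints obeying the single identity $\partial^\nu\partial^\mu \mathcal{B}_{\mu\nu}\equiv 0$, i.e. only $3$ independent ones, $$ 6-(4-1)=3\;, $$ matching the three propagating degrees of freedom of a conserved vector $V_\mu$ quoted in the previous subsection.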
Under the previous assumptions we have \begin{equation} \label{W4} W(A^\mu,\chi^I)=\int d^4x\,\Big(Y^{(0)}(\chi^I)-\frac{Y^{(1)}(\chi^I)}{4}F^2\,\,+\,\frac{Y^{(2)}_{IJ}}{2}\,\partial_\mu \chi^I\,\partial^\mu \chi^J\,+\,\frac{Y^{(5)}}{4}F_{\mu\nu}\Box F^{\mu\nu}\,+\,\mathcal{L}_{int}\Big) \end{equation} where \begin{equation} \mathcal{L}_{int} = \frac{\mathcal{K}_{\mu\nu}F^{\mu\nu}}{4}+\frac{Y_{IJ}^{(7)}}{4}F^2\,\partial_\mu \chi^I \partial^\mu \chi^J \label{A53}\end{equation} The higher-derivative corrections we have introduced add new contributions to the definition of the induced vector \eqref{j2}; in particular \begin{equation} \mathcal{V}^\mu\,=\,\partial_\nu \left(Y^{(1)}\,F^{\nu\mu}\,-\,Y^{(int)}_{IJ}\,\partial^{[\nu} \chi^I\,\partial^{\mu]} \chi^J\right)\,+\,\partial_\nu\left(Y^{(5)}\Box F^{\mu\nu}\right)+\partial_\nu\left(Y_{IJ}^{(7)}\left(F^{\mu\nu}\partial_\rho \chi^I \partial^\rho \chi^J\right)\right)+\mathcal{O}(\partial^5)\label{gg} \end{equation} Notice that from the definition \eqref{gg} we can immediately derive the identity \begin{equation} \partial_\mu \mathcal{V}^\mu\,=\,0\, \label{A54}\end{equation} using only the antisymmetry of the tensors inside the parentheses.\\ We can calculate the field strength of the induced vector field $\mathcal{V}^\mu$ as \begin{equation} \mathcal{F}_{\mu\nu}\,\equiv\,\partial_\mu \mathcal{V}_\nu\,-\,\partial_\nu \mathcal{V}_\mu\,=\,Y^{(1)}\,\Box F_{\mu\nu}-\Box\mathcal{K}_{\mu\nu}\,-Y^{(5)}\,\Box^2 F_{\mu\nu}-\,\Box \left(F_{\mu\nu}\,\Theta\right)\,+\,\mathcal{O}(\partial^6) \label{fs} \end{equation} where we defined the scalar $$\Theta\equiv Y_{IJ}^{(7)} \partial_\mu \chi^I \partial^\mu \chi^J$$ and neglected higher corrections.\\ We can now invert the expression \eqref{fs} assuming an ansatz of the type \begin{equation} F_{\mu\nu}\,=\,\Box^{-1}A_{\mu\nu}\,+\,B_{\mu\nu}\label{inv1} \end{equation} where $A,B$ are generic two-forms. Combining \eqref{fs} and \eqref{inv1} and solving them in a perturbative expansion we obtain \begin{equation} \mathcal{F}_{\mu\nu}=Y^{(1)}A_{\mu\nu}-A_{\mu\nu}\Theta,\qquad Y^{(1)}B_{\mu\nu}-\mathcal{K}_{\mu\nu}-Y^{(5)}A_{\mu\nu}-B_{\mu\nu}\Theta=0\,.
\label{A55}\end{equation} and consequently \begin{equation} A_{\mu\nu}=\frac{1}{Y^{(1)}}\,\mathcal{F}_{\mu\nu}\,+\,\frac{1}{{Y^{(1)}}^2}\,\mathcal{F}_{\mu\nu} \,\Theta\,+\,\mathcal{O}(\partial^6) \label{A56}\end{equation} \begin{equation} B_{\mu\nu}\,=\,\frac{1}{Y^{(1)}}\mathcal{K}_{\mu\nu}\,+\,\frac{Y^{(5)}}{{Y^{(1)}}^2}\mathcal{F}_{\mu\nu}\,+\,\frac{\Theta \, (\mathcal{K}_{\mu\nu} Y^{(1)}+2 \,\mathcal{F}_{\mu\nu}\, Y^{(5)})}{{Y^{(1)}}^3}\,+\,\mathcal{O}(\partial^6) \label{A57}\end{equation} We use the redefinition in (\ref{def}) to write down the original field strength in terms of the induced one at leading order in derivatives, \begin{equation} \bar{F}_{\mu\nu}\,=2\Box^{-1}\mathcal{F}_{\mu\nu}\,+\,2\mathcal{K}_{\mu\nu}\,+2\,\frac{Y^{(5)}}{Y^{(1)}}\,\mathcal{F}_{\mu\nu}\,+\, 2 \frac{\Theta}{Y^{(1)}}\Box^{-1}\mathcal{F}_{\mu\nu}+\,\mathcal{O}(\partial^4) \label{A59}\end{equation} and rewrite the original functional \eqref{W4} as \begin{equation} W(\mathcal{V}^\mu,\chi^I)\,=\,\int d^4x\,\left[Y^{(0)}\,+\,\frac{Y^{(2)}_{IJ}}{2}\partial_\mu \chi^I \partial^\mu \chi^J-\,\frac{1}{4 Y^{(1)}}\left(\Box^{-1}\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\,+\mathcal{K}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\right)\right.\,\label{W11} \end{equation} $$ \left.-\,\frac{1}{4Y^{(1)\,2}}\left(Y^{(5)}\,\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\,+\,\Theta\, \Box^{-1}\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\right) +\, \mathcal{O}(\partial^5)\right] $$ \begin{equation} \Gamma(\mathcal{V}_\mu,\chi_I,\mathbf{A}_\mu) =\int d^4 x\left[-\mathcal{V}^{\mu}(A_{\mu}-\mathbf{A}_\mu) \right] \, + \, W \, = \label{A500}\end{equation} $$ =\,\int d^4x\,\left[Y^{(0)}\,+\,\frac{Y^{(2)}_{IJ}}{2}\partial_\mu \chi^I \partial^\mu \chi^J+\frac{1}{4Y^{(1)}}\left( \Box^{-1}\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}+\mathcal{K}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\right)\,+\right. $$ $$ \left.+\frac{1}{4Y^{(1)\,2}}\left(Y^{(5)}\,\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\,+\,\Theta\, \Box^{-1}\mathcal{F}_{\mu\nu}\Box^{-1}\mathcal{F}^{\mu\nu}\right)+\, \mathcal{V}^{\mu}\mathbf{A}_{\mu} +\, \mathcal{O}(\partial^5)\right] $$ The functional we obtain is gauge-invariant with respect to the ``emergent'' U(1) symmetry, but it is manifestly non-local. We now resort to the same method we exploited for the single-theory example in section \ref{singleNL}. More specifically, we perform the following further change of variables \begin{equation} \mathcal{V}_{\mu}={1\over 2}\epsilon_{\mu\nu\rho\sigma}\partial^{\nu}\mathcal{B}^{\rho\sigma}={1\over 3!}\epsilon_{\mu\nu\rho\sigma}\mathcal{H}^{\nu\rho\sigma}\;\;\;,\;\;\; \mathcal{H}_{\mu\nu\rho}\equiv \partial_{\mu}\mathcal{B}_{\nu\rho}+\partial_{\nu}\mathcal{B}_{\rho\mu}+\partial_{\rho}\mathcal{B}_{\mu\nu} \label{A61}\end{equation} so that the condition \begin{equation} \partial^{\mu}\mathcal{V}_{\mu}=0 \label{A62}\end{equation} is identically satisfied. In other words, we solve the constraint explicitly using symmetries. We are now in a position to write down the functional \eqref{A500} in terms of the new unconstrained variable \begin{equation}\label{W222} \Gamma(\mathcal{V}^\mu,\chi^I, \mathbf{A}_\mu)\,=\,\int d^4x\,\left[Y^{(0)}+\frac{Y^{(2)}_{IJ}}{2}\partial_\mu \chi^I \partial^\mu \chi^J\, -\frac{1}{2 Y^{(1)}}\frac{1}{3!}\partial^{\mu}\left(\Box^{-1}\mathcal{K}_{\mu\nu}\right)\epsilon^{\nu}_{\kappa\lambda\sigma}\mathcal{H}^{\kappa\lambda\sigma}\right.
\end{equation} $$ +{1\over 3!}\left(\epsilon_{\mu\kappa\lambda\sigma}\mathbf{A}^\mu-\frac{1}{2 Y^{(1)}}\partial^{\mu}\left(\Box^{-1}\mathcal{K}_{\mu\nu}\right)\epsilon^{\nu}_{\kappa\lambda\sigma}\right)\mathcal{H}^{\kappa\lambda\sigma} $$ $$ \left.-\,\frac{1}{4 Y^{(1)}}\Big(\mathcal{H}_{\mu\nu\rho}\Box^{-1}\mathcal{H}^{\mu\nu\rho}\,+\,\,\frac{Y^{(5)}}{Y^{(1)}}\mathcal{H}_{\mu\nu\rho}\mathcal{H}^{\mu\nu\rho}+\,\frac{\Theta}{ Y^{(1)}}\mathcal{H}_{\mu\nu\rho}\Box^{-1}\mathcal{H}^{\mu\nu\rho}\Big)\,+\mathcal{O}(\partial^5)\right] $$ The term in (\ref{W222}) that couples the background gauge field $\mathbf{A}$ to $\mathcal{H}$ can be written, after an integration by parts, as \begin{equation} {1\over 3!}\epsilon_{\mu\kappa\lambda\sigma}\mathbf{A}^\mu\mathcal{H}^{\kappa\lambda\sigma}\to -{1\over 4}\epsilon_{\mu\kappa\lambda\sigma}F^{\kappa\mu}(\mathbf{A})\mathcal{B}^{\lambda\sigma} \end{equation} which is, interestingly, the standard anomaly coupling between $\mathcal{B}$ and $\mathbf{A}$ for anomalous U(1)'s in four-dimensional compactifications of string theory, \cite{ABDK,review}. By varying \eqref{W222} with respect to $\mathcal{B}$ we compute the equations of motion \begin{equation} \label{EOMH} \partial_{\mu}\left[\left(1+\frac{\Theta}{Y^{(1)}}\right)\Box^{-1}\,+\,\frac{Y^{(5)}}{Y^{(1)}}\right]\mathcal{H}^{\mu\nu\rho}\,+\,\frac{1}{3!}\partial_{\mu}\left( \partial^{\lambda}\left(\Box^{-1}\mathcal{K}_{\lambda\sigma}\right)-2 \mathbf{A}_{\nu}\right)\epsilon^{\sigma\mu\nu\rho}=0\,\;. \end{equation} For the rest of the section, we set the source $\mathbf{A}_{\mu}=0$. The last term above is CP-odd and provides a source term for the equations of motion of ${\cal B}_{\mu\nu}$. Expanding \eqref{EOMH} we can write the equations of motion for the unconstrained two-form $\mathcal{B}$ as \begin{equation} \left[\left(1+\frac{\Theta}{Y^{(1)}}\right)\Box^{-1}\,+\,\frac{Y^{(5)}}{Y^{(1)}}\right]\,\partial_\mu \,\mathcal{H}^{\mu\nu\rho}\,+{\partial_{\mu}\Theta\over Y^{(1)}}\Box^{-1}\mathcal{H}^{\mu\nu\rho}= \label{A63}\end{equation} $$ =-\frac{1}{3!}\partial_{\mu} \partial^{\lambda}\left(\Box^{-1}\mathcal{K}_{\lambda\sigma}\right)\epsilon^{\sigma\mu\nu\rho}= -{1\over 12}F_{\sigma\mu}(Z)\epsilon^{\nu\rho\sigma\mu} $$ with \begin{equation} F_{\mu\nu}(Z)\equiv \partial_{\mu}Z_{\nu}-\partial_{\nu}Z_{\mu}\;\;\;,\;\;\; Z_{\mu}=\partial^{\lambda}(\square^{-1}{\mathcal K}_{\lambda\mu}) \end{equation} which is evidently non-local but gauge-invariant. We now consider the linearized part of this equation, namely \begin{equation} \left[\Box^{-1}\,+\,\frac{Y^{(5)}}{Y^{(1)}}\right]\,\partial_\mu \,\mathcal{H}^{\mu\nu\rho}= - {1\over 12}F_{\sigma\mu}(Z)\epsilon^{\nu\rho\sigma\mu}\;. \label{A63a}\end{equation} Choosing an appropriate gauge, \textit{i.e.} the Lorentz gauge $\partial_\mu \mathcal{B}^{\mu\nu}=0$, we can rewrite the equation above, in form notation, as \begin{equation} \left[1\,+\,\frac{Y^{(5)}}{Y^{(1)}}\Box\right]\,\mathcal{B}_{\mu\nu}=-{1\over 3!}~(^*F(Z))_{\mu\nu}\,.\label{fifi} \end{equation} which is a massive equation similar to the one we found in the previous subsection. \section{Discussion\label{Dis}} In the previous sections we have studied the emergence of a dynamical U(1) vector field from a global symmetry of a hidden theory. Here, we would like to classify several distinct versions of this emergence and possible phenomenological applications.
We may envisage the following cases of emergence of a U(1) vector field that couples to the standard model: \begin{enumerate} \item A hidden theory with a global U(1) symmetry and a current-current coupling to a SM global (non-anomalous) symmetry; an example could be $B-L$ in the SM. In that case, the result will be that the hidden global symmetry will generate a (generically massive) vector boson that couples to the $B-L$ charges of the SM. In such a case, the combined theory still has two independent U(1) symmetries, but only one of them is visible in the SM (the hidden symmetry is not visible from the point of view of the SM). \item A hidden theory with a global U(1) symmetry and a coupling between a charged operator in the hidden theory and a charged operator of the SM under a SM global (non-anomalous) symmetry. Such a coupling breaks the two U(1)'s to a single diagonal U(1). This leftover U(1) couples to the emergent vector boson. \item In the two cases above, the U(1) global symmetry of the SM may also be an anomalous global symmetry, like baryon or lepton number. Although some of the properties of the new vector interaction remain similar to what was described above, there are new features that are related to the anomaly of the global SM symmetry. In such a case, we expect to have similarities with the anomalous U(1) vector bosons of string theory. \item A hidden theory with a global U(1) symmetry and a current-current coupling to the (gauge-invariant) hypercharge current. In this case, both the hypercharge gauge field and the emergent vector couple to the hypercharge current. By a (generically non-local) rotation of the two vector fields, a linear combination will become the new hypercharge gauge field, while the other will couple to $|H|^2$, where $H$ is the SM Higgs. \item A hidden theory with a global U(1) symmetry, whose current is $J_{\mu}$, and a coupling to the hypercharge field strength of the SM of the form \begin{equation} S_{int}={1\over M^2}\int d^4 x F^{\mu\nu}(\partial_{\mu}J_{\nu}-\partial_{\nu}J_{\mu}) \end{equation} where the scale $M$ is of the same order as the messenger mass scale. By an integration by parts this interaction is equivalent to the previous case, using the equations of motion for hypercharge. \end{enumerate} In all of the above, we have an emergent U(1) vector boson that plays the role of a dark photon, and in this context the single most dangerous coupling to the standard model is the leading dark photon portal: a kinetic mixing with the hypercharge, \cite{ship}. We can make estimates of such dangerous couplings using (weak coupling) field theory dynamics, but it is also interesting to make such estimates using dual string theory information at strong coupling. Such a study is underway, \cite{u1}. \subsection{Emergent dark photons versus fundamental dark photons} There have been several studies so far of dark photons coupled to the standard model. Such studies involve fundamental dark photons, and all experimental constraints have been parametrised in terms of the dark-photon coupling constant and its mass. Extra parameters that affect phenomenological constraints may be minimal couplings to SM fields and the mixing to hypercharge.
There may also be exotic couplings involving generalized (four-dimensional) Chern-Simons terms between the hypercharge and the dark photon, if specific mixed anomalies exist, \cite{akt,CIK,ABDK,Anto}. Such constraints have been studied in detail, especially in the last twenty years, and there are dedicated experiments for their detection, \cite{ship}. The current constraints are exclusion plots in the effective coupling vs mass diagram for the dark photon, as shown in Figure 2.6, page 28 of \cite{ship}. In the cases of emergent vectors studied here, as long as we look at such particles well below the compositeness scale, their dynamics may, to leading order, be indistinguishable from that of fundamental dark photons. Unlike fundamental dark photons, however, the emergent photons described here, especially in the holographic context, have propagators that behave differently at intermediate energies, and they can therefore have a different phenomenology and different constraints from standard elementary dark photons. If the compositeness scale is low, non-local effects dominate and alter the phenomenological behavior of such vectors; the same was found recently for the emergent axions studied in \cite{axion}. The impact of this softer behavior depends on the particular experiment and the energies at which it is sensitive. A phenomenological analysis is therefore necessarily energy- and experiment-dependent. There is also another basic difference. For fundamental dark vectors, one can choose the main parameters, coupling and mass, more or less at will. In the emergent case, one can instead vary the hidden theory over a wide variety of possibilities. If, however, we want this same theory to provide emergent (observable) gravity coupled to the SM, as advocated in \cite{SMGRAV}, then this hidden theory must be a holographic theory. In that case, there are important changes in the estimates of effective couplings from Effective Field Theory (EFT). For example, it is found in \cite{u1} that the effective couplings that lead to the mixing of the dark photon and the hypercharge are smaller by extra factors of $N$,\footnote{The number of colors of the hidden theory.} if the dark photon is emergent from a hidden holographic theory. Such phenomena are interesting to investigate, and we shall address them in a future publication. \vskip 1cm \section*{Acknowledgments} \addcontentsline{toc}{section}{Acknowledgments\label{ack}} \vskip 1cm We would like to thank P. Anastasopoulos, C. Charmousis, M. Bianchi, G. Bossard, D. Consoli, B. Gouteraux, D. Luest, F. Nitti, A. Tolley, L. Witkowski for discussions. We would also like to thank Matteo Baggioli for participating in early stages of this work. \vskip 0.5cm This work was supported in part by the Advanced ERC grant SM-grav, No 669288. \newpage
\section{Introduction} A very blue galaxy, SBS 1543+593, was discovered in the second Byurakan survey (Markarian et al. 1986). The Hamburg objective prism survey (Hagen et al. 1995) rediscovered it as a quasar, and subsequent spectra and imaging revealed it to be a quasar of $z = .807$ only 2.4 arcsec from the center of a galaxy of dwarf characteristics (Reimers and Hagen 1998). Despite computing the probability of chance proximity as $1.5\times10^{-3}$, Reimers and Hagen stated that the center of the galaxy was a chance projection on a background quasar. This did not take into account, however, that large and statistically significant numbers of quasars fall very close to many other low redshift galaxies (for lists of cases see G.R.Burbidge 1996). Moreover, Reimers and Hagen themselves estimated that the QSO was very little reddened. Of course, if it were behind the inner part of the dwarf spiral it should have been noticeably reddened. \footnote{For $S_d$ spirals Sandage and Tamman (1981) use mean absorptions of .28 to .87 mag. ranging from face-on to edge-on systems. For a quasar entirely behind the galaxy this would give $.6 \lower.75ex\hbox{$\Lsim$} A_B \lower.75ex\hbox{$\Lsim$} 1.7$ mag. and $.15 \lower.75ex\hbox{$\Lsim$} E_{B-V} \lower.75ex\hbox{$\Lsim$} .44$ mag. However, the quasar here would have to shine through a region very close to the center, where the absorption is much higher than average. For example, Hummel, van der Hulst and Keel (1987) show absorptions to the center of the late type spiral NGC 1097 to be $.3 \lower.75ex\hbox{$\Lsim$} A_v \lower.75ex\hbox{$\Lsim$} 3.0$ mag. That would translate to a full reddening of $.2 \lower.75ex\hbox{$\Lsim$} E_{B-V} \lower.75ex\hbox{$\Lsim$} 2.0$ mag., which again would be conspicuous.} The dwarf spiral, however, did not seem like a galaxy with an active nucleus which would eject a quasar. {\it Therefore I asked the question: ``Is there a nearby galaxy which could be the origin of the quasar?''} It quickly turned out that only 36.9 arcmin distant was the very bright (V = 11.0 mag.) Seyfert galaxy, NGC5985. This was just the distance with\-in which quasars were found associated, at a 7.4 sigma significance, with a large sample of Seyferts of similar apparent magnitude (Radecke 1997; Arp 1997a). The following sections present the full census of objects around the Seyfert NGC5985. \section{The quasars around the Seyfert NGC5985} \begin{figure*} \mbox{\psfig{figure=chipfig1.ps,height=11.6cm}} \caption[]{All catalogued active galaxies and quasars within the pictured area are plotted. Redshifts are labeled. The dwarf spiral 2.4 arcsec from the $z = .81$ quasar has $z = .009$, which marks it as a companion of the Seyfert NGC5985 at $z = .008$. The line represents the position of the minor axis of NGC5985.} \label{fig1} \end{figure*} Fig. 1 shows a plot of all the catalogued active galaxies (V\'eron and V\'eron 1996) in an area $3.4 \times 2.4$ degrees centered on NGC\-5985. It is apparent that there is only one active galaxy in the area, and it is the bright, Shapley-Ames Seyfert, NGC5985. The chance of finding a Seyfert galaxy this bright, this close to the $z = .81$ quasar is $< 10^{-3}$. One caveat is that if the $z = .009$ galaxy is lensing the $z = .81$ QSO then this probability will increase because of the tendency of galaxies to cluster. We have, however, argued against the lensing hypothesis on the basis that the QSO appears unreddened. Reimers and Hagen also have suggested that the central mass of the galaxy may be too small for lensing to be important.
All catalogued quasars, plus the new QSO of $z = .807$ and its companion, are also plotted. It turns out that all of the quasars come from a uniformly searched region of the second Byurakan survey. Since five of the six quasars are aligned to within about $\pm 15^\circ$ accuracy, one can compute the probability of their accidental alignment as $P^5_6 (\leq 15^\circ) = 6 \times 10^{-4}$ (a short numerical check of this estimate is given below, at the end of the next section). But then one must factor in the probability that the line would agree with an {\it a priori} minor axis direction. This would reduce the probability by another factor of $10^{-2}$. The total probability of accidentally finding this prototypical configuration is then of the order of $10^{-8}$ to $10^{-9}$. \begin{table*} \begin{center} \caption{Objects in the field around NGC5985} \begin{tabular}{|l|c|l|r|r|r|r|r|r|} \hline Object&Type&Vmag.&$z$&R.A.(2000)&\llap{D}ec.\hfill&$\Delta x^\circ$& $\Delta y^\circ $&$r^\prime$\\\hline NGC5985&Sey1&10.98&.008&15/39/37.5&{+}59/19/58 &---&---&---\\ HS 1543+5921&QSO&16.4&.807&15/44/20.1&59/12/26&.6019&-.1256&36.9\\ SBS 1543+593&dSp&16.9&.009&$\prime\prime$\quad \ &24&$\prime\prime$\ \ \ & -.1262&$\prime\prime$ \ \ \\ RXJ 15509+5856&S1&16.4&.348&15/50/56.8&58/56/04&1.4535&-.3984&90.4\\ SBS1537+595&QSO&19.0&2.125&15/38/06.0&59/22/36&-.1944&+.0439&12.0\\ SBS1535+596&QSO&19.0&1.968&15/36/45.7&59/32/33&-.3644&+.2097&25.2\\ SBS1533+588&QSO&19~~&1.895&15/34/57.2&58/39/24&-.6016&-.6761&54.3\\ SBS1532+598&QSO&18.5&.690&15/33/52.8&59/40/19&-.729&+.339&48.2\\ \hline \end{tabular} \end{center} \end{table*} Whether one considers the $z = 1.90$ quasar an unrelated interloper or an ejection in a different direction does not change the calculation appreciably. The parameters of the objects in this field are listed in Table 1. Their distances from NGC5985, at the center of the field, are calculated in degrees. \section{Similar associations which have been previously published} From 1966 onward, evidence that quasars were associated with low redshift galaxies has been presented (for a review see Arp 1987). Since the discovery of a pair of quasars across the Seyfert NGC4258 (Pietsch et al. 1994; E.M.Burbidge 1995), however, bright Seyfert galaxies have been systematically analyzed\break (Radecke 1997; Arp 1997a). The latter investigations demonstrated physical associations at greater than 7.5 sigma for qua\-sars within about $10' < r < 40'$ of these active galaxies. The alignment of quasars shown in Fig. 1 exhibits the same properties found in the previously published results, except that the scale of the separations and the apparent magnitudes of the qua\-sars in the NGC5985 system suggest it may be somewhat closer than the average Seyfert previously investigated. In the NGC\-5985 association, however, there are more than the usual pair of quasars, so that the line is very well defined. In this respect it is another crucial case like the 6 quasars associated with NGC3516 (Chu et al. 1998). The NGC3516 case is shown here in Fig. 2 for comparison. \begin{figure} \mbox{\psfig{figure=chipfig2.ps,height=8.1cm}} \caption[]{ All the X-ray bright quasars around the bright Seyfert NGC3516 are plotted with their redshifts labeled. (From Chu et al. 1998). The line represents the direction of the galaxy minor axis.} \label{fig2} \end{figure}
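As promised, here is a minimal numerical check of the alignment estimate $P^5_6 (\leq 15^\circ) = 6 \times 10^{-4}$ quoted in the previous section. We read it as a binomial probability, taking each quasar's position angle about NGC5985 to be uniform modulo $180^\circ$, so that a single object falls within $\pm 15^\circ$ of a fixed line with probability $p = 30^\circ/180^\circ = 1/6$ (this combinatorial reading is our assumption; the paper does not spell it out):

```python
from math import comb

p = 30 / 180   # chance that one position angle lies within +-15 deg of a fixed line
n = 6          # quasars in the field
# probability that at least five of the six are so aligned
P = comb(n, 5) * p**5 * (1 - p) + p**6
print(f"{P:.1e}")  # -> 6.6e-04, consistent with the quoted 6 x 10^-4
```

Folding in the $10^{-2}$ penalty for matching the {\it a priori} minor-axis direction, together with the $< 10^{-3}$ chance proximity of the Seyfert itself, then gives $\sim 10^{-8}$ to $10^{-9}$, as quoted.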
\section{Alignment along the minor axis} Early in the association of quasars with active galaxies it was noticed that there was a tendency for them to lie in the direction of the minor axis of the galaxy. When the X-ray pairs began to be identified this correlation strengthened (Arp 1997c; 1998a,b). Particularly in Arp (1998a,b) it was shown that the quasars associated with Seyferts with well defined minor axes fell along this minor axis within a cone of about $\pm 20^\circ$ opening angle. These same references showed companion galaxies, as in the present case of NGC 5985, also preferentially falling along this same axis. With the observations of NGC3516 shown in Fig. 2, however, there appeared a continuous definition of the minor axis by the quasars. The most striking aspect of the NGC5985 case now becomes the fact that the line in Fig. 1 was not drawn through the quasars, as one might assume! The line in Fig. 1 actually plots the direction of the minor axis of the Seyfert as recorded in the Uppsala General Catalogue of Galaxies (Nilson 1973). There are two important conclusions to be drawn. The first is that the quasars must originate in an ejection process. The minor axis is the expected ejection direction from an active galaxy nucleus (see also the discussion of ejection along the minor axis in NGC4258 in Burbidge and Burbidge 1997). The second is that the chance of the observed quasars falling so closely along a {\it predicted} line by accident is negligible, thus confirming at an extraordinarily high level of significance the physical association of the quasars with the low redshift galaxy. \section{The role of companion galaxies} What is the origin of the dwarfish spiral only 2.4 arcsec from the $z = .81$ quasar? The simplest answer is that it represents entrained material from NGC5985 which accompanied the ejection of the $z = .81$ quasar. The redshift of NGC5985 is $z = .008$ and the redshift of the dwarf spiral is $z = .009$. The latter redshift, and its distance from NGC5985, would fairly clearly identify it as a physical companion of the Seyfert. It has been suggested that dwarfs are associated with ejection from a central galaxy, an example being the string of dwarfs coming from the X-ray ejecting NGC4651 (Arp 1996, Fig.6). But, in general, companion galaxies have been identified since 1969 as falling preferentially along the minor axes of their central galaxy. This has been interpreted as high redshift quasars evolving to lower redshifts and finally into companion galaxies (Arp 1997b; 1998a,b). The line of ejection from NGC5985 seems to be remarkably stable, so objects of older evolutionary age could be moving out, or falling back in, along the same track. This would give a much higher chance of the quasar and the dwarf being accidentally nearby at any given time. In fact, observing quasars and galaxies lying along the same ejection lines from active galaxies gives for the first time an explanation for the many cases of close associations of higher redshift quasars with low redshift galaxies (G. Burbidge 1996) and higher redshift quasars with lower redshift quasars (Burbidge, Hoyle and Schneider 1997). {\it I also note the presence of three NGC companion galaxies closely along NGC5985's minor axis to the NW. These are of fainter apparent magnitude, and one of them has a measured redshift ($z = .0095$), similar to the dwarf to the SE in that it has a few hundred km/sec higher redshift than the parent Seyfert.
This is just as had been found for the redshift behavior of companion galaxies (see Arp 1998a,b).} \section{Intrinsic redshift vs angular separation from the galaxy} Since the quasars in NGC5985 are strictly ordered in decreasing redshift with distance from the central galaxy, it is interesting to compare the relation with the one in NGC3516, where they are also so ordered. Fig. 3 shows several interesting results: \begin{figure} \mbox{\psfig{figure=chipfig3.ps,height=5.8cm}} \caption[]{The relation between the redshift of the quasars and their distance from the active galaxy for the two best cases of multiple, aligned quasars. The exponential law is indicated to be similar but the scale of the NGC5985 relation is larger by a factor of about 4.5, indicating NGC5985 is nearer the observer and/or oriented more across the line of sight.} \label{fig3} \end{figure} \begin{itemize} \item The slopes of the relation in both systems are closely the same. Since the plot is in ln r, this means the relationship is exponential with the same exponent in both cases. \item The constant separation between the two slopes translates into a difference in scale between the two systems of a factor of about 4.5. \end{itemize} The latter scale factor could be accounted for if the NGC5985 system were closer to the observer than the NGC3516 system. What do the apparent magnitudes suggest? The central Seyfert NGC5985, in dereddened magnitudes, is about 1.33 mag. bright\-er than NGC3516. The apparent magnitudes of the quasars in the NGC3516 association have not been accurately measured, but the quasars in the NGC5985 system appear to be 2 to 2.5 mag. brighter than quasars measured around Seyferts generally at the distance of the Local Supercluster. If we therefore estimate that NGC5985 is at a distance modulus about 2 mag. closer than NGC3516, then the scale of the alignment should be about 2.5 times greater. Since the projected ellipticities of the two galaxies suggest the minor axis of NGC3516 is oriented at least 45\% closer to the line of sight, a total scale difference of 3.6 out of 4.5 is accounted for.\footnote{The inclination of NGC 5985 is taken from de Vaucouleurs et al. (1976) and that of NGC3516 from Arribas et al. (1997).} It therefore seems quite possible that the separation of the quasars from the galaxies as a function of intrinsic redshift is very similar in the two systems. This is an important result to refine because it implies, for similar ejection velocities, that the evolution rate to lower redshifts with time is a physical constant. One further comment concerns the quasars near $z = 2$. As found in previous associations, they are more than 2 magnitudes fainter than the $z \approx 0.3$ to $1.4$ quasars. This predicts that, for the majority of Seyferts which have medium redshift quasars in the 18 to 19 mag. range, their $z = 2$ quasars should be found in the 20 to 21+ mag. range and closer to their active galaxy of origin. These quasars, generally weak in X-rays, should be searched for with grism detectors on large aperture telescopes, as was done for M82 (E.M. Burbidge et al. 1980). \section{Quantization of redshifts} NGC3516 was unusual in that each of the six quasars fell very close to one of the major redshift peaks observed for quasars in general. For radio-selected quasars with $S_{11} > 1$ Jy it was shown (Arp et al. 1990) that the Karlsson formula $$(1 + z_{n+1}) = 1.23\,(1 + z_n)$$ $$z_n = .061, .30, .60, .96, 1.41, 1.96, 2.64 \dots$$ was fitted with a confidence level of 99 to 99.97\%.
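For reference, iterating the Karlsson relation from the first peak reproduces the quoted sequence (a minimal sketch; the small drifts against the rounded published peaks are expected):

```python
z = 0.061                   # first Karlsson peak
peaks = [z]
for _ in range(6):          # generate the next six peaks
    z = 1.23 * (1 + z) - 1  # (1 + z_{n+1}) = 1.23 (1 + z_n)
    peaks.append(round(z, 3))
print(peaks)  # [0.061, 0.305, 0.605, 0.974, 1.428, 1.987, 2.674]
```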
In the present NGC5985 case, four of the five quasars fall close to the quantized redshift values. Therefore, together with NGC 3516, in the two best cases of multiple, aligned quasars the redshifts fall very close to the formula peaks in 10 out of 11 cases. The $z = .81$ redshift may then represent a short-lived phase in the evolution from the .96 to the .60 peak. \section{Summary} Examination of catalogued objects in the vicinity of a QSO only 2.4 arcsec from a dwarf galaxy reveals that this pair of objects, as well as four additional quasars, are associated with a nearby, bright Seyfert galaxy. The configuration satisfies all the criteria of previous quasar - active galaxy associations. In particular, the NGC5985 alignment of five quasars defines the minor axis of the galaxy well, as does the alignment of six quasars through the Sey\-fert NGC3516. The similar decline in quantized redshift values with increasing distance from the active galaxy suggests there is one law of redshift evolution with time.
\section{Introduction} It is now generally accepted that accretion of circumstellar disk material onto the surface of classical T Tauri stars (CTTSs) is controlled by strong stellar magnetic fields (e.g. see review by Bouvier et al. 2007). CTTSs represent Class II sources in the classification system defined by Lada (1987). The definition is based on a gradually falling spectral energy distribution (SED) beyond $\sim 1 \mu$m. This SED shape is believed to arise from a geometrically thin, optically thick accretion disk containing a high concentration of submicron sized dust grains (e.g. Bertout et al. 1988). At some level, the final mass of these forming stars is determined by how much of this disk material accretes onto the central star. Additionally, it is within the disks around these low mass pre-main sequence stars that solar systems similar to our own form. It is critical to understand how the central young star interacts with and disperses its disk in order to understand star, and particularly planet, formation. The Class I sources defined by Lada (1987) represent one of the earliest stages of star formation and are identified by a rising SED. These sources are deeply embedded within molecular clouds and are very faint or undetectable at optical wavelengths because of a thick envelope of circumstellar dust. It has been commonly thought that Class I objects represent an earlier evolutionary stage relative to Class II sources, with the paradigm emerging that Class I sources are young protostars near the end of their bulk mass accretion phase. This paradigm is bolstered by the very weak photospheric absorption features in near infrared (IR) spectra of these objects (e.g. Casali \& Matthews 1992; Greene \& Lada 1996, 2000). The lack of absorption lines was interpreted by these authors as the result of strong veiling produced by emission originating in a vigorously accreting circumstellar disk which is being fed by an infalling envelope. This emission is reprocessed by the dusty envelope which results in both the observed featureless continuum and the rising SED. As the accretion rate in the disk weakens and the thick circumstellar envelope either accretes onto the star plus disk system or is disrupted by strong outflows, it is generally thought that Class I objects evolve into Class II sources. This general paradigm has been recently challenged by White \& Hillenbrand (2004) who find no strong differences in the properties of the central stellar source between a sample of optically selected Class I and Class II sources in Taurus. On the other hand, Doppmann et al. (2005) argue that the White \& Hillenbrand (2004) results are biased by their optical selection of these Class I young stellar objects (YSOs). Doppmann et al. (2005) perform an extensive IR study of Class I YSOs in several star forming regions and conclude that, while there is a fair amount of spread in the stellar and accretion properties of these objects, the general paradigm of Class I sources representing an earlier, higher accretion rate phase of stellar evolution relative to Class II sources is borne out (see also Prato et al. 2009). Some of the confusion and disagreement over the true nature of the Class I YSOs may be due to variability. It has been suggested that the bulk of a star's final mass is accreted through episodic events where the accretion rate through the disk increases by a factor of $10 - 1000$ for some period of time (e.g., Hartmann 1998). 
These episodes of rapid disk accretion may be what we recognize as FU Orionis events (Hartmann \& Kenyon 1996), with these events occurring more frequently during the Class I stage. As a result, Class I objects should display a large range of accretion behavior, with some objects accreting at close to typical CTTS rates, while others are accreting much more rapidly than this. Qualitatively, such a picture matches the range of behavior found in these sources in recent studies (Doppmann et al. 2005, White et al. 2007, Prato et al. 2009). Since Class I YSOs are often rapidly accreting material, the question arises as to how this process occurs. There is substantial evidence to show that FU Ori outbursts are the result of very rapid disk accretion (for a review see Hartmann \& Kenyon 1996). The evidence also suggests that when these objects are not in outburst, accretion onto the central protostar occurs through a disk, with infalling material from the envelope piling up in the disk (e.g. Bell 1993). Such a scenario can explain the apparent low luminosity of some Class I sources relative to what is expected if the infalling material from the envelope were to land initially on the central object (Kenyon et al. 1993, 1994). These observations of FU Ori and lower luminosity Class I YSOs suggest that accretion onto the central source occurs primarily through a disk, whether a particular YSO is in a high or low accretion state. For the Class II sources (CTTSs) this accretion process appears to be well described by the magnetospheric accretion paradigm (see Bouvier et al. 2007 for a review), but it is currently unclear to what extent this model is appropriate for Class I protostars. The magnetospheric accretion model is successful at explaining a number of observations of CTTSs. A key question in the study of these stars is to understand how they can accrete large amounts of disk material with high specific angular momentum, yet maintain rotation rates that are observed to be relatively slow (e.g. Hartmann \& Stauffer 1989, Edwards et al. 1994). This problem is solved in current magnetospheric accretion models by having the stellar magnetic field truncate the inner disk, typically near the corotation radius, and channel the disk material onto the stellar surface, most often at high stellar latitude. The angular momentum of the accreting material is either transferred back to the disk (e.g., K\"onigl 1991; Cameron \& Campbell 1993; Shu et al. 1994) or is carried away by some sort of accretion powered stellar wind (e.g., Matt \& Pudritz 2005). Greene and Lada (2002) analyzed the stellar parameters and mass accretion rate of the Class I source Oph IRS 43 and showed that these were consistent with magnetospheric accretion models provided the magnetic field on this source is on the order of a kG in strength. Covey et al. (2005) analyzed the rotational properties of Class I sources and found that while they are rotating more rapidly than CTTSs on average, they are not rotating at breakup velocities. These observations could be interpreted in the standard magnetospheric accretion paradigm if the accretion rates of Class I sources are larger on average than those of CTTSs. Magnetospheric accretion naturally requires a strong stellar magnetic field. Several TTSs have now been observed to have strong surface magnetic fields (Basri et al. 1992; Guenther et al. 1999; Johns--Krull 2007; Johns--Krull et al. 1999b, 2004; Yang et al.
2005, 2008), and strong magnetic fields have been observed in the formation region of the \ion{He}{1} emission line at 5876 \AA\ (Johns--Krull et al. 1999a; Valenti \& Johns--Krull 2004; Symington et al. 2005; Donati et al. 2007, 2008), which is believed to be produced in a shock near the stellar surface as the disk material impacts the star (Beristain, Edwards, \& Kwan 2001). While a considerable amount is now known about the magnetic field properties of Class II YSOs, almost nothing is known directly about the magnetic field properties of Class I sources. While not a Class I source, FU Ori has recently shown evidence for a magnetic field in its disk, revealed through high resolution spectropolarimetry (Donati et al. 2005); however, there are currently no observations of magnetic fields on the surface of a Class I protostar. This is in part due to their faintness and the need for substantial observing time on $8 - 10$ m class telescopes equipped with high resolution near IR spectrometers in order to obtain the necessary data. To begin to address the magnetic field properties of Class I sources, we have started an observational program to survey the magnetic field properties of several Class I YSOs in the $\rho$-Ophiuchi star forming region. Here, we report on our first field detection, on the Class I source WL 17 (2MASS J16270677-2438149, ISO-Oph 103). This source has a rising IR SED (Wilking et al. 1989) with a spectral index of $\alpha \equiv d{\rm log} \lambda F_\lambda / d{\rm log} \lambda = 0.61$ over the 2 -- 24 $\mu$m region (Evans et al. 2009). WL 17 has been detected in X-rays (Imanishi et al. 2001; Ozawa et al. 2005), suggesting the star is magnetically active. The temperature ($T_{\rm eff} = 3400$ K) and luminosity ($L_* = 1.8 L_\odot$) of WL 17 (Doppmann et al. 2005) give it a mass of $\sim 0.31 M_\odot$ and an age of $\sim 10^5$ years using the tracks of Siess et al. (2000). In this paper, we look for Zeeman broadening of K-band \ion{Ti}{1} lines in high resolution spectra of WL 17 to diagnose its magnetic field properties. Magnetic broadening is easiest to detect when other sources of line broadening are minimized, and the small rotation velocity of WL 17 ($v\sin i = 12$ km s$^{-1}$; Doppmann et al. 2005) is a great advantage for this work. Muzerolle et al. (1998) derive an accretion luminosity of $\sim 0.3 L_\odot$ based on the Br-$\gamma$ line luminosity measurements of Greene and Lada (1996). This accretion luminosity implies a mass accretion rate of $\dot M \sim 1.5 \times 10^{-7} M_\odot$~yr$^{-1}$ using equation (8) of Gullbring et al. (1998) with the disk truncation radius assumed to be at $5 R_*$. While there are reasons to be concerned about this mass accretion rate estimate (which are discussed in \S 4), such a truncation radius implies a stellar field of about a kilogauss. Detecting and measuring that field is the goal of the current work.
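As a rough arithmetic check on that accretion-rate estimate (our own sketch, using the standard relation $L_{acc} = \frac{G M_* \dot M}{R_*}\,(1 - R_*/R_{in})$ with $R_{in} = 5 R_*$, and $R_*$ inferred from the quoted $L_*$ and $T_{\rm eff}$):

```python
import math

# cgs constants and the stellar parameters quoted above
G, Msun, Lsun, Rsun, sigma = 6.674e-8, 1.989e33, 3.828e33, 6.957e10, 5.670e-5
Teff, Lstar, Mstar, Lacc = 3400.0, 1.8 * Lsun, 0.31 * Msun, 0.3 * Lsun

Rstar = math.sqrt(Lstar / (4 * math.pi * sigma * Teff**4))  # ~3.9 Rsun
# L_acc = G M Mdot / R * (1 - R/R_in), with R_in = 5 R
Mdot = Lacc * Rstar / (G * Mstar * (1 - 1 / 5))             # g/s
print(Mdot * 3.156e7 / Msun)                                # -> ~1.5e-7 Msun/yr
```

which indeed reproduces $\dot M \sim 1.5 \times 10^{-7} M_\odot$~yr$^{-1}$.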
In \S 2 we describe our observations and data reduction. The magnetic field analysis is described in \S 3. In \S 4 we give a discussion of our results, and \S 5 summarizes our findings. \section{Observations and Data Reduction} Spectra analyzed here for magnetic fields on WL 17 come from two sources: Keck NIRSPEC data taken on UT 10 July 2001 are analyzed along with Gemini Phoenix data. The Keck data have already been published by Doppmann et al. (2005), and the reader is referred to that paper for observing and data reduction details. For reference, the resolution of the NIRSPEC data is $R \equiv \lambda / \delta \lambda = 18,000$ (16.7~km~s$^{-1}$). Spectra of WL~17 were acquired on UT 03 April 2006 with the Phoenix near-IR spectrograph (Hinkle et al. 2002) on the 8-m Gemini South telescope on Cerro Pachon, Chile. Spectra were acquired with a $0\farcs35$ (4-pixel) wide slit, providing spectroscopic resolution $R = 40,000$ (7.5~km~s$^{-1}$). The grating was oriented to observe the spectral range $\lambda$ = 2.2194--2.2300~$\mu$m in a single long-slit spectral order, and a slit position angle of $90^{\circ}$ was used. The seeing was approximately $0\farcs50$ in $K$ band through light clouds, and WL~17 data were acquired in two pairs of exposures of 1200 s duration each. The telescope was nodded $5\arcsec$ along the slit between integrations so that object spectra were acquired in all exposures, for a total of 80 minutes of integration time on WL~17. The B1V star HR~5993 was observed similarly, but with shorter exposures, for telluric correction of the WL~17 spectra. Both WL~17 and HR~5993 were observed at similar airmasses, $X$ = 1.01 - 1.05. Observations of a continuum lamp were acquired for flat fielding. All data were reduced with IRAF. First, pairs of stellar spectra taken at the two nod positions were differenced in order to remove bias, dark current, and sky emission. These differenced images were then divided by a normalized flat field. Next, bad pixels were fixed via interpolation with the {\it cosmicrays} task, and spectra were extracted with the {\it apall} task. Spectra were wavelength calibrated using low-order fits to 7 telluric absorption lines observed in each spectrum, and spectra at each slit position were co-added. Instrumental and atmospheric features were removed by dividing wavelength-calibrated spectra of WL~17 by spectra of HR~5993 for each of the two slit positions. Final spectra were produced by combining the corrected spectra of both slit positions and then normalizing the resultant spectrum to have a mean continuum flux of 1.0. \section{Analysis} The most successful approach for measuring magnetic fields on late--type stars in general has been to measure Zeeman broadening of spectral lines in unpolarized light (Stokes $I$; e.g., Robinson 1980; Saar 1988; Valenti et al. 1995; Johns--Krull \& Valenti 1996; Johns--Krull 2007). In the presence of a magnetic field, a given spectral line will split up into a combination of both $\pi$ and $\sigma$ components. The $\pi$ components are linearly polarized parallel to the magnetic field direction, while the $\sigma$ components are circularly polarized when viewed along the magnetic field and linearly polarized perpendicular to the field when viewed perpendicular to it. The exact appearance of a line thus depends on the details of the field strength and direction, even when viewed in unpolarized light. For any given Zeeman component ($\pi$ or $\sigma$), the splitting resulting from the magnetic field is $$\Delta\lambda = {e \over 4\pi m_ec^2} \lambda^2 g B = 4.67 \times 10^{-7} \lambda^2 g B \,\,\,\,\,\, {\rm m\AA},\eqno(1)$$ where $g$ is the Land\'e-$g$ factor of the transition, $B$ is the strength of the magnetic field (given in kG), and $\lambda$ is the wavelength of the transition (specified in \AA). Class I YSOs are relatively rapid rotators (e.g., Covey et al. 2005) compared to most main sequence stars and most TTSs in which Zeeman broadening has been detected, though WL 17 in particular has a relatively low $v\sin i$.
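To fix the scale of equation (1), a quick evaluation for a K-band line with a Land\'e factor and field strength of the order encountered below (the particular numbers are illustrative):

```python
lam = 2.2211e4                    # wavelength in Angstroms (~2.22 micron Ti I line)
g, B = 2.08, 2.0                  # effective Lande factor; field in kG (illustrative)
dlam = 4.67e-7 * lam**2 * g * B   # Zeeman splitting in mAA, equation (1)
dv = 3.0e5 * (dlam * 1e-3) / lam  # equivalent velocity splitting, km/s
print(dlam, dv)                   # -> ~958 mAA, ~12.9 km/s
```

so a 2 kG field already produces a velocity-space splitting comparable to the $v\sin i$ of WL 17 and well above the 7.5 km s$^{-1}$ resolution of the Phoenix data.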
Equation (1) shows that the broadening due to the Zeeman effect depends on the second power of the wavelength, whereas Doppler broadening due to rotation or turbulent motions depends on wavelength to the first power. Thus, observations at longer wavelengths are generally more sensitive to stellar magnetic fields. There are several \ion{Ti}{1} lines in the K band which are excellent probes of magnetic fields in late-type stars (e.g. Saar \& Linsky 1985; Johns--Krull 2007), and here we observe 4 of them with NIRSPEC (air wavelengths): 2.22112 $\mu$m with $g_{eff} = 2.08$, 2.22328 $\mu$m with $g_{eff} = 1.66$, 2.22740 $\mu$m with $g_{eff} = 1.58$, and 2.23106 $\mu$m with $g_{eff} = 2.50$. We observe the first 3 of these with Phoenix. In addition to the strongly Zeeman sensitive \ion{Ti}{1} lines, our wavelength settings also record a few additional atomic lines that are weaker and less Zeeman sensitive (lower Land\'e-$g$ values), as well as the $v = 2-0$ CO bandhead at 2.294 $\mu$m in the case of the NIRSPEC data. The CO lines are much less magnetically sensitive than the \ion{Ti}{1} lines and provide a good check on other line broadening mechanisms. In order to measure the magnetic field properties of WL 17, we directly model the profiles of several K band photospheric absorption lines. Our spectrum synthesis code and detailed analysis technique for measuring magnetic fields using these K band lines are described elsewhere (Johns--Krull et al. 1999b; Johns--Krull et al. 2004; Yang et al. 2005). Here, we simply review some of the specific details relevant to the analysis presented here. In order to synthesize the stellar spectrum, we must first specify the atmospheric parameters: effective temperature ($T_{\rm eff}$), gravity ($\log\thinspace g$), metallicity ([M/H]), microturbulence (\vmic), macroturbulence (\vmac), and rotational velocity (\mbox{$v\sin i$}). Rotational broadening in YSOs is large compared to the effects of macroturbulence. This makes it difficult to solve for \vmac\ separately, so a fixed value of 2 km s$^{-1}$\ is adopted here, as it was in the above mentioned papers. Valenti et al. (1998) found that microturbulence and macroturbulence were degenerate in M dwarfs, even with very high quality spectra. Therefore, for the low turbulent velocities considered here, microturbulence is neglected, allowing \vmac\ to be a proxy for all turbulent broadening in the photosphere. While \vmic\ and \vmac\ can in principle have different effects on the spectral lines (\vmic\ potentially affecting the line equivalent width), at the relatively low resolution and signal-to-noise used here the effect of the two mechanisms on the {\it shape} of the spectral lines is equivalent, and the corresponding broadening is significantly less than that due to the resolution or magnetic fields. Any errors in the intrinsic line equivalent widths that result from an inaccurate value of \vmic\ can in principle be compensated for by small errors in $T_{\rm eff}$, $\log\thinspace g$, [M/H], or the derived K band veiling. With the turbulent broadening specified, estimates are still needed for $T_{\rm eff}$, $\log\thinspace g$, \mbox{$v\sin i$}, and [M/H] for WL 17. Doppmann et al. (2005) find $T_{\rm eff} = 3400$ K and a gravity of $\log\thinspace g = 3.7$ for WL 17. For the stellar atmosphere, then, we take the model from a grid of ``NextGen'' atmospheres (Allard \& Hauschildt 1995) which is equal in effective temperature (3400 K) and closest in gravity ($\log\thinspace g = 3.5$) to the values determined for WL 17.
Yang et al. (2005), using the 4 \ion{Ti}{1} lines covered in the NIRSPEC data here, performed several tests of the magnetic analysis methods used here to see how sensitive the results are to small errors in the effective temperature and gravity assumed in the magnetic analysis. They find that a 200 K error in $T_{\rm eff}$\ or a 0.5 dex error in $\log\thinspace g$\ typically results in less than a 10\% error in the derived magnetic field strength. Therefore, we are confident that our particular choice for the stellar atmosphere will not lead to significant error in the magnetic field properties we estimate for WL 17. Finally, solar metallicity is often assumed for young stars, and this assumption is supported by the few detailed analyses that have been performed (e.g. Padgett 1996). We assume solar metallicity here for WL 17. With the above quantities specified, we can then synthesize spectra for our lines of interest using the polarized radiative transfer code SYNTHMAG (Piskunov 1999). The rotational broadening of WL 17 has been measured by Doppmann et al. (2005), who find \mbox{$v\sin i$}\ $= 12$ km s$^{-1}$. The Doppmann et al. (2005) analysis used the same CO bandhead data employed here to measure \mbox{$v\sin i$}. Since the CO lines are very insensitive to magnetic fields, we expect this to be an accurate estimate of the rotational broadening of WL 17; however, we let \mbox{$v\sin i$}\ be a free parameter of our fits described below. As mentioned above, Class I sources are often observed to have substantial K band veiling (e.g., Greene \& Lada 1996). Veiling is an excess continuum emission which, when added to the stellar spectrum, has the effect of weakening the spectral lines of the star in continuum normalized spectra. Near infrared veiling is assumed to arise from the disk around young stars. Veiling is measured in units of the local stellar continuum, and Doppmann et al. (2005) found a K band veiling of $r_K = 3.9$ for WL 17 using the same NIRSPEC data we use in part here. Doppmann et al. (2003) showed that when magnetic fields are unaccounted for in spectroscopic analysis, the results can be somewhat biased. Therefore, we choose to let the K band veiling be an additional free parameter of the spectral fitting performed here. In addition, we attempt to simultaneously fit both the Keck NIRSPEC and Gemini Phoenix data, which were observed at different times. CTTSs regularly show significant variations in their K band flux on timescales as short as a day (and occasionally shorter), likely as a result of accretion variability (Skrutskie et al. 1996, Carpenter et al. 2001, Eiroa et al. 2002). Additionally, Barsony et al. (2005) have shown that the near-IR brightness of WL 17 is variable, suggesting the K band veiling of WL 17 may be variable. Therefore, we separately solve for the K band veiling in the Keck and Gemini data. In previous studies of the magnetic field properties of TTSs it was found that the Zeeman sensitive \ion{Ti}{1} lines could not be well fit with a single value of the magnetic field strength. Instead, a distribution of magnetic field strengths provides a better fit (Johns--Krull et al. 1999b, 2004; Yang et al. 2005; Johns--Krull 2007). It was also found that fits to the spectra are degenerate in the derived field values unless the fit is limited to specific values of the magnetic field strength, separated by $\sim 2$ kG, which is the approximate ``magnetic resolution'' of the data.
While the NIRSPEC data used here are slightly lower in resolution, the Phoenix data are a bit higher in spectral resolution than that used in the studies cited above. Therefore, we use the same limitations when fitting the spectra of WL 17. We assume the star is composed of regions of 0, 2, 4, and 6 kG magnetic field, and we solve for the filling factor, $f_i$, of each of these components subject to the constraint $\Sigma f_i = 1.0$. The different regions are assumed to be well mixed over the surface of the star -- different components are not divided up into well defined spots or other surface features. The field geometry is assumed to be radial in all regions. Another key assumption is that the temperature structure in all the field regions is assumed to be identical for the purpose of spectrum synthesis: the fields are not confined to cool spots or hot plage--like regions. A final assumption we make here is that the photospheric magnetic field properties of the star are the same between the two observing epochs. This may or may not be a good assumption. Substantial variability is seen in CTTSs, both photometrically and spectroscopically, which has been interpreted as rotational modulation of a non-axisymmetric stellar magnetosphere (e.g., see Bouvier et al. 2007 for a review). This certainly suggests variation of the field geometry above the star, but not necessarily as much variation of the photospheric field itself. Very little is known about variations over timescales of months to years in the photospheric field properties of CTTSs (and nothing is known regarding Class I sources). Two field measurements exist for BP Tau (Johns--Krull et al. 1999b; Johns--Krull 2007) and for T Tau (Guenther et al. 1999; Johns--Krull 2007), and for both stars the mean field strengths recovered from the two epochs agree to within the quoted uncertainties. We therefore assume identical field properties between the two epochs for WL 17 and show below that this provides a good match to the data within the uncertainties. There are then 6 free parameters in our model fits: the K band veiling, $r_K$, for each observing epoch; the value of $f_i$ for the 2, 4 and 6 kG regions ($\Sigma f_i = 1$, so the filling factor of the 0 kG field region is set once $f_i$ is determined for the other 3 regions); and the value of \mbox{$v\sin i$}. Synthetic spectra are convolved with a Gaussian of the appropriate width to represent the spectral resolution before comparison with the data. We have compared both calibration lamp lines and non-saturated telluric lines to Gaussian fits and find that the line profiles are well matched by this assumed line shape. We are therefore confident that a Gaussian is a good approximation for the instrumental profile. We solve for our 6 free parameters using the nonlinear least squares technique of Marquardt (see Bevington \& Robinson 1992) to fit the model spectra to the observed spectra shown in Figure \ref{profiles}. In our first attempt, labelled F1 in Table \ref{fitpars}, the entire observed region shown in Figure \ref{profiles} is used in the fit. The parameters derived from all our fits are listed in Table \ref{fitpars}. In Figure \ref{profiles} we show the spectra of WL 17 in the regions of the Zeeman sensitive \ion{Ti}{1} lines and the CO bandhead, along with our best fitting model spectrum (F1). Also included in the figure is a model with no magnetic field for comparison.
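The bookkeeping of the fit is compact enough to summarize: with components $B_i = (0, 2, 4, 6)$ kG and filling factors $f_i$ constrained by $\Sigma f_i = 1$, the mean field is $\bar B = \Sigma B_i f_i$. A minimal sketch (the filling factors below are made up for illustration only; the fitted values are those reported in Table \ref{fitpars}):

```python
B = [0.0, 2.0, 4.0, 6.0]      # field components, kG
f = [0.15, 0.30, 0.40, 0.15]  # hypothetical filling factors, illustration only
assert abs(sum(f) - 1.0) < 1e-12  # the constraint imposed in the fit
print(round(sum(Bi * fi for Bi, fi in zip(B, f)), 2))  # mean field: 3.1 kG here
```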
The Zeeman sensitive \ion{Ti}{1} lines at 2.2211, 2.2233, and 2.2274 $\mu$m (Land\'e-$g_{eff} = 2.08, 1.66, 1.58$, respectively) are significantly broader in the Phoenix data ($R = 40,000$) than is predicted by the model with no magnetic field. On the other hand, the width of the Zeeman insensitive (Land\'e-$g_{eff} = 0.50$) \ion{Sc}{1} line at 2.2266 $\mu$m is accurately predicted by both models due to its much weaker magnetic broadening. This suggests that the excess broadening seen in the \ion{Ti}{1} lines is not due to an error in our assumed instrumental profile. In the lower resolution ($R = 18,000$) NIRSPEC data, the \ion{Ti}{1} lines again appear broader than predicted by the model with no magnetic field, though the higher veiling associated with that data makes the lines weaker, which, combined with the noise in the data, makes the reality of the excess broadening less certain than in the Gemini Phoenix data. However, the NIRSPEC data are fully consistent with the magnetic broadening clearly seen in the Phoenix data. The mean field, $\bar B = \Sigma B_i \times f_i$, that we find for WL 17 is 2.9 kG. In the spectral regions used for this analysis there are some relatively strong telluric absorption lines. The spectra shown in Figure \ref{profiles} have been corrected for telluric absorption; however, the regions affected by this absorption are likely more uncertain than regions not affected by such absorption. In order to test the sensitivity of our results to possible errors in the telluric correction, we increased the uncertainty of spectral regions where the telluric absorption lines went below 97\% of the continuum during the fitting process. These regions are indicated by the bold lines above the spectra in Figure \ref{profiles}. In these regions, we increased the uncertainty by a factor of 3 and reperformed the fit. The fit parameters derived in this way are also given in Table \ref{fitpars} as those derived from constraints F2. The values of the fitted parameters are almost identical to those derived above (constraints F1). A second concern is that the numerical fitting method might be misled by small changes in $\chi^2$ due to the inclusion of large amounts of continuum regions on which the model has no real effect. Therefore, we performed a third fit (labelled F3 in Table \ref{fitpars}) in which we eliminated much of the continuum and focussed on the lines for the fit constraints. The regions of the spectra used for this fit are shown in bold in Figure \ref{profiles}, and the fit parameters are again reported in Table \ref{fitpars}. For this fit, we maintained the uncertainty of the telluric affected regions at 3 times their nominal values. Again, the fitted parameters are nearly identical to those determined in fits F1 and F2. \section{Discussion} Our detection of a mean field of $\bar B = 2.9$ kG on WL 17 is the first magnetic field measurement on a Class I protostar. Previous studies using K band data of comparable resolution and signal-to-noise level have shown that the field uncertainties are dominated by systematic effects associated with the choice of magnetic model (Johns--Krull et al. 1999b, 2004; Yang et al. 2008) and possible errors in the stellar parameters used to model the star (Yang et al. 2005). Based on these studies, while the formal uncertainty in the mean field determination is quite low, we estimate the true uncertainty in our mean field measurement to be $\sim 10-15$ \%. 
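Two small numerical points are worth making explicit. First, this systematic budget dominates the error bar on $\bar B$; second, the factor of 3 inflation used in fit F2 above simply reduces the $\chi^2$ weight of telluric-contaminated pixels by a factor of 9. A minimal sketch (the per-pixel uncertainties are illustrative):
\begin{verbatim}
import numpy as np

bbar = 2.9  # kG, mean field from the fits
print([round(frac * bbar, 2) for frac in (0.10, 0.15)])  # -> [0.29, 0.43] kG
# compare the 2.9 +/- 0.43 kG quoted in the Summary

sigma = np.array([0.02, 0.02, 0.02])       # illustrative per-pixel errors
telluric = np.array([False, True, False])  # pixels below 97% of continuum
sigma_f2 = np.where(telluric, 3.0 * sigma, sigma)
print((sigma / sigma_f2) ** 2)             # chi-square weights: [1, 1/9, 1]
\end{verbatim}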
Johns--Krull (2007) measured the mean magnetic field on a sample of 14 CTTSs in the Taurus star forming region, finding field strengths which ranged from 1.1 to 2.9 kG, with a mean of 2.2 kG. Yang et al. (2005, 2008) measured the mean field on a total of 5 stars in the TW Hydrae association (TWA) finding values that range from 2.3 to 3.5 kG with a mean field of 3.0 kG. Yang et al. (2008) find that this difference in mean field strength between the two samples is marginally significant, with the older stars (TWA) having a larger field strength on average. However, Yang et al. (2008) point out that the TWA stars have smaller radii on average due to their older age ($\sim 10$ Myr compared to $\sim 2$ Myr for Taurus), and that the mean magnetic flux in the TWA stars is actually smaller than that in the Taurus stars. WL 17 has a field strength that is large relative to the Taurus stars studied by Johns--Krull (2007) and it also has a relatively large radius and correspondingly high magnetic flux. On the other hand, WL 17 is in many ways similar to DF Tau, which has both a large radius (3.4 $R_\odot$) and a mean field of 2.9 kG, equal to that of WL 17. Observations of a statistically significant sample of Class I sources will be required to see how their magnetic field properties compare as a group to older populations of Class II and Class III (diskless T Tauri star) objects. Our derived \mbox{$v\sin i$}$ = 11.7$ km s$^{-1}$, with a formal uncertainty of $0.4$ km s$^{-1}$, is consistent with the value of 12 km s$^{-1}$ reported by Doppmann et al. (2005). The veiling we derive for WL 17 is quite different in the two epochs. For the NIRSPEC data, we find $r_K = 6.4 \pm 0.1$. Using the same NIRSPEC data set, Doppmann et al. (2005) quote a veiling value of $r_K=3.9$, which includes a correction for a systematic effect seen in the best fit synthesis models to observations of MK standards. The measured veiling from Doppmann et al. (2005) without the correction for the systematic effect was $r_K=4.9$, based on fits to two wavelength regions containing strong lines of Al, Mg, and Na. The CO bandhead was the third wavelength region used in the Doppmann et al. (2005) study, but was only used to derive the \mbox{$v\sin i$}\ rotation value. As a result, there are actually no wavelength regions in common between this study and that of Doppmann et al. (2005) for the purpose of determining $r_K$. We do note that when using only the CO bandhead region of WL 17, Doppmann et al. find a value of $r_K = 7.1$ (Doppmann, private communication), though this region was not actually used in their final veiling determination. Another difference between this study and that of Doppmann et al. (2005) for the determination of the veiling is the inclusion of magnetic fields. All the atomic lines used in this analysis and that of Doppmann et al. have some Zeeman sensitivity. At the resolution of the NIRSPEC data ($R = 18,000$) and for the strong, broad (e.g. \ion{Na}{1}) lines used by Doppmann et al. (2005), magnetic fields primarily increase the equivalent width of the lines compared to models which do not include fields (Doppmann et al. 2003). As a result, the somewhat stronger lines produced by magnetic models must be diluted by more veiling flux than that required for these same (weaker) lines produced by non-magnetic models in order to match a given set of observed line strengths. 
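This dilution is easy to make quantitative: adding a featureless veiling continuum of strength $r_K$ (in units of the local stellar continuum) to a normalized spectrum maps it to $(F_\star + r_K)/(1+r_K)$, so a line of depth $d$ is reduced to depth $d/(1+r_K)$. A toy illustration (the line profile is invented; the two $r_K$ values are the epoch veilings derived in this section):
\begin{verbatim}
import numpy as np

wav = np.linspace(2.220, 2.224, 400)  # micron, toy grid
star = 1.0 - 0.4 * np.exp(-0.5 * ((wav - 2.2221) / 2e-4) ** 2)

def veil(spectrum, r_k):
    """Add a featureless veiling continuum r_k and renormalize."""
    return (spectrum + r_k) / (1.0 + r_k)

for r_k in (1.1, 6.4):
    print(f"r_K = {r_k}: line depth = {1.0 - veil(star, r_k).min():.3f}")
# depths 0.190 and 0.054: the same line is ~3.5x shallower at r_K = 6.4
\end{verbatim}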
The veiling, $r_K$, inferred from an observed spectrum will thus be larger when derived by comparison to magnetic models and smaller when derived by comparison to non-magnetic models. This is the effect we see when comparing the results here with those of Doppmann et al. (2005). Our veiling estimate of $r_K = 6.4$ from NIRSPEC is a little less than Doppmann et al. find from the CO bandhead alone, but a little larger than the $r_K = 4.9$ they find from their atomic lines, though accounting for magnetic fields would likely bring their $r_K = 4.9$ value up by some amount. We note that we use a single value of $r_K$ to fit both the CO bandhead and the \ion{Ti}{1} line region in the NIRSPEC data (lower two panels of Figure \ref{profiles}). Some of the difference in $r_K$ found by Doppmann et al. (2005) in different wavelength regions is likely due to model atmosphere and line data uncertainties (see also a discussion of this in Doppmann et al. 2005). As a result, it is likely that our formal uncertainty of 0.1 on the veiling is too low, so we arbitrarily increase it by a factor of 3 and estimate a veiling for our NIRSPEC data of $r_K = 6.4 \pm 0.3$. We adopt the same uncertainty for our Phoenix data, which is at a comparable signal-to-noise level, giving $r_K = 1.1 \pm 0.3$ for this observation time. Assuming the veiling differences quoted above result in a corresponding change to the K band source brightness, and that only the strength of the veiling continuum changed between these observations (i.e. that the underlying star remained constant), WL 17 should have been a factor of $(1.0 + 6.4 \pm 0.3)/(1.0 + 1.1 \pm 0.3) = 3.5 \pm 0.5$ brighter in the K band at the time of the Keck NIRSPEC observation relative to the Gemini Phoenix observation. Interestingly, if we adopt the K band veiling correction of Doppmann et al. (2005), the individual veilings for each epoch are lowered, but the predicted flux ratio between the two observing epochs remains almost the same (3.4 instead of 3.5). The K band flux factor variation of 3.5 calculated above corresponds to a variation of 1.36 magnitudes. There are relatively few studies of the near-IR variability of Class I sources; however, the few that exist suggest that while the implied K band variability of WL 17 is large, it may not be too extreme for such a source. Kenyon and Hartmann (1995) plot a histogram of the standard deviation for protostars in their sample that have two or more K band photometric measurements, finding values that reach as high as $\sigma_K \sim 0.8$. In their study, Park and Kenyon (2002) find values of $\sigma_K$ up to 0.52. Our two veiling measurements give $\sigma_K = 0.96$, while Barsony et al. (1997) report $\sigma_K = 0.57$ for WL 17 on the basis of 6 different K band photometric measurements. Barsony et al. (2005) tabulate $r_K$ variations for several sources in $\rho$ Oph, and while none show quite as large a variation as we find for WL 17, a few others show large $r_K$ variations with some ranging in value from 1 -- 4. Barsony et al. (2005) also study the mid-IR variability of their sources, finding that WL 17 varies in 12.5 $\mu$m flux by a factor of 2.4, which is similar in magnitude to the factor of 3.5 change in the K band brightness found here. Values of $r_K \geq 0.5$ are usually taken to indicate active accretion from a circumstellar disk, and large variable K band veiling such as that shown by WL 17 and several other sources in $\rho$ Oph is usually taken as evidence that this accretion can be highly time variable (e.g. 
Barsony et al. 2005). The exact cause of this high degree of variability is not yet clear, however. The veiling analysis and implied K band photometric variations described above assume the underlying star remains constant; however, it is likely that the star also possesses cool starspots which could contribute some K band variation due to rotational modulation. Veiling is usually attributed to a source producing a featureless continuum. Starspots themselves contribute very weakly to veiling variations since the spectrum of the spots contains many of the lines present in the non-spotted photosphere. The potential effect of this can be estimated by measuring the veiling of a non-accreting T Tauri star using another non-accreting star as a template. In the optical, this level of veiling is $\sim 0.1$ (Hartigan et al. 1991) and should be smaller in the K band since the contrast between spot and quiet atmosphere is much weaker. Weakly or non-accreting T Tauri stars do show K band brightness variations with a peak-to-peak amplitude $\leq 0.2$ magnitudes as a result of spots (e.g. Skrutskie et al. 1996, Carpenter et al. 2001), which is substantially less than the K band variations implied by our veiling measurements or observed by Barsony et al. (2005). Thus, the majority of the K band veiling variation here must be due to changes in the accretion properties of WL 17. One of the motivations for this study is to see to what extent the magnetospheric accretion paradigm in place for Class II YSOs may be applicable to Class I sources. Johns--Krull et al. (1999b) give equations for predicting the stellar magnetic field strength required for 3 different prescriptions (K\"onigl 1991, Cameron \& Campbell 1993, Shu et al. 1994) of magnetospheric accretion theory. K\"onigl (1991) and Shu et al. (1994; see also Camenzind et al. 1990) give the same scaling with stellar and accretion parameters: $B_* \propto M_*^{5/6} R_*^{-3} P_{rot}^{7/6} \dot M^{1/2}$. The scaling from Cameron and Campbell (1993) is very similar: $B_* \propto M_*^{2/3} R_*^{-3} P_{rot}^{29/24} \dot M^{23/40}$. Using these equations to predict the field on WL 17 is uncertain due to difficulties with estimating the luminosity of WL 17 (which affects the derived mass, radius, and accretion rate). Luminosity estimates for WL 17 range from 1.8 $L_\odot$ (Doppmann et al. 2005) to 0.12 $L_\odot$ (Bontemps et al. 2001). Additional uncertainties also affect the mass accretion rate, and the rotation period of WL 17 is unknown. Assuming $T_{\rm eff}$$=3400$ K from Doppmann et al. (2005) is a fairly robust estimate, we use this in combination with the two quoted stellar luminosities to derive the quantities needed to predict the stellar field strength from magnetospheric accretion theory. These stellar properties are reported in Table \ref{magaccretion} along with the field predictions for the studies mentioned above. We note that the stellar luminosity from Bontemps et al. (2001) in combination with the 3400 K effective temperature would give WL 17 an age of $\sim 5$ Myr using the pre-main sequence tracks of Siess et al. (2000). Such an age would be unusual for a Class I source. The Doppmann et al. (2005) stellar luminosity may be more accurate than the Bontemps et al. (2001) value. 
This is because the former was derived using photometric measurements to estimate and correct for the extinction and veiling seen toward the photosphere, together with the spectroscopically determined effective temperature, while the latter was determined by de-reddening near-IR photometry of WL 17 to intrinsic CTTS colors and assuming an intrinsic J band flux to stellar luminosity relationship for a typical CTTS. The Doppmann et al. (2005) approach corrects for the veiling and effective temperature measured specifically for WL 17, while the Bontemps et al. (2001) technique does not. Nevertheless, we report magnetospheric accretion estimates based on both values of the stellar luminosity in order to illustrate the sensitivity of the expected fields to various stellar parameters, notably the luminosity. In addition to issues related to the correct luminosity for WL 17, there is additional uncertainty regarding the accretion rate estimate. The value of $\dot M \sim 1.5 \times 10^{-7} M_\odot$~yr$^{-1}$ quoted in \S 1 is based on the accretion luminosity estimate of Muzerolle et al. (1998) which is in turn based on the Br-$\gamma$ line luminosity estimate from Greene and Lada (1996). A major concern in this process is the extinction correction used to recover the Br-$\gamma$ line luminosity. For example, Greene and Lada (1996) corrected their data for WL 17 back to the CTTS locus in the JHK color--color diagram, not necessarily back to the stellar photosphere. As an example of the sensitivity to the details of extinction corrections and the photometric data used, we note that Doppmann et al. (2005) also compute Br-$\gamma$ line luminosities (their Figure 11). These authors de-redden the H-K color to an intrinsic value of 0.6 and then add in a correction for scattered light based on the models of Whitney et al. (1997). This results in a Br-$\gamma$ line luminosity of $6.8 \times 10^{-4} L_\odot$ (Doppmann, private communication), which in turn gives an accretion luminosity of 2.7 $L_\odot$ using the Muzerolle et al. (1998) relationship. Calculating the mass accretion rate as given in the introduction results in a value of $\dot M \sim 1.4 \times 10^{-6} M_\odot$~yr$^{-1}$. We include the field estimates resulting from this accretion rate in Table \ref{magaccretion}, and we note that such an accretion rate is about the level needed to accrete a 0.5 $M_\odot$ star in $\sim 3.5 \times 10^5$ yr. This large accretion rate estimate for a Class I source is also supported by the estimate of $\dot M = 1 \times 10^{-6} M_\odot$~yr$^{-1}$ found by Greene and Lada (2002) for Oph IRS 43. Obviously, the accretion rate and implied magnetic field are fairly sensitive to the details of the Br-$\gamma$ line luminosity calculation. We therefore recomputed the Br-$\gamma$ line luminosity from the measured equivalent width value of 4.3 \AA\ (Doppmann et al. 2005) and the 2MASS photometry (Skrutskie et al. 2006), correcting for extinction by de-reddening the JHK colors to the CTTS locus and applying an extra $A_K = 0.88$ mag (see Doppmann et al. \S 3.8). We also used a distance of 135 pc to the $\rho$ Oph cloud (Mamajek 2008). This produced a Br-$\gamma$ line luminosity of $2.9 \times 10^{-4} L_\odot$, about 2.5 times higher than that given by Greene \& Lada, due mostly to the more recent photometry and extinction correction technique. This new line luminosity indicates an accretion luminosity of 0.9 $L_\odot$ from the Muzerolle et al. 
(1998) relationship, which implies a mass accretion rate of $4.5 \times 10^{-7} M_\odot$~yr$^{-1}$, well bracketed by the accretion rates given in Table \ref{magaccretion}. All of the above discussion implicitly assumes the Muzerolle et al. (1998) Br-$\gamma$ accretion luminosity relationship holds for Class I sources; however, this relationship was derived based on a sample of Class II objects. If there are any systematic differences between these and Class I sources, the resulting accretion rate will be in error. For example, if the Br-$\gamma$ line is more optically thick in Class I sources due to systematically higher accretion rates, the accretion rates we derive above for WL 17 may be underestimates of the true values. The rotation period reported in Table \ref{magaccretion} is an upper limit based on the stellar radius and measured \mbox{$v\sin i$}. If the rotation period is actually shorter, the derived magnetic field values will be smaller. In this sense, the values reported in the Table are upper limits, depending on the inclination of the source. The field strengths reported in Table \ref{magaccretion} are those corresponding to the equatorial field strength for an assumed dipolar field geometry. The polar field strength in such a geometry is twice this value. The mean field of 2.9 kG found for WL 17 is well above the predicted values for the larger luminosity found by Doppmann et al. (2005); while for the lower luminosity of Bontemps et al. (2001), the measured field value may not be strong enough, particularly if the field is not dominated by the dipole component. In most TTSs, the dipole component is found to be weak (Johns--Krull et al. 1999a; Daou et al. 2006; Yang et al. 2007; Donati et al. 2007). It is important to note that the data presented here only probe the photospheric field strength, while providing few constraints on the field geometry. High spectral resolution near-IR circular spectropolarimetry will likely be required to explore the field geometry on Class I YSOs such as WL 17. In summary, the magnetic field we measure on WL 17 is roughly consistent with the predicted magnetic field strength required for magnetospheric accretion. However, detailed quantitative comparisons are greatly hampered due to a number of uncertainties related to other relevant stellar parameters that are currently difficult to estimate for Class I sources (see also the discussion in Prato et al. 2009). Class I sources by their nature are deeply embedded and therefore suffer substantial extinction. Uncertainties involved with the proper way to de-redden observations of these sources, combined with variable accretion luminosity (continuum and line), a general lack of measured rotation periods for Class I objects, and uncertainties in the methods used to measure accretion rates suggest that much work is still left to do before the magnetospheric accretion paradigm can be firmly established or refuted for Class I sources. On the Sun and active stars, it is expected that magnetic flux tubes are confined at photospheric levels by the gas pressure in the external non-magnetic atmosphere. For example, Spruit \& Zweibel (1979) computed flux tube equilibrium models, showing that the magnetic field strength is expected to scale with gas pressure in the surrounding non-magnetic photosphere. Similar results were found by Rajaguru et al. (2002). 
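Quantitatively, the confinement condition bounds the field by the pressure-balance value $B_{eq}=(8\pi P_g)^{1/2}$ in Gaussian units, to which we return below. As a sense of scale only (the gas pressures here are illustrative; the values actually used are taken from NextGen model atmospheres as described below):
\begin{verbatim}
import numpy as np

def b_equipartition(P_gas):
    """Equipartition field (Gauss) for external gas pressure
    P_gas in dyn cm^-2: B_eq = sqrt(8 * pi * P_gas)."""
    return np.sqrt(8.0 * np.pi * P_gas)

for P in (1e4, 1e5):  # illustrative photospheric pressures
    print(f"P_g = {P:.0e} dyn/cm^2 -> B_eq = {b_equipartition(P)/1e3:.1f} kG")
# -> 0.5 kG and 1.6 kG, well below the 2.9 kG measured for WL 17
\end{verbatim}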
Field strengths set by such pressure equipartition considerations appear to be observed in active G and K dwarfs (e.g. Saar 1990, 1994, 1996) and possibly in M dwarfs (e.g. Johns--Krull \& Valenti 1996). Class I YSOs have relatively low surface gravities and hence low photospheric gas pressures, so that equipartition flux tubes would have relatively low magnetic field strengths compared to cool dwarfs. The maximum field strength allowed for a confined magnetic flux tube is $B_{eq} = (8 \pi P_g)^{1/2}$ where $P_g$ is the gas pressure at the observed level in the stellar atmosphere. Here, we take as a lower limit in the atmosphere (upper limit in pressure) the level where the local temperature is equal to the effective temperature (3400 K) in the NextGen models of the appropriate gravity. This is the approximate level at which the continuum forms, with the \ion{Ti}{1} lines forming over a range of atmospheric layers above this level at lower pressure. The values of $B_{eq}$ are given in Table \ref{magaccretion}. The mean field we measure for WL 17 is well above the value of $B_{eq}$ for either assumed luminosity and resulting gravity. This suggests that pressure equipartition does not hold in the case of Class I YSOs, and it also suggests that the pressure in the atmospheres of these young stars is dominated by their magnetic fields rather than by the gas. Indeed, our fit for the magnetic field of WL 17 has only 3\% of the surface as field free; however, the uncertainty on the filling factor of this component is such that this is not a significant measurement. It may well be that the entire surface of WL 17 is covered with strong magnetic fields. What is certain is that the field we measure is too strong to be accounted for by current models of dynamo action on fully convective stars, where the field strength is equal to the equipartition value (Bercik et al. 2005, Chabrier \& K\"uker 2006, Dobler et al. 2006) and the filling factor of this field is typically very low (Cattaneo 1999, Bercik et al. 2005). Rapid rotation can enhance field production and may help organize magnetic fields into large scale dipolar or quadrupolar geometries (e.g. Dobler et al. 2006, Brown et al. 2007); however, it is not yet clear that such models can produce mean fields on young stars in the range of $2-3$ kG. Stellar magnetic fields give rise to ``activity" in late-type stars. Activity is typically traced by line emission or broad band emission at high energy wavelengths such as X-rays. Pevtsov et al. (2003) find an excellent correlation between the X-ray luminosity, $L_X$, and magnetic flux in solar active regions and dwarf-type stars, with each of these two quantities ranging over almost 11 orders of magnitude. Using the X-ray luminosity--magnetic flux relationship of Pevtsov et al. (2003) with our mean magnetic field measurement of 2.9 kG for WL 17, we can predict $L_X$ for this YSO if the radius is known. In Table \ref{magaccretion} we report the predicted X-ray luminosity values for the two stellar radii implied by the different stellar luminosity estimates of Bontemps et al. (2001) and Doppmann et al. (2005). WL 17 has been detected in X-rays by Chandra (Imanishi et al. 2001) and by XMM (Ozawa et al. 2005), and possibly by ASCA, though crowding was an issue for this latter observation (Kamata et al. 1997). The Chandra observation gives an X-ray luminosity of $L_X = 3.1 \times 10^{29}$ erg s$^{-1}$ while the XMM observation found $L_X = 9.6 \times 10^{29}$ erg s$^{-1}$. 
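The prediction step itself is compact: we take the unsigned surface flux to be $\Phi = 4\pi R_*^2 \bar B$ and convert it to $L_X$ with the nearly linear Pevtsov et al. (2003) calibration. A sketch of the flux computation only (the radii below are placeholders rather than the Table~\ref{magaccretion} values, and the Pevtsov normalization is not reproduced here):
\begin{verbatim}
import numpy as np

R_SUN = 6.957e10  # cm

def magnetic_flux(Bbar_kG, R_in_Rsun):
    """Unsigned surface flux Phi = 4 pi R^2 Bbar, in Mx."""
    R = R_in_Rsun * R_SUN
    return 4.0 * np.pi * R**2 * (Bbar_kG * 1.0e3)

for R in (1.0, 3.0):  # placeholder stellar radii
    print(f"R = {R} Rsun: Phi = {magnetic_flux(2.9, R):.2e} Mx")
# feeding Phi into the L_X--flux calibration gives the predictions above
\end{verbatim}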
Both values are lower than either of the predicted values in Table \ref{magaccretion}, though the XMM value of Ozawa et al. (2005) is not much lower than the lower estimate, which is based on the Bontemps et al. (2001) luminosity for WL 17. We do note though that the Pevtsov et al. (2003) X-ray luminosity--magnetic flux relationship is based on observed X-ray emission outside obvious flares: the so-called quiescent emission level. The XMM X-ray lightcurve of WL 17 presented by Ozawa et al. (2005) is dominated by a strong, long lasting flare. Thus, the quiescent X-ray emission of WL 17 is likely significantly weaker than the predicted X-ray emission, independent of the exact value of the stellar radius for WL 17. Pevtsov et al. (2003) found evidence that TTSs were underluminous in X-rays relative to the relationship defined by dwarf stars, and this result appears to hold for a majority of these stars (Johns--Krull 2007; Yang et al. 2008). In that sense, WL 17 follows the same pattern seen in most Class II sources. Johns--Krull (2007) suggested that this might be due to the fact that the magnetic pressure at the surface of these stars is so strong. Perhaps the ubiquitous, strong fields of these young stars inhibit (relative to dwarf stars) photospheric gas motions from moving flux tubes around and building up the magnetic stresses that eventually result in coronal heating. Alternatively, WL 17 is deeply embedded and may suffer substantial X-ray attenuation, though this is nominally accounted for in the X-ray measurements. \section{Summary} We have measured the magnetic field on the Class I source WL 17, and we find the star is largely covered by strong magnetic fields. The surface averaged mean field on the star is $2.9 \pm 0.43$ kG. Comparing this field with predictions from magnetospheric accretion theory or with X-ray measurements depends fairly sensitively on the value of the stellar radius appropriate for WL 17. For the relatively large luminosity and associated stellar radius from Doppmann et al. (2005), the measured field values are more than strong enough to be consistent with magnetospheric accretion; however, for the lower radius implied by the Bontemps et al. (2001) luminosity, the field may in fact be too weak. For either radius, the measured fields are stronger than expected from pressure balance arguments for magnetic flux tubes confined by a non-magnetic atmosphere, suggesting that the star is likely fully covered by fields. In addition, independent of the radius assumed, the measured X-ray emission is lower than is to be expected based on correlations established between these two quantities on the Sun and late-type dwarf stars. \acknowledgements We are pleased to acknowledge numerous, stimulating discussions with J. Valenti on all aspects of the work reported here. We also acknowledge many useful comments and suggestions from an anonymous referee. CMJ-K wishes to acknowledge partial support for this research from the NASA Origins of Solar Systems program through grant numbers NAG5-13103 and NNG06GD85G made to Rice University. TPG acknowledges support from NASA's Origins of Solar Systems program via WBS 811073.02.07.01.89. KRC acknowledges support for this work by NASA through the Spitzer Space Telescope Fellowship Program, through a contract issued by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. This work made use of the SIMBAD reference database, the NASA Astrophysics Data System, and the VALD line database. \clearpage
\section{Introduction} Understanding the effect of quenched disorder on continuous quantum phase transitions is a question of enduring interest~\cite{vojta2013,*vojta2019}, motivated by the ubiquitous presence of imperfections in the condensed matter systems that exhibit such transitions. In the clean limit, the divergence of the correlation length at criticality produces universal critical phenomena that are controlled by renormalization-group (RG) fixed points of a translationally invariant continuum quantum field theory. A given disorder configuration manifestly breaks translation symmetry even on long length scales, and thus produces behavior very different from that of a translationally invariant system. However, that symmetry is restored in physical properties upon averaging over such configurations. The cases of main interest, then, are those in which disorder qualitatively affects the long-distance physics even after disorder averaging. For quantum critical points (QCPs) described at long distances by a (2+1)D strongly interacting conformal field theory in the clean limit, such as many QCPs of interest in condensed matter physics~\cite{QPT}, determining the fate of the system in the infrared after disorder averaging is a problem fraught with technical difficulties. For example, thermodynamic properties are in principle determined by first computing the partition function of a strongly coupled quantum field theory with spatially random couplings, then averaging its logarithm over some chosen probability distribution. An approach better suited to determining the long-distance behavior of the system, our only concern here, is to investigate the RG flow of disorder-averaged observables. It was recently shown~\cite{narovlansky2018,aharony2018} that this is equivalent to studying the RG flow of an effective theory with disorder-induced translationally-invariant interactions, derived using the standard replica trick~\cite{emery1975}, despite the formally nonlocal nature of such theories and oft-invoked concerns about the validity of analytically continuing the number of replicas to zero. A situation of particular interest is one in which disorder produces RG flows on the critical hypersurface that connect the clean fixed point (CFP) describing the transition in the absence of disorder to fixed points characterized by a nonzero value of the effective disorder coupling(s). Such disordered fixed points (DFPs) exhibit scaling behavior but, by contrast with CFPs, no self-averaging in the thermodynamic limit~\cite{aharony1996}. We will be exclusively concerned with random-mass disorder, also known as random-$T_c$ disorder in the context of classical (thermal) phase transitions, every configuration of which preserves those symmetries of the system that are broken spontaneously at the transition. (In $d=2$ spatial dimensions, the focus of this paper, random-field disorder---which violates those symmetries---precludes long-range order, and thus the possibility of a sharp transition~\cite{imry1975,aizenman1989,greenblatt2009,aizenman2012}.) The standard scenario is one in which random-mass disorder is a relevant perturbation at the CFP [Fig.~\ref{fig:RG}(a)], and drives a direct RG flow to a DFP. For short-range correlated disorder, this occurs when the correlation length exponent $\nu_\text{CFP}$ of the CFP obeys the Harris inequality $\nu_\text{CFP}<2/d$~\cite{harris1974}. 
Examples include the superfluid-Mott glass transition of bosons with particle-hole symmetry in $d=2$~\cite{vojta2016} and $d=3$~\cite{crewse2018}, described by the $O(2)$ vector model with random-mass disorder in (2+1)D ($\nu_\text{CFP}\approx 0.67<1$) and (3+1)D ($\nu_\text{CFP}=1/2<2/3$), respectively. The true correlation length exponent $\nu$ in the presence of disorder, i.e., its value at the DFP, obeys the Chayes inequality $\nu\geqslant 2/d$~\cite{chayes1986}; the dynamic critical exponent $z$ changes from its Lorentz-invariant value $z=1$ at the conformally invariant $O(2)$ Wilson-Fisher fixed point to some noninteger but equally universal (and finite) value $z>1$ at the DFP (see Table~\ref{tab:SFMG}). Finite-randomness DFPs in the $O(n)$ vector model are in principle accessible via perturbative RG analyses of the disorder-averaged effective field theory combined with (double) epsilon~\cite{Khmelnitskii1978,Dorogovtsev1980,Boyanovsky1982,boyanovsky1983,Lawrie1984} or $1/n$ expansions~\cite{goldman2020}. Infinite-randomness DFPs (for which $z=\infty$) are also possible, such as those describing the random-bond transverse-field Ising model at criticality in (1+1)D~\cite{fisher1992,*fisher1995} ($\nu_\text{CFP}=1<2$) and (2+1)D~\cite{motrunich2000} ($\nu_\text{CFP}\approx 0.63<1$). These, however, are not adequately captured by perturbative RG analyses of a disorder-averaged continuum field theory, given the runaway flow to infinite disorder [Fig.~\ref{fig:RG}(b)]. Rather, they can be quantitatively studied using strong-disorder real-space RG methods~\cite{ma1979,dasgupta1980,fisher1992,*fisher1995,motrunich2000} which, in spatial dimensions $d\geqslant 2$ at least, must be implemented numerically in microscopic lattice models. \begin{figure}[t] \includegraphics[width=0.9\columnwidth]{RG.pdf} \caption{Possible schematic RG flows involving disordered fixed points (DFP) on the critical hypersurface in the vicinity of a clean fixed point (CFP), corresponding to a conformal field theory with nonzero interaction strength $g$, perturbed by disorder $\Delta$.} \label{fig:RG} \end{figure} If disorder is Harris-irrelevant, $\nu_\text{CFP}>2/d$, the standard lore is that disorder has no effect on the phase transition at long distances [Fig.~\ref{fig:RG}(c)]. However, as observed in Ref.~\cite{aharony2016}, an irrelevant perturbation with a finite coefficient can have nontrivial consequences on the RG flow finitely away from the CFP, just as formally irrelevant interactions at a stable noninteracting fixed point can eventually trigger a phase transition and produce a critical fixed point. The simplest possible RG flow leading to a DFP in the case of Harris-irrelevant disorder is illustrated in Fig.~\ref{fig:RG}(d), and was recently found in a double epsilon-expansion study of the random-mass chiral XY Gross-Neveu-Yukawa (GNY) model~\cite{Yerzhakov2018}, a fermionic analog of the $O(2)$ vector model that, absent disorder, describes the quantum phase transition between a Dirac semimetal and a gapped superconductor (Refs.~\cite{Roy2013,Zerf2016}; see also Sec.~\ref{sec:model}). Below a separatrix line controlled by a disordered saddle-type fixed point (DFP$_1$), the transition is in the same universality class as the clean system, while above that separatrix line, the transition is governed by a disordered critical point (DFP$_2$). A similar RG flow is found in the classical 2D Ising model with binary ($\pm J$) random-bond disorder~\cite{picco2006}. 
For a weak concentration of antiferromagnetic bonds randomly distributed amidst ferromagnetic bonds, the paramagnetic-ferromagnetic critical behavior is controlled by the clean 2D Ising fixed point, consistent with the fact that random-mass disorder is (marginally) Harris irrelevant at that fixed point~\cite{dotsenko1983,zhu2015}. For sufficiently strong disorder, however, the clean critical behavior gives way to critical behavior controlled by a zero-temperature disordered fixed point (spin-glass critical point) via an intervening disordered multicritical point, the Nishimori point~\cite{nishimori1980,*nishimori1981,honecker2001}. Coming back to the RG flow of the random-mass chiral XY GNY model~\cite{Yerzhakov2018}, depending on the number of fermion flavors (see Sec.~\ref{sec:model}) the disordered critical point (DFP$_2$) is found to be either a standard sink-type fixed point, as illustrated in Fig.~\ref{fig:RG}(d), or a fixed point of stable-focus type. In the latter case, RG trajectories asymptotically spiral towards the fixed point, implying oscillatory corrections to scaling. Stable-focus fixed points have been found before in replica RG studies of both classical~\cite{aharony1975} and quantum~\cite{Khmelnitskii1978,Dorogovtsev1980,Boyanovsky1982,boyanovsky1983,Lawrie1984,kirkpatrick1996} disordered systems, and are sometimes considered an artefact of perturbative replica-based RG. However, such flows cannot be ruled out as a matter of principle, since DFPs are in general non-unitary, and real, non-unitary, scale-invariant quantum field theories can have pairs of scaling fields with complex-conjugate dimensions~\cite{aharony2018,gorbenko2018}. Furthermore, oscillations in scaling laws, characteristic of spiraling or cyclic RG flows, have also been found in numerical studies of disordered holographic models~\cite{hartnoll2016} which rely neither on the replica trick nor on perturbation theory (in either the interaction or disorder strengths). A recent Monte Carlo study of classically frustrated 3D Heisenberg antiferromagnets also supports the existence of a stable-focus critical point (in this case, in a clean system)~\cite{nagano2019}. In the present work, we extend the study of Ref.~\cite{Yerzhakov2018} in two directions. First, in Ref.~\cite{Yerzhakov2018}, only short-range correlated (or equivalently at long distances, uncorrelated) random-mass disorder was considered. Here we additionally consider random-mass disorder with correlations between two spatial points $\b{x},\b{x}'$ that decay asymptotically as a power law, $\sim|\b{x}-\b{x}'|^{-\alpha}$, with $\alpha<d$. (For $\alpha>d$, the correlations are short range, as the disorder correlation function in momentum space remains finite in the long-wavelength limit.) A clean critical point with correlation length exponent $\nu_\text{CFP}$ is perturbatively stable against such long-range correlated disorder if $\nu_\text{CFP} > 2/\min(d,\alpha)$~\cite{Weinrib1983}; this type of disorder thus generally has a stronger effect at phase transitions than uncorrelated disorder. Second, Ref.~\cite{Yerzhakov2018} only studied the chiral XY GNY model. Here, we perform a comprehensive study of the effect of random-mass disorder in the three standard families of critical GNY models: the chiral Ising, XY, and Heisenberg models~\cite{Rosenstein1993,Zerf2017}, fermionic analogs of the Ising, XY, and Heisenberg Wilson-Fisher universality classes, respectively. 
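Since these stability criteria recur throughout, we note that they reduce to a one-line test; the helper below is ours, purely for orientation:
\begin{verbatim}
def disorder_relevant(nu_clean, d, alpha=None):
    """Random-mass disorder is relevant at a clean fixed point iff
    nu_clean < 2/d (Harris, short-range correlations) or
    nu_clean < 2/min(d, alpha) (Weinrib-Halperin, correlations
    decaying as |x|^(-alpha) with alpha < d)."""
    a_eff = d if alpha is None else min(d, alpha)
    return nu_clean < 2.0 / a_eff

print(disorder_relevant(0.67, d=2))            # O(2) WF, d=2: True
print(disorder_relevant(1.0, d=1))             # (1+1)D TFIM: True
print(disorder_relevant(1.2, d=2))             # hypothetical nu: False
print(disorder_relevant(1.2, d=2, alpha=1.5))  # same nu, LR disorder: True
\end{verbatim}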
As we briefly review in Sec.~\ref{sec:model}, these chiral GNY models describe a variety of QCPs in condensed matter systems~\cite{boyack2020}. Our main results are summarized as follows. For the chiral Ising GNY model, we find new disordered multicritical points, and for the chiral XY and Heisenberg GNY models, new disordered critical and multicritical points. As in Ref.~\cite{Yerzhakov2018}, some of the disordered QCPs found exhibit usual sink-type RG flows, while others are of stable-focus type. We also explore how the structure of the RG flow on the critical hypersurface evolves upon tuning RG-invariant system parameters, here the number $N$ of fermion flavors and the exponent $\alpha$ describing disorder correlations. We are particularly interested in bifurcations of these RG flows~\cite{gukov2017}, where the number or stability properties of fixed points suddenly change as a function of $N$ and $\alpha$, called control parameters in bifurcation theory. We find and analyze instances of the saddle-node bifurcation, also known as the fixed-point annihilation scenario~\cite{Kaplan2009}, at which a repulsive fixed point and an attractive fixed point coalesce and disappear into the complex plane. This type of bifurcation appears or has been argued to appear in RG flows in a variety of problems of current interest in both high-energy physics~\cite{kubota2001,kaveh2005,gies2006,Kaplan2009,braun2011,herbut2016,gracey2018,gorbenko2018} and condensed matter physics/statistical mechanics~\cite{herbut2014,janssen2015,*janssen2016,*janssen2017,nahum2015,wang2017,gorbenko2018b,serna2019,ihrig2019,ma2019,nahum2019}. The characteristic phenomenology associated with it includes Berezinskii-Kosterlitz-Thouless/Miransky scaling, walking/pseudo-critical behavior, and weakly first-order transitions. In our particular problem, it manifests itself in the existence of an anomalously (i.e., exponentially) large length scale $L_*$ that governs the crossover between two distinct universality classes of critical behavior. In much previous work, the saddle-node bifurcation is tuned by a parameter such as space(time) dimensionality $d$ or the integer number $N$ of components of a fermionic or bosonic field, and thus cannot be approached continuously in practice. Here, for fixed $d$ and $N$ the bifurcation can be approached by continuously tuning the exponent $\alpha$ for disorder correlations. Besides the saddle-node bifurcation, we also discover instances of more exotic bifurcations~\cite{gukov2017}: the transcritical bifurcation, at which two fixed points exchange their stability properties without annihilating, and the supercritical Hopf (or Poincar\'e-Andronov-Hopf) bifurcation~\cite{Marsden1976}. The latter is a bifurcation at which a stable-focus QCP loses its stability by giving birth to a stable limit cycle, which then controls the asymptotic critical behavior. A possibility first considered by Wilson~\cite{wilson1971}, stable RG limit cycles lead to log-periodic scaling behavior~\cite{Veytsman1993}, i.e., discrete scale invariance (as opposed to log-periodic behavior of {\it corrections} to scaling at stable-focus points). Hopf bifurcations in RG flows were found in classical disordered $O(n)$ models~\cite{Weinrib1983,Athorne1985,*Athorne1986}, but only the subcritical Hopf bifurcation~\cite{Marsden1976} was found, where an unstable-focus fixed point becomes stable and gives birth to an {\it unstable} limit cycle. 
As a result, the models studied in Refs.~\cite{Weinrib1983,Athorne1985,*Athorne1986} did not exhibit log-periodic critical scaling behavior in the long-distance limit. The rest of the paper is structured as follows. In Sec.~\ref{sec:model}, we briefly describe the chiral GNY models with long-range correlated random-mass quenched disorder. In Sec.~\ref{sec:RG}, we describe the perturbative RG scheme used to derive beta functions on the critical hypersurface. By contrast with Ref.~\cite{Yerzhakov2018}, where the double epsilon~\cite{Dorogovtsev1980,Boyanovsky1982,boyanovsky1983,Lawrie1984} expansion was sufficient to tame RG flows in the presence of uncorrelated disorder, here we use a controlled {\it triple} epsilon expansion~\cite{DeCesare1994} at one-loop order that allows us to tame the flow of both interaction and correlated disorder strengths. In Sec.~\ref{sec:FPs}, we investigate the fixed points of the RG beta functions derived in Sec.~\ref{sec:RG}, focusing on DFPs and analyzing their linear stability. We compute critical exponents and anomalous dimensions at all DFPs. In Sec.~\ref{sec:RGflows}, we discuss qualitative features of the RG flow, including various bifurcations that occur under changes of the control parameters $N$ and $\alpha$, and their consequences for critical properties. We conclude in Sec.~\ref{sec:conclusion} with a summary of our main results and a few directions for further research. Three appendices (App.~\ref{app:Z}-\ref{app:LimitCycleScaling}) contain the details of some calculations. \section{The random-mass GNY models} \label{sec:model} Our starting point is the family of chiral $O(n)$ GNY models in 2+1 dimensions at zero temperature, described by the Euclidean action: \begin{align}\label{S} S=\int d^2\b{x}\,d\tau \left( \mathcal{L}_\phi+\mathcal{L}_\psi+\mathcal{L}_{\psi \phi} \right), \end{align} where $\b{x}$ denotes spatial coordinates, and $\tau$ is imaginary time. The model consists of a real $n$-component scalar field $\b{\phi}=(\phi^1,\ldots,\phi^n)$, the order parameter, governed by the Lagrangian: \begin{align}\label{Lphi} \c{L}_\phi=(\partial_\tau\b{\phi})^2+c_b^2(\nabla\b{\phi})^2+r\b{\phi}^2+\lambda^2(\b{\phi}^2)^2, \end{align} where $\b{\phi}^2=\b{\phi}\cdot\b{\phi}=\sum_{i=1}^n(\phi^i)^2$. It is coupled to a Dirac fermion field $\psi$, described by the Lagrangian: \begin{align}\label{Lpsi} \mathcal{L}_\psi = i \overline{\psi} (\gamma_0 \partial_\tau + c_f \b{\gamma} \cdot \nabla) \psi. \end{align} The scalar mass squared $r$ in Eq.~(\ref{Lphi}) tunes the model through criticality: $r<0$ gives a phase with spontaneously broken $O(n)$ symmetry, $r>0$ is the symmetric phase, and $r=0$ is the critical point. The parameter $\lambda^2$ describes self-interactions of the order parameter. We define the Dirac adjoint in Eq.~(\ref{Lpsi}) as $\overline{\psi} = -i \psi^\dagger \gamma_0$. We denote $\b{\gamma}=(\gamma_1,\gamma_2)$, and $\gamma_\mu$, $\mu=0,1,2$ are Hermitian Dirac matrices obeying the $SO(3)$ Clifford algebra $\{\gamma_\mu,\gamma_\nu\}=2\delta_{\mu\nu}$. In the ordinary GNY model, Lorentz invariance (exact or emergent at criticality~\cite{roy2016}) demands that the fermion $c_f$ and boson $c_b$ velocities be equal, but in the presence of quenched disorder, to be introduced below, the ratio $c=c_f/c_b$ will flow under RG transformations. 
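For concreteness, one admissible two-dimensional representation (a standard choice; any unitarily equivalent set works equally well) is $\gamma_\mu=(\sigma_3,\sigma_1,\sigma_2)$, as the following check confirms:
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [s3, s1, s2]  # gamma_0, gamma_1, gamma_2

for mu in range(3):
    assert np.allclose(gamma[mu], gamma[mu].conj().T)  # Hermitian
    for nu in range(3):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2.0 * (mu == nu) * np.eye(2))
print("SO(3) Clifford algebra verified")
\end{verbatim}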
We perform perturbative calculations near four dimensions at one-loop order in the context of a particular epsilon-expansion scheme to be explained below, but we are ultimately interested in (2+1)D physics. As is customary for these types of problems (see, e.g., Ref.~\cite{Zerf2017}), we adopt a naive dimensional-regularization prescription according to which all Dirac matrices anticommute~\cite{chanowitz1979} and spinor traces over products of an odd number of Dirac matrices vanish~\cite{DREG3}. In addition to a spinor index, the field $\psi$ carries a flavor index. With the dimensional-regularization prescription just mentioned, perturbative results only depend on the total number of (complex) fermionic degrees of freedom, i.e., the dimension of the chosen representation of the Dirac algebra, times the number of flavors. We will present our results in terms of the number $N$ of flavors of two-component Dirac fermions (i.e., the number of linear band crossing points at the Fermi level in a condensed matter system), but they can alternatively be interpreted as pertaining to $N_f=N/2$ flavors of four-component Dirac fermions when $N$ is even. We consider the cases $n=1,2,3$, corresponding to the chiral Ising, XY, and Heisenberg GNY models, respectively~\cite{Rosenstein1993,Zerf2017}. The form of the Yukawa coupling $\c{L}_{\psi\phi}$ in Eq.~(\ref{S}) differs in each case. In the chiral Ising GNY model~\cite{zinn-justin1991}, a single real scalar $\phi$ couples to the fermion mass $i\overline{\psi}\psi$, \begin{align}\label{YukIsing} \mathcal{L}_{\psi \phi}^\text{Ising}= i h \phi \overline{\psi} \psi, \end{align} with coupling strength $h$. The Yukawa coupling in the chiral XY GNY model can be formulated in different but equivalent ways, depending on the choice of spinor representation. In the four-component representation, the Yukawa coupling can be written as a coupling to both the ordinary mass $i\overline{\psi}\psi$ and an axial mass $\overline{\psi}\gamma_5\psi$, \begin{align} \c{L}_{\psi\phi}^\text{XY}=ih\overline{\psi}(\phi^1+i\gamma_5\phi^2)\psi, \end{align} and is equivalent to the Nambu--Jona-Lasinio model~\cite{nambu1961}. Here, one utilizes a four-dimensional representation $\gamma_\mu$, $\mu=0,1,2,3$ of the $SO(4)$ Clifford algebra, and $\gamma_5=\gamma_0\gamma_1\gamma_2\gamma_3$. In a different spinor representation~\cite{SpinorRep}, the model can be written as a coupling to a Majorana mass, \begin{align}\label{YukXY} \c{L}_{\psi\phi}^\text{XY}=\frac{h}{2}(\phi^*\psi^Ti\gamma_2\psi+\mathrm{H.c.}), \end{align} where the $O(2)$ order parameter $\b{\phi}=(\phi^1,\phi^2)$ is expressed as a complex scalar field $\phi=\phi^1+i\phi^2$. Finally, the Yukawa coupling in the chiral Heisenberg GNY model is: \begin{align}\label{YukHeis} \c{L}_{\psi\phi}^\text{Heis}=ih\b{\phi}\cdot\overline{\psi}\boldsymbol{\sigma}\psi, \end{align} where $\boldsymbol{\sigma}=(\sigma_1,\sigma_2,\sigma_3)$ forms a spin-1/2 representation of the $SU(2)$ algebra. For different values of $N$, the $O(n)$ GNY models introduced above describe a variety of quantum phase transitions in (2+1)D condensed matter systems~\cite{boyack2020}. For $N=4$ (spinful fermions) and $N=2$ (spinless fermions), the chiral Ising GNY model ($n=1$) describes a transition from a Dirac semimetal to an insulator with charge-density-wave order on the honeycomb lattice~\cite{herbut2006}. For $N=1$, the model describes a ferromagnetic transition on the surface of a 3D topological insulator~\cite{Xu2010}. 
For $N=1/2$, which can be interpreted as a model containing a single flavor of two-component Majorana fermions, the model describes the time-reversal symmetry-breaking transition on the surface of a 3D topological superconductor~\cite{Grover2014}, which exhibits an emergent $\c{N}=1$ supersymmetry~\cite{sonoda2011,Grover2014,fei2016}. Turning to the chiral XY GNY model ($n=2$), the cases $N=4$ and $N=2$ describe a quantum phase transition from a Dirac semimetal (spinful or spinless, respectively) to an insulator with Kekul\'e valence-bond-solid (VBS) order on the honeycomb lattice~\cite{hou2007,Roy2013}, or to an insulator with columnar VBS order on the $\pi$-flux square lattice~\cite{zhou2018}. The spontaneously broken symmetries in those examples are discrete $\mathbb{Z}_3$ and $\mathbb{Z}_4$ point group symmetries, respectively, but those anisotropies are irrelevant perturbations at the $O(2)$-symmetric GNY fixed point, at least in the large-$N$ limit~\cite{li2017,zerf2020}. However, in those VBS realizations of chiral XY GNY criticality, spatial randomness necessarily couples linearly to the VBS order parameter: it thus acts as random-field disorder, which destroys the $d=2$ critical point~\cite{RandomField}. Alternatively, the chiral XY GNY model also describes a semimetal-superconductor transition in a system with $N$ two-component Dirac fermions ($N=4$ for spinful fermions on the honeycomb lattice~\cite{Roy2013}), in which case the $U(1)\cong SO(2)$ symmetry is exact and random-field disorder is forbidden by conservation of particle number. For $N=1$, the model describes a superconducting transition on the surface of a 3D topological insulator, and exhibits an emergent $\mathcal{N}=2$ supersymmetry~\cite{Grover2014,ponte2014,Roy2013,Zerf2016,fei2016,witczak-krempa2016}. Finally, for $N=4$ the chiral Heisenberg GNY model ($n=3$) describes the transition from a Dirac semimetal to an insulator with antiferromagnetic spin-density-wave order on the honeycomb lattice~\cite{herbut2006}. We model quenched random-mass disorder by randomness in the scalar mass squared, $r(\b{x})=r_0+\delta r(\b{x})$, where $\delta r(\b{x})$ is a Gaussian random variable of zero mean and correlation function~\cite{Weinrib1983}: \begin{align}\label{DisCorr} \overline{\delta r(\b{x})\delta r(\b{x}')}\propto\Delta\delta(\b{x}-\b{x}')+\frac{v}{|\b{x}-\b{x}'|^\alpha}, \end{align} where $\overline{\cdots}$ denotes disorder averaging. (Random-mass disorder that couples directly to fermions is perturbatively irrelevant in the epsilon-expansion scheme we utilize, as we explain in more detail in Sec.~\ref{sec:RG}.) The uniform part $r_0$ is the tuning parameter for the transition, and $\Delta$ and $v$ are the short-range and long-range correlated disorder strengths, respectively. Even when considering initial conditions for the RG with only long-range correlated disorder, $\Delta=0$, short-range correlated disorder is generated perturbatively already at one-loop order, see Eq.~(\ref{bD}), and should be kept in the space of couplings. By contrast, long-range correlated disorder cannot be generated perturbatively from short-range correlated disorder, see Eq.~(\ref{bv}). 
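For numerical experiments it is useful to note that a Gaussian random mass with the correlator of Eq.~(\ref{DisCorr}) can be generated by Fourier filtering: white noise is weighted by the square root of the target spectral density, which for $\alpha<d$ behaves as $S(\b{q})\sim\Delta+\mathrm{const}\times q^{\alpha-d}$ at small $q$. A minimal $d=2$ sketch (grid size and normalizations arbitrary):
\begin{verbatim}
import numpy as np

def correlated_mass_disorder(L, alpha, Delta=0.0, v=1.0, seed=0):
    """L x L Gaussian field with spectral density
    S(q) = Delta + v * q**(alpha - 2), i.e. real-space correlations
    decaying as |x|**(-alpha) for 0 < alpha < 2 (d = 2)."""
    rng = np.random.default_rng(seed)
    qx = 2.0 * np.pi * np.fft.fftfreq(L)
    q = np.sqrt(qx[:, None] ** 2 + qx[None, :] ** 2)
    S = np.zeros_like(q)
    nz = q > 0.0
    S[nz] = Delta + v * q[nz] ** (alpha - 2.0)  # q = 0 mode dropped
    noise = rng.normal(size=(L, L))
    field = np.fft.ifft2(np.sqrt(S) * np.fft.fft2(noise)).real
    return field - field.mean()

dr = correlated_mass_disorder(256, alpha=1.0)
print(dr.std())  # overall scale set by v and the grid, not universal
\end{verbatim}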
We use the replica trick to average over disorder~\cite{QPT}, which induces an effective two-body interaction, \begin{align}\label{Sdis} S_\text{dis}=-\frac{\Delta}{2} \sum_{ab}\int d^2\b{x}\,d\tau\,d\tau'\, \b{\phi}_a^2(\b{x},\tau)\b{\phi}_b^2(\b{x},\tau') -\frac{v}{2} \sum_{ab}\int d^2\b{x}\,d^2\b{x}'\,d\tau\,d\tau'\, \frac{ \b{\phi}_a^2(\b{x},\tau)\b{\phi}_b^2(\b{x}',\tau')}{|\b{x}-\b{x}'|^\alpha}, \end{align} where $a,b=1,\ldots,m$ are replica indices, and the replica limit $m\rightarrow 0$ is to be taken at the end of the calculation. As for the superfluid-Mott glass transition~\cite{weichman2008}, randomness in the scalar mass squared preserves the exact particle-hole symmetry of the clean GNY action (\ref{S}). \section{RG in the triple epsilon expansion} \label{sec:RG} We first briefly recapitulate the idea of the double epsilon expansion for QCPs perturbed by quenched short-range correlated disorder, first focusing on the purely bosonic random-mass $O(n)$ vector model~\cite{Dorogovtsev1980,Boyanovsky1982,boyanovsky1983,Lawrie1984}. In $d=4-\epsilon$ spatial and $\epsilon_\tau$ imaginary time dimensions, the order parameter field $\b{\phi}$ has engineering dimension $\Delta_\phi=(2-\epsilon+\epsilon_\tau)/2$. The couplings $\lambda^2$ and $\Delta$ thus have mass dimension $\epsilon-\epsilon_\tau$ and $\epsilon$, respectively, and a controlled perturbative RG analysis can be performed by treating $\epsilon$ and $\epsilon_\tau$ as small parameters. For $n>1$, a stable DFP with $\lambda^2_*\sim\mathcal{O}(\epsilon,\epsilon_\tau)$, $\Delta_*\sim\mathcal{O}(\epsilon,\epsilon_\tau)$ on the critical hypersurface $r=0$ is found at one-loop order, with critical exponents~\cite{NoteBCFP}: \begin{align} \nu&=\frac{1}{2}+\frac{3n\epsilon+(2n+4)\epsilon_\tau}{32(n-1)},\label{BCnu}\\ z&=1+\frac{(4-n)\epsilon+(2n+4)\epsilon_\tau}{16(n-1)}.\label{BCz} \end{align} For $n=2$, and extrapolating $\epsilon_\tau$ to 1 and $\epsilon$ to 2 or 1, relevant to the boson superfluid-Mott glass transition in (2+1)D and (3+1)D, respectively, one obtains exponents in reasonable agreement with those found in numerical Monte Carlo (MC) simulations (Table~\ref{tab:SFMG}). Ref.~\cite{Yerzhakov2018} observed that the double epsilon expansion can also be applied to short-range correlated random-mass GNY models: the fermion field $\psi$ has engineering dimension $\Delta_\psi=(3-\epsilon+\epsilon_\tau)/2$, thus the Yukawa coupling $h$ has mass dimension $(\epsilon-\epsilon_\tau)/2$ and can also be treated perturbatively. Unlike the bosonic model, however, one must enlarge the space of running couplings to include the relative velocity $c=c_f/c_b$, and ensure that the beta function for this parameter also vanishes at the DFP. (For a disordered system with a single field, the flow of the velocity, e.g., $c_b$ for the bosonic $O(n)$ model, can be absorbed in the definition of $z$, provided the disorder strength flows to a fixed-point value $\Delta_*$~\cite{narovlansky2018,aharony2018}.) Random-mass disorder will also generally couple to the fermionic sector of the GNY models, the most relevant coupling being a random coupling to fermion bilinears. For Gaussian disorder, the resulting disorder-induced two-body coupling has mass dimension $-2+\epsilon$, and is thus strongly irrelevant in the epsilon expansion. 
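As a quick check of the numbers entering Table~\ref{tab:SFMG}, Eqs.~(\ref{BCnu}) and (\ref{BCz}) can be evaluated directly:
\begin{verbatim}
def nu_z_one_loop(n, eps, eps_tau):
    """One-loop exponents at the bosonic random-mass O(n) DFP (n > 1)."""
    nu = 0.5 + (3 * n * eps + (2 * n + 4) * eps_tau) / (32.0 * (n - 1))
    z = 1.0 + ((4 - n) * eps + (2 * n + 4) * eps_tau) / (16.0 * (n - 1))
    return nu, z

# n = 2: (eps, eps_tau) = (2, 1) -> (2+1)D, (1, 1) -> (3+1)D
for eps, label in ((2, "(2+1)D"), (1, "(3+1)D")):
    nu, z = nu_z_one_loop(2, eps, 1)
    print(f"{label}: nu = {nu:.4f}, z = {z:.4f}")
# -> nu = 1.1250, z = 1.7500 and nu = 0.9375, z = 1.6250
\end{verbatim}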
\begin{table}[t] \begin{tabular}{|l||c|c|c|} \hline & MC~\cite{vojta2016,crewse2018} & $\c{O}(\epsilon,\epsilon_\tau)$ & $\c{O}(1/n)$~\cite{goldman2020} \tabularnewline \hline \hline $\nu$, (2+1)D & 1.16(5) & 1.125 & 1 \\ \hline $z$, (2+1)D & 1.52(3) & 1.75 & 1.54 \\ \hline \hline $\nu$, (3+1)D & 0.90(5) & 0.9375 & --- \\ \hline $z$, (3+1)D & 1.67(6) & 1.625 & --- \\ \hline \end{tabular} \caption{Critical exponents for the boson superfluid-Mott glass transition.}\label{tab:SFMG} \end{table} In the presence of long-range correlated disorder, we see from Eq.~(\ref{Sdis}) that the coupling constant $v$ has mass dimension $4-\alpha$ at the Gaussian fixed point. While for generic $\alpha<d<4$ this coupling is strongly relevant, if we set $\alpha=4-\delta$ and treat $\delta$ as a small parameter, long-range correlated disorder is only slightly relevant and can be treated perturbatively~\cite{Weinrib1983}. (Fermionic disorder of the type discussed above but with long-range correlations~\cite{Fedorenko2012,Dudka2016} would have mass dimension $-2+\delta$ and is still irrelevant.) This forms the basis of a triple expansion in $\epsilon,\epsilon_\tau,\delta$~\cite{DeCesare1994}, which thus far has only been applied to bosonic systems. Below we employ this triple epsilon expansion to study the GNY models with both short-range and long-range correlated random-mass disorder. In the presence of three epsilon-like parameters, the nature of the RG fixed points and their stability depend on two ratios, e.g., $\epsilon/\epsilon_\tau$ and $\delta/\epsilon_\tau$. We restrict our consideration to $\epsilon/\epsilon_\tau=2$, which in the limit $\epsilon_\tau\rightarrow 1$ corresponds to (2+1)D systems. Regarding the $\delta/\epsilon_\tau$ ratio, we consider the range $0<\delta/\epsilon_\tau<4$. For $\delta<0$, long-range correlated disorder is irrelevant at the Gaussian fixed point, and for $\delta/\epsilon_\tau>4$, the long-range disorder correlations (\ref{DisCorr}) with $\alpha=4-\delta$ would have the unphysical feature of increasing rather than decaying with distance in the limit $\epsilon_\tau\rightarrow 1$. \subsection{Bare vs renormalized actions} We now outline the basic steps of the RG procedure using as an example the chiral XY GNY model studied in Ref.~\cite{Yerzhakov2018}, but with long-range correlated disorder (\ref{Sdis}). For the chiral Ising and Heisenberg GNY models, the number of components of the order parameter and the form of the Yukawa coupling change [see Eqs.~(\ref{YukIsing}-\ref{YukHeis})], but the relations (\ref{rel}) between bare and renormalized couplings, and the formal expressions (\ref{betac2}-\ref{betar}) for the beta functions in terms of the anomalous dimensions (\ref{Zi}), remain the same. As in Refs.~\cite{boettcher2016,mandal2018}, we rescale the time coordinate as well as the fermion and boson fields, and redefine the couplings in the action (\ref{S}-\ref{Sdis}), to eliminate the velocities $c_f$ and $c_b$ in favor of the dimensionless ratio $c^2=(c_f/c_b)^2$, which then appears in front of the time derivative term for the boson field~\footnote{Alternatively, one can define fermionic $z_\psi$ and bosonic $z_\phi$ dynamic critical exponents from the flow of the velocities $c_f$ and $c_b$, respectively, which leads to renormalized dispersions $\omega_\psi(p)\sim p^{z_\psi}$, $\omega_\phi(p)\sim p^{z_\phi}$. The existence of a fixed point for $\beta_{c^2}$ then signifies that those exponents are in fact the same at criticality, $z_\psi=z_\phi=z$.}. 
The replicated bare action for the random-mass chiral XY GNY model is then: \begin{align}\label{Sbare} S_B=&\sum_a\int d^d\b{x}_B\,d^{\epsilon_\tau}\tau_B\biggl(i\overline{\psi}_{a,B}(\gamma_0\partial_{\tau_B}+\b{\gamma}\cdot\nabla_B)\psi_{a,B} +\phi_{a,B}^*(-c^2_B\partial_{\tau_B}^2-\nabla_B^2+r)\phi_{a,B} \nonumber\\ &\hspace{35mm}+ \lambda_B^2|\phi_{a,B}|^4+\frac{h_B}{2}(\phi_{a,B}^*\psi_{a,B}^Ti\gamma_2\psi_{a,B}+\text{H.c.})\biggr)\nonumber\\ &-\frac{\Delta_B}{2}\sum_{ab}\int d^d\b{x}_B\,d^{\epsilon_\tau}\tau_B\,d^{\epsilon_\tau}\tau_B' |\phi_{a,B}|^2(\b{x}_B,\tau_B)|\phi_{b,B}|^2(\b{x}_B,\tau_B')\nonumber \\ &-\frac{v_B}{2} \sum_{ab}\int d^d\b{x}_B\,d^d\b{x}'_B\,d^{\epsilon_\tau}\tau_B\,d^{\epsilon_\tau}\tau'_B\, \frac{ |\phi_{a,B}|^2(\b{x}_B,\tau_B)|\phi_{b,B}|^2(\b{x}'_B,\tau'_B)}{ |\b{x}_B-\b{x}'_B|^\alpha}, \end{align} where $a,b=1,\ldots,m$ are replica indices, and the corresponding renormalized action is: \begin{align}\label{Sren} S=&\sum_a\int d^d\b{x}\,d^{\epsilon_\tau}\tau\biggl(i\overline{\psi}_a(Z_1\gamma_0\partial_\tau+Z_2\b{\gamma}\cdot\nabla)\psi_a +\phi_a^*(-Z_3c^2\partial_\tau^2-Z_4\nabla^2+Z_rr\mu^2)\phi_a \nonumber\\ &\hspace{30mm}+Z_5\lambda^2\mu^{\epsilon-\epsilon_\tau}|\phi_a|^4 +Z_6\frac{h}{2}\mu^{(\epsilon-\epsilon_\tau)/2}(\phi_a^*\psi_a^Ti\gamma_2\psi_a+\text{H.c.})\biggr)\nonumber\\ &-Z_7\frac{\Delta}{2}\mu^\epsilon\sum_{ab}\int d^d\b{x}\,d^{\epsilon_\tau}\tau\,d^{\epsilon_\tau}\tau'\, |\phi_a|^2(\b{x},\tau)|\phi_b|^2(\b{x},\tau') \nonumber\\ &-Z_8\frac{v}{2} \mu^\delta \sum_{ab}\int d^d\b{x}\,d^d\b{x}'\,d^{\epsilon_\tau}\tau\,d^{\epsilon_\tau}\tau'\, \frac{ |\phi_a|^2(\b{x},\tau)|\phi_b|^2(\b{x}',\tau')}{ |\b{x}-\b{x}'|^\alpha}, \end{align} where $\mu$ is a renormalization scale. Due to the anisotropy between space and time, we set $\b{x}_B=\b{x}$ and $\tau_B=\eta\tau$, and matching the bare and renormalized kinetic terms for the fermion we find that $\eta=Z_{2}/Z_{1}$. Defining the anomalous dimensions: \begin{align}\label{Zi} \gamma_{i}=\mu\frac{d\ln Z_{i}}{d\mu},\,i=1,\ldots,8, r, \end{align} we find that the dynamic critical exponent $z=\mu(d\ln\tau/d\mu)$~\cite{Thomson2017} is given by: \begin{align} z=1+\gamma_{1}-\gamma_{2}. \end{align} The fermion and boson fields are multiplicatively renormalized, \begin{align} \psi_{a,B}(\b{x}_B,\tau_B)=\sqrt{Z_\psi}\psi_a(\b{x},\tau),\hspace{5mm} \phi_{a,B}(\b{x}_B,\tau_B)=\sqrt{Z_\phi}\phi_a(\b{x},\tau), \end{align} and the fermion and boson anomalous dimensions, $\eta_\psi=\mu(d\ln Z_{\psi}/d\mu)$ and $\eta_\phi=\mu(d\ln Z_{\phi}/d\mu)$, are given by: \begin{align} \eta_\psi=\gamma_2+\epsilon_\tau(z-1),\hspace{5mm} \eta_\phi=\gamma_4+\epsilon_\tau(z-1). \end{align} Comparing Eqs.~(\ref{Sbare}) and (\ref{Sren}), we obtain relations between the bare and (dimensionless) renormalized couplings, \begin{align}\label{rel} c^2&=Z_{3}^{-1}Z_{4}\left(\frac{Z_{1}}{Z_{2}}\right)^2 c_{B}^2,\hspace{5mm} \lambda^2=\mu^{-(\epsilon-\epsilon_\tau)}\left(\frac{Z_{1}}{Z_{2}}\right)^{\epsilon_\tau}Z_{4}^2Z_{5}^{-1}\lambda_{B}^2,\hspace{5mm} h^2=\mu^{-(\epsilon-\epsilon_\tau)}\left(\frac{Z_{1}}{Z_{2}}\right)^{\epsilon_\tau}Z_{2}^2Z_{4}Z_{6}^{-2}h_{B}^2, \nonumber\\ \Delta&=\mu^{-\epsilon}Z_{4}^2Z_{7}^{-1}\Delta_{B}, \hspace{5mm} v=\mu^{-\delta}Z_{4}^2Z_{8}^{-1}v_{B}, \hspace{5mm} r=\mu^{-2}Z_4Z_r^{-1}r_B. 
\end{align} Using the fact that the bare couplings do not depend on the renormalization scale $\mu$, we find the RG beta functions $\beta_g\equiv\mu(dg/d\mu)$, $g\in\{c^2,\lambda^2,h^2,\Delta,v\}$, to be: \begin{align} \beta_{c^2}&=(2\gamma_{1}-2\gamma_{2}-\gamma_{3}+\gamma_{4})c^2,\label{betac2}\\ \beta_{\lambda^2}&=\bigl(-(\epsilon-\epsilon_\tau)+2\gamma_{4}-\gamma_{5}+\epsilon_\tau(\gamma_{1}-\gamma_{2})\bigr) \lambda^2,\label{betal2formal}\\ \beta_{h^2}&=\bigl(-(\epsilon-\epsilon_\tau)+2(\gamma_{2}-\gamma_{6})+\gamma_{4}+\epsilon_\tau(\gamma_{1}-\gamma_{2}) \bigr)h^2,\label{betah2formal}\\ \beta_{\Delta}&=(-\epsilon+2\gamma_{4}-\gamma_{7})\Delta,\label{betaDelta}\\ \beta_{v}&=(-\delta+2\gamma_{4}-\gamma_{8})v,\label{betav} \\ \beta_r&=(-2+\gamma_4-\gamma_r)r. \label{betar} \end{align} From Eq.~(\ref{betar}), we find the inverse correlation length exponent~\cite{Zinn-JustinBook}, \begin{align} \nu^{-1}=2-\gamma_4+\gamma_r. \end{align} \subsection{Renormalization constants} We calculate the renormalization constants $Z_i$, $i=1,\dots,8,r$ at one-loop order in the modified minimal subtraction ($\overline{\text{MS}}$) scheme with dimensional regularization in $4-\epsilon$ space and $\epsilon_\tau$ time dimensions. The relevant Feynman rules and diagrams are shown schematically in Figs.~\ref{fig:feynrules} and \ref{fig:diagrams}, respectively. The fermion and boson propagators are given by: \begin{align} G_{ab}^{IJ}(p)&=\langle\psi_a^I(p)\overline{\psi}_b^J(p)\rangle=\delta_{ab}\delta^{IJ}\frac{\slashed{p}}{p^2},\\ D_{ab}^{ij}(p)&=\langle\phi^{i}_a(p)\phi^{j}_b(-p)\rangle=\delta_{ab} \delta^{ij}\frac{1}{c^2p_0^2+\b{p}^2+r\mu^2},\label{BosPropag} \end{align} where $I,J=1,\ldots,N$ and $i,j=1,\ldots,n$ are fermion flavor and $O(n)$ indices, respectively, and $\slashed{p}=\gamma_\mu p_\mu$. \begin{figure}[t] \includegraphics[width=0.65\columnwidth]{feynrules.pdf} \caption{Schematic momentum-space Feynman rules for the random-mass GNY models, omitting fermion flavor, $O(n)$, and replica indices. Solid line: fermion propagator, dashed line: boson propagator. Here $p=(p_0,\b{p})$ is the momentum of a propagator line, with $\slashed{p}=\gamma_\mu p_\mu$, and $q=(q_0,\b{q})$ is the momentum transfer in a boson four-point vertex.} \label{fig:feynrules} \end{figure} \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{diagrams.pdf} \caption{Schematic one-loop Feynman diagrams for the random-mass GNY models. Renormalization of (a,b,c,d) the boson two-point function; (e) the fermion two-point function; (f) the Yukawa vertex $h$; (g,h,i,j) the boson self-interaction vertex $\lambda^2$; (i,k,l,m) the short-range correlated disorder vertex $\Delta$; and (j,l,m) the long-range correlated disorder vertex $v$.} \label{fig:diagrams} \end{figure} For the chiral XY GNY model ($n=2$), the diagrams in the clean limit or containing only short-range correlated disorder vertices were already computed in Ref.~\cite{Yerzhakov2018}; these results are also easily adapted to $n=1$ and $n=3$. The new diagrams containing long-range correlated disorder vertices are evaluated explicitly in Appendix~\ref{app:Z} for $n=1,2,3$. Unlike the standard epsilon expansion in $4-\epsilon$ dimensions, in the triple epsilon expansion one-loop diagrams contain simple poles not only in $\epsilon$, but also in $\epsilon-\epsilon_\tau$, $\delta$, and $2\delta - \epsilon$. 
We obtain the following renormalization constants: \begin{align} Z_{1}&=1- \frac{n h^2}{\epsilon-\epsilon_\tau}f(c^2),\label{Z1}\\ Z_{2}&=1- \frac{n h^2}{2(\epsilon-\epsilon_\tau)},\\ Z_{3}&=1-\frac{2\Delta}{\epsilon}-\frac{2v}{\delta} -\frac{Nh^2c^{-2}}{\epsilon-\epsilon_\tau},\\ Z_{4}&=1-\frac{Nh^2}{\epsilon-\epsilon_\tau},\\ Z_{5}&=1+\frac{2(n+8)\lambda^2}{\epsilon-\epsilon_\tau}-\frac{Nh^4\lambda^{-2}}{\epsilon-\epsilon_\tau}-\frac{12\Delta}{\epsilon}-\frac{12v}{\delta},\\ Z_{6}&=1+(2-n) \frac{h^2}{\epsilon-\epsilon_\tau},\label{Z6}\\ Z_{7}&=1+\frac{4(n+2)\lambda^2}{\epsilon-\epsilon_\tau}-\frac{8\Delta}{\epsilon}-\frac{12v}{\delta}-\frac{4v^2 \Delta^{-1}}{2\delta-\epsilon},\label{Z7}\\ Z_{8}&=1+\frac{4(n+2)\lambda^2}{\epsilon-\epsilon_\tau}-\frac{4\Delta}{\epsilon}-\frac{4v}{\delta},\label{Z8}\\ Z_{r}&=1+\frac{2(n+2)\lambda^2}{\epsilon-\epsilon_\tau}-\frac{2\Delta}{\epsilon}-\frac{2v}{\delta}.\label{Zr} \end{align} We have rescaled the couplings according to $g/(4\pi)^2\rightarrow g$, $g\in\{\lambda^2,h^2,\Delta,v, r\}$, and, as in Ref.~\cite{Yerzhakov2018}, we define the dimensionless function, \begin{align}\label{f} f(c^2)=\frac{c^2(c^2-1-\ln c^2)}{(c^2-1)^2}, \end{align} plotted in Fig.~\ref{fig:f}. At one-loop order there is no renormalization of the Yukawa vertex for the chiral XY GNY model, i.e., the diagram in Fig.~\ref{fig:diagrams}(f) vanishes for $n=2$ [see Eq.~(\ref{Z6})], which is easily seen from the form (\ref{YukXY}) of the Yukawa coupling. We also see from the last term in Eq.~(\ref{Z7}) that short-range correlated disorder is generated at one-loop order from long-range correlated disorder, via the diagram in Fig.~\ref{fig:diagrams}(m). By contrast, long-range correlated disorder cannot be generated perturbatively from short-range correlated disorder. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{f.pdf} \caption{Plot of $f(c^2)$ in Eq.~(\ref{f}), with $c^2=(c_f/c_b)^2$ the velocity ratio squared; $f(0)=0$, $f(1)=\frac{1}{2}$, and $f(\infty)=1$.} \label{fig:f} \end{figure} \subsection{Beta functions and anomalous dimensions} Using the chain rule, \begin{align}\label{gammai} \gamma_{i}=\frac{\mu}{Z_i}\frac{dZ_{i}}{d\mu}=\frac{1}{Z_{i}}\sum_g\frac{\partial Z_{i}}{\partial g}\beta_g, \end{align} for $i=1,\ldots,8,r$ and $g\in\{c^2,\lambda^2,h^2,\Delta,v,r\}$ in Eqs.~(\ref{betac2}-\ref{betar}), and expanding the beta functions to quadratic order in all couplings except $c^2$, we obtain: \begin{align} \beta_{c^2}&=-2(\Delta+v)c^2+h^2\big[N(c^2-1)+n c^2\left(2f(c^2)-1\right) \big],\label{bc2}\\ \beta_{\lambda^2}&=-(\epsilon-\epsilon_\tau)\lambda^2+2(n+8)\lambda^4+2Nh^2\lambda^2 -Nh^4-12(\Delta+v)\lambda^2,\label{bl2}\\ \beta_{h^2}&=-(\epsilon-\epsilon_\tau)h^2+(N+4-n)h^4,\label{bh2}\\ \beta_\Delta&=-\epsilon\Delta+4(n+2)\lambda^2\Delta+2N h^2\Delta-8\Delta^2-12\Delta v-4v^2,\label{bD}\\ \beta_v&=-\delta v+4(n+2)\lambda^2v+2N h^2v-4\Delta v-4v^2.\label{bv} \end{align} We note that all poles in linear combinations of the small parameters $\epsilon,\epsilon_\tau,\delta$ properly cancel in the beta functions. Setting $\epsilon_\tau$ and the disorder couplings to zero, we find that Eqs.~(\ref{bl2}-\ref{bh2}) agree with the beta functions for the chiral $O(n)$ GNY models in the clean limit~\cite{Zerf2017}. When setting $n=2$ and $v=0$, Eqs.~(\ref{bc2}-\ref{bD}) reproduce our previous results for the chiral XY GNY model with short-range correlated disorder~\cite{Yerzhakov2018}. 
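In practice, the assembly of Eqs.~(\ref{bc2}-\ref{bv}) from the renormalization constants, including the cancellation of all simple poles, is mechanical and can be automated symbolically. A minimal sympy sketch for $\beta_\Delta$ (our own illustration; at one loop it suffices to insert the tree-level beta functions into the chain rule (\ref{gammai}) and drop the $1/Z_i$ prefactor, which contributes only at higher order):
\begin{verbatim}
# One-loop assembly of beta_Delta, Eq. (bD), from Z_4 and Z_7; all simple
# poles in eps, eps - epst, delta and 2*delta - eps cancel at the end.
import sympy as sp

lam2, h2, Del, v = sp.symbols('lambda2 h2 Delta v', positive=True)
eps, epst, delta, n, N = sp.symbols(
    'epsilon epsilon_tau delta n N', positive=True)

Z4 = 1 - N*h2/(eps - epst)
Z7 = (1 + 4*(n + 2)*lam2/(eps - epst) - 8*Del/eps - 12*v/delta
      - 4*v**2/(Del*(2*delta - eps)))

# Tree-level beta functions: sufficient leading-order input at one loop
beta0 = {lam2: -(eps - epst)*lam2, h2: -(eps - epst)*h2,
         Del: -eps*Del, v: -delta*v}
gamma = lambda Z: sum(sp.diff(Z, g)*b for g, b in beta0.items())

# beta_Delta = (-eps + 2*gamma_4 - gamma_7)*Delta, Eq. (betaDelta)
betaD = sp.expand((-eps + 2*gamma(Z4) - gamma(Z7))*Del)
target = (-eps*Del + 4*(n + 2)*lam2*Del + 2*N*h2*Del
          - 8*Del**2 - 12*Del*v - 4*v**2)
print(sp.simplify(betaD - target))   # -> 0: Eq. (bD) recovered
\end{verbatim}
The remaining beta functions can be verified in the same way.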
Finally, when turning off the Yukawa coupling, $h^2=0$, the beta functions (\ref{bl2},\ref{bD},\ref{bv}) with both short-range and long-range correlated disorder agree with those given in Refs.~\cite{Boyanovsky1982,boyanovsky1983,Lawrie1984,Weinrib1983,DeCesare1994}. We also note that the above beta functions are perturbative in the couplings $\lambda^2$, $h^2$, $\Delta$, and $v$, but exact in the dimensionless velocity ratio $c^2$. The critical exponents $\nu^{-1}$, $z$, $\eta_\psi$, and $\eta_\phi$ are obtained by evaluating: \begin{align} \nu^{-1}&=2-Nh^2-2(n+2)\lambda^2+2(\Delta+v),\label{nuinv}\\ z&=1+\left(f(c^2)-{\textstyle{\frac{1}{2}}}\right)nh^2,\label{z}\\ \eta_{\psi}&=\frac{n}{2} h^2+\epsilon_\tau (z-1),\label{gammapsi}\\ \eta_{\phi}&=Nh^2+\epsilon_\tau (z-1),\label{gammaphi} \end{align} at RG fixed points $(c^2_*,\lambda^2_*,h^2_*,\Delta_*,v_*)$, i.e., common zeros of the set (\ref{bc2}-\ref{bv}) of beta functions. Since $h_*^2$ will be $\mathcal{O}(\epsilon,\epsilon_\tau)$ at one-loop order, as can already be seen from Eq.~(\ref{bh2}), for a consistent treatment we have to discard the $\epsilon_\tau (z-1)$ terms in the fermion and boson anomalous dimensions. \section{Fixed points and critical exponents} \label{sec:FPs} In Sec.~\ref{sec:FixedPoints}, we discuss the fixed points of the flow equations (\ref{bc2}-\ref{bv}). Depending on their stability, which is analyzed in Sec.~\ref{sec:stability}, these are {\it bona fide} critical points (no relevant direction) or multicritical points (one or more relevant directions). Here, the number of relevant directions refers to the number of such directions on the critical hypersurface, since the tuning parameter $r$ for the transition (see Sec.~\ref{sec:model}) is a relevant direction at all fixed points. As mentioned in Sec.~\ref{sec:RG}, we fix $\epsilon=2\epsilon_\tau$, with the extrapolation $\epsilon_\tau\rightarrow 1$ corresponding to 2+1 dimensions. Throughout the paper, we evaluate quantities such as fixed-point couplings, RG eigenvalues, and critical exponents as a function of the control parameters $N\geqslant 1$ and $\delta=4-\alpha\in[0,4]$, where the latter parameter is to be understood as the ratio $\delta/\epsilon_\tau$ evaluated at $\epsilon_\tau=1$. \subsection{Fixed points} \label{sec:FixedPoints} We denote the RG fixed points as five-component vectors $(c^2_*,\lambda^2_*,h^2_*,\Delta_*,v_*)$ in the space of running couplings. Starting with the CFPs ($\Delta_*=v_*=0$), these include Gaussian fixed points $(c_*^2,0,0,0,0)$ and the $O(n)$ Wilson-Fisher fixed points $(c_*^2,\frac{\epsilon_\tau}{2(n+8)},0,0,0)$, where $c_*^2$ is arbitrary and can be set to unity by independent redefinitions of the fermion and boson fields. We also have the GNY fixed points, for all $n=1,2,3$ and $N$ given by: \begin{align}\label{CFP} \left(1,\frac{4-n-N+\sqrt{D_C}}{4(n+8)(N+4-n)}\epsilon_\tau,\frac{\epsilon_\tau}{(N+4-n)},0,0\right), \end{align} where $D_C=N^2+2(5n+28)N+(4-n)^2$, in agreement with earlier studies~\cite{Zerf2017}. The fixed-point couplings are positive for all $N>0$. Since $c_*^2=1$ and $f(1)=\frac{1}{2}$ (Fig.~\ref{fig:f}), Eq.~(\ref{z}) implies that the CFPs are Lorentz invariant ($z=1$), and are in fact conformally invariant. We next turn to DFPs, for which $\Delta_*$ and/or $v_*$ are nonzero. 
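Before doing so, we record a quick symbolic check of Eq.~(\ref{CFP}) against the clean limit of the one-loop beta functions; a minimal sympy sketch (ours), using $\epsilon=2\epsilon_\tau$ so that $\epsilon-\epsilon_\tau=\epsilon_\tau$:
\begin{verbatim}
# Verify that the couplings of the clean GNY fixed point, Eq. (CFP), are
# an exact zero of Eqs. (bl2) and (bh2) with Delta = v = 0.
import sympy as sp

n, N, et = sp.symbols('n N epsilon_tau', positive=True)
M = N + 4 - n
D_C = N**2 + 2*(5*n + 28)*N + (4 - n)**2

h2 = et/M
lam2 = (4 - n - N + sp.sqrt(D_C))*et/(4*(n + 8)*M)

beta_h2 = -et*h2 + M*h2**2
beta_lam2 = -et*lam2 + 2*(n + 8)*lam2**2 + 2*N*h2*lam2 - N*h2**2
print(sp.simplify(beta_h2), sp.simplify(sp.expand(beta_lam2)))   # -> 0 0
\end{verbatim}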
To be physical, all fixed points must obey the following conditions~\cite{Weinrib1983}: \begin{align}\label{physicality} c^2_*>0,\hspace{5mm} \lambda^2_*\geq 0,\hspace{5mm} h^2_*\geq 0,\hspace{5mm} v_*\geq 0,\hspace{5mm} \Delta_*+v_*\geq 0. \end{align} At fermionic DFPs with $h_*^2>0$, the condition $\beta_{c^2}=0$ together with Eq.~(\ref{physicality}) further implies that $c_*^2>1$. From Eq.~(\ref{bc2}), we find that at a fermionic fixed point, \begin{align}\label{c2>1} N(c_*^2-1)+2nc_*^2\left(f(c_*^2)-{\textstyle{\frac{1}{2}}}\right)=\frac{2(\Delta_*+v_*)c_*^2}{h_*^2}. \end{align} Equation~(\ref{physicality}) implies that the right-hand side of this equation is positive. From Fig.~\ref{fig:f} and Eq.~(\ref{f}), we see that $f(c_*^2)>{\textstyle{\frac{1}{2}}}$ only if $c_*^2>1$, and $f(c_*^2)<{\textstyle{\frac{1}{2}}}$ only if $c_*^2<1$. Thus, for the left-hand side of Eq.~(\ref{c2>1}) to be positive as well, we must have $c_*^2>1$. (At a clean fermionic fixed point, the left-hand side must vanish, which can only happen for $c_*^2=1$.) \subsubsection{Fixed points with short-range correlated disorder} \label{sec:SDFPs} We first focus on DFPs with $\Delta_* \neq 0$ and $v_*=0$, which we term short-range disordered fixed points (SDFPs). From Eq.~(\ref{bh2}) we find that $h_*^2=0$ or $h_*^2=\epsilon_\tau/(N+4-n)$. When the fixed-point value of the Yukawa coupling is zero, we reproduce the results of Refs.~\cite{Boyanovsky1982,boyanovsky1983,Lawrie1984} for the purely bosonic $O(n)$ vector model with random-mass disorder. For $n=1$, there is an accidental degeneracy in the system of equations $\beta_{\lambda^2}=0, \beta_{\Delta}=0$. The degeneracy is lifted at two-loop order, giving rise to a DFP with $\lambda_*^2,\Delta_*\sim\c{O}(\sqrt{\epsilon_\tau})$, for a finite ratio $\epsilon/\epsilon_\tau$~\cite{Boyanovsky1982}. Our focus, however, is on fermionic DFPs with nonzero $h_*^2$. We find two fermionic SDFPs for $n=2,3$: \begin{align}\label{SDFP1} \left(c_{*\text{1,2}}^2,\frac{N+8-2n \pm \sqrt{D_S}}{8(n-1)(N+4-n)}\epsilon_\tau,\frac{\epsilon_\tau}{N+4-n},\frac{(n+2)(N \pm \sqrt{D_S})+2(4-n)^2}{16(n-1)(N+4-n)}\epsilon_\tau,0\right), \end{align} where $D_S=N^2-4(5n-8)N+4(4-n)^2$, which we denote by SDFP1 (with $+\sqrt{D_S}$, $c_*^2=c_{*1}^2$) and SDFP2 (with $-\sqrt{D_S}$, $c_*^2=c_{*2}^2$). The chiral XY case ($n=2$) was discussed in our earlier work~\cite{Yerzhakov2018}: the fixed-point couplings $\lambda_*^2$, $h_*^2$, and $\Delta_*$ are nonnegative, and thus physical, for all $N\geq 1$. At $N=1$, SDFP2 merges with the clean GNY fixed point (\ref{CFP}), while SDFP1 runs off to infinity as it is impossible to satisfy $\beta_{c^2}=0$. (Note that for $n=2$, SDFP1,2 here correspond to DFP1,2 in Ref.~\cite{Yerzhakov2018} for $N<4$ and to DFP2,1 for $N>4$.) In the chiral Heisenberg case ($n=3$), the discriminant $D_S \geq 0$ for $N \geq N_D \approx 27.856$, and the SDFPs (\ref{SDFP1}) are physical only for $N>N_D$. In the chiral Ising case ($n=1$), as previously mentioned, the RG equations for $\lambda^2$ and $\Delta$ become degenerate for zero Yukawa coupling, and we find only one solution at order $\c{O}(\epsilon, \epsilon_\tau)$ for $h_*^2 \neq 0$: \begin{align}\label{SDFP_m=0} \left( c_*^2, \frac{N \epsilon_\tau}{(N+3)(N+6)},\frac{\epsilon_\tau}{N+3},\frac{3(N-6)\epsilon_\tau}{4(N+3)(N+6)},0 \right). \end{align} This SDFP is physical for $N \geq 6$, and merges with the clean GNY fixed point at $N=6$. 
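These fixed-point values can also be confirmed by direct numerical root-finding of the beta functions. A minimal Python sketch (scipy; the parameter choice $N=8$, $\epsilon_\tau=1$, $\epsilon=2\epsilon_\tau$ and the initial guesses are ours) that recovers Eq.~(\ref{SDFP_m=0}) and then solves $\beta_{c^2}=0$ for the velocity ratio, illustrating the unique root with $c_*^2>1$ discussed below:
\begin{verbatim}
# Numerically recover the n = 1 SDFP, Eq. (SDFP_m=0), for N = 8, then
# solve beta_{c^2} = 0 and evaluate z from Eq. (z).
import numpy as np
from scipy.optimize import fsolve, brentq

n, N, epst, eps = 1, 8, 1.0, 2.0

def f(c2):   # Eq. (f); f(1) = 1/2 by continuity
    if abs(c2 - 1.0) < 1e-12:
        return 0.5
    return c2*(c2 - 1.0 - np.log(c2))/(c2 - 1.0)**2

def betas(x):   # Eqs. (bl2), (bh2), (bD) with v = 0
    lam2, h2, Delta = x
    return [-(eps - epst)*lam2 + 2*(n + 8)*lam2**2 + 2*N*h2*lam2
            - N*h2**2 - 12*Delta*lam2,
            -(eps - epst)*h2 + (N + 4 - n)*h2**2,
            -eps*Delta + 4*(n + 2)*lam2*Delta + 2*N*h2*Delta - 8*Delta**2]

lam2, h2, Delta = fsolve(betas, [0.05, 0.09, 0.01])
print(lam2, h2, Delta)   # ~ 0.05195, 0.09091, 0.00974, cf. Eq. (SDFP_m=0)

bc2 = lambda c2: -2*Delta*c2 + h2*(N*(c2 - 1) + n*c2*(2*f(c2) - 1))
c2s = brentq(bc2, 1.0001, 50.0)        # unique root, slightly above 1
print(c2s, 1 + (f(c2s) - 0.5)*n*h2)    # c_*^2 and z at this SDFP
\end{verbatim}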
There is in principle the possibility of an additional SDFP at two-loop order with $\lambda_*^2,\Delta_*\sim\c{O}(\sqrt{\epsilon_\tau})$, as in the bosonic case, and $h_*^2\sim\c{O}(\epsilon_\tau)$. We show in Appendix~\ref{app:NoPoint} that this cannot happen, because it is impossible to satisfy the equation $\beta_{c^2}=0$. We also note that this excludes the possibility of a physical SDFP for the $N=1/2$ chiral Ising GNY model, which in the clean limit flows to a conformal field theory with emergent supersymmetry~\cite{sonoda2011,fei2016}, the $\c{N}=1$ Wess-Zumino model. (This theory describes the time-reversal symmetry-breaking transition among the gapless Majorana surface states of a three-dimensional topological superfluid, e.g., $^{3}$He-$B$~\cite{Grover2014}.) For the fermionic SDFPs found in Eqs.~(\ref{SDFP1}-\ref{SDFP_m=0}) above, despite the fact that the equation $\beta_{c^2}=0$ is nonlinear in $c^2$, one can show analytically that it admits a unique solution $c_*^2>1$, except for $N=1$ in the XY GNY model. The actual fixed-point values of $c^2$ are obtained by solving the equation numerically, and together with $h_*^2$ determine via Eq.~(\ref{z}) the dynamic critical exponent $z$ at those fixed points (see Sec.~\ref{sec:exponents}, Fig.~\ref{fig:SDFPs_z}). \subsubsection{Fixed points with long-range correlated disorder} \label{sec:LDFPs} We now turn to DFPs with $v_* \neq 0$, which we dub long-range disordered fixed points (LDFPs). For vanishing $h_*^2$, the purely bosonic random-mass $O(n)$ vector model for $n > 1$ was studied in the triple epsilon expansion in Ref.~\cite{DeCesare1994}, where LDFPs were found. For $n=1$, long-range correlated disorder lifts the previously mentioned degeneracy in the system of fixed-point equations. For nonzero $h_*^2=\epsilon_\tau/(N+4-n)$, we find two fermionic LDFPs in all three GNY universality classes, $n=1,2,3$: \begin{align} \lambda^2_{*1,2} &=\frac{3(N+4-n) \delta - (5N+4-n) \epsilon_\tau \pm \sqrt{D_L} }{4(5n+4) (N+4-n)},\label{L1}\\ (\Delta_*+v_*)_{1,2} &=\frac{ -2(n-1) (N+4-n) \delta + \bigl[ (5n-2)N-9+(n-1)^2\bigr] \epsilon_\tau \pm (2+n) \sqrt{D_L} }{4 (5n+4) (N+4-n)},\label{L2}\\ v_{*1,2} &= \left(1+\frac{ 4 (\Delta_*+v_*)_{1,2}}{2\epsilon_\tau-\delta}\right) (\Delta_*+v_*)_{1,2},\label{L3} \end{align} where $D_L= \left[( 5N + 4 - n )\epsilon_\tau - 3 (N+4-n)\delta \right]^2 - 8(5n+4)N\epsilon_\tau^2 $. The discriminant $D_L$ is nonnegative, and thus the fixed-point couplings real, for either: \begin{align}\label{deltaD} \delta \geq \delta_{D}\equiv\frac{ (5N+4-n)+\sqrt{8(5n+4)N} }{3(N+4-n)}\epsilon_\tau, \end{align} or: \begin{align}\label{deltaD2} \delta \leq \delta_D'\equiv\frac{ (5N+4-n)-\sqrt{8(5n+4)N} }{3(N+4-n)}\epsilon_\tau. \end{align} In addition to being real, the fixed-point couplings (\ref{L1}-\ref{L3}) must obey the conditions (\ref{physicality}). By contrast with the SDFPs (\ref{SDFP1}-\ref{SDFP_m=0}), which are physical above a certain critical value of $N$ that is independent of $\delta$, the LDFPs are physical only in complicated regions of the $N$-$\delta$ plane that possess several disconnected components and/or curved boundaries. Since the fixed-point couplings (\ref{L1}-\ref{L3}) do not depend explicitly on $c_*^2$, we first assume a physical solution for $c_*^2$ exists, and discuss how the remaining conditions delimit those nontrivial regions. \begin{itemize} \item $\underline{\lambda^2_* \geq 0}$: This condition is satisfied for all $n=1,2,3$ for both LDFPs provided that $\delta \geq \delta_{D}$. 
Since $\delta_D>\delta_D'$ for all $N>0$, LDFPs in the region $\delta\leq\delta_D'$ of Eq.~(\ref{deltaD2}) are never physical. \item $\underline{\Delta_*+v_* \geq 0}$: For LDFP1, i.e., Eqs.~(\ref{L1}-\ref{L3}) with $+\sqrt{D_L}$, the condition is satisfied for different regions of the $N$-$\delta$ plane depending on $n$: \begin{align} n=1:\,&\delta \in \begin{cases} [0, \delta_2] \cup [\delta_1, 4\epsilon_\tau], & N \leq N_2, \\ [0, \delta_D'] \cup [\delta_D, 4\epsilon_\tau], & N > N_2; \end{cases}\\ n=2,3:\,& \delta \in [0,\delta_D'] \cup \begin{cases} [\delta_1, 4\epsilon_\tau], & N \leq N_2, \\ [\delta_D, 4\epsilon_\tau], & N > N_2. \end{cases} \end{align} For LDFP2, i.e., Eqs.~(\ref{L1}-\ref{L3}) with $-\sqrt{D_L}$, we have: \begin{align} n=1:\,&\delta \in \begin{cases} \varnothing, & N<N_2, \\ [\delta_{2}, \delta_D'] \cup [\delta_{D},\delta_1], & N \geq N_2; \end{cases} \\ n=2,3:\,& \delta \in \begin{cases} [\delta_2, \delta_D'], & N \leq N_2, \\ [\delta_2, \delta_D'] \cup [\delta_D, \delta_1], & N \geq N_2. \end{cases} \end{align} Here, \begin{align} \delta_{1}&\equiv\frac{[(n+14)N+9-(n-1)^2] + (n+2) \sqrt{D_C} }{(n+8)(N+4-n)} \epsilon_\tau,\label{delta1}\\ \delta_{2}&\equiv\frac{[(n+14)N+9-(n-1)^2] - (n+2) \sqrt{D_C} }{(n+8)(N+4-n)} \epsilon_\tau, \end{align} and $N_2$ is the value of $N$, which depends on $n$, at which $\delta_1=\delta_D$. For $N$ below a certain value $N'$, with $N'<N_2$, one has $\delta_D'<0$, in which case $[0,\delta_D']$ denotes the empty set. We use the same notational convention whenever the left limit of the interval is greater than the right one. \item $\underline{v_* \geq 0}$: For LDFP1, we have the following constraints depending on the value of $n$: \begin{align} n=1:\,&\delta \in \begin{cases} [0, \delta_2] \cup [\delta_1, 2\epsilon_\tau), & N \leq N_2, \\ [0, \delta_D'] \cup [\delta_D, 2\epsilon_\tau), & N > N_2; \end{cases} \\ n=2:\,&\delta \in [0,\delta_D'] \cup [\delta_D, 2\epsilon_\tau) \cup \begin{cases} [\delta_1, \delta_4) \cup [\delta_3, 4\epsilon_\tau], & 1 \leq N < N_2, \\ [\delta_D, \delta_4] \cup [\delta_3, 4\epsilon_\tau], & N_2 \leq N \leq N_3, \\ [\delta_3, 4\epsilon_\tau], & N > N_3; \end{cases} \\ n=3:\,&\delta \in [0,\delta_D'] \cup [\delta_D, 2\epsilon_\tau) \cup \begin{cases} [\delta_1, 4\epsilon_\tau], & 1 \leq N < N_2, \\ [\delta_D, 4\epsilon_\tau], & N_2 \leq N < N_D, \\ [\delta_D, \delta_4] \cup [\delta_3, 4\epsilon_\tau], & N_D \leq N \leq N_3, \\ [\delta_3, 4\epsilon_\tau], & N > N_3. \end{cases} \end{align} For LDFP2, we have: \begin{align} n=1:\,&\delta \in [\delta_5, \max(2\epsilon_\tau,\delta_1)] \cup \begin{cases} \varnothing, & 1 \leq N < N_2, \\ [\delta_{2},\delta_D'] \cup [\delta_{D},\min(\delta_1, 2\epsilon_\tau)], & N \geq N_2; \end{cases} \\ n=2,3:\,&\delta \in [\delta_2, \delta_D'] \cup [\delta_D, 2\epsilon_\tau) \cup \begin{cases} \varnothing, & 1 \leq N < N_2, \\ [\delta_D, \delta_1], & N_2 \leq N < N_3, \\ [\delta_4, \delta_1], & N \geq N_3. 
\end{cases}\label{EqLDFP2} \end{align} We further define \begin{align} \delta_{3}&\equiv\frac{3[N+6+(n-1)(3N+6-2n)] + (n+2)\sqrt{D_S} }{4(n-1)(N+4-n)} \epsilon_\tau,\label{delta3}\\ \delta_{4}&\equiv\frac{3[N+6+(n-1)(3N+6-2n)] - (n+2)\sqrt{D_S} }{4(n-1)(N+4-n)} \epsilon_\tau,\label{delta4}\\ \delta_5&\equiv\frac{2N^2+21N+18}{(N+3)(N+6)}\epsilon_\tau,\label{delta03} \end{align} and $N_3$ is the $n$-dependent value of $N$ at which $\delta_D=\delta_4$. \end{itemize} For a given GNY symmetry class $n$, the intersection of all those conditions defines regions in the $N$-$\delta$ plane in which the various fixed points discussed are physical, and over which fixed-point properties are plotted throughout the paper. We now return to the question of whether a physical solution $c_*^2$ to the nonlinear equation $\beta_{c^2}=0$ exists for the LDFPs (\ref{L1}-\ref{L3}). We solve this equation numerically. For $n=1$ and $n=3$, we find a unique solution everywhere in the physical regions of the $N$-$\delta$ plane. For $n=2$, we likewise find a unique physical solution in the physical regions, but for LDFP1 computations become increasingly difficult upon approach to the point $N=1$, $\delta=4$, where $c_*^2$ grows rapidly. Since exactly at this point LDFP1 coincides with SDFP2, and SDFP2 does not admit a solution to $\beta_{c^2}=0$ for $N=1$~\cite{Yerzhakov2018}, we conjecture that $c_*^2$ gradually runs off to infinity as the point $N=1$, $\delta=4$ is approached. Summarizing, we thus find that for all three GNY symmetry classes, a unique solution $c_*^2>1$ exists for the LDFPs (\ref{L1}-\ref{L3}) everywhere inside the physical regions (\ref{physicality}) of the $N$-$\delta$ plane. As mentioned previously, $h_*^2$ and $c_*^2$ together determine the dynamic critical exponent $z$ at those fixed points (Sec.~\ref{sec:exponents}, Figs.~\ref{fig:LDFPs_z_m=0}-\ref{fig:LDFPs_z_m=2}). \subsection{Linear stability analysis} \label{sec:stability} We now investigate the stability properties of the physical fixed points. All bosonic fixed points (i.e., with $h_*^2=0$) are unstable with respect to the $h^2$ direction. Additionally, for all models, the Gaussian fixed points are unstable with respect to all other directions, and the Wilson-Fisher fixed points are unstable with respect to both short-range and long-range correlated disorder. The stability properties of the bosonic DFPs in the absence of Yukawa coupling have been discussed previously in Refs.~\cite{Boyanovsky1982,boyanovsky1983,Lawrie1984,DeCesare1994}. At all fermionic fixed points (i.e., with $h_*^2 \neq 0$), the $h^2$ direction is irrelevant. Additionally, we find that $\partial\beta_{c^2}/\partial c^2$ is positive at all such fixed points. 
Since $\beta_{c^2}$ is the only beta function in which $c^2$ appears, this means $c^2$ is also an irrelevant direction. We can thus exclude $h^2$ and $c^2$ from RG flow considerations and investigate stability within the three-dimensional subspace with fixed $h_*^2$ and $c_*^2$ of the full five-dimensional space of couplings. We compute the eigenvalues $y$ of the stability matrix $M_{gg'}\equiv-\partial\beta_g/\partial g'$, $g,g'\in\{\lambda^2,\Delta,v\}$, defined such that $y>0$ ($y<0$) corresponds to a relevant (irrelevant) direction. \subsubsection{Stability of the clean fixed point} \label{sec:StabCFP} We first focus on the clean GNY fixed point (\ref{CFP}), which for the rest of the paper we refer to as the CFP. The RG eigenvalues at the CFP are: \begin{align}\label{y4CFP} y_1=-\frac{\sqrt{D_C}}{N+4-n}\epsilon_\tau, \hspace{5mm} y_2=\frac{(n+2)N+(n+14)(4-n)-(n+2)\sqrt{D_C}}{(n+8)(N+4-n)}\epsilon_\tau, \hspace{5mm} y_3=\delta-\delta_1, \end{align} and are associated with eigenvectors with nonzero projections along the $\lambda^2$, $\Delta$, and $v$ directions, respectively. The eigenvalue $y_1$ is negative and thus irrelevant for all $n$ and $N$. For the flow of short-range correlated disorder ($y_2$), we discuss the three GNY symmetry classes in turn. \begin{itemize} \item $n=1$: Disorder is irrelevant for $N>6$. At $N=6$, the CFP merges with the SDFP (\ref{SDFP_m=0}), and disorder becomes marginally relevant. For $N<6$ (including $N=1/2$), the SDFP becomes unphysical, and disorder becomes relevant at the CFP. \item $n=2$: This case was studied in Ref.~\cite{Yerzhakov2018}. Disorder is irrelevant for $N>1$. At $N=1$, SDFP2 [see Eq.~(\ref{SDFP1})] merges with the CFP and disorder becomes marginally relevant. \item $n=3$: Disorder is irrelevant for all $N>\frac{2}{15}\approx 0.133$. \end{itemize} Finally, long-range correlated disorder ($y_3$) is irrelevant for $\delta$ less than $\delta_1$, which is defined in Eq.~(\ref{delta1}). At generic points along the curve $\delta=\delta_1$ in the $N$-$\delta$ plane, one of the LDFPs merges with the CFP, and long-range correlated disorder crosses marginality. At the special point $N=N_2$ along this curve, the two LDFPs (\ref{L1}-\ref{L3}) coincide with one another (and with the CFP). \subsubsection{Stability of short-range disordered fixed points} \label{sec:StabSDFPs} We now consider the SDFPs of Sec.~\ref{sec:SDFPs}. We begin with the unique SDFP (\ref{SDFP_m=0}) in the chiral Ising class ($n=1$), which is physical only for $N\geq 6$. Long-range correlated disorder is irrelevant at this fixed point provided that $\delta$ is less than $\delta_5$, which is defined in Eq.~(\ref{delta03}). Along the curve $\delta=\delta_5$ in the $N$-$\delta$ plane, the SDFP merges with LDFP2. However, one of the two other eigenvalues is always relevant for $N>6$, thus the SDFP is a multicritical point with at least one relevant direction on the critical hypersurface. \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{EVPhysRegions_m=0.pdf} \caption{Stability in the subspace $(\lambda^2, \Delta, v)$ of couplings of (a,b) LDFP1 and (c,d) LDFP2 in the chiral Ising GNY model ($n=1$), as a function of $N$ and $\delta$. 
I: one relevant eigenvalue; II: one relevant eigenvalue, two complex-conjugate irrelevant eigenvalues; III: two relevant eigenvalues.} \label{fig:EVs_m=0} \end{figure} \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{EVPhysRegions_m=1_N=4line.png} \caption{Stability in the subspace $(\lambda^2, \Delta, v)$ of couplings of (a,b) LDFP1 and (c,d) LDFP2 in the chiral XY GNY model ($n=2$), as a function of $N$ and $\delta$. Regions I-III are defined as in Fig.~\ref{fig:EVs_m=0}. IV: two complex-conjugate relevant eigenvalues; V: no relevant eigenvalues; VI: no relevant eigenvalues, two complex-conjugate irrelevant eigenvalues.} \label{fig:EVs_m=1} \end{figure} \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{EVPhysRegions_m=2_Edited2.png} \caption{Stability in the subspace $(\lambda^2, \Delta, v)$ of couplings of (a,b) LDFP1 and (c) LDFP2 in the chiral Heisenberg GNY model ($n=3$), as a function of $N$ and $\delta$. Regions are labeled as in Fig.~\ref{fig:EVs_m=1}.} \label{fig:EVs_m=2} \end{figure} The chiral XY ($n=2$) and Heisenberg ($n=3$) classes admit two fermionic SDFPs, Eq.~(\ref{SDFP1}). Similarly to the chiral Ising case, long-range correlated disorder is irrelevant at SDFP1 (SDFP2) provided that $\delta<\delta_3$ ($\delta<\delta_4$), with $\delta_3,\delta_4$ defined in Eqs.~(\ref{delta3}-\ref{delta4}). The curves $\delta=\delta_3$ and $\delta=\delta_4$ correspond to the merger of the corresponding SDFP with one of the LDFPs. When $\delta_3=\delta_4$, the discriminant $D_S$ vanishes, and the two SDFPs merge with one another. This happens at a critical value of $N$ which in the XY case is $N=4$, and in the Heisenberg case is $N=N_D\approx 27.856$. Besides long-range correlated disorder, the other two directions are irrelevant at SDFP1, thus it is a genuine critical point for $\delta<\delta_{3}$. By contrast, one of those two directions is relevant at SDFP2, thus the latter is a multicritical point. For the chiral XY and Heisenberg models, and for sufficiently large $N$, the two irrelevant eigenvalues at SDFP1 with eigenvectors in the $\lambda^2$-$\Delta$ plane form a complex conjugate pair. SDFP1 is then a fixed point of focus type, with spiraling flows near the fixed point. In the XY case, this happens for $N>\frac{32}{5}=6.4$, while in the Heisenberg case it happens for $N>28.087$. Critical properties in this case are subject to oscillatory corrections to scaling~\cite{Khmelnitskii1978,Yerzhakov2018}. \subsubsection{Stability of long-range disordered fixed points} We finally turn to the stability of the LDFPs of Sec.~\ref{sec:LDFPs}. The eigenvalues of the stability matrix depend on $N$ and $\delta$ in a complicated way, and we compute them numerically. In Figs.~\ref{fig:EVs_m=0}-\ref{fig:EVs_m=2}, we characterize the stability of the two LDFPs in terms of their number of relevant/irrelevant eigenvalues, for each GNY symmetry class. Eigenvalues are real unless otherwise specified; since the stability matrix is real, complex eigenvalues necessarily appear in complex-conjugate pairs, and imply focus-type behavior as discussed above. For all three GNY symmetry classes, the two LDFPs merge along the curve $\delta=\delta_D$ in the $N$-$\delta$ plane, where the discriminant $D_L$ vanishes. In the Ising case (Fig.~\ref{fig:EVs_m=0}), both LDFPs have at least one relevant eigenvalue on the critical hypersurface and are thus multicritical points (for $N=1/2$, only LDFP1 is physical, in the window $\delta_1\approx 1.143<\delta<2$). 
In the XY and Heisenberg cases (Figs.~\ref{fig:EVs_m=1}-\ref{fig:EVs_m=2}), LDFP1 exists in regions (V and VI) in the $N$-$\delta$ plane with no relevant eigenvalues, and is thus a {\it bona fide} critical point in those regions. LDFP2 is always multicritical. \subsection{Critical exponents} \label{sec:exponents} Universal critical exponents at the newly found fermionic DFPs can be computed from Eqs.~(\ref{nuinv}-\ref{gammaphi}) using the fixed-point couplings found in Sec.~\ref{sec:SDFPs} and Sec.~\ref{sec:LDFPs}. At the present one-loop order, the fermion $\eta_\psi$ and boson $\eta_\phi$ anomalous dimensions depend only on $h_*^2$, which is the same at all fermionic fixed points. Thus their values at the DFPs are the same as those for the clean chiral GNY universality classes~\cite{Zerf2017}: $\eta_\psi=n\epsilon_\tau/[2(N+4-n)]$ and $\eta_\phi=N\epsilon_\tau/(N+4-n)$. At higher loop order the anomalous dimensions are expected to differ at the different fermionic fixed points. \begin{figure}[!t] \includegraphics[width=0.7\columnwidth]{DynamicalCriticalExponentsSDFPs_2.pdf} \caption{Dynamic critical exponent $z$ at SDFPs for all three chiral GNY symmetry classes, as a function of $N$.} \label{fig:SDFPs_z} \end{figure} Using Eq.~(\ref{z}), the dynamic critical exponent $z$ at the fermionic DFPs is given by \begin{align} z=1+\left(f(c_*^2)-{\textstyle{\frac{1}{2}}}\right)\frac{n\epsilon_\tau}{N+4-n}, \end{align} and thus depends on the fixed-point velocity parameter $c_*^2$. The latter is a universal function of $N$ and $\delta$ for a given DFP but must be computed numerically; we plot the resulting value of $z$ extrapolated to 2+1 dimensions ($\epsilon_\tau\rightarrow 1$) in Fig.~\ref{fig:SDFPs_z} for the SDFPs and in Figs.~\ref{fig:LDFPs_z_m=0}-\ref{fig:LDFPs_z_m=2} for the LDFPs. Since $c_*^2>1$, and thus $f(c_*^2)>{\textstyle{\frac{1}{2}}}$, at all fermionic DFPs (see Sec.~\ref{sec:FixedPoints}), such DFPs necessarily have $z>1$. This is in agreement with the general expectation that weak disorder increases $z$~\cite{herbut2001}; Refs.~\cite{narovlansky2018,aharony2018} also derive the leading-order result $z-1\propto\Delta_*>0$ at SDFPs obtained by perturbing a conformally invariant QCP with weak short-range correlated disorder. Here we find $z>1$ at LDFPs as well. \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{zLRDFPs_m=0.png} \caption{Dynamic critical exponent $z$ in the chiral Ising GNY model ($n=1$) at (a,b) LDFP1 and (c,d) LDFP2, as a function of $N$ and $\delta$.} \label{fig:LDFPs_z_m=0} \end{figure} \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{zLRDFPs_m=1.png} \caption{Dynamic critical exponent $z$ in the chiral XY GNY model ($n=2$) at (a,b) LDFP1 and (c,d) LDFP2, as a function of $N$ and $\delta$.} \label{fig:LDFPs_z_m=1} \end{figure} \begin{figure}[!t] \includegraphics[width=0.99\columnwidth]{zLRDFPs_m=2.png} \caption{Dynamic critical exponent $z$ in the chiral Heisenberg GNY model ($n=3$) at (a,b) LDFP1 and (c) LDFP2, as a function of $N$ and $\delta$.} \label{fig:LDFPs_z_m=2} \end{figure} The inverse correlation length exponent $\nu^{-1}$, determined from Eq.~(\ref{nuinv}), is the RG eigenvalue associated with the relevant direction $r$ which tunes across the symmetry-breaking transition. For a \emph{bona fide} critical point, $\nu$ controls the divergence of the correlation length $\xi$ at the transition $r=0$ via $\xi\sim r^{-\nu}$. 
For multicritical points with additional relevant directions $g_1,g_2,\ldots$ on the critical hypersurface with real, positive eigenvalues $y_1,y_2,\ldots$, the correlation length behaves near the transition as $\xi(r,g_1,g_2,\ldots)=r^{-\nu}\widetilde{\xi}(g_1/r^{\nu y_1},g_2/r^{\nu y_2},\ldots)$, where $\widetilde{\xi}(x_1,x_2,\ldots)$ is a universal scaling function~\cite{Goldenfeld}. Complex-conjugate eigenvalues produce a scaling function with oscillatory behavior. At all LDFPs in all three GNY symmetry classes, we find $\nu^{-1}=2-{\textstyle{\frac{1}{2}}}\delta$, which alternatively can be written as $\nu=2/\alpha$, with $\alpha=4-\delta$ the exponent controlling long-range disorder correlations in Eq.~(\ref{DisCorr}). This superuniversal behavior was also found at long-range correlated bosonic DFPs and explained by Weinrib and Halperin~\cite{Weinrib1983}. Consider an LDFP with correlation length exponent $\nu(\alpha)$ in a system with disorder of the type (\ref{DisCorr}). If one further perturbs this fixed point with disorder correlated according to $|\b{x}-\b{x}'|^{-\alpha_+}$ such that $\alpha_+>\alpha$, the original asymptotic critical behavior should remain the same, since we expect it to be controlled by the longest-range part of the disorder. Conversely, if the perturbation is of the form $|\b{x}-\b{x}'|^{-\alpha_-}$ with $\alpha_-<\alpha$, this falls off more slowly than the original disorder, and the original critical behavior should be unstable. Assuming $\alpha,\alpha_\pm<d$ and applying the modified Harris criterion for long-range correlated disorder, we find $\nu(\alpha)>2/\alpha_+$ and $\nu(\alpha)<2/\alpha_-$, for all $\alpha_-<\alpha<\alpha_+$. Choosing $\alpha_\pm=\alpha\pm\varepsilon$ and taking the limit $\varepsilon\rightarrow 0^+$, we obtain $\nu(\alpha)=2/\alpha$. The exponent $\nu$ for the SDFPs can likewise be calculated directly from Eq.~(\ref{nuinv}), and we obtain $\nu^{-1}=2-{\textstyle{\frac{1}{2}}}\delta_5$ for the chiral Ising SDFP, with $\delta_5$ defined in Eq.~(\ref{delta03}). In light of the result above for $\nu^{-1}$ at LDFPs, this is consistent with the fact that the $n=1$ SDFP coalesces with one of the LDFPs at $\delta=\delta_5$. Similarly, for both the chiral XY and Heisenberg models we find that SDFP1 has $\nu^{-1}=2-{\textstyle{\frac{1}{2}}}\delta_3$ and SDFP2 has $\nu^{-1}=2-{\textstyle{\frac{1}{2}}}\delta_4$, with $\delta_{3,4}$ defined in Eqs.~(\ref{delta3}-\ref{delta4}). As previously mentioned, the curves $\delta=\delta_3$ ($\delta=\delta_4$) correspond to the merger of SDFP1 (SDFP2) with an LDFP. We plot $\nu^{-1}$ at SDFPs for all three GNY models in Fig.~\ref{fig:Inverse_nu_Allm}, including $\nu^{-1}$ at the clean GNY critical point for comparison. \begin{figure}[!t] \includegraphics[width=0.9\columnwidth]{CorrelationLengthExponentsCFPsSDFPs.pdf} \caption{Inverse correlation length exponent $\nu^{-1}$ for the CFP and SDFPs in all three chiral GNY symmetry classes, as a function of $N$.} \label{fig:Inverse_nu_Allm} \end{figure} \section{RG flows and bifurcations} \label{sec:RGflows} Having discussed RG fixed points and their local properties (stability and critical exponents), we now discuss global properties of the RG flow: bifurcations of the flow as the control parameters $N,\delta$ are varied (Secs.~\ref{sec:bifurc1} and \ref{sec:bifurc2}), and examples of global phase diagrams for fixed $N,\delta$ (Sec.~\ref{sec:phasediagram}). 
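Before turning to the flows, we record one more check on the exponents of the previous section: the superuniversal relation $\nu^{-1}=2-\frac{1}{2}\delta$ at the LDFPs follows directly from Eqs.~(\ref{L1}-\ref{L2}) and (\ref{nuinv}), and can be confirmed symbolically. A minimal sympy sketch (ours), in which the symbol $s$ stands for $\pm\sqrt{D_L}$ so that both LDFPs are covered at once:
\begin{verbatim}
# Check nu^{-1} = 2 - delta/2 at the LDFPs for arbitrary n and N.
import sympy as sp

n, N, et, d = sp.symbols('n N epsilon_tau delta', positive=True)
s = sp.Symbol('s')   # +/- sqrt(D_L); the result must not depend on it
M = N + 4 - n

h2 = et/M                                               # h_*^2
lam2 = (3*M*d - (5*N + 4 - n)*et + s)/(4*(5*n + 4)*M)   # Eq. (L1)
Dpv = (-2*(n - 1)*M*d + ((5*n - 2)*N - 9 + (n - 1)**2)*et
       + (n + 2)*s)/(4*(5*n + 4)*M)                     # Eq. (L2), Delta + v

nuinv = 2 - N*h2 - 2*(n + 2)*lam2 + 2*Dpv               # Eq. (nuinv)
print(sp.simplify(nuinv - (2 - d/2)))                   # -> 0
\end{verbatim}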
Although the original space of couplings $(c^2,\lambda^2,h^2,\Delta,v)$ is five-dimensional, as already mentioned, the $c^2$ and $h^2$ directions are irrelevant at fermionic fixed points, which are the only stable ones. For practical purposes the RG flows thus live in the three-dimensional space $(\lambda^2,\Delta,v)$, with $c^2$ and $h^2$ assuming their fixed-point values. Since all physical fixed points in the chiral Ising case are multicritical, for simplicity we restrict our attention to the chiral XY and Heisenberg symmetry classes, which exhibit the most interesting phenomena. \subsection{Transcritical and saddle-node bifurcations} \label{sec:bifurc1} We have already mentioned a number of instances in which two fixed points collide as $N$ or $\delta$ are varied. We observe two distinct kinds of bifurcations associated with a collision of two fixed points: the transcritical bifurcation and the saddle-node bifurcation. The transcritical bifurcation [Fig.~\ref{fig:bifurcation}(a)] is a bifurcation at which a stable fixed point and an unstable fixed point pass through each other, exchanging their stability properties, but without annihilating~\cite{gukov2017}. An example of this bifurcation is the merging of the two chiral XY SDFPs (\ref{SDFP1}) as $N$ is varied through $N=4$. (The ``exchange'' of fixed points is meaningful provided we track individual fixed points along smooth trajectories, as opposed to using their arbitrary labeling as SDFP1 and SDFP2 in Eq.~(\ref{SDFP1}).) Unlike the saddle-node bifurcation discussed below, the two fixed points remain real before and after the bifurcation. At the transcritical bifurcation, the beta function (and associated RG flow) is not only marginal, but its derivative with respect to the control parameter, here $N$, must vanish as well. Other examples of this bifurcation include the collision of SDFPs with the CFP (at $N=1$ for the chiral XY SDFP2), of LDFPs with the CFP (along the curve $\delta=\delta_1$ in the $N$-$\delta$ plane), or of SDFPs with LDFPs (curves $\delta=\delta_3$ and $\delta=\delta_4$). At these latter bifurcations, one of the DFPs becomes unphysical, by either $\Delta_*$, $v_*$, or $\Delta_*+v_*$ going through zero and becoming negative. However, since the other fixed point remains physical and thus real, this unphysical fixed point necessarily remains real as well (for another RG example of this scenario, see Ref.~\cite{boyack2018}). Thus the bifurcation is distinct from the saddle-node bifurcation, which we now discuss. \begin{figure}[!t] \includegraphics[width=0.9\columnwidth]{bifurcation.pdf} \caption{Schematic bifurcation diagrams for (a) the transcritical bifurcation, (b) the saddle-node bifurcation, and (c) the supercritical Hopf bifurcation. The horizontal axis represents a direction in the $N$-$\delta$ plane, and the vertical axis, the space of running couplings (critical hypersurface). Solid red symbolizes an RG attractor, dashed blue a repellor, and schematic RG trajectories are shown in black.} \label{fig:bifurcation} \end{figure} The saddle-node bifurcation [Fig.~\ref{fig:bifurcation}(b)] is a bifurcation at which a stable fixed point and an unstable fixed point merge, leading to marginal behavior as above, but subsequently disappear into the complex plane. This typically happens for a pair of fixed points with critical couplings $g_{*\pm}\propto A\pm\sqrt{D}$, such that the discriminant $D$ continuously goes through zero at the bifurcation and then becomes negative. Both pairs SDFP1,2 and LDFP1,2 are of this type. 
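The two scenarios can be told apart directly at the level of the discriminants. A minimal sympy sketch (our own illustration) contrasting the XY discriminant, which merely touches zero, with the Heisenberg one, which changes sign:
\begin{verbatim}
# Discriminant D_S of Eq. (SDFP1): a perfect square (double root, hence
# transcritical) for n = 2, versus a sign change (saddle-node) for n = 3.
import sympy as sp

N = sp.symbols('N', positive=True)
D_S = lambda n: N**2 - 4*(5*n - 8)*N + 4*(4 - n)**2

print(sp.factor(D_S(2)))    # -> (N - 4)**2: touches zero at N = 4
print(sp.solve(D_S(3), N))  # -> [14 - 8*sqrt(3), 14 + 8*sqrt(3)],
                            #    i.e., N_D = 14 + 8*sqrt(3) ~ 27.856
\end{verbatim}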
The two chiral Heisenberg SDFPs, with discriminant $D=D_S(n=3)$, annihilate with decreasing $N$ at $N\approx 27.856$. (For the chiral XY GNY model, $D=D_S(n=2)$ touches zero at $N=4$ but remains positive elsewhere, which gives the transcritical bifurcation at $N=4$.) Likewise, the two LDFPs in both the XY and Heisenberg cases annihilate on the curve $\delta=\delta_D$ in the $N$-$\delta$ plane, where the discriminant $D=D_L$ vanishes. Since $\delta_D$ in Eq.~(\ref{deltaD}) is a nonmonotonic function of $N$, for fixed $\delta$ this fixed-point annihilation can occur for either increasing or decreasing $N$. The saddle-node bifurcation is accompanied by the characteristic phenomenology of walking RG or quasi-critical behavior~\cite{Kaplan2009}; we now explain how this manifests itself in the current problem. Focusing on the example above of the annihilation of LDFPs in the chiral XY and Heisenberg GNY models, we first consider a situation where $\delta$ is slightly above $\delta_D$. Small regions in the $N$-$\delta$ plane exist such that both LDFPs are physical, with LDFP1 a stable sink-type fixed point (region V) and LDFP2 a multicritical point with one relevant direction (region I). LDFP2 is only physical provided $\delta<\delta_1$ [see Eq.~(\ref{EqLDFP2})], which implies that the CFP is stable (Sec.~\ref{sec:StabCFP}). For this type of region, numerical studies of the RG flow show that RG trajectories with initial conditions near LDFP2 end up at either LDFP1 or the CFP. We thus consider a curvilinear coordinate system such that one of these coordinates, $g$, passes through all three fixed points [Fig.~\ref{fig:quasiqc}(a)]. In this section only, we define the infrared (Wilsonian) beta function $\beta(g)\equiv dg/d\ell$, where $\ell$ grows towards the infrared. Denoting by $g_*$ the common fixed-point coupling of LDFP1 and LDFP2 at the bifurcation $\delta=\delta_D$, we assume that for $\delta$ near $\delta_D$ and $g$ near $g_*$, $\beta(g)$ can be well approximated by a quadratic function, $\beta(g)\approx A(\delta)+B(\delta)(g-g_*)+C(\delta)(g-g_*)^2$. Since $\beta(g_*)=\partial\beta(g_*)/\partial g=0$ and $\partial^2\beta(g_*)/\partial g^2<0$ at $\delta=\delta_D$, we have $A(\delta_D)=B(\delta_D)=0$ and $C(\delta_D)\equiv -\kappa<0$. For $\delta=\delta_D+\varepsilon$ with $\varepsilon$ small, $\beta(g)$ should have two real zeros that approach $g_*$ as $\varepsilon\rightarrow 0^+$. Expanding $A(\delta)$, $B(\delta)$, and $C(\delta)$ in powers of $\varepsilon$, we find at leading order a pair of zeros of the form $g_*\pm\sqrt{b\varepsilon/\kappa}$ with $b\equiv A'(\delta_D)$, which are real provided that $b>0$, and form a complex-conjugate pair when $\varepsilon<0$ ($\delta<\delta_D$). The beta function thus approximately assumes the form $\beta(g)\approx b(\delta-\delta_D)-\kappa(g-g_*)^2$, illustrated in Fig.~\ref{fig:quasiqc}(b), and considered in Ref.~\cite{Kaplan2009}. \begin{figure}[!t] \includegraphics[width=\columnwidth]{quasiqc.pdf} \caption{Phenomenology of the saddle-node bifurcation at $\delta=\delta_D$. 
(a) Curvilinear coordinate $g$ along RG trajectories for $\delta>\delta_D$; (b) Wilsonian beta function near the bifurcation; (c) crossover from disordered quasi-critical behavior to clean critical behavior for $\delta$ slightly below $\delta_D$.} \label{fig:quasiqc} \end{figure} We now take $\delta=\delta_D-\varepsilon$ with $\varepsilon>0$ small, and consider an RG trajectory with initial coupling $g_\text{UV}>g_*$ and ``flow velocity'' $\beta(g_\text{UV})$, which is generically not small. As $g$ approaches $g_*$ from above, the flow velocity decreases considerably (i.e., the running coupling ``walks''), since $\beta(g_*)\approx -b\varepsilon$ is small. This walking behavior persists until $g_*-g$ becomes on the order of $\sqrt{b\varepsilon/\kappa}$, after which the coupling starts ``running'' again. This determines a characteristic RG time $\Delta\ell$ insensitive to the initial condition $g_\text{UV}$ of the flow. Approximating $\beta(g)\approx\beta(g_*)\approx -b\varepsilon$ as constant during the walk, we have $\beta(g_*)\approx\Delta g/\Delta\ell\sim\sqrt{b\varepsilon/\kappa}/\Delta \ell$, and thus $\Delta\ell\sim 1/\sqrt{\kappa b\varepsilon}$. Alternatively, we may integrate the equation $dg/d\ell=\beta(g)$ from $g_\text{UV}$ at $\ell_\text{UV}$ to $g_\text{IR}<g_*$ at $\ell_\text{IR}$. Under the condition $|g_\text{UV,IR}-g_*|\gg\sqrt{b\varepsilon/\kappa}$, the result of this integration is insensitive to the precise values of $g_\text{UV}$ and $g_\text{IR}$, and we obtain $\Delta\ell\equiv\ell_\text{IR}-\ell_\text{UV}=\pi/\sqrt{\kappa b\varepsilon}$. In turn, this RG time determines a characteristic infrared length scale $L_*=L_\text{IR}=L_\text{UV}e^{\Delta\ell}$, where we can take $L_\text{UV}\sim a$ to be on the order of a microscopic lattice constant $a$. We obtain: \begin{align} L_*\sim a\exp\left(\pi/\sqrt{\kappa b(\delta_D-\delta)}\right), \end{align} as $\delta$ approaches $\delta_D$ from below. The exponential inverse-square-root divergence, reminiscent of the divergence of the correlation length at the Kosterlitz-Thouless transition~\cite{kosterlitz1974}, is characteristic of the saddle-node bifurcation~\cite{Kaplan2009}. The existence of this exponentially large length scale $L_*\gg a$ allows for a crossover between two distinct physical regimes [Fig.~\ref{fig:quasiqc}(c)]. On intermediate length scales $a\ll L\ll L_*$, RG trajectories dwell for an extended period of RG time near $g=g_*$, and we have quasi-critical behavior controlled by a complex pair of LDFPs with real part near $g_*$. This quasi-critical regime is characterized by approximate power-law scaling and drifting (i.e., scale-dependent) exponents~\cite{gorbenko2018b}. On the largest length scales $a\ll L_*\ll L$, the transition is controlled by the true infrared fixed point, the CFP, with genuine scale invariance. \subsection{Supercritical Hopf bifurcation and limit-cycle fermionic quantum criticality} \label{sec:bifurc2} The third type of bifurcation we observe is the supercritical Hopf bifurcation [Fig.~\ref{fig:bifurcation}(c)]. This bifurcation occurs as one passes from region VI (blue region) to region IV (purple region) in both the chiral XY [Fig.~\ref{fig:EVs_m=1}(b)] and Heisenberg [Fig.~\ref{fig:EVs_m=2}(a)] models. For instance, one can consider keeping $N$ fixed and tuning $\delta$ (black arrow in those figures). 
In region VI ($\delta<\delta_{c,1}$), LDFP1 is a stable-focus fixed point with two complex-conjugate irrelevant eigenvalues, i.e., complex-conjugate eigenvalues with a negative real part [solid red line on left part of Fig.~\ref{fig:bifurcation}(c)]. At the bifurcation ($\delta=\delta_{c,1}$), the real part of those eigenvalues goes through zero and becomes positive for $\delta>\delta_{c,1}$. LDFP1 thus loses its stability and becomes an unstable-focus fixed point [dashed blue line on the right part of Fig.~\ref{fig:bifurcation}(c)]. At the same time, a stable limit cycle is born [solid red line on the right part of Fig.~\ref{fig:bifurcation}(c)], towards which the spiraling RG trajectories coming out of LDFP1 asymptote, and which controls the critical behavior up to a second threshold value $\delta_{c,2}$ to be discussed shortly. (Trajectories outside the limit cycle also spiral and asymptote to it.) To our knowledge, this is the first instance in the context of quantum phase transitions where the supercritical Hopf bifurcation~\cite{Marsden1976} appears. After Ref.~\cite{hartnoll2016}, which studied a holographic model of a critical scalar field perturbed by disorder, our result is the second example of a quantum phase transition governed by a stable limit cycle; to our knowledge, it is the first example for fermionic systems. The subcritical Hopf bifurcation~\cite{Marsden1976}, where an unstable-focus fixed point becomes stable by giving birth to an unstable limit cycle, has been reported previously in RG studies of classical disordered systems~\cite{Weinrib1983,Athorne1985,*Athorne1986}. The general phenomenology of critical behavior controlled by a stable limit cycle was explored in Ref.~\cite{Veytsman1993}. For a stable-focus critical point, spiraling trajectories manifest themselves as oscillatory corrections to scaling~\cite{Khmelnitskii1978,Yerzhakov2018}. By contrast, for a transition governed by a stable limit cycle, thermodynamic quantities exhibit log-periodic scaling behavior at leading order, i.e., discrete scale invariance. For instance, we show in Appendix~\ref{app:LimitCycleScaling} that the order parameter susceptibility $\chi$ obeys the approximate scaling form: \begin{align}\label{ChiLimitCycle} \chi\sim|r|^{-\gamma_\text{LC}}\left[1+\gamma_\text{LC}\c{F}\left(\nu_\text{LC}\ln\left(\frac{r_0}{r}\right)\right)\right], \end{align} where $\c{F}$ is a periodic function. Here $\nu_\text{LC}$ and $\gamma_\text{LC}=(2-\eta_\phi)\nu_\text{LC}$ are effective correlation-length and susceptibility exponents for the limit cycle, $r$ is the tuning parameter for the transition, and $r_0$ is a nonuniversal constant. As $\delta$ is further increased past $\delta_{c,1}$, the limit cycle eventually disappears at a second critical value $\delta_{c,2}$, but in different ways for the chiral XY and Heisenberg GNY models. In the Heisenberg case, the Hopf bifurcation of Fig.~\ref{fig:bifurcation}(c) occurs again but in reverse: the limit cycle shrinks to a point, which becomes the stable-focus LDFP1 of region VI. In the XY case, our numerical studies suggest that at least for some values of $N$, the limit cycle is destroyed at $\delta=\delta_{c,2}$ (still within region IV) by colliding with the CFP and SDFP2, which are both saddle points in this regime [see Fig.~\ref{fig:phasediagram}(c)]. This is a possible example of a heteroclinic bifurcation~\cite{dingjun1997}, whose detailed study we reserve for future work. 
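The local behavior near this bifurcation is captured by the standard supercritical Hopf normal form, which can be integrated directly. A toy numerical sketch (Python with scipy; this is the generic normal form with arbitrary parameter values, not the actual GNY flow), showing that trajectories starting on either side of the cycle asymptote to the stable orbit of radius $\sqrt{\mu}$:
\begin{verbatim}
# Supercritical Hopf normal form dz/dl = (mu + i*omega) z - |z|^2 z in
# real coordinates; for mu > 0 the origin is an unstable focus and all
# nearby trajectories spiral onto the limit cycle |z| = sqrt(mu).
import numpy as np
from scipy.integrate import solve_ivp

mu, omega = 0.25, 1.0

def flow(l, y):
    x1, x2 = y
    r2 = x1*x1 + x2*x2
    return [mu*x1 - omega*x2 - r2*x1, omega*x1 + mu*x2 - r2*x2]

for r0 in (0.05, 1.5):   # start inside and outside the cycle
    sol = solve_ivp(flow, (0.0, 60.0), [r0, 0.0], rtol=1e-8, atol=1e-10)
    print(r0, "->", np.hypot(*sol.y[:, -1]))   # both -> sqrt(mu) = 0.5
\end{verbatim}
In the RG context, the periodic motion along such a cycle is what translates into the log-periodic scaling of Eq.~(\ref{ChiLimitCycle}).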
\subsection{Schematic phase diagrams} \label{sec:phasediagram} From the knowledge of the stability properties of the various fixed points and limit cycles, and numerical investigation of the RG flow connecting those different critical manifolds, schematic phase diagrams can be constructed analogously to those in Ref.~\cite{Yerzhakov2018}. For given values of $N$ and $\delta$, we focus on the critical hypersurface ($r=0$) and ask how the universality class of the transition depends on the bare couplings in the Lagrangian, which determine the initial conditions for the infrared RG flow. We consider a scenario in which the interaction parameters $h$ and $\lambda$ are fixed, and vary the two types of disorder, $\Delta$ and $v$. Since the number of possibilities is very large, given the complexity of the stability/physicality regions, we focus on the two most interesting regions: those which contain the instances of limit-cycle quantum criticality discussed in the previous section. \begin{figure}[!t] \includegraphics[width=0.8\columnwidth]{phasediagram.pdf} \caption{Schematic RG flow and critical ($r=0$) phase diagrams for generic $N$ and $\delta_{c,1}<\delta<\delta_{c,2}$ in region IV (see Figs.~\ref{fig:EVs_m=1}-\ref{fig:EVs_m=2}), for (a,b) the chiral Heisenberg GNY model and (c,d) the chiral XY model. In the Heisenberg case, the transition is controlled by a stable limit cycle (LC) for generic bare values of the short-range correlated ($\Delta$) and long-range correlated ($v$) disorder strengths. In the XY case, the transition is controlled by the limit cycle for weak short-range disorder and by a disordered fixed point (SDFP1) for strong short-range disorder.} \label{fig:phasediagram} \end{figure} We first focus on region IV in the chiral Heisenberg GNY model [see Fig.~\ref{fig:EVs_m=2}(a)]. For generic points in this region (e.g., for $\delta_{c,1}<\delta<\delta_{c,2}$), one has $\delta>\delta_D$ and $\delta>\delta_1$. Furthermore, we assume $N<N_D\approx 27.856$. From Sec.~\ref{sec:StabCFP}, we conclude that the CFP has two irrelevant directions in the $\lambda^2$-$\Delta$ plane, but that long-range correlated disorder $v$ is relevant, since $\delta>\delta_1$. SDFP1,2 are both unphysical, since $N<N_D$, and LDFP2 is unphysical as well. As seen in the previous section, LDFP1 is of unstable-focus type, with spiraling flow towards a stable limit cycle. The resulting RG flow is illustrated schematically in Fig.~\ref{fig:phasediagram}(a). Consequently, at least for sufficiently small bare values of the disorder, the transition is controlled by limit-cycle quantum criticality for generic disorder [Fig.~\ref{fig:phasediagram}(b)]. If long-range correlated disorder is turned off completely, the transition reverts back to the clean chiral Heisenberg GNY universality class. We now turn to region IV in the chiral XY GNY model [see Fig.~\ref{fig:EVs_m=1}(b)], assuming $\delta_{c,1}<\delta<\delta_{c,2}$. As in the previous case, we generically have $\delta>\delta_D$, $\delta>\delta_1$, and also $\delta<\delta_4$. As in the Heisenberg case, the CFP has two irrelevant directions in the $\lambda^2$-$\Delta$ plane, but $v$ is relevant. There are now nontrivial SDFPs, whose stability was discussed in Sec.~\ref{sec:StabSDFPs}. For SDFP1, $\lambda^2$ and $\Delta$ are both irrelevant, and $v$ is irrelevant as well, since $\delta<\delta_4<\delta_3$. For SDFP2, $v$ is irrelevant since $\delta<\delta_4$, but there is one relevant direction with nonzero $\Delta$ projection. 
LDFP2 is unphysical, and LDFP1 is an unstable focus with flow towards a stable limit cycle. The resulting RG flow is schematized in Fig.~\ref{fig:phasediagram}(c), and the corresponding phase diagram in Fig.~\ref{fig:phasediagram}(d). For weak $\Delta$, the transition is governed by the limit cycle, but for sufficiently strong $\Delta$, the transition is controlled by a disordered fixed point, SDFP1. CFP and SDFP2 appear as multicritical points. \section{Conclusion} \label{sec:conclusion} In summary, we have performed a comprehensive study of the three classes of chiral GNY models most relevant for symmetry-breaking quantum phase transitions in (2+1)D gapless Dirac matter---the chiral Ising, XY, and Heisenberg GNY models---in the presence of quenched short-range and long-range correlated random-mass disorder. Using a controlled triple epsilon expansion below the upper critical dimension for these models, we have found several disordered infrared fixed points characterized by finite short-range and/or long-range correlated randomness, and for which we computed critical exponents. The Boyanovsky-Cardy and quantum Weinrib-Halperin fixed points, while present, are destabilized by the Yukawa interaction in favor of new disordered fermionic QCPs, at which the strength of this interaction remains nonzero in the infrared. Besides local stability, using numerical and analytical approaches we analyzed bifurcations of the RG flow. We found instances of the familiar fixed-point annihilation scenario, which can here be tuned by a genuinely continuous variable---the exponent controlling the algebraic decay of disorder correlations---and with which is associated a parametrically large crossover length scale $L_*$ that separates a disordered quasi-critical regime ($L\ll L_*$) from a clean regime in the deep infrared ($L\gg L_*$). We also uncovered instances of the transcritical bifurcation, at which fixed points exchange their stability, and the more exotic supercritical Hopf bifurcation. The latter was accompanied by the emergence of a stable limit cycle on the critical hypersurface, thus producing the first instance of fermionic quantum criticality with discrete scale invariance. Several avenues present themselves for future research. The relative paucity of disordered fixed points found in the chiral Ising class as compared to its continuous-symmetry counterparts, and in fact, the complete absence of {\it bona fide} critical points in this class, is in agreement with the conjecture by Motrunich {\it et al.}~\cite{motrunich2000} that all discrete symmetry-breaking transitions in (2+1)D disordered systems should fall in the infinite-randomness universality class. Since infinite-randomness fixed points are not accessible to perturbative RG methods, nonperturbative numerical studies of Ising transitions of interacting Dirac fermions with quenched randomness are desirable, e.g., using quantum Monte Carlo methods~\cite{ma2018} or, possibly, incorporating fermions into (2+1)-dimensional adaptations of the strong-disorder RG method~\cite{iyer2012}. In the presence of gapless Dirac fermions strongly coupled to bosonic order parameter fluctuations, rare-region effects~\cite{nandkishore2013,nandkishore2014}---which dominate the low-energy physics at infinite-randomness fixed points---may however lead to a different strong-disorder phenomenology than that found in local bosonic models~\cite{vojta2003}. 
Besides the pure GNY universality classes, relevant to symmetry-breaking transitions in systems of itinerant Dirac electrons, our method of analysis may also provide a point of entry to study the effect of quenched disorder on more exotic transitions, such as those involving fractionalized phases. The algebraic or Dirac spin liquid~\cite{affleck1988,kim1999,rantner2001,*rantner2002,hermele2005,*hermele2007}, a quantum-disordered paramagnet with fractionalized spinon excitations, is described at low energies by (2+1)D quantum electrodynamics (QED$_3$) with $N=4$ flavors of two-component gapless Dirac fermions. The effect of quenched disorder on QED$_3$ itself was studied recently~\cite{Thomson2017,goswami2017,zhao2017,goldman2017,dey2020}; using the methods presented here, one could additionally study the effect of quenched disorder on quantum phase transitions out of the algebraic spin liquid~\footnote{As an experimental example of such transitions, Ref.~\cite{bordelon2019} reports the possible observation of a field-induced quantum phase transition between an algebraic spin liquid and a collinear magnetically ordered state in the triangular-lattice frustrated magnet NaYbO$_2$.}. Transitions towards conventional phases such as VBS states~\cite{boyack2019,zerf2020,janssen2020} or antiferromagnets~\cite{ghaemi2006,dupuis2019,zerf2019}, or transitions towards gapped chiral~\cite{janssen2017b,ihrig2018,zerf2018} or $\mathbb{Z}_2$ spin liquids~\cite{boyack2018}, are described by GNY theories in all three (Ising, XY, Heisenberg) symmetry classes, augmented by a coupling to fluctuating $U(1)$ gauge fields. The effect of random-mass disorder on the critical fixed points of such QED$_3$-GNY theories is an interesting topic for future research. \acknowledgments We thank D. A. Huse for a useful discussion. H.Y. was supported by Alberta Innovates and Alberta Advanced Education. J.M. was supported by NSERC Discovery Grants \#RGPIN-2014-4608, \#RGPIN-2020-06999, and \#RGPAS-2020-00064; the Canada Research Chair (CRC) Program; CIFAR; the Government of Alberta's Major Innovation Fund (MIF); the Tri-Agency New Frontiers in Research Fund (NFRF, Exploration Stream); and the Pacific Institute for the Mathematical Sciences (PIMS) Collaborative Research Group program.
\section{Introduction}\label{sec:intro} We refer the readers to \cite{Bang-Jensen-Gutin, Bondy} for graph theoretical notation and terminology not given here. Note that all digraphs considered in this paper have no parallel arcs or loops. A digraph $D$ is {\em symmetric} if it can be obtained from its underlying undirected graph $G$ by replacing each edge of $G$ with the corresponding arcs of both directions, that is, $D=\overleftrightarrow{G}$. The order $|G|$ of a (di)graph $G$ is the number of vertices in $G.$ Let $\overleftrightarrow{T}_n$ be the symmetric digraph whose underlying undirected graph is a tree of order $n$. We use $\overrightarrow{C}_n$ and $\overleftrightarrow{K}_n$ to denote the dicycle and the complete digraph of order $n$, respectively. For a graph $G=(V,E)$ and a set $S\subseteq V$ of at least two vertices, an {\em $S$-Steiner tree} or, simply, an {\em $S$-tree} is a subgraph $T$ of $G$ which is a tree with $S\subseteq V(T)$. Two $S$-trees $T_1$ and $T_2$ are said to be {\em edge-disjoint} if $E(T_1)\cap E(T_2)=\emptyset$. Two edge-disjoint $S$-trees $T_1$ and $T_2$ are said to be {\em internally disjoint} if $V(T_1)\cap V(T_2)=S$. The {\em generalized local connectivity} $\kappa_S(G)$ is the maximum number of internally disjoint $S$-trees in $G$. For an integer $k$ with $2\leq k\leq n$, the {\em generalized $k$-connectivity} \cite{Hager} is defined as $$\kappa_k(G)=\min\{\kappa_S(G)\mid S\subseteq V(G), |S|=k\}.$$ Similarly, the {\em generalized local edge-connectivity} $\lambda_S(G)$ is the maximum number of edge-disjoint $S$-trees in $G$. For an integer $k$ with $2\leq k\leq n$, the {\em generalized $k$-edge-connectivity} \cite{Li-Mao-Sun} is defined as $$\lambda_k(G)=\min\{\lambda_S(G)\mid S\subseteq V(G), |S|=k\}.$$ Let $\kappa(G)$ and $\lambda(G)$ denote the classical vertex-connectivity and edge-connectivity of an undirected graph $G.$ Observe that $\kappa_2(G)=\kappa(G)$ and $\lambda_2(G)=\lambda(G)$; hence, these two parameters are generalizations of classical connectivity of undirected graphs and are also called tree connectivity. The topic of tree connectivity has become an established area in graph theory; see a recent monograph \cite{Li-Mao5} by Li and Mao on this topic. To extend generalized $k$-connectivity to directed graphs, Sun, Gutin, Yeo and Zhang \cite{Sun-Gutin-Yeo-Zhang} observed that in the definition of $\kappa_S(G)$, one can replace ``an $S$-tree'' by ``a connected subgraph of $G$ containing $S$.'' Therefore, they defined {\em strong subgraph $k$-connectivity} by replacing ``connected'' with ``strongly connected'' (or, simply, ``strong'') as follows. Let $D=(V,A)$ be a digraph of order $n$, $S$ a subset of $V$ of size $k$ and $2\le k\leq n$. An {\em $S$-strong subgraph} is a strong subgraph $H$ of $D$ such that $S\subseteq V(H)$. $S$-strong subgraphs $D_1, \dots , D_p$ are said to be {\em internally disjoint} if $V(D_i)\cap V(D_j)=S$ and $A(D_i)\cap A(D_j)=\emptyset$ for all $1\le i<j\le p$. Let $\kappa_S(D)$ be the maximum number of internally disjoint $S$-strong digraphs in $D$. The {\em strong subgraph $k$-connectivity} \cite{Sun-Gutin-Yeo-Zhang} is defined as $$\kappa_k(D)=\min\{\kappa_S(D)\mid S\subseteq V, |S|=k\}.$$ As a natural counterpart of the strong subgraph $k$-connectivity, Sun and Gutin \cite{Sun-Gutin} introduced the concept of strong subgraph $k$-arc-connectivity. Let $D=(V(D),A(D))$ be a digraph of order $n$ and let $S\subseteq V(D)$ be a $k$-subset of $V(D)$, where $2\le k\leq n$.
Let $\lambda_S(D)$ be the maximum number of arc-disjoint $S$-strong digraphs in $D$. The {\em strong subgraph $k$-arc-connectivity} is defined as $$\lambda_k(D)=\min\{\lambda_S(D)\mid S\subseteq V(D), |S|=k\}.$$ Note that $\kappa_k(D)$ and $\lambda_k(D)$ are not only natural extensions of tree connectivity, but also can be seen as generalizations of connectivity and edge-connectivity of undirected graphs, as $\kappa_2(\overleftrightarrow{G})=\kappa(G)$ \cite{Sun-Gutin-Yeo-Zhang} and $\lambda_2(\overleftrightarrow{G})=\lambda(G)$ \cite{Sun-Gutin}. For more information on strong subgraph connectivity of digraphs, the reader is referred to the recent survey \cite{Sun-Gutin2}. In this paper, we continue research on strong subgraph arc-connectivity and focus on the strong subgraph 2-arc-connectivity of Cartesian products of digraphs. It is well known that Cartesian products of digraphs are of interest in graph theory and its applications; see a recent survey chapter by Hammack \cite{Hammack} covering many results on Cartesian products of digraphs. In the next section we introduce terminology and notation on Cartesian products of digraphs and give a simple yet useful upper bound on $\lambda_2(D),$ where $D$ is the Cartesian product of two digraphs $G$ and $H$, i.e., $D=G\Box H$. In Section \ref{sec:exect}, we prove that $$\lambda \left ( G \Box H\right )= \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}$$ for every pair $G$ and $H$ of strong digraphs, each of order at least 2.\footnote{Note that computing $\lambda \left ( G \Box H\right )$ is trivial when at least one of the two digraphs has just one vertex. Thus, we will henceforth assume that each of the two digraphs is of order at least 2. The same holds for $\lambda_2 \left ( G \Box H\right )$.} In Section~\ref{sec:1product} we prove that $$ \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}$$ and $\lambda_2(G)+\lambda_2(H)-1$ are an upper bound and a lower bound, respectively, for $\lambda_2(G\Box H)$. The upper bound follows from the formula for $\lambda \left ( G \Box H\right )$ and is thus tight. Unfortunately, we do not know whether the lower bound is tight; however, by Theorem \ref{thmd1} (mentioned below), it differs from a tight bound by at most 1. In Section~\ref{sec:product}, we obtain exact values for the strong subgraph 2-arc-connectivity of Cartesian products of some digraph classes; our results are collated in Theorem~\ref{thmd1}. For the classes of strong digraphs considered in Theorem \ref{thmd1}, we have $\lambda_2(G\Box H)= \lambda_2(G) +\lambda_2(H).$ \section{Cartesian product of digraphs}\label{sec:cp} For a positive integer $n$, let $[n]=\{1,2,\dots ,n\}.$ Let $G$ and $H$ be two digraphs with $V(G)=\{u_i \mid 1\leq i\leq n\}$ and $V(H)=\{v_j \mid 1\leq j\leq m\}$.
The {\em Cartesian product} $G\Box H$ of two digraphs $G$ and $H$ is a digraph with vertex set $$V(G\Box H)=V(G)\times V(H)=\{(x, x')\mid x\in V(G), x'\in V(H)\}$$ and arc set $$A(G\Box H)=\{(x,x')(y,y')\mid xy\in A(G) \text{ and } x'=y', \text{ or } x=y \text{ and } x'y'\in A(H)\}.$$ We will use $u_{i,j}$ to denote $(u_i,v_j)$ in the rest of the paper. By definition, the Cartesian product is associative and commutative, and $G\Box H$ is strongly connected if and only if both $G$ and $H$ are strongly connected \cite{Hammack}. \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure1-eps-converted-to.pdf} \caption{Two digraphs $G$, $H$ and their Cartesian product.} \label{figure1} \end{figure} We use $G(v_j)$ to denote the subgraph of $G\Box H$ induced by the vertex set $\{u_{i,j}\mid 1\leq i\leq n\}$, where $1\leq j\leq m$, and use $H(u_i)$ to denote the subgraph of $G\Box H$ induced by the vertex set $\{u_{i,j}\mid 1\leq j\leq m\}$, where $1\leq i\leq n$. Clearly, we have $G(v_j)\cong G$ and $H(u_i)\cong H$. (For example, as shown in Fig.~\ref{figure1}, $G(v_j)\cong G$ for $1\leq j\leq 4$ and $H(u_i)\cong H$ for $1\leq i\leq 3$.) For $1\leq j_1\neq j_2\leq m$, the vertices $u_{i,j_1}$ and $u_{i,j_2}$ belong to the same digraph $H(u_i)$, where $u_i\in V(G)$; we call $u_{i,j_2}$ the {\em vertex corresponding to} $u_{i,j_1}$ in $G(v_{j_2})$; for $1\leq i_1\neq i_2\leq n$, we call $u_{i_2,j}$ the vertex corresponding to $u_{i_1,j}$ in $H(u_{i_2})$. Similarly, we can define the subgraph {\em corresponding} to some subgraph. For example, in the digraph (c) of Fig.~\ref{figure1}, let $P_1$ ($P_2$) be the path labelled 1 (2) in $H(u_1)$ ($H(u_2)$); then $P_2$ is called the path {\em corresponding} to $P_1$ in $H(u_2)$. It follows from the definition of strong subgraph 2-arc-connectivity that for any digraph $D$, $\lambda_2(D)\le \min\{\delta^+(D), \delta^-(D)\}$ \cite{Sun-Gutin}. We will use this inequality in Section~\ref{sec:product}. Note that if $D=G\Box H$ then $\delta^+(D)=\delta^+(G)+\delta^+(H)$ and $\delta^-(D)=\delta^-(G)+\delta^-(H).$ \section{Formula for arc-connectivity of Cartesian product of two digraphs}\label{sec:exect} Xu and Yang \cite{Xu-Yang} (see also \cite{Spa} and \cite[Theorem 5.5]{Imrich-Klavzar-Rall}) proved that \begin{equation}\label{eq3} \lambda(G\Box H)=\min\{\lambda(G)|V(H)|,\lambda(H)|V(G)|,\delta(G)+\delta(H)\} \end{equation} for all connected undirected graphs $G$ and $H,$ each with at least two vertices. Since $\lambda(\overleftrightarrow{Q})=\lambda(Q)$ for every undirected graph $Q$, Formula (\ref{eq3}) can be easily extended to symmetric digraphs. In this section, we generalise Formula (\ref{eq3}) to all strong digraphs. Clearly, $\lambda\left ( D \right ) \le \min\left \{\delta ^{+ } \left ( D \right ),\delta ^{- } \left ( D \right ) \right \} $ for every digraph $D$. Hence, for any two strong digraphs $G$ and $H$, we have \begin{equation}\label{2} \begin{split} \lambda\left ( G \Box H \right ) \le \min\left \{\delta ^{+ } \left ( G \Box H \right ),\delta ^{- } \left ( G \Box H \right ) \right \} \\=\min\left \{ \delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.
\end{split} \end{equation} Furthermore, by the definitions of arc-strong connectivity and Cartesian product of digraphs, we have \begin{equation}\label{3} \lambda \left ( G \Box H\right )\le \lambda\left ( G \right ) \left | H \right | \end{equation} and \begin{equation}\label{4} \lambda \left ( G \Box H\right )\le \lambda\left ( H \right ) \left | G \right |. \end{equation} The inequalities (\ref{2}), (\ref{3}) and (\ref{4}) imply that $$\lambda \left ( G \Box H\right )\le \min\left \{ \lambda \left ( G \right ) \left | H \right |, \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ In fact, we can prove that equality holds, which can be seen as a digraph extension of (\ref{eq3}). \begin{thm}\label{arc-connectivity} Let $G$ and $H$ be two strong digraphs, each of order at least 2. Then $$\lambda \left ( G \Box H\right )= \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ \end{thm} \begin{pf} Let ${S}\subseteq A\left (G \Box H \right ) $ be an arc-cut set of $G \Box H$ with $\left|{S} \right | =\lambda \left ( G \Box H\right )$. It suffices to show that $$|S|\ge \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ If $\left |{S} \right |\ge \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right | \right \}$, then the inequality clearly holds. Therefore, we assume that $\left |{S} \right |< \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right | \right \}$ in the following argument, and in this case it suffices to show that $\left | {S} \right | \ge\delta ^{+ } \left ( G \Box H \right )$ or $\left | {S} \right | \ge\delta ^{-} \left ( G \Box H \right )$. Now there must exist a strong component $B$ of $G \Box H-{S}$ which contains some $G\left ( v_{j}\right )$, say $G\left ( v_{1}\right )$ (as $\left | {S} \right | <\lambda \left ( G \right ) \left | H \right |$), and some $H\left ( u_{i} \right )$, say $H\left ( u_{1} \right )$, in $G \Box H-{S}$ (as $\left | {S} \right | <\lambda \left ( H \right ) \left | G \right |$). Let $\left ( u,v \right )\in V(G \Box H)\setminus V(B)$. We want to prove that $\left | {S} \right | \ge d^{+ } \left (( u,v )\right )$ by the following operation that assigns each out-neighbor of $\left ( u,v \right )$ in $G\Box H$ a unique arc from ${S}$: We first consider out-neighbors of $\left ( u, v \right )$ in $G\left ( v \right )$. Let $\left ( {u}', v \right )$ be an out-neighbor of $\left ( u, v \right )$ in $G\left ( v \right )$. If the arc $a=\left ( u ,v \right )\left ( {u}' ,v \right )\in {S}$, we assign $a$ to $\left ( {u}', v \right )$. Otherwise, we must have $\left ( {u}', v \right )\notin B$. Therefore, the subdigraph of $G \Box H-{S}$ induced by $V(H\left ( {u}' \right ))$ is not strong and so $H\left ( {u}' \right )$ contains at least one arc from ${S}$, and we assign this arc to $\left ( {u}' ,v \right )$. We next consider out-neighbors of $\left ( u, v \right )$ in $H\left ( u \right )$.
Let $\left ( u, {v}' \right )$ be an out-neighbor of $\left ( u, v \right )$ in $H\left ( u \right )$. If ${a}'=\left ( u, v \right )\left ( u ,{v}' \right )\in {S}$, we assign ${a}'$ to $\left ( u, {v}' \right )$. Otherwise, we must have $\left ( u, {v}' \right )\notin B$. Therefore, the subdigraph of $G \Box H-{S}$ induced by $V(G\left ( {v}' \right ))$ is not strong and so $G\left ( {v}' \right )$ contains at least one arc from ${S}$, and we assign this arc to $\left ( u, {v}' \right )$. The above operations mean that $\left | {S} \right | \ge d^{+ } \left (( u,v )\right ) \ge \delta ^{+ } \left ( G \Box H \right )$. With a similar argument, we can prove that $\left | {S} \right | \ge\delta ^{- } \left ( G \Box H \right )$. This completes the proof. \end{pf} \section{General bounds}\label{sec:1product} By Theorems~\ref{thmd1} and~\ref{arc-connectivity}, and the fact that $\lambda _{k} \left ( D \right )\le \lambda\left ( D \right )$ for any digraph $D$~\cite{Sun-Gutin}, we have the following sharp upper bound for $\lambda _{2} \left ( G \Box H\right )$. \begin{thm} Let $G$ and $H$ be two strong digraphs, each with at least two vertices. Then $$\lambda _{2} \left ( G \Box H\right )\le \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ Moreover, this bound is sharp. \end{thm} Now we will provide a lower bound for $\lambda_2(G\Box H)$ for strong digraphs $G$ and $H$. \begin{thm}\label{thmd} Let $G$ and $H$ be two strong digraphs. We have $$\lambda_2(G\Box H)\geq \lambda_2(G)+ \lambda_2(H)-1.$$ \end{thm} \begin{pf} It suffices to show that there are at least $\lambda_2(G)+ \lambda_2(H)-1 $ arc-disjoint $S$-strong subgraphs for any $S\subseteq V(G\Box H)$ with $|S|=2$. Let $S=\{x, y\}$ and consider the following two cases. {\em Case 1}: $x$ and $y$ are in the same $H(u_i)$ or $G(v_j)$ for some $1\leq i\leq n, 1\leq j\leq m$. We will prove that, in this case, $\lambda_2(G\Box H)\geq \lambda_2(G)+ \lambda_2(H).$ Without loss of generality, we may assume that $x=u_{1,1},~y=u_{1,2}$. We know there are at least $\lambda_2(H)$ arc-disjoint $S$-strong subgraphs in the subgraph $H(u_1)$, and so it suffices to find the remaining $\lambda_2(G)$ $S$-strong subgraphs in $G\Box H$. We know there are at least $\lambda_2(G)$ arc-disjoint $\{x, u_{2,1}\}$-strong subgraphs, say $D_i(v_1)~(i\in [\lambda_2(G)])$, in $G(v_1)$. For each $i\in [\lambda_2(G)]$, we can choose an out-neighbor, say $u_{t_i,1}$~$(i\in [\lambda_2(G)])$, of $x$ in $D_i(v_1)$ such that these out-neighbors are distinct. Then in $H(u_{t_i})$, we know there are $\lambda_2(H)$ arc-disjoint $\{u_{t_i,1}, u_{t_i,2}\}$-strong subgraphs; we choose one such strong subgraph, say $D(H(u_{t_i}))$. For each $i\in [\lambda_2(G)]$, let $D_i(v_2)$ be the $\{u_{t_i,2}, y\}$-strong subgraph corresponding to $D_i(v_1)$ in $G(v_2)$. We now construct the remaining $\lambda_2(G)$ $S$-strong subgraphs by letting $D_i=D_i(v_1)\cup D(H(u_{t_i}))\cup D_i(v_2)$ for each $i\in [\lambda_2(G)]$. Combining the former $\lambda_2(H)$ arc-disjoint $S$-strong subgraphs with the $\lambda_2(G)$ $S$-strong subgraphs, we can obtain $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs. Observe that all these strong subgraphs are arc-disjoint. {\em Case 2.} $x$ and $y$ belong neither to a common $H(u_i)$ nor to a common $G(v_j)$. Without loss of generality, we may assume that $x=u_{1,1},~y=u_{2,2}$.
There are at least $\lambda_2(G)$ arc-disjoint $\{x, u_{2,1}\}$-strong subgraphs, say $D_i(v_1) $ $(i\in [\lambda_2(G)])$, in $G(v_1)$. For each $i\in [\lambda_2(G)]$, we can choose an out-neighbor, say $u_{t_i,1}$~$(i\in [\lambda_2(G)])$, of $x$ in $D_i(v_1)$ such that these out-neighbors are distinct. Then in $H(u_{t_i})$, we know that there are $\lambda_2(H)$ arc-disjoint $\{u_{t_i,1}, u_{t_i,2}\}$-strong subgraphs; we choose one such strong subgraph, say $D(H(u_{t_i}))$. For each $i\in [\lambda_2(G)]$, let $D_i(v_2)$ be the $\{u_{t_i,2},~y\}$-strong subgraph corresponding to $D_i(v_1)$ in $G(v_2)$. We now construct the $\lambda_2(G)$ $S$-strong subgraphs by letting $D_i=D_i(v_1)\cup D(H(u_{t_i}))\cup D_i(v_2)$ for each $i\in [\lambda_2(G)]$. Similarly, there are at least $\lambda_2(H)$ arc-disjoint $\{x, u_{1,2}\}$-strong subgraphs, say $D'_j(u_1)~(j\in [\lambda_2(H)])$, in $H(u_1)$. For each $j\in[\lambda_2(H)]$, we can choose an out-neighbor, say $u_{1,t'_j}$~$(j\in [\lambda_2(H)])$, of $x$ in $D'_j(u_1)$ such that these out-neighbors are distinct. Then in $G(v_{t'_j})$, we know there are $\lambda_2(G)$ arc-disjoint $\{u_{1,t'_j}, u_{2,t'_j}\}$-strong subgraphs; we choose one such strong subgraph, say $D(G(v_{t'_j}))$. For each $j\in [\lambda_2(H)]$, let $D'_j(u_2)$ be the $\{u_{2,t'_j}, y\}$-strong subgraph corresponding to $D'_j(u_1)$ in $H(u_2)$. We now construct the other $\lambda_2(H)$ $S$-strong subgraphs by letting $D'_j=D'_j(u_1)\cup D(G(v_{t'_j}))\cup D'_j(u_2)$ for each $j\in [\lambda_2(H)]$. {\em Subcase 2.1.} $t_i\neq 2$ for any $i\in [\lambda_2(G)]$ and $t'_j\neq 2$ for any $j\in [\lambda_2(H)]$, that is, $u_{2,1}$ was not chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ and $u_{1,2}$ was not chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$. We can check that the above $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint. {\em Subcase 2.2.} $t_i=2$ for some $i\in [\lambda_2(G)]$ or $t'_j=2$ for some $j\in [\lambda_2(H)]$, that is, $u_{2,1}$ was chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ or $u_{1,2}$ was chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$. Without loss of generality, we may assume that $t_i=2$ for some $i$ and $t'_j \neq 2$ for all $j$, that is, $u_{2,1}$ was chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ and $u_{1,2}$ was not chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$. When $A(D_i)\cap A(D'_j) \neq \emptyset$ for some $j$, we can discard one of the two overlapping subgraphs and obtain $\lambda_2(G)+ \lambda_2(H)-1$ arc-disjoint $S$-strong subgraphs. Otherwise, we can check that the above $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint and get the desired $S$-strong subgraphs. {\em Subcase 2.3.} $t_i=2$ for some $i\in [\lambda_2(G)]$ and $t'_j=2$ for some $j\in [\lambda_2(H)]$. Without loss of generality, we may assume that $t_1=2$ and $t'_1=2$; we replace $D_1$, $D'_1$ by $\overline{D}_1$, $\overline{D'}_1$, respectively, as follows: let $\overline{D}_1= D_1(v_1)\cup D(H(u_{t_1}))$ and $\overline{D'}_1= D'_1(u_1)\cup D_1(v_2)$. We can check that the current $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint. Hence, the bound holds. This completes the proof. \end{pf} \section{Exact values for digraph classes}\label{sec:product} In this section, we will obtain exact values for the strong subgraph 2-arc-connectivity of Cartesian products of digraphs belonging to some well-known classes. \begin{pro}\label{p1} We have $ \lambda_2(\overrightarrow{C}_n\Box \overrightarrow{C}_m)=2.$ \end{pro}
\begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure31.png} \caption{Cartesian product of two dicycles.} \label{figure3.1} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \}$; we consider only the case where $x$ and $y$ lie neither in a common copy of $\overrightarrow{C}_n$ nor in a common copy of $\overrightarrow{C}_m$, since the arguments for the remaining cases are similar. Without loss of generality, we may assume that $ x= u_{1,1},~y=u_{2,2} $. We can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overrightarrow{C}_m$, say $ D_{1} $ and $ D_{2} $ (as shown in Fig.~\ref{figure3.1}), such that $ V\left ( D_{1} \right ) =\left \{ x,~y,~u_{1,2},\cdots,~u_{2,m-1},~u_{2,m},\cdots,~u_{n,m},~u_{1,m}\right \}$ and $A\left ( D_{1} \right )=\left \{xu_{1,2},~u_{1,2}y,\cdots,~u_{2,m-1}u_{2,m},\cdots ,~u_{n-1,m}u_{n,m},~u_{n,m}u_{1,m},~\right.\\ \left. u_{1,m}x\right \}$. $V\left ( D_{2} \right ) =\left \{ x,~y,~u_{2,1},\cdots,~u_{n-1,2},\cdots,~u_{n-1,m-1},~u_{n-1,m},~u_{n-1,1}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-2,2}u_{n-1,2},\cdots,~u_{n-1,m-1}u_{n-1,m},~u_{n-1,m}\right.\\ \left.u_{n-1,1},~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $2=\min\{\delta^+(D), \delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overrightarrow{C}_m)\geq 2$, where $D=\overrightarrow{C}_n \Box \overrightarrow{C}_m$. This completes the proof. \end{pf} \begin{pro}\label{p2} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{C} _{m} )=3.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure32.png} \caption{Cartesian product of a dicycle and the complete biorientation of a cycle.} \label{figure3.2} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \}$; we consider only the case where $x$ and $y$ lie neither in a common copy of $\overrightarrow{C}_n$ nor in a common copy of $\overleftrightarrow{C}_{m}$, since the arguments for the remaining cases are similar. Without loss of generality, we may assume that $x=u_{1,1}$,~$y=u_{2,2}$. We can get three arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{C } _{m} $, say $ D_{1} $,~$ D_{2} $ and $ D_{3}$ (as shown in Fig.~\ref{figure3.2}), such that $ V\left ( D_{1} \right ) =\left \{ x,~y,\cdots,~u_{n-1,2},~u_{n,2},~u_{1,2}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $V\left ( D_{2} \right ) =\left \{ x,~y,~u_{1,m},~u_{2,m},~u_{2,m-1},\cdots,~u_{n-1,m-1},~u_{n,m}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{1,m},~u_{1,m}u_{2,m},~ u_{2,m}u_{2,m-1},\cdots,~u_{2,3}y,~yu_{2,3},\cdots,~u_{2,m-1}\right.\\ \left.u_{2,m},\cdots,~u_{n-1,m}u_{n,m},~u_{n,m}u_{1,m},~u_{1,m}x\right \} $. $V\left ( D_{3} \right ) =\left \{ x,~y,~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \}$ and $ A\left ( D_{3} \right ) =\left \{ xu_{2,1},~u_{2,1}y,~ yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $3=\min\{\delta^+(D),~\delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{C }_{m} )\geq 3$, where $D=\overrightarrow{C}_n \Box \overleftrightarrow{C}_{m}$. This completes the proof.
\end{pf} \begin{pro}\label{p4} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{T} _{m} )=2.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure34.png} \caption{Cartesian product of a dicycle and the complete biorientation of a tree.} \label{figure3.4} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \}$; we consider only the case where $x$ and $y$ lie neither in a common copy of $\overrightarrow{C}_n$ nor in a common copy of $\overleftrightarrow{T}_{m}$, as the arguments for the remaining cases are similar. Without loss of generality, we may assume that $ x=u_{1,1}$,~$y=u_{2,2}$. We can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{T } _{m}$, say $ D_{1} $ and $D_{2}$ (as shown in Fig.~\ref{figure3.4}), such that $ V\left ( D_{1} \right ) =\left \{ x,~y,\cdots,~u_{n-1,2},~u_{n,2},~u_{1,2},~u_{2,1}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $V\left ( D_{2} \right ) =\left \{x,~y,~u_{1,2},~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{1,2},~u_{1,2}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $2=\min\{\delta^+(D),~\delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{T } _{m} )\geq 2$, where $D=\overrightarrow{C}_n \Box \overleftrightarrow{T}_{m}$. This completes the proof. \end{pf} \begin{pro}\label{p7} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{K} _{m} )=m.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure37.png} \caption{Cartesian product of a dicycle and the complete biorientation of $K_2$.} \label{figure3.7} \end{figure} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure371.png} \caption{Cartesian product of a dicycle and the complete biorientation of $K_m$.} \label{figure3.7.1} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \}$; we consider only the case where $x$ and $y$ lie neither in a common copy of $\overrightarrow{C}_n$ nor in a common copy of $\overleftrightarrow{K}_{m}$, as the arguments for the remaining cases are similar. Without loss of generality, we may assume that $ x=u_{1,1}$,~$y=u_{2,2}$. We first show that $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{K} _{2} )=2 $. When $m=2$, we can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{2} $, say $ D_{1} $ and $ D_{2} $ (as shown in Fig.~\ref{figure3.7}) satisfying: $ V\left ( D_{1} \right ) =\left \{x,~y,~u_{1,2},~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \}$. $V\left ( D_{2} \right ) =\left \{x,~y,~u_{1,2},~u_{2,1},\cdots,~u_{n-1,2},~u_{n,2}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \} $. The proposition is now proved by induction on $m$.
Suppose that $\lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{k} )=k$ when $m=k$; that is, we can get $k$ arc-disjoint $S$-strong subgraphs, say $ D_{1},~D_{2},\cdots,~D_{k} $, in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{k}$. We shall show that $\lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{k+1} )=k+1 $ when $m=k+1$. In passing from $\overleftrightarrow{K } _{k}$ to $\overleftrightarrow{K } _{k+1}$, the in-degree and out-degree of each vertex of the complete digraph increase by 1, and we can get $k+1$ arc-disjoint $S$-strong subgraphs, say $ D_{1},~D_{2},\cdots,~D_{k},~D_{k+1} $, in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{k+1} $. By the symmetry of the complete digraph, the same conclusion is drawn in the two cases where $x$, $y$ both belong to $\overleftrightarrow{K} _{k+1}$, and where $x$, $y$ belong to $\overleftrightarrow{K} _{k}$ and $\overleftrightarrow{K} _{k+1}$, respectively. From the above argument, the proposition holds for any positive integer $m$: we can get $m$ arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{m} $, say $ D_{1},~D_{2},\cdots,~D_{ j}~(2< j\le m),\cdots,~D_{m-1},~D_{m} $ (as shown in Fig.~\ref{figure3.7.1}), such that $ V\left ( D_{1} \right ) =\left \{x,~y,~u_{1,2},\cdots,~u_{n-1,2},~u_{n,2}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $ V\left ( D_{2} \right ) =\left \{x,~y,~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \} $ and $A\left ( D_{2} \right )=\left \{ xu_{2,1},~u_{2,1}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \}$. $V\left ( D_{j} \right ) =\left \{x,~y,~u_{1,j},~u_{2,j},\cdots,~u_{n-1,j},~u_{n,j}\right \}$ and $ A\left (D_{j} \right ) =\left \{ xu_{1,j},~u_{1,j}u_{2,j},~u_{2,j}y,~yu_{2,j},\cdots,~u_{n-1,j}u_{n,j},~u_{n,j}u_{1,j},~u_{1,j} x\right \} $. Then we have $m=\min\{\delta^+(D), \delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{m} )\geq m$, where $D=\overrightarrow{C}_n \Box \overleftrightarrow{K}_{m}$. This completes the proof. \end{pf} Since $\lambda_2(\overleftrightarrow{Q})=\lambda(Q)$ for any undirected graph $Q$, using the definition of the Cartesian product, we have \begin{equation}\label{eq2} \lambda_2(\overleftrightarrow{G}\Box \overleftrightarrow{H})= \lambda(G\Box H) \end{equation} for undirected graphs $G$ and $H.$ Propositions \ref{p1}-\ref{p7} and Formulas (\ref{eq2}) and (\ref{eq3}) imply the following theorem. Indeed, the entries in the first row and the first column of Table 1 follow from Propositions \ref{p1}-\ref{p7} (together with the commutativity of $\Box$), and all other entries can be easily computed using (\ref{eq2}) and (\ref{eq3}). \begin{thm}\label{thmd1} The following table for the strong subgraph 2-arc-connectivity of Cartesian products of some digraph classes holds: \begin{figure}[htbp] {\tiny \begin{center} \renewcommand\arraystretch{3.5} \begin{tabular}{|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|} \hline & $\overrightarrow{C}_m$ & $\overleftrightarrow{C}_m$ & $\overleftrightarrow{T}_m$ & $\overleftrightarrow{K}_m$ \\\hline $\overrightarrow{C}_n$ & $2$ & $3$ & $2$ & $m$ \\\hline $\overleftrightarrow{C}_n$ & $3$ & $4$ & $3$ & $m+1$ \\\hline $\overleftrightarrow{T}_n$ & $2$ & $3$ & $2$ & $m$ \\\hline $\overleftrightarrow{K}_n$ & $n$ & $n+1$ & $n$ & $n+m-2$ \\\hline \end{tabular} \vspace*{20pt} \centerline{\normalsize Table $1$.
Exact values of $\lambda_2$ for Cartesian products of some digraph classes.} \end{center}} \end{figure} \end{thm} \vskip 0.5cm
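As a small numerical sanity check (not part of the results above), the following Python sketch verifies the formula of Theorem~\ref{arc-connectivity} on a few of the digraph classes from Table 1. It assumes the \texttt{networkx} package, whose \texttt{edge\_connectivity} routine computes the arc-strong connectivity $\lambda(D)$ of a strong digraph $D$ and whose \texttt{cartesian\_product} implements $G\Box H$; we are not aware of an off-the-shelf routine for $\lambda_2$ itself, so by the results of Section~\ref{sec:1product} the value checked here is only an upper bound for $\lambda_2(G\Box H)$.
\begin{verbatim}
# Sanity check of the formula
#   lambda(G box H) = min{ lambda(G)|H|, lambda(H)|G|,
#                          delta^+(G)+delta^+(H), delta^-(G)+delta^-(H) }
# on small strong digraphs, using networkx (assumed available).
import networkx as nx

def formula_rhs(G, H):
    # Right-hand side of the formula for two strong digraphs G and H.
    lam_G, lam_H = nx.edge_connectivity(G), nx.edge_connectivity(H)
    d_plus  = min(d for _, d in G.out_degree()) + min(d for _, d in H.out_degree())
    d_minus = min(d for _, d in G.in_degree())  + min(d for _, d in H.in_degree())
    return min(lam_G * H.number_of_nodes(), lam_H * G.number_of_nodes(),
               d_plus, d_minus)

C4  = nx.cycle_graph(4, create_using=nx.DiGraph)   # dicycle C_4
bC3 = nx.cycle_graph(3).to_directed()              # complete biorientation of C_3
K4  = nx.complete_graph(4, create_using=nx.DiGraph)

for G, H in [(C4, bC3), (C4, K4), (bC3, K4)]:
    D = nx.cartesian_product(G, H)
    assert nx.edge_connectivity(D) == formula_rhs(G, H)
print("arc-connectivity formula confirmed on all test pairs")
\end{verbatim}
For these three pairs the checked value ($3$, $4$, and $5$, respectively) also equals $\lambda_2(G\Box H)$ as given in Table 1, consistent with the observation that $\lambda_2(G\Box H)=\lambda_2(G)+\lambda_2(H)$ for the classes considered there.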
Let $D=(V(D),A(D))$ be a digraph of order $n$, $S\subseteq V$ a $k$-subset of $V(D)$ and $2\le k\leq n$. Let $\lambda_S(D)$ be the maximum number of arc-disjoint $S$-strong digraphs in $D$. The {\em strong subgraph $k$-arc-connectivity} is defined as $$\lambda_k(D)=\min\{\lambda_S(D)\mid S\subseteq V(D), |S|=k\}.$$ Note that $\kappa_k(D)$ and $\lambda_k(D)$ are not only natural extensions of tree connectivity, but also could be seen as generalizations of connectivity and edge-connectivity of undirected graphs as $\kappa_2(\overleftrightarrow{G})=\kappa(G)$ \cite{Sun-Gutin-Yeo-Zhang} and $\lambda_2(\overleftrightarrow{G})=\lambda(G)$ \cite{Sun-Gutin}. For more information on the topic of strong subgraph connectivity of digraphs, the readers can see \cite{Sun-Gutin2} for a recent survey. In this paper, we continue research on strong subgraph arc-connectivity and focus on the strong subgraph 2-arc-connectivity of Cartesian products of digraphs. It is well known that Cartesian products of digraphs are of interest in graph theory and its applications; see a recent survey chapter by Hammack \cite{Hammack} considering many results on Cartesian products of digraphs. In the next section we introduce terminology and notation on Cartesian products of digraphs and give a simple yet useful upper bound on $\lambda_2(D),$ where $D$ is Cartesian product of any digraphs $G$ and $H$ i.e. $D=G\Box H$. In Section \ref{sec:exect}, we prove that $$\lambda \left ( G \Box H\right )= \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}$$ for every pair $G$ and $H$ of strong digraphs, each of order at least 2.\footnote{Note that the case of at least one of two digraphs having just one vertex in $\lambda \left ( G \Box H\right )$ is trivial. Thus, we will henceforth assume that each of the two digraphs is of order at least 2. The same holds for $\lambda_2 \left ( G \Box H\right )$. } In Section~\ref{sec:1product} we prove that $$ \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}$$ and $\lambda_2(G)+\lambda_2(H)-1$ are an upper bound and a lower bound, respectively, for $\lambda_2(G\Box H)$. The upper bound follows from the formula for $\lambda \left ( G \Box H\right )$ and thus it is tight. Unfortunately, we do not know whether this lower bound is tight or not, but by Theorem \ref{thmd1} (mentioned below), the gap with a tight bound is at most 1. In Section~\ref{sec:product}, we obtain exact values for the strong subgraph 2-arc-connectivity of Cartesian products of some digraph classes; our results are collated in Theorem~\ref{thmd1}. For the classes of strong digraphs considered in Theorem \ref{thmd1}, we have $\lambda_2(G\Box H)= \lambda_2(G) +\lambda_2(H).$ \section{Cartesian product of digraphs}\label{sec:cp} For a positive integer $n$, let $[n]=\{1,2,\dots ,n\}.$ Let $G$ and $H$ be two digraphs with $V(G)=\{u_i \mid 1\leq i\leq n\}$ and $V(H)=\{v_j \mid 1\leq j\leq m\}$. 
The {\em Cartesian product} $G\Box H$ of two digraphs $G$ and $H$ is a digraph with vertex set $$V(G\Box H)=V(G)\times V(H)=\{(x, x')\mid x\in V(G), x'\in V(H)\}$$ and arc set $$A(G\Box H)=\{(x,x')(y,y')\mid xy\in A(G),~x'=y',~or~x=y,~x'y'\in A(H)\}.$$~We will use $u_{i,j}$ to denote $(u_i,v_j)$ in the rest of the paper.~By definition, we know the Cartesian product is associative and commutative, and $G\Box H$ is strongly connected if and only if both $G$ and $H$ are strongly connected \cite{Hammack}. \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure1-eps-converted-to.pdf \caption{Two digraphs $G$, $H$ and their Cartesian product.} \label{figure1} \end{figure} We use $G(v_j)$ to denote the subgraph of $G\Box H$ induced by vertex set $\{u_{i,j}\mid 1\leq i\leq n\}$ where $1\leq j\leq m$,~and use $H(u_i)$ to denote the subgraph of $G\Box H$ induced by vertex set $\{u_{i,j}\mid 1\leq j\leq m\}$ where $1\leq i\leq n$.~Clearly,~we have $G(v_j)\cong G$ and $H(u_i)\cong H$. (For example,~as shown in Fig. \ref{figure1},~$G(v_j)\cong G$ for $1\leq j\leq 4$ and $H(u_i)\cong H$ for $1\leq i\leq 3$).~For $1\leq j_1\neq j_2\leq m$,~ the vertices $u_{i,j_1}$ and $u_{i,j_2}$ belong to the same digraph $H(u_i)$ where $u_i\in V(G)$;~we call $u_{i,j_2}$ the {\em vertex corresponding to} $u_{i,j_1}$ in $G(v_{j_2})$;~for $1\leq i_1\neq i_2\leq n$, we call $u_{i_2,j}$ the vertex corresponding to $u_{i_1,j}$ in $H(u_{i_2})$.~Similarly,~we can define the subgraph {\em corresponding} to some subgraph.~For example,~ in the digraph (c) of Fig. \ref{figure1},~let $P_1$~$(P_2)$ be the path labelled 1 (2) in $H(u_1)~(H(u_2))$, then $P_2$ is called the path {\em corresponding} to $P_1$ in $H(u_2)$. It follows from the definition of strong subgraph 2-arc-connectivity that for any digraph $D$, $\lambda_2(D)\le \min\{\delta^+(D), \delta^-(D)\}$ \cite{Sun-Gutin}. We will use this inequality in Section~\ref{sec:product}. Note that if $D=G\Box H$ then $\delta^+(D)=\delta^+(G)+\delta^+(H)$ and $\delta^-(D)=\delta^-(G)+\delta^-(H).$ \section{Formula for arc-connectivity of Cartesian product of two digraphs}\label{sec:exect} Xu and Yang \cite{Xu-Yang} (see also \cite{Spa} and \cite[Theorem 5.5]{Imrich-Klavzar-Rall}) proved that \begin{equation}\label{eq3} \lambda(G\Box H)=\min\{\lambda(G)|V(H)|,\lambda(H)|V(G)|,\delta(G)+\delta(H)\} \end{equation} for all connected undirected graphs $G$ and $H,$ each with at least two vertices. Since $\lambda(\overleftrightarrow{Q})=\lambda(Q)$ for every undirected graph $Q$, Formula (\ref{eq3}) can be easily extended to symmetric digraphs. In this section, we generalise Formula (\ref{eq3}) to all strong digraphs. Clearly, $\lambda\left ( D \right ) \le \min\left \{\delta ^{+ } \left ( D \right ),\delta ^{- } \left ( D \right ) \right \} $ for every digraph $D$. Hence, for any two strong digraphs $G$ and $H$, we have \begin{equation}\label{2} \begin{split} \lambda\left ( G \Box H \right ) \le \min\left \{\delta ^{+ } \left ( G \Box H \right ),\delta ^{- } \left ( G \Box H \right ) \right \} \\=\min\left \{ \delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}. 
\end{split} \end{equation} Furthermore, by the definitions of arc-strong connectivity and Cartesian product of digraphs, we have \begin{equation}\label{3} \lambda \left ( G \Box H\right )\le \lambda\left ( G \right ) \left | H \right | \end{equation} and \begin{equation}\label{4} \lambda \left ( G \Box H\right )\le \lambda\left ( H \right ) \left | G \right |. \end{equation} The inequalities (\ref{2}), (\ref{3}) and (\ref{4}) imply that $$\lambda \left ( G \Box H\right )\le \min\left \{ \lambda \left ( G \right ) \left | H \right |, \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ In fact, we can furthermore prove that the equality holds and it could be seen as a digraph extension of (\ref{eq3}). \begin{thm}\label{arc-connectivity} Let $G$ and $H$ be two strong digraphs, each of order at least 2. Then $$\lambda \left ( G \Box H\right )= \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ \end{thm} \begin{pf} Let ${S}\subseteq A\left (G \Box H \right ) $ be an arc-cut set of $G \Box H$ with $\left|{S} \right | =\lambda \left ( G \Box H\right )$. It suffices to show that $$|S|\ge \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ If $\left |{S} \right |\ge \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right | \right \}$, then the inequality clearly holds. Therefore, we assume that $\left |{S} \right |< \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right | \right \}$ in the following argument and in this case it suffices to show that $\left | {S} \right | \ge\delta ^{+ } \left ( G \Box H \right )$ or $\left | {S} \right | \ge\delta ^{-} \left ( G \Box H \right )$. Now there must exist a strong component $B$ of $G \Box H-{S}$ which contains some $G\left ( v_{j}\right )$, say $G\left ( v_{1}\right )$, (as $\left | {S} \right | <\lambda \left ( G \right ) \left | H \right |$) and some $H\left ( u_{i} \right )$, say $H\left ( u_{1} \right )$, in $G \Box H-{S}$ (as $\left | {S} \right | <\lambda \left ( H \right ) \left | G \right |$). Let $\left ( u,v \right )\in V(G \Box H)\setminus V(B)$. We want to prove that $\left | {S} \right | \ge d^{+ } \left (( u,v )\right )$ by the following operation that assigns each out-neighbor of $\left ( u,v \right )$ in $G\Box H$ a unique arc from ${S}$: We first consider out-neighbors of $\left ( u, v \right )$ in $G\left ( v \right )$. Let $\left ( {u}', v \right )$ be an out-neighbor of $\left ( u, v \right )$ in $G\left ( v \right )$. If the arc $a=\left ( u ,v \right )\left ( {u}' ,v \right )\in {S}$, we assign $a$ to $\left ( {u}', v \right )$. Otherwise, we must have $\left ( {u}', v \right )\notin B$. Therefore, the subdigraph of $G \Box H-{S}'$ induced by $V(H\left ( {u}' \right ))$ is not strong and so $H\left ( {u}' \right )$ contains at least one arc from ${S}$, and we assign this arc to $\left ( {u}' ,v \right )$. We next consider out-neighbors of $\left ( u, v \right )$ in $H\left ( u \right )$. 
Let $\left ( u, {v}' \right )$ be an out-neighbor of $\left ( u, v \right )$ in $H\left ( u \right )$. If ${a}'=\left ( u, v \right )\left ( u ,{v}' \right )\in {S}$, we assign ${a}'$ to $\left ( u, {v}' \right )$. Otherwise, we must have $\left ( u, {v}' \right )\notin B$. Therefore, the subdigraph of $G \Box H-{S}'$ induced by $V(G\left ( {v}' \right ))$ is not strong and so $G\left ( {v}' \right )$ contains at least one arc from ${S}$, and we assign this arc to $\left ( u, {v}' \right )$. The above operations mean that $\left | {S} \right | \ge d^{+ } \left (( u,v )\right ) \ge \delta ^{+ } \left ( G \Box H \right )$. With a similar argument, we can prove that $\left | {S} \right | \ge\delta ^{- } \left ( G \Box H \right )$. This completes the proof. \end{pf} \section{General bounds}\label{sec:1product} By Theorems~\ref{thmd1} and ~\ref{arc-connectivity}, and the fact that $\lambda _{k} \left ( D \right )\le \lambda\left ( D \right )$ for any digraph $D$\cite{Sun-Gutin}, we have the following sharp upper bound for $\lambda _{2} \left ( G \Box H\right )$. \begin{thm} Let $G$ and $H$ be two strong digraphs, each with at least two vertices. Then $$\lambda _{2} \left ( G \Box H\right )\le \min\left \{ \lambda \left ( G \right ) \left | H \right | , \lambda \left ( H \right ) \left |G \right |,\delta ^{+ } \left ( G \right )+ \delta ^{+ } \left ( H \right ),\delta ^{- } \left ( G \right )+ \delta ^{- } \left ( H \right ) \right \}.$$ Moreover, this bound is sharp. \end{thm} Now we will provide a lower bound for $\lambda_2(G\Box H)$ for strong digraphs $G$ and $H$. \begin{thm}\label{thmd} Let $G$ and $H$ be two strong digraphs. We have $$\lambda_2(G\Box H)\geq \lambda_2(G)+ \lambda_2(H)-1.$$ \end{thm} \begin{pf} It suffices to show that there are at least $\lambda_2(G)+ \lambda_2(H)-1 $ arc-disjoint $S$-strong subgraphs for any $S\subseteq V(G\Box H)$ with $|S|=2$. Let $S=\{x, y\}$ and consider the following two cases. {\em Case 1}: $x$ and $y$ are in the same $H(u_i)$ or $G(v_j)$ for some $1\leq i\leq n, 1\leq j\leq m$. We will prove that, in this case, $\lambda_2(G\Box H)\geq \lambda_2(G)+ \lambda_2(H).$ Without loss of generality, we may assume that $x=u_{1,1},~y=u_{1,2}$. We know there are at least $\lambda_2(H)$ arc-disjoint $S$-strong subgraphs in the subgraph $H(u_1)$, and so it suffices to find the remaining $\lambda_2(G)$ $S$-strong subgraphs in $G\Box H$. We know there are at least $\lambda_2(G)$ arc-disjoint $\{x, u_{2,1}\}$-strong subgraphs, say $D_i(v_1)~(i\in [\lambda_2(G)])$, in $G(v_1)$. For each $i\in [\lambda_2(G)]$, we can choose an out-neighbor, say $u_{t_i,1}$~$(i\in [\lambda_2(G)])$, of $x$ in $D_i(v_1)$ such that these out-neighbors are distinct. Then in $H(u_{t_i})$, we know there are $\lambda_2(H)$ arc-disjoint $\{u_{t_i,1}, u_{t_i,2}\}$-strong subgraphs, we choose one such strong subgraph, say $D(H(u_{t_i}))$.~For each $i\in [\lambda_2(G)]$,~let $D_i(v_2)$ be the $\{u_{t_i,2}, y\}$-strong subgraph corresponding to $D_i(v_1)$ in $G(v_2)$. We now construct the remaining $\lambda_2(G)$ $S$-strong subgraphs by letting $D_i=D_i(v_1)\cup D(H(u_{t_i}))\cup D_i(v_2)$ for each $i\in [\lambda_2(G)]$. Combining the former $\lambda_2(H)$ arc-disjoint $S$-strong subgraphs with the $\lambda_2(G)$ $S$-strong subgraphs, we can obtain $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs. Observe all these strong subgraphs are arc-disjoint. {\em Case 2.} $x$ and $y$ belong to distinct $H(u_i)$ and $G(v_j)$. Without loss of generality, we may assume that $x=u_{1,1},~y=u_{2,2}$. 
There are at least $\lambda_2(G)$ arc-disjoint $\{x, u_{2,1}\}$-strong subgraphs, say $D_i(v_1) $ $(i\in [\lambda_2(G)])$, in $G(v_1)$.~For each $i\in [\lambda_2(G)]$,~we can choose an out-neighbor,~say $u_{t_i,1}$~$(i\in [\lambda_2(G)])$, of $x$ in $D_i(v_1)$ such that these out-neighbors are distinct. Then in $H(u_{t_i})$, we know that there are $\lambda_2(H)$ arc-disjoint $\{u_{t_i,1}, u_{t_i,2}\}$-strong subgraphs; we choose one such strong subgraph,~say $D(H(u_{t_i}))$. For each $i\in [\lambda_2(G)]$,~let $D_i(v_2)$ be the $\{u_{t_i,2},~y\}$-strong subgraph corresponding to $D_i(v_1)$ in $G(v_2)$. We now construct the $\lambda_2(G)$ $S$-strong subgraphs by letting $D_i=D_i(v_1)\cup D(H(u_{t_i}))\cup D_i(v_2)$ for each $i\in [\lambda_2(G)]$. Similarly, there are at least $\lambda_2(H)$ arc-disjoint $\{x, u_{1,2}\}$-strong subgraphs, say $D'_j(u_1)~(j\in [\lambda_2(H)])$, in $H(u_1)$. For each $j\in[\lambda_2(H)]$, we can choose an out-neighbor, say $u_{1,t'_j}$~$(j\in [\lambda_2(H)])$, of $x$ in $D'_j(u_1)$ such that these out-neighbors are distinct. Then in $G(v_{t'_j})$, we know there are $\lambda_2(G)$ arc-disjoint $\{u_{1,t'_j}, u_{2,t'_j}\}$-strong subgraphs, we choose one such strong subgraph, say $D(G(v_{t'_j}))$. For each $j\in [\lambda_2(H)]$, let $D'_j(u_2)$ be the $\{u_{2,t'_j}, y\}$-strong subgraph corresponding to $D'_j(u_1)$ in $H(u_2)$. We now construct the other $\lambda_2(H)$ $S$-strong subgraphs by letting $D'_j=D'_j(u_1)\cup D(G(v_{t'_j}))\cup D'_j(u_2)$ for each $j\in [\lambda_2(H)]$. {\em Subcase 2.1.} $t_i\neq 2$ for any $i\in [\lambda_2(G)]$ and $t'_j\neq 2$ for any $j\in [\lambda_2(H)]$, that is,~$u_{2,1}$ was not chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ and $u_{1,2}$ was not chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$.~We can check the above $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint. {\em Subcase 2.2.} $t_i=2$ for some $i\in [\lambda_2(G)]$ or $t'_j=2$ for some $j\in [\lambda_2(H)]$,~that is,~$u_{2,1}$ was chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ or $u_{1,2}$ was chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$. Without loss of generality,~we may assume that $t_i=2$ and $t'_j \neq 2 $,~that is,~$u_{2,1}$ was chosen as an out-neighbor of $u_{1,1}$ in $G(v_1)$ and $u_{1,2}$ was not chosen as an out-neighbor of $u_{1,1}$ in $H(u_1)$.~When $A(D_i)\cap A(D_j) \neq \emptyset$,~we can get $\lambda_2(G)+ \lambda_2(H)-1$ arc-disjoint $S$-strong subgraphs.~Otherwise,~we can check the above $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint and get the desired $S$-strong subgraphs. {\em Subcase 2.3.} $t_i=2$ for some $i\in [\lambda_2(G)]$ and $t'_j=2$ for some $j\in [\lambda_2(H)]$, we replace $D_1$,~$D'_1$ by $\overline{D}_1$, $\overline{D'}_1$, respectively as follows: let $\overline{D}_1= D_1(v_1)\cup D(H(u_{t_1}))$ and $\overline{D'}_1= D'_1(u_1)\cup D_1(v_2)$. We can check that the current $\lambda_2(G)+ \lambda_2(H)$ strong subgraphs are arc-disjoint. Hence, the bound holds. This completes the proof. \end{pf} \section{Exact values for digraph classes}\label{sec:product} In this section, we will obtain exact values for the strong subgraph 2-arc-connectivity of Cartesian product of two digraphs belonging to some digraph classes. \begin{pro}\label{p1} We have $ \lambda_2(\overrightarrow{C}_n\Box \overrightarrow{C}_m)=2. 
$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure31.png \caption{Cartesian product of two dicycles.} \label{figure3.1} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \} $,~we just consider the case that $ x,~y$ are neither in the same $ \overrightarrow{C}_n\left ( u_{i} \right ) $ nor in the same $ \overrightarrow{C}_m\left ( v_{j} \right ) $ for some $ 1\leq i \leq n $,~$ 1\leq j \leq m $, since the arguments for remaining cases are similar.~Without loss of generality, we may assume that $ x= u_{1,1},~y=u_{2,2} $.~We can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overrightarrow{C}_m$,~say $ D_{1} $ and $ D_{2} $ (as shown in Fig. \ref{figure3.1}) such that $ V\left ( D_{1} \right ) =\left \{ x,~y,~u_{1,2},\cdots,~u_{2,m-1},~u_{2,m},\cdots,~u_{n,m},~u_{1,m}\right \}$ and $A\left ( D_{1} \right )=\left \{xu_{1,2},~u_{1,2}y,\cdots,~u_{2,m-1}u_{2,m},\cdots ,~u_{n-1,m}u_{n,m},~u_{n,m}u_{1,m},~\right.\\ \left. u_{1,m}x\right \}$. $V\left ( D_{2} \right ) =\left \{ x,~y,~u_{2,1},\cdots,~u_{n-1,2},\cdots,~u_{n-1,m-1},~u_{n-1,m},~u_{n-1,1}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-2,2}u_{n-1,2},\cdots,~u_{n-1,m-1}u_{n-1,m},~u_{n-1,m}\right.\\ \left.u_{n-1,1},~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $2=\min\{\delta^+(D), \delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overrightarrow{C}_m)\geq 2$.~This completes the proof. \end{pf} \begin{pro}\label{p2} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{C} _{m} )=3.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure32.png \caption{Cartesian product of a dicycle and the complete biorientation of a cycle.} \label{figure3.2} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \} $,~we just consider the case that $ x $,~$ y$ are neither in the same $ \overrightarrow{C}_n\left ( u_{i} \right ) $ nor in the same $ \overleftrightarrow{C } _{m} \left ( v_{j} \right ) $ for some $ 1\leq i \leq n $,~$ 1\leq j \leq m $,~since the arguments for remaining cases are similar.~Without loss of generality,~we may assume that $x=u_{1,1}$,~$y=u_{2,2}$.~We can get three arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{C } _{m} $,~say $ D_{1} $,~$ D_{2} $ and $ D_{3}$ (as shown in Fig. \ref{figure3.2}) such that $ V\left ( D_{1} \right ) =\left \{ x,~y,\cdots,~u_{n-1,2},~u_{n,2},~u_{1,2}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $V\left ( D_{2} \right ) =\left \{ x,~y,~u_{1,m},~u_{2,m},~u_{2,m-1},\cdots,~u_{n-1,m-1},~u_{n,m}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{1,m},~u_{1,m}u_{2,m},~ u_{2,m}u_{2,m-1},\cdots,~u_{2,3}y,~yu_{2,3},\cdots,~u_{2,m-1}\right.\\ \left.u_{2,m},\cdots,~u_{n-1,m}u_{n,m},~u_{n,m}u_{1,m},~u_{1,m}x\right \} $. $V\left ( D_{3} \right ) =\left \{ x,~y,~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \}$ and $ A\left ( D_{3} \right ) =\left \{ xu_{2,1},~u_{2,1}y,~ yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $3=\min\{\delta^+(D),~\delta^-(D)\}~\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{C }_{m} )\geq 3$.~This completes the proof. 
\end{pf} \begin{pro}\label{p4} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{T} _{m} )=2.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure34.png} \caption{Cartesian product of a dicycle and the complete biorientation of a tree.} \label{figure3.4} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \} $.~We only consider the case where $x$,~$ y$ are neither in the same $ \overrightarrow{C}_n\left ( u_{i} \right ) $ nor in the same $ \overleftrightarrow{T } _{m} \left ( v_{j} \right ) $ for some $ 1\leq i \leq n $,~$ 1\leq j \leq m $,~as the arguments for the remaining cases are similar.~Without loss of generality,~we may assume that $ x=u_{1,1}$,~$y=u_{2,2}$.~We can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{T } _{m}$,~say $ D_{1} $ and $D_{2}$ (as shown in Fig. \ref{figure3.4}) such that $ V\left ( D_{1} \right ) =\left \{ x,~y,\cdots,~u_{n-1,2},~u_{n,2},~u_{1,2},~u_{2,1}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $V\left ( D_{2} \right ) =\left \{x,~y,~u_{1,2},~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{1,2},~u_{1,2}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \} $. Then we have $2=\min\{\delta^+(D),~\delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{T } _{m} )\geq 2$, where $D=\overrightarrow{C}_n \Box \overleftrightarrow{T}_{m}$.~This completes the proof. \end{pf} \begin{pro}\label{p7} We have $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{K} _{m} )=m.$ \end{pro} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure37.png} \caption{Cartesian product of a dicycle and the complete biorientation of $K_2$.} \label{figure3.7} \end{figure} \begin{figure}[htbp] \small \centering \includegraphics[width=9cm]{figure371.png} \caption{Cartesian product of a dicycle and the complete biorientation of $K_m$.} \label{figure3.7.1} \end{figure} \begin{pf}\, Let $S=\left \{ x, y \right \} $.~We only consider the case where $ x $,~$ y$ are neither in the same $ \overrightarrow{C}_n\left ( u_{i} \right ) $ nor in the same $ \overleftrightarrow{K } _{m} \left ( v_{j} \right )$ for some $ 1\leq i \leq n $,~$ 1\leq j \leq m $,~as the arguments for the remaining cases are similar.~Without loss of generality,~we may assume that $ x=u_{1,1}$,~$y=u_{2,2}$. We first show that $ \lambda_2(\overrightarrow{C}_n\Box \overleftrightarrow{K} _{2} )=2 $.~When $m=2$,~we can get two arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{2} $,~say $ D_{1} $ and $ D_{2} $~(as shown in Fig. \ref{figure3.7}) satisfying: $ V\left ( D_{1} \right ) =\left \{x,~y,~u_{1,2},~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \}$. $V\left ( D_{2} \right ) =\left \{x,~y,~u_{2,1},~u_{1,2},\cdots,~u_{n-1,2},~u_{n,2}\right \}$ and $ A\left ( D_{2} \right ) =\left \{ xu_{2,1},~u_{2,1}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \} $. The proposition is now proved by induction on $m$.
Suppose that when $m=k$,~we have $\lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{k} )=k$.~We shall show that $\lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{k+1} )=k+1 $ when $m=k+1$. By the induction hypothesis, we can get $k$ arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{k} $,~say $ D_{1} $,~$ D_{2} $,$\cdots$,~$ D_{k} $.~When $ m=k+1$,~that is,~when the degree of each vertex of $\overleftrightarrow{K } _{k}$ increases by 2,~we can get $k+1$ arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{k+1} $,~say $ D_{1} $,~$ D_{2} $,$\cdots$,~$ D_{k} ,~ D_{k+1} $.~By the symmetry of the complete digraph,~the same conclusion is drawn in the two cases where $x$,~$y$ both belong to $\overleftrightarrow{K} _{k+1}$,~and where $x$,~$y$ belong to $\overleftrightarrow{K} _{k}$ and $\overleftrightarrow{K} _{k+1}$,~respectively.~From the above argument, the proposition holds for any positive integer $m$: we can get $m$ arc-disjoint $S$-strong subgraphs in $\overrightarrow{C}_n\Box \overleftrightarrow{K } _{m} $,~say $ D_{1},~D_{2},\cdots,~D_{j}~(2< j\le m),\cdots,~D_{m-1},~D_{m} $ (as shown in Fig. \ref{figure3.7.1}) such that $ V\left ( D_{1} \right ) =\left \{x,~y,~u_{1,2},\cdots,~u_{n-1,2},~u_{n,2}\right \} $ and $A\left ( D_{1} \right )=\left \{ xu_{1,2},~u_{1,2}y,\cdots,~u_{n-1,2}u_{n,2},~u_{n,2}u_{1,2},~u_{1,2}x\right \}$. $ V\left ( D_{2} \right ) =\left \{x,~y,~u_{2,1},\cdots,~u_{n-1,1},~u_{n,1}\right \} $ and $A\left ( D_{2} \right )=\left \{ xu_{2,1},~u_{2,1}y,~yu_{2,1},\cdots,~u_{n-1,1}u_{n,1},~u_{n,1}x\right \}$. $V\left ( D_{j} \right ) =\left \{x,~y,~u_{1,j},~u_{2,j},\cdots,~u_{n-1,j},~u_{n,j}\right \}$ and $ A\left (D_{j} \right ) =\left \{ xu_{1,j},~u_{1,j}u_{2,j},~u_{2,j}y,~yu_{2,j},\cdots,~u_{n-1,j}u_{n,j},~u_{n,j}u_{1,j},~u_{1,j} x\right \} $. Then we have $m=\min\{\delta^+(D), \delta^-(D)\}\geq \lambda_2(\overrightarrow{C}_n \Box \overleftrightarrow{K } _{m} )\geq m$, where $D=\overrightarrow{C}_n \Box \overleftrightarrow{K}_{m}$.~This completes the proof. \end{pf} Since $\lambda_2(\overleftrightarrow{Q})=\lambda(Q)$ for any undirected graph $Q$, using the definition of the Cartesian product, we have \begin{equation}\label{eq2} \lambda_2(\overleftrightarrow{G}\Box \overleftrightarrow{H})= \lambda(G\Box H) \end{equation} for undirected graphs $G$ and $H.$ Propositions \ref{p1}-\ref{p7} and Formulas (\ref{eq2}) and (\ref{eq3}) imply the following theorem. Indeed, the entries in the first row and the first column of Table 1 follow from Propositions \ref{p1}-\ref{p7} (the latter by the commutativity of the Cartesian product), and all other entries can be easily computed using (\ref{eq2}) and (\ref{eq3}). \begin{thm}\label{thmd1} The following table for the strong subgraph 2-arc-connectivity of Cartesian products of some digraph classes holds: \begin{figure}[htbp] {\tiny \begin{center} \renewcommand\arraystretch{3.5} \begin{tabular}{|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|p{1.5cm}|} \hline & $\overrightarrow{C}_m$ & $\overleftrightarrow{C}_m$ & $\overleftrightarrow{T}_m$ & $\overleftrightarrow{K}_m$ \\\hline $\overrightarrow{C}_n$ & $2$ & $3$ & $2$ & $m$ \\\hline $\overleftrightarrow{C}_n$ & $3$ & $4$ & $3$ & $m+1$ \\\hline $\overleftrightarrow{T}_n$ & $2$ & $3$ & $2$ & $m$ \\\hline $\overleftrightarrow{K}_n$ & $n$ & $n+1$ & $n$ & $n+m-2$ \\\hline \end{tabular} \vspace*{20pt} \centerline{\normalsize Table $1$. Exact values of $\lambda_2$ for Cartesian products of some digraph classes.} \end{center}} \end{figure} \end{thm} \vskip 0.5cm
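As a quick computational illustration of Proposition \ref{p1} (our own sanity check, not part of the proofs above; it assumes Python with the \texttt{networkx} package), the following sketch builds $\overrightarrow{C}_n\Box \overrightarrow{C}_m$ for concrete $n$ and $m$, takes $x=u_{1,1}$ and $y=u_{2,2}$, and checks that two explicit closed trails through $x$ and $y$ (a variant of the subgraphs $D_1$ and $D_2$ from the proof) are arc-disjoint $\{x,y\}$-strong subgraphs, confirming the lower bound $\lambda_2(\overrightarrow{C}_n\Box \overrightarrow{C}_m)\geq 2$.
\begin{verbatim}
# Sanity check for Proposition p1: exhibit two arc-disjoint {x, y}-strong
# subgraphs in the Cartesian product of two directed cycles.
import networkx as nx

n, m = 4, 5                                    # illustrative sizes (n, m >= 3)
Cn = nx.DiGraph((i, (i + 1) % n) for i in range(n))
Cm = nx.DiGraph((j, (j + 1) % m) for j in range(m))
G = nx.cartesian_product(Cn, Cm)               # vertex (i, j) plays u_{i+1,j+1}

x, y = (0, 0), (1, 1)                          # x = u_{1,1}, y = u_{2,2}

# Trail 1: x -> (0,1) -> y, around row 1, then down column 0 back to x.
A1 = [((0, 0), (0, 1)), ((0, 1), (1, 1))]
A1 += [((1, j), (1, (j + 1) % m)) for j in range(1, m)]
A1 += [((i, 0), ((i + 1) % n, 0)) for i in range(1, n)]

# Trail 2: x -> (1,0) -> y, down column 1, then around row 0 back to x.
A2 = [((0, 0), (1, 0)), ((1, 0), (1, 1))]
A2 += [((i, 1), ((i + 1) % n, 1)) for i in range(1, n)]
A2 += [((0, j), (0, (j + 1) % m)) for j in range(1, m)]

assert set(A1).isdisjoint(A2)                  # the trails share no arc
for A in (A1, A2):
    assert all(G.has_edge(u, v) for u, v in A) # all arcs exist in the product
    D = G.edge_subgraph(A)                     # the trail induces a subgraph...
    assert nx.is_strongly_connected(D)         # ...that is strong
    assert x in D and y in D                   # and contains both x and y
\end{verbatim}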
\section{Introduction} \label{sec:intro} A common procedure in Statistics and Machine Learning when dealing with data sets of thousands of variables is to {\it sort} all these variables according to some measure that identifies how important they are to predict and/or retrospectively understand a certain target variable (or equivalently an indicator that tells to which group or {\it population} each sample belongs). Classical examples of such a procedure are Student's $t$-test and Wilcoxon's rank-sum $u$-test~\cite{dems,fay,utest}, whose statistics are often used to sort variables into some order of importance. Arguably, they represent the most commonly used methods for this problem in biomedical applications, in part because of their prompt availability and ease of use. A typical scenario is to have gene expression data of cancer patients, and a {\it class} variable that identifies whether the patient {\it relapsed} or not (in other words, whether the cancer came back after treatment/surgery or not). The ability to sort variables in some meaningful order has a range of applications in many fields, and can also be seen as a means of performing {\it feature selection}~\cite{mitchell,witten}. This work describes a simple yet effective approach, named {\it Quor}, to sort variables according to the order relationship of arbitrary quantiles of each variable's distribution under different groups. The method computes a value that indicates the confidence that such quantiles of these distributions are sorted in some pre-defined way. As an example, suppose we have two populations and are interested in the median values of a variable representing the level of expression of a particular gene. The goal is to obtain the confidence that ``the median expression of that gene in the first population is strictly smaller (or greater) than the median expression of the same gene in the second population''. The comparison of medians might suggest that the gene is under-expressed, over-expressed, or simply that there is no significant difference of expression between the populations. As with other methods of similar purpose, Quor can be used as a first aid for the later application of other sophisticated statistical or biological procedures: its simplicity may avoid expensive and time-consuming analyses of uninteresting variables, or at least may help to prioritize further analyses in order to save valuable time and biological materials/machinery. Methods for this problem may rely on possibly unrealistic hypotheses, such as normality of samples, asymptotic behavior of some statistics, reasonably large-sized samples, approximate computations, comparisons of only two populations, the need for an equal number of samples in the groups (across distinct variables), the necessity of multiple-test corrections (to avoid encountering significant results {\it by chance}), no evidence for the null hypothesis, among others~\cite{raftery}. These issues might be aggravated by having data sets with only a few patients and a large number of genes. On the other hand, Quor is nonparametric and assumes nothing but independence of samples. It can deal with different numbers of samples and missing data, and yet can properly compare these variables; all computations are performed exactly, without any asymptotic assumption. Moreover, its computations can be carried out in quadratic time in the number of samples, which is (roughly) as fast as other methods based on $t$-tests and $u$-tests.
Other approaches for median testing do exist (we refer to Chapter 6 of~\cite{Gibbons2003} for some examples), but they either suffer from at least one of the issues mentioned above, or are not able to order arbitrary quantiles of (arbitrarily) many populations. This paper is organized as follows. Section~\ref{sec2} describes the approach in detail and presents its computational complexity. Sections~\ref{sec4} and~\ref{sec5} analyze real biomedical data sets and compare the empirical performance of the methods. Section~\ref{sec6} concludes the paper and discusses future work. \section{Unveiling Quor} \label{sec2} We describe here the details of Quor and present an efficient algorithm for its computation. The method is built on the ideas of confidence statements developed long ago~\cite{basu1981,kiefer1977} and revisited more recently~\cite{zellner2004}. The proposed method uses nonparametric confidence intervals for quantiles based on the binomial distribution \cite{degroot1975}. Its goal is to compute a confidence value indicating how much one believes that quantile parameters of different populations/groups are ordered among themselves. We do not assume any particular quantile nor a specific number of populations, even though the case of comparing medians of two populations is arguably the most common scenario for its application. The problem is defined as follows. Let $Q_1,\ldots,Q_n$ represent the quantiles at arbitrary percentages $q_1,\ldots,q_n$, respectively, for $n$ populations, and let ${\bf x}_i=(x_i^{(1)},\ldots,x_i^{(m_i)})$, for $i=1,\ldots,n$, be data samples from those populations, where the sample from population $i$ has size $m_i$. The goal is to produce a confidence value in $[0,1]$ that indicates how much we believe in the statement $Q_{i_1}<Q_{i_2}<\ldots <Q_{i_n}$, where $(i_1,\ldots,i_n)$ is a permutation of $(1,\ldots,n)$. As in other nonparametric methods \cite{wasserman2006}, the order and the values of the tuples of order statistics will be the sole observations needed. Consider an event $A$ related to a random variable $X_i$ whose quantile at $q_i$ is $Q_{i}$. $\Pr(A\mid Q_{i})$ indicates the probability of the event $A$ with $Q_{i}$ known. The quantile $Q_{i}$ of $X_i$ is a population parameter that satisfies the following inequality: \begin{equation} \Pr(\{X_i \leq Q_{i}\} \mid Q_{i})\geq q_i, \end{equation} \noindent while in the continuous case, this inequality is tight. Let $(X_i^{(1)},X_i^{(2)},\ldots,X_i^{(m_i)})$ be the ordered vector of $m_i$ independent and identically distributed continuous random variables. Since the probability of one observation being smaller than the population quantile $Q_{i}$ is $q_i$, it is straightforward that for the $j$th $(j=1,2,\ldots,m_i)$ order statistic, $X_i^{(j)}$, the following probability expression holds: \begin{equation} \Pr(\{X_i^{(j)} \leq Q_{i}\} \mid Q_{i}) = \sum\limits_{k=j}^{m_i} \binom{m_i}{k} q_i^k(1-q_i)^{m_i-k}, \label{e1} \end{equation} \noindent and symmetrically \begin{equation} \Pr(\{X_i^{(j)} \geq Q_{i}\} \mid Q_{i}) = \sum\limits_{k=0}^{j-1} \binom{m_i}{k} q_i^k(1-q_i)^{m_i-k}. \label{e2} \end{equation} \noindent These equalities come from probabilities obtained with a binomial distribution with sample size $m_i$ and probability of success $q_i$.
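In practice, these two binomial tails can be evaluated with standard statistical routines. The following minimal Python sketch (ours, not part of the method's description; it assumes the \texttt{scipy} package) computes Equations~\eqref{e1} and~\eqref{e2} for one order statistic.
\begin{verbatim}
# Tails for the j-th order statistic out of m_i samples at percentage q_i:
#   Pr(X_i^(j) <= Q_i) = sum_{k=j}^{m_i}  C(m_i,k) q_i^k (1-q_i)^(m_i-k)
#   Pr(X_i^(j) >= Q_i) = sum_{k=0}^{j-1}  C(m_i,k) q_i^k (1-q_i)^(m_i-k)
from scipy.stats import binom

m_i, q_i, j = 20, 0.5, 6            # e.g., 6th of 20 samples, median quantile
p_le = binom.sf(j - 1, m_i, q_i)    # upper binomial tail: Pr(X_i^(j) <= Q_i)
p_ge = binom.cdf(j - 1, m_i, q_i)   # lower binomial tail: Pr(X_i^(j) >= Q_i)
assert abs(p_le + p_ge - 1.0) < 1e-12   # the two tails sum to one
\end{verbatim}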
Consider a sequence of pairs of order statistics $((X_1^{(j_1)};X_1^{(j'_1)})$, $(X_2^{(j_2)};X_2^{(j'_2)})$, $\ldots$, $(X_n^{(j_n)};X_n^{(j'_n)}))$, where the $i$th pair is chosen from the $i$th group (with abuse of notation, let the first and last groups have $X_1^{(j_1)}=-\infty$ and $X_n^{(j'_n)}=\infty$, respectively), and consider the event $\text{E}$ as follows. \[ \text{E} = \bigcap_{i=1}^n \{X_i^{(j_i)} \leq Q_{i}\leq X_i^{(j'_i)} \} =\bigcap_{i=1}^n \left(\{X_i^{(j'_i)} \geq Q_{i}\} \setminus \{ X_i^{(j_i)} > Q_{i}\}\right). \] \noindent Given the independence among the samples, one can compute $\Pr(\text{E}\mid Q_{1},\ldots,Q_{n})=$ \begin{equation} \prod_{i=1}^n \max\left\{0;\left[ \Pr(\{X_i^{(j'_i)} \geq Q_{i}\} \mid Q_{i})-\left(1- \Pr(\{X_i^{(j_i)} \leq Q_{i}\} \mid Q_{i})\right) \right]\right\}, \label{eq:confidence} \end{equation} using the product of binomial probabilities from Equations~\eqref{e1} and~\eqref{e2}. Let ${\bf x}_i$ be sorted, for every $i$. After these samples are observed, the only unknown quantities of interest are the quantiles $Q_{i}$ of the populations being studied. By replacing random variables with their observations, we create the statement $\text{e}:$ \begin{equation} \text{e}=\bigcap_{i=1}^n \{x_i^{(j_i)} \leq Q_{i}\leq x_i^{(j'_i)}\}, \label{eq:e} \end{equation} \noindent which has confidence given by Expression~\eqref{eq:confidence}. Note that we only need the orders $j_i$ and $j'_i$ for every $i$, and not the actual observed values of $X_i$, for computing with Expression~\eqref{eq:confidence}; that is, the confidence of $\text{e}$ could be equivalently defined as the confidence of $((0;j'_1);(j_2;j'_2);\ldots;(j_n;1+m_n))$, where we conveniently define that, for every $i=1,\ldots,n$, $x_i^{(j)}=-\infty$ for every $j<1$ and $x_i^{(j)}=\infty$ for every $j>m_i$. Hence, we might say that $\text{e}$ is represented by the list $((0;j'_1);(j_2;j'_2);\ldots;(j_n;1+m_n))$. At this point after sampling, we call the value of Expression~\eqref{eq:confidence} a confidence value instead of a probability value, in order to keep terminology precise. This confidence regards the unknown quantities of interest, in our case the parameters $Q_{i}$. The idea is to look for statements $\text{e}$ that are able to tell us something about $Q_{i}$. Without loss of generality, assume that we take a list of pairs of order statistics $((0;j'_1);(j_2;j'_2);\ldots;(j_n;1+m_n))$ such that, in the observed sets, we have $x_i^{(j_i)} <x_i^{(j'_i)}<x_{i+1}^{(j_{i+1})}$, for every $1\leq i <n$ (we do not lose generality because in case we want to sort these observations in some other order, we simply rename the variables). With this fact and a quick analysis of Expression~\eqref{eq:e}, we derive \begin{equation} \left(\forall_{1\leq i <n}:~ x_i^{(j_i)} <x_i^{(j'_i)}<x_{i+1}^{(j_{i+1})}\right) \land \text{e} \Rightarrow \bigcap_{i=1}^{n-1} \{Q_{i} < Q_{{i+1}}\}, \label{eq:xx} \end{equation} \noindent that is, the assertion in the left-hand side of Expression~\eqref{eq:xx} implies an order for the quantiles, so its confidence is a lower bound for the confidence of the right-hand side. Because we know how to compute the confidence value of $\text{e}$ through Expression~\eqref{eq:confidence}, and any time the assertion $x_i^{(j_i)}<x_i^{(j'_i)}<x_{i+1}^{(j_{i+1})}$ is false for some $i$ the result of Expression~\eqref{eq:confidence} becomes zero, we have the following relation. \begin{equation} \Co(\text{e}) \leq \Co\left(\bigcap_{i=1}^{n-1} \{Q_{i} < Q_{{i+1}}\}\right).
\label{eq:ac} \end{equation} In order to compute the best possible lower bound for the confidence value in the right-hand side of Expression~\eqref{eq:ac}, we run over all tuples $((0;j'_1);(j_2;j'_2);\ldots;(j_n;1+m_n))$ of orders that comply with the linear order \[ x_{1}^{(0)} <x_{1}^{(j'_{1})} <x_{2}^{(j_{2})}<x_{2}^{(j'_{2})}< \ldots < x_{n}^{(j_{n})}< x_{n}^{(1+m_n)} \] \noindent and keep the maximum confidence value of the $\text{e}$ statements built from these tuples as our estimation for the confidence that $Q_{1}<Q_{2}<\ldots <Q_{n}$ holds true. Because we want to maximize such confidence, we will always choose $j_i$ such that $x_i^{(j_i)}$ is the smallest possible value greater than $x_{i-1}^{(j'_{i-1})}$; that is, the value of $j_i$, in order to maximize the confidence, is uniquely computable from the value of $j'_{i-1}$ (this is because there is no reason to leave a larger gap between them if a smaller gap is possible, as smaller gaps will yield higher confidence values). Hence, we can simplify the notation and compute the confidence of $\text{e}$ by $\Co(j'_1,\ldots,j'_{n-1})$, as these values are enough to find all $j_2,\ldots,j_n$ (that lead to the highest possible confidence) and then to use Expression~\eqref{eq:confidence} to obtain the confidence value. For ease of computation, we define the confidence using a sum of logarithms. \begin{eqnarray} \Co_i(j'_{i-1},j'_{i}) = \log\left(\max\left\{0;\Pr(\{X_i^{(j'_i)} \geq Q_{i}\} \mid Q_{i})+\Pr(\{X_i^{(j_i)} \leq Q_{i}\} \mid Q_{i}) - 1\right\}\right), \label{eq:cok} \end{eqnarray} \noindent where $j_i$ is obtained from $j'_{i-1}$ (by looking at the data) as just explained, and $\log 0$ is $-\infty$. Note that the value of $\Co_i(j'_{i-1},j'_{i})$ will equal $-\infty$ whenever the values $j'_{i-1},j'_{i}$ do not induce a ``valid'' order in the observations, that is, whenever there is no element $x_i^{(j_i)}$ strictly inside $[x_{i-1}^{(j'_{i-1})},x_i^{(j'_i)}]$. We are interested in $\exp(\sum_{i=1}^n\Co_{i}(j'_{i-1},j'_{i}))$, where $j'_0=0$ and $j'_n=1+m_n$, which is our statistical confidence, based on the observed samples, that the quantiles (of the populations) are ordered according to the permutation $(1,\ldots,n)$. This procedure is presented in Algorithm~\ref{algo1}, which is explained in what follows. We recall that if one wants to compute the confidence of some other order, for instance $Q_{i_1}<Q_{i_2}<\ldots <Q_{i_n}$, where $(i_1,\ldots,i_n)$ is a permutation of $(1,\ldots,n)$, then one simply needs to rename the variables accordingly before invoking the algorithm, while if one wants to check every possible order of the quantiles, there would be $n!$ permutations to check, which is fast for small values of $n$ (the vast majority of biomedical studies have $n\leq 6$ groups). \begin{algorithm} \caption{{\bf (Quor Core)} Finding the confidence value of a statement about the ordering of a quantile parameter among $n$ populations.} \label{algo1} \begin{algorithmic} \item[\textbf{Input}] a data set with samples ${\bf x}_i=(x_i^{(1)},\ldots,x_i^{(m_i)})$, for $i=1,\ldots,n$, and the quantiles of interest $q_i$, for each $i$. \item[\textbf{Output}] the log-confidence value that the statement $Q_{1}<Q_{2}<\ldots <Q_{n}$ holds. \item[1] For every $i$ in $1,\ldots,n$, sort ${\bf x}_i$.
\item[2] Pre-compute the values that appear in Equations~\eqref{e1} and \eqref{e2} by making a cache to be used in the computation of Equation~\eqref{eq:cok}, for every $i=1,\ldots,n$: $\text{cache}(i,0)\leftarrow (1-q_i)^{m_i}$ and for $j=1,\ldots,m_i$: \[ \text{cache}(i,j) \leftarrow \text{cache}(i,j-1) + \binom{m_i}{j} q_i^j(1-q_i)^{m_i-j}. \] \item[3] Let $D_i$ be a vector of size $m_i$ (defined from $1$ to $m_i$), for each $i=1,\ldots,n-1$. Initialize $D_1[\ell'_1]\leftarrow \Co_1(0,\ell'_{1})$, for every $1\leq\ell'_1\leq m_1$. If $n=2$, then go to line 5. \item[4] For $i=2,\ldots,n-1$, do: \item[\quad ~ ~] For $\ell'_i=1,\ldots,m_i$, do:\\ \[ \quad \quad ~ D_i[\ell'_i] = \max_{1\leq \ell'_{i-1}\leq m_{i-1}} (D_{i-1}[\ell'_{i-1}] + \Co_i(\ell'_{i-1},\ell'_{i})). \]\\ \item[5] Return $\max_{1\leq \ell'_{n-1}\leq m_{n-1}} (D_{n-1}[\ell'_{n-1}] +\Co_n(\ell'_{n-1},1+m_n))$. \end{algorithmic} \end{algorithm} \begin{myTheo} Algorithm~\ref{algo1} uses space $O(n + m)$, where $m=\sum_{i=1}^n m_i$, and takes time $O(m\log m)$ if $n=2$, and $O(m\max_i m_i)$ otherwise. \end{myTheo} \noindent {\it Proof.} Step 1 needs to sort each of the samples ${\bf x}_i$, which takes $O(m_i\log m_i)$ for each $i$, so in total $O(m\log\max_i m_i)$. Step 2 pre-computes the partial binomial sums. By doing it in a proper order, this can be accomplished in constant time for each $\text{cache}(i,j)$, and the loop will execute $O(\sum_i m_i)=O(m)$ times. Step 3 initializes the data structures $D_i$ for the dynamic programming. Summed altogether, they spend $O(m)$ space and $O(m)$ time to initialize. Step 4 performs the dynamic programming. The number of times both loops are executed altogether is $O(m)$. The internal maximization takes time $O(\max_i m_i)$, so this step takes time $O(m\max_i m_i)$ in the worst case (and it is not run for $n=2$). Finally, Step 5 takes time $O(m)$. $\blacksquare$ Algorithm~\ref{algo1} is very fast. First, many common practical cases have $n=2$. In this case, Step 4 of the algorithm is skipped and the whole algorithm runs in linear time (except for the sorting of Step 1). Second, this worst-case time complexity is pessimistic, and the algorithm will usually run in sub-quadratic time. Third, $m$ tends to be a reasonably small number (for example, biomedical data sets hardly contain more than one hundred patients). The correctness of Algorithm~\ref{algo1} comes from its simple dynamic programming formulation. At each main loop $i$ of Step 4, we have already solved the problem up to $i-1$ groups in the best possible way, for each one of the positions one could choose for the last placeholder $\ell'_{i-1}$, which we saved in $D_{i-1}[\ell'_{i-1}]$. The choices of the yet-to-decide values $\ell'_i,\ldots,\ell'_n$ do not depend on the positions of the placeholders before $\ell'_{i-1}$, only on $\ell'_{i-1}$ itself. Thus, the dynamic programming does the job of incrementally building new solutions. This step is inspired by other dynamic programming algorithms, such as those for $k$-means in one-dimensional vectors~\cite{kmeans}, yet it differs because of the nature of the confidence function to be optimized. It is worth noting that some computations in Algorithm~\ref{algo1} may suffer from numerical problems. We have implemented it using incomplete Beta functions and arbitrary precision when needed; this is usually the case if $m$ is reasonably large (greater than a few hundred).
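For concreteness, the following Python sketch is our own rendering of Algorithm~\ref{algo1} (it is not the official implementation mentioned in Section~\ref{sec6}; it assumes \texttt{numpy} and \texttt{scipy}, replaces the explicit cache of Step 2 by calls to the binomial distribution function, and omits the arbitrary-precision fallback, so it should only be trusted for moderate $m$).
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def quor_log_confidence(samples, quantiles):
    """Log-confidence that Q_1 < Q_2 < ... < Q_n (Algorithm 1)."""
    n = len(samples)
    xs = [np.sort(np.asarray(s, dtype=float)) for s in samples]   # Step 1
    ms = [len(x) for x in xs]

    def B(i, j):
        # Partial binomial sum of Step 2: sum_{k=0}^{j-1} of the binomial
        # terms with parameters (m_i, q_i); B(i, 0) = 0 and B(i, m_i+1) = 1.
        return binom.cdf(j - 1, ms[i], quantiles[i])

    def co(i, bound, jp):
        # Co_i(j'_{i-1}, j'_i) of the text: 'bound' is the observed value
        # x_{i-1}^{(j'_{i-1})} (-inf for the first group), and j_i is the
        # smallest 1-based order with x_i^{(j_i)} strictly above 'bound'.
        ji = 0 if bound == -np.inf else \
            int(np.searchsorted(xs[i], bound, side="right")) + 1
        mass = B(i, jp) - B(i, ji)
        return np.log(mass) if mass > 0 else -np.inf

    # Step 3: D_1[l'] = Co_1(0, l'); -inf plays the placeholder x_1^{(0)}
    D = [co(0, -np.inf, jp) for jp in range(1, ms[0] + 1)]

    # Step 4: dynamic programming over the middle groups (skipped if n = 2)
    for i in range(1, n - 1):
        D = [max(D[l - 1] + co(i, xs[i - 1][l - 1], jp)
                 for l in range(1, ms[i - 1] + 1))
             for jp in range(1, ms[i] + 1)]

    # Step 5: the upper placeholder of the last group is the order m_n + 1
    return max(D[l - 1] + co(n - 1, xs[n - 2][l - 1], ms[n - 1] + 1)
               for l in range(1, ms[n - 2] + 1))
\end{verbatim}
For instance, the confidence that the median of a first population lies strictly below that of a second one can then be estimated as follows.
\begin{verbatim}
rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, size=34)   # samples from population 1
x2 = rng.normal(0.7, 1.0, size=32)   # samples from population 2
conf = np.exp(quor_log_confidence([x1, x2], [0.5, 0.5]))
\end{verbatim}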
The confidence value obtained with Quor can be used in procedures that aim to optimize their utility functions, because it can provide information about differences in quantiles as well as similarities in quantiles (in the case that confidences are low in both directions). This is in contrast to usual hypothesis testing, where no evidence in favor of the null hypothesis can be obtained. A possible negative characteristic of Quor is that its computations may often lead to ties in the results, which is an intrinsic feature of exact computations with the binomial distribution. However, if computations are carried out with a clever implementation that avoids numerical issues, then ties should arguably be considered as effective ties, and other procedures/ideas could be devised to break them. For our current needs, ties have not constituted a problem, mainly because we are usually interested in the top confidence values, where ties seem to occur less often. \section{Schizophrenia data} \label{sec4} \begin{figure}[ht] \centering \subfloat[Transcript 204434.]{\includegraphics[width= 2.5in]{X204434_at.png}\label{f6a}} \subfloat[Transcript 209847 (7th highest Quor Confidence).]{\includegraphics[width= 2.5in]{X209847_at.png}\label{f6b}}\\ \subfloat[Transcript 215001.]{\includegraphics[width= 2.5in]{X215001_s_at.png}\label{f6c}} \subfloat[Transcript 215003 (highest Quor Confidence).]{\includegraphics[width= 2.5in]{X215003_at.png}\label{f6d}} \caption{Data of four transcripts from the Schizophrenia data set. In (a), Quor Conf=0.43, but $t$ and $u$ reject the null hypothesis at 0.002 and 0.02 (that is, they consider the difference between groups to be quite significant). In (b), $t$ cannot reject the null hypothesis, and in (c) neither $t$ nor $u$ can (in both cases Quor Conf is at least 0.95). In (d), both tests and Quor strongly agree (Quor Conf=0.99, $t$-test pvalue=8e-4, $u$-test pvalue=2e-4).}\label{f6} \end{figure} We show a practical scenario by analyzing the Schizophrenia data set from the Stanley Medical Research Institute's online genomics database~\cite{higgs2006,stanley2012}. The sample sizes are $m_1 = 34$ individuals in the control group and $m_2 = 34$ patients with schizophrenia. A total of $20992$ microarray probes were obtained. The usual goal in such an analysis is to find the most differentially expressed genes. For that purpose, we decided to evaluate the confidence of the statements $\{Q_1 < Q_2\}$ and $\{Q_1 > Q_2\}$, where $q_1=q_2=1/2$, that is, we compared the medians of the populations. This is performed for each gene in the data set. We argue that a difference in the populations' medians indicates that genes might be differentially expressed (at least it shows that the distributions are not equal). Not surprisingly, there are {\bf no} significant genes if we perform either the $t$-test or the $u$-test with multiple-test correction (we have used the Holm-Bonferroni correction). On the other hand, the Quor measure has a clear interpretation as the confidence of the difference in medians, so if one chooses genes with, for instance, confidence above 95\%, then there are 56 genes in this situation, of which 14 cases suggest that ill patients (group 2) have under-expressed genes ($\{Q_1 > Q_2\}$) and 42 cases suggest over-expressed genes ($\{Q_1 < Q_2\}$). \begin{table}[ht] \begin{center} \caption{Number of rejected and non-rejected genes among those with Quor confidence $\geq 95\%$ in the Schizophrenia data.
The first line is computed without multiple-test correction for the $t$ and $u$ tests. The second line applies the correction assuming that only 56 hypothesis tests are executed. If one corrects for all the genes, no significant gene is found by the $t$ or $u$ tests.} \label{table3} \begin{tabular}{c|cc|cc} \hline & \multicolumn{2}{c|}{$t$-test} & \multicolumn{2}{c}{$u$-test} \\ & Non-Reject & Reject & Non-Reject & Reject\\ \hline no correction & 18 & 38 & 1 & 55\\ with correction & 48 & 8 & 43 & 13\\ \hline \end{tabular} \end{center} \end{table} Because of these characteristics, one might see a single Quor confidence as a more conservative approach than the other tests, as it has fewer assumptions (for instance, all but one of the high-confidence genes from Quor also had $u$-test p-value $<5\%$), but this difference vanishes if multiple-test correction is performed. Table~\ref{table3} shows the number of null hypothesis (that is, no difference) rejections with the $t$-test and $u$-test assuming that only 56 tests were performed (exactly for those genes with high Quor confidence). This illustrates a hypothetical procedure where Quor is first executed, followed by the hypothesis testing (we show this view because otherwise those tests would not be able to reject the null hypothesis for any gene). In the table we see that the $t$ and $u$ tests turn out to be very conservative because of multiple-test correction, and the prior use of Quor can alleviate this situation and consequently reduce an excessive amount of Type II errors. \section{Evaluation as Feature Selection} \label{sec5} In this section we apply different hypothesis testing methods from the literature and Quor to the task of feature selection, with the ultimate goal of building a classifier with a subset of all variables. We do not compare these approaches with yet other methods for feature selection, for example those which consider correlations between variables and information measures, because they have very different goals; ours is to compare methods with a similar purpose, namely the $t$-test, $u$-test, $ks$-test, and Quor itself. For that purpose, we try to predict a yet unseen class/group given the observations of the model covariates. In each data set, the class has a particular meaning, and the numbers of samples and variables vary. Table~\ref{t0} shows the main characteristics of the data sets with which we work. Please refer to the corresponding citations for additional information on the data~\cite{lymphomadata,colondata,leukdata,higgs2006,ovardata,prostatedata,breastdata,lungdata}. These data were obtained from internet repositories~\cite{repo2,seville,stanley2012}. \begin{table}[ht] \begin{center} \caption{Data set characteristics. $m$ is the number of patients, shown by the amount in each group. PS stands for {\it Proteomic spectra}, GEP for {\it Gene Expression Profiling}.} \label{t0} \begin{tabular}{c|cccc} \hline Data set & $m$ & \# Feat. & Type & Class \\ \hline Prostate C.& 8+13 & 12600 & GEP & Relapse/Not \\ Schizophr. & 34+32 & 20992 & GEP & Disease/Not\\ Lung C.& 24+15 & 2880 & GEP & Relapse/Not \\ Breast C. & 12+7 & 24481 & GEP & Relapse/Not\\ Colon C. & 22+40 & 2000 & GEP & Disease/Not\\ Lymphoma & 22+23 & 4026 & GEP & Dis. Subtype\\ Leukemia & 27+11 & 7129 & GEP & Dis. Subtype\\ Ovarian C.
& 162+91 & 15154 & PS & Disease/Not\\ \hline \end{tabular} \end{center} \end{table} The idea is to fix the classifier while varying the feature selection approach among Quor (median as the chosen quantile), the $t$-test, the $u$-test and the $ks$-test (Kolmogorov-Smirnov test~\cite{kolm}). To avoid a result that is specific to a single classifier, we perform the experiment with two very distinct classifiers: a Bayesian network and a C4.5 decision tree (both inferred from data), implemented in the Weka package~\cite{weka}. For each data set, we run a five-fold cross-validation procedure repeated 20 times (so that the test set is never available during training; five folds are chosen because some of the data sets contain very few patients). Results are shown in Tables~\ref{tableBA} and~\ref{tableJ48} for the Bayesian network and the decision tree, respectively. Numbers represent average accuracy over $5\cdot 20=100$ runs. Quor, as a feature selection procedure, has different characteristics from the $t$-test, and each of them seems to perform better on distinct data sets; on the other hand, when compared to the $u$-test, Quor seems to show a mild improvement in accuracy. This might be explained by the greater proximity of ideas, in some sense, of the $u$-test and Quor, with the latter demonstrating a slightly better performance on these specific data sets. \begin{table}[ht] \begin{center} \caption{Average accuracy (over 100 runs) of a Bayesian network classifier using the 20 best ranked features for each selection method.} \label{tableBA} \begin{tabular}{c|cccc} \hline Data set & Quor & $t$-test & $u$-test & $ks$-test \\ \hline Prostate C.& {\bf 47.85} & 43.55 & 42.45 & 42.60 \\ Schizophrenia & 59.08 & 57.99 & 58.98 & {\bf 60.09} \\ Lung C.& 66.39 & 65.54 & 65.45 & {\bf 66.52} \\ Breast C. & 69.50 & {\bf 74.75} & 64.08 & 66.67 \\ Colon C. & {82.87} & 76.60 & {\bf 83.37} & {81.84} \\ Lymphoma & 91.33 & {\bf 92.44} & 92.33 & 91.67 \\ Leukemia & {\bf 96.23} & 91.54 & {96.07} & 92.48 \\ Ovarian C. & 97.41 & {\bf 98.00} & 97.23 & 97.35 \\ \hline Average & {\bf 76.02} & 75.52 & 75.56 & 75.55 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{Average accuracy (over 100 runs) of a C4.5 decision tree classifier using the 20 best ranked features for each selection method.} \label{tableJ48} \begin{tabular}{c|cccc} \hline Data set & Quor & $t$-test & $u$-test & $ks$-test \\ \hline Prostate C. & {40.45} & {\bf 40.70} & 39.55 & 39.90 \\ Schizophrenia & 55.91 & 55.24 & 56.81 & {\bf 56.87} \\ Lung C. & 60.02 & 59.07 & 63.09 & {\bf 64.27} \\ Breast C. & 50.67 & {\bf 59.50} & 51.67 & {58.33}\\ Colon C. & {\bf 83.65} & 77.40 & 80.57 & 81.01\\ Lymphoma & 76.89 & {\bf 77.44} & 77.11 & {\bf 77.44} \\ Leukemia & {\bf 87.66} & 84.70 & 86.55 & 86.11 \\ Ovarian C. & 96.68 & {\bf 97.21} & 96.66 & 96.58 \\ \hline Average & 68.27 & 68.23 & 68.74 & {\bf 69.76} \\ \hline \end{tabular} \end{center} \end{table} \section{Conclusions} \label{sec6} This study proposes a method called Quor to compute the confidence of a statement about the order of arbitrary quantiles of many populations. Its only assumption is independence of samples, which makes it applicable to many domains. Among Quor's properties, we highlight its nonparametric nature, the possibility of processing data sets with missing data and varying numbers of samples from each population (across multiple tests), the confidence interpretation that might avoid the need for multiple-test correction, and its efficient exact computations.
Quor will be made available under an open source license both as an easy-to-install R package and as a plugin for the Weka Data Mining package~\cite{weka}. Besides confidences on the order of quantiles of the populations, the package also computes confidence intervals based on order statistics (we refrained from detailing them here; see Chapter 7 of~\cite{david2003}). Our empirical analysis has indicated that Quor results are in line with the most used techniques for hypothesis testing, which are also often used to order genes according to their statistics/p-values. Quor also performs well as a feature selection procedure (when compared to hypothesis testing methods) in a benchmark of high-dimensional data sets. As future work, we will study the occurrence of ties in the results of Quor, and means to break such ties. While they have not been a major issue so far in our analyses, we consider it important to have a sound procedure to resolve them, especially to deal with discrete ordinal data. We are also pursuing applications of the method in recent data sets from the latest microarray technologies. As the number of covariates increases, Quor shall become ever more important because of its computational performance and minimal assumptions. \section*{Acknowledgements} This work was partially supported by the Brazilian agencies FAPESP, CNPq and CAPES. \bibliographystyle{plain}
\section{\bf Introduction} \label{sec:intro} The study of rational points on varieties over number fields is one of the fundamental questions in arithmetic geometry. For curves, we have a good understanding of the behaviour of the rational points, and this behaviour is governed by the genus of the curve. If the curve has genus $g\leq 1$, then the rational points on the curve become infinite after a finite extension of the base number field; in this case, we say the curve is arithmetically special. If the curve has genus $g\geq 2$, then a famous result of Faltings \cite{Faltings2} asserts that the rational points on the curve are finite over any number field, and in this case, we say the curve is arithmetically hyperbolic. It is natural to ask if a similar dichotomy holds for higher dimensional varieties. The deep and contrasting conjectures of Lang \cite[p.~161--162]{Lang} and Campana \cite[Section 9]{campana2004orbifolds} posit that such a relationship holds for certain classes of varieties. Lang's conjecture asserts that a variety of general type over a number field is pseudo-arithmetically hyperbolic (\cite[Definition 7.2]{JBook}), and Campana's conjecture claims that a special variety (\autoref{def:specialBogomolov}) over a number field is arithmetically special (\autoref{defn:arithmeticallyspecial}). There has been significant progress on both of these conjectures (see e.g., \cite{FaltingsLang1, FaltingsLang2, Vojta11,Vojta:IntegralPointsII} and \cite{HarrisTsch, HassetTschinkel:AbelianFibrations, bogomolov2000density} and the excellent book \cite{nicole2020arithmetic} for more references). Since understanding the arithmetic properties of such varieties is difficult, we seek other ways to characterize being of general type and being special. There exist (conjectural) complex analytic characterizations of these notions (see e.g., \cite{Lang, Kobayashi} and \cite{campana2004orbifolds}), and recently, there has been work on providing a (conjectural) non-Archimedean characterization of general type (see e.g., \cite{Cherry, CherryKoba, JVez, MorrowNonArchGGLV, sun:NonArchBorelhyperbolic}). \subsection*{Main contributions} In this work, we offer a non-Archimedean interpretation of Campana's notion of specialness with the desideratum that our interpretation is equivalent to other characterizations of specialness. Our definition is motivated by the (conjectural) complex analytic analogue of specialness, which goes by the name Brody special (\autoref{defn:Brodyspecial}) and states that a complex manifold is Brody special if there exists a dense entire map from $\mathbb{C}$ to it. For example, the complex analytification of an abelian variety is Brody special by the Riemann uniformization theorem. A natural first guess for a non-Archimedean analogue of this notion would be to ask for a dense analytic morphism from the non-Archimedean analytification of $\mathbb{A}^1$ or $\mathbb{G}_m$ into our analytic space. However, results of Cherry \cite{Cherry} tell us that this definition will not suffice. For example, the non-Archimedean analytification of an abelian variety with good reduction will not admit any non-constant morphism from these spaces, and since these analytic spaces should be special, we see that this naive notion does not suffice. Instead of testing specialness on non-Archimedean entire curves, we will test it on big analytic opens of the non-Archimedean analytifications of connected algebraic groups.
To set notation for our definition, let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero, let $\mathscr{X}$ be a $K$-analytic space (in the sense of Huber), and let $X^{\an}$ denote the non-Archimedean analytification of a variety $X$ over $K$. \begin{definition} We say that ${\mathscr X}$ is \cdef{$K$-analytically special} if there exists a connected, finite type algebraic group $G/K$, a dense open subset ${\mathscr U}\subset G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U}) \geq 2$, and an analytic morphism ${\mathscr U} \to {\mathscr X}$ which is Zariski dense. \end{definition} We highlight two points in the above definition. First, it may seem unnatural to test whether a $K$-analytic space is $K$-analytically special by asking for the existence of a Zariski dense analytic morphism from a big analytic open of an algebraic group; however, we provide explanations for these conditions in \autoref{exam:whybig} and \autoref{remark:whybiganalytic}. The second is that we do not require our $K$-analytic space to be compact. This allows us to avoid the language of logarithmic geometry and orbifolds when working with non-compact $K$-analytic spaces, which eliminates some technical difficulties. We prove several results to illustrate that our definition correctly captures Campana's notion of specialness (\autoref{def:specialBogomolov} and \autoref{def:specialfibration}). Our first result states that a $K$-analytically special $K$-analytic space cannot dominate a positive dimensional pseudo-$K$-analytically Brody hyperbolic $K$-analytic space, which can be viewed as a non-Archimedean version of \cite[Proposition 9.27]{campana2004orbifolds}. We refer the reader to Section \ref{sec:Brodyhyperbolic} for the definition and a discussion of the notion of pseudo-$K$-analytically Brody hyperbolic. \begin{thmx}\label{thm:nofibrationtoBrody} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero, and let ${\mathscr X},{\mathscr Y}$ be irreducible, reduced, separated $K$-analytic spaces. If ${\mathscr Y}$ is a $K$-analytically special $K$-analytic space and ${\mathscr X}$ is a positive dimensional pseudo-$K$-analytically Brody hyperbolic $K$-analytic space (\autoref{defn:KBrodyMod}), then there is no dominant morphism ${\mathscr Y} \to {\mathscr X}$. \end{thmx} Using \autoref{thm:nofibrationtoBrody}, we are able to identify several classes of $K$-analytically special $K$-analytic spaces in terms of their intrinsic geometry and certain abelian properties of their fundamental group. We recall that our guiding principle is that the notions of specialness and being of general type contrast each other. In \cite{MorrowNonArchGGLV}, the first author proved that a closed subvariety $X$ of a semi-abelian variety over $K$ is of logarithmic general type if and only if it is pseudo-$K$-analytically Brody hyperbolic. This result builds on the works \cite{Abram, Nogu}, where the authors show that the first condition is equivalent to the special locus of $X$ being properly contained in $X$, which essentially means that $X$ is not the translate of a semi-abelian subvariety. With our guiding principle in mind, we use results of Vojta and \autoref{thm:nofibrationtoBrody} to prove that $X$ being a translate of a semi-abelian subvariety is equivalent to $X^{\an}$ being $K$-analytically special. \begin{thmx}\label{thm:closedabelianspecial} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero.
Let $G/K$ be a semi-abelian variety, and let $X\subset G$ be a closed subvariety. Then, $X$ is the translate of a semi-abelian subvariety if and only if $X^{\an}$ is $K$-analytically special. \end{thmx} We point out that the analogous result for Brody special (i.e., for complex analytic varieties) holds due to work of \cite{Abram, Nogu, campana2004orbifolds}. We also note that results of Iitaka \cite[Theorems 2 \& 4]{IitakaLogForms} (see also Vojta's result in \cite[Theorem 5.15]{Vojta:IntegralPointsII}) and Campana \cite[Theorem 5.1]{campana2004orbifolds} imply that if $X$ is a translate of a semi-abelian variety, then $X$ is a special variety (see \autoref{def:specialfibrationlog} for the notion of special non-proper variety). The techniques of the proof of \autoref{thm:closedabelianspecial} can easily be adapted to the algebraic case, thus giving us the opposite implication of this result; see \autoref{thm:specialsemiabelian}. We record such a result as an immediate corollary to \autoref{thm:closedabelianspecial}. \begin{corox}\label{coro:specialabelian} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero. Let $G/K$ be a semi-abelian variety, and let $X\subset G$ be a closed subvariety. Then, $X$ is special if and only if $X^{\an}$ is $K$-analytically special. \end{corox} Our final main result is related to the abelianity conjecture of Campana \cite[Conjecture 7.1]{campana2004orbifolds}. This conjecture postulates that for a variety $X/\mathbb{C}$, being special is equivalent to the topological fundamental group of the complex analytification of $X$ being virtually abelian (i.e., it contains a finite index abelian subgroup). For results concerning this abelianity conjecture, we refer the reader to \cite[Theorem 7.8]{campana2004orbifolds}, \cite[Theorem 1.1]{yamanoi2010fundamental}, and \cite[Theorem 1.12]{javanpeykar2020albanese}. The development of fundamental groups for non-Archimedean analytic spaces has a rich history, which we briefly exposit. Berkovich $K$-analytic spaces possess nice topological properties. For example, an important result of Berkovich \cite[Corollary 9.6]{BerkovichUniversalCover} says that a smooth, connected Berkovich $K$-analytic space admits a topological universal covering which is a simply connected $K$-analytic space, and hence one can describe the topological fundamental group of a Berkovich $K$-analytic space via loops modulo homotopy. This illustrates that Berkovich $K$-analytic spaces have similarities to complex manifolds; however, we note that the topological fundamental group does not detect interesting arithmetic properties of a variety; indeed, the topological fundamental group of the analytification of any variety over $K$ with good reduction is trivial. While there are too few topological coverings, working directly with the \'etale fundamental group of a Berkovich $K$-analytic space is unwieldy (see e.g., \cite[Proposition 7.4]{DeJongFundamentalGroupNonArch}). To remedy this, Andr\'e \cite{andre-per} introduced the tempered fundamental group of a Berkovich $K$-analytic space, which sits between the topological and \'etale fundamental groups and provides us with the correct fundamental group to study non-Archimedean analogues of Campana's abelianity conjectures.
As a first step in this direction, we prove that the Berkovich analytification of a projective surface with negative Kodaira dimension is $K$-analytically special if and only if its tempered fundamental group is virtually abelian; see \cite[Theorem 3.1]{buzzard2000algebraic} for the analogous complex analytic statement. \begin{thmx}\label{thm:surfaceKinfinity} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero, and let $X/K$ be a smooth, projective surface with negative Kodaira dimension. Then the following are equivalent: \begin{enumerate} \item $X$ has irregularity $q(X) = h^0(X,\Omega_X^1)$ less than 2; \item $X^{\an}$ is $K$-analytically special; \item the tempered fundamental group $\pi_1^{\temp}(X^{\an})$ of $X^{\an}$ is virtually abelian. \end{enumerate} \end{thmx} The above results inspire us to formulate non-Archimedean counterparts to conjectures of Campana. Below, we present shortened versions of these conjectures, as many of the definitions have been omitted from our discussion up to this point. For precise statements of these conjectures, we refer the reader to Section \ref{sec:NonArchCampana}. \begin{conjecture}[Non-Archimedean Campana's conjectures] Let $X/K$ be a smooth projective variety. Then, the following are equivalent: \begin{enumerate} \item $X$ is special (\autoref{def:specialBogomolov}, \autoref{def:specialfibration}); \item $X^{\an}$ is $K$-analytically special. \end{enumerate} \end{conjecture} \begin{conjecture}[Non-Archimedean Campana's abelianity conjecture for fundamental groups] Let $X/K$ be a smooth projective variety. If $X^{\an}$ is $K$-analytically special, then $\pi_1^{\temp}(X^{\an})$ is virtually abelian. \end{conjecture} \subsection*{Preparatory results} In order to prove the above theorems, we need to prove three auxiliary results, which we believe to be of independent interest. The first one (\autoref{thm:meromorphicmap}) concerns the indeterminacy locus of a meromorphic mapping between $K$-analytic spaces and is a non-Archimedean analogue of a result due to Remmert \cite{remmert1957holomorphic}. The proof presented in that section has been provided to us by Brian Conrad. We use this result to show that our notion of $K$-analytically special is a bi-meromorphic invariant, which is a crucial property (see e.g., the proof of \autoref{thm:surfaceKinfinity}). Roughly speaking, a $K$-analytic space being $K$-analytically special means that it admits a dominant, meromorphic map from an algebraic group, and so in order for this notion to be a bi-meromorphic invariant, we need to understand the indeterminacy locus of a general meromorphic map. \autoref{thm:meromorphicmap} states that when the source is normal and the target is reduced and proper, the domain of definition of a meromorphic map is an open \textit{analytic} subset whose complement has codimension at least two. Note that this is precisely the condition we require in our definition of $K$-analytically special. For the second result, we begin by offering a new, more natural definition of pseudo-$K$-analytically Brody hyperbolic (\autoref{defn:KBrodyMod}) and deduce an equivalent way of testing this notion (\autoref{thm:equivalentBrody}), which brings it closer in line with our notion of $K$-analytically special.
To expand on this, the first definition of pseudo-$K$-analytically Brody hyperbolic appeared in \cite[Definition 2.2]{MorrowNonArchGGLV} and contained a seemingly unnatural condition of studying algebraic maps from big algebraic opens of abelian varieties. Moreover, with this original definition, it is unclear if the statement of \autoref{thm:nofibrationtoBrody} is true. To fix this issue, we modify this definition to test pseudo-$K$-analytically Brody hyperbolicity on big \textit{analytic} opens of analytifications of algebraic groups and prove that one can actually test this notion on analytic maps from $\mathbb{G}_{m,K}^{\an}$ and from big \textit{analytic} opens of analytifications of abelian varieties. With this new definition, we immediately arrive at \autoref{thm:nofibrationtoBrody}. The final preparatory theorem (\autoref{thm:extendanalyticmorphism}) is an extension result concerning meromorphic maps from smooth, irreducible adic spaces to the analytification of a semi-abelian variety, which is a non-Archimedean analogue of \cite[Section 8.4, Corollary 6]{BLR} and \cite[Lemma A.2]{MochizukiAbsoluteAnabelian}. We use this result and \autoref{thm:equivalentBrody} to show that \cite[Theorem A]{MorrowNonArchGGLV} remains true with our new definition of pseudo-$K$-analytically Brody hyperbolic (see \autoref{prop:Brodyequivalence}). Equipped with this fact, the proof of \autoref{thm:closedabelianspecial} follows from utilizing results of Vojta \cite{Vojta:IntegralPointsII} on the Ueno fibration of a closed subvariety of a semi-abelian variety. \subsection*{Organization} The paper has two parts. Sections \ref{sec:nonArchspaces}, \ref{sec:meromorphicmaps}, and \ref{sec:Brodyhyperbolic} form the first part and consist of background material and auxiliary results. The remaining sections focus on defining and proving results related to our notion of $K$-analytically special. More precisely, we organize the paper as follows. In Section \ref{sec:nonArchspaces}, we review non-Archimedean analytic spaces, describe several properties preserved by analytification, and study the sheaf of meromorphic functions on a taut locally strongly Noetherian adic space. In Section \ref{sec:meromorphicmaps}, we prove \autoref{thm:meromorphicmap}, following the proof given to us by Brian Conrad, which is a non-Archimedean analogue of a theorem of Remmert stating that the indeterminacy locus of a meromorphic map from a normal non-Archimedean analytic space to a proper one has codimension at least two. In Section \ref{sec:Brodyhyperbolic}, we give a new definition of pseudo-$K$-analytically Brody hyperbolic and deduce an equivalent way of testing this notion (\autoref{thm:equivalentBrody}). We begin our discussion on the various notions of specialness in Section \ref{sec:specialnotions} and offer our non-Archimedean characterization of specialness, which we call $K$-analytically special, in Section \ref{sec:nonArchspecial}. In this section, we prove several basic properties of being $K$-analytically special and our first main theorem (\autoref{thm:nofibrationtoBrody}). In Section \ref{sec:specialsubvarietiessemiabelian}, we prove our second main theorem (\autoref{thm:closedabelianspecial}) concerning when a closed subvariety of a semi-abelian variety is $K$-analytically special.
In Section \ref{sec:sufacenegKodaira}, we prove our final main theorem (\autoref{thm:surfaceKinfinity}), which characterizes when a projective surface of negative Kodaira dimension is $K$-analytically special in terms of its tempered fundamental group. To conclude, we formulate non-Archimedean counterparts to Campana's conjectures in Section \ref{sec:NonArchCampana}. \subsection*{Conventions} We establish the following to be used throughout. \subsubsection*{Fields and algebraic geometry} We will let $k$ denote an algebraically closed field of characteristic zero and let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero. A variety $X$ over a field will be an integral, separated scheme of finite type over said field. We will use $K_X$ to denote the canonical divisor on $X$ and $q(X) = h^0(X,\Omega_X^1)$ to denote the irregularity of $X$. For a smooth variety $X/k$ and a line bundle $\mathscr{L}$ on $X$, we use $\kappa(X,\mathscr{L})$ to denote the Iitaka dimension of $\mathscr{L}$, which we briefly recall. For each $m\geq 0$ such that $h^0(X,\mathscr{L}^{\otimes m}) \neq 0$, the linear system $|\mathscr{L}^{\otimes m}|$ induces a rational map from $X$ to a projective space of dimension $h^0(X,\mathscr{L}^{\otimes m}) - 1$. The Iitaka dimension of $\mathscr{L}$ is the maximum over all $m\geq 1$ of the dimensions of the images of these rational maps. The Kodaira dimension $\kappa(X)$ of $X$ is the Iitaka dimension of the canonical bundle. \subsubsection*{Analytic spaces} We will make use of various analytifications of a variety $X$ over a field. When $k = \mathbb{C}$, we will use $X^{\an}$ to denote the complex analytification of $X$, which corresponds to taking the complex valued points $X(\mathbb{C})$ of $X$. When $k = K$, we will denote the adic space associated with $X$ by $X^{\an}$ (as in \cite{huber2}). Sometimes we will need to consider the corresponding Berkovich space, and we denote the Berkovich analytification of $X$ by $X^{\Ber}$ (as in \cite{BerkovichSpectral}, or good Berkovich $K$-analytic spaces in \cite{BerkovichEtaleCohomology}). As these non-Archimedean analytifications are fundamental objects in our study, we devote Section \ref{sec:nonArchspaces} to describing their properties and relationships. When referring to rigid analytic, Berkovich, and adic spaces which may not be algebraic, we will use script notation ${\mathscr X},{\mathscr Y},{\mathscr Z}$. In our work, all rigid analytic spaces over $\Sp(K)$ are taut, all Berkovich $K$-analytic spaces are strict and Hausdorff, and all adic spaces are taut, locally strongly Noetherian, and locally of finite type over $\Spa(K,K^{\circ})$, unless otherwise stated. We will mainly use adic spaces, but sometimes we will need to refer to a different type of analytic space, and we will make clear which category of analytic space we are using in those instances. That being said, whenever we refer to a \cdef{$K$-analytic space}, we mean a taut, locally strongly Noetherian, and locally of finite type adic space over $\Spa(K,K^{\circ})$. \subsection*{Acknowledgments} We are very grateful to Brian Conrad for supplying the proof of \autoref{thm:meromorphicmap} and to Ariyan Javanpeykar for the suggestion to look at big opens for the definition of $K$-analytically special. We also thank Marta Benozzo, Marc-Hubert Nicole, and Remy van Dobben de Bruyn for helpful conversations and extend our thanks to Lea Beneish, Ariyan Javanpeykar, Marc-Hubert Nicole, and Alberto Vezzani for useful comments on a first draft.
\section{\bf Preliminaries on non-Archimedean analytic spaces} \label{sec:nonArchspaces} In this section, we provide necessary background on non-Archimedean analytic spaces. In particular, we describe the equivalence between certain subcategories of rigid analytic, Berkovich $K$-analytic, and adic spaces, recall basic properties of adic spaces, discuss properties of analytifications of algebraic varieties and algebraic morphisms, and finally introduce the sheaf of meromorphic functions on a taut locally strongly Noetherian adic space. \subsection{Comparisons between rigid analytic, Berkovich $K$-analytic, and adic spaces}\label{subsec:comparision} To begin, we recall the comparison between rigid analytic, Berkovich $K$-analytic, and adic spaces. \begin{theorem}[\protect{\cite[Theorem 1.6.1]{BerkovichEtaleCohomology}, \cite[Proposition 8.3.1]{huber}}]\label{thm:comparisionrigidadic} The category of taut rigid analytic spaces over $\Sp(K)$ is equivalent to the category of Hausdorff strict Berkovich $K$-analytic spaces. \end{theorem} \begin{theorem}[\protect{\cite[Proposition 8.3.1]{huber}, \cite[Theorem 0.1]{henkel:comparision}}]\label{thm:comparision} The category of taut adic spaces locally of finite type over $\Spa(K,K^{\circ})$ is equivalent to the category of Hausdorff strict Berkovich $K$-analytic spaces. At the level of topological spaces, this equivalence sends an adic space ${\mathscr X}$ to its universal Hausdorff quotient $[{\mathscr X}]$. \end{theorem} \begin{remark}\label{remark:nonHausdorff} We note that the adic space associated with an algebraic variety is an example of a taut adic space locally of finite type, and since it will be relevant in Subsection \ref{appendix}, we emphasize that \autoref{thm:comparision} tells us that taut adic spaces locally of finite type are in general not Hausdorff. \end{remark} We also recall that the notions of properness and finiteness are equivalent for rigid analytic and Berkovich $K$-analytic spaces. \begin{lemma}\label{lemma:equivalenceproperfinite} Let ${\mathscr X},{\mathscr Y}$ be rigid analytic spaces over $K$, and let ${\mathscr X}^{\Ber}$, ${\mathscr Y}^{\Ber}$ denote the associated Berkovich $K$-analytic spaces. Let $f\colon {\mathscr X} \to {\mathscr Y}$ denote a morphism of rigid analytic spaces, and let $f^{\Ber}\colon {\mathscr X}^{\Ber} \to {\mathscr Y}^{\Ber}$ denote the associated morphism of Berkovich $K$-analytic spaces. Then, \begin{enumerate} \item $f$ is proper if and only if $f^{\Ber}$ is proper, \item $f$ is finite if and only if $f^{\Ber}$ is finite. \end{enumerate} \end{lemma} \begin{proof} This follows from \cite[Proposition 3.3.2]{BerkovichSpectral} and \cite[Example 1.5.3]{BerkovichEtaleCohomology}. \end{proof} \subsection{Basic properties of adic spaces} We now recall the notions of reduced, normal, and irreducible adic spaces following \cite{mann:propertiesadicspaces}. \begin{definition}[\protect{\cite[Definition 1.3]{mann:propertiesadicspaces}}]\label{defn:normal} The adic space ${\mathscr X}$ is \cdef{normal} (resp.~\cdef{reduced}) if it can be covered by affinoid adic spaces of the form $\Spa(A,A^{+})$ where $A$ is normal (resp.~reduced). \end{definition} \begin{definition}[\protect{\cite[Definition 1.11]{mann:propertiesadicspaces}}] The adic space ${\mathscr X}$ is \cdef{irreducible} if it cannot be written as the union of two proper closed adic subspaces.
\end{definition} \subsection{Properties of analytifications of algebraic morphisms}\label{subsec:propertiesanalytification} We now discuss facts concerning the analytification functor from locally finite type $K$-schemes to adic spaces over $K$. The analogous results for rigid analytic spaces over $K$ and Berkovich $K$-analytic spaces are treated in \cite[Section 5]{conrad-conn} and \cite[Section 3.4]{BerkovichSpectral}, respectively. \begin{lemma}\label{lemma:absoluteproperties} Let $X$ be a $K$-scheme which is locally of finite type, and let $X^{\an}$ (resp.~$X^{\Ber}$) denote the adic space (resp.~Berkovich $K$-analytic space) associated with $X$. Then, \begin{enumerate} \item $X$ is reduced if and only if $X^{\an}$ (resp.~$X^{\Ber}$) is reduced, \item $X$ is normal if and only if $X^{\an}$ (resp.~$X^{\Ber}$) is normal, \item $X$ is separated if and only if $X^{\an}$ is separated (resp.~$|X^{\Ber}|$ is Hausdorff), \item $X$ is smooth if and only if $X^{\an}$ (resp.~$X^{\Ber}$) is smooth, \item $X$ is irreducible if and only if $X^{\an}$ (resp.~$X^{\Ber}$) is irreducible. \end{enumerate} \end{lemma} \begin{proof} The first and second statements follow from \cite[Proposition 3.4.3]{BerkovichSpectral} for Berkovich $K$-analytic spaces and from \cite[Theorem 5.1.3.(1)]{conrad-conn} and \cite[\S 1.1.11.(c)]{huber} for adic spaces. The third statement follows from \cite[Theorem 3.4.8.(1)]{BerkovichSpectral} for Berkovich $K$-analytic spaces and from \cite[Theorem 5.2.1]{conrad-conn} and \cite[Remark 1.3.19]{huber} for adic spaces. The fourth statement follows from \cite[Proposition 3.4.6.(3)]{BerkovichSpectral} for Berkovich $K$-analytic spaces and from \cite[Theorem 5.2.1]{conrad-conn} and \cite[Proposition 1.7.11]{huber} for adic spaces. The fifth statement follows from \cite[Proposition 2.7.16]{Ducros:Families} for Berkovich $K$-analytic spaces and from \cite[Theorem 2.3.1]{conrad-conn} and \cite[\S 1.1.11.(c)]{huber} for adic spaces. We mention that irreducibility of a Berkovich $K$-analytic space refers to irreducibility in the Zariski analytic topology (see \cite[\S 1.5.1]{Ducros:Families} for the definition). \end{proof} \begin{lemma}\label{lemma:morphismpartiallyproper} Let $f\colon X\to Y$ be a locally of finite type morphism of varieties over $K$, and let $f^{\an}\colon X^{\an} \to Y^{\an}$ (resp.~$f^{\Ber}\colon X^{\Ber} \to Y^{\Ber}$) denote the associated morphism of adic spaces (resp.~Berkovich $K$-analytic spaces). Then, $f^{\an}$ (resp.~$f^{\Ber}$) is partially proper (resp.~boundaryless). \end{lemma} \begin{proof} The second statement follows from \cite[Proposition 1.5.5.(ii)]{BerkovichEtaleCohomology} since $X^{\Ber}$ and $Y^{\Ber}$ are boundaryless Berkovich $K$-analytic spaces. For the first statement, we note that the second statement and \cite[Section 1.6]{BerkovichEtaleCohomology} imply that the associated morphism of rigid analytic spaces is partially proper, and now the result follows from \cite[Remark 1.3.19.(iii)]{huber}. \end{proof} \begin{lemma}\label{lemma:surjectiveflat} Let $f\colon X\to Y$ be a locally of finite type morphism of varieties over $K$, and let $f^{\an}\colon X^{\an} \to Y^{\an}$ (resp.~$f^{\Ber}\colon X^{\Ber} \to Y^{\Ber}$) denote the associated morphism of adic spaces (resp.~Berkovich $K$-analytic spaces). Then, \begin{enumerate} \item $f$ is surjective if and only if $f^{\an}$ (resp.~$f^{\Ber}$) is surjective, \item $f$ is flat if and only if $f^{\an}$ (resp.~$f^{\Ber}$) is flat. 
\end{enumerate} \end{lemma} \begin{proof} The first (resp.~second) statement for $f^{\an}$ follows from \autoref{lemma:morphismpartiallyproper} and \cite[p.~487, part (b)]{huber} (resp.~\cite[Lemma 1.1.10.(iii)]{huber}). The statements for $f^{\Ber}$ can be found in \cite[Proposition 3.4.6]{BerkovichSpectral}. \end{proof} \begin{lemma}\label{lemma:flatopen} Let $f\colon X\to Y$ be a locally of finite type morphism of varieties over $K$, and let $f^{\an}\colon X^{\an} \to Y^{\an}$ (resp.~$f^{\Ber}\colon X^{\Ber} \to Y^{\Ber}$) denote the associated morphism of adic spaces (resp.~Berkovich $K$-analytic spaces). If $f$ is flat, then $f^{\an}$ (resp.~$f^{\Ber}$) is flat and partially proper (resp.~boundaryless) and open. \end{lemma} \begin{proof} This follows from \autoref{lemma:morphismpartiallyproper}, \autoref{lemma:surjectiveflat}.(2), and \cite[Theorem 9.2.3 and Remark 9.2.4]{Ducros:Families} and \cite[p.~425]{huber}. \end{proof} \subsection{The sheaf of meromorphic functions on an adic space} To conclude this preliminary section, we define the sheaf of meromorphic functions on a taut, locally strongly Noetherian adic space over $\Spa(K,K^{\circ})$. We follow the exposition of Bosch \cite{bosch1982meromorphic}. First, we define the sheaf locally for a strongly Noetherian affinoid adic space. For any ring $R$, denote by $Q(R)$ its total ring of fractions. Let ${\mathscr X} = \Spa(A,A^{+})$ be an affinoid adic space where $A$ is strongly Noetherian. Let ${\mathscr U} \subset {\mathscr X}$ be a rational subset. By \cite[(II.I.IV)]{huber2}, the restriction map ${\mathscr O}_{{\mathscr X}}({\mathscr X}) \to {\mathscr O}_{{\mathscr X}}({\mathscr U})$ is flat, and hence it gives us a homomorphism between the corresponding total rings of fractions. Moreover, we can define a presheaf ${\mathscr M}_{{\mathscr X}}$ on the rational subsets ${\mathscr U}$ of ${\mathscr X}$ via \[ {\mathscr M}_{{\mathscr X}}({\mathscr U}) = Q({\mathscr O}_{{\mathscr X}}({\mathscr U})). \] \begin{lemma}\label{lemma:localsheafofdenominators} Let $f = g/h \in Q({\mathscr O}_{{\mathscr X}}({\mathscr X}))$ where $g,h \in {\mathscr O}_{{\mathscr X}}({\mathscr X})$ and $h$ is not a zero-divisor. Consider the presheaf $({\mathscr O}_{{\mathscr X}} : f)$ which associates with each rational subset ${\mathscr U} \subset {\mathscr X}$, the ${\mathscr O}_{{\mathscr X}}({\mathscr U})$-ideal \[ ({\mathscr O}_{{\mathscr X}}({\mathscr U}) : f) := \brk{a \in {\mathscr O}_{{\mathscr X}}({\mathscr U}) : af\in {\mathscr O}_{{\mathscr X}}({\mathscr U})}. \] Then, $({\mathscr O}_{{\mathscr X}} : f)$ is a coherent ${\mathscr O}_{{\mathscr X}}$-module. \end{lemma} \begin{proof} The proof is identical to that from \cite[Lemma 2.1]{bosch1982meromorphic}. \end{proof} \begin{lemma}\label{lemma:localsheaf} The presheaf ${\mathscr M}_{{\mathscr X}}$ defines a sheaf on ${\mathscr X}$. \end{lemma} \begin{proof} The proof follows in the same manner as \cite[Lemma 2.2]{bosch1982meromorphic}, and so we will only offer a sketch. Let $\brk{{\mathscr U}_1,\dots,{\mathscr U}_n}$ denote an open cover of ${\mathscr X} = \Spa(A,A^{+})$ by rational subsets. Consider functions $f_i \in {\mathscr M}_{{\mathscr X}}({\mathscr U}_i)$ such that $f_{i_{| {\mathscr U}_i \cap {\mathscr U}_j}} = f_{j_{| {\mathscr U}_i \cap {\mathscr U}_j}}$ for all $1\leq i,j\leq n$. By \autoref{lemma:localsheafofdenominators}, we can construct the coherent ${\mathscr O}_{{\mathscr U}_i}$-ideal $({\mathscr O}_{{\mathscr U}_i} : f_i)$. 
These ideals will coincide on all intersections ${\mathscr U}_i \cap {\mathscr U}_j$ and hence will glue together to form a coherent ${\mathscr O}_{{\mathscr X}}$-module, which we denote by ${\mathscr J}$. Using the Noether decomposition theorem, we can see that not all elements of ${\mathscr J}({\mathscr X})$ are zero-divisors in ${\mathscr O}_{{\mathscr X}}({\mathscr X})$. Let $h \in {\mathscr J}({\mathscr X})$ denote such an element. Now $(h_{|{\mathscr U}_i})f_i \in {\mathscr O}_{{\mathscr X}}({\mathscr U}_i)$ for all $i$, and since they coincide on ${\mathscr U}_i \cap {\mathscr U}_j$, they glue to define a function $g\in {\mathscr O}_{{\mathscr X}}({\mathscr X})$. Therefore, we can define a global element $f = g/h \in {\mathscr M}_{{\mathscr X}}({\mathscr X})$ that restricts to $f_i$ on ${\mathscr U}_i$ and can be shown to be unique. \end{proof} Using the above results, we can globalize this construction. \begin{definition} Let ${\mathscr X}$ be a taut, locally strongly Noetherian adic space over $\Spa(K,K^{\circ})$. We define the \cdef{sheaf of meromorphic functions on ${\mathscr X}$} as follows. Consider an affinoid open cover $\brk{{\mathscr U}_i}$ of ${\mathscr X}$ where each ${\mathscr U}_i = \Spa(A_i,A_i^+)$ for $A_i$ a strongly Noetherian ring. By \autoref{lemma:localsheaf}, we can define the sheaf ${\mathscr M}_{{\mathscr U}_i}$ for each ${\mathscr U}_i$ and glue these together to get a global sheaf ${\mathscr M}_{{\mathscr X}}$. \end{definition} \begin{remark} The stalk of ${\mathscr M}_{{\mathscr X}}$ at $x\in {\mathscr X}$ is $Q({\mathscr O}_{{\mathscr X},x})$. Any non-zero-divisor germ $h\in {\mathscr O}_{{\mathscr X},x}$ can be extended to a function on an affinoid neighborhood ${\mathscr U}$ of $x$ such that $h$ is not a zero-divisor in ${\mathscr O}_{{\mathscr X}}({\mathscr U})$. \end{remark} \begin{definition} For each global meromorphic function $f\in {\mathscr M}_{{\mathscr X}}({\mathscr X})$, the coherent ${\mathscr O}_{{\mathscr X}}$-ideal $({\mathscr O}_{{\mathscr X}} : f)$ can be constructed via \autoref{lemma:localsheafofdenominators}, and we will call this ideal sheaf the \cdef{ideal of denominators of $f$}. \end{definition} We now define the indeterminacy locus of a global meromorphic function $f \in {\mathscr M}_{{\mathscr X}}({\mathscr X})$. To do so, we briefly recall \cite[(1.4.1)]{huber}, which explains how one can define a closed adic subspace associated with a coherent ${\mathscr O}_{{\mathscr X}}$-ideal ${\mathscr I}$. Let \[ V({\mathscr I}) := \brk{x\in {\mathscr X} : {\mathscr I}_{x} \neq {\mathscr O}_{{\mathscr X},x}} \] and ${\mathscr O}_{V({\mathscr I})} := ({\mathscr O}_{{\mathscr X}}/{\mathscr I})_{|V({\mathscr I})}$. Note that for every $x\in {\mathscr X}$, the support of $v_x$ is equal to the maximal ideal $\mathfrak{m}_x$ of ${\mathscr O}_{{\mathscr X},x}$. By definition ${\mathscr I}_{y} \subset \mathfrak{m}_{y}$ for every $y \in V({\mathscr I})$, and the valuation $v_y$ induces a valuation $v_{y}'$ of ${\mathscr O}_{V({\mathscr I}),y}$. By considering the quotient topology, one has that for every open affinoid subspace ${\mathscr V} \subset {\mathscr X}$, the mapping ${\mathscr O}_{{\mathscr X}}({\mathscr V}) \to {\mathscr O}_{V({\mathscr I})}(V({\mathscr I}) \cap {\mathscr V})$ is a quotient map. Moreover, the triple \[ V({\mathscr I}) := (V({\mathscr I}),{\mathscr O}_{V({\mathscr I})}, (v_y' : y \in V({\mathscr I}))) \] defines an adic space, which we call the \cdef{closed adic space associated with the ${\mathscr O}_{{\mathscr X}}$-ideal ${\mathscr I}$}. 
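To make these constructions concrete, we record a simple and standard example on the closed unit disk; the verification is immediate from the definitions. \begin{example} Let ${\mathscr X} = \Spa(K\langle T\rangle, K\langle T\rangle^{\circ})$ be the closed unit disk over $K$, and consider $f = 1/T \in Q({\mathscr O}_{{\mathscr X}}({\mathscr X}))$, which is a well-defined global meromorphic function since $T$ is not a zero-divisor in the domain $K\langle T\rangle$. An element $a\in K\langle T\rangle$ satisfies $a/T \in K\langle T\rangle$ precisely when $T$ divides $a$, so the ideal of denominators of $f$ is $({\mathscr O}_{{\mathscr X}} : f) = (T)$. The associated closed adic space $V((T))$ is the origin of the disk, which, in the terminology introduced below, is the set of poles of $f$; the sheaf of numerators is the unit ideal, so the set of zeros and hence the indeterminacy locus of $f$ are empty. \end{example} 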
We now return to our goal of defining the set of poles and zeros of a global meromorphic function. \begin{definition} Let $f\in {\mathscr M}_{{\mathscr X}}({\mathscr X})$. \begin{itemize} \item The closed adic space $P_f := V(({\mathscr O}_{{\mathscr X}} : f))$ associated with the coherent ${\mathscr O}_{{\mathscr X}}$-ideal $({\mathscr O}_{{\mathscr X}} : f)$ is called \cdef{the set of poles of $f$}. \item In analogy with the ideal of denominators, we can define the sheaf of numerators $({\mathscr O}_{{\mathscr X}} : f)\cdot f$, which is a coherent ${\mathscr O}_{{\mathscr X}}$-ideal. The closed adic space $Z_f := V(({\mathscr O}_{{\mathscr X}} : f)\cdot f)$ associated with the sheaf of numerators is called \cdef{the set of zeros of $f$}. \item The \cdef{indeterminacy locus} of $f$ is defined as the intersection $P_f \cap Z_f$. Note that by Hilbert's Nullstellensatz, $f\in {\mathscr O}_{{\mathscr X}}({\mathscr X})$ if and only if $P_f = \emptyset$. \end{itemize} \end{definition} We want to show that ${\mathscr M}_{{\mathscr X}}({\mathscr X})$ is a field when ${\mathscr X}$ is irreducible and reduced, and to prove this result, we will need a simple lemma. \begin{lemma}\label{lemma:unitmeromorphic} Let ${\mathscr X}$ be a reduced, taut, locally strongly Noetherian adic space over $\Spa(K,K^{\circ})$, and let $f\in {\mathscr M}_{{\mathscr X}}({\mathscr X})$. Then $f$ is a unit in ${\mathscr M}_{{\mathscr X}}({\mathscr X})$ if and only if $Z_f$ does not contain a non-empty open subspace. \end{lemma} \begin{proof} We may assume that ${\mathscr X} = \Spa(A,A^{+})$ is affinoid and that $A$ is complete, and so we write $f = g/h$ with $g,h\in A$ such that $h$ is not a zero-divisor in $A$. We have that $Z_f \subset V(g)$, where $V(g)$ denotes the vanishing locus of $g$, and both $Z_f$ and $V(g)$ coincide on the complement of $V(h)$. Since $h$ is not a zero-divisor, $V(h)$ does not contain a non-empty open subspace, and moreover, we have that $Z_f$ contains an open subspace if and only if $V(g)$ contains an open subspace. However, this only happens if $g$ is a zero-divisor in $A$, and hence the result follows. \end{proof} We now record a useful corollary, which is the adic analogue of \cite[Corollary 2.5]{bosch1982meromorphic}. \begin{corollary}\label{coro:meromorphicfield} If ${\mathscr X}$ is an irreducible, reduced, taut, locally strongly Noetherian adic space over $\Spa(K,K^{\circ})$, then ${\mathscr M}_{{\mathscr X}}({\mathscr X})$ is a field. \end{corollary} \begin{proof} This follows from \autoref{lemma:unitmeromorphic} and the fact that a closed adic subspace ${\mathscr Y}$ of an irreducible adic space ${\mathscr X}$ which contains a non-empty open subspace must be the entire space, i.e., ${\mathscr Y} = {\mathscr X}$. \end{proof} When ${\mathscr X}$ is irreducible and reduced, any morphism ${\mathscr X} \to \mathbb{P}^{1,\an}$ that is not identically equal to $\infty$ is naturally identified with a meromorphic function $f\in {\mathscr M}_{{\mathscr X}}({\mathscr X})$. We will need to know when such a morphism will separate points. \begin{lemma}\label{lemma:separatingpoints} Suppose ${\mathscr X}$ is an irreducible, reduced, separated, taut, locally strongly Noetherian adic space over $\Spa(K,K^{\circ})$. Let $x,y\in {\mathscr X}$ be two distinct points. Then, there exists a meromorphic function $f\in {\mathscr M}_{{\mathscr X}}({\mathscr X})$ such that $f(x) \neq f(y)$, where we consider $f$ as a morphism ${\mathscr X} \to \mathbb{P}^{1,\an}$. 
\end{lemma} \begin{proof} By the valuative criterion for separatedness \cite[Proposition 1.3.7]{huber}, we have that points of ${\mathscr X}$ are separated, and in particular, we have that $({\mathscr O}_{{\mathscr X},x},{\mathscr O}_{{\mathscr X},x}^+)\ncong ({\mathscr O}_{{\mathscr X},y},{\mathscr O}_{{\mathscr X},y}^+)$. We may therefore choose a germ $f\in ({\mathscr O}_{{\mathscr X},x},{\mathscr O}_{{\mathscr X},x}^+)$ which is not in $({\mathscr O}_{{\mathscr X},y},{\mathscr O}_{{\mathscr X},y}^+)$, and then $f(x) \neq f(y)$, where we say that $f(y) = \infty$ (as $f\notin ({\mathscr O}_{{\mathscr X},y},{\mathscr O}_{{\mathscr X},y}^+)$). Since ${\mathscr X}$ is irreducible and reduced, ${\mathscr O}_{{\mathscr X},x}$ is a subring of ${\mathscr M}_{{\mathscr X}}({\mathscr X})$ and hence $f$ defines an element of ${\mathscr M}_{{\mathscr X}}({\mathscr X})$. \end{proof} \section{\bf Meromorphic maps between non-Archimedean analytic spaces} \label{sec:meromorphicmaps} In this section, we prove a non-Archimedean variant of a result of Remmert \cite[p.~333]{remmert1957holomorphic} concerning the codimension of the indeterminacy locus of a meromorphic map between certain $K$-analytic spaces. The proofs presented here were given to us by Brian Conrad. \begin{theorem}\label{thm:meromorphicmap} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero. Let ${\mathscr X}/K$ be a normal, taut rigid analytic space, and let ${\mathscr Y}/K$ be a proper, reduced, taut rigid analytic space. The indeterminacy locus of any meromorphic map ${\mathscr X}\dashrightarrow {\mathscr Y}$ is an analytic subset of codimension at least two. \end{theorem} First, we recall the definition of a meromorphic map following Remmert. \begin{definition}\label{defn:meromorphic} Let ${\mathscr X}$ and ${\mathscr Y}$ be reduced rigid analytic spaces. A \cdef{meromorphic map} $\varphi\colon {\mathscr X} \dashrightarrow {\mathscr Y}$ is an analytic subset ${\mathscr E}\subset {\mathscr X} \times {\mathscr Y}$ which is mapped properly onto ${\mathscr X}$ by the projection map $\pr_1\colon {\mathscr X} \times {\mathscr Y} \to {\mathscr X}$ of ${\mathscr X} \times {\mathscr Y}$ onto the first factor, such that outside a nowhere dense analytic set ${\mathscr Z} \subset {\mathscr X}$ this map is biholomorphic. Moreover, $\pr_1^{-1}({\mathscr Z})$ is nowhere dense in ${\mathscr E}$, and we call the set ${\mathscr E}$ the \cdef{graph of $\varphi$}. \end{definition} \begin{remark} We now explain the reason for calling ${\mathscr E}$ the graph of $\varphi$. Let ${\mathscr X}$ be a reduced rigid analytic space, and let ${\mathscr U},{\mathscr V}$ be open subsets of ${\mathscr X}$. Suppose we are given two maps $\varphi_1\colon {\mathscr U} \to {\mathscr Y}$ and $\varphi_2\colon {\mathscr V}\to{\mathscr Y}$ that coincide on the intersection ${\mathscr U} \cap {\mathscr V}$. Then the closure of the graph of $\varphi_1$ coincides with the closure of the graph of $\varphi_2$, and this graph defines a meromorphic map $\varphi$ as in \autoref{defn:meromorphic}. \end{remark} For the remainder of this section, let ${\mathscr X}/K$ be a normal rigid analytic space, let ${\mathscr Y}/K$ be a proper, reduced rigid analytic space, let $\varphi\colon {\mathscr X} \dashrightarrow {\mathscr Y}$ be a meromorphic map, and let ${\mathscr E}$ denote the graph of $\varphi$. 
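Before turning to the proof, we record the guiding example, which is the direct non-Archimedean analogue of the classical complex-analytic one; we will not use it in what follows. \begin{example} The assignment $(x,y)\mapsto [x:y]$ defines a meromorphic map $\mathbb{A}^{2,\an} \dashrightarrow \mathbb{P}^{1,\an}$ whose graph is the analytification of the blow-up of $\mathbb{A}^2$ at the origin. The projection to $\mathbb{A}^{2,\an}$ is proper and restricts to an isomorphism away from the origin, and the indeterminacy locus is exactly the origin, which has codimension two. In particular, the bound in \autoref{thm:meromorphicmap} cannot be improved. \end{example} 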
By \autoref{defn:meromorphic}, the morphism \begin{equation}\label{eqn:morphism} {\mathscr E} \setminus \pr_1^{-1}({\mathscr Z}) \longrightarrow {\mathscr X} \setminus {\mathscr Z} \end{equation} is an isomorphism. We first note that there is a unique minimal ${\mathscr Z}$ with the above properties and its formation is Tate-local on ${\mathscr X}$. We may and do assume that ${\mathscr Z}$ is minimal. Our goal is to show that ${\mathscr Z}$ has codimension at least $2$ in ${\mathscr X}$ everywhere along ${\mathscr Z}$. Observe that this assertion is vacuously true when ${\mathscr Z}$ is empty. Since the formation of ${\mathscr Z}$ is Tate-local, we may assume that ${\mathscr X}$ is affinoid and connected, and hence irreducible (as ${\mathscr X}$ is normal), so ${\mathscr X} \setminus {\mathscr Z}$ is also irreducible. Therefore, by \eqref{eqn:morphism} and the nowhere-density of $\pr_1^{-1}({\mathscr Z})$ in ${\mathscr E}$, we have that ${\mathscr E}$ is irreducible. The image of $\pr_1\colon {\mathscr E} \to {\mathscr X}$ is an analytic set which contains the non-empty, Zariski open ${\mathscr X} \setminus {\mathscr Z}$, so it has non-empty interior in ${\mathscr X}$, and since ${\mathscr X}$ is irreducible, we have that this image must coincide with ${\mathscr X}$ and hence the fibers of $\pr_1$ are non-empty. Since ${\mathscr E}$ is equidimensional (by irreducibility), its pure dimension is the same as that of the irreducible affinoid ${\mathscr X}$ because we may determine the dimension using ${\mathscr X} \setminus {\mathscr Z}$ and ${\mathscr E} \setminus \pr_1^{-1}({\mathscr Z})$. If there exists a section $s\colon {\mathscr X} \to {\mathscr E}$, then ${\mathscr X} \to {\mathscr E} \to {\mathscr Y}$ is an actual map which agrees on ${\mathscr X} \setminus {\mathscr Z}$ with the given meromorphic map $\varphi$, and then the minimality of ${\mathscr Z}$ will imply that ${\mathscr Z}$ is empty. We now study the fibers of the surjective morphism $\pr_1\colon {\mathscr E} \to {\mathscr X}$ over ${\mathscr Z}$. Before doing so, we recall a result concerning proper morphisms. \begin{lemma}\label{lemma:locusproper} Let $f\colon {\mathscr X} \to {\mathscr{S}}$ be a proper map of rigid analytic spaces. Then the locus of points in ${\mathscr{S}}$ over which the fiber of $f$ is finite is an admissible open. \end{lemma} \begin{proof} We may first assume that ${\mathscr{S}} = \Sp(A)$ is affinoid, so ${\mathscr X}$ is quasi-compact and separated. Now consider the associated map $f^{\Ber}\colon {\mathscr X}^{\Ber}\to {\mathscr{S}}^{\Ber} = M(A)$ of Berkovich spaces. By \autoref{lemma:equivalenceproperfinite}.(1), the notions of properness are equivalent, and so we have that $f^{\Ber}$ is proper. If $s\in {\mathscr{S}}$ has finite fiber ${\mathscr X}_s$, then the analytic fiber of $f^{\Ber}$ over $s$ is likewise finite, hence a finite set; see \autoref{lemma:equivalenceproperfinite}.(2). By \cite[Corollary 3.3.11]{BerkovichSpectral}, we get an open ${\mathscr U}$ in ${\mathscr{S}}^{\Ber}$ over which $f^{\Ber}$ has finite fibers. In $M(A)$, a base of neighborhoods around any point is provided by rational affinoid domains, and we can arrange these to be strict since $M(A)$ is strict. Thus, we can find a rational affinoid $\Sp(B)$ in $\Sp(A)$ so that $M(B)$ is contained in ${\mathscr U}$ and contains $s$. 
We have that $\Sp(B)$ is an admissible open in $\Sp(A)$ around $s$ over which $f$ has finite fibers (since every fiber ${\mathscr X}_t$ is a quasi-compact and separated $\kappa(t)$-analytic space with $({\mathscr X}_t)^{\Ber}$ over $M(\kappa(t))$ identified with the fiber of $f^{\Ber}$ over $t$ in $M(A)$, and ${\mathscr X}_t \to \Sp(\kappa(t))$ is finite if and only if the Berkovich space over $M(\kappa(t))$ is finite in the analytic sense; see again \autoref{lemma:equivalenceproperfinite}.(2)). \end{proof} \begin{lemma}\label{lemma:positivedimfibers} All fibers of the surjection $\pr_1\colon {\mathscr E} \to {\mathscr X}$ over ${\mathscr Z}$ have positive dimension. \end{lemma} \begin{proof} Suppose for the sake of contradiction that there is some $z\in {\mathscr Z} \subset {\mathscr X}$ which has a finite fiber. By \autoref{lemma:locusproper}, we have that the locus of points in ${\mathscr X}$ over which the fiber is finite is an admissible open. By passing to a connected affinoid neighborhood of such a $z$, we may and do assume that all of the fibers of $\pr_1$ are finite and ${\mathscr X} = \Sp(A)$ for a normal domain $A$ (since connected and normal implies irreducible and reduced). However, since $\pr_1$ is proper and quasi-finite, it is also finite, and hence ${\mathscr E}$ is also affinoid, say ${\mathscr E} = \Sp(B)$. Therefore, $\pr_1$ is a finite surjection $\Sp(B) \to \Sp(A)$ that restricts to an isomorphism over the complement of ${\mathscr Z}$. Recall that ${\mathscr Z}$ in $\Sp(A)$ and $\pr_1^{-1}({\mathscr Z})$ in $\Sp(B)$ are nowhere-dense, so the irreducibility of the dense subset ${\mathscr E} \setminus \pr_1^{-1}({\mathscr Z})$ of ${\mathscr E}$ forces ${\mathscr E}$ to be irreducible. As such, we have that $B_{\red}$ is a domain and the surjective map $\Sp(B_{\red}) \to \Sp(A)$ is an isomorphism over $\Sp(A) \setminus {\mathscr Z}$. We will show that the induced map on affinoid algebras $A \to B_{\red}$ is an isomorphism. Once we have shown this, the inverse map $B_{\red}\to A$ will define a morphism $\Sp(A)\to \Sp(B_{\red}) \to \Sp(B)$ that is a section to $\pr_1\colon \Sp(B) \to \Sp(A)$ because the composition of these two maps agrees with the identity away from ${\mathscr Z}$ and hence is the identity as ${\mathscr Z}$ is nowhere-dense in the reduced $\Sp(A)$. However, we noted before \autoref{lemma:locusproper} that the existence of a section allows us to show that ${\mathscr Z}$ is empty, which contradicts the existence of $z\in {\mathscr Z}$. We now return to showing that the module-finite map $A \to B_{\red}$ between Noetherian domains is an isomorphism. Note that this map is injective, and so by normality of $A$, the map is an isomorphism if the induced map of fraction fields has degree $1$. Let $d$ denote the degree of the map of fraction fields. We know that $\Spec(B_{\red}) \to \Spec(A)$ is finite flat of degree $d$ over some non-empty Zariski open $V$, and by the Jacobson property of $A$ \cite[6.1.1/3]{BGR}, there is a closed point $t$ which is in $V$ and also away from the proper closed set corresponding to the ideal of ${\mathscr Z}$. Moreover, $A\to B_{\red}$ induces a finite flat map of degree $d$ after completion at the maximal ideal of $t$, but that map coincides with the map upon completion at $t$ arising from the finite analytic map $\Sp(B_{\red}) \to \Sp(A)$. Recall that this latter map is an isomorphism over the complement of ${\mathscr Z}$ and hence upon the completion at $t$, and therefore, we have that $d=1$. 
\end{proof} To complete the proof, we need to show that the irreducible components of ${\mathscr Z}$ have codimension at least two. We will need the following lemma. \begin{lemma}\label{lemma:finitefibers} If $q\colon {\mathscr T} \to {\mathscr{S}}$ is a proper surjective map between $K$-analytic spaces that are equidimensional with the same dimension $d \geq 0$, then every non-empty admissible open in ${\mathscr{S}}$ contains a point over which the fiber is finite. \end{lemma} \begin{proof} The case of $d = 0$ is trivial, and we proceed by induction on $d$, so we assume that $d > 0$. The first step is to reduce to the case when ${\mathscr T}$ and ${\mathscr{S}}$ are each irreducible. We can precompose with the normalization of ${\mathscr T}$ so that ${\mathscr T}$ is normal with connected components ${\mathscr T}_1,\dots,{\mathscr T}_n$. Each ${\mathscr T}_j$ has image that is an analytic set in an irreducible component of ${\mathscr{S}}$. Each irreducible component of ${\mathscr{S}}$ must then be the image of some ${\mathscr T}_j$ since $q$ is surjective, and the connected components ${\mathscr T}_j$ which map onto an irreducible component of ${\mathscr{S}}$ must factor through the normalization of that irreducible component (cf.~\cite[Theorem 2.2.4]{conrad-conn}). With this, we can express $q$ as a ``disjoint union'' of two types of maps:~some of the ${\mathscr T}_j$ map onto a connected component of the normalization of ${\mathscr{S}}$ and some ${\mathscr T}_j$ map onto a proper analytic set in an irreducible component of ${\mathscr{S}}$. If we can prove our result for the first type, then by working over the Zariski-open in ${\mathscr{S}}$ away from the images of the maps of the second type, we can conclude our result for the original $q$. Since the normalization of ${\mathscr{S}}$ is finite (and surjective), we can thereby reduce to the case where ${\mathscr T}$ and ${\mathscr{S}}$ are connected and normal. Let ${\mathscr U} =\Sp(A)$ be a connected affinoid in ${\mathscr{S}}$, so it is irreducible. It is enough to find one fiber over ${\mathscr U}$ that is finite. The connected components of ${\mathscr V} = q^{-1}({\mathscr U})$ are its irreducible components, and there are finitely many by the quasi-compactness of $q$. Note that at least one of these irreducible components maps onto ${\mathscr U}$. By an argument similar to the one above with the ${\mathscr T}_j$, we can find a point $u \in {\mathscr U}$ that is only contained in the image of those components of ${\mathscr V}$ which map onto ${\mathscr U}$. Let ${\mathscr W}$ be the union of those components. Pick a nonzero, non-unit $f\in A$ that vanishes at $u$. The pullback $f' = q^*(f)$ on ${\mathscr W}$ defines an analytic set ${\mathscr W}'$ mapping onto the zero locus ${\mathscr U}'$ of $f$ in ${\mathscr U}$. Endow ${\mathscr W}'$ and ${\mathscr U}'$ with their reduced structures. Each of ${\mathscr W}'$ and ${\mathscr U}'$ is equidimensional of dimension $d-1$ since ${\mathscr U}$ is irreducible and every irreducible (and hence connected) component of ${\mathscr W}$ maps onto ${\mathscr U}$. The map $q\colon {\mathscr W}'\to {\mathscr U}'$ satisfies the original hypotheses but with dimension $d-1$. Therefore, by induction, every non-empty admissible open in ${\mathscr U}'$ contains a point over which the fiber in ${\mathscr W}'$ is finite. 
Now, pick an open around $u$ which avoids the analytic images of the components of ${\mathscr V}$ not part of ${\mathscr W}$, so we get a finite fiber for ${\mathscr W}'\to {\mathscr U}'$ which is also a fiber for $q^{-1}({\mathscr U}')\to {\mathscr U}'$, and so is a fiber of ${\mathscr V} = q^{-1}({\mathscr U}) \to {\mathscr U}$. \end{proof} \begin{prop}\label{prop:codim} Every irreducible component of ${\mathscr Z}$ has codimension at least two. \end{prop} \begin{proof} Suppose for the sake of contradiction that ${\mathscr Z}$ has an irreducible component ${\mathscr Z}'$ of codimension $1$ in the connected normal affinoid ${\mathscr X} = \Sp(A)$; in particular, we assume that ${\mathscr Z}'$ has pure codimension $1$. First, we reduce to the case where ${\mathscr Z} = {\mathscr Z}'$. Pick $z'\in {\mathscr Z}'$ not in any other irreducible component of ${\mathscr Z}$, and let ${\mathscr W}$ be a connected affinoid open in ${\mathscr X}$ around $z'$ that is inside the Zariski-open complement of the finite union of the other irreducible components of ${\mathscr Z}$. We can replace ${\mathscr X}$ with such a ${\mathscr W}$ (since the formation of ${\mathscr Z}$ is local in ${\mathscr X}$), and so we may and do assume that ${\mathscr Z}$ is irreducible of pure codimension $1$. Next, we reduce to the case where ${\mathscr Z}$ is defined by a principal ideal. We may assume that ${\mathscr Z}$ is reduced, so ${\mathscr Z} = \Sp(A/P)$ where $P$ is some prime ideal of $A$ of height $1$. Since $A$ is normal, we have that $A_P$ is a DVR, and so there is an affine open $\Spec(A_a)$ in $\Spec(A)$ containing $P$ for which $P_a$ is principal with generator $f \in A$. By looking at the map $\Sp(A) \to \Spec(A)$, we see that for the Zariski open ${\mathscr U} = \brk{a \neq 0}$ in ${\mathscr X}$, the intersection ${\mathscr Z} \cap {\mathscr U}$ is defined by the ideal generated by $f$. Note that $a$ is nonzero on $\Sp(A/P)$ because $a$ is not in $P$ since $P \in \Spec(A_a)$. As such, we have that its sup-norm on $\Sp(A/P)$ is positive, and by replacing $a$ with some $ca^n$ for $c\in K^{\times}$ and $n$ a positive integer, we can arrange it so that this sup-norm is $1$. With this, we have that the Laurent domain ${\mathscr V} = \Sp(A\langle a^{-1} \rangle) = \brk{x \in {\mathscr X} : |a(x)| \geq 1}$ is an affinoid open in ${\mathscr X} = \Sp(A)$ that meets ${\mathscr Z} = \Sp(A/P)$, and ${\mathscr V} \cap {\mathscr Z}$ cannot equal ${\mathscr V}$ since ${\mathscr Z}$ has pure codimension $1$ in the irreducible ${\mathscr X}$. We can replace ${\mathscr X}$ with a connected component of ${\mathscr V}$ that touches ${\mathscr Z}$, so we retain all preceding properties and gain that the radical ideal of ${\mathscr Z}$ in ${\mathscr X}$ is principal, say $fA$. We know that ${\mathscr E}$ is irreducible with the same pure dimension $d$ as the irreducible ${\mathscr X}$ (so $d>0$ since ${\mathscr X}$ has the irreducible subspace ${\mathscr Z}$ with positive codimension), and the map $\pr_1\colon {\mathscr E} \to {\mathscr X}$ is surjective. Thus, the analytic function $f' \coloneqq \pr_1^*(f)$ on ${\mathscr E}$ determines an analytic set in ${\mathscr E}$ that does not exhaust the space (because its image in ${\mathscr X}$ is ${\mathscr Z}$ rather than ${\mathscr X}$), and so it has pure dimension $d-1$. Moreover, the map of vanishing loci \[ q\colon V(f')_{\red} \longrightarrow V(f) = {\mathscr Z} \] is a proper surjection between analytic spaces each of pure dimension $d-1$. 
Now, \autoref{lemma:positivedimfibers} tells us that \textit{all} of the fibers of $q$ have positive dimension; however, \autoref{lemma:finitefibers} asserts that there are fibers which are finite. Therefore, we have reached a contradiction to our original assumption that ${\mathscr Z}$ has an irreducible component ${\mathscr Z}'$ of codimension $1$, and hence our result follows. \end{proof} \begin{proof}[Proof of \autoref{thm:meromorphicmap}] This follows from \autoref{defn:meromorphic} and \autoref{prop:codim}. \end{proof} To conclude this section, we show that the statement of \autoref{thm:meromorphicmap} carries over from rigid analytic to adic spaces. We note that \autoref{defn:meromorphic} carries over to adic spaces \textit{mutatis mutandis}. \begin{prop}\label{prop:meromorphicmapadic} Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero. Let ${\mathscr X}$ be a normal, taut, locally of finite type adic space over $\Spa(K,K^{\circ})$, and let ${\mathscr Y}$ be a proper, reduced, taut, locally of finite type adic space over $\Spa(K,K^{\circ})$. The indeterminacy locus of any meromorphic map ${\mathscr X}\dashrightarrow {\mathscr Y}$ is an analytic subset of codimension at least two. \end{prop} \begin{proof} First, we recall that there is a functor $\mathfrak{r}$ from the category of rigid analytic spaces to the category of adic spaces, which induces an equivalence on certain subcategories (cf.~\autoref{thm:comparisionrigidadic} and \autoref{thm:comparision}). We note that the image of a normal, taut (resp.~a proper, reduced, taut) rigid analytic space over $\Sp(K)$ via $\mathfrak{r}$ will be a normal, taut, locally of finite type (resp.~proper, reduced, taut, locally of finite type) adic space over $\Spa(K,K^{\circ})$ via \cite[(1.1.11) \& Remark 1.3.9.iv]{huber} and \autoref{defn:normal}. Next, we have that $\mathfrak{r}$ will map an open immersion of rigid analytic spaces to an open immersion of adic spaces (\textit{loc.~cit.~}(1.1.11.b)), that $\mathfrak{r}$ is fully faithful (\textit{loc.~cit.~}(1.1.11.d)), and that $\mathfrak{r}$ preserves dimensions (\textit{loc.~cit.~}(1.8.11.i)). The result now follows from \autoref{thm:meromorphicmap} and applying the functor $\mathfrak{r}$. \end{proof} \section{\bf On pseudo-$K$-analytically Brody hyperbolic varieties} \label{sec:Brodyhyperbolic} In this section, we offer a new definition of pseudo-$K$-analytically Brody hyperbolic which differs slightly from \cite{MorrowNonArchGGLV}, and we prove a result describing how one can test this notion. To begin, we offer our new definition. \begin{definition} \label{defn:KBrodyMod} Let ${\mathscr X}$ be a $K$-analytic space and let ${\mathscr D}\subset {\mathscr X}$ be a closed subset. Then ${\mathscr X}$ is \cdef{$K$-analytically Brody hyperbolic modulo ${\mathscr D}$} (or:~\cdef{the pair $({\mathscr X},{\mathscr D})$ is $K$-analytically Brody hyperbolic}) if \begin{itemize} \item every non-constant analytic morphism $\mathbb{G}_{m,K}^{\an} \to {\mathscr X}$ factors over ${\mathscr D}$, and \item for every abelian variety $A$ over $K$ and every dense open subset ${\mathscr U}\subset A^{\an}$ with $\mathrm{codim}(A^{\an}\setminus {\mathscr U})\geq 2$, every non-constant analytic morphism ${\mathscr U} \to {\mathscr X}$ factors over ${\mathscr D}$. 
\end{itemize} \end{definition} \begin{definition} A $K$-analytic space ${\mathscr X}$ over $K$ is \cdef{pseudo-$K$-analytically Brody hyperbolic} if there is a proper closed subset ${\mathscr D} \subsetneq {\mathscr X}$ of ${\mathscr X}$ such that $({\mathscr X},{\mathscr D})$ is $K$-analytically Brody hyperbolic. \end{definition} \begin{remark}\label{rmk:differentdefinitions} \autoref{defn:KBrodyMod} differs from \cite[Definition 2.2]{MorrowNonArchGGLV} in that we require every non-constant \textit{analytic} morphism from a big \textit{analytic} open of the analytification of an abelian variety to factor over a proper closed subset, whereas \cite[Definition 2.2]{MorrowNonArchGGLV} only requires every non-constant \textit{algebraic} morphism from a big algebraic open of an abelian variety to factor over a proper closed subset. In Subsection \ref{appendix}, we show that the results from \cite{MorrowNonArchGGLV} still hold with this new definition. \end{remark} The goal of this section is to prove the following. \begin{theorem}\label{thm:equivalentBrody} Let ${\mathscr X}/K$ be an irreducible, reduced, separated $K$-analytic space and let ${\mathscr D}\subset {\mathscr X}$ be a closed subset. Then, ${\mathscr X}$ is $K$-analytically Brody hyperbolic modulo ${\mathscr D}$ if and only if for every connected, algebraic group $G/K$ and every dense open subset ${\mathscr U} \subset G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U})\geq 2$, every non-constant morphism ${\mathscr U} \to {\mathscr X}$ factors over ${\mathscr D}$. \end{theorem} To prove \autoref{thm:equivalentBrody}, we follow the line of reasoning from \cite[Section 3.2]{javanpeykarXie:Finitenesspseudogroupless}; however, we need to transport several of the scheme-theoretic arguments to the category of adic spaces. \begin{lemma}\label{lemma:proxyNoether} Let $\Spa(R,R^{\circ})$ and $\Spa(S,S^{\circ})$ be affinoid $K$-analytic spaces, and suppose $\Spa(R,R^{\circ})$ is irreducible and reduced. Let $\pi\colon \Spa(S,S^{\circ}) \to \Spa(R,R^{\circ})$ be a faithfully flat morphism. Then, there exists a dense affinoid subspace $\Spa(\tilde{R},\tilde{R}^{\circ})\subset \Spa(R,R^{\circ})$ and an affinoid subspace $\Spa(\tilde{S},\tilde{S}^{\circ}) \subset \Spa(S,S^{\circ})$ such that the restricted map $\pi\colon\Spa(\tilde{S},\tilde{S}^{\circ}) \to \Spa(\tilde{R},\tilde{R}^{\circ})$ is finite and flat. \end{lemma} \begin{proof} We will prove the result using Berkovich spaces, and then transfer it over to adic spaces. As such, let $M(R)$ and $M(S)$ denote the associated Berkovich affinoid spaces, and let $\pi^{\Ber}\colon M(S) \to M(R)$ denote the associated map of Berkovich affinoid spaces. First assume that the morphism $\pi^{\Ber}\colon M(S) \to M(R)$ is quasi-finite, i.e., for every $y \in M(S)$, we have that $\dim_{y}\pi^{\Ber} = 0$. If we let ${\mathscr Z}$ denote the relative interior of $\pi^{\Ber}$, then the restriction $\pi^{\Ber}_{|{\mathscr Z}}$ is a finite and flat morphism, and hence open by \cite[Proposition 3.2.7]{BerkovichEtaleCohomology}. Using Lemma 3.1.2 of \textit{loc.~cit.}, we may find small affinoid opens $M(S') \subset M(S)$ and $M(R') \subset M(R)$ such that the morphism $\pi^{\Ber}_{|M(S')}\colon M(S') \to M(R')$ is finite and flat. To translate the result to affinoid $K$-analytic spaces, we use \autoref{lemma:equivalenceproperfinite}.(2) and \cite[Lemma 1.4.5.(iv) \& p.~425]{huber}. Now suppose that $\pi^{\Ber}$ is not quasi-finite, i.e., there exists a point $y \in M(S)$ such that $\dim_{y}\pi^{\Ber} = d \geq 1$. 
By \cite[Theorems 4.6 \& 3.2]{ducros_variation}, there exists an affinoid neighborhood $V = M(S')$ of $y$ such that there exists a quasi-finite morphism \[ \varphi\colon V \to \mathbb{A}_{M(R)}^d. \] This means that $\varphi$ is topologically proper and quasi-finite at every point of $V$. Let $y' \in V$, and let $x' = \varphi(y')$. Suppose that $M(R\langle T_1/r_1,\dots,T_d/r_d\rangle) \subset \mathbb{A}_{M(R)}^d$ is a relative closed disk such that $x'$ lies in the relative interior of $M(R\langle T_1/r_1,\dots,T_d/r_d\rangle)$, i.e., it lies in the topological interior of $M(R\langle T_1/r_1,\dots,T_d/r_d\rangle)$. Let $V' = \varphi^{-1}(M(R\langle T_1/r_1,\dots,T_d/r_d\rangle))$. We claim that $\varphi_{|V'}$ is finite at $y'$. First, we note that $\varphi_{|V'}$ is quasi-finite as $M(R\langle T_1/r_1,\dots,T_d/r_d\rangle)$ is a compact, closed subset of $\mathbb{A}_{M(R)}^d$, and hence it suffices to show that $\varphi_{|V'}$ is boundaryless at $y'$, i.e., $y'$ is in the relative interior $\Int(V'/M(R\langle T_1/r_1,\dots,T_d/r_d\rangle))$. From the above and \cite[Theorem 4.6]{ducros_variation}, we have the following morphisms \[ \begin{tikzcd} V' \arrow{r}{\varphi_{|V'}} \arrow[bend right = 25]{rr}{\pi^{\Ber}_{|V'}}& M(R\langle T_1/r_1,\dots,T_d/r_d\rangle) \arrow{r} &M(R). \end{tikzcd} \] By the choice of $M(R\langle T_1/r_1,\dots,T_d/r_d\rangle)$, we have that $y'$ is in the relative interior $\Int(V'/M(R))$. Now the claim follows from \cite[Proposition 1.5.5.(ii)]{BerkovichEtaleCohomology} as \[\Int(V'/M(R)) = \Int(V'/M(R\langle T_1/r_1,\dots,T_d/r_d\rangle))\, \cap \,\varphi_{|V'}^{-1}(\Int(M(R\langle T_1/r_1,\dots,T_d/r_d\rangle)/M(R))),\] and hence $y' \in \Int(V'/M(R\langle T_1/r_1,\dots,T_d/r_d\rangle))$; therefore, $\varphi_{|V'}$ is finite at $y'$. By Proposition 3.1.4 of \textit{loc.~cit.}, we can find affinoid neighborhoods $V'' = M(S'')$ of $y'$ and $M(R'\langle T_1/r_1',\dots,T_d/r_d' \rangle)$ of $x'$ such that $\varphi$ induces a finite morphism \[ M(S'') \to M(R'\langle T_1/r_1',\dots,T_d/r_d'\rangle), \] and so we have that $S''$ is finite over $R'\langle T_1/r_1',\dots,T_d/r_d'\rangle$. If we consider the ideal $I = (T_1,\dots,T_d)$ in $S''$ and take quotients, we have that $S''/I$ is finite over $R'$, and hence there exists an affinoid subset $M(S''/I)$ of $V''$ such that the morphism $M(S''/I) \to M(R')$ is finite. Moreover, we may assume, by Lemma 3.1.2 of \textit{loc.~cit.}, that $M(R') \subset M(R)$ is an affinoid open as we may take $M(R'\langle T_1/r_1',\dots,T_d/r_d' \rangle)$ to be arbitrarily small. Translating this result back to adic spaces using \autoref{lemma:equivalenceproperfinite}.(2) and \cite[Lemma 1.4.5.(iv) \& p.~425]{huber}, we have a finite morphism $\Spa(S''/I, (S^{''\circ}/I \cap S^{''\circ})^c ) \to \Spa(R',R^{'\circ})$ where $(S^{''\circ}/I \cap S^{''\circ})^c$ is the integral closure of $S^{''\circ}/I \cap S^{''\circ}$ in $S''/I$ and where $\Spa(R',R^{'\circ})$ is open in $\Spa(R,R^{\circ})$. Since $\Spa(R',R^{'\circ})$ is reduced, \cite[Theorem 2.21]{bhatt_hanse:sixfunctor} tells us that the flat locus is a dense open subset of $\Spa(R',R^{'\circ})$. Consider a smaller affinoid open inside the flat locus, call it $\Spa(R'',R^{''\circ})$, and let $\Spa(S''',S^{'''\circ})$ denote its preimage in $\Spa(S''/I, (S^{''\circ}/I \cap S^{''\circ})^c )$. 
Then, we have that the restricted morphism $\pi\colon \Spa(S''',S^{'''\circ}) \to \Spa(R'',R^{''\circ})$ is finite and flat, and $\Spa(R'',R^{''\circ})$ is a dense affinoid subspace of $\Spa(R,R^{\circ})$, as desired. \end{proof} \begin{lemma}\label{lem:affinefactorisation} Let $W_1 = \Spa(R,R^{\circ})$, $W_2 = \Spa(S,S^{\circ})$, and $W_3 = \Spa(T,T^{\circ})$ be affinoid $K$-analytic spaces. Let $\pi\colon W_2 \to W_1$ be a non-constant, faithfully flat morphism, and let $f\colon W_2\to W_3$ be a non-constant morphism. Assume that $W_1$ and $W_2$ are irreducible and normal, and that there exists a dense subset $E\subset W_1$ such that $f_{|\pi^{-1}(x)}$ is constant for every $x\in E$. Then there exists a unique $h\colon W_1\to W_3$ such that $f = h\circ \pi$. \end{lemma} \begin{proof} The proof closely follows the proof of \cite[Lemma 3.11]{javanpeykarXie:Finitenesspseudogroupless}. We will prove the result using rigid analytic spaces, and then transfer it over to adic spaces. To ease notation, we will simply identify $W_1,W_2$, and $W_3$ and the morphisms $\pi$ and $f$ with their associated images under the quasi-inverse of the functor $\mathfrak{r}$ from \cite[Proposition 4.5]{huber2}. We want to complete the diagram \[ \xymatrix{ W_1 \ar@/_/@{.>}[d]^{h} & W_2\ar[l]^{\pi}\ar[ld]^{f} \\ W_3 & } \] which is equivalent to completing the diagram \[ \xymatrix{ R \ar[r]^{\pi^*} & S \\ T. \ar[ru]^{f^*}\ar@/^/@{.>}[u]^{h^*} & } \] To construct $h^*$, it is enough to show that $f^*(T) \subset \pi^*(R)$. We consider $g \in f^*(T)$ and will construct $r \in R$ such that $g=\pi^*(r)$. By \autoref{lemma:proxyNoether}, shrinking further to ensure that the degree is constant, we may find an affinoid subspace $W_2'=\Sp(Q) \subset W_2$ and an admissible open affinoid subspace $W_1'=\Sp(R_1) \subset W_1$ such that $\pi_{\vert_{W_2'}}\colon W_2' \rightarrow W_1' $ is finite and flat of degree $d$: \[ \xymatrix{ R_1 \ar[r]^{\pi_{\vert_{W_2'}}^*} & Q \\ R\ar[u]\ar[r]^{\pi^*} & S\ar[u] } \quad\quad \xymatrix{ W_1' \ar@{^{(}->}[d] & W_2' \ar[l]^{\pi_{\vert_{W_2'}}}\ar@{^{(}->}[d]\\ W_1 & W_2. \ar[l]^{\pi} } \] Let $\mathrm{tr}(\pi^*_{\vert_{W_2'}})\colon Q \rightarrow R_1$ be the trace as defined in \cite[\href{https://stacks.math.columbia.edu/tag/0BSY}{Section 0BSY}]{stacks-project}. For our chosen $g \in f^*(T)$, we denote by $g_{\vert_{W_2'}}$ the restriction to $W_2'$, i.e., its image in $Q$. We now consider $\tilde{g}:= \frac{1}{d}\mathrm{tr}(g_{\vert_{W_2'}}) \in R_1$; this $\tilde{g}$ will be the element $r$ such that $\pi^*(r)=g$. We first show that $g$ and $\pi^*(\tilde{g})$ coincide as elements in $\pi^*(R_1) \subset \pi^*(\mathrm{Frac}(R_1))=\pi^*(\mathrm{Frac}(R))$ inside $\mathrm{Frac}(S)$. Let $x \in W_2'(K)$ be a point belonging to $\pi^{-1}(W_1')$ and to $\pi^{-1}(E)$; then we have \[ \pi^*(\tilde{g})(x)=\tilde{g}(\pi(x)) =\frac{1}{d}\left( \sum_{y \in W_2' \vert \pi(y)=\pi(x) } g(y)\right)=g(x) \] where for the last equality we used that $g \in f^*(T)$ and that $f$ is constant on $\pi^{-1}(\pi(x))$. As $\pi^{-1}(W_1')$ and $\pi^{-1}(E)$ are dense in $W_2'$, we conclude that $g=\pi^*(\tilde{g})$ as elements of $\pi^*(\mathrm{Frac}(R))$. As $\pi^*\colon R \rightarrow S$ is faithfully flat, we can use \cite[Lemma 3.10 (2)]{javanpeykarXie:Finitenesspseudogroupless} to see that \[ \pi^{*}(R)=S \cap \pi^*(\mathrm{Frac}(R)) \] which gives $f^*(T) \subset \pi^*(R)$. 
Note that this gives a homomorphism $h^*\colon T\to R$ of affinoid algebras, and since a homomorphism of affinoid algebras is continuous and bounded \cite[6.1.3/1, 6.2.2/1, 6.2.3/1]{BGR}, this gives a morphism $h\colon \Sp(R) \to \Sp(T)$ of affinoid rigid analytic spaces. Using \cite[Proposition 4.5.(iv)]{huber2}, we arrive at our desired morphism of affinoid $K$-analytic spaces \[ h\colon\Spa(R,R^\circ)\rightarrow \Spa(T,T^{\circ}). \] To conclude, we note that the construction of $\pi^*(r)$ is independent of the choice of $W_2'$. If we choose a different open subset $W_2''$ which is finite and flat over an open of $W_1$ and define $\tilde{g}':= \frac{1}{d}\mathrm{tr}(g_{\vert_{W_2''}})$, then for every $x$ in $\pi^{-1}(E)$ for which both functions are defined, we get \[\pi^*(\tilde{g}')(x)=\tilde{g}'(\pi(x)) =\frac{1}{d}\left( \sum_{y' \in W_2'' \vert \pi(y')=\pi(x) } g(y')\right)=g(x),\] hence $\pi^*(\tilde{g}')$ and $\pi^*(\tilde{g})$ coincide on the dense set $\pi^{-1}(E)$. \end{proof} \begin{lemma}\label{lemma:extendmap} Let $\pi\colon {\mathscr Y}\to {\mathscr B}$ be a non-constant, surjective, flat morphism between normal, irreducible, separated, locally of finite type adic spaces over $\Spa(K,K^{\circ})$. Let $f\colon {\mathscr Y}\to {\mathscr Z}$ be a non-constant morphism of adic spaces where ${\mathscr Z}$ is a $K$-analytic space. Let $E\subset {\mathscr B}(K)$ be a dense subset. Assume that for every $x\in E$, the restriction $f_{|\pi^{-1}(x)}$ is constant. Then there exists a morphism $h\colon {\mathscr B}\to {\mathscr Z}$ such that $f = h\circ \pi$. \end{lemma} \begin{proof} We want to complete the diagram \[ \xymatrix{ {\mathscr B} \ar@/_/@{.>}[d]^{h} & {\mathscr Y} \ar[l]^{\pi}\ar[ld]^{f} \\ {\mathscr Z}. & } \] We cover ${\mathscr Y}$ by open affinoids, and since $\pi$ is surjective, it is enough to prove the result over each of these opens. We can further shrink these opens so that the image of each of them in ${\mathscr Z}$ is contained in an affinoid set, so we can assume that both ${\mathscr Y}$ and ${\mathscr Z}$ are affinoid. Since ${\mathscr Y}$ is affinoid and ${\mathscr B}$ is separated, the morphism $\pi\colon{\mathscr Y}\to {\mathscr B}$ is affinoid, i.e., the preimage of an open affinoid of ${\mathscr B}$ is an open affinoid in ${\mathscr Y}$. Indeed, this fact can be proved using the same argument from \cite[\href{https://stacks.math.columbia.edu/tag/01SG}{Tag 01SG}]{stacks-project}. Now if we cover ${\mathscr B}$ by open affinoids ${\mathscr V}_i$, then the preimages ${\mathscr W}_i = \pi^{-1}({\mathscr V}_i)$ are open affinoids. We note that we may assume that all of the affinoids have rings of integral elements given by the power-bounded elements, as our assumptions imply that the adic spaces in question come from rigid analytic spaces by \cite[Proposition 4.5.(iii)]{huber2}. Therefore, we are reduced to the situation of \autoref{lem:affinefactorisation}. The independence from the choices in the construction of the element $r$ in \textit{loc.~cit.~}ensures that we can glue together these locally defined maps into a global map $h$. \end{proof} \begin{lemma}\label{lemma:constantaffine} Let ${\mathscr X}/K$ be an irreducible, reduced, separated $K$-analytic space. Let ${\mathscr D} \subset {\mathscr X}$ be a closed subset such that every non-constant analytic morphism $\mathbb{G}^{\an}_{m,K} \to {\mathscr X}$ factors over ${\mathscr D}$. 
Let $G/K$ be a connected, finite type affine group scheme and let ${\mathscr U}$ be a dense open of $G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U})\geq 2$. If $\varphi\colon {\mathscr U} \to {\mathscr X}$ is an analytic morphism such that $\varphi({\mathscr U}) \nsubseteq {\mathscr D}$, then $\varphi$ is constant. \end{lemma} \begin{proof} We proceed by induction on $\dim G$. When $\dim G \leq 1$, the result is clear because such a finite type, connected, affine group scheme contains a dense $\mathbb{G}_{m,K}$. To show that $\varphi\colon {\mathscr U} \to {\mathscr X}$ is constant, it suffices to show that the meromorphic map $G^{\an} \dashrightarrow {\mathscr X}$ is constant. Let ${\mathscr Z} = \overline{\varphi({\mathscr U})}$ denote the analytic closure of the image of $\varphi$. Since $G^{\an}$ is irreducible (by \autoref{lemma:absoluteproperties}.(5)) and hence ${\mathscr U}$ is irreducible, the closure ${\mathscr Z} \subset {\mathscr X}$ is irreducible. Moreover, to show that $\varphi$ is constant, we may and do assume that ${\mathscr Z} = {\mathscr X}$. Since $\varphi$ is dominant, it will induce an embedding of fields of meromorphic functions $\varphi^*({\mathscr M}_{{\mathscr X}}({\mathscr X})) \subseteq {\mathscr M}_{G^{\an}}(G^{\an})$. This follows because \autoref{coro:meromorphicfield} implies that ${\mathscr M}_{{\mathscr X}}({\mathscr X})$ and ${\mathscr M}_{G^{\an}}(G^{\an})$ are fields (for this latter fact we use that $G^{\an}$ is smooth, which follows from \cite[\href{https://stacks.math.columbia.edu/tag/047N}{Tag 047N}]{stacks-project} and \autoref{lemma:absoluteproperties}.(4)), and the dominant morphism will induce a map of fields of meromorphic functions, which must be an injection. We now claim that if $\varphi^*({\mathscr M}_{{\mathscr X}}({\mathscr X})) = K$, then $\varphi\colon G^{\an} \dashrightarrow {\mathscr X}$ is constant. Indeed, if $\varphi$ were not constant, then the image of $\varphi$ would contain at least two distinct points $x$ and $y$ of ${\mathscr X}$. Since ${\mathscr X}$ is separated, \autoref{lemma:separatingpoints} tells us that we can find a meromorphic function $f$ on ${\mathscr X}$ that separates these points, but then the composition $f\circ \varphi$ is a non-constant element of $\varphi^*({\mathscr M}_{{\mathscr X}}({\mathscr X}))$, which yields a contradiction. Thus, it suffices to prove that $\varphi^*({\mathscr M}_{{\mathscr X}}({\mathscr X})) = K$. Below, we will suppress the subscript notation from the sheaf of meromorphic functions. To prove this statement, we first note that for an irreducible closed subgroup $H\subset G$, we may form the quotient \[ \pi_H\colon G \to G/H \] where $G/H$ is a smooth, quasi-projective scheme by \cite[IV$_\text{A}$]{SGA3}. By \cite[Example 2.21]{ulirsch2017tropicalization}, the analytification functor commutes with taking (stack) quotients, and hence we may identify $(G/H)^{\an}\cong G^{\an}/H^{\an}$. We note that the result from \textit{loc.~cit.~}is only for Berkovich $K$-analytic spaces; however, using \autoref{thm:comparision} we can transfer this identification to adic spaces. Since $\pi_H$ is flat and surjective, we have that the morphism \[ \pi_{H^{\an}}\colon G^{\an} \to G^{\an}/H^{\an} \] is flat and surjective by \autoref{lemma:surjectiveflat}. 
Since $G^{\an}/H^{\an}$ is smooth and irreducible (as it is the image of an irreducible space), ${\mathscr M}(G^{\an}/H^{\an})$ is a field by \autoref{coro:meromorphicfield}, and moreover, we have that $\pi_{H^{\an}}^*({\mathscr M}(G^{\an}/H^{\an})) \cong {\mathscr M}(G^{\an})^{H^{\an}}$. Since $\pi_{H^{\an}}$ is partially proper, flat, and surjective, \autoref{lemma:flatopen} tells us that $\pi_{H^{\an}}({\mathscr U})$ is a big open subset of $G^{\an}/H^{\an}$, i.e., the codimension of the complement is greater than or equal to two. Moreover, there exists a dense subspace ${\mathscr V}\subset \pi_{H^{\an}}({\mathscr U})$ such that for every point $x\in {\mathscr V}$, the open subset ${\mathscr U} \cap \pi_{H^{\an}}^{-1}(x) \subset \pi_{H^{\an}}^{-1}(x)$ is big and satisfies $\varphi({\mathscr U} \cap \pi_{H^{\an}}^{-1}(x)) \nsubseteq {\mathscr D}$. As an adic space, $\pi_{H^{\an}}^{-1}(x)$ is isomorphic to $H^{\an}$, and the induction hypothesis says that $\varphi_{|\pi_{H^{\an}}^{-1}(x)}$ is constant for every $x\in {\mathscr V}$. We want to apply \autoref{lemma:extendmap} to the situation where ${\mathscr Y} = {\mathscr U}$, $\pi = \pi_{H^{\an}}$, ${\mathscr B} = \pi_{H^{\an}}({\mathscr U})$, and ${\mathscr Z} = {\mathscr X}$. Therefore, we need to verify that ${\mathscr U}$ and $\pi_{H^{\an}}({\mathscr U})$ are normal, irreducible, separated, locally of finite type adic spaces over $\Spa(K,K^{\circ})$. As $G^{\an}$ is smooth, irreducible, separated, and locally of finite type (see \autoref{lemma:absoluteproperties}) and ${\mathscr U}$ is an open subset of $G^{\an}$, we have that ${\mathscr U}$ satisfies all of these properties. To show these properties for $\pi_{H^{\an}}({\mathscr U})$, we first note that a quasi-projective scheme is separated \cite[\href{https://stacks.math.columbia.edu/tag/01VX}{Tag 01VX}]{stacks-project}, and hence $G^{\an}/H^{\an}$ is smooth, irreducible, separated, and locally of finite type by \autoref{lemma:absoluteproperties}. By \autoref{lemma:flatopen}, $\pi_{H^{\an}}({\mathscr U})$ is an open subset of $G^{\an}/H^{\an}$, and hence has the desired properties. Now, \autoref{lemma:extendmap} tells us that there exists a map $h_{H^{\an}}\colon \pi_{H^{\an}}({\mathscr U}) \to {\mathscr X}$ such that $\varphi = h_{H^{\an}} \circ \pi_{H^{\an}}$. In particular, \[ \varphi^{*}({\mathscr M}({\mathscr X})) \subset \pi_{H^{\an}}^*({\mathscr M}(G^{\an}/H^{\an})) \subset {\mathscr M}(G^{\an})^{H^{\an}}. \] By \cite[Lemma 3.14]{javanpeykarXie:Finitenesspseudogroupless}, we have that $G^{\an} = \langle H_1^{\an},\dots,H_s^{\an}\rangle$ where the $H_i$ are proper connected closed subgroups of $G$. Therefore, we conclude that \[ \varphi^*({\mathscr M}({\mathscr X})) \subset \bigcap_{i=1}^s \pi_{H_i^{\an}}^*({\mathscr M}(G^{\an}/H_i^{\an})) \subset \bigcap_{i=1}^s {\mathscr M}(G^{\an})^{H_i^{\an}} = {\mathscr M}(G^{\an})^{\langle H_1^{\an},\dots, H_s^{\an} \rangle} = {\mathscr M}(G^{\an})^{G^{\an}} = K, \] as desired. \end{proof} \begin{prop}\label{prop:analyticpseudogroupless} Let ${\mathscr X}/K$ be an irreducible, reduced, separated $K$-analytic space. Let ${\mathscr D} \subset {\mathscr X}$ be a closed subset such that every non-constant analytic morphism $\mathbb{G}^{\an}_{m,K} \to {\mathscr X}$ factors over ${\mathscr D}$ and such that for every abelian variety $A$ over $K$ and every dense open ${\mathscr U}$ of $A^{\an}$ with $\codim(A^{\an}\setminus {\mathscr U}) \geq 2$, every non-constant analytic morphism ${\mathscr U} \to {\mathscr X}$ factors over ${\mathscr D}$. 
Then, for every connected, finite type algebraic group $G/K$ and every dense open ${\mathscr U}\subset G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U})\geq 2$, every non-constant analytic morphism $\varphi\colon{\mathscr U} \to {\mathscr X}$ factors over ${\mathscr D}$. \end{prop} \begin{proof} Let $G/K$ be a connected, finite type algebraic group, and let ${\mathscr U} \subset G^{\an}$ be a dense open with $\codim(G^{\an}\setminus {\mathscr U})\geq 2$. Let $\varphi\colon{\mathscr U} \to {\mathscr X}$ be an analytic morphism such that $\varphi({\mathscr U}) \nsubseteq {\mathscr D}$. We will show that $\varphi$ is constant. Let $H \subset G$ be the (unique) normal affine connected subgroup of $G$ such that $A \coloneqq G/H$ is an abelian variety over $K$ (see \cite[Theorem 1.1]{conradChev}). Denote by $\pi\colon G \to A$ the quotient map. Since $\pi$ is flat and surjective, we have that ${\mathscr V} \coloneqq \pi^{\an}({\mathscr U})$ is open in $A^{\an}$ by \autoref{lemma:flatopen} with $\codim(A^{\an}\setminus {\mathscr V}) \geq 2$. For every $x\in A^{\an}$, denote by $G^{\an}_x \coloneqq (\pi^{\an})^{-1}(x)$ and ${\mathscr U}_x \coloneqq {\mathscr U} \cap G^{\an}_x$. Since $G^{\an}\setminus {\mathscr U}$ has codimension greater than or equal to two, there exists a dense open ${\mathscr V}_1 \subset {\mathscr V}$ such that for every $x\in {\mathscr V}_1$, the closed subset $G^{\an}_x\setminus{\mathscr U}_x$ has codimension greater than or equal to two. Let $\eta$ denote the generic point of $A^{\an}$; recall that $A^{\an}$ is irreducible by \autoref{lemma:absoluteproperties}.(5). If $\varphi({\mathscr U}_{\eta})$ were contained in ${\mathscr D}$, then, as ${\mathscr U}_{\eta}$ is dense in ${\mathscr U}$, we would have that $\varphi({\mathscr U})$ is also contained in ${\mathscr D}$, but this contradicts our original assumption. Thus, we have that $\varphi({\mathscr U}_{\eta})$ is not contained in ${\mathscr D}$, and hence ${\mathscr U}_1 \coloneqq \varphi^{-1}({\mathscr X} \setminus {\mathscr D})$ is a non-empty open of ${\mathscr U}$. Since $\pi^{\an}$ is flat and surjective, we have that ${\mathscr V}_2 \coloneqq \pi^{\an}((\pi^{\an})^{-1}({\mathscr V}_1)\cap {\mathscr U}_1)$ is open in ${\mathscr V}_1$. Note that for every point $x\in {\mathscr V}_2$, the open subset ${\mathscr U}_x$ of $G^{\an}_x$ is big and $\varphi({\mathscr U}_x)$ is not contained in ${\mathscr D}$. Therefore, by our first assumption and \autoref{lemma:constantaffine}, we have that $\varphi_{|{\mathscr U}_x}$ is constant for every $x\in {\mathscr V}_2$. Since ${\mathscr V}_2$ is dense in $A^{\an}$, it follows from \autoref{lemma:extendmap} that there is a morphism $h\colon {\mathscr V} \to {\mathscr X}$ such that $\varphi = h\circ \pi^{\an}_{|{\mathscr U}}$. Since $\varphi({\mathscr U}) \nsubseteq {\mathscr D}$, we have that $h({\mathscr V})\nsubseteq {\mathscr D}$. Moreover, since ${\mathscr V}$ is a big open of $A^{\an}$, our second assumption implies that $h$ is constant on ${\mathscr V}$, and hence $\varphi$ is constant, as desired. \end{proof} \begin{proof}[Proof of \autoref{thm:equivalentBrody}] This follows from \autoref{defn:KBrodyMod} and \autoref{prop:analyticpseudogroupless}. \end{proof} \section{\bf Brody special, special, arithmetically special, and geometrically special varieties} \label{sec:specialnotions} In this section, we recall various notions of specialness. \subsection{Special varieties in the sense of Campana} To begin, we describe Campana's notion of special from two perspectives. 
The first notion concerns the non-existence of certain sheaves, while the second notion characterizes specialness in terms of the non-existence of certain fibrations. \subsubsection{Special varieties via Bogomolov sheaves} First, we recall the definition of specialness in terms of Bogomolov sheaves. To define these sheaves, let $p$ be a positive integer and let ${\mathscr L} \subset \Omega_X^p$ be a saturated, rank one, coherent sheaf. We say that ${\mathscr L}$ is a \cdef{Bogomolov sheaf} for $X$ if the Iitaka dimension $\kappa(X,{\mathscr L})$ of ${\mathscr L}$ equals $ p>0$. \begin{definition}[\protect{\cite{campana2004orbifolds}}]\label{def:specialBogomolov} A proper variety $X$ is \cdef{special} if it has no Bogomolov sheaves. \end{definition} \begin{example}\label{exam:fundamental} From this definition, we have several fundamental examples of special varieties. It is immediate that a curve $C/k$ is special if and only if the genus of $C$ is less than two and that tori and abelian varieties are special. Also, rationally connected varieties are special since $H^0(X,\Sym^m \Omega_X^p) = 0$ for any $p,m>0$. Finally, a variety with either first Chern number equal to zero or Kodaira dimension equal to zero is special (see \cite{campana2004orbifolds} for details). \end{example} \subsubsection{Special varieties via fibrations of general type} Another way to think of special varieties is via fibrations to varieties of general type. Consider a fibration $f\colon X\to Y$, with $X$ and $Y$ smooth and projective. Recall that a fibration $f\colon X\to Y$ is \cdef{of general type} if $K_Y(\Delta_f)$ is big and $\dim Y \geq 1$ where $\Delta_f$ is an effective $\mathbb{Q}$-Cartier divisor encoding the multiple fibers of $f$. More precisely, for every irreducible divisor $E \subset Y$, let $f^*(E)$ be the scheme-theoretic inverse image of $E$ in $X$, and write \[ f^*(E)=R+ \sum_i t_i E_i, \] where $E_i$ are the irreducible divisors in $X$ mapping onto $E$, $t_i \geq 1$ are integers, and $R$ is a divisor whose image in $Y$ has codimension at least two. Define $m_f(E):=\inf_{i}(t_i)$. The $\mathbb{Q}$-Cartier divisor $\Delta_f$ above is defined as \[ \Delta_f:=\sum_{E}\left( 1 -\frac{1}{m_f(E)}\right)E \] and is called the \cdef{multiplicity divisor of $f$}, which measures the multiplicities of the fibers. \begin{definition}\label{def:specialfibration} A proper variety $X$ is \cdef{special} if it does not admit a fibration of general type. \end{definition} It is a result of Campana \cite[Theorem 2.27]{campana2004orbifolds} that these two notions are equivalent. As a consequence of \autoref{def:specialfibration}, we have that a special variety does not dominate a positive dimensional variety of general type. This result, combined with some non-trivial arguments, allows us to deduce the specialness of the last two families of examples of \autoref{exam:fundamental}. \subsubsection{Special varieties that are not proper} Most of the aforementioned work of Campana applies in much greater generality. Indeed, there is a way to define the notion of special for orbifolds, where, roughly speaking, an orbifold is a proper variety $\overline{X}$ equipped with an orbifold divisor \[ \Delta:= \sum_{E_i}\left( 1 -\frac{1}{m_i}\right)E_i, \] where $E_i$ are Cartier divisors of $\overline{X}$ and $m_i \in \{1,2, \ldots\}\cup\{+\infty\}$ are almost all $1$ (so that the sum is finite). Whenever all the $m_i$ that are not $1$ are $+\infty$, we recover the classical case of logarithmic geometry, which we briefly review below in view of later applications to semi-abelian varieties.
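Before turning to the logarithmic case, we illustrate the multiplicity divisor in the simplest situation (a standard example, included here only for orientation). Let $f\colon X \to Y$ be a relatively minimal elliptic surface over a smooth projective curve $Y$ of genus $g$, with multiple fibers of multiplicities $m_1,\dots,m_r$ over the points $p_1,\dots,p_r \in Y$. Then \[ \Delta_f=\sum_{j=1}^{r}\left(1-\frac{1}{m_j}\right)p_j, \] and, since a divisor on a curve is big exactly when its degree is positive, $f$ is of general type if and only if \[ \deg\left(K_Y+\Delta_f\right)=2g-2+\sum_{j=1}^{r}\left(1-\frac{1}{m_j}\right)>0. \] In particular, for $g=0$ the fibration is of general type once there are enough multiple fibers (e.g., five fibers of multiplicity two), even though $Y \cong \mathbb{P}^1$.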
Let $X/K$ be a smooth variety and consider a smooth compactification $\overline{X}$ of $X$, with normal crossing divisor complement $\Delta$; we call $(\overline{X},\Delta)$ a logarithmic pair. We define the logarithmic canonical divisor $K_{\overline{X},\Delta}:= K_{\overline{X}}+\Delta$ and we have a notion of logarithmic general type for $X$ in terms of the logarithmic Kodaira dimension of $K_{\overline{X},\Delta}$ (see \cite[Definition 5.3]{Vojta:IntegralPointsII}). There is also a notion of logarithmic Bogomolov sheaf for a logarithmic pair \cite[p.~542]{campana2004orbifolds}, where one replaces ${\mathscr L} \subset \Omega_X^p$ by ${\mathscr L} \subset \Omega_{\overline{X}}^p(\log \Delta)$. The logarithmic analogues of \autoref{def:specialBogomolov} and \autoref{def:specialfibration}, both due to Campana in the more general setting of orbifolds, are the following. \begin{definition}\label{def:specialBogomolovlog} A variety $X$ is \cdef{special} if $(\overline{X},\Delta)$ does not admit any logarithmic Bogomolov sheaves. \end{definition} \begin{definition}\label{def:specialfibrationlog} A variety $X$ is \cdef{special} if $(\overline{X},\Delta)$ does not admit a fibration of logarithmic general type. \end{definition} Campana \cite[p.~542]{campana2004orbifolds} notes that \autoref{def:specialBogomolovlog} and \autoref{def:specialfibrationlog} are equivalent via the same proof as in the setting where $\Delta = \emptyset$. We also note that the logarithmic Kodaira dimension is a birational invariant \cite[Example 7.2.6]{campanaMH}, hence the two definitions are independent of $\overline{X}$. \subsection{Intermezzo on weakly special varieties} There is a weaker notion of special, called weakly special, which closely resembles \autoref{def:specialfibration}. \begin{definition} We say that $X/k$ is \cdef{weakly special} if there are no finite $k$-\'etale covers $X'\to X$ admitting a dominant rational map $X'\to Z'$ to a positive dimensional variety $Z'$ of general type. \end{definition} \begin{remark} Every special variety is weakly special \cite[Proposition 9.29]{campana2004orbifolds}. Note that weakly special curves and surfaces are special \cite[Example 7.2 (6)]{campanaMH}, but already for threefolds the converse is not true; there is an example due to Bogomolov and Tschinkel (c.f.~\cite[\S 8.7]{campanaMH}). \end{remark} \subsection{Brody special} Let $k = \mathbb{C}$. For a complex manifold $X^{\an}/\mathbb{C}$, there is a (conjectural) complex analytic characterization of specialness. \begin{definition}\label{defn:Brodyspecial} A complex manifold $X^{\an}$ is \cdef{Brody-special} if there is a Zariski dense holomorphic map $\mathbb{C}\to X^{\an}$. \end{definition} \begin{example}\label{exam:Brody} From this definition it is immediate that for a curve $C$ over $\mathbb{C}$, $C^{\an}$ is Brody-special if and only if the genus of $C$ is less than two. Also, we have that a complex torus $\mathbb{C}^{g}/\Lambda$ is Brody-special, and hence an abelian variety $A/\mathbb{C}$ is Brody-special via the uniformization $A^{\an}\cong \mathbb{C}^{g}/\Lambda$. Recently, Campana--Winkelmann \cite{campana2019dense} proved that the complex analytification of a rationally connected variety is Brody-special. \end{example} \subsection{Geometrically special varieties} Recently, Javanpeykar--Rousseau \cite{javanpeykar2020albanese} defined a new notion of specialness, called geometrically special, which is conjecturally equivalent to Campana's original notion.
\begin{definition}[\protect{\cite{javanpeykar2020albanese}}]\label{defn:geometricallyspecial} A variety $X/k$ is \cdef{geometrically special over $k$} if for every dense open subset $U\subset X$, there exists a smooth, quasi-projective, connected curve $C/k$, a point $c\in C(k)$, a point $u\in U(k)$, and a sequence of morphisms $f_i\colon C\to X$ with $f_i(c) = u$ for $i = 1,2,\dots$ such that $C\times X$ is covered by the graphs $\Gamma_{f_i} \subset C\times X$ of these maps. \end{definition} \begin{example} In \cite[Propositions 2.14 and 3.1]{javanpeykar2020albanese}, the authors prove that a rationally connected variety and an abelian variety are geometrically special over $k$, which recovers the examples from \autoref{exam:fundamental} and \autoref{exam:Brody}. \end{example} \subsection{Arithmetically special varieties} The notion of special also has a (conjectural) arithmetic counterpart, which aims to capture when the rational points of a variety are potentially dense. \begin{definition}\label{defn:arithmeticallyspecial} A proper variety $X/k$ is \cdef{arithmetically special over $k$} if there exists a finitely generated subfield $k'\subset k$ and a model ${\mathscr X}$ for $X$ over $k'$ such that ${\mathscr X}(k')$ is dense in ${\mathscr X}$. \end{definition} \begin{example} As with previous examples, a curve $C/k$ is arithmetically special if and only if the genus of $C$ is less than two and $C$ can be defined over a number field. Indeed, it is easy to see that a genus zero curve is arithmetically special, and it is a well-known but not obvious fact that an elliptic curve is arithmetically special. Furthermore, Faltings's theorem \cite{Faltings2} asserts that a curve of genus $g\geq 2$ is not arithmetically special. By \cite[Section 3]{HassetTschinkel:AbelianFibrations}, any abelian variety is arithmetically special. For further examples of arithmetically special varieties, we refer the reader to \cite{HarrisTsch, bogomolov2000density, laiNaka:uniformpotentialdensity}. \end{example} \begin{remark} All of the above notions of special are stable under birational morphisms, finite \'etale covers, and products. We refer the reader to \cite[Section 2]{javanpeykar2020albanese} for details. \end{remark} \section{\bf A non-Archimedean analytic characterization of special and weakly special} \label{sec:nonArchspecial} In this section, we offer our definition of special for a $K$-analytic space and describe some basic properties of these special $K$-analytic spaces. \begin{definition}\label{defn:Kanspecial} We say that ${\mathscr X}$ is \cdef{$K$-analytically special} if there exists a connected, finite type algebraic group $G/K$, a dense open subset ${\mathscr U}\subset G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U}) \geq 2$, and an analytic morphism ${\mathscr U} \to {\mathscr X}$ which is Zariski dense. \end{definition} \begin{example} From \autoref{defn:Kanspecial}, we can immediately find several examples of $K$-analytically special varieties. First, a curve $C/K$ is $K$-analytically special if and only if the genus of $C$ is less than two, and second, a connected, finite type algebraic group is $K$-analytically special. \end{example} While checking $K$-analytic specialness with big \textit{analytic} opens of algebraic groups may seem unnatural, we do so in order to incorporate the following types of examples. \begin{example}\label{exam:whybig} Let $A/K$ be a simple abelian surface with good reduction.
From \autoref{defn:Kanspecial}, it is clear that $A\setminus\brk{0}$ is $K$-analytically special, but if we did not test specialness on big analytic opens of algebraic groups then $A \setminus\brk{0}$ would not be $K$-analytically special. To see this, we note that if $G/K$ is an algebraic group and $G^{\an} \to (A\setminus\brk{0})^{\an}$ is a non-constant analytic morphism, then the composition $G^{\an} \to (A\setminus\brk{0})^{\an} \to A^{\an}$ will be the translate of a group homomorphism, and hence a point by our assumption that $A$ is simple. Indeed, using Chevalley's decomposition theorem \cite[Theorem 1.1]{conradChev}, we may write $G$ as an extension of an abelian variety $B$ by a normal, affine group $H$. Note that each pair of points in $H$ will be connected by a $\mathbb{G}_m$, and hence the image of $H^{\an} $ in $ A^{\an}$ will be a point due to work of Cherry \cite[Theorem 3.2]{Cherry}. Moreover, we have that the morphism $G^{\an} \to A^{\an}$ will factor through the analytification of the abelian variety $B^{\an}$, but since $B$ is proper, rigid analytic GAGA \cite{kopfGAGA} implies that the morphism $B^{\an}\to A^{\an}$ is algebraic, i.e., it is the analytification of an algebraic morphism $B\to A$. It is well-known that any morphism $B\to A$ is the translate of a group homomorphism, and therefore the image of $G^{\an} \to A^{\an}$ will be the translate of an abelian subvariety. Since $A$ was assumed to be simple and this image avoids $\brk{0}$, the image of $G^{\an}$ in $A^{\an}$ will be a point, and therefore $(A\setminus\brk{0})^{\an}$ would not be $K$-analytically special. We remark that the idea of testing specialness and hyperbolicity on big opens of algebraic groups goes back to Lang \cite{Lang} and was studied by Vojta in \cite{VojtaLangExc}. In this latter work (\textit{loc.~cit.}, Section 4), Vojta showed that for $A$ an abelian variety over $\mathbb{C}$ and $U$ a dense open subset of $A$ with $\codim(A\setminus U)\geq 2$, $U^{\an}$ is Brody-special. \end{example} \subsection{Basic properties:~birational invariance, products, and ascending along finite \'etale covers} For the remainder of this section, we prove that our notion of $K$-analytically special is preserved under birational morphisms, products, and finite \'etale covers. \begin{lemma}\label{lemma:birationalinvariance} Let ${\mathscr X} \dashrightarrow {\mathscr Y}$ be a bi-meromorphic morphism between proper, integral $K$-analytic spaces. Then, ${\mathscr X}$ is $K$-analytically special if and only if ${\mathscr Y}$ is $K$-analytically special. \end{lemma} \begin{proof} Suppose that ${\mathscr X}$ is $K$-analytically special, so that there exists a connected, finite type algebraic group $G/K$, an open dense subset ${\mathscr U} \subset G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U}) \geq 2$, and a Zariski dense, analytic morphism $\varphi\colon{\mathscr U} \to {\mathscr X}$. Since $\varphi$ is Zariski dense, the composition ${\mathscr U} \to {\mathscr X} \dashrightarrow {\mathscr Y}$ defines a meromorphic map $G^{\an} \dashrightarrow {\mathscr Y}$. By \cite[\href{https://stacks.math.columbia.edu/tag/047N}{Tag 047N}]{stacks-project} and \autoref{lemma:absoluteproperties}.(4), $G^{\an}$ is smooth, and so by \autoref{prop:meromorphicmapadic}, this meromorphic map is defined on some dense open ${\mathscr U}'$ of $G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U}') \geq 2$; since ${\mathscr X}\dashrightarrow {\mathscr Y}$ is bi-meromorphic, the resulting analytic morphism ${\mathscr U}'\to {\mathscr Y}$ is again Zariski dense. Therefore, ${\mathscr Y}$ is $K$-analytically special. To conclude, we note that the proof of the converse statement follows in the exact same manner.
\end{proof} \begin{remark}\label{remark:whybiganalytic} We remark that if we tested $K$-analytic specialness on big \textit{algebraic} opens of connected, finite type algebraic groups, then bi-meromorphic invariance would not follow. Indeed, the proof of \autoref{lemma:birationalinvariance} boils down to showing that a meromorphic map between $K$-analytic spaces is defined on an open subset whose complement has codimension at least $2$. However, we do not know of a way to guarantee that this open subset is algebraic, and therefore, it is crucial that we test $K$-analytic specialness on big \textit{analytic} opens of connected, finite type algebraic groups. \end{remark} \begin{lemma} Let ${\mathscr X}/K$ and ${\mathscr Y}/K$ be $K$-analytic spaces which are both $K$-analytically special. Then the product ${\mathscr X}\times {\mathscr Y}$ is $K$-analytically special. \end{lemma} \begin{proof} Let $G$ (resp.~$G'$) be a connected, finite type algebraic group over $K$, let ${\mathscr U} \subset G^{\an}$ (resp.~${\mathscr U}'\subset G^{'\an}$) be a dense open subset with $\codim(G^{\an}\setminus {\mathscr U}) \geq 2$ (resp.~$\codim(G^{'\an}\setminus {\mathscr U}') \geq 2$) such that there exists a Zariski dense analytic morphism $\varphi\colon {\mathscr U} \to {\mathscr X}$ (resp.~$\varphi'\colon {\mathscr U}'\to {\mathscr Y}$). We have that $G \times G'$ is a connected, finite type algebraic group over $K$ and that the fibered products $G^{\an} \times G^{'\an}$ and ${\mathscr U} \times {\mathscr U}'$ exist in the category of locally of finite type adic spaces over $\Spa(K,K^{\circ})$ by \cite[1.2.2.(a)]{huber}. The same argument from \cite[\href{https://stacks.math.columbia.edu/tag/01JR}{Tag 01JR}]{stacks-project} tells us that ${\mathscr U} \times {\mathscr U}'$ is a dense open subset of $G^{\an} \times G^{'\an}$. Moreover, we claim that $\codim((G^{\an} \times G^{'\an}) \setminus ({\mathscr U} \times {\mathscr U}'))\geq 2$. Indeed, using \cite[Theorem 5.1.3.1]{conrad-conn} and \cite[1.8.11.(i)]{huber}, we deduce that $\dim(G^{\an} \times G^{'\an}) = \dim(G) + \dim(G')$ and the claim now follows from \cite[1.8.8]{huber}. To conclude, we have that the morphism $(\varphi \times \varphi')\colon {\mathscr U} \times {\mathscr U}' \to {\mathscr X} \times {\mathscr Y}$ is Zariski dense, and so ${\mathscr X}\times {\mathscr Y}$ is $K$-analytically special. \end{proof} \begin{lemma}\label{lemma:ascendfiniteetale} Let ${\mathscr X}\to {\mathscr Y}$ be a finite \'etale morphism between proper $K$-analytic spaces. Then ${\mathscr X}$ is $K$-analytically special if and only if ${\mathscr Y}$ is $K$-analytically special. \end{lemma} \begin{proof} If ${\mathscr X}$ is $K$-analytically special, then it follows that ${\mathscr Y}$ is $K$-analytically special since ${\mathscr X}\to {\mathscr Y}$ is finite \'etale. Now suppose that ${\mathscr Y}$ is $K$-analytically special. Let $G/K$ be a connected, finite type algebraic group, let ${\mathscr U} \subset G^{\an}$ be a dense open subset such that $\codim(G^{\an}\setminus {\mathscr U}) \geq 2$, and let ${\mathscr U} \to {\mathscr Y}$ be a Zariski dense, analytic morphism. By \cite[1.2.2.(a)]{huber}, we can construct the fibered product diagram \[ \begin{tikzcd} {\mathscr V} \arrow{r}\arrow{d} & {\mathscr X}\arrow{d} \\ {\mathscr U}\arrow{r} & {\mathscr Y}, \end{tikzcd} \] and since finite \'etale covers are stable under base change \cite[1.4.5.(i) \& 1.6.7.(iv)]{huber}, we have that ${\mathscr V} \to {\mathscr U}$ is finite \'etale.
By \cite[\href{https://stacks.math.columbia.edu/tag/047N}{Tag 047N}]{stacks-project} and \autoref{lemma:absoluteproperties}.(4), $G^{\an}$ is smooth over $K$, and so purity of the branch locus (see e.g., \cite[Chapter 3, Theorem 2.1.11]{andre-per} or \cite[Corollary 2.15]{Hanse:Vanishing}) implies that the finite \'etale morphism ${\mathscr V}\to {\mathscr U}$ extends to a finite \'etale morphism ${\mathscr G}'\to G^{\an}$. By the non-Archimedean analogue of Riemann's existence theorem \cite[Theorem 3.1]{Lutkebohmert:Riemannexistence}, the finite \'etale morphism ${\mathscr G}'\to G^{\an}$ algebraizes; in particular, there is a finite \'etale morphism of locally of finite type schemes $G'\to G$ whose analytification coincides with ${\mathscr G}'\to G^{\an}$. Note that every connected component $G''$ of $G'$ has the structure of a connected, finite type group scheme over $K$, and with this structure the morphism $G''\to G$ is a homomorphism. Since smooth morphisms preserve codimension, we have that $\codim(G^{''\an}\setminus {\mathscr V}) \geq 2$, and note that ${\mathscr V} \to {\mathscr U} \to {\mathscr Y}$ (and hence ${\mathscr V} \to {\mathscr X} \to {\mathscr Y}$) is Zariski dense. To conclude, we need to show that the image of ${\mathscr V} \to {\mathscr X}$ is Zariski dense. Suppose to the contrary that it is not. Let ${\mathscr Z}$ denote the complement of the Zariski closure of the image of ${\mathscr V}$ inside of ${\mathscr X}$. By assumption, ${\mathscr Z}$ is a non-empty open subset of ${\mathscr X}$. Since ${\mathscr X}\to {\mathscr Y}$ is \'etale, the image of ${\mathscr Z}$ in ${\mathscr Y}$ is open \cite[1.7.8]{huber}, and so we have an open subset of ${\mathscr Y}$ which is not in the image of ${\mathscr V}$. However, this contradicts the fact that ${\mathscr V} \to {\mathscr X} \to {\mathscr Y}$ is Zariski dense, and therefore, we have that ${\mathscr V} \to {\mathscr X} $ is Zariski dense and so ${\mathscr X}$ is $K$-analytically special. \end{proof} Recall that one can characterize special varieties as those which do not admit a fibration of general type. In the non-Archimedean setting, \autoref{thm:equivalentBrody} allows us to show that a $K$-analytically special variety cannot dominate a pseudo-$K$-analytically Brody hyperbolic variety. \begin{theorem}[= \autoref{thm:nofibrationtoBrody}] Let ${\mathscr X}$ and ${\mathscr Y}$ be irreducible, reduced, separated $K$-analytic spaces over $K$. If ${\mathscr Y}$ is $K$-analytically special and ${\mathscr X}$ is a positive dimensional pseudo-$K$-analytically Brody hyperbolic variety, then there is no dominant morphism ${\mathscr Y} \to {\mathscr X}$. \end{theorem} \begin{proof} Suppose there existed a dominant morphism ${\mathscr Y} \to {\mathscr X}$. Then there exists a connected, finite type algebraic group $G$, a dense open subset ${\mathscr U}$ of $G^{\an}$ with $\codim(G^{\an}\setminus {\mathscr U})\geq 2$, and an analytic morphism ${\mathscr U}\to {\mathscr Y}\to {\mathscr X}$, which is Zariski dense since the composition of dominant morphisms is dominant. However, this contradicts \autoref{thm:equivalentBrody}, and therefore, we have that there cannot exist a dominant morphism ${\mathscr Y}\to {\mathscr X}$. \end{proof} To conclude this section, we define the notion of $K$-analytically weakly special.
\begin{definition} We say that ${\mathscr X}$ is \cdef{$K$-analytically weakly special} if there are no finite $K$-\'etale covers ${\mathscr X}'\to {\mathscr X}$ admitting a dominant meromorphic map ${\mathscr X}'\to {\mathscr Z}'$ to a positive dimensional $K$-analytic space ${\mathscr Z}'$ which is pseudo-$K$-analytically Brody hyperbolic. \end{definition} With this definition, we have the following corollary to \autoref{lemma:ascendfiniteetale} and \autoref{thm:nofibrationtoBrody}. \begin{corollary} A $K$-analytically special $K$-analytic space is $K$-analytically weakly special. \end{corollary} \section{\bf $K$-analytically special subvarieties of semi-abelian varieties and properties of quasi-Albanese maps} \label{sec:specialsubvarietiessemiabelian} In this section, we will prove \autoref{thm:closedabelianspecial} and \autoref{coro:specialabelian} and use these results to deduce some properties of the quasi-Albanese map of a $K$-analytically special variety. To begin, we recall the statement of \autoref{thm:closedabelianspecial}. \begin{theorem}[= \autoref{thm:closedabelianspecial}] Let $X$ be a closed subvariety of a semi-abelian variety $G$ over $K$. Then $X$ is the translate of a semi-abelian subvariety if and only if $X^{\an}$ is $K$-analytically special. \end{theorem} To prove \autoref{thm:closedabelianspecial}, we will use some deep theorems about closed subvarieties of semi-abelian varieties. First, we recall progress on the Green--Griffiths--Lang--Vojta conjecture for closed subvarieties of semi-abelian varieties. \begin{theorem}[\protect{\cite{Abram, Nogu, MorrowNonArchGGLV}}] \label{thm:starting_point} Let $X$ be a closed subvariety of a semi-abelian variety $G$ over $K$. Then, $X$ is of logarithmic general type if and only if $X^{\an}$ is pseudo-$K$-analytically Brody hyperbolic. \end{theorem} We note that \autoref{thm:starting_point} was proved using the definition of pseudo-$K$-analytically Brody hyperbolic from \cite{MorrowNonArchGGLV}, and we will need to know that this result holds with our new definition (\autoref{defn:KBrodyMod}). The proof of this fact will occupy the next subsection. \subsection{\bf Extending meromorphic maps to analytifications of semi-abelian varieties}\label{appendix} The goal of this subsection is to prove the following extension result concerning meromorphic maps from a smooth, irreducible analytic space to analytifications of semi-abelian varieties. \begin{theorem}\label{thm:extendanalyticmorphism} Let ${\mathscr Z}$ be a smooth, irreducible $K$-analytic space, and let ${\mathscr U} \subset {\mathscr Z}$ be a dense open with $\codim({\mathscr Z}\setminus {\mathscr U}) \geq 2$. Let $G/K$ be a semi-abelian variety. Then any analytic morphism ${\mathscr U} \to G^{\an}$ uniquely extends to an analytic morphism ${\mathscr Z} \to G^{\an}$. \end{theorem} The algebraic variant of \autoref{thm:extendanalyticmorphism} is well-known (see e.g., \cite[Theorem 4.4.1]{BLR} and \cite[Lemma A.2]{MochizukiAbsoluteAnabelian}). To prove our results, we begin by studying the case when $G$ is proper (i.e., when $G$ is an abelian variety). We first show that the analytic Picard group of ${\mathscr U}$ is in bijection with the analytic Picard group of ${\mathscr Z}$ (\autoref{lemma:Picard}), and then the result follows using reasoning similar to \cite[Corollary 8.4.6]{BLR}. Next, we prove the claim in the setting where $G$ is a (split) torus using the non-Archimedean variant of the second Hebbarkeitssatz.
Finally, we conclude the proof using these two pieces and a descent argument. To start, we prove our result on analytic Picard groups. \begin{lemma}\label{lemma:Picard} Let ${\mathscr Z}$ be a smooth, irreducible $K$-analytic space, and let ${\mathscr U} \subset {\mathscr Z}$ be a dense open with $\codim({\mathscr Z}\setminus {\mathscr U}) \geq 2$. Then, $\Pic({\mathscr U}) $ is in bijection with $ \Pic({\mathscr Z})$. \end{lemma} \begin{proof} By the non-Archimedean version of the Remmert--Stein theorem \cite{Lutkebohmert:RemmertStein}, we have that the group of Weil divisors on ${\mathscr U}$ is isomorphic to the group of Weil divisors on ${\mathscr Z}$. Since ${\mathscr Z}$ (resp.~${\mathscr U}$) is smooth, the group of Weil divisors on ${\mathscr Z}$ (resp.~${\mathscr U}$) is isomorphic to the group of Cartier divisors on ${\mathscr Z}$ (resp.~${\mathscr U}$) by \cite[Theorem 8.9]{mitsui2011bimeromorphic}. Now since ${\mathscr Z}$ (resp.~${\mathscr U}$) is irreducible, the classical argument (see e.g., \cite[Chapter II, Proposition 6.15]{Har77}) tells us that Cartier divisors on ${\mathscr Z}$ (resp.~${\mathscr U}$) are in bijection with line bundles on ${\mathscr Z}$ (resp.~${\mathscr U}$). Therefore, we have that line bundles on ${\mathscr U}$ are in bijection with line bundles on ${\mathscr Z}$. \end{proof} \begin{remark} The algebraic variant of \autoref{lemma:Picard} is well-known, and the result is actually not true for complex analytic manifolds (see \cite[Section 2.3]{huybrechts2005complex} for a discussion). Indeed, in the complex analytic setting, it is not true that Cartier divisors on a smooth complex manifold $X(\mathbb{C})$ are in bijection with line bundles on $X(\mathbb{C})$; instead, they are in bijection with line bundles ${\mathscr L}$ on $X(\mathbb{C})$ such that $H^0(X(\mathbb{C}),{\mathscr L})\neq 0$. Moreover, there are many examples of line bundles that do not come from Cartier divisors on complex manifolds and line bundles on big, dense opens which do not extend to the entire space (see e.g., \cite[Remark 2.3.21]{huybrechts2005complex}). The issue in the complex analytic setting is that when $X/\mathbb{C}$ is separated, $X(\mathbb{C})$ is Hausdorff, and hence if $X$ has positive dimension, it cannot also be irreducible in the complex analytic topology. The lack of irreducibility prevents the scheme-theoretic argument from going through, as we need to know that if the restriction of a sheaf to each member of an open cover is constant, then the sheaf is constant. Therefore, it is essential that we work with adic spaces and not Berkovich spaces (c.f.~\autoref{remark:nonHausdorff}). \end{remark} \begin{lemma}\label{lemma:extendabelianvariety} Let ${\mathscr Z}$ be a smooth, irreducible $K$-analytic space, and let ${\mathscr U} \subset {\mathscr Z}$ be a dense open with $\codim({\mathscr Z}\setminus {\mathscr U}) \geq 2$. Let $A/K$ be an abelian variety. Then any analytic morphism $\varphi\colon {\mathscr U} \to A^{\an}$ uniquely extends to an analytic morphism $\wt{\varphi}\colon {\mathscr Z} \to A^{\an}$. \end{lemma} \begin{proof} We will closely follow the proof from \cite[Corollary 8.4.6]{BLR}. First, we base change $A/K$ to ${\mathscr Z}$ and set $A_{\mathscr Z} \coloneqq A\times_K {\mathscr Z} $. Recall that $A \cong A^{**}$ \cite[Theorem 8.4.5]{BLR}, and so by rigid analytic GAGA \cite{kopfGAGA}, $A^{\an}\cong A^{**\an}$, where $A^{*\an}$, the analytification of the dual abelian variety $A^{*}$, represents the adic Picard functor of $A^{\an}$ (c.f.~\cite[Section 1]{BLII}).
Now a morphism $\varphi\colon {\mathscr U} \to A^{\an}_{\mathscr Z}$ corresponds to an analytic line bundle on $A^{*\an} \times_{{\mathscr Z}} {\mathscr U}$. Since ${\mathscr Z}/K$ is smooth, we have that $A^{*\an} \to {\mathscr Z}$ is smooth, and so \autoref{lemma:Picard} implies that an analytic line bundle on $A^{*\an} \times_{{\mathscr Z}} {\mathscr U}$ uniquely extends to a line bundle on $A^{*\an}_{{\mathscr Z}}$, and hence gives rise to an extension $\wt{\varphi}\colon {\mathscr Z} \to A^{**\an}_{{\mathscr Z}}$. \end{proof} \begin{lemma}\label{lemma:extendtorus} Let ${\mathscr Z}/K$ be a smooth $K$-analytic space, and let ${\mathscr U} \subset {\mathscr Z}$ be an open subset with $\codim({\mathscr Z}\setminus {\mathscr U}) \geq 2$. For $T/K$ a split torus, any analytic morphism ${\mathscr U} \to T^{\an}$ uniquely extends to an analytic morphism ${\mathscr Z}\to T^{\an}$. \end{lemma} \begin{proof} Note that it suffices to show that if ${\mathscr L}$ is any line bundle on ${\mathscr Z}$ that admits a generating section $s_{{\mathscr U}} \in H^0({\mathscr U},{\mathscr L})$, then $s_{{\mathscr U}}$ extends to a generating section of ${\mathscr L}$ over ${\mathscr Z}$. This follows from the non-Archimedean variant of the second Hebbarkeitssatz \cite{Lutkebohmert:RemmertStein}. \end{proof} \begin{proof}[Proof of \autoref{thm:extendanalyticmorphism}] Let $G/K$ denote a semi-abelian variety and suppose we have the following presentation \[ 0 \to T \to G \to A \to 0 \] where $T$ is a split torus over $K$ and $A$ is an abelian variety over $K$. First we note that the quotient map $G\to A$ is faithfully flat with smooth fibers and hence smooth by \cite[\href{https://stacks.math.columbia.edu/tag/01V8}{Tag 01V8}]{stacks-project}. In fact, we have that $G\times_A G \cong T \times G$ via $(g,h) \mapsto (g-h,g)$ and similarly $G\times_A G \times_A G \cong T\times T\times G$. We note that since analytification commutes with fiber products, we have that $G^{\an} \times_{A^{\an}} G^{\an} \cong T^{\an}\times G^{\an}$ and $G^{\an}\times_{A^{\an}} G^{\an} \times_{A^{\an}} G^{\an} \cong T^{\an}\times T^{\an}\times G^{\an}$. Let ${\mathscr Z}$ be a smooth, irreducible $K$-analytic space, and let ${\mathscr U} \subset {\mathscr Z}$ be a dense open with $\codim({\mathscr Z}\setminus {\mathscr U}) \geq 2$. By \autoref{lemma:extendabelianvariety}, we can uniquely extend the composition ${\mathscr U}\to G^{\an} \to A^{\an}$ to ${\mathscr Z} \to A^{\an}$, which gives us the commutative diagram \[ \begin{tikzcd} {\mathscr U} \arrow[right hook->]{r} \arrow{d} & {\mathscr Z} \arrow{d} \\ G^{\an} \arrow{r} & A^{\an}. \end{tikzcd} \] Pulling back along the map $G^{\an} \to A^{\an}$, we have the commutative diagram \[ \begin{tikzcd} {\mathscr U} \times_{A^{\an}} G^{\an} \arrow[right hook->]{r} \arrow{d} & {\mathscr Z} \times_{A^{\an}} G^{\an} \arrow{d} \\ G^{\an} \times_{A^{\an}} G^{\an} \arrow{r} & G^{\an}. \end{tikzcd} \] Recalling that $G^{\an} \times_{A^{\an}} G^{\an} \cong T^{\an}\times G^{\an}$, we get a map ${\mathscr U} \times_{A^{\an}} G^{\an} \to T^{\an}\times G^{\an} \to T^{\an}$ via the first projection. By \autoref{lemma:extendtorus}, this uniquely extends to a map ${\mathscr Z} \times_{A^{\an}} G^{\an} \to T^{\an}$ since ${\mathscr U}\times_{A^{\an}} G^{\an} \subset {\mathscr Z}\times_{A^{\an}}G^{\an}$ is a dense open immersion of smooth $K$-analytic spaces whose complement has codimension at least two.
This gives us the following diagonal arrow in the above diagram \[ \begin{tikzcd} {\mathscr U} \times_{A^{\an}} G^{\an} \arrow[right hook->]{r} \arrow{d} & {\mathscr Z} \times_{A^{\an}} G^{\an} \arrow{ld} \arrow{d} \\ T^{\an} \times G^{\an} \arrow{r} & G^{\an}. \end{tikzcd} \] Note that the top triangle commutes since it does so after composing with the first and second projections $T^{\an}\times G^{\an} \to T^{\an}$ (by the extension property) and $T^{\an}\times G^{\an} \to G^{\an}$ (which was given). Moreover, the bottom triangle commutes because everything is a morphism over $G^{\an}$. We need to show that the morphism ${\mathscr Z} \times_{A^{\an}} G^{\an} \to G^{\an} \times_{A^{\an}} G^{\an}$ descends to ${\mathscr Z} \to G^{\an}$. To do this, we use the theory of faithfully flat descent in the category of rigid analytic varieties, which was developed in \cite[Section 4.2]{conrad2006relative}. First, we note that the morphism of $K$-analytic spaces $G^{\an} \to A^{\an}$ admits local fpqc quasi-sections via \textit{loc.~cit.~}Theorem 4.2.2, as it is the analytification of the faithfully flat morphism of $K$-schemes $G\to A$. Moreover, we may appeal to \textit{loc.~cit.~}Theorem 4.2.3. We note that while these results are for rigid analytic spaces, they carry over to the $K$-analytic spaces we consider due to \autoref{thm:comparisionrigidadic} and \autoref{thm:comparision}. In order to utilize \textit{loc.~cit.~}Theorem 4.2.3, we need to check the cocycle condition that the two pullbacks along $(\cdot) \times_{A^{\an}} G^{\an} \times_{A^{\an}} G^{\an} \rightrightarrows (\cdot) \times_{A^{\an}} G^{\an}$ agree. These pullbacks agree above the dense open ${\mathscr U} \subset {\mathscr Z}$, and $G^{\an}\times_{A^{\an}} G^{\an} \times_{A^{\an}} G^{\an} \cong T^{\an}\times T^{\an}\times G^{\an}$ over $G^{\an}$ (via projection onto the last component), so the uniqueness statement for morphisms to $T^{\an} \times T^{\an}$ shows that they agree everywhere. Therefore, we have that ${\mathscr Z} \times_{A^{\an}} G^{\an} \to G^{\an} \times_{A^{\an}} G^{\an}$ descends to ${\mathscr Z} \to G^{\an}$, and by construction, the restriction to ${\mathscr U}$ is the initial morphism ${\mathscr U} \to G^{\an}$. \end{proof} We now use \autoref{thm:extendanalyticmorphism} to deduce that an analytic map from a big open subset of an abelian variety to a semi-abelian variety uniquely extends to an algebraic morphism from the abelian variety to the semi-abelian variety. \begin{prop}\label{prop:extendalgebraic} Let $A/K$ be an abelian variety, and let ${\mathscr U} \subset A^{\an}$ be a dense open with $\codim(A^{\an}\setminus {\mathscr U}) \geq 2$. Let $G/K$ be a semi-abelian variety. Then any analytic morphism $\varphi\colon {\mathscr U} \to G^{\an}$ uniquely extends to an algebraic morphism $\wt{\varphi}\colon A\to G$. \end{prop} \begin{proof} By \autoref{thm:extendanalyticmorphism}, we have that the analytic morphism $\varphi\colon {\mathscr U} \to G^{\an}$ extends to an analytic map $\wt{\varphi}\colon A^{\an} \to G^{\an}$. The result now follows from \cite[Lemma 2.15]{JVez}. \end{proof} To conclude this subsection, we show that the results from \cite{MorrowNonArchGGLV} remain valid with our new definition of $K$-analytically Brody hyperbolic modulo ${\mathscr D}$ (\autoref{defn:KBrodyMod}). \begin{prop}\label{prop:Brodyequivalence} Let $X/K$ be a quasi-projective variety which is a closed subvariety of a semi-abelian variety, and let $\Delta \subset X$ be a closed subset.
Then, $X^{\an}$ is $K$-analytically Brody hyperbolic modulo $\Delta^{\an}$ if and only if every non-constant analytic morphism $\mathbb{G}_{m,K}^{\an} \to X^{\an}$ factors over $\Delta^{\an}$ and for every abelian variety $A/K$ and every dense open subset $U\subset A$ with $\codim(A\setminus U)\geq 2$, every non-constant morphism $U\to X$ factors over $\Delta$. \end{prop} \begin{proof} It suffices to show the second implication, namely that if every non-constant analytic morphism $\mathbb{G}_{m,K}^{\an} \to X^{\an}$ factors over $\Delta^{\an}$ and for every abelian variety $A/K$ and every dense open subset $U\subset A$ with $\codim(A\setminus U)\geq 2$, every non-constant morphism $U\to X$ factors over $\Delta$, then $X^{\an}$ is $K$-analytically Brody hyperbolic modulo $\Delta^{\an}$. By \autoref{thm:equivalentBrody}, it suffices to show that for every abelian variety $A/K$ and every dense open subset ${\mathscr U}\subset A^{\an}$ with $\codim(A^{\an}\setminus {\mathscr U})\geq 2$, every non-constant morphism ${\mathscr U}\to X^{\an}$ factors over $\Delta^{\an}$. \autoref{prop:extendalgebraic} tells us that the morphism ${\mathscr U}\to X^{\an}$ uniquely extends to an algebraic morphism of $K$-schemes $A\to X$. The desired result follows because \cite[Lemma A.2]{MochizukiAbsoluteAnabelian} asserts that any morphism $U\to X$ from a dense open $U\subset A$ with $\codim(A\setminus U)\geq 2$ uniquely extends to a morphism $A\to X$, and by our assumption the non-constant morphism $A\to X$ must factor over $\Delta$. \end{proof} \begin{remark}\label{rem:Brodymoduloempty} We can use \autoref{thm:extendanalyticmorphism} and \autoref{prop:Brodyequivalence} to deduce that for $X/K$ a quasi-projective variety which is a closed subvariety of a semi-abelian variety, $X^{\an}$ is $K$-analytically Brody hyperbolic modulo $\emptyset$ if and only if $X^{\an}$ is $K$-analytically Brody hyperbolic, in the sense of \cite[Definition 2.3]{JVez}. Indeed, this follows immediately from the argument from \cite[Remark 2.6]{MorrowNonArchGGLV} and the previously mentioned results. \end{remark} \subsection{Proof of \autoref{thm:closedabelianspecial}} We now return to the proof of \autoref{thm:closedabelianspecial} and begin by recalling the construction of the Ueno fibration for closed subvarieties of semi-abelian varieties. \begin{definition}\label{defn:stablizier} Let $X$ be a closed subvariety of a semi-abelian variety $G$. \begin{enumerate} \item We define the \cdef{stabilizer of $X$ in $G$} as the maximal closed subgroup $\Stab(X,G)$ of $G$ such that $\Stab(X,G) + X = X$. \item We denote the identity component of the closed subgroup $\Stab(X,G)$ by $B(X,G)$. \end{enumerate} \end{definition} \begin{lemma}\label{lemmastabilizersemiabelian} Let $X$ be a closed subvariety of a semi-abelian variety $G$. The closed subgroup $B(X,G)$ is a semi-abelian subvariety of $G$. \end{lemma} \begin{proof} By definition, $B(X,G)$ is connected, and since $B(X,G)$ is a closed subgroup of $G$, we have that $B(X,G)$ is an algebraic group. Moreover, we have that $B(X,G)$ is a connected and smooth subgroup of $G$ (see \cite[\href{https://stacks.math.columbia.edu/tag/047N}{Tag 047N}]{stacks-project} for the statement on smoothness). The result now follows from \cite[Corollary 5.4.6.(1)]{brion2017some}. \end{proof} \begin{definition}[\protect{\cite[Definition 1.2]{Vojta:IntegralPointsII}}]\label{defn:Ueno} Let $X$ be a closed subvariety of a semi-abelian variety $G$. Consider the quotient $G/B(X,G)$, which is a semi-abelian variety by \cite[Corollary 5.4.6.(1)]{brion2017some}. The restriction to $X$ of the quotient map $G\to G/B(X,G)$ exhibits $X$ as a fiber bundle with fiber $B(X,G)$.
This map $X \to X/B(X,G)$ is called the \cdef{Ueno fibration} of $X$. To summarize, we have the following diagram \[ \begin{tikzcd} G \arrow[two heads]{r} & G/B(X,G) \\ X \arrow[right hook->]{u} \arrow[two heads]{r} & X/B(X,G) \arrow[right hook->]{u} \end{tikzcd} \] where $X/B(X,G) \subset G/B(X,G)$ is a closed subvariety. \end{definition} By the following result, the Ueno fibration of $X$ allows us to identify closed subvarieties of a semi-abelian variety which are of logarithmic general type. \begin{theorem}[\protect{\cite[Theorem 5.16]{Vojta:IntegralPointsII}}]\label{thm:KawamataUenofibration} Let $X$ be a closed subvariety of a semi-abelian variety $G/K$, and let $D$ be an effective Weil divisor on $X$. Then, $B(X\setminus D, G) = 0$ if and only if $X\setminus D$ is of logarithmic general type. \end{theorem} We can now prove \autoref{thm:closedabelianspecial}. \begin{proof}[Proof of \autoref{thm:closedabelianspecial}] It is clear that if $X$ is the translate of a semi-abelian subvariety, then $X^{\an}$ is $K$-analytically special, so we need to prove the converse, i.e., if $X^{\an}$ is $K$-analytically special, then $X$ is the translate of a semi-abelian subvariety. Let $B(X,G)$ be the semi-abelian subvariety from \autoref{defn:stablizier} and \autoref{lemmastabilizersemiabelian}. By considering the Ueno fibration, we have a fibration $X \to X/B(X,G)$ where $X/B(X,G)\subset G/B(X,G)$ is a closed subvariety of a semi-abelian variety. Now \autoref{thm:KawamataUenofibration} (applied with $D = 0$, noting that the stabilizer of $X/B(X,G)$ in $G/B(X,G)$ is trivial by the maximality of $\Stab(X,G)$) asserts that $X/B(X,G)$ is of logarithmic general type, and by \autoref{thm:starting_point}, we have that $(X/B(X,G))^{\an}$ is pseudo-$K$-analytically Brody hyperbolic. Moreover, \autoref{thm:nofibrationtoBrody} implies that $X/B(X,G) $ must be zero-dimensional, and hence $X$ is a coset of $B(X,G)$, i.e., the translate of a semi-abelian subvariety. \end{proof} We record the following fact, which can be proven in exactly the same way as \autoref{thm:closedabelianspecial}, using the Ueno fibration and Vojta's result. We note that the version for proper $G$ is \cite[Theorem 3.4]{javanpeykar2020albanese}. \begin{theorem}\label{thm:specialsemiabelian} Let $G/K$ be a semi-abelian variety and fix a smooth compactification $\overline{G}$ with boundary a normal crossing divisor. Let $X$ be a closed subvariety of $G$. Then $X$ is special, in the sense of \autoref{def:specialfibrationlog}, if and only if $X$ is a translate of a semi-abelian subvariety. \end{theorem} \begin{proof}[Proof of \autoref{coro:specialabelian}] The proof follows immediately from \autoref{thm:closedabelianspecial} and \autoref{thm:specialsemiabelian}. \end{proof} Using \autoref{thm:closedabelianspecial}, we deduce that the quasi-Albanese map associated with a $K$-analytically special variety is surjective. First, we recall the definition of the quasi-Albanese variety associated with a smooth, quasi-projective variety. \begin{definition}[\cite{Serre_Albanese, iitaka_1977}]\label{defn:quasiAlb} Let $V/K$ be a smooth variety. The \cdef{quasi-Albanese map} \[ \alpha\colon V \to \QAlb(V) \] is a morphism to a semi-abelian variety $\QAlb(V)$ such that: \begin{enumerate} \item For any other morphism $\beta\colon V \to B$ to a semi-abelian variety $B$, there is a morphism $f\colon \QAlb(V)\to B$ such that $\beta = f\circ \alpha$, and \item the morphism $f$ is uniquely determined. \end{enumerate} \end{definition} \begin{prop} Let $X/K$ be a smooth variety, and suppose that $X^{\an}/K$ is $K$-analytically special. Then, the quasi-Albanese map is surjective.
\end{prop} \begin{proof} The image of $X$ in $\QAlb(X)$ is $K$-analytically special, as we can simply compose ${\mathscr U} \to X^{\an}$ with the analytification of $\alpha$. Moreover, by \autoref{thm:closedabelianspecial}, any $K$-analytically special closed subvariety of $\QAlb(X)$ is the translate of a semi-abelian subvariety, which implies that the image of $X \to \QAlb(X)$ is the translate of a semi-abelian subvariety. The universal property of the quasi-Albanese (\autoref{defn:quasiAlb}) asserts that the image must equal $\QAlb(X)$. \end{proof} \section{\bf $K$-analytically special surfaces of negative Kodaira dimension} \label{sec:sufacenegKodaira} In this section, we characterize $K$-analytically special surfaces of Kodaira dimension $-\infty$ in terms of their irregularity and their tempered fundamental group. In particular, we will prove the following theorem. \begin{theorem}[= Theorem \ref{thm:surfaceKinfinity}] Let $K$ be an algebraically closed, complete, non-Archimedean valued field of characteristic zero. If $X/K$ is a smooth, projective surface with $\kappa(X) = -\infty$, then the following are equivalent: \begin{enumerate} \item $X$ has irregularity $q(X)$ less than 2; \item $X$ is $K$-analytically special; \item the tempered fundamental group $\pi_1^{\temp}(X^{\Ber})$ of $X^{\Ber}$ is virtually abelian. \end{enumerate} \end{theorem} In this section we will be using Berkovich spaces rather than adic spaces, and we will write $X^{\Ber}$ to denote the Berkovich analytification of $X$. Thanks to \autoref{thm:comparision}, there is not much harm in doing so, and we can use all of the previously proven results. The choice of Berkovich spaces over adic spaces is due to the fact that we will be using the tempered fundamental group, a notion first introduced by Andr\'e \cite{andre-per} whose definition we will recall below, and Berkovich spaces are better suited for discussing topological coverings. \subsection{The tempered fundamental group} For this section, we say that a \cdef{$K$-manifold} is a connected, smooth, paracompact, strict Berkovich $K$-analytic space. We note that the analytification of a smooth variety $X/K$ will be a $K$-manifold by \autoref{lemma:absoluteproperties} and \cite[Theorem 3.4.8.(ii)]{BerkovichSpectral}. Let $f\colon {\mathscr X}'\to {\mathscr X}$ be a morphism of $K$-manifolds. First, we recall the notion of an \'etale covering from \cite{DeJongFundamentalGroupNonArch}. \begin{definition}\label{defn:etalecover} We say that $f$ is an \cdef{\'etale covering} if ${\mathscr X}$ is covered by open subsets ${\mathscr U}$ such that $f^{-1}({\mathscr U}) = \sqcup {\mathscr V}_j$ and ${\mathscr V}_j\to {\mathscr U}$ is finite \'etale. \end{definition} We now distinguish between several types of \'etale coverings. \begin{definition} Let $f\colon {\mathscr X}' \to {\mathscr X}$ be an \'etale covering and keep the notation from \autoref{defn:etalecover}. \begin{itemize} \item We say that $f$ is a \cdef{topological covering} if we can choose ${\mathscr U}$ and ${\mathscr V}_j$ such that all the maps ${\mathscr V}_j\to {\mathscr U}$ are isomorphisms. \item We say that $f$ is a \cdef{finite \'etale covering} if $f$ is also finite. \item We say that $f$ is \cdef{tempered} if it is the quotient of the composition of a topological covering ${\mathscr Y}'\to {\mathscr Y}$ and of a finite \'etale covering ${\mathscr Y}\to {\mathscr X}$.
Here quotient means that we have a diagram \[ \xymatrix@R=1em{ & {\mathscr Y}'\ar[dr]\ar[dl] & \\ {\mathscr X}'\ar[dr] & & {\mathscr Y}\ar[dl] \\ & {\mathscr X} & } \] or, equivalently, a tempered covering is an \'etale covering which becomes a topological covering after pullback by some finite \'etale covering. \end{itemize} \end{definition} Using the language of fiber functors, we can define the topological, algebraic, tempered, and \'etale fundamental group of a $K$-manifold (see \cite[Section 2]{DeJongFundamentalGroupNonArch} and \cite[Chapter 3, Section 2]{andre-per}). For a $K$-manifold ${\mathscr X}$, we will let $\pi_1^{\topo}({\mathscr X})$, $\pi_1^{\alg}({\mathscr X})$, $\pi_1^{\temp}({\mathscr X})$, and $\pi_1^{\ett}({\mathscr X})$ denote each of these respective fundamental groups. We now give an important set of examples of tempered fundamental groups as well as a result concerning the birational invariance of the tempered fundamental group. \begin{example}\label{exam:temperedsmallgenus} The tempered fundamental group of the analytification of a smooth, projective curve $C/K$ of genus $g\leq 1$ is completely understood. Indeed, when $C \cong \mathbb{P}^1$, then $\pi_1^{\temp}(C^{\Ber}) = \brk{e}$, and when $C$ is an elliptic curve, we have that $\pi_1^{\temp}(C^{\Ber})$ is either isomorphic to $\widehat{\mathbb{Z}}^2$ or $\mathbb{Z} \times \widehat{\mathbb{Z}}$, depending on the reduction type of $C$ (see \cite[Chapter III, Section 2.3.2]{andre-per}). \end{example} \begin{prop}\label{lem:birationalinvariancetemp} Let ${\mathscr X},{\mathscr Y}$ be smooth, proper $K$-manifolds and let ${\mathscr X} \dashrightarrow {\mathscr Y}$ be a bi-meromorphic morphism. Then $\pi_1^{\temp}({\mathscr X}) \cong \pi_1^{\temp}({\mathscr Y})$. \end{prop} \begin{proof} This is \cite[Proposition 1.5]{Lepage:TemperedFundamentalGroup}. \end{proof} For our proof of \autoref{thm:surfaceKinfinity}, it will be key for us to understand when the tempered fundamental group of a curve is virtually abelian. \begin{lemma}\label{lemma:curvesvirtuallyabeliantemp} Let $C/K$ be a smooth, projective curve. The tempered fundamental group $\pi_1^{\temp}(C^{\Ber})$ of $C^{\Ber}$ is virtually abelian if and only if the genus of $C$ is less than two. \end{lemma} \begin{proof} When the genus of $C$ is less than two, $\pi_1^{\temp}(C^{\Ber})$ is virtually abelian by \autoref{exam:temperedsmallgenus}. Conversely, we need to show that when $C$ has genus greater than one, $\pi_1^{\temp}(C^{\Ber})$ is not virtually abelian. First, we note that $\pi_1^{\temp}(C^{\Ber})$ is centerless, and hence non-abelian. This follows from noting that the profinite completion of $\pi_1^{\temp}(C^{\Ber})$ is the algebraic fundamental group $\pi_1^{\alg}(C)$ (see \cite[p.~128, paragraph 2]{andre-per}), and the algebraic fundamental group of a curve is well-known to be centerless (see e.g., \cite[Lemma 1]{Faltings:CurvesBourbaki}). To conclude, we note that if $\pi_1^{\temp}(C^{\Ber})$ were virtually abelian, then there would exist a finite \'etale covering ${\mathscr C}'\to C^{\Ber}$ whose tempered fundamental group is abelian. Note that ${\mathscr C}'$ is algebraic by the non-Archimedean Riemann existence theorem \cite{Lutkebohmert:Riemannexistence}, and so there exists some smooth, projective curve $C'$ such that $C^{'\Ber}\cong {\mathscr C}'$. The genus of $C'$ is strictly larger than the genus of $C$, and hence the tempered fundamental group of $C^{'\Ber}$ cannot be abelian by the above discussion.
Therefore, we have that if $C$ has genus greater than one, $\pi_1^{\temp}(C^{\Ber})$ is not virtually abelian. \end{proof} \begin{proof}[Proof of \autoref{thm:surfaceKinfinity}] By \autoref{lemma:birationalinvariance} and \autoref{lem:birationalinvariancetemp}, all of the proposed equivalent conditions in the statement of \autoref{thm:surfaceKinfinity} are birational invariants so we are free to make birational modifications. By the Enriques--Kodaira classification theorem (see e.g., \cite[Chapter V, Theorem 6.1]{Har77}), we know that a smooth, projective surface $X$ of Kodaira dimension $-\infty$ is birational to a $\mathbb{P}^1$-bundle over a curve $C$ over $K$. Since $\mathbb{P}^1$ admits no analytic differentials and is simply connected in either the topological or tempered sense (see \autoref{exam:temperedsmallgenus}), we have that \[ q(X) = q(C), \quad \pi_1^{\topo}(X^{\Ber}) \cong \pi_1^{\topo}(C^{\Ber}), \text{ and }\quad\pi_1^{\temp}(X^{\Ber}) \cong \pi_1^{\temp}(C^{\Ber}). \] To prove our result, we will show that having $q(X) > 1$ is equivalent to $\pi_1^{\temp}(X^{\Ber})$ not being virtually abelian and to $X$ not being $K$-analytically special. By the above, we have that $q(X) > 1$ if and only if $C$ has genus greater than one, and so the first statement follows from \autoref{lemma:curvesvirtuallyabeliantemp}. The second statement follows because a curve $C$ of genus greater than 1 is $K$-analytically Brody hyperbolic \cite[Proposition 3.15]{JVez}. Moreover, any analytic morphism from a big analytic open of the analytification of a connected, finite type algebraic group to $X^{\Ber}$ becomes constant after composition with $X^{\Ber} \to C^{\Ber}$ by \autoref{rem:Brodymoduloempty}, and hence cannot be Zariski dense in $X^{\Ber}$, as its image is contained in a fiber of the morphism $X \to C$. Note that if $C$ has genus less than two, then $X$ is birational to either $\mathbb{P}^1 \times E$, where $E/K$ is an elliptic curve, or $\mathbb{P}^1 \times \mathbb{P}^1$, which are both clearly $K$-analytically special. \end{proof} \begin{remark} The proof of \autoref{thm:surfaceKinfinity} tells us that one cannot replace the tempered fundamental group with the topological fundamental group. To see this, it suffices to note that a curve $C/K$ of genus $g\geq 2$ with good reduction will have trivial topological fundamental group, which of course is abelian. We will use this observation when formulating our non-Archimedean counterparts to Campana's conjectures in Section \ref{sec:NonArchCampana}. \end{remark} \section{\bf Non-Archimedean variants of Campana's conjectures} \label{sec:NonArchCampana} In this final section, based on our results above, we formulate two non-Archimedean variants of a series of conjectures of Campana concerning various notions of specialness and their relationship to fundamental groups. \begin{conjecture}[Campana's conjectures, extended to include $k$-analytically special]\label{conjecture:Campana} Let $X/k$ be a smooth projective variety. The conditions (1)--($4_{\infty}$) and (1)--($4_{p}$) are equivalent: \begin{enumerate} \item $X$ is special (\autoref{def:specialBogomolov}, \autoref{def:specialfibration}); \item $X$ is geometrically special (\autoref{defn:geometricallyspecial}); \item $X$ is arithmetically special (\autoref{defn:arithmeticallyspecial}); \item[($4_{\infty}$)] if $k = \mathbb{C}$, $X$ is Brody special (\autoref{defn:Brodyspecial}); \item[($4_{p}$)] if $k$ is a complete, non-Archimedean valued field, $X$ is $k$-analytically special (\autoref{defn:Kanspecial}).
\end{enumerate} \end{conjecture} \begin{conjecture}[Campana's abelianity conjecture for fundamental groups, extended to include $k$-analytically special]\label{conjecture:Abelian} Let $X/k$ be a smooth projective variety. Then the following statements hold: \begin{enumerate} \item If $k = \mathbb{C}$ and $X$ is special, then $\pi_1(X^{\an})$ is virtually abelian. \item If $k = \mathbb{C}$ and $X$ is Brody special, then $\pi_1(X^{\an})$ is virtually abelian. \item If $k = \mathbb{C}$ and $X$ is geometrically special, then $\pi_1(X^{\an})$ is virtually abelian. \item[($4_{\infty}$)] If $k = \mathbb{C}$ and $X$ is arithmetically special, then $\pi_1(X^{\an})$ is virtually abelian. \item[($4_p$)] If $k$ is a complete, non-Archimedean valued field and $X^{\Ber}$ is $k$-analytically special, then $\pi_1^{\temp}(X^{\Ber})$ is virtually abelian. \end{enumerate} \end{conjecture} \begin{remark} The complex analytic part of \autoref{conjecture:Campana}, namely the equivalence of conditions (1) and ($4_{\infty}$), has an equivalent incarnation via the Kobayashi pseudo-metric. In \cite[Conjecture 9.2]{campana2004orbifolds}, Campana asked if for a proper variety $X/\mathbb{C}$, $X$ being special is equivalent to the Kobayashi pseudo-metric $d_X$ vanishing identically on $X^{\an}$. For a discussion on the Kobayashi pseudo-metric and its relation to hyperbolicity, we refer the reader to \cite{Kobayashi}. Cherry \cite{CherryKoba} offered a non-Archimedean variant of the Kobayashi pseudo-metric; however, his notion does not seem to correctly capture the hyperbolic or special properties of a variety. For example, he proves (\textit{loc.~cit.~}Theorem 4.6) that his non-Archimedean Kobayashi pseudo-metric is a genuine metric on an abelian variety. In forthcoming work \cite{morrow:NonArchKob}, the first author provides a new definition of a non-Archimedean Kobayashi pseudo-metric, which does seem to correctly capture hyperbolic and special properties of a variety. As an illustration of this claim, the author shows that for $K$ an algebraically closed, complete, non-Archimedean valued field of characteristic zero that contains a countable dense subset and ${\mathscr X}$ a good, connected Berkovich $K$-analytic space, if this new non-Archimedean Kobayashi pseudo-metric $d_{{\mathscr X}}$ is in fact a metric, then ${\mathscr X}$ is $K$-analytically Brody hyperbolic in the sense of \cite[Definition 2.3]{JVez}. With this result and others, it seems natural to conjecture that for such a $K$, a Berkovich $K$-analytic space is $K$-analytically special if and only if this new non-Archimedean Kobayashi pseudo-metric $d_{{\mathscr X}}$ is identically zero on ${\mathscr X}$. \end{remark}
\section{Introduction} The concept of dielectric permittivity in media with temporal dispersion is commonly used in electrodynamics and condensed matter physics (see, e.g., \cite{1,2}). For not too strong fields the dielectric permittivity $\varepsilon(\tau)$ depending on the time-like variable $\tau$ is introduced via the linear integral relation between the electric field $\mbox{\boldmath$E$}(\mbox{\boldmath$r$},t)$ and the electric displacement $\mbox{\boldmath$D$}(\mbox{\boldmath$r$},t)$. Then the frequency-dependent permittivity $\varepsilon(\omega)$ is defined using the Fourier transformations of the fields $\mbox{\boldmath$E$}(\mbox{\boldmath$r$},t)$ and $\mbox{\boldmath$D$}(\mbox{\boldmath$r$},t)$. Below we argue that this procedure, which is wholly satisfactory for dielectric materials, faces additional regularization problems in an infinite medium containing free charge carriers. This leads us to the conclusion that some applications of the frequency-dependent dielectric permittivities allowing for free charge carriers might not be rigorously justified. As one such example we discuss the dielectric permittivity of the Drude model, which leads to widely discussed difficulties when substituted into the Lifshitz formula for the van der Waals and Casimir force at nonzero temperature \cite{3}. \section{Dielectric permittivity in the presence of temporal dispersion} For simplicity we consider an isotropic nonmagnetic medium of infinite extent. If its properties do not depend on time, the linear dependence between $\mbox{\boldmath$D$}(t)$ and $\mbox{\boldmath$E$}(t)$ satisfying causality (here and below we omit the argument $\mbox{\boldmath$r$}$) is given by \begin{equation} \mbox{\boldmath$D$}(t)=\int_{-\infty}^{t}dt^{\prime}\, \varepsilon(t-t^{\prime})\mbox{\boldmath$E$}(t^{\prime}). \label{eq1} \end{equation} \noindent The kernel $\varepsilon(t-t^{\prime})$ of the integral operator on the right-hand side of (\ref{eq1}) is called the dielectric permittivity for media with temporal dispersion. We represent it in the form \begin{equation} \varepsilon(t-t^{\prime})=2\delta(t-t^{\prime})+f(t-t^{\prime}), \label{eq2} \end{equation} \noindent where $f(t-t^{\prime})$ is a continuous real-valued function and the delta function $\delta(t-t^{\prime})$ is defined on the interval $-\infty < t^{\prime}\leq t$ in the following manner \cite{Korn}: \begin{equation} \int_{-\infty}^{t}g(t^{\prime})\,\delta(T-t^{\prime})\,dt^{\prime} =\left\{ \begin{array}{l} 0,{\ \ }T>t, \\ \frac{1}{2}g(T-0),{\ \ }T=t, \\ \frac{1}{2}\bigl[g(T-0)+g(T+0)\bigr],{\ \ }-\infty<T<t. \end{array} \right. \label{eq2a} \end{equation} \noindent Here, $g(t)$ is an arbitrary function which has bounded variation in the vicinity of the point $t^{\prime}=T$. Note that from a physical point of view the function $f(t-t^{\prime})$ is defined only for $t^{\prime}\leq t$; for $t^{\prime}>t$ it can be assigned arbitrary values (for instance, zero). Substituting (\ref{eq2}) into (\ref{eq1}), taking into account (\ref{eq2a}), and introducing the new variable $\tau=t-t^{\prime}\geq 0$, we rearrange (\ref{eq1}) to \cite{1} \begin{equation} \hspace*{-7mm} \mbox{\boldmath$D$}(t)=\mbox{\boldmath$E$}(t)+ \int_{-\infty}^{t}dt^{\prime}\, f(t-t^{\prime})\mbox{\boldmath$E$}(t^{\prime}) = \mbox{\boldmath$E$}(t)+ \int_{0}^{\infty}d\tau\,f(\tau)\mbox{\boldmath$E$}(t-\tau).
\label{eq3} \end{equation} \noindent Representing the real functions $\mbox{\boldmath$D$}(t)$ and $\mbox{\boldmath$E$}(t)$ as Fourier integrals, \begin{equation} \mbox{\boldmath$D$}(t)=\int_{-\infty}^{\infty} \mbox{\boldmath$D$}(\omega){\rm e}^{-{\rm i}\,\omega t}d\omega,\qquad \mbox{\boldmath$E$}(t)=\int_{-\infty}^{\infty} \mbox{\boldmath$E$}(\omega){\rm e}^{-{\rm i}\,\omega t}d\omega, \label{eq4} \end{equation} \noindent one can rewrite (\ref{eq3}) in terms of Fourier transforms of the fields \cite{1} \begin{eqnarray} \hspace*{-7mm} \mbox{\boldmath$D$}(\omega)=\varepsilon(\omega) \mbox{\boldmath$E$}(\omega), \qquad \varepsilon(\omega)\equiv 1+\int_{0}^{\infty}\!\!d\tau\, f(\tau)\,{\rm e}^{{\rm i}\,\omega\tau} =\int_{0}^{\infty}\!\!d\tau\, \varepsilon(\tau)\,{\rm e}^{{\rm i}\,\omega\tau}, \label{eq5} \end{eqnarray} \noindent where $\mbox{\boldmath$D$}(\omega)$, $\mbox{\boldmath$E$}(\omega)$ and $\varepsilon(\omega)$ are complex-valued functions. {}From (\ref{eq5}) it follows that $\varepsilon(\omega)$ is an analytic function in the upper half-plane of complex $\omega$ including the real axis, with the possible exception of the point $\omega=0$. As a result, the real and imaginary parts of $\varepsilon(\omega)$ are connected by means of the Kramers-Kronig relations \cite{1}. Note that in contrast, for instance, to \cite{9} we always consider fields defined in $(\mbox{\boldmath$r$},t)$-space as real, and only their Fourier transforms may be complex. The equivalence between (\ref{eq3}) and (\ref{eq5}) requires the existence of the integrals (\ref{eq4}) (and of the respective inverse Fourier transformations) and the possibility to change the order of integrations with respect to $dt^{\prime}$ and $d\omega$. In mathematics there are many different sets of conditions under which (\ref{eq4}) and the respective inverse formulas acquire a rigorous meaning. The most widely used requirement is that the function $\mbox{\boldmath$D$}(t)$ should have bounded variation and be integrable together with its modulus, i.e., should belong to $L^{1}(-\infty,\infty)$. In this case the function $\mbox{\boldmath$D$}(\omega)$ is also bounded, uniformly continuous on the axis $(-\infty,\infty)$, and $\mbox{\boldmath$D$}(\omega)\to 0$ when $|\omega|\to\infty$ \cite{4}. The function $\mbox{\boldmath$E$}(t)$ should possess the same properties. The change of the order of integrations is possible if both integrals under consideration are uniformly convergent. \section{Media with free charge carriers} It can be easily seen that the above conditions, which permit introducing the frequency-dependent dielectric permittivity $\varepsilon(\omega)$ in accordance with (\ref{eq5}), are not directly applicable to media with free charge carriers. As an example we consider the widely used dielectric permittivity of the Drude model, describing such media \cite{5}, \begin{equation} \varepsilon_D(\omega)=1-\frac{\omega_p^2}{\omega(\omega+{\rm i}\,\gamma)}, \label{eq6} \end{equation} \noindent where $\omega_p$ is the plasma frequency and $\gamma>0$ is the relaxation parameter. It is obvious that the pole at $\omega=0$ may lead to mathematical problems in the Fourier transformation. To make a link between $\varepsilon_D(\omega)$ and the real-valued physical fields $\mbox{\boldmath$E$}(t)$ and $\mbox{\boldmath$D$}(t)$, it would be of interest to determine the respective function $f_D(\tau)$.
The substitution of (\ref{eq6}) into (\ref{eq5}) leads to the following equations \begin{eqnarray} -\frac{\omega_p^2}{\omega^2+\gamma^2}&=&\int_{0}^{\infty} f_D(\tau)\,\cos(\omega\tau)\,d\tau, \nonumber \\ \frac{\omega_p^2\gamma}{\omega(\omega^2+\gamma^2)}&=&\int_{0}^{\infty} f_D(\tau)\,\sin(\omega\tau)\,d\tau. \label{eq7} \end{eqnarray} \noindent {}From the first equation, by means of the inverse cosine Fourier transformation performed with the help of the integral 3.723(2) in \cite{6}, one finds \begin{equation} f^{\rm cos}_D(\tau)=-\frac{\omega_p^2}{\gamma}\,{\rm e}^{-\gamma\tau}. \label{eq8} \end{equation} \noindent It is easily seen that the substitution of (\ref{eq8}) into the first equation of (\ref{eq7}), with account of 3.893(2) in \cite{6}, leads to a correct identity; however, when substituted into the second equation of (\ref{eq7}) it fails. On the other hand, using the inverse sine Fourier transformation and the integral 3.725(1) in \cite{6}, the second equation of (\ref{eq7}) leads to a different result \begin{equation} f^{\rm sin}_D(\tau)=\frac{\omega_p^2}{\gamma}\left(1- {\rm e}^{-\gamma\tau}\right). \label{eq9} \end{equation} \noindent Now, the substitution of (\ref{eq9}) into the right-hand sides of equations (\ref{eq7}) reproduces their left-hand sides only up to additional ill-defined terms and thus also violates the equalities. The pathological properties under consideration are explained by the fact that $\varepsilon_D(\omega)$ results in a $\mbox{\boldmath$D$}(\omega)$ which is unbounded in any vicinity of $\omega=0$. This means that $\mbox{\boldmath$D$}(t)$ cannot be represented as a Fourier integral (\ref{eq4}), and both the definition of $\varepsilon(\omega)$ in (\ref{eq5}) and the equivalent equations (\ref{eq7}) become unjustified. The question arises of whether it is possible to consistently define the function $f_D(\tau)$ related to the frequency-dependent permittivity (\ref{eq6}). Keeping in mind that in the case of the Drude model the second equality in (\ref{eq5}) cannot be considered as a classical Fourier transformation, we make an attempt to assign a definite meaning to the function $f_D(\tau)$ by considering the generalized inverse transformation of the quantity $\varepsilon_D(\omega)-1$ defined as \begin{eqnarray} f^{(0)}_D(\tau)&\equiv & -\frac{\omega_p^2}{2\pi}\int_{-\infty}^{\infty} d\omega\,\frac{1}{(\omega+{\rm i}\,0)(\omega+{\rm i}\,\gamma)}\, {\rm e}^{-{\rm i}\,\omega\tau} \label{eq10} \\ &=&-\frac{\omega_p^2}{2\pi}\int_{-\infty}^{\infty} d\omega\,\frac{\omega-{\rm i}\,\gamma}{(\omega+ {\rm i}\,0)(\omega^2+\gamma^2)}\, {\rm e}^{-{\rm i}\,\omega\tau} \equiv I_1+I_2. \nonumber \end{eqnarray} \noindent Here, the addition of the infinitesimally small quantity $+{\rm i}\,0$ establishes the rule for bypassing the pole of ${\rm Im}\,\varepsilon_D(\omega)$ at $\omega=0$, and the following notations are used \begin{eqnarray} I_1&=&-\frac{\omega_p^2}{2\pi}\int_{-\infty}^{\infty} d\omega\,\frac{1}{\omega^2+\gamma^2}\, {\rm e}^{-{\rm i}\,\omega\tau} \label{eq11} \\ I_2&=&\frac{{\rm i}\,\omega_p^2\gamma}{2\pi}\int_{-\infty}^{\infty} d\omega\,\frac{1}{(\omega+{\rm i}\,0)(\omega^2+\gamma^2)}\, {\rm e}^{-{\rm i}\,\omega\tau}. \nonumber \end{eqnarray} In $I_1$ the integrand is regular at $\omega=0$. This integral can be found in 3.354(5) of \cite{6}, \begin{equation} I_1=-\frac{\omega_p^2}{2\gamma}\left\{ \begin{array}{ll} {\rm e}^{-\gamma\tau}\!,{\ }&\tau>0, \\ {\rm e}^{\gamma\tau}\!,{\ }&\tau<0. \end{array} \right.
\label{eq12} \end{equation} \noindent The second integral in (\ref{eq11}) can be calculated using contours consisting of the real axis in the complex $\omega$-plane and semicircles of infinitely large radii centered at the origin in the lower half-plane (for $\tau>0$) and in the upper half-plane (for $\tau<0$). The result is \begin{equation} I_2=\frac{\omega_p^2}{2\gamma}\left\{ \begin{array}{l} 2-{\rm e}^{-\gamma\tau}\!,{\ \ }\tau>0, \\ {\rm e}^{\gamma\tau}\!,{\ \ }\tau<0, \end{array} \right. \label{eq13} \end{equation} \noindent where for $\tau>0$ the contributions from the two poles at $\omega_1=-{\rm i}\,0$ and $\omega_2=-{\rm i}\,\gamma$ were taken into account, whereas for $\tau<0$ only one pole at $\omega_3={\rm i}\,\gamma$ determines the value of $I_2$. Substituting (\ref{eq12}) and (\ref{eq13}) into the right-hand side of (\ref{eq10}) we arrive at \begin{equation} f^{(0)}_D(\tau)=\left\{ \begin{array}{l} \frac{\omega_p^2}{\gamma} \left(1-{\rm e}^{-\gamma\tau}\right),{\ \ }\tau>0, \\ 0,{\ \ }\tau<0. \end{array} \right. \label{eq14} \end{equation} \noindent It is seen that the suggested rule leads to the same result (\ref{eq9}) as was obtained by the inverse sine Fourier transformation from the imaginary part of $\varepsilon_D(\omega)$ in (\ref{eq7}). A similar situation occurs for other dielectric permittivities taking into account free charge carriers, e.g., for the dielectric permittivities of the plasma model and of the normal skin effect, \begin{equation} \varepsilon_p(\omega)=1-\frac{\omega_p^2}{\omega^2}, \qquad \varepsilon_n(\omega)=1+{\rm i}\,\frac{4\pi\sigma_0}{\omega}, \label{eq15} \end{equation} \noindent where $\sigma_0$ is the dc conductivity. In both cases the mathematical conditions that permit performing the Fourier transformation in the classical sense are violated. However, by using the same considerations as presented above in the case of the Drude model, one may assign a meaning analogous to (\ref{eq10}) to the second formula in (\ref{eq5}) and obtain the following dielectric permittivities as functions of $\tau$: \begin{equation} f_p^{(0)}(\tau)= \left\{ \begin{array}{l} \omega_p^2\tau,{\ \ }\tau>0, \\ 0,{\ \ }\tau<0. \end{array}\right. \qquad f_n^{(0)}(\tau)= \left\{ \begin{array}{l} 4\pi\sigma_0,{\ \ }\tau>0, \\ 0,{\ \ }\tau<0. \end{array}\right. \label{eq16} \end{equation} \noindent It should be remarked that $f_p^{(0)}(\tau)$ is also obtainable from (\ref{eq14}) in the limiting case $\gamma\to 0$. Let us now check the respective results for $\mbox{\boldmath$D$}(t)$ for consistency. For example, we choose the electric field in the form \begin{equation} \mbox{\boldmath$E$}(t)=\mbox{\boldmath$E$}_0\, {\rm e}^{-\beta t^2}=\int_{-\infty}^{\infty} \mbox{\boldmath$E$}(\omega)\,{\rm e}^{-{\rm i}\,\omega t}d\omega, \label{eq17} \end{equation} \noindent where $\beta>0$ and $\mbox{\boldmath$E$}_0\equiv\mbox{\boldmath$E$}_0(\mbox{\boldmath$r$})$ describes the spatial dependence of the field. The function $\mbox{\boldmath$E$}(t)$ defined in (\ref{eq17}) satisfies all the conditions stated above and can be represented as a Fourier integral. Its Fourier transform is calculated using the formula 3.896(4) in \cite{6}, \begin{equation} \mbox{\boldmath$E$}(\omega)=\frac{1}{2\pi}\int_{-\infty}^{\infty} \mbox{\boldmath$E$}(t)\,{\rm e}^{{\rm i}\omega t}dt= \frac{1}{2\sqrt{\pi\beta}}\mbox{\boldmath$E$}_0\, {\rm e}^{-\frac{\omega^2}{4\beta}}.
\label{eq18} \end{equation} \noindent Substituting (\ref{eq14}) and (\ref{eq17}) into (\ref{eq3}) and using the formula 3.322(2) in \cite{6} we arrive at \begin{equation} \hspace*{-15mm} \mbox{\boldmath$D$}(t)=\mbox{\boldmath$E$}(t)+ \mbox{\boldmath$E$}_0\frac{\omega_p^2}{2\gamma}\sqrt{\frac{\pi}{\beta}} \left[1+{\rm erf}\,(\sqrt{\beta}t)- {\rm e}^{\frac{\gamma^2}{4\beta}-\gamma t} {\rm erfc}\,\left(\frac{\gamma}{2\sqrt{\beta}}-\sqrt{\beta}t\right)\right], \label{eq19} \end{equation} \noindent where ${\rm erf}\,(x)$ is the error function and ${\rm erfc}\,(x)=1-{\rm erf}\,(x)$. Keeping in mind that \cite{6} \begin{equation} {\rm erf}\,(-x)=-{\rm erf}\,(x), \qquad {\rm erf}\,(x)=1-\frac{1}{\sqrt{\pi}}\,\frac{{\rm e}^{-x^2}}{x}+ \cdots, \label{eq20} \end{equation} \noindent we obtain for $t\to\pm\infty$ \begin{equation} \mbox{\boldmath$D$}(\infty)= \mbox{\boldmath$E$}_0\frac{\omega_p^2}{\gamma}\sqrt{\frac{\pi}{\beta}}, \qquad \mbox{\boldmath$D$}(-\infty)=0. \label{eq21} \end{equation} \noindent This is what one expects on physical grounds, because for an infinite medium containing free charge carriers the action of switching the electric field on and then off should result in a nonzero residual displacement (for a finite medium, the presence of an external electric field leads to the accumulation of positive and negative charges on the opposite boundary surfaces and to the vanishing of the total electric field inside such a medium \cite{7a}; after the external electric field is switched off, the accumulated charges are distributed uniformly over the volume of the medium, leading to zero electric displacement at $t\to +\infty$). However, the first equality in (\ref{eq21}) means that $\mbox{\boldmath$D$}(t)$ is not an integrable function over the interval $(-\infty,\infty)$. This makes it impossible to use the standard Fourier transformation (\ref{eq4}) and the resulting equality (\ref{eq5}), and renders the whole formalism not self-consistent. This can be seen even more clearly if one defines $\mbox{\boldmath$D$}(\omega)$ in accordance with the first equality in (\ref{eq5}), where $\varepsilon(\omega)=\varepsilon_D(\omega)$ and $\mbox{\boldmath$E$}(\omega)$ is given in (\ref{eq18}), and then calculates the electric displacement using the first equality in (\ref{eq4}). The obtained quantity, which we denote $\tilde{\mbox{\boldmath$D$}}(t)$, is calculated using the formulas 3.954(2) in \cite{6} and 2.5.36(6,\,11) in \cite{7}. The result is \begin{equation} \tilde{\mbox{\boldmath$D$}}(t)= \mbox{\boldmath$D$}(t)- \mbox{\boldmath$E$}_0\frac{\omega_p^2}{2\gamma}\sqrt{\frac{\pi}{\beta}}, \label{eq22} \end{equation} \noindent where ${\mbox{\boldmath$D$}}(t)$ is defined in (\ref{eq19}). It has nonzero values at both $t\to\infty$ and $t\to -\infty$: \begin{equation} \tilde{\mbox{\boldmath$D$}}(\infty)= \mbox{\boldmath$E$}_0\frac{\omega_p^2}{2\gamma}\sqrt{\frac{\pi}{\beta}}, \qquad \tilde{\mbox{\boldmath$D$}}(-\infty)=- \mbox{\boldmath$E$}_0\frac{\omega_p^2}{2\gamma}\sqrt{\frac{\pi}{\beta}}.
\label{eq23} \end{equation} \noindent The derivation of an electric displacement different from (\ref{eq19}), which does not vanish at $t\to -\infty$, i.e., before the electric field is switched on, can be understood as an artifact resulting from the use of the Fourier integral of the nonintegrable function ${\mbox{\boldmath$D$}}(\omega)$ (the definition of the Fourier integral as a generalized function used in mathematics seems not to be appropriate in our physical situation, because it is natural to understand the electric displacement as an ordinary function). This suggests that in the presence of free charge carriers the standard definition of the frequency-dependent dielectric permittivity, based on the formal representation of ${\mbox{\boldmath$E$}}(t)$ and ${\mbox{\boldmath$D$}}(t)$ in terms of Fourier integrals, is not satisfactory and requires some additional regularization procedure. As an example of such a procedure, we consider the modified dielectric permittivity of the Drude model \begin{equation} \varepsilon_D^{(\theta)}(\omega)=1- \frac{\omega_p^2}{(\omega+{\rm i}\theta)(\omega+{\rm i}\,\gamma)}, \label{eq23a} \end{equation} \noindent where, in contrast with (\ref{eq10}), the quantity $\theta\!>\!0$ is not infinitesimally small. In accordance with (\ref{eq23a}), $\varepsilon_D^{(\theta)}$ is regular at $\omega=0$. The substitution of (\ref{eq23a}) into (\ref{eq5}) leads to \begin{eqnarray} && -\omega_p^2 \frac{\omega^2-\theta\gamma}{(\omega^2+\theta^2)(\omega^2+\gamma^2)} =\int_{0}^{\infty} f_D^{(\theta)}(\tau)\,\cos(\omega\tau)\,d\tau, \nonumber \\ && \omega_p^2(\theta+\gamma) \frac{\omega}{(\omega^2+\theta^2)(\omega^2+\gamma^2)}=\int_{0}^{\infty} f_D^{(\theta)}(\tau)\,\sin(\omega\tau)\,d\tau. \label{eq23b} \end{eqnarray} \noindent It can be easily seen that both the inverse cosine and sine Fourier transformations performed in (\ref{eq23b}) lead to the common result \begin{equation} f^{(\theta)}_D(\tau)=\frac{\omega_p^2}{\gamma-\theta}\left( {\rm e}^{-\theta\tau}- {\rm e}^{-\gamma\tau}\right). \label{eq23c} \end{equation} \noindent Substituting this into (\ref{eq3}) and performing the calculations with the electric field (\ref{eq17}), we arrive at the modified electric displacement \begin{eqnarray} \mbox{\boldmath$D$}^{(\theta)}(t)&=&\mbox{\boldmath$E$}(t)+ \mbox{\boldmath$E$}_0\frac{\omega_p^2}{2(\gamma-\theta)} \sqrt{\frac{\pi}{\beta}} \left[{\rm e}^{\frac{\theta^2}{4\beta}-\theta t} {\rm erfc}\,\left(\frac{\theta}{2\sqrt{\beta}}-\sqrt{\beta}t\right)\right. \nonumber \\ &&~~~-\left. {\rm e}^{\frac{\gamma^2}{4\beta}-\gamma t} {\rm erfc}\,\left(\frac{\gamma}{2\sqrt{\beta}}-\sqrt{\beta}t\right)\right]. \label{eq24d} \end{eqnarray} \noindent In the limiting case $\theta\to 0$, (\ref{eq24d}) coincides with (\ref{eq19}). Precisely the same result as in (\ref{eq24d}) is obtained if one considers $\mbox{\boldmath$D$}^{(\theta)}(\omega)=\varepsilon_D^{(\theta)}(\omega) \mbox{\boldmath$E$}(\omega)$ and then finds $\mbox{\boldmath$D$}^{(\theta)}(t)$ from the first equality in (\ref{eq4}). Thus, for $\theta>0$, both methods of calculating the electric displacement agree. The reason is that for $\theta>0$ the functions $\mbox{\boldmath$D$}(t)$ and $\mbox{\boldmath$D$}(\omega)$ belong to $L^{1}(-\infty,\infty)$ and all Fourier transformations are well defined. However, to obtain the correct physical results for an infinite medium, one must put $\theta=0$ in (\ref{eq24d}) and return to (\ref{eq19}).
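As a simple numerical illustration (with arbitrarily chosen parameter values), one may check the closed form (\ref{eq24d}) against a direct quadrature of the convolution (\ref{eq3}), using the kernel (\ref{eq23c}) for $\theta>0$ and (\ref{eq14}) at $\theta=0$, and observe the non-interchangeability of the two limits explicitly:
\begin{verbatim}
import numpy as np
from scipy.special import erf, erfc
from scipy.integrate import trapezoid

# Arbitrary illustrative parameter values
omega_p, gamma, beta, E0 = 1.0, 0.5, 0.2, 1.0

def D_closed(t, theta):
    # Closed form (24d); theta = 0 reproduces (19)
    sb = np.sqrt(beta)
    term = lambda x: np.exp(x**2/(4*beta) - x*t)*erfc(x/(2*sb) - sb*t)
    if theta == 0.0:
        bracket = 1.0 + erf(sb*t) - term(gamma)
        return (E0*np.exp(-beta*t**2)
                + E0*omega_p**2/(2*gamma)*np.sqrt(np.pi/beta)*bracket)
    return (E0*np.exp(-beta*t**2)
            + E0*omega_p**2/(2*(gamma - theta))*np.sqrt(np.pi/beta)
              *(term(theta) - term(gamma)))

def D_quad(t, theta, tau_max=400.0, n=400001):
    # Direct quadrature of (3), with f from (23c), or from (14) at theta = 0
    tau = np.linspace(0.0, tau_max, n)
    if theta == 0.0:
        f = omega_p**2/gamma*(1.0 - np.exp(-gamma*tau))
    else:
        f = omega_p**2/(gamma - theta)*(np.exp(-theta*tau)
                                        - np.exp(-gamma*tau))
    return (E0*np.exp(-beta*t**2)
            + trapezoid(f*E0*np.exp(-beta*(t - tau)**2), tau))

for theta in (0.1, 0.0):
    for t in (-5.0, 0.0, 5.0):
        print(theta, t, D_closed(t, theta), D_quad(t, theta))  # pairs agree

# Non-interchangeable limits: at large t the displacement decays for
# theta > 0, whereas at theta = 0 the residual value (21) remains
print(D_closed(200.0, 0.1), D_closed(200.0, 0.0),
      E0*omega_p**2/gamma*np.sqrt(np.pi/beta))
\end{verbatim}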
The point is that (\ref{eq24d}) with $\theta>0$ leads to $\mbox{\boldmath$D$}^{(\theta)}(t)\to 0$ when $t\to\pm\infty$ [as must be the case for functions belonging to $L^{1}(-\infty,\infty)$]. At the same time, in the presence of free charge carriers, the electric displacement in an infinite medium remains nonzero, in accordance with (\ref{eq21}), after the electric field is switched off. Thus, the limiting transitions $t\to\pm\infty$ and $\theta\to 0$ are not interchangeable. \section{Insulating media} The situation is quite different for dielectric materials at zero temperature which do not contain free charge carriers (i.e., for true insulators). In this case the dielectric permittivity can be represented in the form \cite{8} \begin{equation} \varepsilon_I(\omega)=1+\sum\limits_{j=1}^{K} \frac{g_j}{\omega_j^2-\omega^2-{\rm i}\,\gamma_j\omega}, \label{eq24} \end{equation} \noindent where $\omega_j\neq 0$ are the oscillator frequencies, $\gamma_j$ are the relaxation parameters, and $g_j$ are the oscillator strengths of the $K$ oscillators. In this case the second equality of (\ref{eq5}) results in \begin{eqnarray} \sum\limits_{j=1}^{K} \frac{g_j(\omega_j^2-\omega^2)}{(\omega_j^2-\omega^2)^2+\gamma_j^2\omega^2} =\int_{0}^{\infty}f_I(\tau)\,\cos\,(\omega\tau)d\tau, \label{eq25} \\ \sum\limits_{j=1}^{K} \frac{g_j\gamma_j\omega}{(\omega_j^2-\omega^2)^2+\gamma_j^2\omega^2} =\int_{0}^{\infty}f_I(\tau)\,\sin\,(\omega\tau)d\tau. \nonumber \end{eqnarray} \noindent Performing the inverse cosine Fourier transformation in the first equation of (\ref{eq25}) with the help of the integrals 3.733(1,\,3) in \cite{6} we obtain \begin{equation} f_I(\tau)=\sum\limits_{j=1}^{K} \frac{g_j\,{\rm e}^{-\frac{1}{2}\gamma_j\tau}}{\sqrt{\omega_j^2- \frac{1}{4}\gamma_j^2}}\,\sin\left(\sqrt{\omega_j^2- \frac{1}{4}\gamma_j^2}\,\,\tau\right). \label{eq26} \end{equation} \noindent Precisely the same result is obtained by means of the inverse sine Fourier transformation from the second equation in (\ref{eq25}) when one uses the integral 3.733(2) in \cite{6}. In this case all the involved Fourier integrals exist in the classical sense with no need of regularization, and $\varepsilon_I(\omega)$ is well defined. Substituting the electric field (\ref{eq17}) into (\ref{eq3}) and using the integral 3.897(1) in \cite{6}, we obtain the electric displacement in an insulating medium, \begin{equation} \hspace*{-15mm} \mbox{\boldmath$D$}(t)=\mbox{\boldmath$E$}(t)\left\{1+ \sqrt{\frac{\pi}{\beta}}\sum\limits_{j=1}^{K} \frac{g_j}{\sqrt{4\omega_j^2-\gamma_j^2}}\, {\rm Im}\left[{\rm e}^{B^2}\,{\rm erfc}(B)\right]\right\}, \label{eq27} \end{equation} \noindent where \begin{equation} B\equiv B(t)= \frac{\gamma_j-4\beta t -{\rm i}\, \sqrt{4\omega_j^2-\gamma_j^2}}{4\sqrt{\beta}}. \label{eq28} \end{equation} \noindent The same result is obtained by means of the inverse Fourier transformation from $\mbox{\boldmath$D$}(\omega)$ found using the first equality in (\ref{eq5}), as it should be. {}From (\ref{eq27}) and (\ref{eq20}) it can be easily seen that $\mbox{\boldmath$D$}(t)\to 0$ when $t\to\pm\infty$, as it should be for insulating materials, and that both $\mbox{\boldmath$D$}(t)$ and $\mbox{\boldmath$D$}(\omega)$ belong to $L^1(-\infty,\infty)$. \section{Conclusions and discussion} To conclude, we have shown that the definition of the frequency-dependent dielectric permittivity for materials containing free charge carriers by means of a Fourier transformation of the fields is not as straightforward as in the case of insulators.
The essence of the problem is in the use of the idealization of an infinite medium. For insulators this idealization is applicable if the sizes of the bodies are much greater than some characteristic parameter (e.g., the width of a gap between the bodies). However, for media with mobile free charge carriers such conditions fail. The physical situation for an infinite medium turns out to be totally different from the case of finite bodies of any conceivable size. In fact, for conductors $\varepsilon(\omega)$ is a quantity obtained through a formal application of the Fourier transformation in a region where it requires additional regularization. In spite of a great number of successful applications (see, e.g., \cite{1,2,9}) there are delicate cases where such a procedure leads to problems. As an example one could mention the use of $\varepsilon_D(\omega)-1$ as a response function in the fluctuation-dissipation theorem and the related puzzles in the theory of the thermal Casimir force \cite{3,10,11}. During the last ten years the thermal Casimir force has been the subject of considerable discussion. It was suggested \cite{14,15} to describe it using the Lifshitz theory combined with the Drude model (\ref{eq6}). In the limit of large separations between the test bodies the predictions of the Drude model approach were found to be in agreement with classical statistical physics \cite{16,17}. On the other hand, at short separations the predictions of this approach were excluded experimentally, whereas the predictions based on the use of the plasma model in (\ref{eq15}) were found to be consistent with the data \cite{18}. The question of how to correctly calculate the thermal Casimir force still remains to be answered. Keeping in mind that the Lifshitz theory is based on the fluctuation-dissipation theorem, we would like to emphasize that the application of this theorem with poorly defined response functions cannot be considered as either exact or rigorous and might be the cause of the currently discussed problems. \section*{Acknowledgments} The authors are grateful to V.N.~Marachevsky for helpful discussions. G.L.K. and V.M.M. are grateful to the Institute for Theoretical Physics, Leipzig University for kind hospitality. This work was supported by Deutsche Forschungsgemeinschaft, Grant No.~GE\,696/9--1. \section*{References} \numrefs{99} \bibitem{1} Landau L D, Lifshitz E M and Pitaevskii L P 1984 {\it Electrodynamics of Continuous Media} (Oxford: Pergamon Press) \bibitem{2} Mahan G D 1993 {\it Many-Particle Physics} (New York: Plenum Press) \bibitem{3} Bordag M, Klimchitskaya G L, Mohideen U and Mostepanenko V M 2009 {\it Advances in the Casimir Effect} (Oxford: Oxford University Press) \bibitem{Korn} Korn G A and Korn T M 1961 {\it Mathematical Handbook for Scientists and Engineers} (New York: McGraw-Hill); Spanier J and Oldham K B 1987 {\it An Atlas of Functions} (New York: Hemisphere Publishing Corporation) \bibitem{9} Jackson J D 1999 {\it Classical Electrodynamics} (New York: John Wiley \& Sons) \bibitem{4} Titchmarsh E C 1962 {\it Introduction to the Theory of Fourier Integrals} (Oxford: Clarendon Press) \bibitem{5} Ashcroft N W and Mermin N D 1976 {\it Solid State Physics} (Philadelphia: Saunders College) \bibitem{6} Gradshteyn I S and Ryzhik I M 1980 {\it Table of Integrals, Series and Products} (New York: Academic Press) \bibitem{7a} Geyer B, Klimchitskaya G L and Mostepanenko V M 2007 {\it J. Phys. A: Math.
Theor.} {\bf 40} 13485 \bibitem{7} Prudnikov A P, Brychkov Yu A and Marichev O I 1992 {\it Integrals and Series}, Vol~1 (New York: Gordon and Breach) \bibitem{8} Parsegian V A 2005 {\it Van der Waals Forces: A Handbook for Biologists, Chemists, Engineers, and Physicists} (Cambridge: Cambridge University Press) \bibitem{10} Klimchitskaya G L and Mostepanenko V M 2006 {\it Contemp. Phys.} {\bf 47} 131 \bibitem{11} Klimchitskaya G L, Mohideen U and Mostepanenko V M 2009 arXiv:0902.4022, to appear in {\it Rev. Mod. Phys.} \bibitem{14} Bostr\"{o}m M and Sernelius B E 2000 {\it Phys. Rev. Lett.} {\bf 84} 4757 \bibitem{15} Brevik I, Aarseth J B, H{\o}ye J S and Milton K A 2005 {\it Phys. Rev.} E {\bf 71} 056101 \bibitem{16} Buenzli P R and Martin P A 2008 {\it Phys. Rev.} A {\bf 77} 011114 \bibitem{17} Bimonte G 2009 {\it Phys. Rev.} A {\bf 79} 042107 \bibitem{18} Decca R S, L\'opez D, Fischbach E, Klimchitskaya G L, Krause D E and Mostepanenko V M 2007 {\it Eur. Phys. J.} C {\bf 51} 963 \endnumrefs \end{document}
1,108,101,563,599
arxiv
\section{#1}} \renewcommand{\arabic{section}.\arabic{equation}}{\thesection.\arabic{equation}} \numberwithin{equation}{section} \parskip 4pt \begin{document} \def\op#1{\mathcal{#1}} \def\relax{\rm O \kern-.635em 0}{\relax{\rm O \kern-.635em 0}} \def{\rm d}\hskip -1pt{{\rm d}\hskip -1pt} \def\alpha{\alpha} \def\beta{\beta} \def\gamma{\gamma} \def\delta{\delta} \def\varepsilon{\epsilon} \def\varepsilon{\varepsilon} \def\tau{\theta} \def\lambda{\lambda} \def\mu{\mu} \def\nu{\nu} \def\pi{\pi} \def\rho{\rho} \def\sigma{\sigma} \def\tau{\tau} \def\zeta{\zeta} \def\chi{\chi} \def\pi{\psi} \def\omega{\omega} \def\Gamma{\Gamma} \def\Delta{\Delta} \def\Theta{\Theta} \def\Lambda{\Lambda} \def\Pi{\Pi} \def\Sigma{\Sigma} \def\Omega{\Omega} \def\bar{\psi}{\bar{\psi}} \def\bar{\chi}{\bar{\chi}} \def\bar{\lambda}{\bar{\lambda}} \def\mathcal{P}{\mathcal{P}} \def\Theta{\mathcal{Q}} \def\mathcal{K}{\mathcal{K}} \def\mathcal{A}{\mathcal{A}} \def\mathcal{N}{\mathcal{N}} \def\Phi{\mathcal{F}} \def\mathcal{G}{\mathcal{G}} \def\mathcal{C}{\mathcal{C}} \def\overline{L}{\overline{L}} \def\overline{M}{\overline{M}} \def\widetilde{K}{\widetilde{K}} \def\bar{h}{\bar{h}} \def\eq#1{(\ref{#1})} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray*}}{\begin{eqnarray*}} \newcommand{\end{eqnarray*}}{\end{eqnarray*}} \newcommand{\nonumber}{\nonumber} \newcommand{\noindent}{\noindent} \newcommand{\mathfrak{gl}}{\mathfrak{gl}} \newcommand{\mathfrak{u}}{\mathfrak{u}} \newcommand{\mathfrak{sl}}{\mathfrak{sl}} \newcommand{\mathfrak{sp}}{\mathfrak{sp}} \newcommand{\mathfrak{usp}}{\mathfrak{usp}} \newcommand{\mathfrak{su}}{\mathfrak{su}} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{so}}{\mathfrak{so}} \newcommand{\mathfrak{g}}{\mathfrak{g}} \newcommand{\mathfrak{r}}{\mathfrak{r}} \newcommand{\mathfrak{e}}{\mathfrak{e}} \newcommand{\mathrm{E}}{\mathrm{E}} \newcommand{\mathrm{Sp}}{\mathrm{Sp}} \newcommand{\mathrm{SO}}{\mathrm{SO}} \newcommand{\mathrm{SL}}{\mathrm{SL}} \newcommand{\mathrm{SU}}{\mathrm{SU}} \newcommand{\mathrm{USp}}{\mathrm{USp}} \newcommand{\mathrm{U}}{\mathrm{U}} \newcommand{\mathrm{F}}{\mathrm{F}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{H}}{\mathbb{H}} \def\overline{L}{\overline{L}} \def\mathcal{W}{\mathcal{W}} \def\bullet{\bullet} \newcommand{\rf}[1]{(\ref{#1})} \newcommand{\cm}[1]{{\textbf{#1}}} \newcommand{\partial}{\partial} \newcommand{\slash{\partial}}{\slash{\partial}} \newcommand{\slash{H}}{\slash{H}} \newcommand{\slash{\Phi}}{\slash{\Phi}} \newcommand{\slash{\Psi}}{\slash{\Psi}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{{\mathrm{CY}}}{{\mathrm{CY}}} \newcommand{\epsilon}{{\mathrm e}} \newcommand{{\it e.g.}~}{{\it e.g.}~} \newcommand{{\it i.e.}\ }{{\it i.e.}\ } \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathrm{Pin}}{\mathrm{Pin}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathrm{SL}}{\mathrm{SL}} \newcommand{\mathrm{sign}}{\mathrm{sign}} \newcommand{\mathrm{SO}}{\mathrm{SO}} \newcommand{\mathrm{O}}{\mathrm{O}} \newcommand{\mathrm{Spin}}{\mathrm{Spin}} \newcommand{\mathrm{SU}}{\mathrm{SU}} \newcommand{\tilde{D}}{\tilde{D}} \newcommand{\tilde{\gamma}}{\tilde{\gamma}} \newcommand{\mathrm{tr}}{\mathrm{tr}} \newcommand{\mathbb{T}}{\mathbb{T}} 
\newcommand{\Upsilon}{\mathrm{U}} \newcommand{\mathbf{1}}{\mathbf{1}} \newcommand{\mathrm{Vol}}{\mathrm{Vol}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \def\alpha{\alpha} \def\beta{\beta} \def\gamma{\gamma} \def\chi{\chi} \def\delta{\delta} \def\epsilon{\epsilon} \def\varepsilon{\varepsilon} \def\phi{\phi} \def\varphi{\varphi} \def\zeta{\psi} \def\overline{\psi}{\overline{\psi}} \def\widetilde{\psi}{\widetilde{\psi}} \def\kappa{\kappa} \def\lambda{\lambda} \def\mu{\mu} \def\nu{\nu} \def\omega{\omega} \def\theta{\theta} \def\theta{\theta} \def\hat{\theta}{\hat{\theta}} \def\rho{\rho} \def\sigma{\sigma} \def\widetilde{\sigma}{\widetilde{\sigma}} \def\utw{\sigma}{\utw{\sigma}} \def\tau{\tau} \def\upsilon{\upsilon} \def\xi{\xi} \def\zeta{\zeta} \def\wedge{\wedge} \def\Delta{\Delta} \def\Phi{\Phi} \def\Gamma{\Gamma} \def\Phi{\Phi} \def\Lambda{\Lambda} \def\Omega{\Omega} \def\Pi{\Pi} \def\Theta{\Theta} \def\Xi{\Xi} \def{\cal N}{{\cal N}} \def{\cal P}{{\cal P}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}} \def\fft#1#2{{\frac{#1}{#2}}} \def\ul#1{\underline{#1}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \def\slashchar#1{\setbox0=\hbox{$#1$} \dimen0=\wd0 \setbox1=\hbox{/} \dimen1=\wd1 \ifdim\dimen0>\dimen1 \rlap{\hbox to \dimen0{\hfil/\hfil}} #1 \else \rlap{\hbox to \dimen1{\hfil$#1$\hfil}} / \fi} \def\slash#1{\rlap {\begin{picture}(10,10) \put(0,0){\line(1,1){10}} \end{picture}} #1} \def\slas#1{\rlap{\begin{picture}(10,10)(-5,0) \put(0,0){\line(2,1){15}} \end{picture}} #1 } \def\slashh#1{\rlap{\begin{picture}(10,10) \put(0,0){\line(5,1){40}} \end{picture}}#1} \newcommand{\slash\!\!\!\!}{\slash\!\!\!\!} \def\sei{e^{i J}\!\!\!\!\! \begin{picture}(10,10) \put(0,0){\line(1,2){5}} \end{picture} } \begin{titlepage} \begin{center} \rightline{\small IFT-UAM/CSIC-09-24} \rightline{\small ZMP-HH/09-11} \vskip 1cm {\Large \bf Domain wall flow equations and $SU(3)\times SU(3)$ structure compactifications} \vskip 1.2cm {\bf Paul Smyth$^1$ and Silvia Vaul\`a$^{2}$ } \vskip 0.2cm $^1$\textit{ II. Institut f\"ur Theoretische Physik der Universit\"at Hamburg\\ Luruper Chaussee 149, 22761 Hamburg, Germany} \\ {\small\upshape\tt [email protected] }\\[3mm] $^2$ {\it Instituto de F\'{\iota}sica Te\'orica UAM/CSIC\\ Facultad de Ciencias C-XVI, C.U.~Cantoblanco, E-28049-Madrid, Spain} \\ {\small\upshape\tt [email protected]} \vskip 0.4cm \end{center} \vskip 1cm \begin{center} \textit{Dedicated to the memory of Raffaele Punzi} \end{center} \vskip 1cm \begin{center} {\bf Abstract }\end{center} \vskip 0.4cm \noindent We study supersymmetric domain wall solutions in four dimensions arising from the compactification of type II supergravity on an $SU(3) \times SU(3)$ structure manifold. Using a pure spinor approach, we show that the supersymmetry variations can be reinterpreted as a generalisation of the Hitchin flow equations and describe the embedding of an $SU(3) \times SU(3)$ structure manifold into a $G_2 \times G_2$ structure manifold. We find a precise agreement between the four- and ten-dimensional supergravity results. The flow equations derived here should have applications in constructing the gravity duals of Chern-Simons-matter conformal field theories. \vfill \end{titlepage} \tableofcontents \newpage \section{Introduction}\label{intro} The task of finding compactifications of ten-dimensional supergravity that lead to a four-dimensional effective theory with ${\cal N}=1$ supersymmetry has received much attention. 
One promising approach is compactification with flux, where the internal manifold can be deformed away from being Ricci-flat (see, for instance, \cite{Grana:2005jc} for a review). Fluxes can induce torsion, meaning that the internal six-manifold will no longer have $SU(3)$ holonomy, for example, but rather $SU(2)$ or $SU(3)$ structure. The constraints on the geometry of the internal manifold are then most conveniently rephrased in terms of generalised complex geometry using $SU(3)\times SU(3)$ structures defined on the formal sum of the tangent and cotangent bundles \cite{Hitchin:2004ut,Gualtieri:2003dx,gmpt1,Witt,gmpt2,gmpt3,Grana:2005ny,Grana:2006hr}. Finding explicit examples of such manifolds that lead to four-dimensional Minkowski vacua has proven difficult in practice \cite{gmpt3}. This led to the search for alternative $AdS_4$ vacua (see \cite{Tomasiello:2007eq,Kounnas:2007dd,Koerber:2008rx} and references therein), which may also provide a useful starting point for realistic models via the KKLT proposal \cite{Kachru:2003aw}. In this paper we shall focus on another class of four-dimensional vacuum configurations, namely domain walls, which are readily found in gauged supergravity \cite{Mayer:2004sd,Louis:2006wq,Behrndt:2001mx}. The near-horizon limit of a domain wall in four dimensions can produce an $AdS_4$ spacetime that can be interpreted as arising from an $SU(3)\times SU(3)$ structure reduction of type II supergravity \cite{Kounnas:2007dd,Koerber:2008rx}. Much of the work to date has focused on the construction of examples of $AdS_4$ vacua that can be reinterpreted as stacks of orthogonally intersecting branes and Kaluza-Klein monopoles in ten dimensions. Domain wall probes in $SU(3)\times SU(3)$ structure backgrounds have also been studied for their interesting supersymmetry-breaking properties \cite{Lust:2008zd}. The aim of this article is to provide a general characterisation of domain wall \textit{vacua} arising from $SU(3)\times SU(3)$ structure compactifications. The equations of motion for supersymmetric domain wall configurations are known to reduce to a set of first-order flow equations for the scalar fields \cite{Cvetic:1996vr}. This is exemplified in four-dimensional, matter-coupled, ${\cal N}=2$ supergravity arising from type IIA supergravity compactified on a half-flat six-manifold. For a domain wall configuration one finds a set of first-order flow equations for the hyper and vector multiplet scalar fields. This system of equations can be shown to be equivalent to the Hitchin flow equations \cite{Mayer:2004sd}, which describe the embedding of the half-flat $SU(3)$ structure six-manifold into a $G_2$-holonomy seven-manifold with boundary \cite{HitchinHF}. The mirror symmetric description of this configuration is given by type IIB supergravity compactified on a Calabi-Yau six-manifold with electric Neveu-Schwarz (NS) fluxes. This was further extended to type IIB supergravity with electric and magnetic NS fluxes in \cite{Louis:2006wq}, which is mirror dual to type IIA supergravity compactified on a manifold with $SU(3)\times SU(3)$ structure. The flow equations derived in four dimensions were shown to be equivalent to the generalised Hitchin flow equations \cite{Witt,Jeschek}, which describe the embedding of an $SU(3)\times SU(3)$ structure manifold into a $G_2\times G_2$, or generalised $G_2$ \cite{Witt2}, structure manifold.
We shall further extend this analysis to a more general set of four-dimensional charges comprising torsion, Ramond-Ramond (RR), NS and non-geometric fluxes, and correspondingly derive a general set of flow equations in four and ten dimensions. Our results can be interpreted as describing the embedding of an $SU(3)\times SU(3)$ structure manifold into an `almost' $G_2\times G_2$ structure manifold, with the RR fields providing the obstruction to integrability of the generalised almost-$G_2$ structure. The dual approach of analysing the domain wall flow equations in both four and ten dimensions has the advantage of circumventing some well-known problems. One of the outstanding technical issues in flux compactifications is to find an appropriate definition of the spectrum of light modes in the absence of harmonic forms \cite{Grana:2005ny,Grana:2006hr,KashaniPoor:2007tr}. While progress has been made for coset- and nil-manifolds \cite{Caviezel:2008ik}, the comparison of the truncated four-dimensional theory and its ten-dimensional counterpart is complicated and unclear in general. On the other hand, from the four-dimensional gauged supergravity point of view, the models corresponding to flux compactifications \cite{Dall'Agata:2003yr,Sommovigo:2004vj}, $SU(3)$ structure compactifications with electric RR and NS fluxes \cite{D'Auria:2004tr} and $SU(3)\times SU(3)$ structure compactifications with electric and magnetic RR and NS fluxes \cite{D'Auria:2007ay} are well defined theories. In order to have a more complete understanding of generalised compactifications and the various low-energy vacua, it is worthwhile to study the problem in both four and ten dimensions. We shall initially follow the completely four-dimensional approach and look for half-supersymmetric, BPS, domain wall solutions of general gauged ${\cal N}=2$ supergravity. In section \ref{s2} we derive the domain wall flow equations, following the analysis of \cite{Louis:2006wq,Behrndt:2001mx,LopesCardoso:2001rt}, and further discuss their modification in the presence of orientifold projections. In section \ref{s3} we shall derive the flow equations from the ten-dimensional perspective. Our goal will be to find a set of equations describing domain wall vacua of SU(3)$\times$SU(3) structure compactifications by manipulating the ten-dimensional type II supersymmetry transformations, inspired by the black hole discussion in \cite{hmt}. For simplicity, we choose to focus on configurations preserving {\it at least} $1/16$-supersymmetry i.e. at least two unbroken supercharges. This allows us to follow the calculation of the ${\cal N}=1$, Minkowski and $AdS_4$ vacuum conditions described in \cite{gmpt3}, appropriately modified for domain wall spacetimes, and write our results in terms of pure spinors $\Phi_+$ and $\Phi_-$. The resulting expressions will provide an extension of the results of \cite{Witt} to compactifications with non-trivial RR fields, as well as more general domain wall profiles (see also \cite{hlmt} for related work). In section \ref{s4} we compare our two results and show that there is a precise agreement between the four- and ten-dimensional derivations. For clarity, we provide some simple examples with vanishing RR fluxes and we also discuss the extension of our results to a non-geometric setting. The generalised Hitchin flow equations that we find are consistent with the proposal of generalised mirror symmetry \cite{gmpt2}, interchanging the IIA/IIB fluxes and the two pure spinors $\Phi_+ \leftrightarrow \Phi_-$. 
Furthermore, our results provide a useful on-shell check of the truncation and reduction proposal of \cite{Grana:2006hr}. We present our conclusions and discuss possible applications of our results in section \ref{disc}. The reader should be aware that in order to more easily make contact with the literature, we have chosen to use different metric conventions in four and ten dimensions, and that an explanation of the dictionary can be found in section \ref{s4}. Due to this technicality, we have provided a pedagogical review of our conventions in appendix \ref{conv}. Appendices \ref{psd} and \ref{fermtrans} contain a review of pure spinors and the necessary features of general four-dimensional, matter-coupled, ${\cal N}=2$ supergravity. Appendix \ref{4dstr} specialises to theories arising from type II supergravity compactified on $SU(3)\times SU(3)$ structure manifolds with electric and magnetic RR and NS fluxes. In appendix \ref{DWmanipul} we provide details of the derivation of the Hitchin flow equations from the four-dimensional ${\cal N}=2$ supersymmetry transformations for $SU(3)\times SU(3)$ structure compactifications. \section{Domain wall vacua in 4D supergravity}\label{s2} \subsection{4D ${\cal N}=2$ supergravity} Let us consider an off-shell compactification of type IIA supergravity on an $SU(3)\times SU(3)$ structure manifold \cite{Grana:2006hr,Cassani:2009ck} $\hat{Y}$, with an $SU(3)$ structure manifold $Y$ \cite{Grana:2005ny} as a particular case. The resulting theory is a four-dimensional, ${\cal N}=2$ gauged supergravity coupled to $n_V=h^{(1,1)}$ vector multiplets and $n_H=(h^{(2,1)}+1)$ scalar/tensor hypermultiplets \cite{Dall'Agata:2003yr,Sommovigo:2004vj}. The most important point for us is the relation between the fields of type IIA supergravity and those of the four-dimensional theory. When compactifying on a Calabi-Yau manifold, one is accustomed to making a harmonic expansion of the various fields and truncating to the set of modes which are massless in four dimensions. Motivated by this, \cite{Grana:2005ny,Grana:2006hr} proposed that in $SU(3)$ and $SU(3)\times SU(3)$ compactifications, where the distinction between heavy and light modes is unclear due to the lack of a harmonic expansion, one should proceed by truncating the space of forms to a finite-dimensional subspace. The guiding physical principle was that one should aim to be left with only the gravitational multiplet along with vector, tensor and hypermultiplets. In particular, all possible spin-$3/2$ multiplets should be projected out. Furthermore, the truncation should not break supersymmetry and, therefore, the special K\"ahler metrics on the moduli spaces of the pure spinors $\Phi_\pm$, which are in one-to-one correspondence with metric deformations, should descend to the truncated subspaces. One then defines a set of basis forms on these subspaces, in terms of which the ten-dimensional fields and the pure spinors can be expanded. We shall now review our conventions for the truncated pure spinors, which we denote $\Phi^0_\pm$ and $\hat\Phi^0_\pm$ in the $SU(3)$ and $SU(3)\times SU(3)$ structure cases, respectively. We refer the reader to appendix \ref{4dstr} for further details. 
When $Y$ has $SU(3)$ structure the truncated pure spinors $\Phi^0_\pm$ are defined as \begin{eqnarray} &&\Phi^0_+=X^{\bf\Lambda}\omega_{\bf\Lambda}-F_{\bf\Lambda}\omega^{\bf\Lambda}\label{Phi0+}~,\\ &&\Phi^0_-=Z^A_\eta\alpha_A-G_{\eta\,A}\beta^A\label{Phi0-}~, \end{eqnarray} where ${\bf\Lambda}=(0,\,i)$, $i=1,\dots h^{(1,1)}$ and $A=(0,\,a)$, $a=1,\dots h^{(2,1)}$. The non-harmonic basis of forms \eq{asu3basis} on $Y$ is \begin{equation} (\omega_{\bf\Lambda}\,,\,\omega^{\bf\Lambda}),\quad\quad (\alpha_A\,,\,\beta^A)~.\label{su3basis} \end{equation} The rescaled sections introduced in \eq{Phi0-} are defined as \begin{equation} (Z^A_\eta\,,G_{\eta A})\equiv \eta (Z^A\,,G_A)\label{Zresc}~, \end{equation} where $\eta$ is a normalisation factor (see the discussion around \rf{eta} for more details). $X^{\bf\Lambda}$ and $F_{\bf\Lambda}=\frac{\partial F}{\partial X^{\bf\Lambda}}$ are the homogeneous complex coordinates and the derivative of the holomorphic prepotential $F$ for the K\"ahler moduli, respectively. $Z^A$ and $G_A=\frac{\partial G}{\partial Z^A}$ are the homogeneous complex coordinates and the derivative of the holomorphic prepotential $G$ for the complex structure moduli, respectively. The failure of the basis forms \rf{su3basis} to be closed can be expressed as \begin{eqnarray} d_H\alpha_A=e_{A{\bf\Lambda}}\omega^{\bf\Lambda},\quad\quad d_H\beta^A=e^A_{\bf\Lambda} \omega^{\bf\Lambda}~,\nonumber\\ d_H\omega_{\bf\Lambda}=e_{\bf\Lambda}^A\alpha_A-e_{A{\bf\Lambda}}\beta^{A},\quad\quad d_H\omega^{\bf\Lambda}=0\label{decohSU3TH}~, \end{eqnarray} where it is convenient to define a twisted derivative operator $ d_H\equiv d-H\wedge$. The NS flux gives rise to \begin{equation} \label{NSflux}H=e_0^A{\alpha}_A-e_{0A}\beta^A~, \end{equation} which we shall call $H$ deformations. We refer to the remaining flux parameters $(e^A_i,\,e_{Ai})$ as $T$(orsion) deformations, as they all have a geometric origin. Finally, we note that $d^2=d_H^2=0$ implies \begin{equation} e_{A{\bf\Lambda}}e^A_{~\bf\Sigma}-e_{A{\bf\Sigma}}e^A_{~\bf\Lambda}=0\label{notadSU3}~. \end{equation} The RR fluxes are introduced using the basis of even forms \eq{su3basis}, according to \begin{equation} F^{flux}=e_{\bf\Lambda}\omega^{\bf\Lambda}-m^{\bf\Lambda}\omega_{\bf\Lambda} \label{RRf3}. \end{equation} The $SU(3)\times SU(3)$ structure case is somewhat more complicated. We shall review the main points here and refer the reader to appendix D for further explanations. When $\hat Y$ has $SU(3)\times SU(3)$ structure we define a basis of polyforms \eq{aPFbasis} as \begin{equation} (\hat\omega_{\bf\Lambda},\,\hat\omega^{\bf\Lambda})\quad {\bf\Lambda}=0,1,\dots h^{(1,1)}\quad , \quad (\hat\alpha_A\,,\,\hat\beta^A)\quad A=0,1,\dots h^{(2,1)}~,\label{PFbasis} \end{equation} i.e. the basis forms are no longer of fixed degree. The truncated pure spinors $\hat\Phi^0_\pm$ are then defined as \begin{eqnarray} &&\hat\Phi_+^0=X^{\bf\Lambda}\hat\omega_{\bf\Lambda}-F_{\bf\Lambda}\,\hat\omega^{\bf\Lambda}\label{p1}~,\\ &&\hat\Phi_-^0=Z^A_\eta\hat\alpha_A-G_{\eta A}\,\hat\beta^A\ .\label{p2} \end{eqnarray} To discuss the non-closure of the basis forms \rf{PFbasis} it is convenient to introduce a {\it generalised differential} (see e.g. 
\cite{Grana:2006hr} and references therein) \begin{equation} \mathcal{D}\equiv d\ -H\wedge\ \,-Q\cdot\ \,-R\llcorner\label{defD}~, \end{equation} where $Q\cdot\ $ and $R\llcorner\ $ act on a generic $k$-form $C$ as \begin{equation} (Q\cdot C)_{m_1\dots m_{k-1}}=Q^{ab}_{\ \ [m_1}C_{ab m_2\dots m_{k-1}]},\quad\quad (R\llcorner C)_{m_1\dots m_{k-3}}=R^{abc}C_{abcm_1\dots m_{k-3}}~. \end{equation} The action of $\mathcal{D}$ on the basis forms and the subsequent constraints on the fluxes are given in \rf{decohSU33}--\rf{2notadSU33}. The RR fluxes are introduced using the same basis \eq{PFbasis} \begin{equation} \hat{F}^{flux}=e_{\bf\Lambda}\hat{\omega}^{\bf\Lambda}-m^{\bf\Lambda}\hat{\omega}_{\bf\Lambda} \label{RRf33}\ . \end{equation} When considering the RR fields, it is useful to introduce a third polyform $\hat\Sigma$ (which reduces to a three-form $\Sigma$ in the ${\rm SU}(3)$ structure case) \begin{equation} \hat\Sigma=\zeta^A\hat\alpha_A-\tilde\zeta_{ A}\hat\beta^A~,\label{p3} \end{equation} such that the total RR contribution is given by \begin{equation} \hat{F}=\mathcal{D}\hat{\Sigma}+\hat{F}^{flux}~. \label{totalRRf} \end{equation} In the ${\rm SU}(3)$ structure case this becomes \begin{equation} F=d_H\Sigma+F^{flux}\label{totalRRff}\ . \end{equation} The fields $(\zeta^A,\tilde\zeta_A)$ are the RR scalars or dual tensors in the hypermultiplet sector in four dimensions. One may notice that the definition \eq{p3} is sensitive to the fact that some RR fields may appear as tensors in four dimensions in the $SU(3)\times SU(3)$ structure case. This is not a problem, as in all expressions of interest $\hat\Sigma$ only appears through its generalised derivative $\mathcal{D}\hat\Sigma$ \eq{totalRRf}, which contains just the gauge invariant combinations that are not dualised into tensors \cite{D'Auria:2007ay}. Finally, we note that it is often convenient to introduce the rescaled RR fields \begin{equation} (\zeta^A_{\tilde\eta},\,\tilde\zeta_{A\tilde\eta} )\equiv \tilde\eta (\zeta^A,\,\tilde\zeta_A)\label{Srescalintro}~, \end{equation} and the rescaled total RR contribution \begin{equation} \hat{F}_{\tilde\eta}\equiv \tilde\eta\hat{F}\label{Frescintro}~, \end{equation} where $\tilde\eta$ is a normalisation factor (see the discussion around \eq{ap3}). Let us now return to the four-dimensional, ${\cal N}=2$ gauged supergravity. The gravitational multiplet is given by \begin{equation}(g_{\mu\nu},\,\pi_{\hat{A}\mu},\,\pi^{\hat{A}}_\mu,\,A^0_\mu)~,\end{equation} where $g_{\mu\nu}$ is the metric, $\pi_{\hat{A}\mu},\, \hat{A}=1,2$ are the two chiral gravitini and $A^0_\mu$ is the graviphoton. The vector multiplets are given by \begin{equation}(A^i_\mu,\,\lambda^{i\hat{A}},\,\lambda^i_{\hat{A}},\,t^i)\ ,\qquad i = 1,\ldots, n_V\ ,\end{equation} where $A^i_\mu$ are the gauge bosons, $\lambda^{i\hat{A}}$ are the doublets of chiral gaugini and $t^i$ are the complex scalar fields appearing in the expansion of the truncated pure spinors\footnote{We refer the reader to \cite{Grana:2005ny,Grana:2006hr} for a thorough discussion of the reduction and truncation of the type II theories to four-dimensional {{\cal N}}=2 supergravity.} $\Phi^0_+$ and $\hat\Phi^0_+$ \eq{Phi0+}, \eq{p1}.
The $n_H$ hypermultiplets contain two chiral hyperini, which we collectively denote as $\zeta_{\hat\alpha}$, and four bosons, which in principle can be scalar $q^u$ or tensor $B_{I\mu\nu}$ fields, \begin{eqnarray} (\zeta_{\hat\alpha},\,\zeta^{\hat\alpha},\,q^u,\,B_{I\mu\nu})\ , && \quad\hat\alpha = 1,\ldots,2n_H+2\ , \\ &&\quad u = 1,\ldots, 4n_H+4-n_T,\quad I=1,\dots n_T\ .\nonumber \end{eqnarray} If the tensors are massless, e.g. as happens for the universal hypermultiplet tensors in Calabi-Yau compactifications, they can be dualised into scalar fields; otherwise, they have to be kept as massive tensors. The scalar fields $q^u$ contain the dilaton $\varphi$, the complex scalars $z^a$ appearing in the expansion of the truncated pure spinors $\Phi^0_-$ and $\hat\Phi^0_-$ \eq{Phi0-}, \eq{p2}, and the scalars $(\zeta^A,\,\tilde\zeta_A)$ appearing in the reduction of the RR sector. In the $SU(3)$ structure case \cite{D'Auria:2004tr}, $(\zeta^A,\,\tilde\zeta_A)$ appear as scalars in the four-dimensional theory, while in the $SU(3)\times SU(3)$ structure case \cite{D'Auria:2007ay} some combinations are dualised into tensors. The four-dimensional components of the NSNS two-form $B_{\mu\nu}$ naturally appear as a tensor. Nevertheless, in the $SU(3)$ structure case it is massless and for convenience it is dualised into a scalar. For the domain wall configurations that we are interested in, we only consider the situation where both the vector and tensor field strengths vanish. With this assumption the supersymmetry transformation laws for the fermions \eq{ferm} simplify and the supersymmetry conditions are \begin{eqnarray} \delta\pi_{\mu \hat{A}}&=&D_\mu\varepsilon_{\hat{A}}+iS_{\hat{A}\hat{B}}\gamma_\mu\varepsilon^{\hat{B}}=0\label{psi}~, \\ \delta\lambda^{i\hat{A}}&=&i\partial_\mu t^i\gamma^\mu\varepsilon^{\hat{A}}+W^{i\hat{A}\hat{B}}\varepsilon_{\hat{B}}=0\label{la}~,\\ \delta\zeta_{\hat\alpha}&=&iP_{u \hat{A}\hat\alpha}\partial_\mu q^u\gamma^\mu\varepsilon^{\hat{A}}+N_{\hat\alpha}^{\hat{A}}\varepsilon_{\hat{A}}=0\label{zi}~. \end{eqnarray} Here $P_{u {\hat{A}}\hat\alpha}$ parameterise the scalars in the hypermultiplets. Note that in the absence of (hyper) tensor multiplets, $P_{u {\hat{A}}\hat\alpha}$ coincide with the vielbein of the quaternionic manifold. In the presence of (hyper) tensor multiplets, they parameterise the scalars which have not been dualised into tensors \cite{Dall'Agata:2003yr}. The fermion shifts $S_{{\hat{A}}{\hat{B}}}$, $W^{i{\hat{A}}{\hat{B}}}$ and $N_{\hat\alpha}^{\hat{A}}$ encode the gauging. For the cases we are interested in, the gauging will always be Abelian \cite{Sommovigo:2004vj,D'Auria:2004tr,D'Auria:2007ay}. As a consequence, the electric and magnetic triholomorphic momentum maps take the simple form \begin{equation}\vec P_{\bf\Lambda}=\vec\omega_I k^I_{\bf\Lambda},\quad\quad \vec Q^{\bf\Lambda}=\vec\omega_I k^{I\bf\Lambda}~,\end{equation} where $k^I_{\bf\Lambda}$ are the Killing vectors associated to the gauge group and $k^{I\bf\Lambda}$ are their magnetic duals.
Using the homogeneous special coordinates $X^{\bf\Lambda}$ and the derivative of the holomorphic prepotential $F_{\bf\Lambda}$ for the K\"ahler moduli space, defined in appendix \ref{4dstr}, we can define an $SU(2)$ triplet of superpotentials \begin{equation} \vec W=e^{\frac{K_+}2}\left(X^{\bf\Lambda} \vec{P}_{\bf\Lambda}-F_{\bf\Lambda} \vec{Q}^{\bf\Lambda}\right)~,\label{SU2W} \end{equation} such that the fermion shifts are related to the superpotentials $\vec W$ as follows \begin{eqnarray} \label{S}S_{{\hat{A}}{\hat{B}}}&=&\frac i2 \vec\sigma_{{\hat{A}}{\hat{B}}}\cdot\vec W~,\\ \label{Wi}W^{i{\hat{A}}{\hat{B}}}&=&i\vec\sigma^{{\hat{A}}{\hat{B}}}g^{i\bar{\jmath}}\nabla_{\bar\jmath}\vec W~,\\ \label{N}P^{v{\hat\alpha}}_{({\hat{A}}}N_{{\hat{B}})\hat\alpha} &=&i\vec\sigma_{{\hat{A}}{\hat{B}}}h^{vu}\nabla_u\vec W~, \end{eqnarray} where $g_{i\bar{\jmath}}$ and $h_{uv}$ are the metrics of the scalar $\sigma$-model in the vector and hypermultiplet sectors, respectively, and $K_+$ is the K\"{a}hler potential defined in \eq{JJJ} and \rf{genkp}. It is convenient to decompose the $SU(2)$ vector $\vec W$ into its norm and a unit norm vector $\vec n$ \cite{LopesCardoso:2001rt} \begin{equation} \vec W=W\,\vec n,\quad\quad \vec n\cdot\vec n=1\label{normphase}~.\end{equation} Multiplying \eq{normphase} by $\vec n$ we obtain the expression for the ``superpotential'' \begin{equation} W=\vec n\cdot\vec W~.\end{equation} If we consider type IIA supergravity compactified on an $SU(3)$ structure manifold with electric RR fluxes\footnote{The constants $c_{\bf\Lambda}$ in \cite{D'Auria:2004tr} are related to the electric RR fluxes by $c_{\bf\Lambda}=-2e_{\bf\Lambda}$.} $e_{\bf\Lambda}$, the corresponding gauging of the ${\cal N}=2$ supergravity \cite{D'Auria:2004tr} is purely electric, i.e. it corresponds to the choice $\vec{Q}^{\bf\Lambda}=0$. The components of the $SU(2)$ superpotential $\vec W$ are given by \begin{eqnarray} &&W^1= 2\, e^\varphi e^{\frac{K_++K_-}2} X^{\bf\Lambda} {\rm Re}(G_A e^A_{~{\bf\Lambda}}-Z^A e_{A{\bf\Lambda}})\nonumber~,\\ &&W^2=2\, e^\varphi e^{\frac{K_++K_-}2}X^{\bf\Lambda} {\rm Im}(G_A e^A_{~{\bf\Lambda}}-Z^A e_{A{\bf\Lambda}})\nonumber~, \label{WSU3}\\ &&W^3=e^{2\varphi} e^{\frac{K_+}2}X^{\bf\Lambda} (\tilde\zeta_Ae^A_{~{\bf\Lambda}}-\zeta^A e_{A{\bf\Lambda}}-2e_{\bf\Lambda})~, \end{eqnarray} where $\varphi$ is the four-dimensional dilaton, related to the ten-dimensional dilaton by \begin{equation} e^{2\phi}=\frac18 e^{2\varphi-K_+}\label{10Adil}~. \end{equation} $K_-$ is the K\"{a}hler potential defined in \eq{OO} and \rf{genkp}. Using equations \eq{dPhi+}, \eq{dPhi-} and \eq{Ftorsflux} we can rewrite the components of the superpotential $\vec W$ in terms of the pure spinors \rf{Phi0+}, \rf{Phi0-} and the RR fields \rf{totalRRff} as follows \begin{eqnarray} &&W^1=-2\, e^{\varphi+K_+}\langle d_H {\rm Re}\Phi^0_-,\,\Phi^0_+\rangle~,\\ &&W^2=-2\, e^{\varphi+K_+}\langle d_H {\rm Im}\Phi^0_-,\,\Phi^0_+\rangle~,\\ &&W^3=-2e^{\varphi+K_+}\langle F_{\tilde\eta},\,\Phi^0_+\rangle~. \end{eqnarray} $\langle \cdot,\cdot \rangle$ is the Mukai pairing \rf{Mukai} and we have dropped the $\int_Y$ from each bracket to keep our formulae compact.
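As an elementary cross-check of the bookkeeping in this rewriting, the scalar identity underlying $W^1$ can be verified symbolically. The following sketch assumes, purely for illustration, the pairing normalisations $\int_Y\langle \omega^{\bf\Lambda},\omega_{\bf\Sigma}\rangle=-\delta^{\bf\Lambda}_{~{\bf\Sigma}}$ and $\int_Y\langle \omega^{\bf\Lambda},\omega^{\bf\Sigma}\rangle=0$, and it sets the rescaling $\eta$ and all prefactors to unity, so that only the index structure is tested (the actual normalisations are those fixed in appendix \ref{4dstr}):
\begin{verbatim}
import sympy as sp

# Toy truncation with one Kaehler and one complex-structure modulus,
# so the indices Lambda and A both run over {0, 1}
n = 2
ReZ = [sp.Symbol(f'ReZ{a}', real=True) for a in range(n)]
ImZ = [sp.Symbol(f'ImZ{a}', real=True) for a in range(n)]
ReG = [sp.Symbol(f'ReG{a}', real=True) for a in range(n)]
ImG = [sp.Symbol(f'ImG{a}', real=True) for a in range(n)]
X   = [sp.Symbol(f'X{l}') for l in range(n)]              # complex X^Lambda
eu  = [[sp.Symbol(f'eu{a}{l}', real=True) for l in range(n)]
       for a in range(n)]                                  # e^A_Lambda
ed  = [[sp.Symbol(f'ed{a}{l}', real=True) for l in range(n)]
       for a in range(n)]                                  # e_{A Lambda}

Z = [ReZ[a] + sp.I*ImZ[a] for a in range(n)]
G = [ReG[a] + sp.I*ImG[a] for a in range(n)]

# d_H Re Phi0_- = C_Lambda omega^Lambda, using d_H alpha_A = e_{AL} omega^L
# and d_H beta^A = e^A_L omega^L from (decohSU3TH)
C = [sum(ReZ[a]*ed[a][l] - ReG[a]*eu[a][l] for a in range(n))
     for l in range(n)]

# Pair with Phi0_+ = X^L omega_L - F_L omega^L; the F_L term drops out
# because <omega^L, omega^S> = 0 under the assumed conventions
lhs = sum(-X[l]*C[l] for l in range(n))

# The combination X^Lambda Re(G_A e^A_Lambda - Z^A e_{A Lambda}) in W^1
rhs = sum(X[l]*sp.re(G[a]*eu[a][l] - Z[a]*ed[a][l])
          for l in range(n) for a in range(n))

print(sp.simplify(sp.expand(lhs - rhs)))   # prints 0
\end{verbatim}
The same bookkeeping, with $d_H$ replaced by $\mathcal{D}$ and the magnetic charges restored, underlies the $SU(3)\times SU(3)$ expressions given below.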
If we consider the more general $SU(3)\times SU(3)$ structure case with electric and magnetic RR fluxes $\left( e_{\bf\Lambda},\, m^{\bf\Lambda}\right)$, the theory will contain massive tensor multiplets and the components of the superpotential $\vec W$ are given by \cite{D'Auria:2007ay} \begin{eqnarray} W^1&\!\!=\!\!& 2\, e^\varphi e^{\frac{K_++K_-}2}\left[X^\Lambda {\rm Re}(G_A e^A_{~\Lambda}-Z^A e_{A\Lambda})-F_\Lambda {\rm Re}(G_A m^{A\Lambda}-Z^A m_{A}^{~\Lambda})\right]\nonumber~,\\ W^2&\!\!=\!\!&2\, e^\varphi e^{\frac{K_++K_-}2}\left[ X^\Lambda {\rm Im}(G_A e^A_{~\Lambda}-Z^A e_{A\Lambda})-F_\Lambda {\rm Im}(G_A m^{A\Lambda}-Z^A m_{A}^{~\Lambda})\right]\nonumber~,\\ W^3&\!\!=\!\!&e^{2\varphi} e^{\frac{K_+}2}\left[X^\Lambda (\tilde\zeta_Ae^A_{~\Lambda}-\zeta^A e_{A\Lambda}-2e_\Lambda)-F_\Lambda (\tilde\zeta_Am^{A\Lambda}-\zeta^A m_{A}^{~\Lambda}-2m^\Lambda)\right]\nonumber~.\\ &&\label{WSU33} \end{eqnarray} Using equations \eq{dPhi+}, \eq{dPhi-} and \eq{Ftorsflux} we can rewrite these expressions in terms of the pure spinors \eq{p1}, \eq{p2} and the RR fields \rf{totalRRf} \begin{eqnarray} &&\label{33w1}W^1=-2\, e^{\varphi+K_+}\langle \mathcal{D} {\rm Re}\hat\Phi^0_-,\,\hat\Phi^0_+\rangle~,\\ &&\label{33w2}W^2=-2\, e^{\varphi+K_+}\langle \mathcal{D} {\rm Im}\hat\Phi^0_-,\,\hat\Phi^0_+\rangle~,\\ &&\label{33w3}W^3=-2e^{\varphi+K_+}\langle \hat F_{\tilde\eta},\,\hat\Phi^0_+\rangle~. \end{eqnarray} \subsection{Domain wall solutions} We are interested in domain wall spacetimes in four dimensions described by a metric of the form\footnote{In four dimensions we use the mostly-minus metric signature in order to be consistent with references \cite{D'Auria:2004tr,D'Auria:2007ay}.} \begin{equation}\label{4dmetric} ds_4^2 = e^{2U(r)}\eta_{\alpha\beta}dx^\alpha dx^\beta -e^{-2pU(r)}dr^2 ~, \end{equation} where $p$ is a constant and, for simplicity, we have chosen a flat worldvolume metric $\eta_{\alpha\beta}$, and $\alpha,\beta=0,1,2$. For the coordinate transverse to the domain wall, we will use `$r$' and `3' to denote a curved and flat index, respectively. We shall make the usual, physically motivated assumption that the scalar fields depend only on the direction transverse to the domain wall. We can now proceed to analyse the supersymmetry variations as in \cite{Louis:2006wq,LopesCardoso:2001rt}. From $\delta\pi_{\alpha {\hat{A}}}=0$ we obtain \begin{equation} U^\prime e^{pU}\gamma_3\varepsilon_{\hat{A}}=-2i\, n_{{\hat{A}}{\hat{B}}}W\varepsilon^{\hat{B}}\label{psialpha}~,\end{equation} where we have used that the domain wall is flat \eq{4dmetric} and we have defined $n_{{\hat{A}}{\hat{B}}}=\frac i2 \vec n\cdot \vec\sigma_{{\hat{A}}{\hat{B}}}$. The consistency of \eq{psialpha} implies \begin{equation} e^{2pU}\left(U^\prime(r)\right)^2=|W|^2\quad\Rightarrow\quad e^{pU}U^\prime(r)=\pm|W|\label{Uprime}~.\end{equation} Inserting \eq{Uprime} back into $\delta\pi_{\alpha {\hat{A}}}=0$ we obtain the BPS projector \begin{equation} \varepsilon_{\hat{A}}=\pm\bar{h}\,\vec n\cdot\vec\sigma_{{\hat{A}}{\hat{B}}}\,\gamma_3\varepsilon^{\hat{B}}\label{project}~,\end{equation} where $\bar{h}$ is the ${\rm U}(1)_V$ phase of $W$ \begin{equation} W=\bar{h}|W|\label{Wphase}~.\end{equation} The subscript ``${}_V$'' refers to the line bundle of the special K\"ahler geometry of the vector multiplet scalars. From $\delta\pi_{r{\hat{A}}}=0$ we obtain \begin{eqnarray} \bar{h} \partial_rh&=&\pm i\, e^{-pU}{\rm Im}(hW)\label{Dh}~,\\ \partial_r n_{{\hat{A}}{\hat{B}}}&=&0\label{Dn}~.
\end{eqnarray} For a constant curvature metric we have that \cite{LopesCardoso:2001rt} \begin{equation} D_\alpha (h^{\frac12}\varepsilon_{\hat{A}})=\frac i\ell \gamma_\alpha \gamma_3\,h^{\frac12}\varepsilon_{\hat{A}}~,\end{equation} which can be used in $\delta\pi_{\alpha{\hat{A}}}=0$ in order to obtain \begin{eqnarray} \frac1\ell&=&\pm\frac 12 e^U {\rm Im}(hW)\label{1ell}~,\\ U^\prime&=&\pm e^{-pU}{\rm Re}(hW)\label{Uprime2}~, \end{eqnarray} where $\ell^{-2}$ is proportional to the curvature. Comparing \eq{Uprime} with \eq{Uprime2}, or alternatively taking the limit $\ell\rightarrow \infty$ in \eq{1ell}, we obtain \begin{equation}{\rm Im}(hW)=0\label{reW}~,\end{equation} which implies from \eq{Dh} that $\partial_r h=0$. From \eq{psi} we further obtain the expression for the Killing spinor \begin{equation} \varepsilon_{\hat{A}} (r)=e^{\frac12 U}\varepsilon_{\hat{A}}^0 \label{4drsoln}~,\end{equation} where $\varepsilon_{\hat{A}}^0$ is a constant spinor obeying the projection condition \eq{project}. Finally, using the projector \eq{project} in equations \eq{la} and \eq{zi} we obtain the following set of flow equations \begin{eqnarray} &&\partial_r t^i=\mp e^{-pU}g^{i\overline\jmath}\,\bar{h}\,\nabla_{\overline\jmath}\bar{W}\label{attz0}~,\\ &&\partial_rq^u=\mp e^{-pU} g^{uv}\,\bar{h}\, \partial_v\bar{W}\label{attq0}~,\\ && U^\prime=\pm e^{-pU}hW~.\label{attu} \end{eqnarray} The detailed derivation of the generalised Hitchin flow equations from \eq{attz0}-\eq{attu} is presented in appendix \ref{DWmanipul}. Here we shall only summarise the relevant results. By manipulating \eq{attz0} we can derive the expression for the flow equation of ${\rm Im}(h\hat\Phi^0_+)$. Under the assumption that $\vec n$ does not depend on the vector multiplet scalars, we can put \eq{attz0} into the desired form \begin{equation} \partial_r\begin{pmatrix} {\rm Im}(h\,e^{U+\frac{K_+}2}X^{\bf\Lambda})\cr {\rm Im } (h\,e^{U+\frac{K_+}2}F_{\bf\Lambda}) \end{pmatrix} =-\frac12e^{(1-p)U}\begin{pmatrix} W^{\bf\Lambda}\cr W_{\bf\Lambda} \end{pmatrix} \label{dXFSU33}~, \end{equation} where, in terms of the rescaled sections \eq{Zresc} and \eq{Srescalintro}, we have \begin{eqnarray} W^{\bf\Lambda}&\!\!=\!\!&2n^1 e^{\varphi+\frac{K_+}2} \left[\left({\rm Re}G_{A\eta}+\kappa \, \tilde\zeta_{A\tilde\eta}\right) m^{A\Lambda}-\left({\rm Re}Z^A_\eta+\kappa \,\zeta^A_{\tilde\eta}\right) m_{A}^{~\Lambda}-\kappa\, \tilde\eta\, m^\Lambda\right]\nonumber~,\\ W_{\bf\Lambda}&\!\!=\!\!&2n^1e^{\varphi+\frac{K_+}2} \left[\left({\rm Re}G_{A\eta}+\kappa \,\tilde\zeta_{A\tilde\eta}\right) e^A_{~\Lambda}- \left({\rm Re}Z^A_\eta +\kappa \,\zeta^A_{\tilde\eta}\right) e_{A\Lambda}-\kappa\, \tilde\eta\, e_{\Lambda}\right]\nonumber~,\\ &&\label{flux1} \end{eqnarray} where $\kappa\equiv\frac{n^3}{n^1}$ is a constant, due to \eq{Dn}, whose value depends on the specific solution. From the left-hand side of \eq{dXFSU33} we can immediately reconstruct $\partial_r{\rm Im}(h\,e^{U+\frac{K_+}2}\hat\Phi^0_+)$, while from the right-hand side \eq{flux1} we obtain \begin{equation} \partial_r{\rm Im}(h\,e^{U+\frac{K_+}2}\hat\Phi^0_+)=-n^1\, e^{\varphi+\frac{K_+}2+(1-p)U}\left[ \mathcal{D} {\rm Re}\hat\Phi^0_-+\kappa\, \hat F_{\tilde\eta}\right]~, \end{equation} where $\mathcal{D}$ is the generalised covariant derivative defined in \rf{defD}. The manipulation of \eq{attq0} is more complicated and it is less straightforward to put it into the form of a flow equation for ${\rm Im}\hat\Phi^0_-$, as explained in appendix \ref{DWmanipul}.
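Before turning to \eq{attq0}, a simple consistency check of \eq{dXFSU33} may be useful: for a purely electric gauging, i.e. setting $m^{A\Lambda}=m_{A}^{~\Lambda}=m^{\Lambda}=0$ in \eq{flux1} (an illustrative special case), one finds $W^{\bf\Lambda}=0$, and the upper half of \eq{dXFSU33} degenerates into conservation laws along the flow, \begin{equation} \partial_r\,{\rm Im}\big(h\,e^{U+\frac{K_+}2}X^{\bf\Lambda}\big)=0~, \end{equation} so that only the components built from $F_{\bf\Lambda}$ actually flow.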
In order to get rid of the terms that do not recombine into a partial derivative, we have to impose $W^2=0$, that is \begin{equation} \langle \mathcal{D} {\rm Im}\hat\Phi^0_-,\,\hat\Phi^0_+\rangle=0\, . \end{equation} Note that this does not necessarily imply the stronger condition $\mathcal{D} {\rm Im}\hat\Phi^0_-=0$. After some manipulation we then find that we can rewrite \eq{attq0} as \begin{equation} \partial_r\begin{pmatrix} {\rm Im}(e^{(1-\lambda)U+\frac{K_+}2}Z^A_{\eta})\cr{\rm Im} (e^{(1-\lambda)U+\frac{K_+}2}G_{\eta\, A}) \end{pmatrix} =-\frac12e^{(1-\lambda-p)U}\begin{pmatrix} \mathcal{E}^A\cr \mathcal{E}_A \end{pmatrix} ~,\label{dZWSU33} \end{equation} where we have defined $\lambda\equiv\frac{\kappa^2}{1+\kappa^2}$, and \begin{equation} \begin{pmatrix} \mathcal{E}^A\cr\mathcal{E}_A \end{pmatrix} =2n^1\,e^{\varphi+\frac{K_+}2} \begin{pmatrix} {\rm Re}(hF)_{\bf\Lambda}\, m^{A{\bf\Lambda}}- {\rm Re}(hX^{\bf \Lambda})\, e^A_{\bf\Lambda} \cr {\rm Re}(hF)_{\bf \Lambda}\, m_{A}^{\bf\Lambda}- {\rm Re}(hX^{\bf \Lambda})\, e_{A\bf\Lambda} \end{pmatrix} ~.\label{WC1} \end{equation} From the left-hand side of \eq{dZWSU33} we can easily reconstruct $\partial_r{\rm Im}(e^{(1-\lambda)U+\frac{K_+}2}\hat\Phi^0_-)$, while from the right-hand side \eq{WC1} we obtain \begin{equation} \partial_r{\rm Im}(e^{(1-\lambda)U+\frac{K_+}2}\hat\Phi^0_-)=n^1\, e^{\varphi+\frac{K_+}2+(1-\lambda-p)U}\mathcal{D}{\rm Re}(h\hat\Phi^0_+)~. \end{equation} From the integration of \eq{dXFSU33} and \eq{dZWSU33} we also obtain \begin{equation} \mathcal{D}{\rm Im}(h\hat\Phi^0_+)=\mathcal{D}{\rm Im}\hat\Phi^0_-=0~. \end{equation} Putting this together, we find that the generalised Hitchin flow equations are given by \begin{eqnarray} && \label{H1}\textstyle{\frac1{n^1}}e^{-\varphi+pU}\partial_r {\rm Im} (e^{(1-\lambda)U+\frac{K_+}2}\hat\Phi^0_-)=e^{(1-\lambda)U+\frac{K_+}2}\,\mathcal{D}{\rm Re}(h\hat\Phi^0_+)~,\\ && \label{H2}\textstyle{\frac1{n^1}}e^{-\varphi+pU}\partial_r{\rm Im}(e^{U+\frac{K_+}2}h\hat\Phi^0_+)=-e^{U+\frac{K_+}2}\left[\mathcal{D}{\rm Re}\hat\Phi^0_- +\kappa\,\hat F_{\tilde\eta}\right]~,\\ && \label{H3}\mathcal{D}{\rm Im}\hat\Phi^0_-=0~,\\ &&\label{H4} \mathcal{D}{\rm Im}(h\hat\Phi^0_+)=0~. \end{eqnarray} Note that for vanishing RR fields and RR fluxes, $\kappa=\lambda=0$. In order to prepare for the comparison with the ten-dimensional result, we will rewrite \eq{H1}-\eq{H4} in a more convenient way. First, we can make use of the definitions \eq{Frescintro} and \eq{etatilde}, and use the dilaton relation \eq{10Adil} to substitute for $K_+$. Then we can use the expression for the four-dimensional dilaton that is valid for domain wall configurations, $\lambda U=-(\varphi+U)$ (see \rf{4dimdil}), to obtain \begin{eqnarray} &&\!\!\!\! \label{H1D}\textstyle{\frac1{n^1}}e^{-\varphi+pU-2(U+\varphi)}\partial_r {\rm Im} (e^{2(U+\varphi)} e^{-\phi}\hat\Phi^0_-)=\mathcal{D}{\rm Re}(e^{-\phi}h\hat\Phi^0_+)~,\\ && \!\!\!\! \label{H2D}\textstyle{\frac1{n^1}}e^{-\varphi+pU-(U+\varphi)}\partial_r{\rm Im}(e^{U+\varphi} e^{-\phi}h\hat\Phi^0_+)=-\mathcal{D}{\rm Re}(e^{-\phi}\hat\Phi^0_-) -\kappa\sqrt2\,\hat F,\\ && \!\!\!\! \label{H3D}\mathcal{D}{\rm Im}(e^{-\phi}\hat\Phi^0_-)=0~,\\ &&\!\!\!\! \label{H4D} \mathcal{D}{\rm Im}(e^{-\phi}h\hat\Phi^0_+)=0~. \end{eqnarray} \subsection{Orientifold projection}\label{op} In the previous subsection we considered a supersymmetric domain wall solution in ${\cal N}=2$ supergravity preserving one half of the original supersymmetry \eq{project}.
One can interpret such a solution as originating from the compactification of an appropriate ten-dimensional brane configuration filling three out of the four uncompactified spacetime directions, as we shall describe further in sections \ref{s3} and \ref{s4}. For consistency, such compactifications often require the introduction of orientifold planes which may produce a further reduction of supersymmetry. Here, we shall discuss how the flow equations that we have derived above get modified by such an orientifold-type projection. In \cite{Andrianopoli:2001gm} and \cite{D'Auria:2005yg} it was shown how one can perform a consistent truncation of gauged ${\cal N}=2$ supergravity with scalar and scalar-tensor multiplets, respectively, such that the ${\cal N}=2$ multiplets are rearranged into ${\cal N}=1$ multiplets. The identification of the fields to be truncated out involves considering the consistency of the fermionic supersymmetry transformation laws \eq{ferm} when a linear combination of the two gravitini is set to zero\footnote{In this section we are using the notation of \cite{Cassani:2007pq}.} \begin{equation} q^{{\hat{A}}\dagger}\pi_{{\hat{A}}\mu}=0~,\label{nograv} \end{equation} with the independent combination \begin{equation} \psi_{\mu +}=p^{{\hat{A}}\dagger}\pi_{\mu {\hat{A}}}~, \end{equation} being identified with the ${\cal N}=1$ gravitino. The projectors $p_{\hat{A}}$ and $q_{\hat{A}}$ satisfy \begin{equation} p^{{\hat{A}}\dagger}p_{\hat{A}}=q^{{\hat{A}}\dagger}q_{\hat{A}}=1,\quad\quad p^{{\hat{A}}\dagger}q_{\hat{A}}=q^{{\hat{A}}\dagger}p_{\hat{A}}=0\label{N=1proj}~. \end{equation} Correspondingly the supersymmetry parameter combination \begin{equation} \zeta_+=p^{{\hat{A}}\dagger}\varepsilon_{\hat{A}}\label{e1}~, \end{equation} generates ${\cal N}=1$ supersymmetry, while the orthogonal combination \begin{equation} \lambda_+= q^{{\hat{A}}\dagger}\varepsilon_{\hat{A}}\label{e2}~, \end{equation} should not appear in the ${\cal N}=1$ supersymmetry transformation laws. It is not difficult to realise \cite{Andrianopoli:2001gm} that the only way to achieve this without spoiling the ${\cal N}=1$ supermultiplet structure is to set one of the two supersymmetry generators to zero $\lambda_+=0$, as one might expect. Consistency of \eq{nograv} then requires that the corresponding gravitino shift is set to zero \begin{equation} W_\perp\equiv -2i\,q^{{\hat{A}}\dagger}S_{{\hat{A}}{\hat{B}}}\,p^{*{\hat{B}}}=0~.\label{noWperp} \end{equation} Solving the condition \eq{noWperp} determines which scalar fields are truncated out by the projection and thus how the domain wall solution gets modified. The truncation of the other fermionic transformation laws provides conditions for the consistent reduction of the scalar manifolds \cite{Andrianopoli:2001gm}. We are not going to make any assumption about the preserved supersymmetry. Rather, as we did in the previous section, we will derive the BPS projectors from the supersymmetry conditions \eq{psi}-\eq{zi}, this time imposing \eq{noWperp}. 
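It is perhaps worth noting that, since \eq{N=1proj} states that $p_{\hat{A}}$ and $q_{\hat{A}}$ form an orthonormal basis for the $SU(2)$ doublets, they satisfy the completeness relation (a standard linear-algebra fact) \begin{equation} p_{\hat{A}}\,p^{{\hat{B}}\dagger}+q_{\hat{A}}\,q^{{\hat{B}}\dagger}=\delta_{\hat{A}}^{~{\hat{B}}}~, \end{equation} so that any supersymmetry parameter decomposes as $\varepsilon_{\hat{A}}=p_{\hat{A}}\,\zeta_++q_{\hat{A}}\,\lambda_+$, with $\zeta_+$ and $\lambda_+$ defined in \eq{e1} and \eq{e2}.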
It is convenient to work from the outset in terms of the spinors \eq{e1} and \eq{e2}, for which the BPS projector \eq{project} \begin{equation} \varepsilon_{\hat{A}}=\mp 2i\bar{h}\,\gamma^3\ n_{{\hat{A}}{\hat{B}}}\varepsilon^{\hat{B}}\label{N=2BPS}~,\end{equation} gives rise to \begin{eqnarray} &&\zeta_+=\pm\bar{h}\gamma_3\left[ n_{\parallel}\,\zeta_-+n_{\perp}\lambda_-\right]~,\nonumber\\\nonumber\\ &&\lambda_+=\pm\bar{h}\gamma_3\left[n^{\dagger}_{\parallel}\lambda_-+n_{\perp}\zeta_-\right]\label{projnnn}~, \end{eqnarray} where \begin{equation} n_{\parallel}\equiv -2i\,p^{{\hat{A}}\dagger}n_{{\hat{A}}{\hat{B}}}\,p^{*{\hat{B}}},\quad\quad n_\perp\equiv -2i\,q^{{\hat{A}}\dagger}n_{{\hat{A}}{\hat{B}}}\,p^{*{\hat{B}}}~.\end{equation} Note that, according to \eq{normphase}, \begin{equation} W_{\parallel}\equiv Wn_{\parallel}~,\quad\quad\quad W_\perp\equiv W n_\perp~. \end{equation} It is natural to expect that on implementing the truncation, i.e. imposing condition \eq{noWperp} \begin{equation} W_\perp= n_\perp=0\label{Wperp=0}~, \end{equation} we will obtain two copies of an ${\cal N}=1$ $\textstyle{\frac12}$ BPS condition: \begin{eqnarray} && \zeta_+=\pm\bar{h} n_{\parallel}\gamma_3\,\zeta_-\label{projn}~,\\\nonumber\\ && \lambda_+=\pm\bar{h} n^{\dagger}_{\parallel}\gamma_3\,\lambda_-~.\label{projn*} \end{eqnarray} In fact, let us consider again the component $\delta\pi_\alpha=0$ of equation \eq{psi}, this time imposing $n_{\perp}=0$, according to the truncation condition \eq{noWperp}. We then find \begin{eqnarray} && U^\prime e^{pU}\gamma_3\zeta_+=n_{\parallel}W\zeta_-\label{psialpha1}~,\\\nonumber\\ && U^\prime e^{pU}\gamma_3\lambda_+=n^{\dagger}_{\parallel}W\lambda_-\label{psialpha2}~. \end{eqnarray} The consistency of these expressions again gives \eq{Uprime}. If we now insert \eq{Uprime} back into \eq{psialpha1} and \eq{psialpha2} we obtain, as expected, the projectors \eq{projn} and \eq{projn*}. Continuing as before, we can use $\delta\pi_{r{\hat{A}}}=0$ to obtain \begin{eqnarray} n_\parallel\bar{h}\,\partial_r(n^{\dagger}_\parallel h)&=&\pm i e^{-pU}{\rm Im}(hW)~,\\ n^{\dagger}_\parallel\bar{h}\,\partial_r(n_\parallel h)&=&\pm i e^{-pU}{\rm Im}(hW)~, \end{eqnarray} together with \eq{1ell} and \eq{Uprime2}. From this we can deduce that $\partial_r h=0$ and find the analogue of \eq{Dn} \begin{equation} \partial_r n_\parallel=0~. \end{equation} Using \eq{projn} and \eq{projn*} in \eq{la} and \eq{zi} we can proceed as before and obtain equations \eq{attz0} and \eq{attq0}, taking into account that due to the condition \eq{noWperp} the flow equations \eq{attz0} and \eq{attq0} are restricted to the set of scalars that survive the projection. From \eq{psi} we again obtain \eq{4drsoln}, which in the new basis reads \begin{eqnarray} \zeta_+&=&e^{\frac12 U}\zeta_+^0\label{KSz}~,\\ \lambda_+&=&e^{\frac12 U}\lambda_+^0\label{KSl}~, \end{eqnarray} where $\zeta_+^0$ and $\lambda_+^0$ are constant spinors subject to \eq{projn} and \eq{projn*}. As shown in \cite{D'Auria:2005yg} one can obtain three classes of projection of the ${\cal N}=2$ theory. Let us define \begin{equation} p_{\hat{A}}=\begin{pmatrix} a^*\cr b \end{pmatrix} ~,\end{equation} where the parameters $a$ and $b$ are complex constants satisfying (via \eq{N=1proj}) $|a|^2+|b|^2=1$. For orientifold projections $a$ and $b$ further satisfy $|a|^2-|b|^2=0$ and are related to the phase $\theta$ in \eq{Ccomp} by $e^{i\theta}=2|a|^2b/a^*$. 
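Incidentally, using the completeness of the basis $(p_{\hat{A}},q_{\hat{A}})$ together with the unitarity of $\vec n\cdot\vec\sigma$ for a real unit vector $\vec n$, one can check from the definitions above that \begin{equation} |n_\parallel|^2+|n_\perp|^2=\vec n\cdot\vec n=1~, \end{equation} so imposing $n_\perp=0$ \eq{Wperp=0} automatically forces $n_\parallel$ to be a pure phase, a fact we shall use below.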
The three types of projection are the ``Heterotic'' projection $\left(a=1,\,b=0\right)$, which projects out all the RR fields, and two orientifold projections $\left(a=\frac1{\sqrt 2},\,b=-\frac{i}{\sqrt2}\right)$, $\left(a=\frac1{\sqrt 2},\,b=-\frac{1}{\sqrt2}\right)$. For these different values of $a$ and $b$ we obtain \begin{eqnarray} \label{HH} (H)\quad\quad\quad a=1,\ b=0~,\quad&&\quad n_\perp=-n^3=0,\quad\quad n_\parallel=(n^1+in^2)~,\nonumber\\\nonumber\\ \label{B}(B)\quad a=\textstyle{\frac1{\sqrt 2},\ b=-\frac{i}{\sqrt2}}~,\quad&&\quad n_\perp=n^2=0,\quad\quad \ \ \, n_\parallel=(n^1+in^3)~, \nonumber\\\nonumber\\ \label{C}(C)\quad a=\textstyle{\frac1{\sqrt 2},\ b=-\frac{1}{\sqrt2}}~,\quad&& \quad n_\perp=n^1=0,\quad\quad\ \ \, n_\parallel=(n^3+in^2)~. \nonumber \end{eqnarray} It is straightforward to see that $n_\parallel$ is a pure phase using \rf{normphase} and \rf{Wperp=0}. The condition \eq{Wperp=0} corresponds to $W^3=0$, $W^2=0$, $W^1=0$ in the three cases $H$, $B$ and $C$, respectively. This is enough to identify the reduction of the scalar sector and, hence, the corresponding truncation. Note that when manipulating \eq{attq0} we found that we could only write a flow equation for ${\rm Im}\hat\Phi^0_-$ when $W^2=0$. This meant that a solution corresponding to the geometry described by \eq{H1}-\eq{H4} naturally splits ${\cal N}=2$ supersymmetry into two copies of ${\cal N}=1$ via the choice $B$. This suggests that the ten-dimensional configurations that give rise to such domain wall solutions feature orientifold planes with $\theta=\pm\frac\pi2$\footnote{The $\pm$ sign is due to the ${\rm SU}(2)$ symmetry of the problem that allows us to exchange $\varepsilon_1$ with $\varepsilon_2$ by acting with $\sigma^1$ \cite{D'Auria:2005yg}. In our case, this would just correspond to a different choice of $a$ and $b$.}. Finally, we shall briefly discuss the truncation $H$, since it is likely to correspond to an $SU(3)\times SU(3)$ structure compactification of the Heterotic theory at the zeroth order in $\alpha^\prime$. As in \cite{D'Auria:2005yg} we can see that the constraint $W_\perp=W^3=0$ implies that all the RR scalar fields and fluxes are identically zero \begin{equation}\zeta^A=\tilde\zeta_A=e_{\bf\Lambda}=m^{\bf\Lambda}=0~,\end{equation} and that the ${\cal N}=1$ K\"ahler-Hodge manifold is the product of the spaces of complex structure deformations and K\"ahler class deformations. As there is no projection acting on the Calabi-Yau manifold, the definition of $\hat\Phi_{\pm}$ is the same as in the ${\cal N}=2$ case \eq{p1}-\eq{p2}. In the absence of the RR part, where $n^1=1$ and $\kappa=\lambda=0$, the equations \eq{H1D}-\eq{H4D} read \begin{eqnarray} && d_H{\rm Re}(e^{-\phi}\,h\hat\Phi^0_+)=\partial_y {\rm Im}(e^{-\phi}\,\hat\Phi^0_-)~,\\ && d_H{\rm Re}(e^{-\phi}\,\hat\Phi^0_-)=-\partial_y {\rm Im}(e^{-\phi}\,h\hat\Phi^0_+)~,\\ &&d_H{\rm Im}(e^{-\phi}\,\hat\Phi^0_-)=0~,\\ &&d_H{\rm Im}(e^{-\phi}\,h\hat\Phi^0_+)=0~, \end{eqnarray} where we have defined a new transverse coordinate $y$ through \begin{equation} \partial_y\equiv e^{(1+p)U}\partial_r~. \end{equation} From the ten-dimensional perspective it is known that there are subtleties in the analysis of pure spinor equations for heterotic compactifications (see \cite{Andriot:2009fp} for a recent discussion), and so we shall leave a detailed discussion of this case for later work. \section{Domain walls in 4D from 10D supergravity}\label{s3} We will now turn to the derivation of the flow equations from the ten-dimensional perspective.
We want to have a ten-dimensional type II supergravity configuration which gives rise to a domain wall in the effective, four-dimensional description. Therefore, we shall consider an ansatz for a spacetime of the form $M^{1,2}\times_w\mathbb{R}\times_w \hat{Y}$, where $\hat{Y}$ is an $SU(3)\times SU(3)$ structure manifold and the products are warped. As we shall make use of the democratic formalism \cite{bkorp}, it is most convenient to work in the string frame. We will take the following general form for the metric: \begin{equation} ds^2 = e^{2A(y,r)}\left( e^{2V(r)}\eta_{\alpha\beta}dx^\alpha dx^\beta + e^{2G(r)}dr^2 \right) + g_{mn}(r,y)dy^m dy^n~,\label{metric} \end{equation} where now $\alpha,\beta=0,1,2$ label the domain wall worldvolume directions, which are flat, and $m,n=1,\ldots 6$ label directions on $\hat{Y}$. $A(y,r)$ is called the warp factor and we want the term in brackets to describe a supersymmetric domain wall solution in four dimensions. We shall allow the ten-dimensional dilaton $\phi$ to depend on the transverse and internal coordinates, $\phi = \phi(y,r)$, as is appropriate for a domain wall configuration. Let us introduce the modified RR field strengths \begin{eqnarray} F_{(n+1)}=dC_{(n)}+H\wedge C_{(n-2)}\ , \end{eqnarray} where $dC_{(n)}$ are the standard RR field strengths\footnote{We are essentially following the conventions of \cite{gmpt1,gmpt2}, up to some differences consisting of a sign for $H$ in type IIB and the sign change $C_{(2n+1)}\rightarrow (-)^nC_{(2n+1)}$ in type IIA.}. The most general RR flux decomposition respecting the domain wall symmetry is \begin{equation} F^{(10)}_{n} = \mathrm{vol_{dw}}\wedge f^{||}_{n-3} + dr\wedge f^{\bot}_{n-1} + \hat{F}_{n} + \mathrm{vol_{4}}\wedge \tilde{F}_{n-4}~, \label{RR} \end{equation} where $\mathrm{vol_{dw}}$ and $\mathrm{vol_{4}}$ denote the obvious volume forms on the domain wall $\mathbb{R}^{1,2}$ and the total four-dimensional external space $\mathbb{R}^{1,2}\times\mathbb{R}_r$, respectively (both viewed from ten dimensions). All $f$'s and $F$'s are forms on the internal manifold $\hat{Y}$. For domain walls the $\hat{F}_p$ and $\tilde{F}_p$ are purely internal and external $p$-form fluxes, respectively. In type IIA, the index $n$ runs over $0, 2, 4, 6, 8, 10$ while in type IIB $n$ runs over $1, 3, 5, 7, 9$. From now on, we shall set $f^{||} = 0 = f^{\bot}$, in agreement with the choice of section \ref{s2} where the tensor fields were set to zero. In four dimensions such tensor fields would correspond to fluxes on the domain wall worldvolume and are not considered in \cite{Grana:2006hr}, to which we want to make contact. The RR fluxes described above contain both field strengths and their duals, so we must impose the self-duality relations \begin{equation}\label{sd} F^{(10)}_{(n)}=(-)^{\frac{(n-1)(n-2)}{2}}\star_{10} F^{(10)}_{(10-n)}~, \end{equation} between the lower and higher rank field strengths. Unless explicitly stated otherwise, we shall always make use of the self-duality relations to write the RR fields entirely in terms of $\hat{F}$. The NS flux is decomposed in a similar manner as follows: \begin{equation}\label{NS} H^{(10)} = H_3 + dr\wedge b'_2~, \end{equation} where $H_3$ and $b_2(y,r)$ are forms on $\hat{Y}$, and $'$ denotes a transverse derivative $\partial/\partial r$. Once again, for simplicity we shall only consider the $b_2 = 0$ case here.
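To make the decompositions \eq{RR} and \eq{sd} concrete, consider as an illustration the type IIA four-form with $f^{||}=f^{\bot}=0$: \begin{equation} F^{(10)}_{4}=\hat{F}_{4}+\mathrm{vol}_{4}\wedge \tilde{F}_{0}~,\qquad F^{(10)}_{4}=-\star_{10} F^{(10)}_{6}~, \end{equation} where the sign follows from $(-)^{\frac{(n-1)(n-2)}{2}}$ with $n=4$. Schematically, the external flux $\tilde F_0$ is therefore fixed by the purely internal six-form piece $\hat F_6$, and vice versa, which is what allows us to express everything in terms of $\hat{F}$.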
\subsection{Analysing the supersymmetry variations} The type II gravitino and dilatino supersymmetry variations in string frame are \begin{eqnarray} \delta\psi_M &=& (D_M \pm \frac{1}{4} H_M {\cal P})\epsilon + \frac{e^{\phi}}{16} \sum_n \slash{\hat{F}}_{2n} \Gamma_M{\cal P}_n\epsilon~, \\ \delta\lambda & = & (\Gamma^M{\partial}_M\phi \pm \frac{1}{2} \slash{H}{\cal P})\epsilon + \frac{e^{\phi}}{16} \sum_n \Gamma^M\slash{\hat{F}}_{2n}\Gamma_M {\cal P}_n\epsilon ~, \end{eqnarray} where one chooses the upper sign for IIA and the lower sign for IIB. Capital Latin letters run over all ten directions. These expressions are written in the democratic formalism \cite{bkorp} with all spinor indices suppressed; so, for instance, $\epsilon = (\epsilon^1, \epsilon^2)$ is a doublet of ten-dimensional Majorana-Weyl spinors. The ${\cal P}$ matrices act on these doublets as ${\cal P} = \Gamma_{11}$ and ${\cal P}_n = \Gamma_{11}^n\sigma^1$ in type IIA, and as ${\cal P} = -\sigma^3$, ${\cal P}_{n} = \sigma^1$ for $(n/2+1/2)$ even and ${\cal P}_n = i\sigma^2$ for $(n/2+1/2)$ odd in type IIB. We will also make use of the modified dilatino variation \cite{gmpt2}, \begin{eqnarray} \delta \Lambda = \Gamma^M \delta\psi_M - \delta \lambda = 0 \, \label{mod0}~, \end{eqnarray} in which all RR terms cancel, since the RR contribution to $\Gamma^M\delta\psi_M$ coincides with the RR term in $\delta\lambda$. We begin by substituting our ansatz for the metric \rf{metric}, the RR fields \rf{RR} and the NS field \rf{NS} into the supersymmetry variations, \begin{eqnarray} \delta\psi^1_\alpha &=& \frac{1}{2} \Gamma_\alpha \slash{\partial}A \epsilon^1 + \frac{1}{2} e^{A-G}(V'+A')\Gamma_{\ul{\alpha r}}\epsilon^1 - \frac{e^\phi}{8}\slash{\hat{F}} \Gamma_\alpha\epsilon^2 \label{wv0} =0 ~,\\ \delta\psi^1_r &=& \partial_r\epsilon^1 + \frac{1}{2} \Gamma_r \slash{\partial}A \epsilon^1 - \frac{e^\phi}{8}\slash{\hat{F}} \Gamma_r \epsilon^2 \label{radial0} =0 ~,\\ \delta\psi^1_m &=& \left(D_m + \frac{1}{4} H_m \right)\epsilon^1 - \frac{1}{4} \Gamma^{rn} g'_{mn} \epsilon^1 - \frac{e^\phi}{8}\slash{\hat{F}} \Gamma_m \epsilon^2 = 0\label{int0}~, \\ \delta\Lambda &=& \left( \slash{D} - \slash{\partial}\phi + \frac{1}{4} \slash{H} + 2\slash{\partial}A \right)\epsilon^1 + (2V'+2A'-\phi')e^{-(A+G)}\Gamma_{\ul{r}}\epsilon^1 \nonumber \\ &~&+ ~~ \frac{1}{4} g^{mn} g'_{mn} \Gamma^r \epsilon^1 = 0 \label{md0}~, \end{eqnarray} where we have made the standard ansatz that $\epsilon$ is independent of the worldvolume coordinates, $\epsilon = \epsilon(y,r)$. Underlined indices are flat tangent space indices. From now on it should be understood that slashed quantities are purely internal, e.g. $\slash{\partial}\phi \equiv \Gamma^m \partial_m\phi$. The $\epsilon^2$ variations are found from the expressions above by taking the map \begin{eqnarray} \slash{H} \rightarrow -\slash{H},~~~~\slash{\hat{F}} \rightarrow + \slash{\hat{F}}^\dagger \label{map}~, \end{eqnarray} and interchanging $\epsilon^1$ and $\epsilon^2$. The transverse component of the gravitino variation $\delta\psi^1_r$ plays an important role in what follows. We want to manipulate this component so that we can determine the transverse dependence of the spinor parameter $\epsilon$. By comparing with the modified dilatino variation, it is straightforward to see that we can use the worldvolume component of the gravitino variation to simplify the transverse component.
Specifically, we calculate $\Gamma^\alpha \delta\psi^1_\alpha$ and use the result to substitute for the $\slash{\partial}A$ and RR terms in $\delta\psi^1_r=0$ to find \begin{equation} \delta\psi^1_r = \partial_r\epsilon^1 - \frac{1}{2} (A' + V')\epsilon^1 = 0~. \label{radial1} \end{equation} This is easily solved by a typical domain wall ansatz, factoring out the transverse dependence\footnote{While \rf{rsoln} appears to be the same as \rf{4drsoln}, the reader is reminded that the two expressions are written in different frames. We shall carry out a comparison in section \ref{s4}.}: \begin{equation} \epsilon(r, y^m) = e^{\frac{1}{2}(A+ V)}\epsilon_0(y^m)~.\label{rsoln} \end{equation} If we then substitute this back into \rf{radial0} we find \begin{equation} \delta\psi^1_r = \frac{1}{2} (A' + V')\epsilon^1 + \frac{1}{2} \Gamma_r \slash{\partial}A \epsilon^1 - \frac{e^{\phi}}{8}\slash{\hat{F}} \Gamma_r \epsilon^2 =0~ \label{radial2}~, \end{equation} which is the same as $\delta\psi^1_\alpha=0$. We should now decompose the ten-dimensional quantities appearing here into four- and six-dimensional components. This requires us to make an ansatz for the spinor decomposition and for the projection condition enforced by the domain wall. \subsection{${\cal N}=1$ spinor ansatz and the BPS projection condition} Throughout the rest of this section we shall focus on type IIA supergravity, as the type IIB case proceeds analogously. For type IIA backgrounds the supersymmetry parameter is a ten-dimensional Majorana spinor $\epsilon$ that can be split into two Majorana-Weyl spinors of opposite chirality. In order to harness the compact pure spinor notation found in the equations describing ${\cal N}=1$ Minkowski and AdS vacua \cite{gmpt2}, we shall consider backgrounds which initially preserve at least four supercharges before any domain wall projection conditions are applied. The Killing spinors can be decomposed as \begin{eqnarray}\label{spinors} \epsilon^1_0(y) &=&\zeta_+\otimes \eta^{(1)}_{+}(y)+\zeta_-\otimes \eta^{(1)}_{-}(y)\ ,\cr \epsilon^2_0(y) &=&\zeta_+\otimes \eta^{(2)}_{-}(y)+\zeta_-\otimes \eta^{(2)}_{+}(y)\ , \end{eqnarray} where $\zeta_{+} = (\zeta_{-})^*$ is a generic constant four-dimensional spinor of positive chirality, while the $\eta^{(a)}_+=(\eta^{(a)}_-)^*$ are two particular six-dimensional commuting spinors of positive chirality that characterise the solution. In the usual abuse of notation, we use the subscripts $\pm$ to denote both four- and six-dimensional chirality. The norms of the internal spinors are defined as \begin{eqnarray} ||\eta^{(1)}||^2=|a|^2~,\quad\quad ||\eta^{(2)}||^2=|b|^2\ . \label{norms} \end{eqnarray} We would like to find a domain wall solution preserving at least $1/2$ of the ${\cal N}=1$ supersymmetry in four dimensions, i.e. 2 supercharges. Therefore, we should expect the four four-dimensional supersymmetries to be related by a projection condition. Motivated by the probe Dp-brane supersymmetry projection for general ${\cal N}=1$ backgrounds \cite{ms,Koerber:2005qi}, we make the following ansatz for the domain wall projection condition: \begin{equation}\label{proj1} \gamma_{\ul{0\ldots 2}}\zeta_+=\alpha^{-1} \zeta_{-} \,~~ \Longrightarrow ~~ \gamma_{\ul{r}}\zeta_+= i \alpha^{-1} \zeta_{-}~. \end{equation} Consistency of this ansatz requires that $\alpha$ is a pure phase, $\alpha^{-1} = \alpha^*$.
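A quick way to see this, assuming for convenience a basis in which the $\gamma$-matrices are real: conjugating $\gamma_{\ul{r}}\zeta_+=i\alpha^{-1}\zeta_-$ and using $\zeta_-=(\zeta_+)^*$ gives $\gamma_{\ul{r}}\zeta_-=-i(\alpha^{-1})^*\zeta_+$, so that \begin{equation} \zeta_+=\gamma_{\ul{r}}^{\,2}\,\zeta_+=i\alpha^{-1}\,\gamma_{\ul{r}}\zeta_-=\alpha^{-1}(\alpha^{-1})^*\,\zeta_+\quad\Rightarrow\quad |\alpha|=1~. \end{equation}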
Following \cite{hmt}, it would be straightforward to extend our spinor ansatz \rf{spinors} and the projection condition \rf{proj1} to allow for a greater amount of supersymmetry. The projection condition \rf{proj1} is a physically well-motivated choice as it agrees with the supersymmetry constraints on $\frac{1}{2}$-supersymmetric domain walls in ${\cal N}=1$ supergravity in four dimensions \cite{Cvetic:1996vr} and with \rf{projn}. It is natural to ask whether there is any relation between $\alpha$, $a$ and $b$. For probe D-branes in Minkowski or AdS backgrounds certain relations have been found for specific examples in \cite{Aharony:2008wz}. However, as this appears to be an example-dependent feature we shall not impose any further relations between the phase and the complex coefficients here. In general, we shall only require that the domain wall should preserve two of the four four-dimensional supercharges, corresponding to ${\cal N}=1$ supersymmetry on its worldvolume. From the ten-dimensional perspective this amounts to a preservation of $1/16$th supersymmetry. We shall not pursue this point any further, but refer the reader to \cite{Caviezel:2008ik} for a discussion of $1/16$th-supersymmetric type II intersecting brane configurations that give rise to domain walls after dimensional reduction on cosets and nilmanifolds. \subsection{Pure spinors and Hitchin flows for domain wall vacua} Using our ansatz for the projection condition \rf{proj1}, along with the $4+6$ decomposition of the spinors \rf{spinors}, we can now rewrite our supersymmetry variations in terms of pure spinors. We shall give the result here and refer to our appendix \ref{psd} and appendix A of \cite{gmpt3} for further details of the calculations. We construct the normalised pure spinors from bispinors as follows: \begin{equation} \slash{\Phi}_\pm = -\frac{8 i}{|a|^2} \eta^{(1)}_+\otimes\eta^{(2)\dagger}_{\pm}~. \label{nps} \end{equation} By virtue of \rf{rsoln} and \rf{spinors}, the bispinors $\eta^{(1)}_+\otimes\eta^{(2)\dagger}_{\pm}$ are independent of the transverse coordinate. However, using the Clifford map \rf{cmap} we see that the related differential polyforms have $r$-dependence through the vielbein. As is usual in the literature, we shall drop the slash notation which distinguishes the polyform and bispinor. We would now like to rewrite the supersymmetry conditions derived above in terms of differential constraints on $\Phi_\pm$. After a lengthy calculation we find\footnote{These equations have been derived independently in \cite{hlmt}.} \begin{eqnarray} d_H \left[e^{2A - \phi} \mathrm{Im} \Phi_- \right] \!\! &=&\!\! 0 \label{ps1} ~,\\ d_H \left[e^{4A - \phi} \mathrm{Re} \Phi_- \right] \!\!&=&\!\! e^{4A} \tilde{F} - e^{-3V-G} \mathrm{Im} \left( \alpha^* \partial_r \left[ e^{3A +3V -\phi} \Phi_+ \right]\right), \label{ps2} \\ d_H \left[e^{3A-\phi} \mathrm{Im} \left(\alpha^* \Phi_+\right) \right] \!\!&=&\!\! 0 \label{ps3}~,\\ d_H \left[e^{3A-\phi} \mathrm{Re} \left(\alpha^* \Phi_+\right) \right] \!\!&=&\!\! e^{-2V-G} ~ \partial_r \mathrm{Im} \left[ e^{2A+2V-\phi} \Phi_- \right] ~,\label{ps4} \end{eqnarray} where $d_H \equiv d + H\wedge$ is the twisted exterior derivative on the six-dimensional manifold $\hat{Y}$. The result for type IIB is found by taking the map \rf{map}. In deriving these expressions we have made use of the following additional constraint derived from $\delta\psi_m$ \begin{eqnarray} \label{backsusy2} d|a|^2=|b|^2dA~,\quad\quad d|b|^2=|a|^2dA\ .
\end{eqnarray} Note that we have used $\tilde{F} = \star\,\sigma(\hat{F})$, where $\sigma$ is an involution which reverses the order of indices on a form, to rewrite the RR fluxes $\hat{F}$ in terms of their duals ${\tilde{F}}$. This makes it straightforward to verify that the above equations agree with those presented in \cite{gmpt3,ten2four,adsbranes}, in the limit where the four-dimensional component of the spacetime becomes Minkowski or AdS. In general, one finds that the right-hand side of \rf{ps1} is non-vanishing and proportional to $(|a|^2-|b|^2)\hat{F}$. As described in section \ref{op}, orientifold projections enforce $|a|^2=|b|^2$, and in \cite{ms} it was shown that the same condition is necessary for an ${\cal N}=1$ Minkowski spacetime to admit supersymmetric probe D-branes. For anti-de Sitter spacetimes, one can show that $|a|^2=|b|^2$ is a consistency condition of the background itself \cite{gmpt3}, independent of the probe D-brane argument. When the warp factor $A$ is independent of the transverse direction $r$, it is straightforward to check that the same holds true of the domain wall background \rf{metric}. We can also rewrite the four external components of the gravitino variations, \rf{wv0} and \rf{radial0}, in terms of the pure spinors. We have already shown that the transverse dependence of the spinor parameter $\epsilon$ can be factored out \rf{rsoln}, leaving us with just one equation \rf{radial2}, which was used in the derivation of \rf{ps1}-\rf{ps4}. Equation \rf{radial2} is a transverse flow equation for the metric data $A$ and $V$, with a potential given in terms of $F$ and $dA$. Following \cite{hmt,Grana:2006hr}, we will rewrite the potential using the Mukai pairing on the internal manifold \rf{Mukai}. Multiplying \rf{radial2} on the right by $\eta^{(2)\dagger}_{+}$ and taking the spinor trace we find \begin{equation} (A'+V') = - \frac{i \alpha e^\phi}{4} \frac{\langle \hat{F}, \overline{\Phi}_{\pm} \rangle}{\langle \Phi_{-}, \overline{\Phi}_{-} \rangle}~, \label{auflow} \end{equation} and we see that the $dA$ contribution drops out by virtue of the compatibility condition \rf{comp}. In the following section we will show how equations \rf{ps1}-\rf{ps4} and \rf{auflow} agree with those derived by analysing the flow of the vector and hypermultiplet scalar fields in four dimensions. \subsubsection*{Hitchin flow} We have shown that the set of equations \rf{ps1}-\rf{ps4} defined in terms of pure spinors on $\hat{Y}$, along with \rf{backsusy2}, describes the necessary conditions for a domain wall solution in four dimensions preserving at least 2 supercharges. Following the literature discussing the cases with vanishing fluxes \cite{Witt} (see also section \ref{s4.1}), it is interesting to ask whether there is a 7-dimensional interpretation of these results, where the manifold $\hat{Y}$ is fibred over an interval given by the direction transverse to the domain wall.
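For later reference, the natural seven-dimensional geometry in this fibration is read off from \rf{metric} by stripping off the warped worldvolume part: \begin{equation} ds_7^2=e^{2(A+G)}dr^2+g_{mn}(r,y)\,dy^m dy^n~, \end{equation} which accounts for the vielbein factor $e^{A+G}dr$ appearing in the structure forms below.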
In fact, following the discussion of the $AdS_4$ backgrounds in \cite{adsbranes}, it is relatively straightforward to show that this set of equations can be rewritten in terms of a generalised $G_2$ structure defined by \cite{pk}\footnote{We thank Paul Koerber for useful discussions on this point.} \begin{eqnarray} \rho &=& e^{A+G} dr \wedge \mathrm{Re} \Phi_- \mp \mathrm{Im} \left(\alpha^* \Phi_+\right) \, ~,\label{r}\\ \hat \rho &=& - \mathrm{Im} \Phi_- \mp e^{A+G} dr \wedge \mathrm{Re} \left(\alpha^* \Phi_+\right)~,\label{rh} \end{eqnarray} where $\rho$ and $\hat\rho$ are related by the generalised Hodge star in seven dimensions. It is then possible to rewrite the six-dimensional pure spinor equations in terms of the seven-dimensional quantities as \begin{eqnarray} \hat d_H \left[e^{2(A+V)-\phi}\hat\rho \right] &=& 0 ~, \\ \hat d_H \left[e^{3(A+V)-\phi} \rho \right] &=& - e^{4A +3V + G} dr \wedge \tilde{F}~, \end{eqnarray} where now $\hat d_H = d_H + dr\partial_r$ is the twisted exterior derivative in seven dimensions. These equations define a generalised $G_2$ structure $\rho$ which is integrable with respect to $H$ and $F$, or, alternatively, an almost generalised $G_2$ structure with $F$ providing the obstruction to integrability \cite{Jeschek,adsbranes}. This implies that our set of equations \rf{ps1}-\rf{ps4} constitutes a set of \textit{generalised Hitchin flow} equations describing the embedding of the $SU(3)\times SU(3)$ structure manifold $\hat{Y}$ into a generalised $G_2$ manifold, where one now has a $G_2\times G_2$ structure defined on the formal sum of the tangent and cotangent bundles of the seven-manifold. Written in seven-dimensional notation, these equations are the same as their Minkowski \cite{Jeschek} and $AdS$ counterparts \cite{adsbranes}. One only notices a difference on decomposing $\rho$ and $\hat\rho$ into forms on $\hat{Y}$, when the more general $r$-dependence of the pure spinors on a domain wall background becomes apparent. In Hitchin's original language \cite{HitchinHF}, the case with $F=0$ was called a strongly integrable generalised $G_2$ structure. When the flux contribution $dr\wedge \tilde{F}$ is identified as being proportional to $\rho$, the flow equations give Hitchin's definition of a weakly integrable generalised $G_2$ structure. \section{Comparing the flow equations}\label{s4} In this section we shall show that our domain wall vacua equations derived in ten and four dimensions are in agreement. By directly studying the supersymmetry variations for domain wall configurations we are essentially carrying out an on-shell (up to imposing Bianchi identities \cite{Koerber:2007hd}) test of the procedure proposed in \cite{Grana:2006hr} for $SU(3)\times SU(3)$ structure compactifications. The analogous check for ${\cal N}=1$ Minkowski and AdS vacua has been carried out in \cite{Cassani:2007pq}, where it was shown that the pure spinor equations found in ten dimensions \cite{gmpt2} agree with those found by first carrying out an $SU(3)\times SU(3)$ structure compactification \cite{Grana:2006hr} and then looking at the conditions for maximally symmetric vacua. A key point in this check is the relation between the pure spinors in ten dimensions \rf{nps} and the Kaluza-Klein truncated pure spinors in the four-dimensional effective theory \rf{p1}-\rf{p2}.
For $SU(3)\times SU(3)$ structure compactifications the truncation of the pure spinors such that appropriate special K\"ahler geometry arises in the kinetic terms of the resulting four-dimensional theory has been fully discussed in \cite{Grana:2006hr}. We have reviewed some necessary details of this in section \ref{s2} and appendix \ref{4dstr}, but for the remainder of this section it is sufficient for the reader to remember that one can make a rigorous comparison of the pure spinors. In order to compare our flow equations we have to pay attention to our conventions, in particular the metric signature and the choice of chirality assigned to the spinors. The supersymmetry analysis in section \ref{s2} employed the mostly-minus metric signature ($+,-,-,-$), in keeping with previous four-dimensional supergravity conventions \cite{Louis:2006wq}, whereas the ten-dimensional analysis used the mostly-plus signature ($-,+,\cdots,+$) which is more common in the flux compactification literature \cite{Grana:2005jc}. We can rewrite our ten-dimensional expressions in the mostly-minus convention by inserting a minus sign for any explicit upper index or metric factor $g_{MN}$ and multiplying all gamma matrices $\Gamma_M$ by $i$. Of particular importance is the projection condition \rf{proj1}, which becomes \begin{equation}\label{proj2} \zeta_+= \alpha^* \gamma_{\ul{r}} \zeta_{-}~. \end{equation} By comparing this with the four-dimensional projector \rf{projn}, we find that we should make the following identifications \begin{equation} \zeta_+ \rightarrow (n_{\parallel}^*)^{\frac{1}{2}} \zeta_+~, \qquad \alpha = h ~. \end{equation} The rescaling of spinors $\zeta_+$ can be understood as a K\"ahler transformation in four dimensions, and is equivalent to the $\mathbb{C}^*$ action on the pure spinors \rf{nps}. References \cite{Grana:2005ny,Cassani:2007pq} assigned negative chirality to $\epsilon^1$ and positive chirality to $\epsilon^2$, whereas we have used the opposite convention, see \rf{spinors}. Our pure spinors ${\Phi}_\pm$ are mapped to the complex conjugate of those used in \cite{Grana:2005ny,Cassani:2007pq}, denoted $\overline{\Phi^{0}_\pm}$, and we also need to send $H\rightarrow -H$. Therefore, we should work out the supersymmetry equations for $\overline{{\Phi}_\pm}$ and map them to equations in terms of $\Phi^{0}_\pm$ using\footnote{Unlike \cite{Cassani:2007pq}, we do not need to map the fluxes as we have used the uniform notation of \cite{ms} throughout.}: \begin{equation} \overline{\Phi_+} ~~\longrightarrow~~ \Phi^{0}_+ = X^{\bf\Lambda}\omega_{\bf\Lambda}-F_{\bf\Lambda}\omega^{\bf\Lambda}~,\end{equation} \begin{equation} - \overline{(\Phi_-)}~~ \longrightarrow ~~ \Phi^{0}_-= Z^A\alpha_A-G_A\beta^A~.\end{equation} The pure spinors $\Phi^{0}_\pm$ are now the same as those employed in section \ref{s2}, and they have been expanded on a finite basis of forms. As we found that $|a|^2=|b|^2$ for domain wall backgrounds, the complex coefficients $a$ and $b$ in the pure spinors \rf{nps} give rise to two combinations of phases. In fact, it is only the sum of the phases that is of physical significance, and it is convenient to set the phase in $\Phi_+$ to be $1$. For instance, in the $SU(3)$ structure case we then have \begin{eqnarray} \Phi^0_+= e^{iJ}~,\quad\quad \Phi^0_-= e^{2 i \theta}\Omega_\eta\ , \end{eqnarray} where now $\theta$ is identified with the phase appearing in the compensator field $C$ \rf{Ccomp}.
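Schematically, and assuming for illustration that in the $SU(3)$ structure case the two internal spinors are proportional to a single spinor $\eta_+$, i.e. $\eta^{(1)}_+=a\,\eta_+$ and $\eta^{(2)}_+=b\,\eta_+$, the definition \rf{nps} gives \begin{equation} \Phi_+\propto \frac{a\,\bar{b}}{|a|^2}\;e^{iJ}~,\qquad \Phi_-\propto \frac{a\,b}{|a|^2}\;\Omega_\eta~, \end{equation} so that with $\arg a=\arg b=\theta$ the phase of $\Phi_+$ indeed trivialises, while $\Phi_-$ carries the overall factor $e^{2i\theta}$ quoted above.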
The normalisation \rf{spinnorm} and compatibility \rf{comp} conditions for the $SU(3)$ structure case are \begin{eqnarray} J\wedge \Omega_\eta=0 \, , \qquad \frac{1}{3!}J\wedge J\wedge J=-\frac{i}{8}\Omega_\eta\wedge\bar\Omega_\eta~. \end{eqnarray} Let us look at how this affects the flow equations. In the reduction to four dimensions, the warp factor $A$ \eq{metric} is neglected \cite{Grana:2005ny,Grana:2006hr}; hence, in order to perform our comparison we will set $A=0$ in \rf{ps1}-\rf{ps4}: \begin{eqnarray} d_H \left[e^{- \phi} \mathrm{Im} \left( \Phi^0_-\right) \right] &=& 0 \label{nps1} \\ d_H \left[e^{- \phi} \mathrm{Re} \left( \Phi^0_-\right) \right] &=& - \tilde{F} - e^{-3V-G} \partial_r \left( \mathrm{Im} \left[ e^{3V -\phi} h \Phi^0_+ \right] \right), \nonumber \\ \label{nps2} \\ d_H \left[e^{-\phi} \mathrm{Im} \left(h \Phi^0_+\right) \right] &=& 0 \label{nps3}\\ d_H \left[e^{-\phi} \mathrm{Re} \left(h \Phi^0_+ \right) \right] &=& e^{-2V-G} ~ \partial_r \left(\mathrm{Im} \left[ e^{2V-\phi}~ \Phi^0_- \right]\right) ~,\label{nps4} \end{eqnarray} where now $d_H = d - H\wedge$. It is this set of equations \rf{nps1}-\rf{nps4} that should be used to compare with the results from section \ref{s2}. To proceed further, we need a dictionary between the four-dimensional quantities $U$, $\varphi$ and their ten-dimensional counterparts $V$, $G$. The standard relation between the ten- and four-dimensional Einstein frame metrics is \begin{equation} ds^2_{10_{E}} = \mathrm{vol}_{6_E}^{-1}~ds^2_{4_E}+ g_{mn}^E(r,y)dy^m dy^n~, \label{mrel} \end{equation} where $\mathrm{vol}_{6_E}$ is the volume of the internal manifold in ten-dimensional Einstein frame. Similarly, the dilatons $\phi$ and $\varphi$ are related by $e^{-2\varphi} = e^{-2\phi} \mathrm{vol}_{6_S}$, where now $\mathrm{vol}_{6_S}$ is the volume of the internal manifold in ten-dimensional string frame. Recall that the ten-dimensional Einstein frame metric $g_{MN}^{~\mathrm{E}}$ and string frame metric $g_{MN}^{~\mathrm{S}}$ are related by $g_{MN}^{~\mathrm{S}} = e^{\frac{\phi}{2}}g_{MN}^{~\mathrm{E}}$; therefore, the volumes are related by $\mathrm{vol}_{6_S} = e^{\frac{3\phi}{2}} \mathrm{vol}_{6_E}$. Putting this together we find the relation between the ten-dimensional string frame metric and the four-dimensional Einstein frame metric \begin{equation} ds^2_{10_S} = e^{2\varphi} ds^2_{4_E} + \cdots~. \end{equation} We can now apply this to our metrics \rf{metric} and \rf{4dmetric} to match the parameters as follows \begin{equation} V = \varphi + U~, \qquad G=\varphi -pU~. \label{VG} \end{equation} As an initial consistency check we can compare the expressions for the Killing spinors in four dimensions. The four-dimensional component of the Killing spinor that comes from decomposing the ten-dimensional solution \rf{rsoln} naturally appears in string frame, $ \zeta = e^\frac{V}{2}\zeta_0~$, where $\zeta_0$ is an arbitrary constant spinor parameter. We can rescale this to Einstein frame in four dimensions using $\zeta \rightarrow e^{-\frac{\varphi}{2}}\zeta$. If we now use the matching of the metric factors \rf{VG} we find $ \zeta = e^\frac{U}{2}\zeta_0$, which agrees with the result found from the four-dimensional approach \eq{KSz}. Returning to the flow equations \rf{nps1}-\rf{nps4}, one can see that in order to have agreement we need to perform the following rescaling: \begin{equation} \Phi^0_\pm\rightarrow e^{-2V}\Phi^0_\pm,\quad\quad \tilde{F}\rightarrow \kappa \sqrt{2}~ e^{-2V}F ~.
\end{equation} The rescaling of the pure spinors is easily achieved by appropriately fixing an overall $V$-dependence of the internal metric $g_{mn}(r,y)$. However, the source of the $V$-rescaling of the RR term $F$ is unclear. We believe that it is linked to a necessary rescaling of kinetic terms in the effective action which will become clear upon a careful analysis of the dimensional reduction, and we shall pursue this point further in future work \cite{SV2}. Despite this, it is pleasing to see that we are able to find an agreement between the domain wall vacuum equations in the four- and ten-dimensional approaches. Finally, let us now return to the equation derived from \rf{radial2} describing the behaviour of $V$ (see \rf{auflow}): \begin{equation} V'= - \frac{i \alpha e^\phi}{4} \frac{\langle \hat{F}, \overline{\Phi}_{\pm} \rangle}{\langle \Phi_{-}, \overline{\Phi}_{-} \rangle}~. \label{auflow2} \end{equation} At first glance one might worry that this has not captured all the relevant terms in the transverse flow. From the four-dimensional perspective, we have already seen that the potential term in the transverse flow of the four-dimensional metric component $U$ is given by $W$ \rf{attu}, which contains RR flux and torsion terms due to the non-closure of the pure spinors. This discrepancy is resolved by noting that a simple splitting of the ten-dimensional Einstein frame gravitino kinetic term using $\psi_M = (\psi_\mu, \psi_m)$ does not lead to a canonical kinetic term for the four-dimensional gravitino component \cite{Grana:2005ny,Grana:2006hr}. Rather, one should consider the combination $\Psi_\mu = \psi_\mu + \frac{1}{2}\gamma_{\mu}^{~m}\psi_m$. Applying this reasoning to the ten-dimensional supersymmetry transformations one finds precisely the torsion terms $\langle \Phi_{+}, d\overline{\Phi}_{-} \rangle$ that are missing from the right-hand side of \rf{auflow2}, in agreement with the four-dimensional result of section \ref{s2}. We shall not write the expression here as it is equivalent to the four-dimensional result given earlier. \subsection{Examples}\label{s4.1} In this section we shall discuss some particular examples of the flow equations \eq{ps1}-\eq{ps4} with certain fluxes vanishing. Switching on the different kinds of deformations, we find different behaviours for the fields $\phi$ and $K_+$, and different classes of metrics. The simplest metric is obtained when $W^3=0$, as in this case the four-dimensional dilaton is given by \eq{4dimdil} \begin{equation} e^\varphi=e^{-U}\label{4dimdilF=0}~, \end{equation} and, consequently, the relations \eq{VG} simplify to give \begin{equation} V=0~,\quad\quad G=-(1+p)U\,.\label{VGF=0} \end{equation} The ten-dimensional metric then reads \begin{equation} ds_{10}^2 = \eta_{\alpha\beta}dx^\alpha dx^\beta+ e^{-2(1+p)U(r)}dr^2 + g_{mn}(r,y)dy^m dy^n~. \end{equation} Note that the factor $G$ can always be absorbed by a rescaling $dz\equiv e^{-(1+p)U(r)}dr$; thus, the resulting ten-dimensional metric is just that of $\mathbb{R}^{1,3}\times \hat{Y}$, where $\hat{Y}$ is an $SU(3)\times SU(3)$ structure manifold \begin{equation}\label{metricF=0} ds_{10}^2 =ds^2_{\mathbb{R}^{1,2}} +dz^2 + ds^2_{\hat{Y}}(z)~. \end{equation} The simplest way to impose $W^3=0$ is to set to zero the total RR term $\hat{F}=0$ \eq{RRtotal}, i.e. not only the RR fluxes $e_{\bf\Lambda}=m^{\bf\Lambda}=0$ but also the scalars $\zeta^A=\tilde\zeta_A=0$.
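Indeed, recalling \rf{33w3}, the third component of the superpotential is controlled entirely by the total RR term, \begin{equation} W^3=-2\,e^{\varphi+K_+}\langle \hat F_{\tilde\eta},\,\hat\Phi^0_+\rangle~, \end{equation} so that $\hat F=0$ enforces $W^3=0$ identically, at every point in moduli space.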
From the ten-dimensional perspective, strictly speaking the pure spinor equations describing ${\cal N}=1$ Minkowski and AdS vacua do not hold when $F=0$, as the resulting vacua are in fact ${\cal N}=2$ \cite{gmpt3}. This argument uses the fact that only the RR term mixes the two spinors $\epsilon^1,\epsilon^2$ in the supersymmetry variations \rf{wv0}-\rf{int0}. Without this mixing there is no reason not to take the two four-dimensional components of the spinors to be independent parameters $\zeta_1$ and $\zeta_2$. This highlights the fact that the pure spinor equations derived above are formally conditions to have \emph{at least} ${\cal N}=1$ supersymmetry, and in particular $F\neq 0$ does not necessarily imply that the supersymmetry is only ${\cal N}=1$, as can be seen explicitly in certain orientifold examples \cite{gmpt3,Koerber:2007hd}. For domain wall backgrounds with $F=0$, this equates to the situation where ${\cal N}=2$ supersymmetry is generated from two copies of ${\cal N}=1$, meaning that the BPS projection conditions do not mix $\zeta_1$ and $\zeta_2$. It is then straightforward to check that, with an appropriately modified projector ansatz, the supersymmetry conditions can still be consistently written as in the previous section. In four dimensions, we are working in an ${\cal N}=2$ language from the outset and the $F=0$ limit is perfectly consistent, being obtained by setting $\kappa=\lambda=0$. Given the standard relation between the four- and ten-dimensional dilatons \eq{10Adil}, we need to evaluate $e^{-K_+}$ in order to determine the behaviour of $e^\phi$. One can proceed by rewriting the flow equation \eq{dXFSU33} in components \begin{equation} \partial_r\begin{pmatrix} {\rm Im}(h\,e^{U+\frac{K_+}2})\cr {\rm Im}(h\,e^{U+\frac{K_+}2}X^{i}) \cr {\rm Im } (h\,e^{U+\frac{K_+}2}F_{0}) \cr {\rm Im } (h\,e^{U+\frac{K_+}2}F_{i}) \end{pmatrix} =-\frac12e^{(1-p)U}\begin{pmatrix} G^{0}\cr G^i\cr G_{0}\cr G_i \end{pmatrix} ~,\label{bonobo} \end{equation} where we have chosen $X^0=1$. As one can see from \eq{flux1}, $G_0$ contains the $H$ deformations\footnote{See appendix \ref{4dstr} for further explanation of the various deformations.} $(e_0^A,\,e_{0A})$, i.e. the NS fluxes \eq{NSflux} and the $e_0$ component of the RR flux \eq{p5}. The torsion $T$ deformations $(e_i^A,\,e_{iA})$ \eq{decohSU3T} are all encoded in $G_i$, together with the $e_i$ components of the RR flux \eq{p5}. $G^0$ contains the non-geometric deformations of type $R$ $(m_A^0,\,m^{0A})$ \eq{decohSU33} and the RR flux $m^0$ \eq{p5}. Finally, the $G^i$ contain the non-geometric deformations of type $Q$ $(m_A^i,\,m^{Ai})$ \eq{decohSU33} and the RR fluxes $m^i$ \eq{p5}. In the $SU(3)$ structure case $e_0$ is the IIA massive supergravity parameter, i.e. the Romans mass, $m^0$ corresponds to the Freund-Rubin parameter, $m^i$ are the two-form fluxes and $e_i$ are the four-form fluxes. From the structure of \eq{bonobo} we can see that $e^{K_+}$ is extremely sensitive to the presence of $G^0$ and ${\rm Im}h$. In fact, for $G^0=0$ and ${\rm Im}h \neq 0$, we find that $e^{-K_+}=e^{2U}$. This case is particularly simple, as in the absence of RR terms (i.e. $W^3=0$) the ten-dimensional IIA dilaton $\phi$ is constant. In order to extract $e^{-K_+}$ one has to solve \eq{bonobo} case by case, setting to zero the appropriate fluxes on the right-hand side.
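(As a quick check of the constancy of the dilaton: combining \eq{10Adil} with \eq{4dimdilF=0} and $e^{-K_+}=e^{2U}$ gives \begin{equation} e^{2\phi}=\tfrac18\, e^{2\varphi}\,e^{-K_+}=\tfrac18\, e^{-2U}e^{2U}=\tfrac18~, \end{equation} so the ten-dimensional dilaton is indeed constant in this case.)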
The most efficient way to achieve this is to first find the constraints imposed by the homogeneous equations, then to plug those with non-vanishing right-hand sides into \eq{Uprime2} and \eq{reW}, solving in terms of $U$. We shall not discuss all possible cases here, but rather focus on some interesting examples that make contact with the literature. We start by restricting ourselves to solutions with $G^0=G^i=0$, such that \eq{bonobo} is considerably simplified. Such solutions occur in the absence of non-geometric deformations and for vanishing ``magnetic'' RR fluxes $m^{0}=m^i=0$. The three cases of interest for us are: \begin{itemize} \item {\bf $T$ deformations} \\ For $G_i\neq0$ there is a unique solution with \begin{equation} h=i,\quad\quad\quad e^{-K_+}=e^{2U},\quad\quad\quad b^i=0~.\quad\quad\label{T} \end{equation} \item {\bf $H$ deformations} \\ For $G_0\neq0$ there is a unique solution with \begin{equation} h=1,\quad\quad\quad e^{-K_+}=e^{6U},\quad\quad\quad b^i={\rm const}~.\label{H} \end{equation} \item {\bf $T+H$ deformations} \\ For $G_0\neq0,\,G_i\neq0 $ there is a unique solution with \begin{equation} h=i,\quad\quad\quad e^{-K_+}=e^{2U},\quad\quad\quad b^i={\rm const}~.\label{HT} \end{equation} \end{itemize} The fields $b^i$ are the real parts of the complex scalars $t^i=b^i + i v^i = \frac{X^i}{X^0}$. Let us stress that the above result is only sensitive to the presence of $G_0$ and $G_i$, and not to the fact that they contain $T$ deformations and $H$ fluxes, rather than ``electric'' RR fluxes. Consequently, \rf{T}-\rf{HT} hold when $F\neq 0$. However, this is not the case for the metric and dilaton, which are sensitive to the presence of the RR term $F$. Let us now rewrite the flow equations \eq{H1D}-\eq{H4D} for $F=0$ and in the absence of non-geometric deformations. The metric always has the form \eq{metricF=0} while the structure of the flow equations depends on the presence of the torsion $T$ and the NS fluxes $H$. \subsubsection*{$T$ deformations} The pure $SU(3)$ structure case, that is when only $T$ deformations are present, gives the standard Hitchin flow equations. In the absence of $H$-fluxes and non-geometric $Q$ and $R$ fluxes, the generalised differential operator $\mathcal{D}$ reduces to the ordinary exterior derivative $d$. Using \eq{4dimdilF=0}-\eq{VGF=0} and \eq{T}, we find \begin{eqnarray} && \label{H1F=0}\partial_z {\rm Im}\Phi^0_-=-d{\rm Im}\Phi^0_+~,\\ && \label{H2F=0}\partial_z {\rm Re}\Phi^0_+=-d{\rm Re}\Phi^0_- ~,\\ && \label{H3F=0}d{\rm Im}\Phi^0_-=0~,\\ &&\label{H4F=0} d{\rm Re}\Phi^0_+=0~, \end{eqnarray} which are easily recognised to be the Hitchin flow equations describing a particular class of $SU(3)$ structure six-manifolds that are known as half-flat \cite{Gurrieri:2002wz}. More explicitly, the necessary and sufficient conditions for an $SU(3)$ structure manifold to be half-flat are \begin{eqnarray} && d{\rm Re}\Phi^0_+\equiv -\tfrac12\, d (J\wedge J) = 0~, \\ &&d{\rm Im}\Phi^0_- \equiv d ({\rm Im} ~\Omega_\eta) = 0~. \end{eqnarray} The other two pure spinor equations give rise to \begin{eqnarray} && d{\rm Im}\Phi^0_+\equiv dJ = -\partial_z ({\rm Im}~\Omega_\eta) ,\\ && d {\rm Re}\Phi^0_-\equiv d {\rm Re}~ \Omega_\eta = \frac12\partial_z(J\wedge J)~, \end{eqnarray} and imply that the total non-compact seven-manifold, constructed by the fibration of the six-manifold over the direction transverse to the domain wall, has $G_2$-holonomy.
These equations were first derived in the physics literature for a domain wall solution with $e_{0i}\neq0$ in \cite{Gurrieri:2002wz,Mayer:2004sd}. Here we have given their formulation with the general torsion compatible with the half-flat condition, and with constant ten-dimensional dilaton ($\phi=0$ for convenience). \subsubsection*{$H$ deformations} Let us now consider the case with non-vanishing NS H-flux in ten dimensions. The appropriate configuration which could give rise to a domain wall after compactification to four dimensions is generated by a stack of NS5-branes. As discussed in \cite{Gurrieri:2002wz}, it is possible to smear an NS5-brane over three of its four transverse directions such that the harmonic function depends on only one direction, which descends to the direction perpendicular to the domain wall in four dimensions. For such a configuration the ten-dimensional string frame metric, H-field and dilaton take the form \begin{eqnarray} ds^2 &=& ds^2_{\mathbb{R}^{1,2}} + dz^2 + ds^2_Y(z) \label{ns5} ~,\\ \phi &=& \phi(z) ~,\nonumber \\ H&\in& H^3(\hat{Y}, \mathbb{R}) ~, \end{eqnarray} where the flux $H$ is harmonic \eq{notadSU3}. When $Y$ is Calabi-Yau, the mirror manifold $\tilde Y$ is precisely the half-flat, $SU(3)$ structure manifold with vanishing H-flux and constant dilaton \cite{Gurrieri:2002wz} discussed in the previous subsection. In this case $Y$ has $SU(3)$ holonomy and the pure spinors $\Phi_\pm$ are the familiar K\"ahler form $J$ and holomorphic three-form $\Omega$, both of which are closed. The transverse flow of the pure spinors is then supported solely by the $H$ deformations. The metric \eq{ns5} evidently coincides with \eq{metricF=0}, while $H$ is harmonic by construction \eq{NSflux}. From \eq{H} and \eq{10Adil}, we find that the ten-dimensional dilaton is $e^\phi=e^{2U}$. In this case the domain wall flow equations become \begin{eqnarray} d_H \mathrm{Im}\left[e^{-\phi}~ \Phi^0_- \right] &=& 0 \label{ps1f0} ~,\\ d_H \mathrm{Re}\left[e^{-\phi}~\Phi^0_- \right] &=& - \partial_z \mathrm{Im} \left[ e^{-\phi}~\Phi^0_+ \right] \label{ps2f0} ~,\\ d_H \mathrm{Im}\left[e^{-\phi}~ \Phi^0_+ \right] &=& 0 \label{ps3f0}~,\\ d_H \mathrm{Re}\left[e^{-\phi}~\Phi^0_+ \right] &=& \partial_z \mathrm{Im} \left[ e^{-\phi}~ \Phi^0_- \right] \label{ps4f0}~. \end{eqnarray} \subsubsection*{$T+H$ deformations} We shall now consider the most generic geometric background with $F=0$. As one can see from equations \eq{T} and \eq{HT}, it is quite similar to the $T$ background, apart from the fact that the NSNS two-form scalars can take any constant value, not necessarily zero. The ten-dimensional dilaton is constant and we choose $\phi=0$ for simplicity. The flow equations are then \begin{eqnarray} && \label{TH1F=0}\partial_z {\rm Im}\Phi^0_-=-d_H{\rm Im}\Phi^0_+~,\\ && \label{TH2F=0}\partial_z {\rm Re}\Phi^0_+=-d_H{\rm Re}\Phi^0_- ~,\\ && \label{TH3F=0}d_H{\rm Im}\Phi^0_-=0~,\\ &&\label{TH4F=0} d_H{\rm Re}\Phi^0_+=0~. \end{eqnarray} This set of equations was first derived in \cite{Witt}, where it was shown that it describes the embedding of an $SU(3)\times SU(3)$ structure manifold into a $G_2\times G_2$ structure manifold. In particular, it was shown that the flow equations lift to a set of conditions which describe a generalised $G_2$ structure $\rho,\,\hat\rho$ on $M_7=\hat{Y}\times \mathbb{I}_z$ which is integrable with respect to $H$, as we described in detail above (see \rf{r} and \rf{rh}).
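In seven-dimensional terms the integrability is immediate: for this background $F=0$, $A=0$, $\phi=0$ and, by \eq{VGF=0}, $V=0$, so the prefactors in the seven-dimensional equations of section \ref{s3} trivialise and they collapse to \begin{equation} \hat d_H\,\rho=0~,\qquad \hat d_H\,\hat\rho=0~, \end{equation} i.e. a strongly integrable generalised $G_2$ structure in Hitchin's terminology.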
A particular case of such backgrounds with only $e_{0\bf\Lambda}\neq0$ is included in the analysis of \cite{Louis:2006wq}. In the absence of the flow terms (i.e. the $\partial_z$ terms), and if $H$ is a primitive $(2,1)$-form, then these equations are equivalent to Gualtieri's definition of a twisted generalised K\"ahler structure (see section 6 of \cite{Gualtieri:2003dx}). \subsection{Non-geometric deformations} Finally, we shall comment briefly on the case of non-geometric backgrounds. From the four-dimensional perspective, whenever $W^3=0$ the dilaton $\varphi$ and the metric take the form \eq{4dimdilF=0} and \eq{metricF=0}, even in the presence of non-geometric deformations. As a consequence of \eq{VGF=0}, the flow equations \eq{H1D}-\eq{H4D} can then be written as \begin{eqnarray} \mathcal{D}\mathrm{Im}\left[e^{-\phi}~ \hat\Phi^0_- \right] &=& 0 \label{ps1f0n} ~,\\ \partial_z \mathrm{Im} \left[ e^{-\phi}~h~\hat\Phi^0_+ \right] &=&-\mathcal{D}\mathrm{Re}\left[e^{-\phi}~\hat\Phi^0_- \right] \label{ps2f0n} ~,\\ \partial_z \mathrm{Im} \left[ e^{-\phi}~ \hat\Phi^0_- \right] &=&\mathcal{D} \mathrm{Re}\left[e^{-\phi}~h~\hat\Phi^0_+ \right] ~, \label{ps3f0n}\\ \mathcal{D} \mathrm{Im}\left[e^{-\phi}~h ~ \hat\Phi^0_+ \right] &=& 0 \label{ps4f0n}~, \end{eqnarray} where the precise expression for the dilaton $\phi$ can be obtained by using \eq{bonobo} to determine $K_+$, analogously to the geometric case. We shall limit ourselves to the example of $Q$ deformations, for which we find a solution with \begin{equation} h=1,\quad\quad\quad e^{-K_+}=e^{-2U},\quad\quad\quad b^i=0~, \label{Q} \end{equation} and, therefore, the dilaton in \rf{ps1f0n}-\rf{ps4f0n} is given by $e^\phi=e^{-2U}$. Turning to the comparison of the flow equations, we recall that in ten dimensions our derivation assumed that the background was globally geometric. Nevertheless, it has been argued \cite{Cassani:2007pq} that one can formally incorporate non-geometric charges in the pure spinor equations for ${\cal N}=1$, maximally symmetric vacua by replacing the twisted derivative $d_H$ appearing there with the generalised derivative $\mathcal{D}$ \rf{defD}. We shall not pursue the non-geometric case in any detail here, but note that on substituting $d_H \rightarrow \mathcal{D}$ in the domain wall pure spinor equations derived in ten dimensions \rf{nps1}-\rf{nps4}, we find formal agreement with the four-dimensional result presented above \rf{ps1f0n}-\rf{ps4f0n}. \section{Discussion}\label{disc} We have studied BPS domain wall configurations in gauged four-dimensional supergravity arising from type II supergravity compactified on an $SU(3) \times SU(3)$ structure manifold. Starting in four dimensions, we used standard manipulations of the supersymmetry transformations to derive a set of flow equations for the scalar fields of the vector and hypermultiplets in a domain wall background. We then showed how these equations could be recast as a set of generalised Hitchin flow equations, describing the embedding of the $SU(3) \times SU(3)$ structure manifold into a $G_2 \times G_2$ structure, or generalised $G_2$, manifold, provided that the pure spinors satisfied $\langle \mathcal{D} {\rm Im}\hat\Phi^0_-,\,\hat\Phi^0_+\rangle=0$. Interestingly, from the ten-dimensional perspective, this condition follows directly from one of the supersymmetry constraints for a domain wall configuration \rf{nps} and the compatibility constraint \rf{comp} for an $SU(3) \times SU(3)$ structure manifold. 
For simplicity, our ten-dimensional analysis focused solely on configurations that could give rise to domain walls in four dimensions preserving at least two supersymmetries. This allowed us to adapt the formalism previously used to describe maximally symmetric type II supergravity vacua in terms of pure spinors. As we have already noted, the conditions of Gra\~na et al. \cite{gmpt2} are strictly for backgrounds preserving \emph{at least} ${\cal N} =1$ supersymmetry in four dimensions. The same applies to the domain wall configurations here, and by carefully comparing our ten-dimensional result with the orientifold truncation of the four-dimensional counterpart, we were able to show a precise agreement between the two approaches. This matching between the equations describing domain wall vacua in the ten-dimensional and four-dimensional theories is a useful test of the $SU(3) \times SU(3)$ structure compactification procedure proposed in \cite{Grana:2006hr}. For maximally symmetric vacua this check was carried out in \cite{Cassani:2007pq}. As we argued above, due to the prevalence of domain wall vacua in gauged supergravities, our results provide a valuable additional check. Furthermore, the generalised Hitchin flow equations for domain walls derived here are symmetric under the proposed generalisation of mirror symmetry for $SU(3) \times SU(3)$ structure backgrounds: \begin{equation} \hat \Phi^0_+ \leftrightarrow \hat\Phi^0_-~,\quad\quad F_{IIA} \leftrightarrow F_{IIB}~. \end{equation} The flow equations we derived in ten dimensions also included a non-trivial warp factor $A$. However, in order to make a strict comparison with our four-dimensional result we made the standard assumption that the warp factor vanishes \cite{Grana:2006hr}. It would be interesting to reconsider domain wall vacua in warped compactifications directly in terms of ${\cal N}=1$ supergravity in four dimensions. In \cite{ten2four} it was suggested that the appropriate four-dimensional effective theory for warped $SU(3) \times SU(3)$ compactifications is a partially gauge fixed version of matter-coupled, ${\cal N}=1$ superconformal supergravity (see also \cite{hlmt}). However, the relation between this and the warped version of the off-shell approach of \cite{Grana:2006hr} remains unsettled. In future work, we aim to reassess `warped' domain wall vacua in the ${\cal N}=1$ superconformal theory and determine their relation to the cases we have considered here \cite{SV2}. Finally, we shall briefly comment on applications of the generalised Hitchin flow equations we have derived to gauge/gravity duality. Recently, it has been realised that Chern-Simons-matter conformal field theories are dual to massive type IIA supergravity solutions of the form $AdS_4 \times \mathbb{CP}^3$, with the Romans mass $F_0$ acting as a deformation parameter (see \cite{Gaiotto:2009yz,Petrini:2009ur} and references therein). It has been suggested in \cite{Gaiotto:2009yz} that the appropriate equations for describing the gravity duals of such Chern-Simons-matter theories should be a generalised version of the Hitchin flow equations. Therefore, our results \rf{ps1}-\rf{ps4} should prove useful in constructing interesting examples of these gravity duals. \begin{center} {\large {\em Acknowledgements}} \end{center} We would like to thank P. Koerber, J. Louis, P. Meessen, M. Petrini, V. Stojevic and A. Tomasiello for discussions, and L. Martucci for his collaboration in the early stages of this work. We especially thank R.
Reid-Edwards for useful discussions and his careful reading of this manuscript. S.V. is partially supported by the Spanish MEC grant FPA2006-00783, a MEC Juan de la Cierva scholarship, the CAM grant HEPHACOS P-ESP-00346 and the Spanish Consolider-Ingenio 2010 program CPAN CSD2007-00042. P.S. is supported by the German Science Foundation (DFG) and would also like to thank the K. U. Leuven for its support at various stages. This article is dedicated to the memory of Raffaele Punzi, a friend and colleague who will be missed. \vskip 2cm
\section{Introduction} Occupation measures and local times associated to $d$-dimensional paths $(p_t)_{t\in [0,T]}$ have received much attention over the past decades from both the analytical and the probabilistic communities. The occupation measure essentially quantifies the amount of time the path $p$ spends in a given set, i.e. for a Borel set $A\in \cB(\RR^d)$ the occupation measure is given by \begin{equation*} \mu_t(A)=\lambda\{s\in [0,t]|\, p_s\in A\}, \end{equation*} where $\lambda$ is the Lebesgue measure on $\mathbb{R}$. The local time is given as the Radon-Nikodym derivative of the occupation measure with respect to the Lebesgue measure. The existence of the local time is generally not assured without some further knowledge of the path $p$, and the existence of the local time associated to the Weierstrass function, and other deterministic fractal-like paths, is, to the best of our knowledge, still considered an open question. However, when $(p_t)_{t\in [0,T]}$ is a stochastic process, existence of the local time can often be proved using probabilistic techniques, and much research has been devoted to this aim, see e.g. \cite{GerHoro} and the references therein for a comprehensive overview. Knowledge of probabilistic and analytic properties of the local time becomes useful in a variety of problems arising in analysis. For example, given a measurable path $p$ with an existing local time, the following formula holds \begin{equation*} \int_0^t b(x-p_s)\dd s=b\ast L_{t}(x), \end{equation*} where $\ast$ denotes convolution, and $L:[0,T]\times \RR^d \rightarrow \RR_+$ is the local time associated to $p$. Thus analytical or probabilistic questions relating to the integral on the left-hand side can often be answered with knowledge of the probabilistic and analytic properties of the local time $L$. \\ In this article we will study regularity properties of the local time associated to Volterra-L\'evy processes of the form \begin{equation}\label{Volterra Levy} z_t=\int_0^t k(t,s)\dd \scL_s, \qquad t\in [0,T], \end{equation} where $k(t,\cdot)\in L^\alpha([0,t])$ for all $t\in [0,T]$ with $\alpha\in(0,2]$, and $\scL$ is a L\'evy process on a filtered probability space $(\Omega,\cF,\PP)$. In the case when $\scL=B$ is a Brownian motion, the joint regularity in time and space of the local time associated to Volterra processes has received some attention in recent years, as this knowledge can be applied towards regularization of ODEs by noise \cite{HarangPerkowski2020,galeati2020noiseless,galeati2020prevalence,Catellier2016}, as discussed in detail below. Furthermore, in \cite{galeati2020prevalence}, the authors investigated the regularity of the local time associated to $\alpha$-stable processes, i.e. when the kernel $k\equiv 1$ and $\scL$ is an $\alpha$-stable process. One goal of this article is therefore to extend these results to the general case of Volterra-L\'evy processes, as well as to apply this to the regularization by noise procedure. Towards this end, we formulate a simple local non-determinism condition for these processes, which will be used to determine the regularity of the local time. The regularity of the local time is then proved in Sobolev space, by application of the recently developed stochastic sewing lemma \cite{le2018}, similarly to what was done for Gaussian Volterra processes in \cite{HarangPerkowski2020}. By embedding, it follows that the local time is also contained in a wide range of Besov spaces.
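For completeness, let us record how this identity follows from the definitions above; the only input is the occupation-times formula $\int_0^t f(p_s)\dd s=\int_{\RR^d}f(y)\mu_t(\dd y)$, valid for measurable $f\geq 0$ and extended by linearity whenever both sides make sense. Taking $f(y)=b(x-y)$, we find \begin{equation*} \int_0^t b(x-p_s)\dd s=\int_{\RR^d} b(x-y)\,\mu_t(\dd y)=\int_{\RR^d} b(x-y)L_t(y)\dd y=b\ast L_{t}(x). \end{equation*}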
\\ As an application of our results on regularity of the local time, we show existence and pathwise uniqueness of SDEs of the form \begin{equation}\label{eq: genral intro equation} \frac{\dd }{\dd t}x_t = b(x_t)+\frac{\dd}{\dd t}z_t,\qquad x_0=\xi\in \RR^d \end{equation} even when $b$ is a Besov-distribution (the exact regularity requirements on $z$ and $b$ will be given in Section \ref{sec. main results} below). It is well known that certain stochastic processes provide a regularizing effect on SDEs of the form \eqref{eq: genral intro equation}. By this we mean that if the process $(z_t)_{t\in [0,T]}$ is given in some explicit form, \eqref{eq: genral intro equation} might be well posed even when $b$ does not satisfy the usual assumptions of Lipschitz continuity and linear growth. In fact, in \cite{Catellier2016}, the authors show that if $z$ is given as a sample path of a fractional Brownian motion with Hurst index $H\in (0,1)$, equation \eqref{eq: genral intro equation} is well posed and has a unique solution even when $b$ is only a distribution in the generalized Besov-H\"older space $\cC^\beta$ with $\beta<\frac{1}{2H}-2$. More recently, Perkowski and one of the authors of the current article proved in \cite{HarangPerkowski2020} that there exists a certain class of continuous Gaussian processes with exceptional regularization properties. In particular, if $z$ in \eqref{eq: genral intro equation} is given as a path of such a process, then a unique solution exists to \eqref{eq: genral intro equation} (where the equation is understood in the pathwise sense), for any $b\in \cC^\beta$ with $\beta\in \RR$. Moreover, the flow map $\xi\mapsto x_t(\xi)$ is infinitely differentiable. We then say that the path $z$ is infinitely regularizing. Not long after this result was published, Galeati and Gubinelli \cite{galeati2020noiseless} showed that in fact {\em almost all continuous paths are infinitely regularizing}, by using the concept of prevalence. Furthermore, the regularity assumption on $b$ was proven to be inversely proportional to the irregularity of the continuous process $z$. In fact, this statement holds in a purely deterministic sense, see e.g \cite[Thm. 1]{galeati2020noiseless}. The main ingredient in this approach to regularization by noise is to reformulate the ODE/SDE as a non-linear Young equation, involving a non-linear Young integral, as was first described in \cite{Catellier2016}. This reformulation allows one to construct integrals even in the case when traditional integrals (Riemann, Lebesgue, etc.) do not make sense. A particular advantage of this theory is furthermore that the framework itself does not rely on any probabilistic properties of the processes, such as Markov or martingale properties. This makes the framework particularly suitable when considering SDEs where the additive stochastic process is of a more exotic type. As is demonstrated in the current paper, the framework is well suited to the study of SDEs driven by Volterra-L\'evy processes, which is a class of processes difficult to analyse using traditional probabilistic techniques. We believe that this powerful framework can furthermore be applied towards analysing several interesting problems relating to ill-posed SDEs and ODEs in the future. \\ Historically, the investigation of similar regularising effects for SDEs with general L\'evy noise seems to have received less attention compared to the case when the SDE \eqref{eq: genral intro equation} is driven by a continuous Gaussian process.
Of course, the general structure of the L\'evy noise excludes several techniques which have previously been applied in the Gaussian case. However, much progress has been made also on this front when the equation has jump type noise, and although several interesting results deserve to be mentioned, we will only discuss here some of the most recent results and refer the reader to \cite{Krylov2005,Flandoli20112,Bass2001,Flandoli2017,Zhang2017} for further results. In \cite{priola2012}, Priola showed that \eqref{eq: genral intro equation} has a path-wise unique strong solution (in a probabilistic sense) when $z=\scL$ is a symmetric $\alpha$-stable process with $\alpha\in (0,2)$ and $b$ is a bounded $\beta$-H\"older continuous function of order $\beta>1-\frac{\alpha}{2}$. In \cite{priola2018} this result was put in the context of path-by-path uniqueness suggested by Davie \cite{Davie2007}. More recently, in \cite{raynal2020weak} the authors prove that the martingale problem associated to \eqref{eq: genral intro equation} is well posed, even when $b$ is only assumed to be bounded and continuous, in the case when $z=\scL$ is an $\alpha$-stable process with $\alpha=1$ (being the critical case). Further, in \cite{athreya2020} the authors show strong existence and uniqueness for \eqref{eq: genral intro equation} when $z=\scL$ is a $1$-dimensional $\alpha$-stable process and $b\in \cC^\beta$ with $\beta>\frac{1}{2}-\frac{\alpha}{2}$. This allows for possibly distributional coefficients $b$ when $\alpha$ is sufficiently large (i.e. greater than $1$). Our results can be seen as an extension of the last result to a purely pathwise setting, and to the case of general Volterra-L\'evy processes. As in the Gaussian case, the choice of Volterra kernel then dictates the regularity $\beta\in \RR$ of the distribution $b\in \cC^\beta$ that can be considered while still obtaining existence and uniqueness. \subsection{Main results}\label{sec. main results} We present here the main results to be proven in this article. The first result provides a simple condition under which regularity of the local time associated to Volterra-L\'evy processes can be shown. \begin{thm}\label{thm: first main reg of local time} Let $(\scL_t)_{t\in [0,T]}$ be a L\'evy process on a filtered probability space $(\Omega,\cF,\PP)$, with characteristic $\psi:\RR^d\rightarrow \CC$, and let $k$ be a real valued and possibly singular Volterra kernel satisfying $k(t,\cdot)\in L^\alpha([0,t])$ for all $t\in[0,T]$, with $\alpha\in(0,2]$. Define the Volterra-L\'evy process $(z_t)_{t\in [0,T]}$ by $z_t :=\int_0^t k(t,s)\dd \scL_s$, where the integral is defined in Definition \ref{VL}. Suppose that the characteristic triplet and the Volterra kernel satisfy, for some $\zeta > 0$ and $\alpha\in(0,2]$, \begin{equation*} \inf_{t\in [0,T]}\inf_{s\in [0,t]} \inf_{\xi\in \RR^d} \frac{\int_s^t \psi(k(t,r)\xi)\dd r }{(t-s)^\zeta |\xi|^\alpha}>0. \end{equation*} If $\zeta\in (0,\frac{\alpha}{d})$, then there exists a $\gamma>\frac{1}{2}$ such that the local time $L:\Omega\times[0,T]\times \RR^d\rightarrow \RR_+$ associated to $z$ is contained in $\cC^\gamma([0,T]; H^\kappa(\RR^d))$ for any $\kappa<\frac{\alpha}{2\zeta}-\frac{d}{2}$, $\PP$-a.s.. \end{thm} \begin{cor}\label{cor: test} There exists a class of Volterra-L\'evy processes $z_t=\int_0^t k(t,s)\dd \scL_s$ such that for each $t\in [0,T]$, its associated local time $L_t$ is a test function. More precisely, we have that $(t,x)\mapsto L_t(x)\in \cC^\gamma([0,T];\cD(\RR^d))$ $\PP$-a.s. for any $\gamma\in (0,1)$.
Here $\cD(\RR^d)$ denotes the space of test functions on $\RR^d$. \end{cor} See Example \ref{exVL}, {\rm (iv)}, for a proof of this corollary. \\ Inspired by \cite{HarangPerkowski2020,galeati2020prevalence,Catellier2016}, we apply the result on regularity of the local time to prove regularization of SDEs by Volterra-L\'evy noise. Since we will allow the coefficient $b$ in \eqref{eq: genral intro equation} to be distribution-valued, it is not \emph{a priori} clear what we mean by a solution. Indeed, since the integral $\int_0^t b(x_s)\dd s$ is not well defined in a Riemann or Lebesgue sense if $b$ is truly distributional, we must specify how to make sense of \eqref{eq: genral intro equation}. We therefore begin with the following definition of a solution, which is in line with the definition of pathwise solutions to SDEs used in \cite{HarangPerkowski2020,galeati2020prevalence,Catellier2016}. \begin{defn}\label{def: concept of solution} Consider a Volterra-L\'evy process $z$ given as in \eqref{Volterra Levy} with measurable paths, and associated local time $L$. Let $b\in \mathcal{S}'(\mathbb{R}^d)$ be a distribution such that $b\ast L\in \cC^\gamma([0,T];\cC^2(\RR^d))$ for some $\gamma>\frac{1}{2}$. Then for any $\xi\in \RR^d$ we say that $x$ is a solution to \begin{equation*} x_t=\xi+\int_0^t b(x_s)\dd s+z_t,\qquad \forall t\in [0,T], \end{equation*} if and only if $x-z\in \cC^\gamma([0,T];\RR^d)$, and there exists a $\theta\in \cC^\gamma([0,T];\RR^d)$ such that $\theta=x-z$, and $\theta $ solves the non-linear Young equation \begin{equation*} \theta_t=\xi+\int_0^t b\ast \bar{L}_{\dd r} (\theta_r), \qquad \forall t\in [0,T]. \end{equation*} Here $\bar{L}_t(z)=L_t(-z)$ where $L$ is the local time associated to $(z_t)_{t\in [0,T]}$, and the integral is interpreted in the non-linear Young sense, described in Lemma \ref{lem: non linear young integral}. \end{defn} \begin{thm}\label{thm: main existence and uniqueness} Suppose $(z_t)_{t\in [0,T]}$ is a Volterra-L\'evy process such that its associated local time satisfies $L\in \cC^\gamma([0,T]; H^\kappa)$ for some $\kappa>0$ and $\gamma>\frac{1}{2}$, $\PP$-a.s.. Then for any $b\in H^\beta(\RR^d)$ with $\beta>2-\kappa$, there exists a unique pathwise solution to the equation \begin{equation*}\label{eq:SDE intro} x_t=\xi+\int_0^t b(x_s)\dd s+z_t,\qquad \forall t\in [0,T], \end{equation*} where the solution is interpreted in the sense of Definition \ref{def: concept of solution}. Moreover, if $\beta>n+1-\kappa$ for some $n\in \NN$, then the flow mapping $\xi\mapsto x_t(\xi)$ is $n$-times continuously differentiable. \end{thm} \subsection{Structure of the paper} In Section \ref{sec:occupation measures} we recall some basic aspects of the theory of occupation measures and local times, as well as of Sobolev and Besov distribution spaces. Section \ref{sec:volterr levy process} introduces a class of Volterra processes where the driving noise is given as a L\'evy process. We show a construction of such processes, even in the case of singular Volterra kernels, and discuss conditions under which the process is continuous in probability. Several examples of Volterra-L\'evy processes are given, including a rough fractional $\alpha$-stable process with $\alpha\in [1,2)$. In Section \ref{sec: regualrity of local times} we provide sufficient conditions on the characteristics of Volterra-L\'evy processes such that their associated local time exists and is $\PP$-a.s. contained in a H\"older-Sobolev space of positive regularity.
Finally, we apply the concept of local times in order to prove regularization by noise for SDEs with additive Volterra-L\'evy processes. Here, we apply the framework of non-linear Young equations and integration, and thus our results can truly be seen as pathwise, in the ``rough path'' sense. An appendix is included at the end, where statements and proofs of some auxiliary results are given. \subsection{Notation} For a fixed $T>0$, we will denote by $x_t$ the evaluation of a function at time $t\in [0,T]$, and write $x_{s,t}=x_t-x_s$. For some $n\in \NN$, we define \begin{equation*} \Delta_T^n:=\{(s_1,\ldots,s_n)\in [0,T]^n|\, s_1\leq \dots \leq s_n\}. \end{equation*} To avoid confusion, the letter $\scL$ will be used to denote a L\'evy process, while $L$ will be used to denote the local time of a process. For $\gamma\in (0,1)$ and a Banach space $E$, the space $\cC^\gamma_TE:=\cC^\gamma([0,T];E)$ is defined to be the space of functions $f:[0,T]\rightarrow E$ which are H\"older continuous of order $\gamma$. The space is equipped with the standard semi-norm \begin{equation*} \|f\|_{\gamma}:=\sup_{s\neq t\in [0,T]}\frac{\|f_t-f_s\|_E}{|t-s|^\gamma}, \end{equation*} and we note that under the norm $f\mapsto |f_0|+\|f\|_\gamma$ the space $\cC^\gamma_TE$ is a Banach space. We let $\cS(\RR^d)$ denote the Schwartz space of rapidly decreasing functions on $\RR^d$, and $\cS'(\RR^d)$ its dual space. Given $f\in\mathcal S(\mathbb{R}^d)$, let $\mathscr{F} f$ be the Fourier transform of $f$ defined by $$ \mathscr{F} f(\xi):=(2\pi)^{-d/2}\int_{\mathbb{R}^d}e^{-i\la\xi, x\ra} f(x)\dd x. $$ Let $s$ be a real number. The Sobolev space $H^s(\mathbb{R}^d)$ consists of distributions $f\in \cS'(\RR^d)$ such that $\mathscr{F} f\in L_{loc}^2(\mathbb{R}^d)$ and $$\Vert f\Vert_{H^s}^2:=\int_{\mathbb{R}^d}(1+|\xi|^2)^s|\mathscr{F} f(\xi)|^2\dd \xi<\infty.$$ For $\alpha>0$, if $\int_0^T|f(s)|^\alpha\dd s<\infty$, then we say that $f\in L^\alpha([0,T])$. \section{Occupation measures, local times, and distributions}\label{sec:occupation measures} This section is devoted to giving some background on the theory of occupation measures and local times, as well as definitions of Sobolev and Besov spaces, which will play a central role throughout this article. \subsection{Occupation measures and local times} The occupation measure associated to a process $(x_t)_{t\in [0,T]}$ gives information about the amount of time the process spends in a given set. Formally, we define the occupation measure $\mu$ associated to $(x_t)_{t\in [0,T]}$ evaluated at $t\in [0,T]$ by \begin{equation*} \mu_t(A)=\lambda\{s\leq t|x_s\in A\}, \end{equation*} where $\lambda$ denotes the Lebesgue measure. The local time $L$ associated to $x$ is then the Radon-Nikodym derivative of $\mu$ with respect to the Lebesgue measure (as long as this exists). We therefore give the following definition. \begin{defn} Let $x:[0,T]\rightarrow \RR^d$ be a process, and let $\mu$ denote the occupation measure of $x$. If there exists a function $L:[0,T]\times \RR^d \rightarrow \RR_+ $ such that \begin{equation*} \mu_t(A)=\int_A L_t(z)\dd z,\qquad {\rm for} \qquad A\in \cB(\RR^d), \end{equation*} then we say that $L$ is the local time associated to the process $(x_t)_{t\in [0,T]}$. \end{defn} \begin{rem} The interpretation of the local time $L_t(z)$ is \emph{the time spent by the process $x:[0,T]\rightarrow \RR^d$ at a given point $z\in \RR^d$}.
Thus, the study of this object has received much attention from people investigating both probabilistic and path-wise properties of stochastic processes. For purely deterministic processes $(x_t)_{t\in [0,T]}$ the local time might still exist; however, as discussed in \cite{HarangPerkowski2020}, if $x$ is a Lipschitz path, there exist at least two discontinuities of the mapping $z\mapsto L_t(z)$. On the other hand, it is well known (see \cite{GerHoro}) that the local time associated to the trajectory of a one dimensional Brownian motion is $\frac{1}{2}$-H\"older regular in its spatial variable ($a.s.$). More generally, for the trajectory of a fractional Brownian motion with Hurst index $H\in (0,1)$, we know that its local time $L$ is contained in $H^\kappa$ ($a.s.$) for $\kappa<\frac{1}{2H}-\frac{d}{2}$, while still preserving H\"older regularity in time. This clearly shows that the more irregular the trajectory of the fractional Brownian motion is, the more regularity we obtain in the local time associated to this trajectory. In this case, the regularity of the local time can therefore be seen as an irregularity condition. This heuristic has recently been formalized in \cite{galeati2020prevalence}. There, the authors show that if the local time associated to a continuous path $(x_t)_{t\in [0,T]}$ is regular (i.e. H\"older continuous or better) in space, then $x$ is \emph{truly rough}, in the sense of \cite{Friz2014}. More recently, the authors of \cite{HarangPerkowski2020} showed that the local time associated to trajectories of certain particularly irregular Gaussian processes (for example the log-Brownian motion) is infinitely differentiable in space, and almost Lipschitz in time. In the current article, we will extend this analysis to L\'evy processes. \end{rem} The next proposition is particularly interesting towards applications to differential equations, and will be used in subsequent sections. \begin{prop}[{\rm Local time formula}] Let $b$ be a measurable function, and suppose $(x_t)_{t\in [0,T]}$ is a process with associated local time $L$. Then the following formula holds for any $\xi \in \RR^d$ and $(s,t)\in \Delta^2_T$ \begin{equation*} \int _s^t b(\xi+x_r)\dd r=b\ast \bar{L}_{s,t}(\xi), \end{equation*} where $\bar{L}_t(z)=L_t(-z)$ and $L_{s,t}=L_t-L_s$ denotes the increment. \end{prop} A proof of this statement follows directly from the definition of the local time; see \cite[Thm. 6.4]{GerHoro} for further details. \begin{rem} It is readily seen that, formally, the local time can be expressed in the following way for $\xi\in \RR^d$ and $(s,t)\in \Delta^2_T$ \begin{equation*} L_{s,t}(\xi)=\int_s^t \delta(\xi - x_r)\dd r, \end{equation*} where $\delta$ is the Dirac distribution. \end{rem} \begin{rem} For future reference, we also recall here that the Dirac distribution $\delta$ is contained in the in-homogeneous Sobolev space $H^{-\frac{d}{2}-\epsilon}$ for any $\epsilon>0$ (see e.g. \cite[Remark 1.54]{Bahouri2011}). \end{rem} \subsection{Besov spaces and distributions} Before introducing the notion of Besov spaces, we give a definition of the Littlewood-Paley blocks, which play a central role in the construction of these spaces. \begin{defn}[Littlewood-Paley blocks] For $j\in \NN$, $\rho_j:=\rho(2^{-j}\cdot)$ where $\rho$ is a smooth function supported on an annulus $\cA:=\{x\in\mathbb{R}^d:\frac{4}{3}\leq |x|\leq \frac{8}{3}\}$ and $\rho_{-1}$ is a smooth function supported on the ball $B_{\frac{4}{3}}$.
Then $\{\rho_j\}_{j\geq -1}$ is a partition of unity (\cite{Bahouri2011}). For $j\geq -1$ and $f\in \cS^\prime$ we define the Littlewood-Paley blocks $\Delta_j$ in the following way \begin{equation*} \Delta_j f=\mathscr{F}^{-1}(\rho_j\mathscr{F}{f}). \end{equation*} \end{defn} \begin{defn} For $\alpha\in \mathbb{R}$ and $p,q\in[1,\infty]$, the in-homogeneous Besov space $\cB^\alpha_{p,q}$ is defined by \begin{equation*} \cB_{p,q}^\alpha =\Big\{f\in \mathcal{S}^\prime \Big|\,\,\|f\|_{B_{p,q}^\alpha}:= \left(\sum _{j\geq-1} 2^{jq\alpha}\|\Delta_j f\|_{L^p(\mathbb{R}^d)}^q\right)^{\frac{1}{q}} <\infty \Big\}. \end{equation*} We will typically write $\cC^\alpha:=\cB_{\infty,\infty}^\alpha$. Moreover, by the definition of the partition of unity and the Fourier-Plancherel formula (\cite[Examples p99]{Bahouri2011}), the Besov space $\cB^\alpha_{2,2}$ coincides with the Sobolev space $H^\alpha$. \end{defn} \begin{rem} We will work with regularity of the local time in the Sobolev space $H^\kappa$. However, towards applications to regularization by noise in SDEs, we will also encounter Besov spaces, through Young's convolution inequality. We therefore give a definition of these spaces here. Of course, through Besov embedding, $H^\kappa \hookrightarrow B^{\kappa-(\frac{d}{2}-\frac{d}{p})}_{p,q}$ for any $p,q\in [2,\infty]$ and $\kappa \in \RR$ (e.g. \cite[Prop. 2.20]{Bahouri2011}), and thus our results imply that the local time is also included in these Besov spaces. We will, however, not work specifically in this setting, in order to avoid extra notation, but refer the reader to \cite{galeati2020noiseless,galeati2020prevalence} for a good overview of the regularity of the local time associated to Gaussian processes in such spaces. \end{rem} \section{Volterra-L\'evy processes}\label{sec:volterr levy process} In this section we give a brief introduction to L\'evy processes and to stochastic integration of Volterra kernels with respect to L\'evy processes. General references for this part are \cite[Chp. 4]{Sato1990} and \cite[Chp. 2, Chp. 4]{Applebaum2004}. In Section 3.1 we give the definition of Volterra-L\'evy processes (with possibly singular kernels) and obtain the associated characteristic function. In particular, our framework includes Volterra processes driven by symmetric $\alpha$-stable noise. Finally, we provide several examples of Volterra-L\'evy processes, including the \emph{fractional $\alpha$-stable process}. We begin by providing a definition of L\'evy processes, as well as a short discussion of a few important properties. \begin{defn}[L\'evy process]\label{alphastable} Let $T>0$ be fixed. We say that a c\`adl\`ag, $(\mathcal{F}_t)$-adapted stochastic process $(\scL_t)_{t\in [0,T]}$, defined on a complete filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in [0,T]},\PP)$ satisfying the usual conditions, is a {\em L\'evy process} if the following properties hold: \begin{itemize}[leftmargin=.3in] \item[{\rm (i)}] $\scL_0=0$ ($\PP$-a.s.). \item[{\rm (ii)}] $\scL$ has independent and stationary increments. \item[{\rm (iii)}] $\scL$ is continuous in probability, i.e. for all $\epsilon>0$, and all $s>0$, $$\lim_{t\rightarrow s}\PP(|\scL_t-\scL_s|>\epsilon)=0.$$ \end{itemize} Furthermore, let $\nu$ be a $\sigma$-finite measure on $\mathbb{R}^d$. We say that it is a \emph{L\'evy measure} if \begin{align*} \nu(\{0\})=0,\quad \int_{\mathbb{R}^d}(1\wedge|x|^2)\nu(\dd x)<\infty.
\end{align*} \end{defn} \begin{rem} A well-known description of a L\'evy process is the L\'evy-Khintchine formula: for a $d$-dimensional L\'evy process $\scL$, there exist a vector $a\in\mathbb{R}^d$, a positive definite symmetric $d\times d$ matrix $\sigma$ and a L\'evy measure $\nu$ such that the characteristic function is given, for $t\geq0$, by $\EE[e^{i\la\xi,\scL_t\ra}]=e^{-t\psi(\xi)}$ with \begin{align}\label{LK} \psi(\xi)=-i\la a,\xi\ra+\frac{1}{2}\la \xi,\sigma\xi\ra-\int_{\mathbb{R}^d-\{0\}}(e^{i\la\xi,x\ra}-1-i\la\xi,x\ra1_{|x|\leq1}(x))\nu(\dd x). \end{align} Here the triplet $(a,\sigma,\nu)$ is called the \emph{characteristic triplet} of the random variable $\scL_1$. \end{rem} A typical example of a L\'evy process is the case when the characteristic triplet is given by $(0,\sigma,0)$, resulting in a Brownian motion. Another typical example is when the characteristic triplet is given by $(0,0,\nu)$ and the L\'evy measure $\nu$ defines an $\alpha$-stable process. We provide the following definition for this class of processes. \begin{defn}[Standard $\alpha$-stable process] If a $d$-dimensional L\'evy process $(\scL_t)_{t\geq0}$ has the following characteristic function \begin{align*} \psi(\xi)=c_\alpha|\xi|^\alpha,\quad \xi\in\mathbb{R}^d \end{align*} with $\alpha\in(0,2]$ and some positive constant $c_\alpha$, then we say that $(\scL_t)_{t\geq0}$ is a standard $\alpha$-stable process. \end{defn} We now move on to the construction of Volterra-L\'evy processes, given in the form \begin{equation}\label{eq:volt processes} z_t=\int_0^t k(t,s)\dd \scL_s,\qquad t\in [0,T]. \end{equation} Of course, in the case when $(\scL_t)_{t\in [0,T]}$ is a Gaussian process, or even a square integrable martingale, the construction of such a stochastic integral is by now standard, and $z$ is constructed as an element in $L^2(\Omega)$ given that $k(t,\cdot)\in L^2([0,t])$ for all $t\in [0,T]$, see e.g. \cite{Protter2004}. However, in the case when $\scL$ is not square integrable, the construction of $z$ as a stochastic integral is not as straightforward. Nevertheless, several articles also discuss this construction in the case of $\alpha$-stable processes, which is sufficient for our purpose. The next remark gives only a brief overview of this construction, and we therefore ask the interested reader to consult the given references for further details. \begin{rem}\label{stableint} Consider a symmetric $\alpha$-stable process $\scL$ with $\alpha\in(0,2)$. From \cite[Ex. 25.10, p162]{Sato1990} we know that $\EE[|\scL_t|^p]=Ct^{p/\alpha}$ for any $-1<p<\alpha$ and $t\in[0,T]$; thus the process is not square integrable, and the standard ``It\^o type'' construction of the Volterra process in \eqref{eq:volt processes} cannot be applied. However, in \cite[Chp. 3.2-3.12]{ST1994} the authors propose several different ways of constructing the integral $\int_0^tk(t,s)\dd\scL_s$ given that $k(t,\cdot)\in L^\alpha([0,t])$. In particular, in \cite[Chp. 3.6]{ST1994} it is shown that the Volterra-stable process below is well-defined and exists in $L^p(\Omega)$ for any $p<\alpha$, given that the kernel $k(t,\cdot)\in L^\alpha([0,t])$ for all $t\in [0,T]$.
In fact, in the case when $\scL$ is a symmetric $\alpha$-stable process, it is known that for any $0<p<\alpha$ \begin{equation*} \left(\EE\left[\left|\int_0^t k(t,s)\dd\scL_s\right|^p\right] \right)^{\frac{1}{p}}\simeq_{p,\alpha,d} \left(\int_0^t |k(t,s)|^\alpha\dd s\right)^{\frac{1}{\alpha}}, \end{equation*} where $\simeq_{p,\alpha,d}$ means that the two sides differ up to a constant depending on $p,\alpha$ and $d$ (recall that $d$ is the dimension of $\scL$). See e.g. \cite{ROSINSKI1986} and the references therein for more details on this relation and the construction of such integrals. \end{rem} The above discussion yields the following definition of the Volterra-L\'evy process. \begin{defn}[Volterra-L\'evy process]\label{VL} Fix $T>0$, and let $(\scL_t)_{t\in [0,T]}$ be a L\'evy process as given in Definition \ref{alphastable}. For a given kernel $k:\Delta_T^2\rightarrow\mathbb{R}$ with the property that for any $t\in [0,T]$, $k(t,\cdot)\in L^\beta([0,t])$ with $\beta\in(0,2]$, define $$ z_t=\int_0^tk(t,s)\dd \scL_s,\quad t\geq0, $$ where the integral is constructed in the $L^p(\Omega)$ sense for $p\leq \beta$, as discussed above. Then we call the stochastic process $(z_t)_{t\in [0,T]}$ a {\em Volterra-L\'evy process}, where $\scL$ is called the L\'evy process associated to $z$ and $k$ is called the Volterra kernel. \end{defn} \begin{prop}\label{integral} Let $(\scL_t)_{t\in [0,T]}$ be a L\'evy process on a probability space $(\Omega,\cF,\PP)$, such that $\EE[|\scL_t|^p]<\infty$ for all $0<p<\beta$ where $\beta\in (0,2]$. If $k(t,\cdot)\in L^{\beta}([0,t])$ for any $t\in [0,T]$, then the Volterra-L\'evy process $(z_t)_{t\in [0,T]}$ given by \begin{equation*} z_t=\int_0^t k(t,s)\dd \scL_s \end{equation*} is well defined as an element of $L^{p}(\Omega)$ for any $0<p<\beta$. For $0\leq s\leq t\leq T$, the characteristic function of $z$ is given by \begin{align}\label{chfz} \EE[\exp(i \la \xi, z_t\ra)]=\exp \left(-\int_0^t\psi(k(t,s)\xi ) \dd s\right), \end{align} and the conditional characteristic function is given by \begin{align}\label{chfcz} \EE[\exp(i \la \xi, z_t\ra)|\cF_s]=\cE_{0,s,t}(\xi)\exp \left(-\int_s^t\psi(k(t,r)\xi )\dd r\right), \end{align} where $\cE_{0,s,t}(\xi):=\exp\left(i\la \xi, \int_0^sk(t,r)\dd \scL_r \ra \right)$. \end{prop} \begin{proof} The fact that $z_t\in L^p(\Omega)$ for any $0<p<\beta$ follows from Remark \ref{stableint}. In fact, the statement is even stronger in the case when $\scL$ is a square integrable martingale, as in this case it is well known that if $k(t,\cdot)\in L^2([0,t])$ for any $t\in [0,T]$ then $z_t\in L^2(\Omega)$ for any $t\in [0,T]$. By application of a standard \emph{dominated convergence} argument, it is readily checked that, for a sequence of partitions $\{0=s_0<s_1<\dots<s_n=t\}$ with vanishing mesh, the characteristic function satisfies the following relations \begin{align*} \EE[\exp(i \la \xi, z_t\ra)]&= \EE\Big[\lim_{n\rightarrow\infty}\exp\Big(\sum_{j=0}^{n-1} i \la k(t,s_j)\xi,(\scL_{s_{j+1}}-\scL_{s_j})\ra\Big)\Big]\\ &=\lim_{n\rightarrow\infty} \prod_{j=0}^{n-1}\EE\Big[\exp\Big(i \la k(t,s_j)\xi,(\scL_{s_{j+1}}-\scL_{s_j})\ra\Big)\Big]\\ &=\lim_{n\rightarrow\infty} \exp \Big(-\sum_{j=0}^{n-1}(s_{j+1}-s_j)\psi(k(t,s_j)\xi)\Big) \\ &=\exp \left(-\int_0^t\psi(k(t,s)\xi ) \dd s\right). \end{align*} It follows that \eqref{chfz} holds.
For \eqref{chfcz}, since $\int_s^tk(t,r)\dd\scL_r$ is independent of $\cF_s$ and $\int_0^sk(t,r)\dd \scL_r$ is $\cF_s$-measurable for any $s\in[0,t]$, we similarly have \begin{align*} \EE[\exp(i \la \xi, z_t\ra)|\cF_s]&=\EE[\exp(i \la \xi, \int_s^tk(t,r)\dd\scL_r+\int_0^sk(t,r)\dd\scL_r\ra)|\cF_s] \\&=\cE_{0,s,t}(\xi)\exp \left(-\int_s^t\psi(k(t,r)\xi )\dd r\right). \end{align*} \end{proof} Everything we have introduced so far relates only to the probabilistic properties of the Volterra-L\'evy process, without any details regarding its sample path behavior. Towards the goal of proving regularity of the local time associated to $(z_t)_{t\in [0,T]}$, as done in Section \ref{sec: regualrity of local times}, we require that the process $z$ is continuous in probability. We therefore provide here a simple sufficient condition on the kernel $k$ that ensures this property of the process. \begin{lem}\label{lem:cont in prob} Let $(z_t)_{t\in [0,T]}$ be a Volterra-L\'evy process, as given in Definition \ref{VL}. Then $z$ is continuous in probability if there exists a $p>0$ such that \begin{equation*} \EE[|z_t-z_s|^p]\rightarrow 0 \quad {\rm as} \quad s\rightarrow t. \end{equation*} \end{lem} \begin{proof} It is readily checked that for any $p>0$ and any $\epsilon>0$ \begin{equation*} \PP(|z_t-z_s|>\epsilon)\leq \frac{1}{\epsilon^p} \EE[|z_t-z_s|^p], \end{equation*} and thus if $\EE[|z_t-z_s|^p]\rightarrow 0$ as $s\rightarrow t$, continuity in probability holds. \end{proof} Below we provide three examples of different types of Volterra processes driven by L\'evy noise. \begin{ex}[Brownian motion] Let $\beta=2$, $k(t,\cdot)\in L^2([0,t])$ for $t\in[0,T]$. Suppose $\scL$ is a Brownian motion with values in $\mathbb{R}^d$. Then it is well known that $z_t=\int_0^tk(t,s)\dd \scL_s$ is well-defined in $L^2(\Omega)$ as a Wiener integral. The sample paths of such processes are clearly measurable, and depending on the regularity of the kernel $k$, the process may also be (H\"older) continuous. \end{ex} \begin{ex}[Square-integrable martingale case] Let $\beta=2$, $k(t,\cdot)\in L^2([0,t])$ for $t\in[0,T]$ and let $\scL$ be an $(\mathcal{F}_t)$-martingale satisfying $\EE[|\scL_t|^2]<\infty$ for all $t\in[0,T]$. Then we know that $z_t=\int_0^tk(t,s)\dd \scL_s$, $t\geq0$, is well-defined according to Proposition \ref{integral} (this is also clear from classical martingale theory, e.g. \cite{Applebaum2004}). \end{ex} \begin{ex}[Standard $\alpha$-stable case]\label{ex:standard alpha stable} For $\alpha\in(0,2)$ and $\beta>-\frac{1}{\alpha}$, let $k$ be such that $|k(t,s)|\simeq |t-s|^\beta$ and $k(t,\cdot)\in L^\alpha([0,t])$ for all $t\in[0,T]$, and let $\scL$ be a standard $\alpha$-stable process as defined in Definition \ref{alphastable}. By Proposition \ref{integral}, we know that $z_t=\int_0^tk(t,s)\dd\scL_s$, $t\geq0$, is well defined for $\alpha\in(0,2)$ (see also \cite[Section 3.6 Examples]{ST1994}). Furthermore, $(z_t)_{t\in [0,T]}$ is continuous in probability. Indeed, note that $z_t-z_s=\int_s^t k(t,r)\dd\scL_r+\int_0^s (k(t,r)-k(s,r))\dd\scL_r$. Let us consider the first integral in this decomposition, as the second follows similarly. By Remark \ref{stableint} it follows that there exists a $p>0$ such that \begin{equation} \EE\Big[\Big|\int_s^t k(t,r)\dd \scL_r\Big|^p\Big]\simeq_{p,\alpha,d} \left(\int_s^t |k(t,r)|^\alpha \dd r\right)^{\frac{p}{\alpha}}. \end{equation} Since $|k(t,r)|\simeq |t-r|^\beta$, then \begin{equation} H(s,t):= \int_s^t |k(t,r)|^\alpha\dd r\simeq \int_s^{t} |t-r|^{\beta\alpha} \dd r=(\beta\alpha+1)^{-1}|t-s|^{\beta\alpha+1}.
\end{equation} Since $\alpha>0$ and $\beta> -1/\alpha$, it follows that $H(s,t)\rightarrow 0$ as $s\rightarrow t$. Treating similarly the term $\int_0^s (k(t,r)-k(s,r))\dd \scL_r$, and using that \begin{equation} \EE[|z_t-z_s|^p]\lesssim_p \EE\Big[\Big|\int_s^t k(t,r)\dd\scL_r\Big|^p\Big]+\EE\Big[\Big|\int_0^s (k(t,r)-k(s,r))\dd\scL_r\Big|^p\Big], \end{equation} we conclude by Lemma \ref{lem:cont in prob} that $(z_t)_{t\in [0,T]}$ is continuous in probability. \end{ex} With the above preparation at hand, we can construct fractional $\alpha$-stable processes and give a representation of their characteristic functions. We summarize this in the following example. \begin{ex}[\rm{Fractional $\alpha$-stable process}]\label{fr} Let $\scL$ be an $\alpha$-stable process with $ \alpha\in(0,2]$, and consider the Volterra kernel $k(t,s)=(t-s)^{H-\frac{1}{\alpha}}$, $H\in(0,1)$. Then the process $z_t=\int_0^tk(t,s)\dd\scL_s$ is called a \emph{fractional $\alpha$-stable process} (of Riemann-Liouville type), and specifically, if $\alpha=2$, then $\scL$ is a Brownian motion and $z$ is a \emph{fractional Brownian motion} of Riemann-Liouville type. Note that in this case $k(t,\cdot)\in L^\alpha([0,t])$ for any $H\in (0,1)$. There is a more detailed study of fractional processes of this type in \cite[Chapter 7]{ST1994}. An application of \eqref{chfz} yields that the characteristic function associated to the fractional $\alpha$-stable process $z$ is given by $$\EE[\exp(i \la \xi, z_t\ra)]=\exp \left(-c_\alpha \frac{|\xi|^\alpha t^{H\alpha}}{H\alpha} \right).$$ Note also that by the same argument as used in Example \ref{ex:standard alpha stable}, it is readily checked that the fractional $\alpha$-stable process $(z_t)_{t\in [0,T]}$ is continuous in probability. \end{ex} \section{Regularity of the local time associated to Volterra-L\'evy processes}\label{sec: regualrity of local times} This section is devoted to proving space-time regularity of the local time associated to Volterra-L\'evy processes, as defined in Section \ref{sec:volterr levy process}. We begin by giving a notion of local non-determinism for these processes, and provide a few examples of specific processes which satisfy this property. \subsection{Local non-determinism condition for Volterra-L\'evy processes} The following definition of a local non-determinism condition can be seen as an extension of the concept of strong local non-determinism used in the context of Gaussian processes, see e.g. \cite{Xiao2006,galeati2020prevalence,HarangPerkowski2020}. \begin{defn}\label{def: zeta alpha Volterra kernel} Let $\scL$ be a L\'evy process with characteristic $\psi:\RR^d\rightarrow \CC$ as given in \eqref{LK}, and let $z$ be a Volterra-L\'evy process (Definition \ref{VL}) with L\'evy process $\scL$ and Volterra kernel $k:\Delta^2_T\rightarrow \RR$ satisfying $k(t,\cdot)\in L^\alpha([0,t])$ for all $t\in [0,T]$. If for some $\zeta>0$ and $\alpha\in(0,2]$ the following inequality holds \begin{equation}\label{LND} \lim_{t\downarrow 0} \inf_{s\in(0,t]} \inf_{\xi\in \RR^d} \frac{\int_s^t \psi(k(t,r)\xi) \dd r}{(t-s)^\zeta|\xi|^\alpha}>0, \end{equation} then we say that $z$ is $(\alpha,\zeta)$-locally non-deterministic ($(\alpha,\zeta)$-LND). \end{defn} \begin{rem} The elementary example of a Volterra kernel is $k(t,s):=1_{[0,t]}(s)$ for any $0\leq s\leq t\leq T$. In this case the Volterra-L\'evy process is just given as the L\'evy process itself, i.e. $z_t =\scL_t$. If we let $\scL$ be a standard $d$-dimensional $\alpha$-stable process, condition \eqref{LND} is fulfilled with $\zeta=1$.
Hence a standard $d$-dimensional $\alpha$-stable process is $(\alpha,1)$-LND, which coincides with the conclusion in \cite[Proposition 4.5, Example (a)]{Nolan89}. \end{rem} \begin{rem} There already exist several concepts of local non-determinism but, as far as we know, most of them are given in terms of a condition on the variance of certain stochastic processes. The only exception we are aware of is the definition of Nolan in \cite{Nolan89} for $\alpha$-stable processes, where a similar condition is stated in $L^p$ spaces, with $p=\alpha$ (see \cite[Definition 3.3]{Nolan89}). Of course, when working with general $\alpha$-stable processes we do not in general have finite variance, and thus the standard definitions of such a concept are not applicable. On the other hand, in the case when $\alpha=2$, we have finite variance, and then the above criterion is very similar to the condition for strong local non-determinism for Gaussian Volterra processes, as discussed for example in \cite{Xiao2006}. Working with the conditional characteristic function of Volterra $\alpha$-stable processes, we see, however, that it is this condition which in some sense needs to be imposed in order to prove existence and regularity of the local times associated to these processes. \end{rem} It is readily seen that the Volterra $\alpha$-stable process satisfies \eqref{LND}, with $\zeta$ depending on the choice of kernel $k$. The condition is, however, somewhat more general, as we only require the processes to behave similarly to Volterra $\alpha$-stable processes. Let us provide an example to discuss some interesting processes that satisfy the LND condition. \begin{ex}[Volterra kernels] \label{VK} We now discuss in some detail the two examples of Volterra kernels that interest us most. The first one usually relates to fractional type processes, for instance fractional Brownian motion and fractional stable processes. As we will see later, the second one makes the corresponding Volterra-L\'evy process an infinitely regularising process, similarly to the Gaussian counterpart discussed in \cite{HarangPerkowski2020}. \begin{itemize}[leftmargin=.3in] \item[\rm (i)] For $\alpha\in(0,2]$ and $H\in(0,1)$, let $k(t,s)=F(t,s)(t-s)^{H-\frac{1}{\alpha}}$, where $F:\Delta_T^2\rightarrow \RR\setminus\{0\}$ is continuous and $F(t,s)\simeq 1$ when $|t-s|\rightarrow 0$, where $\simeq$ means that the two sides are comparable up to a positive constant. It can easily be checked that $k(t,\cdot)\in L^\alpha([0,t])$ for $t\in[0,T]$. \item[\rm (ii)] Let $p>\frac{1}{\alpha}$, and consider the kernel $k(t):=t^{-\frac{1}{\alpha}}(\ln \frac{1}{t})^{- p}$ for $t\in [0,1)$, i.e. $k(t,s):=k(t-s)$. It is readily seen that $k(t,\cdot)\in L^\alpha([0,t])$ for any $t<1$. \end{itemize} \end{ex} \begin{ex}[Gaussian case: $\alpha=2$] \label{Gaussian} Let $\scL=B$ be a Brownian motion. Then the Gaussian Volterra process $z_t=\int_0^tk(t,s)\dd B_s, t\in[0,T]$ is $(2,\zeta)$-LND according to Definition \ref{def: zeta alpha Volterra kernel} if \begin{align*} \lim_{t\downarrow 0} \inf_{s\in(0,t]} \frac{\int_s^t |k(t,r)|^2 \dd r}{(t-s)^\zeta}>0. \end{align*} \end{ex} As we mentioned, the L\'evy process $\scL$ does not have to be of Gaussian type. For non-Gaussian $\scL$ we mostly consider $\alpha$-stable processes, or processes which behave similarly to stable processes. Since the condition \eqref{LND} only depends on the characteristic function $\psi$ of $\scL$, there is a large class of jump processes which can be studied here.
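Before turning to stable-type examples, let us verify the LND condition in the model case of the fractional $\alpha$-stable process from Example \ref{fr}, which corresponds to the kernel in Example \ref{VK} {\rm (i)} with $F\equiv1$. For $\scL$ a standard $\alpha$-stable process we have $\psi(\xi)=c_\alpha|\xi|^\alpha$, and with $k(t,r)=(t-r)^{H-\frac{1}{\alpha}}$ we compute \begin{equation*} \int_s^t \psi(k(t,r)\xi) \dd r=c_\alpha|\xi|^\alpha\int_s^t (t-r)^{\alpha H-1} \dd r=\frac{c_\alpha}{\alpha H}\,|\xi|^\alpha (t-s)^{\alpha H}, \end{equation*} so the quotient in \eqref{LND} is identically equal to $\frac{c_\alpha}{\alpha H}$ for $\zeta=\alpha H$. Hence the fractional $\alpha$-stable process is $(\alpha,\alpha H)$-LND; the assumption $\zeta\in(0,\frac{\alpha}{d})$ of Theorem \ref{thm: first main reg of local time} then reads $H<\frac{1}{d}$, and the resulting regularity $\kappa<\frac{\alpha}{2\zeta}-\frac{d}{2}=\frac{1}{2H}-\frac{d}{2}$ is the one recorded in Example \ref{exVL} {\rm (iii)} below.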
\begin{ex}[Stable type processes]\label{Non-Gaussian} Fix an $\alpha\in(0,2)$, and suppose that the kernel $k:\Delta^2_T\rightarrow \RR$ with $k(t,\cdot)\in L^\alpha([0,t])$ satisfies for some $\zeta>0$ the inequality \begin{align} \label{k} \lim_{t\downarrow 0} \inf_{s\in(0,t]} \frac{\int_s^t |k(t,r)|^\alpha \dd r}{(t-s)^\zeta}>0. \end{align} Then the following processes satisfy the LND condition in Definition \ref{def: zeta alpha Volterra kernel}: \begin{itemize}[leftmargin=.3in] \item[\rm(i)] $\scL$ is a standard $d$-dimensional $\alpha$-stable process, i.e., $$\psi(\xi)=c_\alpha|\xi|^\alpha,\quad c_\alpha>0.$$ Then obviously $z_t =\int_0^t k(t,r)\dd\scL_r, t>0$ is $(\alpha,\zeta)$-LND.\\ Moreover, if $k(t,s)=k(t-s)$ for $0\leq s\leq t<\infty$ and $\int_0^t|k(t,s)|^\alpha \dd s>0$, then, according to Definition \ref{def: zeta alpha Volterra kernel}, the process $z$ is $(\alpha,1)$-LND, which coincides with the conclusion in \cite[Proposition 4.5]{Nolan89}. \item[\rm(ii)] $\scL=(\scL_1,\cdots,\scL_d)$, where $\scL_1,\cdots,\scL_d$ are independent $1$-dimensional standard $\alpha$-stable processes. In this case the corresponding characteristic function $\psi$ is given by $$ \psi(\xi)=c_\alpha(|\xi_1|^\alpha+\cdots+|\xi_d|^\alpha),\quad c_\alpha>0. $$ By Jensen's inequality, it follows that $|\xi_1|^\alpha+...+|\xi_d|^\alpha =|\xi_1|^{2\cdot \frac{\alpha}{2}}+...+|\xi_d|^{2\cdot \frac{\alpha}{2}} \geq (|\xi_1|^2+...+|\xi_d|^2)^{\frac{\alpha}{2}}=|\xi|^\alpha$ for $\alpha\in(0,2]$, which implies $\psi(\xi)\geq c_\alpha|\xi|^\alpha$. By \eqref{k} we conclude that $z_t =\int_0^t k(t,r)\dd\scL_r, t>0$, is $(\alpha,\zeta)$-LND. \item[\rm(iii)] $\scL$ is a $d$-dimensional L\'evy process with characteristic function $$\psi(\xi)=|\xi|^{\alpha}{\log(2+|\xi|)},\quad\xi\in \mathbb{R}^d,$$ where we additionally assume $\alpha\in(0,1)$ (see \cite[Example 1.5]{Kang2015}). This process is not a stable process, but its small jumps behave similarly to those of a stable process. Since $|\xi|^{\alpha}{\log(2+|\xi|)}\geq \log(2)\,|\xi|^\alpha$ for $\xi\in \mathbb{R}^d$, we have \begin{align*} \lim_{t\downarrow 0} \inf_{s\in(0,t]} \inf_{\xi\in \RR^d} \frac{\int_s^t \psi(k(t,r)\xi) \dd r}{(t-s)^\zeta|\xi|^\alpha}&\geq \log(2)\,\lim_{t\downarrow 0} \inf_{s\in(0,t]} \frac{\int_s^t |k(t,r)|^{{\alpha}}\dd r}{(t-s)^\zeta}>0. \end{align*} Therefore $z_t =\int_0^t k(t,r)\dd\scL_r, t>0$, is $(\alpha,\zeta)$-LND. \end{itemize} \end{ex} \subsection{Regularity of the local time} With the concept of local non-determinism at hand, we are now ready to prove the regularity of the local time associated to Volterra-L\'evy processes which are $(\alpha,\zeta)$-LND according to Definition \ref{def: zeta alpha Volterra kernel}, thereby also proving Theorem \ref{thm: first main reg of local time}. The following theorem provides a proof of Theorem \ref{thm: first main reg of local time}, as well as giving $\PP$-a.s. bounds for the Fourier transform of the occupation measure and the local time.
\begin{thm}[{\rm Regularity of the local time}]\label{thm: regualrity of local times associated to alpha Volterra process} Let $z:\Omega\times [0,T]\rightarrow \RR^d$ be a Volterra-L\'evy process with characteristic $\psi:\RR^d \rightarrow \CC$ on a complete filtered probability space $(\Omega,\cF,\{\cF_t\}_{t\in [0,T]},\PP)$, and suppose $z$ is $(\alpha,\zeta)$-LND for some $\zeta\in(0,\frac{\alpha}{d})$ and $\alpha\in(0,2]$, continuous in probability, and adapted to the filtration $(\cF_t)_{t\in [0,T]}$. Then the local time $L:\Omega\times [0,T]\times \RR^d \rightarrow \RR_+$ associated to $z$ exists and is square integrable. Furthermore, for any $\kappa<\frac{\alpha}{2\zeta}-\frac{d}{2}$ there exists a $\gamma>\frac{1}{2}$ such that the local time is contained in the space $\cC^\gamma_T H^\kappa$. \end{thm} \begin{proof} We will follow along the lines of the proof of \cite[Theorem 17]{HarangPerkowski2020}, but adapt it to the case of L\'evy processes. To this end, we will apply the stochastic sewing lemma from \cite{le2018}, which is provided in Lemma \ref{Lem: Stochastic sewing lemma} for self-containedness. The Fourier transform of the occupation measure $\mu_{s,t}(\dd x)$ is given by $\int_s^t e^{i\la\xi, z_r\ra}\dd r$. Note that this coincides with the Fourier transform of the local time $L_{s,t}(x)$ whenever $L$ exists. Our first goal is therefore to show that for any $p\geq 2$, the following inequality holds for some $\lambda\geq 0$ and $\gamma \in (\frac{1}{2},1)$ \begin{equation*} \|\widehat{\mu_{s,t}}(\xi)\|_{L^p(\Omega)}\lesssim (1+|\xi|^2)^{-\frac{\lambda}{2}}|t-s|^\gamma. \end{equation*} To this end, the stochastic sewing lemma (see Lemma \ref{Lem: Stochastic sewing lemma}) will provide us with this information. We begin by defining $$A_{s,t}^\xi:=\int_s^t \EE[\exp(i\la \xi, z_r\ra)|\cF_s]\dd r,$$ and for a partition $\cP[s,t]$ of $[s,t]$ we define \begin{equation*} \cA_{\cP[s,t]}^\xi:=\sum_{[u,v]\in \cP[s,t]} A_{u,v}^\xi. \end{equation*} If the integrand $A^\xi$ satisfies the conditions {\rm (i)-(ii)} in Lemma \ref{Lem: Stochastic sewing lemma}, then a unique limit $\cA_{s,t}^\xi=\lim_{|\cP|\rightarrow 0}\cA_{\cP}^\xi$ exists in $L^p(\Omega)$. Note that then $\int_s^t e^{i\la \xi, z_r\ra} \dd r =\cA_{s,t}^\xi$ in $L^p(\Omega)$. We continue by proving that conditions {\rm (i)-(ii)} in Lemma \ref{Lem: Stochastic sewing lemma} are indeed satisfied for our integrand $A$. It is already clear that $A^\xi_{s,s}=0$, and $A^\xi_{s,t}$ is $\cF_t$-measurable. For any point $u\in [s,t]$ we define $$ \delta_u f_{s,t}:=f_{s,t}-f_{s,u}-f_{u,t} $$ for any function $f:[0,T]^2 \rightarrow \RR$. It follows by the tower property and linearity of conditional expectations that \begin{multline*} \EE[\delta_u A^\xi_{s,t}|\cF_s]=\EE\Big[\int_s^t \EE[\exp(i\la \xi,z_r\ra)|\cF_s]\dd r \\ -\int_s^u \EE[\exp(i\la \xi, z_r\ra)|\cF_s]\dd r-\int_u^t \EE[\exp(i\la \xi, z_r\ra)|\cF_u]\dd r \,\Big|\,\cF_s\Big]=0. \end{multline*} Lastly, we need to control the term $\|\delta_u A_{s,t}^\xi\|_{L^p(\Omega)}$. To this end, using Proposition \ref{integral}, we know that \begin{equation*} A_{s,t}^\xi = \int_s^t \cE_{0,s,r}(\xi) \exp\left(- \int_s^r \psi(k(r,l)\xi) \dd l \right) \dd r, \end{equation*} where $\cE$ is defined as in \eqref{chfcz}. Therefore, it is readily checked that \begin{equation*} \delta_{u}A_{s,t}^\xi = \int_u^t \cE_{0,s,r}(\xi) \exp\left(-\int_s^r \psi(k(r,l)\xi) \dd l \right) - \cE_{0,u,r}(\xi) \exp\left(-\int_u^r \psi(k(r,l)\xi) \dd l \right) \dd r.
\end{equation*} Of course, moments of the complex exponential $\cE_{0,s,r}(\xi)$ are bounded by $1$, i.e. for any $r\in[s,t]$, $\|\cE_{0,s,r}(\xi)\|_{L^p(\Omega)}\leq 1$, and therefore it follows that \begin{equation*} \|\delta_u A_{s,t}^\xi\|_{L^p(\Omega)} \lesssim \int_u^t \exp\left(-\int_s^r \psi(k(r,l)\xi) \dd l \right)+\exp\left(-\int_u^r \psi(k(r,l)\xi) \dd l\right) \dd r. \end{equation*} Using the fact that $z$ is $(\alpha,\zeta)$-LND for some $\zeta\in (0,1)$, and using that $(r-s)^\zeta\geq (r-u)^\zeta $, we obtain the estimate \begin{equation*} \|\delta_u A_{s,t}^\xi\|_{L^p(\Omega)} \lesssim \int_u^t \exp\left(-c |\xi|^\alpha (r-u)^\zeta \right) \dd r. \end{equation*} Note in particular that this holds for any $p\geq 2$. By the properties of the exponential function, we have that for any $\eta\in \RR_+$ \begin{equation}\label{eq: exponential bound} \exp\left(-c |\xi|^\alpha (r-u)^\zeta \right)\leq \exp(cT^\zeta)\, c^{-\eta} \left(1+|\xi|^\alpha\right)^{-\eta}(r-u)^{-\zeta\eta}\sup_{q\in \RR_+} q^\eta \exp(-q). \end{equation} Since $1+|\xi|^\alpha\lesssim (1+|\xi|^2)^\frac{\alpha}{2}$ for all $\alpha\in(0,2]$, applying this relation in \eqref{eq: exponential bound}, and assuming that $0\leq \eta\zeta<\frac{1}{2}$, it follows that for all $p\geq 2$ there exist a $\gamma>\frac{1}{2}$ and a $\lambda<\frac{\alpha}{2\zeta}$ such that \begin{equation*} \|\delta_u A_{s,t}^\xi\|_{L^p(\Omega)} \lesssim (1+|\xi|^2)^{-\frac{\lambda}{2}}(t-u)^{\gamma}. \end{equation*} Thus, both conditions of \eqref{eq:integrand cond} in Lemma \ref{Lem: Stochastic sewing lemma} are satisfied. By a simple addition and subtraction of the integrand $A_{s,t}^\xi$ in \eqref{bounds on stochastic integral}, it follows that for all $p\geq 2$ the limiting process $\cA_{s,t}^\xi$ satisfies \begin{equation}\label{eq:bound on integral A} \|\cA_{s,t}^\xi\|_{L^p(\Omega)} \lesssim (1+|\xi|^2)^{-\frac{\lambda}{2}}(t-s)^{\gamma}. \end{equation} We will now show that $\widehat{\mu_{s,t}}(\xi)=\cA_{s,t}^\xi$ in $L^p(\Omega)$. For a partition $\cP$ of $[s,t]$, we have \begin{align*} \|\widehat{\mu_{s,t}}(\xi)-\cA^\xi_{s,t}\|_{L^p(\Omega)} \leq \sum_{[u,v]\in \cP} \int_u^v \|e^{i\la \xi,z_r\ra}-\EE[e^{i\la \xi,z_r\ra}|\cF_u]\|_{L^p(\Omega)}\dd r. \end{align*} By Minkowski's inequality, we have \begin{equation*} \|e^{i\langle \xi,z_r\rangle} -\EE[e^{i\langle \xi,z_r\rangle}|\cF_u]\|_{L^p(\Omega)}\leq \|e^{i\langle \xi,z_r\rangle} -e^{i\langle \xi,z_u\rangle}\|_{L^p(\Omega)}+\|\EE[e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|\cF_u]\|_{L^p(\Omega)}, \end{equation*} and by Jensen's inequality it follows that \begin{equation*} \|\EE[e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|\cF_u]\|_{L^p(\Omega)}\leq \|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}\|_{L^p(\Omega)}. \end{equation*} This implies that \begin{equation*} \|e^{i\langle \xi,z_r\rangle} -\EE[e^{i\langle \xi,z_r\rangle}|\cF_u]\|_{L^p(\Omega)}\leq 2 \|e^{i\langle \xi,z_r\rangle} -e^{i\langle \xi,z_u\rangle}\|_{L^p(\Omega)}. \end{equation*} Furthermore, for fixed $\epsilon>0$ we have \begin{align*} \|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}\|_{L^p(\Omega)}& \leq \|(e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle})1_{|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|>\epsilon}\|_{L^p(\Omega)}+\epsilon \\ &\leq 2\,\PP(|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|>\epsilon)^{\frac{1}{p}}+\epsilon, \end{align*} since $|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|\leq 2$.
We conclude that for any $\epsilon>0$ \begin{equation}\label{eq:compare} \|\widehat{\mu_{s,t}}(\xi)-\cA^\xi_{\cP[s,t]}\|_{L^p(\Omega)} \leq 2\epsilon(t-s)+ 4\sum_{[u,v]\in \cP} \int_u^v\PP(|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|>\epsilon)^{\frac{1}{p}}\dd r. \end{equation} Since this holds for any partition $\cP$, we may let the mesh tend to $0$. Using the assumption that $z$ is continuous in probability, it follows that $e^{i\langle \xi,z_r\rangle}$ is continuous in probability (see e.g. the continuous mapping theorem), and thus \begin{equation*} \lim_{|\cP|\rightarrow 0} \sum_{[u,v]\in \cP} \int_u^v\PP(|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|>\epsilon)^{\frac{1}{p}}\dd r \leq \lim_{|\cP|\rightarrow 0} \sup_{[u,v]\in \cP} \sup_{r\in [u,v]} \PP(|e^{i\langle \xi,z_r\rangle}-e^{i\langle \xi,z_u\rangle}|>\epsilon)^{\frac{1}{p}}(t-s)=0. \end{equation*} Combining this with the convergence $\cA^\xi_{\cP[s,t]}\rightarrow \cA^\xi_{s,t}$ in $L^p(\Omega)$, we conclude that for any $\epsilon>0$ \begin{equation*} \|\widehat{\mu_{s,t}}(\xi)-\cA^\xi_{s,t}\|_{L^p(\Omega)}\leq 2\epsilon(t-s), \end{equation*} and since $\epsilon$ can be chosen arbitrarily small, we conclude that $\widehat{\mu_{s,t}}(\xi)=\cA^\xi_{s,t}$ in $L^p(\Omega)$. We move on to estimate the Sobolev norm $\|L_{s,t}\|_{H^{\kappa}}$ for an appropriate $\kappa\in \RR$. We begin by observing that \begin{equation*} \|\|\mu_{s,t}\|_{H^\kappa}\|_{L^p(\Omega)}=\left[\EE\left(\int_{\RR^d} (1+|\xi|^2)^{\kappa}|\widehat{\mu_{s,t}}(\xi)|^2 d\xi\right)^\frac{p}{2}\right]^\frac{1}{p}. \end{equation*} By Minkowski's inequality, it follows that \begin{equation*} \|\|\mu_{s,t}\|_{H^\kappa}\|_{L^p(\Omega)} \lesssim \|(1+|\cdot|^2)^{\frac{\kappa}{2}}\|\widehat{\mu_{s,t}}\|_{L^p(\Omega)}\|_{L^2(\RR^d)}, \end{equation*} and we then use the bound from \eqref{eq:bound on integral A} to observe that \begin{equation*} \|\|\mu_{s,t}\|_{H^\kappa}\|_{L^p(\Omega)} \lesssim (t-s)^{\gamma}\|(1+|\cdot|^2)^{\frac{\kappa-\lambda}{2}}\|_{L^2(\RR^d)}. \end{equation*} Choosing $\kappa=\lambda-\frac{d}{2}-\epsilon$ for some arbitrarily small $\epsilon>0$, it follows that $$ \|(1+|\cdot|^2)^{\frac{\kappa-\lambda}{2}}\|_{L^2(\RR^d)}= \|(1+|\cdot|^2)^{-\frac{d}{4}-\frac{\epsilon}{2}}\|_{L^2(\RR^d)}<\infty. $$ Recalling that $\lambda<\frac{\alpha}{2\zeta}$, and since $\epsilon>0$ can be chosen arbitrarily small, we obtain that for any $\kappa<\frac{\alpha}{2\zeta}-\frac{d}{2}$ there exists a $\gamma>\frac{1}{2}$ such that \begin{equation*} \|\|\mu_{s,t}\|_{H^\kappa}\|_{L^p(\Omega)} \lesssim (t-s)^{\gamma}. \end{equation*} Since $p\geq 2$ can be chosen arbitrarily large, we conclude by Kolmogorov's continuity theorem that there exists a set $\Omega^\prime \subset \Omega$ of full measure such that for all $\omega\in \Omega^\prime$ there exists a $C(\omega)>0$ such that \begin{equation*} \|\mu_{s,t}(\omega)\|_{H^\kappa}\leq C(\omega)(t-s)^{\gamma}. \end{equation*} In particular, this implies that for almost all $\omega\in \Omega$ the occupation measure $\mu(\omega)$ admits a density in $L^2(\RR^d)$, and thus the local time $L(\omega)$ (given as the density of $\mu$) exists, and our claim follows. \end{proof} \begin{rem} In the case when $\alpha=2$, $z$ is a Gaussian process, and Theorem \ref{thm: regualrity of local times associated to alpha Volterra process} provides the same regularity of the associated local time for a Gaussian Volterra process as proven, for example, in \cite{HarangPerkowski2020} (or, without considering the joint time regularity, as shown in \cite{GerHoro,Berman73, Pitt78}).
This theorem can therefore be seen as an extension of this work to the class of Volterra-L\'evy processes. \end{rem} We now give several examples of the application of Theorem \ref{thm: regualrity of local times associated to alpha Volterra process}, which yield the regularity of the local time for a few specific Volterra $\alpha$-stable processes. All of the following examples were also studied in dimension $d=1$ in \cite[Corollary 4.6, Examples]{Nolan89}, where it is shown that the local time $L_{t}(x)$ of a Volterra $\alpha$-stable process exists $a.s.$ and is continuous for $(t,x)\in[0,T]\times\mathbb{R}$; furthermore, for fixed $t\in[0,T]$, $L_{t}(x)$ is H\"older continuous in $x\in \mathbb{R}$ of some order less than $1$. The method in \cite{Nolan89} relies heavily on the $L^\alpha$-representation of $\alpha$-stable processes. \begin{ex}[\rm{Regularity of the local time for Volterra $\alpha$-stable processes}]\label{exVL} We consider Volterra $\alpha$-stable processes $$z_t=\int_0^tk(t-s)\dd \scL_s,\quad t\geq0,$$ where $\scL$ is a $d$-dimensional standard $\alpha$-stable process with $\alpha\in(0,2]$. \begin{itemize}[leftmargin=.3in] \item[\rm (i)] Let $d=1$ and $k(t)\equiv 1$ for all $t\geq0$; then $$z_t=\scL_t$$ is a $1$-dimensional standard $\alpha$-stable process. When $\alpha\in(1,2]$, we know that a $1$-dimensional standard $\alpha$-stable process $\scL$ is $(\alpha,1)$-LND and continuous in probability. According to the above theorem, there exists a $\gamma>\frac{1}{2}$ such that the local time associated to $z$ (and thus also to $\scL$) is contained in $\cC^\gamma_T H^\kappa$ for any $\kappa<\frac{\alpha}{2}-\frac{1}{2}$, $\PP$-a.s. \item[\rm (ii)] Let $k(t)=e^{-at}$ with $a>0$. Then the Ornstein-Uhlenbeck L\'evy process $$z_t=\int_0^t e^{-a(t-s)}\dd \scL_s,\quad t\geq0$$ is $(\alpha,\zeta)$-LND for $\alpha\in(0,2]$ and $\zeta=1$, and continuous in probability. Hence there exists a $\gamma>\frac{1}{2}$ such that the process $z$ has a local time $L\in\cC^\gamma_T H^\kappa$ for any $\kappa<\frac{\alpha}{2}-\frac{d}{2}$, $\PP$-a.s. \item[\rm (iii)] Let $(z_t)_{t\in [0,T]}$ be a fractional $\alpha$-stable process as in Example \ref{fr}. Then $(z_t)_{t\in [0,T]}$ is continuous in probability, and there exists a $\gamma>\frac{1}{2}$ such that the local time $L$ associated to $z$ is contained in $\cC^\gamma_TH^\kappa$ for any $\kappa<\frac{1}{2H}-\frac{d}{2}$, $\PP$-a.s. Note that in this case one obtains the same regularity for the local time as one would for the fractional Brownian motion (see e.g. \cite{HarangPerkowski2020}). \item[\rm (iv)] Fix $T<1$ and let $k(t)=t^{-\frac{1}{\alpha}}\ln(\frac{1}{t})^{-p}$ for some $p>1$, and suppose $(\scL_t)_{t\in [0,T]}$ is a standard $\alpha$-stable process for some $\alpha\in (0,2]$. Let $(z_t)_{t\in [0,T]}$ be the Volterra-L\'evy process built from $k$ and $\scL$. It is readily checked with Lemma \ref{lem:cont in prob}, Example \ref{ex:standard alpha stable} and Example \ref{VK} that this process is continuous in probability. Moreover, note that in this case $z$ is $(\alpha,\zeta)$-LND for any $\zeta>0$. Thus for any $\gamma\in (0,1)$ the local time $L$ associated to $z$ is contained in $\cC^\gamma_T H^\kappa$ for any $\kappa\in \RR$, $\PP$-a.s. Furthermore, if $z$ is c\`adl\`ag, then the local time $L$ has compact support, $\PP$-a.s., and thus $L\in \cC^\gamma([0,T];\cD(\RR^d))$, $\PP$-a.s., where $\cD(\RR^d)$ denotes the space of test functions on $\RR^d$. This proves in particular Corollary \ref{cor: test}.
\end{itemize} \end{ex} \section{Regularization of ODEs perturbed by Volterra-L\'evy processes}\label{sec:regualrization by noise} With the knowledge of the spatio-temporal regularity of the local time associated to a Volterra-L\'evy process, we can solve additive SDEs whose drift coefficient is possibly distribution-valued. The goal of this section is to prove Theorem \ref{thm: main existence and uniqueness}. To this end, we recall some of the tools from the theory of non-linear Young integrals and the corresponding equations. The construction of such integrals and equations is by now well known (see e.g. \cite{Catellier2016, hu2017nonlinear} and, more recently, \cite{HarangPerkowski2020,galeati2020noiseless} for an overview), but for the sake of self-containedness we have included short versions of some proofs in the appendix. We also mention that conditions for existence and uniqueness of non-linear Young equations can be stated in more general terms than what is used here. We choose to work with a simple set of conditions in order to give a clear picture of the regularizing effect in SDEs driven by Volterra-L\'evy noise, rather than to strive for the full generality which could be accessible. More general conditions for existence and uniqueness of non-linear Young equations can, for example, be found in \cite{galeati2020noiseless}. \begin{lem}\label{lem: abstract young equations} Suppose $\Gamma:[0,T]\times \RR^d \rightarrow \RR^d$ is contained in $\cC^{\gamma}_T\cC^\kappa$ for some $\gamma\in (\frac{1}{2},1)$ and $\kappa> \frac{1}{\gamma}$, and satisfies the following inequalities for $(s,t)\in \Delta^T_2$ and $\xi,\tilde{\xi}\in \RR^d$: \begin{equation}\label{eq: conditions for ex and uni} \begin{aligned} {\rm (i)}&\qquad |\Gamma_{s,t}(\xi)|+|\nabla \Gamma_{s,t}(\xi)| \lesssim |t-s|^\gamma \\ {\rm (ii)} &\qquad |\Gamma_{s,t}(\xi)-\Gamma_{s,t}(\tilde{\xi})| \lesssim |t-s|^\gamma |\xi-\tilde{\xi}| \\ {\rm (iii)}& \qquad |\nabla \Gamma_{s,t}(\xi)-\nabla \Gamma_{s,t}(\tilde{\xi})|\lesssim |t-s|^\gamma |\xi-\tilde{\xi}|^{\kappa-1}. \end{aligned} \end{equation} Then for any $\xi\in \RR^d$ there exists a unique solution to the equation \begin{equation}\label{eq: general ODE} y_t=\xi+\int_0^t \Gamma_{\dd r}(y_r). \end{equation} Here the integral is interpreted as the non-linear Young integral described in Appendix \ref{app: nonlinear young equations}, \begin{equation*} \int_0^t \Gamma_{\dd r}(y_r)=\lim_{|\cP|\rightarrow 0} \sum_{[u,v]\in \cP} \Gamma_{u,v}(y_u), \end{equation*} where the limit is taken over partitions $\cP$ of $[0,t]$. \end{lem} \begin{proof} See the proof in Appendix \ref{app: nonlinear young equations}. \end{proof} From here on, all analysis is done pathwise. That is, we now consider a subset $\Omega'\subset \Omega$ of full measure such that for all $\omega\in \Omega'$ the local time $L(\omega)$ associated to a Volterra-L\'evy process is contained in $\cC_T^\gamma H^\kappa$, for $\gamma$ and $\kappa$ as given through Theorem \ref{thm: regualrity of local times associated to alpha Volterra process}. With a slight abuse of notation, we will write $L=L(\omega)$. \\ Before moving on to prove existence and uniqueness of ODEs perturbed by Volterra-L\'evy processes, we will need a technical proposition on the convolution of the local time with certain (possibly distributional) vector fields.
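Before stating it, we pause for a brief numerical illustration, which is not needed for the proofs. The solution of \eqref{eq: general ODE} is the limit of the one-step approximations $y_{t_{k+1}}=y_{t_k}+\Gamma_{t_k,t_{k+1}}(y_{t_k})$, in accordance with the Riemann sums defining the non-linear Young integral. The following {\it Python} sketch implements this scheme for the hypothetical smooth averaged field $\Gamma_{s,t}(x)=(t-s)\sin(x)$, which satisfies \eqref{eq: conditions for ex and uni} (with exponent $1$ in place of $\gamma$, and with $\kappa=2$); the field and the function names are ours, and the sketch is illustrative only.
\begin{verbatim}
import numpy as np

def Gamma(s, t, x):
    # Hypothetical smooth averaged field: Gamma_{s,t}(x) = (t - s) sin(x).
    return (t - s) * np.sin(x)

def solve_nonlinear_young(xi, T=1.0, n=1000):
    # One-step scheme y_{k+1} = y_k + Gamma_{t_k, t_{k+1}}(y_k), i.e. the
    # Riemann sums that define the non-linear Young integral above.
    ts = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = xi
    for k in range(n):
        y[k + 1] = y[k] + Gamma(ts[k], ts[k + 1], y[k])
    return ts, y

_, y1 = solve_nonlinear_young(xi=1.0, n=1000)
_, y2 = solve_nonlinear_young(xi=1.0, n=2000)
print(y1[-1], y2[-1])   # endpoints agree up to O(1/n)
\end{verbatim}
For this choice of $\Gamma$ the scheme is simply the explicit Euler method for $y^{\prime}=\sin(y)$; refining the mesh leaves the endpoint essentially unchanged, in accordance with the convergence of the Riemann sums above.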
\begin{prop}\label{prop: regularity of conv} Let $(z_t)_{t\in [0,T]}$ be a Volterra-L\'evy process which is continuous in probability and $(\alpha,\zeta)$-LND for some $\zeta\in (0,1]$, such that the associated local time $L$ is contained in $\cC^\gamma_T H^\kappa$ for some $\gamma>\frac{1}{2}$ and $\kappa <\frac{\alpha}{2\zeta}-\frac{d}{2}$. Suppose $b\in H^\beta$ for some $\beta\in \RR$. Then the following inequality holds for any $\theta<\beta+\kappa$ and $(s,t)\in \Delta^2_T$: \begin{equation}\label{reg of conv} \|b\ast \bar{L}_{s,t}\|_{\cC^{\theta}}\lesssim \|b\|_{H^\beta} \|L\|_{\cC^\gamma_TH^\kappa}|t-s|^\gamma, \qquad \PP-a.s. \end{equation} Here, $\bar{L}_t(x)=L_t(-x)$. \end{prop} \begin{proof} From Theorem \ref{thm: regualrity of local times associated to alpha Volterra process}, we know that $L_{s,t}\in H^\kappa$ for $\kappa<\frac{\alpha}{2\zeta}-\frac{d}{2}$; thus, an application of Young's convolution inequality reveals that \eqref{reg of conv} holds. \end{proof} A combination of Lemma \ref{lem: abstract young equations} and Proposition \ref{prop: regularity of conv} provides the existence and uniqueness of ODEs perturbed by $(\alpha,\zeta)$-LND Volterra-L\'evy processes. The following corollary and proposition can be seen as a proof of Theorem \ref{thm: main existence and uniqueness}. \begin{cor}[SDEs driven by stable Volterra processes]\label{cor: ex and uni sde} Let $(z_t)_{t\in [0,T]}$ be a Volterra-L\'evy process which is continuous in probability, and $(\alpha,\zeta)$-LND according to Definition \ref{def: zeta alpha Volterra kernel} for some $\zeta\in (0,\frac{\alpha}{d})$ and $\alpha\in (0,2]$. Suppose $b\in H^\beta$ for some $\beta \in \RR$ such that $\beta+\frac{\alpha}{2\zeta}-\frac{d}{2}\geq 2$. Then for any $\xi\in \RR^d$ there exists a unique solution $y\in \cC^\gamma_T(\RR^d)$ to the equation \begin{equation}\label{eq:particular equation} y_t=\xi+\int_0^t b\ast \bar{L}_{\dd r}(y_r),\qquad t\in [0,T]. \end{equation} Here the integral and the solution are interpreted pathwise in the sense of Lemma \ref{lem: abstract young equations} by setting $\Gamma_{s,t}(x):=b\ast \bar{L}_{s,t}(x)$, where we recall that $\bar{L}_t(x)=L_t(-x)$ and $L$ is the local time associated to $(z_t)_{t\in [0,T]}$. \end{cor} \begin{proof} By Proposition \ref{prop: regularity of conv}, we know that $b\ast \bar{L} \in \cC^\gamma_T\cC^{\theta}$ for any $\theta<\beta+\frac{\alpha}{2\zeta}-\frac{d}{2}$. Since $\beta+\frac{\alpha}{2\zeta}-\frac{d}{2}\geq 2$, setting $\Gamma_{s,t}(x):=b\ast \bar{L}_{s,t}(x)$, it follows directly that conditions {\rm (i)-(iii)} of Lemma \ref{lem: abstract young equations} are satisfied, and thus a unique solution to \eqref{eq:particular equation} exists. \end{proof} In addition to existence and uniqueness, the authors of \cite{HarangPerkowski2020} provided a general program to prove higher order differentiability of the flow mapping $\xi \mapsto y_t(\xi)$. We will here apply this program in order to show differentiability of flows associated to ODEs perturbed by sample paths of a Volterra-L\'evy process. It is well known that if $b\in C^k$ for some $k\geq 1$, then the flow $\xi\mapsto y_t(\xi)$, where $y$ is the solution to the ODE \begin{equation*} y_t =\xi+\int_0^t b(y_r)\dd r, \end{equation*} is $k$-times differentiable.
Translating this to the abstract framework of non-linear Young equations: let $y$ be the solution to \begin{equation*} y_t=\xi+\int_0^t \Gamma_{\dd r}(y_r),\qquad \xi\in \RR^d, \end{equation*} where $\Gamma\in \cC^\gamma_T\cC^\kappa$ for some $\kappa>\frac{1}{\gamma}$. Then the flow $\xi\mapsto y_t(\xi)$ is $(\kappa-1)$-times differentiable, as made precise in the proposition below. Recall that $\Gamma$ in our setting represents the convolution between the (possibly distributional) vector field $b$ and the local time associated to the irregular path of a Volterra-L\'evy process. We therefore provide a proposition highlighting the relationship between the regularity of the vector field $b$, the regularity of the local time associated to a Volterra-L\'evy process, and the differentiability of the flow. \begin{prop} Let $(z_t)_{t\in [0,T]}$ be a Volterra-L\'evy process taking values in $\RR^d$ which is continuous in probability, and $(\alpha,\zeta)$-LND for some $\alpha\in (0,2]$ and $\zeta\in(0,\frac{\alpha}{d})$. Suppose $b\in H^\beta$ for some $\beta\in \RR$ such that $\beta+\frac{\alpha}{2\zeta}-\frac{d}{2}\geq 1+n$ for some integer $n\geq 1$. Let $y(\xi)\in \cC^\gamma_T(\RR^d)$ denote the solution to \eqref{eq:particular equation} starting in $\xi\in \RR^d$. Then the flow map $\xi\mapsto y_\cdot(\xi)$ is $n$-times Fr\'echet differentiable. \end{prop} \begin{proof} This result for abstract Young equations was proven in \cite[Thm. 2]{HarangPerkowski2020}, but we give a short outline of the proof here. Set $\theta:=\beta+\frac{\alpha}{2\zeta}-\frac{d}{2}$; since $\theta\geq 1+n\geq 2$, there exists a unique solution to \eqref{eq:particular equation}. We prove the differentiability of the flow by induction, beginning with the existence of the first derivative. It is readily checked that the first derivative of the flow $\xi\mapsto y_\cdot(\xi)$ needs to satisfy the equation \begin{equation}\label{eq:gradient linear ODE} \nabla y_t(\xi)=\mathrm{Id} +\int_0^t \nabla \Gamma_{\dd r}(y_r(\xi)) \nabla y_r(\xi), \qquad {\rm for} \qquad t\in [0,T], \end{equation} where the integral is understood in the sense of the non-linear Young integral in Lemma \ref{lem: non linear young integral}, by setting $\Gamma^1_{s,t}(u_s)=\nabla \Gamma_{s,t}(y_s(\xi)) u_s$, where $u_s=\nabla y_s(\xi) \in \RR^{d\times d}$. Since \eqref{eq:gradient linear ODE} is a linear equation, existence and uniqueness can be verified by following the lines of the proof of Lemma \ref{lem: abstract young equations}. \end{proof}
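As a quick sanity check of \eqref{eq:gradient linear ODE} (again purely illustrative, and for the same hypothetical smooth field $\Gamma_{s,t}(x)=(t-s)\sin(x)$ as in the sketch above, whose spatial gradient is $(t-s)\cos(x)$), one may solve the equation together with its linearization and compare the result with a finite-difference approximation of the flow derivative:
\begin{verbatim}
import numpy as np

def Gamma(s, t, x):     # hypothetical smooth field, as before
    return (t - s) * np.sin(x)

def dGamma(s, t, x):    # its spatial gradient
    return (t - s) * np.cos(x)

def flow_and_derivative(xi, T=1.0, n=2000):
    # Solve y_{k+1} = y_k + Gamma(y_k) together with the linearized
    # equation u_{k+1} = u_k + dGamma(y_k) u_k; u approximates the
    # derivative of the flow xi -> y_T(xi).
    ts = np.linspace(0.0, T, n + 1)
    y, u = xi, 1.0
    for k in range(n):
        s, t = ts[k], ts[k + 1]
        y, u = y + Gamma(s, t, y), u + dGamma(s, t, y) * u
    return y, u

h = 1e-6
yT, u = flow_and_derivative(1.0)
yTh, _ = flow_and_derivative(1.0 + h)
print(u, (yTh - yT) / h)   # the two derivative estimates agree closely
\end{verbatim}
In this smooth toy case the two printed values agree to several digits, as expected from the case $n=1$ of the proposition above.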
\section{\label{Int}Introduction} Over the last two decades, the Aluthge transform for bounded operators on Hilbert space has attracted considerable attention. \ In this note, we set out to extend the Aluthge transform to commuting $n$-tuples of bounded operators. \ We identify two natural notions (toral and spherical) and study their basic properties. \ We then focus on $2$-variable weighted shifts, for which much can be said. \ Let $\mathcal{H}$ be a complex Hilbert space and let $\mathcal{B}(\mathcal{H})$ denote the algebra of bounded linear operators on $\mathcal{H}$. \ For $T\in \mathcal{B}(\mathcal{H})$, the polar decomposition of $T$ is $T \equiv V|T|$, where $V$ is a partial isometry with $\textrm{ker}\;V=\textrm{ker}\;T$ and $|T|:=\sqrt{T^{\ast }T}$. \ The \textit{Aluthge transform} of $T$ is the operator $\widetilde{T}:=|T|^{\frac{1}{2}}V|T|^{\frac{1}{2}}$ \cite{Alu}. \ This transform was first considered by A. Aluthge, in an effort to study $p$-hyponormal and $\mathrm{\log }$-hyponormal operators. \ Roughly speaking, the idea behind the Aluthge transform is to convert an operator into another operator which shares with the first one many spectral properties, but which is closer to being a normal operator. \ In recent years, the Aluthge transform has received substantial attention. \ I.B. Jung, E. Ko and C. Pearcy proved in \cite{JKP} that $T$ has a nontrivial invariant subspace if and only if $\widetilde{T}$ does. \ (Since every normal operator has nontrivial invariant subspaces, the Aluthge transform has a natural connection with the invariant subspace problem.) \ In \cite{JKP2}, I.B. Jung, E. Ko and C. Pearcy also proved that $T$ and $\widetilde{T}$ have the same spectrum. \ Moreover, in \cite{KiKo} (resp. \cite{Kim}) M.K. Kim and E. Ko (resp. F. Kimura) proved that $T$ has property $(\beta )$ if and only if $\widetilde{T}$ has property $(\beta )$. \ Finally, T. Ando proved in \cite{Ando} that $\left\| (T-\lambda )^{-1}\right\| \geq \left\| (\widetilde{T}-\lambda)^{-1}\right\| \;(\lambda \notin \sigma (T))$. \ (For additional results, see \cite{CJL} and \cite{Yam}.) For a unilateral weighted shift $W_{\alpha }\equiv \mathrm{shift}(\alpha _{0},\alpha _{1},\cdots )$, the Aluthge transform $\widetilde{W}_{\alpha }$ is also a unilateral weighted shift, given by \begin{equation} \widetilde{W}_{\alpha }\equiv \mathrm{shift}(\sqrt{\alpha _{0}\alpha _{1}},\sqrt{\alpha _{1}\alpha _{2}},\cdots )\text{ (see \cite{LLY}).} \label{Alu-hypo} \end{equation} It is easy to see that $W_{\alpha }$ is hyponormal if and only if $\alpha _{0}\leq \alpha _{1}\leq \cdots $. \ Thus, by (\ref{Alu-hypo}), if $W_{\alpha }$ is hyponormal, then the Aluthge transform $\widetilde{W}_{\alpha }$ of $W_{\alpha }$ is also hyponormal. \ However, the converse is not true in general. \ For example, if $W_{\alpha }\equiv \mathrm{shift}\left( \frac{1}{2},2,\frac{1}{2},2,\frac{1}{2},2,\cdots \right) $, then $W_{\alpha }$ is clearly not hyponormal but the Aluthge transform $\widetilde{W}_{\alpha } \equiv U_{+}$ is subnormal. \ (Here and in what follows, $U_{+}$ denotes the (unweighted) unilateral shift.) \ In \cite{LLY}, S.H. Lee, W.Y. Lee and the second-named author showed that for $k\geq 2$, the Aluthge transform, when acting on weighted shifts, need not preserve $k$-hyponormality. \ Finally, G. Exner proved in \cite{Ex} that the Aluthge transform of a subnormal weighted shift need not be subnormal.
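The weighted shift examples above are easily verified numerically. \ The following {\it Python} sketch (illustrative only; the helper names are ours) implements (\ref{Alu-hypo}) on a finite stretch of weights, and recovers both facts: $\mathrm{shift}\left(\frac{1}{2},2,\frac{1}{2},2,\cdots \right)$ is not hyponormal, while its Aluthge transform has all weights equal to $1$.
\begin{verbatim}
import numpy as np

def aluthge_weights(alpha):
    # Weights of the Aluthge transform of shift(alpha_0, alpha_1, ...):
    # the n-th new weight is sqrt(alpha_n * alpha_{n+1}).
    a = np.asarray(alpha, dtype=float)
    return np.sqrt(a[:-1] * a[1:])

def is_hyponormal(alpha):
    # A weighted shift is hyponormal iff its weights are nondecreasing.
    a = np.asarray(alpha, dtype=float)
    return bool(np.all(np.diff(a) >= 0))

alpha = np.array([0.5, 2.0] * 6)   # shift(1/2, 2, 1/2, 2, ...)
print(is_hyponormal(alpha))        # False
print(aluthge_weights(alpha))      # [1. 1. ... 1.]; the transform is U_+
\end{verbatim}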
In this article, we introduce two Aluthge transforms of commuting pairs of Hilbert space operators, with special emphasis on $2$-variable weighted shifts $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$. \ Since a priori there are several possible notions, we discuss two plausible definitions and their basic properties in Sections \ref{Sect3} and \ref{Sec2}. \ Our research will allow us to compare both definitions in terms of how well they generalize the $1$-variable notion. \ After discussing some basic properties of each Aluthge transform, we proceed to study both transforms in the case of $2$-variable weighted shifts. \ We consider topics such as preservation of joint hyponormality, norm continuity, and Taylor spectral behavior. For $i=1,2$, we consider the polar decomposition $T_{i}\equiv V_{i}\left\vert T_{i}\right\vert $, and we let \begin{equation} \widetilde{T_i}:=|T_{i}|^{\frac{1}{2}}V_{i}|T_{i}|^{\frac{1}{2}} \; \; \; (i=1,2) \label{Def-Alu} \end{equation} denote the classical ($1$-variable) Aluthge transform. \ The {\it toral} Aluthge transform of the pair $(T_{1},T_{2})$ is $\widetilde{(T_{1},T_{2})}:=(\widetilde{T_1},\widetilde{T_2})$. \ For a $2$-variable weighted shift $W_{(\alpha ,\beta )} \equiv (T_{1},T_{2})$, we denote the toral Aluthge transform of $W_{(\alpha ,\beta )}$ by $\widetilde{W}_{(\alpha ,\beta )}$. \ As we will see in Proposition \ref{commuting1}, the commutativity of $\widetilde{W}_{(\alpha ,\beta )}$ does not automatically follow from the commutativity of $W_{(\alpha ,\beta )}$; in fact, the necessary and sufficient condition to preserve commutativity is \begin{equation} \alpha _{(k_{1},k_{2}+1)}\alpha _{(k_{1}+1,k_{2}+1)}=\alpha _{(k_{1}+1,k_{2})}\alpha _{(k_{1},k_{2}+2)} \; \; (\text{for all }k_{1},k_{2}\geq 0). \label{alphacomm} \end{equation} Under this assumption, and in sharp contrast with the $1$-variable situation, it is possible to exhibit a {\it commuting subnormal} pair $W_{(\alpha ,\beta )}$ such that $\widetilde{W}_{(\alpha ,\beta )}$ is commuting and {\it not hyponormal}. \ As a matter of fact, in Theorem \ref{example100} we construct a class of subnormal $2$-variable weighted shifts $W_{(\alpha ,\beta )}$ whose cores are of tensor form, and for which the hyponormality of $\widetilde{W}_{(\alpha ,\beta )}$ can be described entirely by two parameters. \ As a result, we obtain a rather large class of subnormal $2$-variable weighted shifts with non-hyponormal toral Aluthge transforms. There is a second plausible definition of the Aluthge transform, which uses a joint polar decomposition. \ Assume that we have a decomposition of the form \begin{equation*} (T_{1},T_{2})\equiv\left( V_{1}P,V_{2}P\right) \text{,} \end{equation*} where $P:=\sqrt{T_{1}^{\ast }T_{1}+T_{2}^{\ast }T_{2}}$. \ Now, let \begin{equation} \widehat{(T_1,T_2)}:=\left( \sqrt{P}V_{1}\sqrt{P},\sqrt{P}V_{2}\sqrt{P}\right) \text{.} \label{Def-Alu1} \end{equation} We refer to $\widehat{\mathbf{T}}\equiv \widehat{(T_1,T_2)}$ as the {\it spherical} Aluthge transform of $\mathbf{T}\equiv (T_1,T_2)$. \ Even though $\widehat{T}_{1}=\sqrt{P}V_{1}\sqrt{P}$ is not the Aluthge transform of $T_{1}$, we observe in Section \ref{Sec2} that $Q:=\sqrt{V_{1}^{\ast }V_{1}+V_{2}^{\ast }V_{2}}$ is a (joint) partial isometry; for, $PQ^2P=P^2$, from which it follows that $Q$ is isometric on the range of $P$. \ We will prove in Section \ref{Sec2} that this particular definition of the Aluthge transform preserves commutativity. \ There is another useful aspect of the spherical Aluthge transform, which we now mention.
\ If we consider the fixed points of this transform acting on $2$-variable weighted shifts, then we obtain an appropriate generalization of the concept of quasinormality. \ Recall that a Hilbert space operator $T$ is said to be {\it quasinormal} if $T$ commutes with the positive factor $P$ in the polar decomposition $T\equiv VP$; equivalently, if $V$ commutes with $P$. \ It follows easily that $T$ is quasinormal if and only if $T=\widetilde{T}$, that is, if and only if $T$ is a fixed point for the Aluthge transform. \ In Section \ref{Spherquasi}, we prove that if a $2$-variable weighted shift $W_{(\alpha,\beta)}=(T_1,T_2)$ satisfies $W_{(\alpha,\beta)}=\left( \widehat{T}_{1},\widehat{T}_{2}\right)$, then $T_1^*T_1+T_2^*T_2$ is a scalar multiple of the identity; that is, up to a scalar multiple, $W_{(\alpha,\beta)}$ is a spherical isometry. \ It follows that we can then study some properties of the spherical Aluthge transform using well known results about spherical isometries. In this paper, we also focus on the following three basic problems. \begin{problem} \label{problem 1}Let $k\geq 1$ and assume that $W_{(\alpha ,\beta )}$ is $k$-hyponormal. \ Does it follow that the toral Aluthge transform $\widetilde{W}_{(\alpha ,\beta )}$ is $k$-hyponormal? \ What about the case of the spherical Aluthge transform $\widehat{W}_{(\alpha ,\beta )}$? \end{problem} \begin{problem} \label{problem 2} Is the toral Aluthge transform $\left( S,T\right) \rightarrow \left( \widetilde{S},\widetilde{T}\right) $ continuous in the uniform topology? \ Similarly, does continuity hold for the spherical Aluthge transform $\widehat{W}_{(\alpha ,\beta )}$? \end{problem} \begin{problem} \label{problem 3}Does the Taylor spectrum (resp. Taylor essential spectrum) of $\widetilde{W}_{(\alpha ,\beta )}$ equal that of $W_{(\alpha ,\beta )}$? \ What about the case of the spherical Aluthge transform $\widehat{W}_{(\alpha ,\beta )}$? \end{problem} \section{\label{Sect1}Notation and Preliminaries} \subsection{Subnormality and $k$-hyponormality} \ We say that $T\in \mathcal{B}(\mathcal{H})$ is \textit{normal} if $T^{\ast }T=TT^{\ast }$, \textit{quasinormal} if $T$ commutes with $T^{\ast }T$, \textit{subnormal} if $T=N|_{\mathcal{H}}$, where $N$ is normal and $N(\mathcal{H})\subseteq \mathcal{H}$, and \textit{hyponormal} if $T^{\ast }T\geq TT^{\ast }$. \ For $S,T\in \mathcal{B}(\mathcal{H})$, let $[S,T]:=ST-TS$. \ We say that an $n$-tuple $\mathbf{T} \equiv (T_{1},\cdots ,T_{n})$ of operators on $\mathcal{H}$ is (jointly) \textit{hyponormal} if the operator matrix \begin{equation*} \lbrack \mathbf{T}^{\ast },\mathbf{T}]:=\left( \begin{array}{llll} \lbrack T_{1}^{\ast },T_{1}] & [T_{2}^{\ast },T_{1}] & \cdots & [T_{n}^{\ast },T_{1}] \\ \lbrack T_{1}^{\ast },T_{2}] & [T_{2}^{\ast },T_{2}] & \cdots & [T_{n}^{\ast },T_{2}] \\ \text{ \thinspace \thinspace \quad }\vdots & \text{ \thinspace \thinspace \quad }\vdots & \ddots & \text{ \thinspace \thinspace \quad }\vdots \\ \lbrack T_{1}^{\ast },T_{n}] & [T_{2}^{\ast },T_{n}] & \cdots & [T_{n}^{\ast },T_{n}] \end{array}\right) \end{equation*} is positive on the direct sum of $n$ copies of $\mathcal{H}$ (cf. \cite{Ath}, \cite{CMX}). \ For instance, if $n=2$, \begin{equation*} \lbrack \mathbf{T}^{\ast },\mathbf{T}\rbrack = \left( \begin{array}{ll} \lbrack T_{1}^{\ast },T_{1}] & [T_{2}^{\ast },T_{1}] \\ \lbrack T_{1}^{\ast },T_{2}] & [T_{2}^{\ast },T_{2}] \end{array}\right) \text{.} \end{equation*} For $k\geq 1$, $T$ is $k$\textit{-hyponormal} if $\left( I,T,\cdots ,T^{k}\right) $ is (jointly) hyponormal.
\ The Bram-Halmos characterization of subnormality (\cite[III.1.9]{Con}) can be paraphrased as follows: \ $T$ is subnormal if and only if $T$ is $k$-hyponormal for every $k\geq 1$ (\cite[Proposition 1.9]{CMX}). \ The $n$-tuple $\mathbf{T}\equiv (T_{1},T_{2},\cdots ,T_{n})$ is said to be \textit{normal} if $\mathbf{T}$ is commuting and each $T_{i}$ is normal, and $\mathbf{T}$ is \textit{subnormal} if $\mathbf{T}$ is the restriction of a normal $n$-tuple to a common invariant subspace. \ In particular, a commuting pair $\mathbf{T}\equiv (T_{1},T_{2})$ is said to be $k$\textit{-hyponormal} $(k\geq 1)$ \cite{CLY1} if \begin{equation*} \mathbf{T}(k):=(T_{1},T_{2},T_{1}^{2},T_{2}T_{1},T_{2}^{2},\cdots ,T_{1}^{k},T_{2}T_{1}^{k-1},\cdots ,T_{2}^{k}) \end{equation*} is hyponormal, or equivalently \begin{equation*} \lbrack \mathbf{T}(k)^{\ast },\mathbf{T}(k)]=([(T_{2}^{q}T_{1}^{p})^{\ast },T_{2}^{m}T_{1}^{n}])_{_{1\leq p+q\leq k}^{1\leq n+m\leq k}}\geq 0. \end{equation*} Clearly, for $T \in \mathcal{B}(\mathcal{H})$ we have $$ \textrm{normal} \Rightarrow \textrm{quasinormal} \Rightarrow \textrm{subnormal} \Rightarrow k \textrm{-hyponormal} \Rightarrow \textrm{hyponormal}. $$ As one might expect, there is a version of the Bram-Halmos Theorem in several variables, proved in \cite{CLY1}: a commuting pair which is $k$-hyponormal for every $k \ge 1$ is necessarily subnormal. \subsection{Unilateral weighted shifts} \ For $\alpha \equiv \{\alpha _{n}\}_{n=0}^{\infty }$ a bounded sequence of positive real numbers (called \textit{weights}), let $W_{\alpha }:\ell ^{2}(\mathbb{Z}_{+})\rightarrow \ell ^{2}(\mathbb{Z}_{+})$ be the associated \textit{unilateral weighted shift}, defined by $W_{\alpha }e_{n}:=\alpha _{n}e_{n+1}\;($all $n\geq 0)$, where $\{e_{n}\}_{n=0}^{\infty }$ is the canonical orthonormal basis in $\ell ^{2}(\mathbb{Z}_{+})$. \ We will often write $\mathrm{shift}(\alpha _{0},\alpha _{1},\cdots )$ to denote the weighted shift $W_{\alpha }$ with weight sequence $\{\alpha _{n}\}_{n=0}^{\infty }$. \ As usual, the (unweighted) unilateral shift will be denoted by $U_{+}:=\mathrm{shift} (1,1,\cdots)$. \ The \textit{moments} of $W_{\alpha }$ are given by \begin{equation*} \gamma _{k}\equiv \gamma _{k}(W_{\alpha }):=\left\{ \begin{array}{cc} 1 & \text{if }k=0 \\ \alpha _{0}^{2}\alpha _{1}^{2}\cdots \alpha _{k-1}^{2} & \text{if }k>0. \end{array}\right. \end{equation*} \subsection{$2$-variable weighted shifts} \ Similarly, consider double-indexed positive bounded sequences $\alpha _{\mathbf{k}},\beta _{\mathbf{k}}\in \ell ^{\infty }(\mathbb{Z}_{+}^{2})$, $\mathbf{k}\equiv (k_{1},k_{2})\in \mathbb{Z}_{+}^{2}:=\mathbb{Z}_{+}\times \mathbb{Z}_{+}$, and let $\ell ^{2}(\mathbb{Z}_{+}^{2})$\ be the Hilbert space of square-summable complex sequences indexed by $\mathbb{Z}_{+}^{2}$. We define the $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$\ by \begin{equation*} T_{1}e_{\mathbf{k}}:=\alpha _{\mathbf{k}}e_{\mathbf{k}+\varepsilon _{1}}\text{ and }T_{2}e_{\mathbf{k}}:=\beta _{\mathbf{k}}e_{\mathbf{k}+\varepsilon _{2}}, \end{equation*} where $\mathbf{\varepsilon }_{1}:=(1,0)$ and $\mathbf{\varepsilon }_{2}:=(0,1)$ (see Figure \ref{Figure 1}(i)). \ Clearly, \begin{equation} T_{1}T_{2}=T_{2}T_{1}\Longleftrightarrow \alpha _{\mathbf{k}+\varepsilon _{2}}\beta _{\mathbf{k}}=\beta _{\mathbf{k}+\varepsilon _{1}}\alpha _{\mathbf{k}} \;(\text{all }\mathbf{k}\in \mathbb{Z}_{+}^{2}). \label{commuting} \end{equation} In an entirely similar way, one can define multivariable weighted shifts.
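As a concrete illustration of the commutativity condition (\ref{commuting}), the following {\it Python} sketch (ours, and illustrative only; working with a finite truncation of the weight diagram is of course an approximation) checks (\ref{commuting}) on a grid, and computes the moments $\gamma_{\mathbf{k}}$ discussed below along the path $(0,0)\rightarrow (k_1,0)\rightarrow (k_1,k_2)$.
\begin{verbatim}
import numpy as np

def is_commuting(alpha, beta):
    # Check alpha_{k+e2} beta_k == beta_{k+e1} alpha_k at every point of
    # a finite (n x n) truncation of the weight diagram; alpha and beta
    # are 2-D arrays indexed by (k1, k2).
    lhs = alpha[:-1, 1:] * beta[:-1, :-1]
    rhs = beta[1:, :-1] * alpha[:-1, :-1]
    return bool(np.allclose(lhs, rhs))

def moment(alpha, beta, k1, k2):
    # gamma_k along the increasing path (0,0) -> (k1,0) -> (k1,k2).
    g = np.prod(alpha[:k1, 0] ** 2)
    return g * np.prod(beta[k1, :k2] ** 2)

n = 6
sigma = np.sqrt((np.arange(n) + 1.0) / (np.arange(n) + 2.0))  # Bergman shift
alpha = np.tile(sigma[:, None], (1, n))   # alpha_{(k1,k2)} = sigma_{k1}
beta = np.ones((n, n))                    # beta_{(k1,k2)}  = 1
print(is_commuting(alpha, beta))          # True
print(moment(alpha, beta, 2, 1))          # alpha_00^2 alpha_10^2 = 1/3
\end{verbatim}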
\ Trivially, a pair of unilateral weighted shifts $W_{\sigma }$ and $W_{\tau }$ gives rise to a $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv \mathbf{T}\equiv (T_{1},T_{2})$, if we let $\alpha _{(k_{1},k_{2})}:=\sigma _{k_{1}}$ and $\beta _{(k_{1},k_{2})}:=\tau _{k_{2}}\;($all $k_{1},k_{2}\in \mathbb{Z}_{+})$. \ In this case, $W_{(\alpha ,\beta )}$ is subnormal (resp. hyponormal) if and only if $T_{1}$ and $T_{2}$ are as well; in fact, under the canonical identification of $\ell ^{2}(\mathbb{Z}_{+}^{2})$ with $\ell ^{2}(\mathbb{Z}_{+})\bigotimes \ell ^{2}(\mathbb{Z}_{+})$, we have $T_{1}\cong I\bigotimes W_{\sigma }$ and $T_{2}\cong W_{\tau }\bigotimes I$, and $W_{(\alpha ,\beta )}$ is also doubly commuting. \ For this reason, we do not focus attention on shifts of this type, and use them only when the above-mentioned triviality is desirable or needed. \setlength{\unitlength}{1mm} \psset{unit=1mm} \begin{figure}[th] \begin{center} \begin{picture}(135,70) \psline{->}(20,20)(65,20) \psline(20,40)(63,40) \psline(20,60)(63,60) \psline{->}(20,20)(20,65) \psline(40,20)(40,63) \psline(60,20)(60,63) \put(12,16){\footnotesize{$(0,0)$}} \put(37,16){\footnotesize{$(1,0)$}} \put(57,16){\footnotesize{$(2,0)$}} \put(29,21){\footnotesize{$\alpha_{00}$}} \put(49,21){\footnotesize{$\alpha_{10}$}} \put(61,21){\footnotesize{$\cdots$}} \put(29,41){\footnotesize{$\alpha_{01}$}} \put(49,41){\footnotesize{$\alpha_{11}$}} \put(61,41){\footnotesize{$\cdots$}} \put(29,61){\footnotesize{$\alpha_{02}$}} \put(49,61){\footnotesize{$\alpha_{12}$}} \put(61,61){\footnotesize{$\cdots$}} \psline{->}(35,14)(50,14) \put(42,10){$\rm{T}_1$} \psline{->}(10,35)(10,50) \put(4,42){$\rm{T}_2$} \put(11,40){\footnotesize{$(0,1)$}} \put(11,60){\footnotesize{$(0,2)$}} \put(20,30){\footnotesize{$\beta_{00}$}} \put(20,50){\footnotesize{$\beta_{01}$}} \put(21,61){\footnotesize{$\vdots$}} \put(40,30){\footnotesize{$\beta_{10}=\frac{\beta_{00}\alpha_{01}}{\alpha_{00}}$}} \put(40,50){\footnotesize{$\beta_{11}=\frac{\beta_{01}\alpha_{02}}{\alpha_{01}}$}} \put(41,61){\footnotesize{$\vdots$}} \put(61,30){\footnotesize{$\vdots$}} \put(61,50){\footnotesize{$\vdots$}} \put(15,8){(i)} \put(85,8){(ii)} \psline{->}(90,14)(105,14) \put(97,9){$\widetilde{T}_{1}$} \psline{->}(72,35)(72,50) \put(67,42){$\widetilde{T}_{2}$} \psline{->}(75,20)(120,20) \psline(75,40)(118,40) \psline(75,60)(118,60) \psline{->}(75,20)(75,65) \psline(95,20)(95,63) \psline(115,20)(115,63) \put(71,16){\footnotesize{$(0,0)$}} \put(91,16){\footnotesize{$(1,0)$}} \put(111,16){\footnotesize{$(2,0)$}} \put(80,21){\footnotesize{$\sqrt{\alpha_{00}\alpha_{10}}$}} \put(100,21){\footnotesize{$\sqrt{\alpha_{10}\alpha_{20}}$}} \put(116,21){\footnotesize{$\cdots$}} \put(80,41){\footnotesize{$\sqrt{\alpha_{01}\alpha_{11}}$}} \put(100,41){\footnotesize{$\sqrt{\alpha_{11}\alpha_{21}}$}} \put(116,41){\footnotesize{$\cdots$}} \put(80,61){\footnotesize{$\sqrt{\alpha_{02}\alpha_{12}}$}} \put(100,61){\footnotesize{$\sqrt{\alpha_{12}\alpha_{22}}$}} \put(116,61){\footnotesize{$\cdots$}} \put(75,30){\footnotesize{$\sqrt{\beta_{00}\beta_{01}}$}} \put(75,50){\footnotesize{$\sqrt{\beta_{01}\beta_{02}}$}} \put(76,61){\footnotesize{$\vdots$}} \put(95,30){\footnotesize{$\sqrt{\frac{\beta_{00}\beta_{01}\alpha_{02}}{\alpha_{00}}}$}} \put(95,50){\footnotesize{$\sqrt{\frac{\beta_{01}\beta_{02}\alpha_{03}}{\alpha_{01}}}$}} \put(96,61){\footnotesize{$\vdots$}} \put(116,26){\footnotesize{$\vdots$}} \put(116,50){\footnotesize{$\vdots$}} \end{picture} \end{center} \caption{Weight diagram of a commutative
$2$-variable weighted shift $W_{(\protect\alpha ,\protect\beta )}\equiv (T_{1},T_{2})$ and weight diagram of its toral Aluthge transform $\widetilde{W}_{(\protect\alpha ,\protect\beta )}\equiv \widetilde{(T_{1},T_{2})}\equiv (\widetilde{T}_{1},\widetilde{T}_{2})$, respectively. \ Observe that the commutativity of $\widetilde{W}_{(\protect\alpha ,\protect\beta )}$ requires (\ref{alphacomm}).} \label{Figure 1} \end{figure} \subsection{Moments and subnormality} \ Given $\mathbf{k}\in \mathbb{Z}_{+}^{2}$, the \textit{moments} of $W_{(\alpha ,\beta )}$ are \begin{eqnarray*} \gamma _{\mathbf{k}} &\equiv &\gamma _{\mathbf{k}}(W_{(\alpha ,\beta )}) \\ &:=& \begin{cases} 1, & \text{if }k_{1}=0\text{ and }k_{2}=0 \\ \alpha _{(0,0)}^{2}\cdots \alpha _{(k_{1}-1,0)}^{2}, & \text{if }k_{1}\geq 1\text{ and }k_{2}=0 \\ \beta _{(0,0)}^{2}\cdots \beta _{(0,k_{2}-1)}^{2}, & \text{if }k_{1}=0\text{ and }k_{2}\geq 1 \\ \alpha _{(0,0)}^{2}\cdots \alpha _{(k_{1}-1,0)}^{2}\beta _{(k_{1},0)}^{2}\cdots \beta _{(k_{1},k_{2}-1)}^{2}, & \text{if }k_{1}\geq 1\text{ and }k_{2}\geq 1. \end{cases} \end{eqnarray*} We remark that, due to the commutativity condition (\ref{commuting}), $\gamma _{\mathbf{k}}$ can be computed using any nondecreasing path from $(0,0)$ to $(k_{1},k_{2})$. \ We now recall a well known characterization of subnormality for multivariable weighted shifts \cite{JeLu}, due to C. Berger (cf. \cite[III.8.16]{Con}) and independently established by Gellar and Wallen \cite{GeWa} in the $1$-variable case: $W_{(\alpha,\beta)} \equiv (T_{1},T_{2})$ admits a commuting normal extension if and only if there is a probability measure $\mu $ (which we call the \textit{Berger measure} of $W_{(\alpha,\beta)}$) defined on the $2$-dimensional rectangle $R=[0,a_{1}]\times \lbrack 0,a_{2}]$ (where $a_{i}:=\left\Vert T_{i}\right\Vert ^{2}$) such that \begin{equation*} \gamma _{\mathbf{k}}(W_{(\alpha ,\beta )})=\int_{R}t^{\mathbf{k}}d\mu (s,t):=\int_{R}s^{k_{1}}t^{k_{2}}d\mu (s,t)\text{, for all }\mathbf{k}\in \mathbb{Z}_{+}^{2}. \end{equation*} \ For $i\geq 1$, we let $\mathcal{L}_{i}:=\bigvee \{e_{k}:k\geq i\}$ denote the invariant subspace obtained by removing the first $i$ vectors in the canonical orthonormal basis of $\ell ^{2}(\mathbb{Z}_{+})$; we also let $\mathcal{L}:=\mathcal{L}_{1}$. \ In the $1$-variable case, if $W_{\alpha }$ is subnormal with Berger measure $\xi _{\alpha }$, then the Berger measure of $W_{\alpha }|_{\mathcal{L}_{i}}$ is $d\xi _{\alpha }|_{\mathcal{L}_{i}}(s):=\frac{s^{i}}{\gamma _{i}\left( W_{\alpha }\right) }d\xi _{\alpha }(s)$, where $W_{\alpha }|_{\mathcal{L}_{i}}$ denotes the restriction of $W_{\alpha }$ to $\mathcal{L}_{i}$. \ As above, $U_{+}$ is the (unweighted) unilateral shift, and for $0<a<1$ we let $S_{a}:=\mathrm{shift}(a,1,1,\cdots )$. \ Let $\delta _{p}$ denote the point-mass probability measure with support the singleton set $\{p\}$. \ Observe that $U_{+}$ and $S_{a}$ are subnormal, with Berger measures $\delta _{1}$ and $(1-a^{2})\delta _{0}+a^{2}\delta _{1} $, respectively. \subsection{Taylor spectra} \ We conclude this section with some terminology needed to describe the Taylor and Taylor essential spectra of commuting $n$-tuples of operators on a Hilbert space.
\ Let $\Lambda \equiv \Lambda _{n}[e]$ be the \textit{complex exterior algebra} on $n$ generators $e_{1},\ldots ,e_{n}$ with identity $e_{0}\equiv 1$, multiplication denoted by $\wedge $ (wedge product) and complex coefficients, subject to the collapsing property $e_{i}\wedge e_{j}+e_{j}\wedge e_{i}=0$ $\left( 1\leq i,j\leq n\right) $. \ If one declares $\left\{ e_{I}\equiv e_{i_{1}}\wedge \ldots \wedge e_{i_{k}}:I=\{i_{1}<\cdots <i_{k}\}\subseteq \{1,\ldots ,n\}\right\} $ to be an orthonormal basis, the exterior algebra becomes a Hilbert space with the canonical inner product, i.e., $\left\langle e_{I},e_{J}\right\rangle :=0$ if $I\neq J$, and $\left\langle e_{I},e_{J}\right\rangle :=1$ if $I=J$. \ It also admits an orthogonal decomposition $\Lambda =\oplus _{i=0}^{n}\Lambda ^{i}$ with $\Lambda ^{i}\wedge \Lambda ^{k}\subset \Lambda ^{i+k}$. \ Moreover, $\dim \Lambda ^{k}=\binom{n}{k}=\frac{n!}{k!(n-k)!}$. \ Let $E_{i}:\Lambda \rightarrow \Lambda $ denote the \textit{creation operator}, given by $\xi \longmapsto e_{i}\wedge \xi $ \; ($i=1,\cdots,n$). \ We recall that $E_{i}^{\ast }E_{j}+E_{j}E_{i}^{\ast }=\delta _{ij}$ and that each $E_{i}$ is a partial isometry ($i,j=1,\cdots,n$). \ Consider a Hilbert space $\mathcal{H}$ and set $\Lambda \left( \mathcal{H}\right) :=\oplus _{i=0}^{n}\mathcal{H}\otimes _{\mathbb{C}}\Lambda ^{i}$.\ \ For a commuting $n$-tuple $\mathbf{T} \equiv \left( T_{1},\ldots ,T_{n}\right)$ of bounded operators on $\mathcal{H}$, define $$ D_{\mathbf{T}}:\Lambda \left( \mathcal{H}\right) \rightarrow \Lambda \left( \mathcal{H}\right) \text{ by }D_{\mathbf{T}}\left( x\otimes \xi \right) =\sum_{i=1}^{n}T_{i}x\otimes e_{i}\wedge \xi \text{.} $$ Then $D_{\mathbf{T}}\circ D_{\mathbf{T}}=0$, so $\textrm{Ran}\,D_{\mathbf{T}}\subseteq \textrm{Ker}\,D_{\mathbf{T}}$. \ This naturally leads to a cochain complex, called the \textit{Koszul complex} $K(\mathbf{T},\mathcal{H})$ associated to $\mathbf{T}$ on $\mathcal{H}$, as follows: $$ K(\mathbf{T},\mathcal{H}):0 \overset{0}{\rightarrow } \mathcal{H}\otimes \wedge ^{0} \overset{D_{\mathbf{T}}^{0}}{\rightarrow } \mathcal{H}\otimes \wedge ^{1} \overset{D_{\mathbf{T}}^{1}}{\rightarrow } \cdots \overset{D_{\mathbf{T}}^{n-1}}{\rightarrow } \mathcal{H}\otimes \wedge ^{n} \overset{D_{\mathbf{T}}^{n}\equiv 0}{\rightarrow } 0\text{,} $$ where $D_{\mathbf{T}}^{i}$ denotes the restriction of $D_{\mathbf{T}}$ to the subspace $\mathcal{H}\otimes \wedge ^{i}$. \ We define $\mathbf{T}$ to be \textit{invertible} in case its associated Koszul complex $K(\mathbf{T},\mathcal{H})$ is exact. \ Thus, we can define the Taylor spectrum $\sigma _{T}(\mathbf{T})$ of $\mathbf{T}$ as follows: \begin{equation*} \sigma _{T}(\mathbf{T}):=\left\{ (\lambda _{1},\cdots ,\lambda _{n})\in \mathbb{C}^{n}:K\left( \left( T_{1}-\lambda _{1},\ldots ,T_{n}-\lambda _{n}\right) ,\text{ }\mathcal{H}\right) \text{ is not exact}\right\} . \end{equation*} \ J. L. Taylor showed that, if $\mathcal{H} \neq \{0\}$, then $\sigma _{T}(\mathbf{T})$ is a nonempty, compact subset of the polydisc of multiradius $r(\mathbf{T}):=(r(T_{1}),\cdots ,r(T_{n}))$, where $r(T_{i})$ is the spectral radius of $T_{i}$ \; ($i=1,\cdots,n$) (\cite{Tay1}, \cite{Tay2}). \ For additional facts about this notion of joint spectrum, the reader is referred to \cite{Cu1}, \cite{Appl} and \cite{Cu3}. \bigskip \section{\label{Sect3}The Toral Aluthge Transform} We will now gather several well known auxiliary results which are needed for the proofs of the main results of this section.
\ We begin with a criterion for the $k$-hyponormality $\left( k\geq 1\right) $ of $2$-variable weighted shifts. \ But first we need to describe concretely the toral Aluthge transform of a $2$-variable weighted shift, and the necessary and sufficient condition that guarantees its commutativity. \begin{lemma} \label{CartAlu} Let $W_{(\alpha ,\beta )} \equiv \left(T_{1},T_{2}\right)$ be a $2$-variable weighted shift. \ Then $$ \widetilde{T}_{1}e_{\mathbf{k}}=\sqrt{\alpha_{\mathbf{k}}\alpha_{\mathbf{k}+\mathbf{\varepsilon}_{1}}}e_{\mathbf{k}+\mathbf{\varepsilon}_{1}} $$ and $$ \widetilde{T}_{2}e_{\mathbf{k}}=\sqrt{\beta_{\mathbf{k}}\beta_{\mathbf{k}+\mathbf{\varepsilon}_{2}}}e_{\mathbf{k}+\mathbf{\varepsilon}_{2}} $$ for all $\mathbf{k} \in \mathbb{Z}_+^2$. \end{lemma} \begin{proof} Straightforward from (\ref{Def-Alu}). \end{proof} In the following result we prove that the commutativity of $\left( \widetilde{T}_{1},\widetilde{T}_{2}\right)$ requires a condition on the weight sequences. \subsection{Commutativity of the toral Aluthge transform} \begin{proposition} \label{commuting1} Let $W_{(\alpha ,\beta )}$ be a commuting $2$-variable weighted shift, with weight diagram given by Figure \ref{Figure 1}(i). \ Then \begin{eqnarray} \widetilde{W}_{(\alpha ,\beta )} &\equiv &\left( \widetilde{T}_{1},\widetilde{T}_{2}\right) \text{ is commuting} \nonumber \\ &\Longleftrightarrow &\alpha _{\mathbf{k+}\varepsilon _{2}}\alpha _{\mathbf{k}+\varepsilon _{1}+\varepsilon _{2}}=\alpha_{\mathbf{k+}\varepsilon _{1}}\alpha _{\mathbf{k}+2\varepsilon _{2}}\label{prop1eq} \end{eqnarray} for all $\mathbf{k} \in \mathbb{Z}_+^2$. \end{proposition} \begin{proof} Let $\mathbf{k} \in \mathbb{Z}_+^2$; by Lemma \ref{CartAlu}, \begin{eqnarray} \widetilde{T}_{2}\widetilde{T}_{1}e_{\mathbf{k}}&=&\sqrt{\alpha_{\mathbf{k}}\alpha_{\mathbf{k}+\varepsilon_{1}}\beta_{\mathbf{k}+\varepsilon_{1}}\beta_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}} \nonumber \\ &=&\sqrt{(\alpha_{\mathbf{k}}\beta_{\mathbf{k}+\varepsilon_{1}})\alpha_{\mathbf{k}+\varepsilon_{1}}\beta_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}} \nonumber \\ &=&\sqrt{(\beta_{\mathbf{k}}\alpha_{\mathbf{k}+\varepsilon_{2}})\alpha_{\mathbf{k}+\varepsilon_{1}}\beta_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}} \; \; (\textrm{by } (\ref{commuting})) \nonumber \\ &=&\sqrt{\beta_{\mathbf{k}}\alpha_{\mathbf{k}+\varepsilon_{1}}(\beta_{\mathbf{k}+\varepsilon_{2}}\alpha_{\mathbf{k}+2\varepsilon_{2}})}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}} \; \; (\textrm{again by } (\ref{commuting})) \nonumber \\ &=&\sqrt{\beta_{\mathbf{k}}\beta_{\mathbf{k}+\varepsilon_{2}}(\alpha_{\mathbf{k}+\varepsilon_{1}}\alpha_{\mathbf{k}+2\varepsilon_{2}})}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}. \label{eqq1} \end{eqnarray} On the other hand, \begin{eqnarray} \widetilde{T}_{1}\widetilde{T}_{2}e_{\mathbf{k}}&=&\sqrt{\beta_{\mathbf{k}}\beta_{\mathbf{k}+\varepsilon_{2}}\alpha_{\mathbf{k}+\varepsilon_{2}}\alpha_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}}e_{\mathbf{k}+\varepsilon_{1}+\varepsilon_{2}}. \label{eqq2} \end{eqnarray} From (\ref{eqq1}) and (\ref{eqq2}) it follows that $\widetilde{T}_{1}\widetilde{T}_{2}=\widetilde{T}_{2}\widetilde{T}_{1}$ if and only if $$ \alpha _{\mathbf{k+}\varepsilon _{2}}\alpha _{\mathbf{k}+\varepsilon _{1}+\varepsilon _{2}}=\alpha_{\mathbf{k+}\varepsilon _{1}}\alpha _{\mathbf{k}+2\varepsilon _{2}}, $$ as desired.
\end{proof} \begin{remark} \label{Re 1} By Proposition \ref{commuting1} and the commutativity condition for $W_{(\alpha ,\beta )}$, it is straightforward to prove that (\ref{prop1eq}) is equivalent to \begin{equation} \beta _{\mathbf{k+}\varepsilon _{1}}\beta _{\mathbf{k}+\varepsilon _{1}+\varepsilon _{2}}=\beta_{\mathbf{k+}\varepsilon _{2}}\beta _{\mathbf{k}+2\varepsilon _{1}} \label{equ3} \end{equation} for all $\mathbf{k} \in \mathbb{Z}_+^2$. \; \qed \end{remark} \begin{lemma} (\cite{CLY1})\label{khypo} Let $W_{(\alpha ,\beta )}$ be a commuting $2$-variable weighted shift. \ Then the following are equivalent:\newline (i) $\ W_{(\alpha ,\beta )}$ is $k$-hyponormal;\newline (ii) $\ M_{\mathbf{u}}(k)\left( W_{(\alpha ,\beta )}\right) :=\left( \gamma _{\mathbf{u}+(n,m)+(p,q)}\left( W_{(\alpha ,\beta )}\right) \right) _{_{0\leq p+q\leq k}^{0\leq n+m\leq k}}\geq 0$ for all $\mathbf{u}\in \mathbb{Z}_{+}^{2}$. \end{lemma} We recall that $\mathcal{M}_{i}$ $\left( \text{resp. }\mathcal{N}_{j}\right) $ is the subspace of $\ell ^{2}(\mathbb{Z}_{+}^{2})$ spanned by the canonical orthonormal basis vectors associated to indices $\mathbf{k}=(k_{1},k_{2})$ with $k_{1}\geq 0$ and $k_{2}\geq i$ (resp. $k_{1}\geq j$ and $k_{2}\geq 0$). \ For simplicity, we write $\mathcal{M}=\mathcal{M}_1$ and $\mathcal{N}=\mathcal{N}_1$. \ The \textit{core} $c(W_{(\alpha ,\beta )})$ of $W_{(\alpha ,\beta )}$ is the restriction of $W_{(\alpha ,\beta )}$ to the invariant subspace $\mathcal{M}\cap \mathcal{N}$. \ A $2$-variable weighted shift $W_{(\alpha ,\beta )}$ is said to be of \textit{tensor form} if it is of the form $(I\otimes W_{\sigma },W_{\tau }\otimes I)$ for suitable $1$-variable weight sequences $\sigma$ and $\tau$. \ We also let $$ \mathcal{TC}:=\{W_{(\alpha ,\beta )}: c(W_{(\alpha ,\beta )}) \textrm{ is of tensor form} \}. $$ \begin{proposition} \label{propscaling} (cf. \cite{KimYoon}) \ Let $W_{(\alpha ,\beta )}\equiv \left( T_{1},T_{2}\right)$ be a commuting $2$-variable weighted shift. \ Then, for $a,b>0$ and $k \ge 1$ we have \begin{equation*} \left( T_{1},T_{2}\right) \text{ is } k \text{-hyponormal} \Longleftrightarrow \left( aT_{1},bT_{2}\right) \text{ is } k \text{-hyponormal}. \end{equation*} \end{proposition} \subsection{Hyponormality is not preserved under the toral Aluthge transform} \label{subsec31} \ As we observed in the Introduction, the $1$-variable Aluthge transform leaves the class of hyponormal weighted shifts invariant. \ In this subsection we will show that the same is not true of the toral Aluthge transform acting on $2$-variable weighted shifts. \ To see this, consider the commuting $2$-variable weighted shift $W_{(\alpha ,\beta )}$ given by Figure \ref{Figure 2ex}(ii). \ That is, $W_{(\alpha ,\beta )}$ has a symmetric weight diagram, has a core of tensor form (with Berger measure $\xi \times \xi$), and has zeroth and first rows given by backward extensions of a weighted shift whose Berger measure is $\xi$; we will denote those backward extensions by $[x_0,\xi]$ and $[a,\xi]$, respectively. \ Also, we denote by $\omega_0,\omega_1,\cdots$ the weight sequence associated with $\xi$. \ Since we wish to characterize the subnormality of $W_{(\alpha ,\beta )}$, we assume that $[x_0,\xi]$ and $[a,\xi]$ are subnormal, which requires that $\frac{1}{s} \in L^1(\xi)$. \ Let $\rho:=\int{\frac{1}{s} d\xi(s)}<\infty$. \ We recall the following result from \cite{CuYo1}.
\setlength{\unitlength}{1mm} \psset{unit=1mm} \begin{figure}[th] \begin{center} \begin{picture}(120,40) \pspolygon*[linecolor=lightgray](25,16)(57,16)(57,38)(25,38) \psline{->}(10,6)(58,6) \psline(10,16)(57,16) \psline(10,26)(57,26) \psline(10,36)(57,36) \psline{->}(10,6)(10,38) \psline(25,6)(25,37) \psline(40,6)(40,37) \psline(55,6)(55,37) \put(2.3,3.2){\footnotesize{$(0,0)$}} \put(9,-2){$(i)$} \put(21,3){\footnotesize{$(1,0)$}} \put(36,3){\footnotesize{$(2,0)$}} \put(51,3){\footnotesize{$(3,0)$}} \put(14,7){\footnotesize{$x_{0}$}} \put(29,7){\footnotesize{$x_{1}$}} \put(44,7){\footnotesize{$x_2=\omega_{1}$}} \put(56,7){\footnotesize{$\omega_{2}$}} \put(14,17){\footnotesize{$a$}} \put(29,17){\footnotesize{$\omega_{0}$}} \put(44,17){\footnotesize{$\omega_{1}$}} \put(56,17){\footnotesize{$\omega_{2}$}} \put(14,28){\footnotesize{$a \frac{\omega_{0}}{x_{1}}$}} \put(29,27){\footnotesize{$\omega_{0}$}} \put(44,27){\footnotesize{$\omega_{1}$}} \put(56,27){\footnotesize{$\omega_{2}$}} \put(15,38){\footnotesize{$a \frac{\omega_{0}}{x_{1}}$}} \put(30,37){\footnotesize{$\omega_{0}$}} \put(45,37){\footnotesize{$\omega_{1}$}} \put(56,37){\footnotesize{$\omega_{2}$}} \psline{->}(25,1)(40,1) \put(30,-2.5){$\rm{T}_1$} \psline{->}(2, 15)(2,30) \put(-3,20){$\rm{T}_2$} \put(2.7,15){\footnotesize{$(0,1)$}} \put(2.7,25){\footnotesize{$(0,2)$}} \put(2.7,35){\footnotesize{$(0,3)$}} \put(10,11){\footnotesize{$y_{0}$}} \put(10,20){\footnotesize{$y_{1}$}} \put(10,31){\footnotesize{$y_2=\tau_{1}$}} \put(11,37){\footnotesize{$\vdots$}} \put(25,11){\footnotesize{$a \frac{y_{0}}{x_{0}}$}} \put(25,20){\footnotesize{$\tau_{0}$}} \put(25,31){\footnotesize{$\tau_{1}$}} \put(26,37){\footnotesize{$\vdots$}} \put(40,11){\footnotesize{$a \frac{y_0 \omega_{0}}{x_0 x_{1}}$}} \put(40,20){\footnotesize{$\tau_{0}$}} \put(40,31){\footnotesize{$\tau_{1}$}} \put(41,37){\footnotesize{$\vdots$}} \pspolygon*[linecolor=lightgray](67,16)(114,16)(114,38)(67,38) \pspolygon*[linecolor=lightgray](82,6)(114,6)(114,38)(82,38) \psline{->}(67,6)(115,6) \psline(67,16)(114,16) \psline(67,26)(115,26) \psline(67,36)(114,36) \psline{->}(67,6)(67,38) \psline(82,6)(82,37) \psline(97,6)(97,37) \psline(112,6)(112,37) \put(65,3.2){\footnotesize{$(0,0)$}} \put(67.2,-2){$(ii)$} \put(78,3){\footnotesize{$(1,0)$}} \put(93,3){\footnotesize{$(2,0)$}} \put(108,3){\footnotesize{$(3,0)$}} \put(73,7){\footnotesize{$x_{0}$}} \put(87,7){\footnotesize{$\omega_{0}$}} \put(101,7){\footnotesize{$\omega_{1}$}} \put(113,7){\footnotesize{$\cdots$}} \put(73,17){\footnotesize{$a$}} \put(87,17){\footnotesize{$\omega_{0}$}} \put(101,17){\footnotesize{$\omega_{1}$}} \put(113,17){\footnotesize{$\cdots$}} \put(73,28){\footnotesize{$a$}} \put(87,27){\footnotesize{$\omega_{0}$}} \put(101,27){\footnotesize{$\omega_{1}$}} \put(113,27){\footnotesize{$\cdots$}} \put(73,36.5){\footnotesize{$\cdots$}} \put(87,36.5){\footnotesize{$\cdots$}} \put(102,36.5){\footnotesize{$\cdots$}} \psline{->}(83,1)(98,1) \put(88,-2.5){$\rm{T}_1$} \put(67,10){\footnotesize{$x_{0}$}} \put(67,20){\footnotesize{$\omega_{0}$}} \put(67,30){\footnotesize{$\omega_{1}$}} \put(68,37){\footnotesize{$\vdots$}} \put(82,11){\footnotesize{$a$}} \put(82,20){\footnotesize{$\omega_{0}$}} \put(82,30){\footnotesize{$\omega_{1}$}} \put(83,37){\footnotesize{$\vdots$}} \put(97,11){\footnotesize{$a$}} \put(97,20){\footnotesize{$\omega_{0}$}} \put(97,30){\footnotesize{$\omega_{1}$}} \put(98,37){\footnotesize{$\vdots$}} \end{picture} \end{center} \caption{Weight diagram of a 2-variable weighted shift $W_{(\protect\alpha 
,\protect\beta)}\equiv(T_{1},T_{2})$ with core of tensor form and with commutative toral Aluthge transform (panel (i)). \ Observe that $x_2=\protect\omega_1$, $x_3=\protect\omega_2$, $\cdots$, $y_2=\protect\tau_1$, $y_3=\protect\tau_2$, $\cdots$, and $\protect\tau_0 x_1 = \protect\omega_0 y_1$ all follow from (\protect\ref{prop1eq}). \ Panel (ii): weight diagram of the $2$-variable weighted shift $W_{(\protect\alpha,\protect\beta)}\equiv(T_{1},T_{2})$ considered in Subsection \protect\ref{subsec31}.} \label{Figure 2ex} \end{figure} \begin{lemma} \label{backwardext} \ (Subnormal backward extension of a $1$-variable weighted shift, \cite[Proposition 1.5]{CuYo1}) \ Let $T$ be a weighted shift whose restriction $T|_{\mathcal{L}}$ to $\mathcal{L}=\vee \{e_{1},e_{2},\cdots \}$ is subnormal, with associated measure $\mu _{\mathcal{L}}$. \ Then $T$ is subnormal (with associated measure $\mu $) if and only if\newline (i) $\ \frac{1}{t}\in L^{1}(\mu _{\mathcal{L}})$;\newline (ii) $\ \alpha _{0}^{2}\leq (\left\| \frac{1}{t}\right\| _{L^{1}(\mu _{\mathcal{L}})})^{-1}$.\newline In this case, $d\mu (t)=\frac{\alpha _{0}^{2}}{t}d\mu _{\mathcal{L}}(t)+(1-\alpha _{0}^{2}\left\| \frac{1}{t}\right\| _{L^{1}(\mu _{\mathcal{L}})})d\delta _{0}(t)$. \ In particular, $T$ is never subnormal when $\mu _{\mathcal{L}}(\{0\})>0$. \ \end{lemma} Thus, by Lemma \ref{backwardext}, we must have $x_0^2 \rho \le 1$ and $a^2 \rho \le 1$. \ For the proof of Lemma \ref{lem39}, we need to recall a few facts about $2$-variable weighted shifts. \begin{lemma} \ (cf. \cite{CuYo2}) \newline (i) \ Let $\mu $ and $\nu $ be two positive measures on a set $X$. \ We say that $\mu \leq \nu $ on $X$ if $\mu (E)\leq \nu (E)$ for all Borel subsets $E\subseteq X$; equivalently, $\mu \leq \nu $ if and only if $\int fd\mu \leq \int fd\nu $ for all $f\in C(X)$ such that $f\geq 0$ on $X$.\newline (ii)\ \ Let $\mu $ be a probability measure on $X\times Y$, and assume that $\frac{1}{t}\in L^{1}(\mu )$. \ The \textit{extremal measure} $\mu _{ext}$ (which is also a probability measure) on $X\times Y$ is given by $d\mu _{ext}(s,t):=(1-\delta _{0}(t))\frac{1}{t\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu )}}d\mu (s,t)$. \newline (iii) \ Given a measure $\mu $ on $X\times Y$, the \textit{marginal measure} $\mu ^{X}$ is given by $\mu ^{X}:=\mu \circ \pi _{X}^{-1}$, where $\pi _{X}:X\times Y\rightarrow X$ is the canonical projection onto $X$. \ Thus, $\mu ^{X}(E)=\mu (E\times Y)$, for every $E\subseteq X$. \end{lemma} \begin{lemma} \label{backext} (\cite[Proposition 3.10]{CuYo1}) \ (Subnormal backward extension of a $2$-variable weighted shift) \ Assume that $W_{(\alpha ,\beta )}$ is a commuting pair of hyponormal operators, and that $W_{(\alpha ,\beta )}|_{\mathcal{M}}$ is subnormal with associated measure $\mu _{\mathcal{M}}$. \ Then, $W_{(\alpha ,\beta )}$ is subnormal if and only if the following conditions hold (where $\xi _{0}$ denotes the Berger measure of the $0$-th row in the weight diagram of $W_{(\alpha ,\beta )}$):\newline $(i)$ $\ \frac{1}{t}\in L^{1}(\mu _{\mathcal{M}})$;\newline $(ii)$ $\ \beta _{00}^{2}\leq (\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu _{\mathcal{M}})})^{-1}$;\newline $(iii)$ $\ \beta _{00}^{2}\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu _{\mathcal{M}})}(\mu _{\mathcal{M}})_{ext}^{X}\leq \xi _{0}$.\newline Moreover, if $\beta _{00}^{2}\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu _{\mathcal{M}})}=1$, then $(\mu _{\mathcal{M}})_{ext}^{X}=\xi _{0}$.
\ In the case when $W_{(\alpha ,\beta )}$ is subnormal, the Berger measure $\mu $ of $W_{(\alpha ,\beta )}$ is given by \begin{eqnarray} d\mu (s,t) & = & \beta _{00}^{2}\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu _{\mathcal{M}})}d(\mu _{\mathcal{M}})_{ext}(s,t) \nonumber \\ && + (d\xi _{0}(s)-\beta _{00}^{2}\left\Vert \frac{1}{t}\right\Vert _{L^{1}(\mu _{\mathcal{M}})}d(\mu _{\mathcal{M}})_{ext}^{X}(s))d\delta _{0}(t). \label{Berger} \end{eqnarray} \end{lemma} In the rest of this section, we restrict attention to the $2$-variable weighted shift with weight diagram given as in Figure \ref{Figure 2ex}(ii). \begin{lemma} \label{lem39} Let $W_{(\alpha ,\beta )}$ be a $2$-variable weighted shift as above, let $\rho:=\int{\frac{1}{s} d\xi(s)}<\infty$, and assume that $x_0^2 \rho \le 1$ and $a^2 \rho \le 1$. \ Then, $W_{(\alpha ,\beta )}$ is subnormal if and only if $x_0^2 \rho(2-a^2 \rho)\le 1$. \end{lemma} \begin{proof} Observe that the Berger measure of $[x_0,\xi]$ is $\xi_{x_0} =\frac{x_0^2 \xi}{s} +(1-x_0^2 \rho) \delta_0$, and similarly the Berger measure of $[a,\xi]$ is $\xi_{a} =\frac{a^2 \xi}{s} +(1-a^2 \rho) \delta_0$. \ The Berger measure of the restriction of $W_{(\alpha ,\beta )}$ to the subspace $\mathcal{M}$ is then $\mu_{\mathcal{M}}=\xi_a \times \xi$, from which it follows at once that $(\mu_{\mathcal{M}})_{ext}^X=\xi_a$. \ Therefore, by Lemma \ref{backext}(iii), for the subnormality of $W_{(\alpha ,\beta )}$ we will need $x_0^2 \rho \,\xi_a \le \xi_{x_0}$; comparing the coefficients of $\frac{\xi}{s}$ on both sides gives $a^2\rho \le 1$ (which holds by hypothesis), while comparing the coefficients of $\delta_0$ gives $x_0^2\rho(1-a^2\rho)\le 1-x_0^2\rho$, that is, $x_0^2 \rho(2-a^2 \rho)\le 1$, as desired. \end{proof} \begin{lemma} The $2$-variable weighted shift $\widetilde{W}_{(\alpha ,\beta )}$ is hyponormal if and only if $|a-x_0|\le \omega_1-x_0$. \end{lemma} \begin{proof} Since the restrictions of $\widetilde{W}_{(\alpha ,\beta )}$ to the subspaces $\mathcal{M}$ and $\mathcal{N}$ are subnormal, Lemma \ref{khypo} says that $\widetilde{W}_{(\alpha ,\beta )}$ is hyponormal if and only if $M_{(0,0)}(1) \equiv M_{(0,0)}(1)(\widetilde{W}_{(\alpha ,\beta )}) \ge 0$. \ Since $$ M_{(0,0)}(1)=\left( \begin{array}{cc} \omega_0 \omega_1 - x_0 \omega_0 & a \omega_0 - x_0 \omega_0 \\ a \omega_0 - x_0 \omega_0 & \omega_0 \omega_1 - x_0 \omega_0 \end{array} \right), $$ it follows that $\widetilde{W}_{(\alpha ,\beta )}$ is hyponormal if and only if $|a-x_0| \le \omega_1-x_0$, as desired. \end{proof} We observe that if $a \ge x_0$, then $|a-x_0|\le \omega_1-x_0$ becomes $a \le \omega_1$, which is always true. \ Thus, to build an example where the hyponormality of $\widetilde{W}_{ (\alpha ,\beta )}$ is violated, we must necessarily assume that $a<x_0$. \ Incidentally, this assumption automatically leads to $a^2 \rho \le x_0^2 \rho$, so that the subnormality of $W_{(\alpha ,\beta )}$ is now determined by the conditions $x_0^2 \rho \le 1$ and $x_0^2 \rho(2-a^2 \rho)\le 1$. \ In short, an example with the desired properties can be constructed once we guarantee the following three conditions: \begin{equation} x_0^2 \rho \le 1, \label{condition1} \end{equation} \begin{equation} x_0^2 \rho (2-a^2 \rho) \le 1, \label{condition2} \end{equation} \begin{equation} x_0 > \frac{\omega_1+a}{2}. \label{condition3} \end{equation} Notice that $2-a^2 \rho <2$, so if we were to assume that $x_0^2 \rho \le \frac{1}{2}$ then both conditions (\ref{condition1}) and (\ref{condition2}) would be simultaneously satisfied. \ Moreover, if we were to assume that $x_0 > \frac{\omega_1}{2}$, then we could always find $a<x_0$ such that $x_0>\frac{\omega_1+a}{2}$.
\ We can then focus on the following question: Can we simultaneously guarantee $x_0^2 \rho \le \frac{1}{2}$ and $x_0^2 > \frac{\omega_1^2}{4}$? Equivalently, we need
\begin{equation}
\frac{\omega_1^2}{4} < x_0^2 \le \frac{1}{2 \rho}. \label{condition4}
\end{equation}
Now, if $\frac{\omega_1^2}{4} < \frac{1}{2 \rho}$, then it is possible to select $x_0$ such that (\ref{condition4}) is satisfied. \ We have thus established the following result.

\begin{theorem} \label{example100}
Let $W_{(\alpha ,\beta )}$ be as above, and assume that
$$
\omega_1^2 \rho < 2.
$$
Then: (i) \ $W_{(\alpha ,\beta )}$ is subnormal; and (ii) \ $\widetilde{W}_{(\alpha ,\beta )}$ is not hyponormal.
\end{theorem}

We will now show that the condition in Theorem \ref{example100} holds for a large class of $2$-variable weighted shifts.

\begin{example}
Consider the case when the measure $\xi$ is $2$-atomic, that is, $\xi \equiv r \delta_p + s \delta_q$, with $r,s>0$, $r+s=1$ and $0<p<q$. \ (Recall that $0$ cannot be in the support of $\xi$, because otherwise $\frac{1}{s} \notin L^1(\xi)$.) \ We compute
$$
\omega_1^2 \rho = \frac{rp^2+sq^2}{rp+sq}\left(\frac{r}{p}+\frac{s}{q}\right)=\frac{r+s(\frac{q}{p})^2}{r+s\frac{q}{p}}\left(r+\frac{s}{\frac{q}{p}}\right).
$$
Thus, without loss of generality we can always assume that $p=1$, that is,
$$
\omega_1^2 \rho=\frac{r+sq^2}{r+sq}\left(r+\frac{s}{q}\right).
$$
A calculation using {\it Mathematica} \cite{Wol} reveals that for $1 < q \le \tilde{q}:=\frac{1}{2} + \sqrt{2} + \frac{1}{2}\sqrt{5 + 4 \sqrt{2}} \approx 3.546$, we have $\frac{r+sq^2}{r+sq}(r+\frac{s}{q})<2$ for all $r,s>0$ with $r+s=1$. \ As a matter of fact, there is a region $R$ in the $(s,q)$-plane bounded by the graph $q=f(s)$ of a positive convex function $f$, such that $\omega_1^2 \rho < 2$ precisely when $1<q<f(s)$; $R$ contains the rectangle $[0,1] \times (1,\tilde{q}]$. \qed
\end{example}

We have thus established the existence of subnormal $2$-variable weighted shifts $W_{(\alpha ,\beta )}$ with non-hyponormal toral Aluthge transforms.

\section{\label{Sec2}The Spherical Aluthge Transform}

In this section, we study the second plausible definition of the multivariable Aluthge transform, which we will denote, to avoid confusion, by $\widehat{(T_{1},T_{2})}$; this corresponds to (\ref{Def-Alu1}). \ We begin with the following elementary result.

\begin{proposition} \label{basic}
Assume that $(T_1,T_2) \equiv (V_1P,V_2P)$, where $P=(T_1^*T_1+T_2^*T_2)^{1/2}$, and let $\widehat {(T_1,T_2)}\equiv (\widehat{T_1},\widehat{T_2}):=(\sqrt{P}V_1\sqrt{P},\sqrt{P}V_2\sqrt{P})$. \ Assume also that $(T_1,T_2)$ is commutative. \ Then \newline
(i) \ $(V_1,V_2)$ is a (joint) partial isometry; more precisely, $V_1^*V_1+V_2^*V_2$ is the projection onto $\textrm{ran} \; P$; \newline
(ii) \ $\widehat {(T_1,T_2)}$ is commutative on $\textrm{ran} \;P$, so in particular $\widehat {(T_1,T_2)}$ is commutative whenever $P$ is injective.
\end{proposition}

\begin{proof}
\ (i) An easy computation reveals that
$$
P^2=T_1^*T_1+T_2^*T_2=(V_1P)^*(V_1P)+(V_2P)^*(V_2P)=P(V_1^*V_1+V_2^*V_2)P,
$$
and therefore $(V_1^*V_1+V_2^*V_2)|_{\textrm{ran} \; P}$ is the identity operator on $\textrm{ran} \; P$, as desired.

To prove (ii), consider the product
$$
\widehat{T_1}\widehat{T_2}=\sqrt{P}V_1\sqrt{P}\sqrt{P}V_2\sqrt{P}=\sqrt{P}V_1PV_2\sqrt{P}.
$$
Then
\begin{eqnarray*}
\widehat{T_{1}}\widehat{T_{2}}\sqrt{P} &=&\sqrt{P}V_{1}PV_{2}P=\sqrt{P}T_{1}T_{2}=\sqrt{P}T_{2}T_{1}=(\sqrt{P}V_{2}PV_{1}\sqrt{P})\sqrt{P} \\
&=&(\sqrt{P}V_{2}\sqrt{P})(\sqrt{P}V_{1}\sqrt{P})\sqrt{P}=\widehat{T_{2}}\widehat{T_{1}}\sqrt{P}.
\end{eqnarray*} It follows at once that $\widehat{T_1}\widehat{T_2}-\widehat{T_2}\widehat{T_1}$ vanishes on $\textrm{ran} \; P$, as desired. \end{proof} We now prove: \begin{proposition} \label{Prop1}Given a $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$, let $\widehat{W}_{(\alpha ,\beta )}$ be given by (\ref{Def-Alu1}). \ Assume that $W_{(\alpha ,\beta )}$ is commutative. \ Then $\widehat{W}_{(\alpha ,\beta )}$ is commutative. \end{proposition} \begin{proof} Straightforward from Proposition \ref{basic}. \end{proof} We briefly pause to describe how $\widehat{W}_{(\alpha ,\beta )}$ acts on the canonical orthonormal basis vectors. \begin{lemma} \label{PolarAlu} Let $W_{(\alpha ,\beta )} \equiv \left(T_{1},T_{2}\right)$ be a $2$-variable weighted shift. \ Then $$ \widehat{T}_{1}e_{\mathbf{k}}=\alpha_{\mathbf{k}} \frac{(\alpha_{\mathbf{k}+\mathbf{\epsilon}_1}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_1}^2)^{1/4}}{(\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)^{1/4}} e_{\mathbf{k}+\mathbf{\epsilon}_1} $$ and $$ \widehat{T}_{2}e_{\mathbf{k}}=\beta_{\mathbf{k}} \frac{(\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_2}^2)^{1/4}}{(\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)^{1/4}} e_{\mathbf{k}+\mathbf{\epsilon}_2} $$ for all $\mathbf{k} \in \mathbb{Z}_+^2$. \end{lemma} \begin{proof} Straightforward from (\ref{Def-Alu1}). \end{proof} \setlength{\unitlength}{1mm} \psset{unit=1mm} \begin{figure}[th] \begin{center} \begin{picture}(120,45) \psline{->}(20,6)(53,6) \psline(20,16)(52,16) \psline(20,26)(52,26) \psline(20,36)(52,36) \psline{->}(20,6)(20,40) \psline(35,6)(35,39) \psline(50,6)(50,39) \put(7.7,3.2){\footnotesize{$(k_1,k_2)$}} \put(19,-2){$(i)$} \put(28,3){\footnotesize{$(k_1+1,k_2)$}} \put(26,7){\footnotesize{$1$}} \put(26,17){\footnotesize{$a$}} \put(26,27){\footnotesize{$c$}} \psline{->}(35,1)(50,1) \put(40,-2.5){$\rm{T}_1$} \psline{->}(0, 15)(0,30) \put(-5,20){$\rm{T}_2$} \put(2,15){\footnotesize{$(k_1,k_2+1)$}} \put(2,25){\footnotesize{$(k_1,k_2+2)$}} \put(2,35){\footnotesize{$(k_1,k_2+3)$}} \put(20.5,11){\footnotesize{$1$}} \put(20.5,20){\footnotesize{$b$}} \put(20.5,31){\footnotesize{$d$}} \put(35.5,11){\footnotesize{$a$}} \put(35.5,20){\footnotesize{$\frac{bc}{a}$}} \psline{->}(67,6)(115,6) \psline(67,17)(114,17) \psline(67,28)(115,28) \psline(67,39)(114,39) \psline{->}(67,6)(67,44) \psline(82,6)(82,43) \psline(97,6)(97,43) \psline(112,6)(112,43) \put(65,3.2){\footnotesize{$(0,0)$}} \put(67.2,-2){$(ii)$} \put(78,3){\footnotesize{$(1,0)$}} \put(93,3){\footnotesize{$(2,0)$}} \put(108,3){\footnotesize{$(3,0)$}} \put(73,6.5){\footnotesize{${\alpha_{00}}$}} \put(87,6.5){\footnotesize{${\alpha_{10}}$}} \put(101,6.5){\footnotesize{${\alpha_{20}}$}} \put(113,6.6){\footnotesize{$\cdots$}} \put(73,17.5){\footnotesize{${\alpha_{10}}$}} \put(87,17.5){\footnotesize{${\alpha_{20}}$}} \put(101,17.5){\footnotesize{${\alpha_{30}}$}} \put(113,17.6){\footnotesize{$\cdots$}} \put(73,28.5){\footnotesize{${\alpha_{20}}$}} \put(87,28.5){\footnotesize{${\alpha_{30}}$}} \put(101,28.5){\footnotesize{${\alpha_{40}}$}} \put(113,28.6){\footnotesize{$\cdots$}} \put(73,39.5){\footnotesize{$\cdots$}} \put(87,39.5){\footnotesize{$\cdots$}} \put(102,39.5){\footnotesize{$\cdots$}} \psline{->}(83,1)(98,1) \put(88,-2.5){$\rm{T}_1$} \put(67,12){\footnotesize{$\beta_{00}$}} \put(67,23){\footnotesize{$\frac{\alpha_{10}\beta_{00}}{\alpha_{00}}$}} \put(67,34){\footnotesize{$\frac{\alpha_{20}\beta_{00}}{\alpha_{00}}$}} \put(68,40){\footnotesize{$\vdots$}} 
\put(82,12){\footnotesize{$\frac{\alpha_{10}\beta_{00}}{\alpha_{00}}$}} \put(82,23){\footnotesize{$\frac{\alpha_{20}\beta_{00}}{\alpha_{00}}$}} \put(83,34){\footnotesize{$\frac{\alpha_{30}\beta_{00}}{\alpha_{00}}$}} \put(83,40){\footnotesize{$\vdots$}} \put(97,12){\footnotesize{$\frac{\alpha_{20}\beta_{00}}{\alpha_{00}}$}} \put(98,23){\footnotesize{$\frac{\alpha_{30}\beta_{00}}{\alpha_{00}}$}} \put(98,34){\footnotesize{$\frac{\alpha_{40}\beta_{00}}{\alpha_{00}}$}} \put(98,40){\footnotesize{$\vdots$}}
\end{picture}
\end{center}
\caption{Weight diagram of the 2-variable weighted shift in Proposition \protect\ref{Proposition1} and weight diagram of a commuting $2$-variable weighted shift for which the toral and spherical Aluthge transforms coincide, respectively.}
\label{Fig 1}
\end{figure}

We next have:

\begin{proposition} \label{Proposition1}
Consider a $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv \left( T_{1},T_{2}\right)$, and assume that $W_{(\alpha ,\beta )}$ is a commuting pair of hyponormal operators. \ Then so is $\widehat{W}_{(\alpha ,\beta )}$.
\end{proposition}

\begin{proof}
We will establish that $\widehat{T}_2$ is hyponormal; the proof for $\widehat{T}_1$ is entirely similar, and the commutativity of $\widehat{W}_{(\alpha ,\beta )}$ follows from Proposition \ref{Prop1}. \ Fix a lattice point $(k_1,k_2)$; we would like to prove that $\widehat{\beta}_{(k_1,k_2)} \le \widehat{\beta}_{(k_1,k_2+1)}$. \ Since the hyponormality of a Hilbert space operator is invariant under multiplication by a nonzero scalar, we can, without loss of generality, assume that $\alpha_{(k_1,k_2)}=\beta_{(k_1,k_2)}=1$. \ To simplify the calculation, let $a:=\alpha_{(k_1,k_2+1)}$, $b:=\beta_{(k_1,k_2+1)}$, $c:=\alpha_{(k_1,k_2+2)}$ and $d:=\beta_{(k_1,k_2+2)}$. \ Thus, the weight diagram of $(T_1,T_2)$ is now given as in Figure \ref{Fig 1}(i). \ Since $T_2$ is hyponormal, we must necessarily have
\begin{equation} \label{bc}
a \le \frac{bc}{a}
\end{equation}
in the first column of the weight diagram in Figure \ref{Fig 1}(i); moreover, the hyponormality of $T_2$ also gives $1 \le b \le d$. \ Recall also the Cauchy-Schwarz inequality $a^2b^2 \le \frac{a^4+b^4}{2}$. \ Then
\begin{eqnarray*}
\widehat{\beta }_{(k_{1},k_{2})}^{4}&=&\frac{a^2+b^2}{2}=\frac{(a^2+b^2)^2}{2(a^2+b^2)}=\frac{a^4+2a^2b^2+b^4}{2(a^2+b^2)} \\
& \le &\frac{a^4+b^4}{a^2+b^2} \le \frac{b^2c^2+b^2d^2}{a^2+b^2} \; \; (\textrm{by } (\ref{bc}) \textrm{ and the fact that } b \le d) \\
& = &b^2 \cdot \frac{c^2+d^2}{a^2+b^2} \le b^4 \cdot \frac{c^2+d^2}{a^2+b^2} = \widehat{\beta }_{(k_{1},k_{2}+1)}^{4} \; \; (\textrm{since } b \ge 1),
\end{eqnarray*}
as desired.
\end{proof}

We now present an example of a hyponormal $2$-variable weighted shift $W_{(\alpha ,\beta )}$ for which $\widetilde{W}_{(\alpha ,\beta )}$ is not hyponormal. \ While we have already encountered this behavior (cf. Theorem \ref{example100}), the simplicity of the following example warrants special mention (aware as we are that the result is weaker than Theorem \ref{example100}). \ Moreover, this example shows that the spherical Aluthge transform $\widehat{W}_{(\alpha,\beta )}$ may be hyponormal even if $W_{(\alpha,\beta )}$ is not.

\begin{example} \label{2 atomic}
For $0<x,y<1$, let $W_{(\alpha ,\beta )}$ be the $2$-variable weighted shift in Figure \ref{Figure 2ex}(ii), where $\omega_0=\omega_1=\omega_2=\cdots:=1$, $x_{0}:=x$, $a:=y$.
\ Then \newline
(i) \ $W_{(\alpha ,\beta )}$ is subnormal $\Longleftrightarrow x \le s(y):=\sqrt{\frac{1}{2-y^2}}$; \newline
(ii) \ $W_{(\alpha ,\beta )}$ is hyponormal $\Longleftrightarrow x\le h(y):=\sqrt{\frac{1+y^{2}}{2}}$; \newline
(iii) \ $\widetilde{W}_{(\alpha ,\beta )}$ is hyponormal $\Longleftrightarrow x \le CA(y):=\frac{1+y}{2}$; \newline
(iv) \ $\widehat{W}_{(\alpha ,\beta )}$ is hyponormal $\Longleftrightarrow x\le PA(y):=\frac{2\left( 1+y^{2}-y^{4}\right) }{\left( 1+\sqrt{2}\right) \left( 1+y^{2}\right) \left( \sqrt{1+y^{2}}-y^{2}\right) }$.\newline
Clearly, $s(y) \le h(y) \le PA(y)$ and $CA(y) < h(y)$ for all $0 < y <1$, while $CA(y) < s(y)$ on $(0,q)$ and $CA(y) > s(y)$ on $(q,1)$, where $q \cong 0.52138$. \ Then $W_{(\alpha ,\beta )}$ is hyponormal but $\widetilde{W}_{(\alpha ,\beta )}$ is not hyponormal if $0<CA(y)<x\le h(y)$, and $\widehat{W}_{(\alpha ,\beta )}$ is hyponormal but $W_{(\alpha ,\beta )}$ is not hyponormal if $0<h(y)<x\le PA(y)$.
\end{example}

\section{\label{Identical Aluthge Transforms}$2$-variable Weighted Shifts with Identical Aluthge Transforms}

We shall now characterize the class $\mathcal{A}_{TS}$ of commuting $2$-variable weighted shifts $W_{(\alpha,\beta)}$ for which the toral and spherical Aluthge transforms agree, that is, $\widetilde{W}_{(\alpha,\beta)}=\widehat{W}_{(\alpha,\beta)}$. \ Using Lemmas \ref{CartAlu} and \ref{PolarAlu}, it suffices to restrict attention to the equalities
$$
\sqrt{\alpha_{\mathbf{k}}\alpha_{\mathbf{k}+\mathbf{\epsilon}_{1}}}=\alpha_{\mathbf{k}} \frac{(\alpha_{\mathbf{k}+\mathbf{\epsilon}_1}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_1}^2)^{1/4}}{(\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)^{1/4}}
$$
and
$$
\sqrt{\beta_{\mathbf{k}}\beta_{\mathbf{k}+\mathbf{\epsilon}_{2}}}=\beta_{\mathbf{k}} \frac{(\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_2}^2)^{1/4}}{(\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)^{1/4}}
$$
for all $\mathbf{k} \in \mathbb{Z}_+^2$. \ Thus (squaring both sides twice), we easily see that $\widetilde{W}_{(\alpha,\beta)}=\widehat{W}_{(\alpha,\beta)}$ if and only if
$$
\alpha_{\mathbf{k}+\mathbf{\epsilon}_{1}}^2 (\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)=\alpha_{\mathbf{k}}^2(\alpha_{\mathbf{k}+\mathbf{\epsilon}_1}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_1}^2)
$$
and
$$
\beta_{\mathbf{k}+\mathbf{\epsilon}_{2}}^2 (\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2)=\beta_{\mathbf{k}}^2(\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}^2+\beta_{\mathbf{k}+\mathbf{\epsilon}_2}^2)
$$
for all $\mathbf{k} \in \mathbb{Z}_+^2$, which is equivalent to
$$
\alpha_{\mathbf{k}+\mathbf{\epsilon}_{1}}\beta_{\mathbf{k}}=\alpha_{\mathbf{k}}\beta_{\mathbf{k}+\mathbf{\epsilon}_{1}}
$$
and
$$
\beta_{\mathbf{k}+\mathbf{\epsilon}_{2}}\alpha_{\mathbf{k}}=\beta_{\mathbf{k}}\alpha_{\mathbf{k}+\mathbf{\epsilon}_{2}}
$$
for all $\mathbf{k} \in \mathbb{Z}_+^2$. \ If we now recall condition (\ref{commuting}) for the commutativity of $W_{(\alpha,\beta)}$, that is, $\alpha_{\mathbf{k}}\beta_{\mathbf{k}+\mathbf{\epsilon}_1}=\beta_{\mathbf{k}}\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}$ for all $\mathbf{k} \in \mathbb{Z}_+^2$, we see at once that $\widetilde{W}_{(\alpha,\beta)}=\widehat{W}_{(\alpha,\beta)}$ if and only if $\alpha_{\mathbf{k}+\mathbf{\epsilon}_1}=\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}$ and $\beta_{\mathbf{k}+\mathbf{\epsilon}_2}=\beta_{\mathbf{k}+\mathbf{\epsilon}_1}$ for all $\mathbf{k} \in \mathbb{Z}_+^2$.
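The characterization just obtained is easy to test numerically. \ The following minimal sketch (in Python; the finite truncation and the particular weight values are assumptions of the sketch only) builds a diagram with $\alpha_{\mathbf{k}+\mathbf{\epsilon}_1}=\alpha_{\mathbf{k}+\mathbf{\epsilon}_2}$ and $\beta_{\mathbf{k}+\mathbf{\epsilon}_2}=\beta_{\mathbf{k}+\mathbf{\epsilon}_1}$ (so that both weight families depend only on $k_1+k_2$, with $\beta \equiv c\,\alpha$ then forced by (\ref{commuting})) and confirms that the toral and spherical weights coincide.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = np.sort(rng.uniform(0.5, 1.0, N + 2))  # alpha_(k1,k2) = a[k1+k2]
c = 0.7                                    # c = beta_00 / alpha_00
b = c * a                                  # beta_(k1,k2) = b[k1+k2]

for n in range(N):
    toral = np.sqrt(a[n] * a[n + 1])
    spherical = a[n] * ((a[n + 1]**2 + b[n + 1]**2)
                        / (a[n]**2 + b[n]**2)) ** 0.25
    assert np.isclose(toral, spherical)
print("toral and spherical transforms agree on this diagram")
\end{verbatim}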
\ It follows that the weight diagram for $W_{(\alpha,\beta)}$ is completely determined by the zeroth row and the weight $\beta_{(0,0)}$. \ For, referring to Figure \ref{Fig 1}(ii), once we have $\alpha_{(0,0)}$ and $\alpha_{(1,0)}$, we immediately get $\alpha_{(0,1)}$ $(=\alpha_{(1,0)})$. \ With $\alpha_{(0,0)}$ and $\alpha_{(0,1)}$ known, we use commutativity and $\beta_{(0,0)}$ to calculate $\beta_{(1,0)}$. \ Since $\beta_{(0,1)} = \beta_{(1,0)}$ and $\alpha_{(0,2)}=\alpha_{(1,1)}=\alpha_{(2,0)}$, we can then calculate $\beta_{(1,1)}$ and $\beta_{(2,0)}$. \ A similar reasoning yields all remaining $\alpha_{\mathbf{k}}$'s and $\beta_{\mathbf{k}}$'s.

We will now show that, for the purpose of establishing the invariance of $k$-hyponormality under the Aluthge transform for the class $\mathcal{A}_{TS}$, it is enough to assume that $\beta_{(0,0)}=\alpha_{(0,0)}$. \ This is an immediate consequence of the following well known result.

\begin{lemma} \label{lem51}
Let $T$ be a bounded linear operator on a Hilbert space, and let $T\equiv VP$ be its polar decomposition. \ Let $a \equiv |a|e^{i \theta}$ be a nonzero complex number written in polar form, and define $T_a:=aT$. \ Then, the polar decomposition of $T_a$ is $(e^{i\theta}V)(|a|P)$. \ As a consequence, $\widetilde{T_a}=a\widetilde{T}$.
\end{lemma}

\begin{remark}
By Lemma \ref{lem51}, to study the toral Aluthge transform of $W_{(\alpha,\beta)} \equiv (T_1,T_2)\in \mathcal{A}_{TS}$ we can multiply $T_2$ by the factor $\frac{\alpha_{(0,0)}}{\beta_{(0,0)}}$. \ This results in a new $2$-variable weighted shift for which $\alpha_{\mathbf{k}}=\beta_{\mathbf{k}}$ for all $\mathbf{k} \in \mathbb{Z}_+^2$. \ This subclass of $\mathcal{A}_{TS}$ is the central subject of the next section. \ Observe that, while the natural generalization of Lemma \ref{lem51} is not true for the spherical Aluthge transform, it is true when restricted to $\mathcal{A}_{TS}$, since both the toral and spherical Aluthge transforms agree on this class. \qed
\end{remark}

\section{\label{Sect3-1}When is Hyponormality Invariant Under the Toral and Spherical Aluthge Transforms?}

In this section we identify a large class of $2$-variable weighted shifts for which the toral and spherical Aluthge transforms do preserve hyponormality. \ This is in some sense optimal, since we know that $k$-hyponormality ($k \ge 2$) is not preserved by the $1$-variable Aluthge transform \cite{LLY}, as mentioned in the Introduction. \ Since this class is actually a subclass of $\mathcal{A}_{TS}$ (introduced in Section \ref{Identical Aluthge Transforms}), it follows at once that all the results we establish for the toral Aluthge transform are also true for the spherical Aluthge transform.

We start with some definitions. \ Recall that the core $c(W_{(\alpha ,\beta )})$ of $W_{(\alpha ,\beta )}$ is the restriction of $W_{(\alpha ,\beta )}$ to the invariant subspace $\mathcal{M} \cap \mathcal{N}$. \ $W_{(\alpha ,\beta )}$ is said to be of \textit{tensor form} if it is of the form $(I \otimes W_{\sigma },W_{\tau } \otimes I)$ for some unilateral weighted shifts $W_{\sigma }$ and $W_{\tau }$. \ Consider $\Theta \left( W_{\omega }\right) \equiv W_{(\alpha ,\beta )}$ on $\ell ^{2}(\mathbb{Z}_{+}^{2})$ given by the double-indexed weight sequences $\alpha _{(k_{1},k_{2})}=\beta _{(k_{1},k_{2})}:=\omega _{k_{1}+k_{2}}$ for $k_{1},k_{2}\geq 0$. \ It is clear that $\Theta \left( W_{\omega }\right) $ is a commuting pair, and we refer to it as a $2$-variable weighted shift with \textit{diagonal core} \cite{CLY7}.
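For instance (an illustrative special case), the Bergman weights $\omega_{n}:=\sqrt{\frac{n+1}{n+2}}$ $(n\geq 0)$ produce the $2$-variable weighted shift $\Theta(W_{\omega})$ with $\alpha_{(k_1,k_2)}=\beta_{(k_1,k_2)}=\sqrt{\frac{k_1+k_2+1}{k_1+k_2+2}}$, and the toral Aluthge transform then has all of its weights equal to
\[
\sqrt{\omega_{k_1+k_2}\omega_{k_1+k_2+1}}=\left(\frac{k_1+k_2+1}{k_1+k_2+3}\right)^{1/4}
\]
(cf. Figure \ref{Figure 6}(ii) below).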
\ This $2$-variable weighted shift can be represented by the weight diagram in Figure \ref{Figure 6}(i). \ It is straightforward to observe that the class of shifts of the form $\Theta \left( W_{\omega }\right) $ is simply $\mathcal{A}_{TS}$ with the extra condition $\beta_{(0,0)}=\alpha_{(0,0)}$. \ (For more on these shifts the reader is referred to \cite{CLY7}.) \ Now, we show that the $k$-hyponormality of $W_{\omega }$ implies the $k$-hyponormality of $\Theta \left( W_{\omega }\right) $. \ For this, we present a simple criterion to detect the $k$-hyponormality of weighted shifts.

\begin{lemma}
(\cite{Cu2})\label{k-hyponormal} \ Let $W_{\alpha }e_{i}=\alpha _{i}e_{i+1}$ $(i\geq 0)$ be a hyponormal weighted shift, and let $k\geq 1$. \ The following statements are equivalent:\newline
(i) \ $W_{\alpha }$ is $k$-hyponormal;\newline
(ii) \ The matrix
\begin{equation*}
(([W_{\alpha }^{\ast j},W_{\alpha }^{i}]e_{u+j},e_{u+i}))_{i,j=1}^{k}
\end{equation*}
is positive semi-definite for all $u\geq -1$;\newline
(iii) \ The Hankel matrix
\begin{equation*}
H(k;u)\left( W_{\alpha }\right) :=(\gamma _{u+i+j-2})_{i,j=1}^{k+1}
\end{equation*}
is positive semi-definite for all $u\geq 0$.
\end{lemma}

\setlength{\unitlength}{1mm} \psset{unit=1mm}
\begin{figure}[th]
\begin{center}
\begin{picture}(135,70)
\psline{->}(20,20)(65,20) \psline(20,40)(63,40) \psline(20,60)(63,60) \psline{->}(20,20)(20,65) \psline(40,20)(40,63) \psline(60,20)(60,63) \put(12,16){\footnotesize{$(0,0)$}} \put(37,16){\footnotesize{$(1,0)$}} \put(57,16){\footnotesize{$(2,0)$}} \put(29,21){\footnotesize{$\omega_{0}$}} \put(49,21){\footnotesize{$\omega_{1}$}} \put(61,21){\footnotesize{$\cdots$}} \put(29,41){\footnotesize{$\omega_{1}$}} \put(49,41){\footnotesize{$\omega_{2}$}} \put(61,41){\footnotesize{$\cdots$}} \put(29,61){\footnotesize{$\omega_{2}$}} \put(49,61){\footnotesize{$\omega_{3}$}} \put(61,61){\footnotesize{$\cdots$}} \psline{->}(35,14)(50,14) \put(42,10){$\rm{T}_1$} \psline{->}(10,35)(10,50) \put(4,42){$\rm{T}_2$} \put(11,40){\footnotesize{$(0,1)$}} \put(11,60){\footnotesize{$(0,2)$}} \put(20,30){\footnotesize{$\omega_{0}$}} \put(20,50){\footnotesize{$\omega_{1}$}} \put(21,61){\footnotesize{$\vdots$}} \put(40,30){\footnotesize{$\omega_{1}$}} \put(40,50){\footnotesize{$\omega_{2}$}} \put(41,61){\footnotesize{$\vdots$}} \put(60,30){\footnotesize{$\omega_{2}$}} \put(60,50){\footnotesize{$\omega_{3}$}} \put(15,8){(i)} \put(85,8){(ii)} \psline{->}(90,14)(105,14) \put(97,9){$\widetilde{T}_{1}$} \psline{->}(72,35)(72,50) \put(67,42){$\widetilde{T}_{2}$} \psline{->}(75,20)(120,20) \psline(75,40)(118,40) \psline(75,60)(118,60) \psline{->}(75,20)(75,65) \psline(95,20)(95,63) \psline(115,20)(115,63) \put(71,16){\footnotesize{$(0,0)$}} \put(91,16){\footnotesize{$(1,0)$}} \put(111,16){\footnotesize{$(2,0)$}} \put(80,21){\footnotesize{$\sqrt{\omega_{0}\omega_{1}}$}} \put(100,21){\footnotesize{$\sqrt{\omega_{1}\omega_{2}}$}} \put(116,21){\footnotesize{$\cdots$}} \put(80,41){\footnotesize{$\sqrt{\omega_{1}\omega_{2}}$}} \put(100,41){\footnotesize{$\sqrt{\omega_{2}\omega_{3}}$}} \put(116,41){\footnotesize{$\cdots$}} \put(80,61){\footnotesize{$\sqrt{\omega_{2}\omega_{3}}$}} \put(100,61){\footnotesize{$\sqrt{\omega_{3}\omega_{4}}$}} \put(116,61){\footnotesize{$\cdots$}} \put(75,30){\footnotesize{$\sqrt{\omega_{0}\omega_{1}}$}} \put(75,50){\footnotesize{$\sqrt{\omega_{1}\omega_{2}}$}} \put(76,61){\footnotesize{$\vdots$}} \put(95,30){\footnotesize{$\sqrt{\omega_{1}\omega_{2}}$}} \put(95,50){\footnotesize{$\sqrt{\omega_{2}\omega_{3}}$}}
\put(96,61){\footnotesize{$\vdots$}}
\end{picture}
\end{center}
\caption{Weight diagram of a generic $2$-variable weighted shift $\Theta \left( W_{\protect\omega }\right) \equiv (T_{1},T_{2})$ and weight diagram of the toral Aluthge transform $\widetilde{\Theta \left( W_{\protect\omega }\right)}\equiv (\widetilde{T}_{1},\widetilde{T}_{2})$ of $\Theta \left( W_{\protect\omega }\right)$, respectively.}
\label{Figure 6}
\end{figure}

\subsection{Preservation of hyponormality}

We then have:

\begin{proposition}
\label{propscaling2} Consider $\Theta \left( W_{\omega }\right) \equiv (T_{1},T_{2})$ given by Figure \ref{Figure 6}(i). \ Then, for $k\geq 1$,
\begin{equation*}
W_{\omega }\text{ is }k\text{-hyponormal if and only if }\Theta \left( W_{\omega }\right) \text{ is }k\text{-hyponormal.}
\end{equation*}
\end{proposition}

\begin{proof}
$(\Longleftarrow )$ \ This is clear from the construction of $\Theta \left( W_{\omega }\right) $ and Figure \ref{Figure 6}(i).\newline
$(\Longrightarrow )$ \ For $k\geq 1$, we suppose that $W_{\omega }$ is a $k$-hyponormal weighted shift. \ Then, by Lemma \ref{k-hyponormal}, for all $u\geq 0$ the Hankel matrix
\begin{equation*}
H(k;u)\left( W_{\omega }\right) :=(\gamma _{u+i+j-2}\left( W_{\omega }\right) )_{i,j=1}^{k+1}
\end{equation*}
is positive semi-definite. \ By Lemma \ref{khypo}, a $2$-variable weighted shift $W_{(\alpha ,\beta )}$ is $k$-hyponormal if and only if
\begin{equation}
M_{\mathbf{u}}(k)\left( W_{(\alpha ,\beta )}\right) =(\gamma _{\mathbf{u}+(m,n)+(p,q)})_{0\leq m+n\leq k,\;0\leq p+q\leq k}\geq 0, \label{k-hy}
\end{equation}
for all $\mathbf{u}\equiv (u_{1},u_{2})\in \mathbb{Z}_{+}^{2}$. \ Thus, to prove that $\Theta \left( W_{\omega }\right)$ is $k$-hyponormal, it is enough to show that $M_{\mathbf{u}}(k)\geq 0$ for all $\mathbf{u}\in \mathbb{Z}_{+}^{2}$. \ Observe that the moments associated with $\Theta \left( W_{\omega }\right) $ are
\begin{equation}
\gamma _{\mathbf{u}}(\Theta \left( W_{\omega }\right) )=\gamma _{u_{1}+u_{2}}\left( W_{\omega }\right) \equiv \gamma _{u_{1}+u_{2}} \quad (\text{all } \mathbf{u}\in \mathbb{Z}_{+}^{2}).
\label{moment0}
\end{equation}
By a direct computation, we have
\begin{equation*}
M_{\mathbf{u}}(k)\left( \Theta \left( W_{\omega }\right) \right) =\left(
\begin{array}{ccccccc}
\gamma _{\mathbf{u}} & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}} & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}} & \cdots & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{2}} \\
\gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}} & \gamma _{\mathbf{u}+2\mathbf{\epsilon }_{1}} & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}+\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+(k+1)\mathbf{\epsilon }_{1}} & \cdots & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}+k\mathbf{\epsilon }_{2}} \\
\gamma _{\mathbf{u}+\mathbf{\epsilon }_{2}} & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}+\mathbf{\epsilon }_{2}} & \gamma _{\mathbf{u}+2\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}+\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+(k+1)\mathbf{\epsilon }_{2}} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}} & \gamma _{\mathbf{u}+(k+1)\mathbf{\epsilon }_{1}} & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}+\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+2k\mathbf{\epsilon }_{1}} & \cdots & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}+k\mathbf{\epsilon }_{2}} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\gamma _{\mathbf{u}+k\mathbf{\epsilon }_{2}} & \gamma _{\mathbf{u}+\mathbf{\epsilon }_{1}+k\mathbf{\epsilon }_{2}} & \gamma _{\mathbf{u}+(k+1)\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+k\mathbf{\epsilon }_{1}+k\mathbf{\epsilon }_{2}} & \cdots & \gamma _{\mathbf{u}+2k\mathbf{\epsilon }_{2}}
\end{array}
\right),
\end{equation*}
which, by (\ref{moment0}), equals
\begin{equation*}
J_{\mathbf{u}}(k):=\left(
\begin{array}{cccccc}
\gamma _{u_{1}+u_{2}} & \gamma _{u_{1}+u_{2}+1} & \cdots & \gamma _{u_{1}+u_{2}+k} & \cdots & \gamma _{u_{1}+u_{2}+k} \\
\gamma _{u_{1}+u_{2}+1} & \gamma _{u_{1}+u_{2}+2} & \cdots & \gamma _{u_{1}+u_{2}+k+1} & \cdots & \gamma _{u_{1}+u_{2}+k+1} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\gamma _{u_{1}+u_{2}+k} & \gamma _{u_{1}+u_{2}+k+1} & \cdots & \gamma _{u_{1}+u_{2}+2k} & \cdots & \gamma _{u_{1}+u_{2}+2k} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\gamma _{u_{1}+u_{2}+k} & \gamma _{u_{1}+u_{2}+k+1} & \cdots & \gamma _{u_{1}+u_{2}+2k} & \cdots & \gamma _{u_{1}+u_{2}+2k}
\end{array}
\right).
\end{equation*}
For $0\leq i\leq k$, we observe that the
\[
\left( \frac{i(i+1)}{2}+1\right) ^{th},\left( \frac{i(i+1)}{2}+2\right) ^{th},\cdots ,\left( \frac{i(i+1)}{2}+(i+1)\right) ^{th}
\]
rows and columns of $J_{\mathbf{u}}(k)$ are equal.
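For instance, when $k=1$ (a worked special case, spelled out for clarity), writing $n:=u_1+u_2$ we get
\[
M_{\mathbf{u}}(1)\left( \Theta \left( W_{\omega }\right) \right)=\left(
\begin{array}{ccc}
\gamma _{n} & \gamma _{n+1} & \gamma _{n+1} \\
\gamma _{n+1} & \gamma _{n+2} & \gamma _{n+2} \\
\gamma _{n+1} & \gamma _{n+2} & \gamma _{n+2}
\end{array}
\right),
\]
whose second and third rows (and columns) coincide, so that its positivity is equivalent to that of the Hankel matrix $\left(
\begin{array}{cc}
\gamma _{n} & \gamma _{n+1} \\
\gamma _{n+1} & \gamma _{n+2}
\end{array}
\right) = H(1;n)\left( W_{\omega }\right)$.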
\ Thus, a direct calculation (i.e., discarding some redundant rows and columns in the matrix $J_{\mathbf{u}}(k)$) shows that
\begin{equation}
J_{\mathbf{u}}(k)\geq 0\iff L_{\mathbf{u}}(k)\geq 0, \label{condition02}
\end{equation}
where
\begin{equation*}
L_{\mathbf{u}}(k):=\left(
\begin{array}{cccc}
\gamma _{u_{1}+u_{2}}\left( W_{\omega }\right) & \gamma _{u_{1}+u_{2}+1}\left( W_{\omega }\right) & \cdots & \gamma _{u_{1}+u_{2}+k}\left( W_{\omega }\right) \\
\gamma _{u_{1}+u_{2}+1}\left( W_{\omega }\right) & \gamma _{u_{1}+u_{2}+2}\left( W_{\omega }\right) & \cdots & \gamma _{u_{1}+u_{2}+k+1}\left( W_{\omega }\right) \\
\vdots & \vdots & \ddots & \vdots \\
\gamma _{u_{1}+u_{2}+k}\left( W_{\omega }\right) & \gamma _{u_{1}+u_{2}+k+1}\left( W_{\omega }\right) & \cdots & \gamma _{u_{1}+u_{2}+2k}\left( W_{\omega }\right)
\end{array}
\right).
\end{equation*}
Note that
\begin{equation}
L_{\left( u_{1},u_{2}\right) }(k)\geq 0\Longleftrightarrow H(k;u_{1}+u_{2})\left( W_{\omega }\right) \geq 0\text{.} \label{condition03}
\end{equation}
Thus, if $W_{\omega }$ is $k$-hyponormal, then $H(k;u)\left( W_{\omega }\right) \ge 0$ for all $u\geq 0$, which a fortiori implies that $M_{\mathbf{u}}(k)\left( \Theta \left( W_{\omega }\right) \right) \geq 0$ for all $\mathbf{u}\in \mathbb{Z}_{+}^{2}$, as desired. \ The proof is now complete.
\end{proof}

Now we have the following result.

\begin{theorem}
\label{pre-hypo} Consider the $2$-variable weighted shift $\Theta \left( W_{\omega }\right) \equiv (T_{1},T_{2})$ given by Figure \ref{Figure 6}(i). \ Suppose that $\Theta \left( W_{\omega }\right) $ is hyponormal. \ Then, the toral Aluthge transform $\widetilde{\Theta \left( W_{\omega }\right)} \equiv \Theta \left( \widetilde{W}_{\omega }\right) $ is also hyponormal.
\end{theorem}

In view of Lemma \ref{lem51}, we immediately get

\begin{corollary}
\ The conclusion of Theorem \ref{pre-hypo} holds in the class $\mathcal{A}_{TS}$.
\end{corollary}

\begin{proof}[Proof of Theorem \ref{pre-hypo}]
\ Since $\Theta \left( W_{\omega }\right) $ is hyponormal, by Proposition \ref{propscaling2}, $W_{\omega }$ is hyponormal. \ Thus, for any integer $n\geq 0$ we have $\omega _{n}\leq \omega _{n+1}$, which implies $\sqrt{\omega _{n}\omega _{n+1}}\leq \sqrt{\omega _{n+1}\omega _{n+2}}$; that is, $\widetilde{W}_{\omega }$ is also hyponormal. \ By Proposition \ref{propscaling2}, $\widetilde{\Theta \left( W_{\omega }\right)} $ is hyponormal, as desired.
\end{proof}

\begin{remark}
(i) \ We construct an example $\Theta \left( W_{\omega }\right) $ such that $\Theta \left( W_{\omega }\right) $ is not hyponormal, but the Aluthge transform $\widetilde{\Theta \left( W_{\omega }\right)} $ of $\Theta \left( W_{\omega }\right) $ is hyponormal. \ Consider the unilateral weighted shift introduced in Section \ref{Int}, that is, $W_{\omega }\equiv \mathrm{shift}\left( \frac{1}{2},2,\frac{1}{2},2,\frac{1}{2},2,\cdots \right)$. \ $W_{\omega }$ is not hyponormal, but its Aluthge transform $\widetilde{W}_{\omega }=U_{+}$ is subnormal. \ Thus, by Proposition \ref{propscaling2}, we have that $\Theta \left( W_{\omega }\right) $ is not hyponormal, but $\widetilde{\Theta \left( W_{\omega }\right)} $ is hyponormal, as desired. \newline
(ii) \ Using an argument entirely similar to that in (i) above, one can show that $2$-hyponormality is not preserved by the toral or spherical Aluthge transform (as in the single variable case).
\qed
\end{remark}

We can easily observe that if $W_{(\alpha ,\beta )}$ is of tensor form, that is, $(I\otimes W_{\sigma },W_{\tau }\otimes I)$, then its toral Aluthge transform $\widetilde{W}_{(\alpha ,\beta )}$ is also of tensor form; however, the spherical Aluthge transform $\widehat{W}_{(\alpha ,\beta )}$ is in general not of tensor form. \ In any event, hyponormality is invariant under both Aluthge transforms when $W_{(\alpha ,\beta )}$ is of tensor form. \ That the toral Aluthge transform preserves hyponormality for these $2$-variable weighted shifts is clear; we now establish invariance of hyponormality for the spherical Aluthge transform. \ Recall first that, by Proposition \ref{Prop1}, the spherical Aluthge transform is commuting.

\begin{proposition}
\label{tensor} Let $W_{(\alpha ,\beta )}$ be a $2$-variable weighted shift of \textit{tensor form} $(I\otimes W_{\sigma },W_{\tau }\otimes I)$, and assume that $W_{\sigma}$ and $W_{\tau}$ are hyponormal. \ Then $\widehat{W}_{(\alpha ,\beta )}$ is hyponormal.
\end{proposition}

\begin{proof}
Without loss of generality, we can assume that
$$
W_{\sigma }\equiv \mathrm{shift}(\sqrt{x},\sqrt{y},1,\cdots ) \; \; \textrm{ and } \; \; W_{\tau }\equiv \mathrm{shift}(\sqrt{a},\sqrt{b},1,\cdots),
$$
with $0<x<y<1$ and $0<a<b<1$. \ Also, it is enough to focus on the Six-Point Test at $(0,0)$ (cf. \cite[Theorem 6.1]{bridge}, \cite[Theorem 1.3]{CuYo1}); that is, we will check that $M_{\left( 0,0\right) }(1)\left( \widehat{W}_{(\alpha ,\beta )}\right) \geq 0$.

Observe that
\begin{equation*}
M_{\left( 0,0\right) }(1)\left( \widehat{W}_{(\alpha ,\beta )}\right) =\left(
\begin{array}{ccc}
1 & x\sqrt{\frac{a+y}{a+x}} & a\sqrt{\frac{b+x}{a+x}} \\
x\sqrt{\frac{a+y}{a+x}} & xy\sqrt{\frac{a+1}{a+x}} & ax\sqrt{\frac{b+y}{a+x}} \\
a\sqrt{\frac{b+x}{a+x}} & ax\sqrt{\frac{b+y}{a+x}} & ab\sqrt{\frac{x+1}{a+x}}
\end{array}
\right) \text{.}
\end{equation*}
Thus, we obtain
\begin{equation*}
M_{\left( 0,0\right) }(1)\left( \widehat{W}_{(\alpha ,\beta )}\right) \geq 0\Longleftrightarrow A\geq 0\text{,}
\end{equation*}
where
\begin{equation*}
A:=\left(
\begin{array}{ccc}
\sqrt{a+x} & x\sqrt{a+y} & a\sqrt{b+x} \\
x\sqrt{a+y} & xy\sqrt{a+1} & ax\sqrt{b+y} \\
a\sqrt{b+x} & ax\sqrt{b+y} & ab\sqrt{x+1}
\end{array}
\right) \text{.}
\end{equation*}
Now modify the $(2,2)$ and $(3,3)$ entries of $A$ and let
\begin{equation*}
B:=\left(
\begin{array}{ccc}
\sqrt{a+x} & x\sqrt{a+y} & a\sqrt{b+x} \\
x\sqrt{a+y} & xy\sqrt{a+x} & ax\sqrt{b+y} \\
a\sqrt{b+x} & ax\sqrt{b+y} & ab\sqrt{a+x}
\end{array}
\right) \text{.}
\end{equation*}
A direct calculation using the Nested Determinant Test shows that $A\geq B$ and that $B\geq 0$. \ Thus, we have
\begin{equation*}
M_{\left( 0,0\right) }(1)\left( \widehat{W}_{(\alpha ,\beta )}\right) \geq 0\text{,}
\end{equation*}
so that $\widehat{W}_{(\alpha ,\beta )}$ is hyponormal, as desired.
\end{proof}

\begin{remark}
\ One might be tempted to claim that subnormality is also preserved by the toral and spherical Aluthge transforms, within the class of $2$-variable weighted shifts of tensor form. \ However, this is not the case. \ Indeed, in \cite{Ex} and \cite{Exn}, G. Exner considered the weighted shift $W_{\sigma}$ with $3$-atomic Berger measure $\frac{1}{3}(\delta_{0}+\delta_{\frac{1}{2}}+\delta_{1})$ (studied in \cite{CPY}) and proved that the Aluthge transform of $W_{\sigma}$ is not subnormal.
\end{remark}

\section{\label{Sect4}Continuity Properties of the Aluthge Transforms}

We turn our attention to the continuity properties of the Aluthge transforms of a commuting pair. \ The following result is well known: for a single operator $T\in \mathcal{B}(\mathcal{H})$, the Aluthge transform map $T\rightarrow \widetilde{T}$ is $\left( \left\Vert \cdot \right\Vert ,\left\Vert \cdot \right\Vert \right)$-continuous on $\mathcal{B}(\mathcal{H})$ (\cite{DySc}). \ We want to extend this result to the multivariable case. \ First, we define the operator norm of $\mathbf{T}\equiv (T_1,T_2)$ as
\begin{equation}
\left\Vert \mathbf{T}\right\Vert :=\max \left\{ \left\Vert T_1 \right\Vert ,\left\Vert T_2 \right\Vert \right\}. \label{normdef}
\end{equation}

\begin{theorem}
\label{ContinuityC} The toral Aluthge transform map $\mathbf{T} \rightarrow \widetilde{\mathbf{T}}$ is $\left( \left\Vert \cdot \right\Vert ,\left\Vert \cdot \right\Vert \right)$-continuous on $\mathcal{B}(\mathcal{H})^{2}$.
\end{theorem}

\begin{proof}
Straightforward from the definition (\ref{normdef}) and the single-variable result of \cite{DySc}, applied in each coordinate.
\end{proof}

We turn our attention to the continuity properties in $\mathcal{B}(\mathcal{H})^{2}$ for the spherical Aluthge transform of a commuting pair. \ For this, we need a couple of auxiliary results, which can be proved by suitable adaptations of the results in \cite[Lemmas 2.1 and 2.2]{DySc}.

\begin{lemma}
\label{Re 4} \ Let $\mathbf{T}\equiv (T_{1},T_{2}) \equiv (V_1P,V_2P)$ be a pair of commuting operators, written in joint polar decomposition form, where $P=\sqrt{T_{1}^{\ast }T_{1}+T_{2}^{\ast }T_{2}}$. \ For $n\in \mathbb{N}$ and $t>0$, let $f_{n}\left( t\right) :=\sqrt{\max \left( \frac{1}{n},t\right) }$ and let $A_n:=f_n(P)$. \ Then: \newline
(i) \ $\left\Vert A_{n} \right\Vert \leq \max \left( n^{-\frac{1}{2}},\left\Vert P\right\Vert ^{\frac{1}{2}}\right) $;\newline
(ii) \ $\left\Vert P A_n^{-1}\right\Vert \leq \left\Vert P\right\Vert ^{\frac{1}{2}}$;\newline
(iii) \ $\left\Vert A_n -P^{\frac{1}{2}}\right\Vert \leq n^{-\frac{1}{2}}$;\newline
(iv) \ $\left\Vert PA_n^{-1}-P^{\frac{1}{2}}\right\Vert \leq \frac{1}{4} n^{-\frac{1}{2}}$;\newline
(v) \ For $i=1,2$, $\left\Vert A_n T_{i}A_n^{-1}-P^{\frac{1}{2}}V_{i} P^{\frac{1}{2}}\right\Vert \leq \frac{5}{4} n^{-\frac{1}{2}}\left\Vert T_{i}\right\Vert ^{\frac{1}{2}}$.
\end{lemma}

\begin{lemma}
\label{Re 5} \ Given $R\geq 1$ and $\epsilon >0$, there are real polynomials $p$ and $q$ such that for every commuting pair $\mathbf{T}\equiv (T_{1},T_{2})$ with $\left\Vert T_{i}\right\Vert \leq R$ $\left( i=1,2\right) $, we have
\begin{equation*}
\left\Vert P^{\frac{1}{2}}V_{i}P^{\frac{1}{2}}-p\left( T_{1}^{\ast }T_{1}+T_{2}^{\ast }T_{2}\right) T_{i}q\left( T_{1}^{\ast }T_{1}+T_{2}^{\ast }T_{2}\right) \right\Vert <\epsilon \text{.}
\end{equation*}
\end{lemma}

In the statement below, $\left\Vert \cdot \right\Vert $ refers to the operator norm topology on $\mathcal{B}(\mathcal{H})^2$ (see (\ref{normdef})).

\begin{theorem}
\label{ContinuityP} The spherical Aluthge transform
$$
(T_{1},T_{2})\rightarrow \widehat{(T_{1},T_{2})}
$$
is $\left( \left\Vert \cdot \right\Vert ,\left\Vert \cdot \right\Vert \right)$-continuous on $\mathcal{B}(\mathcal{H})^{2}$.
\end{theorem}

\begin{proof}
Observe first that $\left\Vert V_i \right\Vert \le 1$ for $i=1,2$, as follows from the inequality
$$
\left\Vert V_iPx \right\Vert^2 =\left\Vert T_ix \right\Vert^2 = \langle T_i^*T_ix,x\rangle \le \langle P^2x,x\rangle = \left\Vert Px \right\Vert^2.
$$
The proof is now an easy consequence of the proof of \cite[Theorem 2.3]{DySc}, when one uses Lemma \ref{Re 5} instead of \cite[Lemma 2.2]{DySc}.
\end{proof}

\section{\label{Sect5}Spectral Properties of the Aluthge Transforms}

In this section, we study whether the multivariable Aluthge transforms preserve the Taylor spectrum and Taylor essential spectrum, when $W_{(\alpha ,\beta )}$ is in the class $\mathcal{TC}$ of $2$-variable weighted shifts with core of tensor form; this is a large nontrivial class, which has been previously studied in [15--20], [24--28] and [47--49].

We begin by looking at the toral Aluthge transform. \ By Proposition \ref{commuting1} and Remark \ref{Re 1}, we note that the weight diagram of $W_{(\alpha ,\beta )}$ is as in Figure \ref{Figure 2ex}(i), provided the toral Aluthge transform is commutative. \ We first address the Taylor spectrum.

\begin{lemma}
\label{lem1} (i) (\cite{Cu1}, \cite{Cu3}) \ Let $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ be Hilbert spaces, and let $A_{i}\in \mathcal{B}(\mathcal{H}_{1})$, $C_{i}\in \mathcal{B}(\mathcal{H}_{2})$ and $B_{i}\in \mathcal{B}(\mathcal{H}_{1},\mathcal{H}_{2})$ $(i=1,\cdots ,n)$ be such that
\begin{equation*}
\left(
\begin{array}{cc}
\mathbf{A} & 0 \\
\mathbf{B} & \mathbf{C}
\end{array}
\right) :=\left( \left(
\begin{array}{cc}
A_{1} & 0 \\
B_{1} & C_{1}
\end{array}
\right) ,\ldots ,\left(
\begin{array}{cc}
A_{n} & 0 \\
B_{n} & C_{n}
\end{array}
\right) \right)
\end{equation*}
is commuting. \ Assume that $\mathbf{A}$ and $\left(
\begin{array}{cc}
\mathbf{A} & 0 \\
\mathbf{B} & \mathbf{C}
\end{array}
\right) $ are Taylor invertible. \ Then, $\mathbf{C}$ is Taylor invertible. \ Furthermore, if $\mathbf{A}$ and $\mathbf{C}$ are Taylor invertible, then $\left(
\begin{array}{cc}
\mathbf{A} & 0 \\
\mathbf{B} & \mathbf{C}
\end{array}
\right) $ is Taylor invertible.\newline
(ii) (\cite{CuFi}) \ For $\mathbf{A}$ and $\mathbf{B}$ two commuting $n$-tuples of bounded operators on Hilbert space, we have
\begin{equation*}
\sigma _{T}(I \otimes \mathbf{A},\mathbf{B} \otimes I)=\sigma _{T}(\mathbf{A})\times \sigma _{T}(\mathbf{B})
\end{equation*}
and
\begin{equation*}
\sigma _{Te}(I \otimes \mathbf{A}, \mathbf{B} \otimes I)=\left( \sigma _{Te}\left( \mathbf{A}\right) \times \sigma _{T}\left( \mathbf{B}\right) \right) \cup \left( \sigma _{T}\left( \mathbf{A}\right) \times \sigma _{Te}\left( \mathbf{B}\right) \right) \text{.}
\end{equation*}
\end{lemma}

\ To apply Lemma \ref{lem1}, we first let
\begin{equation*}
\begin{tabular}{l}
$W_{\omega }:=\mathrm{shift}\left( \omega _{0},\omega _{1},\cdots \right) $, $W_{\tau }:=\mathrm{shift}\left( \tau _{0},\tau _{1},\cdots \right) $, \\
$W_{\omega ^{(0)}}:=\mathrm{shift}\left( x_{0},x_{1},\omega _{1},\omega _{2},\cdots \right) $, $W_{\omega ^{(1)}}:=\mathrm{shift}\left( a,\omega _{0},\omega _{1},\cdots \right) $, \\
$W_{\omega ^{(2)}}:=\mathrm{shift}\left( a\frac{\omega _{0}}{x_{1}},\omega _{0},\omega _{1},\cdots \right) $, $I:=\mathrm{diag}\left( 1,1,\cdots \right) $ (the identity operator), \\
$D_{1}:=\mathrm{diag}\left( y_{0},a\frac{y_{0}}{x_{0}},a\frac{\omega _{0}y_{0}}{x_{0}x_{1}},a\frac{\omega _{0}y_{0}}{x_{0}x_{1}},\cdots \right) $, and $D_{2}:=\mathrm{diag}\left( \frac{x_{1}\tau _{0}}{\omega _{0}},\tau _{0},\tau _{0},\cdots \right) $.
\end{tabular}
\end{equation*}

\begin{theorem}
\label{thmtaylor} Consider a commuting $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$ with weight diagram given by Figure \ref{Figure 2ex}(i). \ Assume also that $T_1$ and $T_2$ are hyponormal.
\ Then
\begin{equation}
\sigma _{T}\left( W_{(\alpha ,\beta )}\right) =\left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}
\;\;\text{ and }\;\;
\sigma _{Te}\left( W_{(\alpha ,\beta )}\right) =\left( \left\Vert W_{\omega }\right\Vert \cdot \mathbb{T}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\right) \cup \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \mathbb{T}\right). \label{1}
\end{equation}
Here $\overline{\mathbb{D}}$ denotes the closure of the open unit disk $\mathbb{D}$ and $\mathbb{T}$ the unit circle.
\end{theorem}

\begin{proof}
We represent $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$ by block matrices relative to the decomposition
\begin{equation}
\ell ^{2}(\mathbb{Z}_{+}^{2})\cong \ell ^{2}(\mathbb{Z}_{+})\otimes \ell ^{2}(\mathbb{Z}_{+})\cong (\ell ^{2}(\mathbb{Z}_{+})\oplus \ell ^{2}(\mathbb{Z}_{+}))\oplus (\ell ^{2}(\mathbb{Z}_{+})\oplus \cdots) \text{.} \label{decom}
\end{equation}
\ Then, we obtain
\begin{equation*}
T_{1}\equiv \left(
\begin{array}{cccc}
W_{\omega ^{(0)}} & & \vdots & \\
& W_{\omega ^{(1)}} & \vdots & \\
\cdots & \cdots & & \cdots \\
& & \vdots & R_{1}
\end{array}
\right) \text{ and }T_{2}\equiv \left(
\begin{array}{cccc}
0 & & \vdots & \\
D_{1} & 0 & \vdots & \\
\cdots & \cdots & & \cdots \\
& D_{2} & \vdots & R_{2}
\end{array}
\right) \text{,}
\end{equation*}
where $R_{1}:=\left(
\begin{array}{ccc}
W_{\omega ^{(2)}} & & \\
& W_{\omega ^{(2)}} & \\
& & \ddots
\end{array}
\right) $ and $R_{2}:=\left(
\begin{array}{cccc}
0 & & & \\
\tau _{1}I & 0 & & \\
& \tau _{2}I & 0 & \\
& & \ddots & \ddots
\end{array}
\right) $. \newline
We first compute the Taylor spectrum $\sigma _{T}(W_{(\alpha ,\beta )})$ of $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$.
\ Note the following:
\begin{equation}
\left\Vert W_{\omega }\right\Vert =\left\Vert W_{\omega ^{(0)}}\right\Vert =\left\Vert W_{\omega ^{(1)}}\right\Vert =\left\Vert W_{\omega ^{(2)}}\right\Vert
\;\;\text{ and }\;\;
\left\Vert W_{\tau }\right\Vert =\left\Vert \mathrm{shift}\left( \tau _{1},\tau _{2},\cdots \right) \right\Vert =\left\Vert \mathrm{shift}\left( \frac{\omega _{0}y_{0}}{x_{0}x_{1}},\tau _{0},\tau _{1},\cdots \right) \right\Vert \text{.} \label{norm}
\end{equation}
Thus, by Lemma \ref{lem1}(i) and (\ref{norm}), we have
\begin{equation}
\begin{tabular}{l}
$\sigma _{T}(T_{1},T_{2})$ \\
$\subseteq \sigma _{T}\left( \left(
\begin{array}{cc}
W_{\omega ^{(0)}} & 0 \\
0 & W_{\omega ^{(1)}}
\end{array}
\right) ,\left(
\begin{array}{cc}
0 & 0 \\
D_{1} & 0
\end{array}
\right) \right) \cup \sigma _{T}\left( I \otimes W_{\omega ^{(2)}}, W_{\tau} \otimes I \right) $ \\
$\subseteq \sigma _{T}\left( W_{\omega ^{(0)}},0\right) \cup \sigma _{T}\left( W_{\omega ^{(1)}},0\right) \cup \sigma _{T}\left( I \otimes W_{\omega ^{(2)}}, W_{\tau} \otimes I \right) $ \\
$=\left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \{0\}\right) \cup \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\right) =\left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}.$
\end{tabular}
\label{2}
\end{equation}
By Lemma \ref{lem1}(ii) and (\ref{norm}), we have
\begin{equation}
\begin{tabular}{l}
$\sigma _{T}\left( I \otimes W_{\omega ^{(2)}}, W_{\tau} \otimes I \right) \subseteq \sigma _{T}\left( \left(
\begin{array}{cc}
W_{\omega ^{(0)}} & 0 \\
0 & W_{\omega ^{(1)}}
\end{array}
\right) ,\left(
\begin{array}{cc}
0 & 0 \\
D_{1} & 0
\end{array}
\right) \right) \cup \sigma _{T}(T_{1},T_{2})$ \\
$\Rightarrow \sigma _{T}\left( I \otimes W_{\omega ^{(2)}}, W_{\tau} \otimes I \right) \subseteq \sigma _{T}\left( W_{\omega ^{(0)}},0\right) \cup \sigma _{T}\left( W_{\omega ^{(1)}},0\right) \cup \sigma _{T}(T_{1},T_{2})$ \\
$\Rightarrow \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\subseteq \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \{0\}\right) \cup \sigma _{T}(T_{1},T_{2})$ \\
$\Rightarrow \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\right) $ $\backslash $ $\left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \{0\}\right) \subseteq \sigma _{T}(T_{1},T_{2}).$
\end{tabular}
\label{3}
\end{equation}
Since the Taylor spectrum $\sigma _{T}(T_{1},T_{2})$ is a closed subset of $\mathbb{C}\times \mathbb{C}$, by (\ref{2}) and (\ref{3}) we get
\begin{equation}
\sigma _{T}\left( W_{(\alpha ,\beta )}\right) =\left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\text{.} \label{4}
\end{equation}
We next consider the Taylor essential spectrum $\sigma _{Te}(T_{1},T_{2})$ of $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$. \ Observe that $W_{\omega ^{(2)}}$ is a compact perturbation of $W_{\omega ^{(1)}}$ and of $W_{\omega ^{(0)}}$. \ Also, $\frac{\omega _{0}y_{0}}{x_{0}x_{1}}I$ and $\tau _{0}I$ are compact perturbations of $D_{1}$ and $D_{2}$, respectively.
\ Thus, we have
\begin{equation}
\sigma _{Te}(T_{1},T_{2})=\sigma _{Te}\left( I \otimes W_{\omega ^{(2)}} , \mathrm{shift}\left( \frac{\omega _{0}y_{0}}{x_{0}x_{1}},\tau _{0},\tau _{1},\cdots \right) \otimes I \right). \label{5}
\end{equation}
By Lemma \ref{lem1}(ii) and (\ref{norm}), we note that
\begin{equation}
\begin{tabular}{l}
$\sigma _{Te}\left( I \otimes W_{\omega ^{(2)}}, \mathrm{shift}\left( \frac{\omega _{0}y_{0}}{x_{0}x_{1}},\tau _{0},\tau _{1},\cdots \right) \otimes I \right) $ \\
$=\left( \sigma _{Te}\left( W_{\omega ^{(2)}}\right) \times \sigma _{T}\left( W_{\tau }\right) \right) \cup \left( \sigma _{T}\left( W_{\omega ^{(2)}}\right) \times \sigma _{Te}\left( W_{\tau }\right) \right) $ \\
$=\left( \left\Vert W_{\omega }\right\Vert \cdot \mathbb{T}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\right) \cup \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \mathbb{T}\right). $
\end{tabular}
\label{6}
\end{equation}
Therefore, our proof is now complete.
\end{proof}

\begin{theorem}
\label{thmtaylor-Alu} (Case of the toral Aluthge transform) \ Consider a commuting $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$ with weight diagram given by Figure \ref{Figure 2ex}(i). \ Assume also that $T_1$ and $T_2$ are hyponormal. \ Then
$$
\sigma _{T}\left( \widetilde{W}_{(\alpha ,\beta )}\right) =\left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}
$$
and
$$
\sigma _{Te}\left( \widetilde{W}_{(\alpha ,\beta )}\right) = \left( \left\Vert W_{\omega }\right\Vert \cdot \mathbb{T}\times \left\Vert W_{\tau }\right\Vert \cdot \overline{\mathbb{D}}\right) \cup \left( \left\Vert W_{\omega }\right\Vert \cdot \overline{\mathbb{D}}\times \left\Vert W_{\tau }\right\Vert \cdot \mathbb{T}\right) .
$$
\end{theorem}

\begin{proof}
Since the structure of the weight diagram for $\widetilde{W}_{(\alpha ,\beta )}\equiv \left( \widetilde{T}_{1},\widetilde{T}_{2}\right) $ is entirely similar to that of $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$, the result follows by imitating the proof of Theorem \ref{thmtaylor}.
\end{proof}

By the results of Theorems \ref{thmtaylor} and \ref{thmtaylor-Alu}, we easily obtain the following result.

\begin{corollary}
\label{Cor1} Consider a commuting $2$-variable weighted shift $W_{(\alpha ,\beta )}\equiv (T_{1},T_{2})$ with weight diagram given by Figure \ref{Figure 2ex}(i). \ Assume also that $T_1$ and $T_2$ are hyponormal. \ Then
$$
\sigma _{T}\left( \widetilde{W}_{(\alpha ,\beta )}\right) =\sigma _{T}\left( {W}_{(\alpha ,\beta )}\right)
$$
and
$$
\sigma _{Te}\left( \widetilde{W}_{(\alpha ,\beta )}\right) =\sigma _{Te}\left({W}_{(\alpha ,\beta )}\right).
$$
\end{corollary}

\begin{remark}
\label{Re 3} (i) \ We note that the commutativity property is required to study the Taylor spectrum (resp. Taylor essential spectrum) of $\widetilde{W}_{(\alpha ,\beta )}$; recall that, if $W_{(\alpha ,\beta )}\in \mathcal{TC}$, then $\widetilde{W}_{(\alpha ,\beta )}$ is commuting. \newline
(ii) \ By Corollary \ref{Cor1}, we can see that the Taylor spectrum (resp. Taylor essential spectrum) of $\widetilde{W}_{(\alpha ,\beta )}$ equals that of $W_{(\alpha ,\beta )}$ when $W_{(\alpha ,\beta )}$ is commuting and $T_1$ and $T_2$ are hyponormal. \ Thus, Corollary \ref{Cor1} gives a partial solution to Problem \ref{problem 3}.
\end{remark}

We now turn our attention to the case of the spherical Aluthge transform.
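The obstruction that drives the analysis in this case is easy to observe numerically. \ In the following minimal sketch (in Python; the particular weight sequences are illustrative assumptions, not part of any proof), we take a tensor-form pair with both $\sigma$ and $\tau$ non-constant and verify that the weights $\widehat{\alpha}_{(k_1,k_2)}$ of the spherical Aluthge transform genuinely depend on $k_2$, so that $\widehat{W}_{(\alpha,\beta)}$ fails to be of tensor form.
\begin{verbatim}
import numpy as np

k = np.arange(12)
sigma = np.sqrt((k + 1) / (k + 2))   # non-constant weights for W_sigma
tau   = np.sqrt((k + 1) / (k + 3))   # non-constant weights for W_tau

def alpha_hat(k1, k2):
    # spherical Aluthge transform weight at (k1, k2)
    num = sigma[k1 + 1]**2 + tau[k2]**2   # weights at k + eps_1
    den = sigma[k1]**2 + tau[k2]**2       # weights at k
    return sigma[k1] * (num / den) ** 0.25

print(alpha_hat(0, 0), alpha_hat(0, 1))  # different: not of tensor form
\end{verbatim}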
\ We need a preliminary result.

\begin{proposition}
\label{proptensor} Let $W_{(\alpha ,\beta )} \in \mathcal{TC}$, with core $c(W_{(\alpha ,\beta )})=(I \otimes W_{\sigma}, W_{\tau} \otimes I)$. \ Then $\widehat{W}_{(\alpha ,\beta )}\in \mathcal{TC}$ if and only if $c(W_{(\alpha ,\beta )})=(rI \otimes U_+, W_{\tau} \otimes I)$ or $c(W_{(\alpha ,\beta )})=(I \otimes W_{\sigma}, U_+ \otimes sI)$ for some $r,s>0$.
\end{proposition}

\begin{proof}
By Lemma \ref{PolarAlu}, we recall that
\begin{equation}
\widehat{T}_{1}e_{\mathbf{k}}=\alpha _{\mathbf{k}}\frac{(\alpha _{\mathbf{k}+\mathbf{\epsilon }_{1}}^{2}+\beta _{\mathbf{k}+\mathbf{\epsilon }_{1}}^{2})^{1/4}}{(\alpha _{\mathbf{k}}^{2}+\beta _{\mathbf{k}}^{2})^{1/4}}e_{\mathbf{k}+\mathbf{\epsilon }_{1}}\text{ and }\widehat{T}_{2}e_{\mathbf{k}}=\beta _{\mathbf{k}}\frac{(\alpha _{\mathbf{k}+\mathbf{\epsilon }_{2}}^{2}+\beta _{\mathbf{k}+\mathbf{\epsilon }_{2}}^{2})^{1/4}}{(\alpha _{\mathbf{k}}^{2}+\beta _{\mathbf{k}}^{2})^{1/4}}e_{\mathbf{k}+\mathbf{\epsilon }_{2}}. \label{Weight of Polar}
\end{equation}
Since $W_{(\alpha ,\beta )}\in \mathcal{TC}$, and since $\widehat{W}_{(\alpha ,\beta )}$ leaves the subspace $\mathcal{M} \cap \mathcal{N}$ invariant, it readily follows that the spherical Aluthge transform of $c(W_{(\alpha ,\beta )})$ is $c(\widehat{W}_{(\alpha ,\beta )})$. \ As a result, we may assume, without loss of generality, that $W_{(\alpha ,\beta )}\equiv (I \otimes W_{\sigma },W_{\tau }\otimes I)$, where $W_{\sigma }$ and $W_{\tau }$ are unilateral weighted shifts; thus $\alpha _{\mathbf{k}}=\sigma _{k_{1}}$ and $\beta _{\mathbf{k}}=\tau _{k_{2}}$, and tensor form for $\widehat{W}_{(\alpha ,\beta )}$ means precisely that $\widehat{\alpha}_{\mathbf{k}}$ is independent of $k_2$ and $\widehat{\beta}_{\mathbf{k}}$ is independent of $k_1$. \ Now, by (\ref{Weight of Polar}), for all $\mathbf{k}\equiv (k_{1},k_{2})\in \mathbb{Z}_{+}^{2}$ we have
\begin{eqnarray*}
\widehat{\alpha}_{\mathbf{k}}=\widehat{\alpha}_{\mathbf{k}+\mathbf{\epsilon }_{2}}
&\Longleftrightarrow &\frac{\sigma _{k_{1}+1}^{2}+\tau _{k_{2}}^{2}}{\sigma _{k_{1}}^{2}+\tau _{k_{2}}^{2}}=\frac{\sigma _{k_{1}+1}^{2}+\tau _{k_{2}+1}^{2}}{\sigma _{k_{1}}^{2}+\tau _{k_{2}+1}^{2}}
\Longleftrightarrow \widehat{\beta}_{\mathbf{k}}=\widehat{\beta}_{\mathbf{k}+\mathbf{\epsilon }_{1}} \\
&\Longleftrightarrow &\left( \tau _{k_{2}+1}^{2}-\tau _{k_{2}}^{2}\right) \left( \sigma _{k_{1}+1}^{2}-\sigma _{k_{1}}^{2}\right) =0
\Longleftrightarrow \tau_{k_{2}+1}=\tau _{k_{2}} \; \; \textrm{or} \; \; \sigma _{k_{1}+1}=\sigma _{k_{1}}.
\end{eqnarray*}
If there exists $k_2 \in \mathbb{Z}_+$ such that $\tau_{k_{2}+1} \ne \tau _{k_{2}}$, then for all $k_1 \in \mathbb{Z}_+$ we must have $\sigma_{k_1+1}=\sigma_{k_1}$; that is, $\sigma_{k_1}=\sigma_0$ for all $k_1$. \ On the other hand, if $\tau_{k_{2}+1}=\tau _{k_{2}}$ for all $k_2 \in \mathbb{Z}_+$, then $\tau_{k_2}=\tau_0$ for all $k_2$. \ This completes the proof.
\end{proof}

We are now ready to state

\begin{theorem}
(Case of the spherical Aluthge transform) \label{thm87} Let $W_{(\alpha ,\beta )}\equiv \left( T_{1},T_{2}\right) \in \mathcal{TC}$ be as in Proposition \ref{proptensor}. \ Assume also that $T_1$ and $T_2$ are hyponormal. \ Then
\begin{equation*}
\sigma _{T}\left(\widehat{W}_{(\alpha ,\beta )}\right) =\sigma _{T}\left(W_{(\alpha ,\beta )}\right) \text{ and }\sigma _{Te}\left(\widehat{W}_{(\alpha ,\beta )}\right) =\sigma _{Te}\left(W_{(\alpha ,\beta)}\right) \text{.}
\end{equation*}
\end{theorem}

\begin{proof}
By Proposition \ref{proptensor}, without loss of generality we may assume
$$
c\left( W_{(\alpha ,\beta )}\right) \equiv \left( rI\otimes U_{+},W_{\tau }\otimes I\right),
$$
for some $r>0$.
\ Recall that
\begin{equation}
\widehat{T}_{1}e_{\mathbf{k}}=\alpha _{\mathbf{k}}\frac{(\alpha _{\mathbf{k}+\mathbf{\epsilon }_{1}}^{2}+\beta _{\mathbf{k}+\mathbf{\epsilon }_{1}}^{2})^{1/4}}{(\alpha _{\mathbf{k}}^{2}+\beta _{\mathbf{k}}^{2})^{1/4}}e_{\mathbf{k}+\mathbf{\epsilon }_{1}}\text{ and }\widehat{T}_{2}e_{\mathbf{k}}=\beta _{\mathbf{k}}\frac{(\alpha _{\mathbf{k}+\mathbf{\epsilon }_{2}}^{2}+\beta _{\mathbf{k}+\mathbf{\epsilon }_{2}}^{2})^{1/4}}{(\alpha _{\mathbf{k}}^{2}+\beta _{\mathbf{k}}^{2})^{1/4}}e_{\mathbf{k}+\mathbf{\epsilon }_{2}}\text{.} \label{Weight of Polar1}
\end{equation}
By (\ref{Weight of Polar1}), we obtain
\begin{equation*}
c\left( \widehat{W}_{(\alpha ,\beta )}\right) \equiv \left( rI\otimes U_{+},W_{\tau }\otimes I\right) \circ \left( I\otimes U_{+},W_{\rho}\otimes I\right) \text{,}
\end{equation*}
where $W_{\rho}\equiv \mathrm{shift}\left( \left( \frac{r^{2}+\tau _{1}^{2}}{r^{2}+\tau _{0}^{2}}\right) ^{\frac{1}{4}},\left( \frac{r^{2}+\tau _{2}^{2}}{r^{2}+\tau _{1}^{2}}\right) ^{\frac{1}{4}},\left( \frac{r^{2}+\tau _{3}^{2}}{r^{2}+\tau _{2}^{2}}\right) ^{\frac{1}{4}},\cdots \right) $ with $\left\Vert W_{\rho}\right\Vert =1$, and $\circ $ denotes the Schur product.

Note that
\begin{equation*}
\begin{tabular}{l}
$\mathrm{shift}\left( \widehat{\alpha }_{\left( 0,0\right) },\widehat{\alpha }_{\left( 1,0\right) },\cdots \right) $ \\
$=\mathrm{shift}\left( \alpha _{\left( 0,0\right) },\alpha _{\left( 1,0\right) },\cdots \right) \circ \mathrm{shift}\left( \left( \frac{\alpha _{\left( 1,0\right) }^{2}+\beta _{\left( 1,0\right) }^{2}}{\alpha _{\left( 0,0\right) }^{2}+\beta _{\left( 0,0\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 2,0\right) }^{2}+\beta _{\left( 2,0\right) }^{2}}{\alpha _{\left( 1,0\right) }^{2}+\beta _{\left( 1,0\right) }^{2}}\right) ^{\frac{1}{4}},\cdots \right) $
\end{tabular}
\end{equation*}
and
\begin{equation*}
\begin{tabular}{l}
$\mathrm{shift}\left( \widehat{\beta }_{\left( 0,0\right) },\widehat{\beta }_{\left( 0,1\right) },\cdots \right) $ \\
$=\mathrm{shift}\left( \beta _{\left( 0,0\right) },\beta _{\left( 0,1\right) },\cdots \right) \circ \mathrm{shift}\left( \left( \frac{\alpha _{\left( 0,1\right) }^{2}+\beta _{\left( 0,1\right) }^{2}}{\alpha _{\left( 0,0\right) }^{2}+\beta _{\left( 0,0\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 0,2\right) }^{2}+\beta _{\left( 0,2\right) }^{2}}{\alpha _{\left( 0,1\right) }^{2}+\beta _{\left( 0,1\right) }^{2}}\right) ^{\frac{1}{4}},\cdots \right) .$
\end{tabular}
\end{equation*}
Since
\begin{equation*}
\begin{tabular}{l}
$\left\Vert \mathrm{shift}\left( \left( \frac{\alpha _{\left( 1,0\right) }^{2}+\beta _{\left( 1,0\right) }^{2}}{\alpha _{\left( 0,0\right) }^{2}+\beta _{\left( 0,0\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 2,0\right) }^{2}+\beta _{\left( 2,0\right) }^{2}}{\alpha _{\left( 1,0\right) }^{2}+\beta _{\left( 1,0\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 3,0\right) }^{2}+\beta _{\left( 3,0\right) }^{2}}{\alpha _{\left( 2,0\right) }^{2}+\beta _{\left( 2,0\right) }^{2}}\right) ^{\frac{1}{4}},\cdots \right) \right\Vert $ \\
$=\left\Vert \mathrm{shift}\left( \left( \frac{\alpha _{\left( 0,1\right) }^{2}+\beta _{\left( 0,1\right) }^{2}}{\alpha _{\left( 0,0\right) }^{2}+\beta _{\left( 0,0\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 0,2\right) }^{2}+\beta _{\left( 0,2\right) }^{2}}{\alpha _{\left( 0,1\right) }^{2}+\beta _{\left( 0,1\right) }^{2}}\right) ^{\frac{1}{4}},\left( \frac{\alpha _{\left( 0,3\right) }^{2}+\beta _{\left( 0,3\right) }^{2}}{\alpha _{\left( 0,2\right) }^{2}+\beta _{\left( 0,2\right) }^{2}}\right) ^{\frac{1}{4}},\cdots \right) \right\Vert =1,$
\end{tabular}
\end{equation*}
we have
\begin{equation}
\label{eq199}
\left\Vert \mathrm{shift}\left( \widehat{\alpha }_{\left( 0,0\right) },\widehat{\alpha }_{\left( 1,0\right) },\widehat{\alpha }_{\left( 2,0\right) },\cdots \right) \right\Vert =\left\Vert \mathrm{shift}\left( \alpha _{\left( 0,0\right) },\alpha _{\left( 1,0\right) },\alpha _{\left( 2,0\right) },\cdots \right) \right\Vert
\end{equation}
and
\begin{equation}
\label{eq200}
\left\Vert \mathrm{shift}\left( \widehat{\beta }_{\left( 0,0\right) },\widehat{\beta }_{\left( 0,1\right) },\widehat{\beta }_{\left( 0,2\right) },\cdots \right) \right\Vert =\left\Vert \mathrm{shift}\left( \beta _{\left( 0,0\right) },\beta _{\left( 0,1\right) },\beta _{\left( 0,2\right) },\cdots \right) \right\Vert \text{.}
\end{equation}
Thus, by the method used in the proof of Theorem \ref{thmtaylor}, we have
$$
\sigma _{T}\left(\widehat{W}_{(\alpha ,\beta )}\right) =\sigma _{T}\left(W_{(\alpha ,\beta )}\right).
$$
Observe now that (\ref{eq199}) and (\ref{eq200}) also show that the weighted shifts associated with the $0$-th row and the $0$-th column of $\widehat{W}_{(\alpha ,\beta )}$ are compact perturbations of the corresponding weighted shifts for $W_{(\alpha ,\beta )}$. \ As a result, it is straightforward to conclude that
$$
\sigma _{Te}\left(\widehat{W}_{(\alpha ,\beta )}\right) =\sigma _{Te}\left(W_{(\alpha ,\beta)}\right).
$$
This completes the proof of the theorem.
\end{proof}

In view of Corollary \ref{Cor1}, Remark \ref{Re 3} and Theorem \ref{thm87}, it is natural to formulate the following

\begin{conjecture}
Let $W_{(\alpha ,\beta )}$ be a commuting $2$-variable weighted shift, whose toral and spherical Aluthge transforms are also commuting. \ Then $W_{(\alpha ,\beta )}$, $\widetilde{W}_{(\alpha ,\beta )}$ and $\widehat{W}_{(\alpha ,\beta )}$ all have the same Taylor spectrum and the same Taylor essential spectrum.
\end{conjecture}

\section{Aluthge Transforms of the Drury-Arveson Shift}

In this section we consider the Drury-Arveson $2$-variable weighted shift $DA$, whose weight sequences are given by
\begin{eqnarray}
\alpha_{(k_1,k_2)}&:=&\sqrt{\frac{k_1+1}{k_1+k_2+1}} \; \; (\textrm{for all } k_1,k_2 \ge 0), \\
\beta_{(k_1,k_2)}&:=&\alpha_{(k_2,k_1)} \; \; (\textrm{for all } k_1,k_2 \ge 0).
\end{eqnarray}
If we denote the successive rows of the weight diagram of $DA$ by $R_1, R_2, \cdots $, it is easy to see that $R_1=A_1$, the (unweighted) unilateral shift, $R_2=A_2$, the Bergman shift, and, more generally, $R_j=A_j$, the Agler $j$-th shift ($j \ge 2$); in particular, all rows and columns are subnormal weighted shifts. \ For $j \ge 2$, the Berger measure of $A_j$ is $(j-1)s^{j-2}ds$ on the closed interval $[0,1]$, and therefore all the Berger measures associated with the rows $R_2, R_3, R_4, \cdots$ are mutually absolutely continuous, a necessary condition for the subnormality of $DA$. \ However, the Berger measure of $R_2$ ($ds$ on $[0,1]$) is not absolutely continuous with respect to $\delta_1$ (which is the Berger measure of $R_1=U_+$), and therefore $DA$ cannot be subnormal (by \cite{CuYo2}). \ In fact, a stronger result is true: $DA$ is not jointly hyponormal, as a simple application of the Six-Point Test at $(0,0)$ reveals.

It is also known that $DA \equiv(T_1,T_2)$ is essentially normal; in fact, the commutators $[T_j^*,T_i] \; (i,j=1,2)$ are in the Schatten $p$-class for $p>2$, as shown by W.A. Arveson \cite{Arv}. \ In the sequel, we prove compactness of the commutators using the homogeneous decomposition of $\ell^2(\mathbb{Z}_+^2)$; this will eventually help us prove that the Aluthge transforms of $DA$ are compact perturbations of $DA$. \ Let $\mathcal{P}_n$ denote the finite dimensional vector space generated by the orthonormal basis vectors $e_{(n,0)},e_{(n-1,1)},\cdots,e_{(0,n)}$; it is easy to see that $\mathcal{P}_n$ is invariant under the action of the self-commutators $[T_i^*,T_i]$ and the cross-commutators $[T_j^*,T_i]$ ($i,j=1,2$). \ A simple calculation reveals that
$$
[T_1^*,T_1]e_{(0,k_2)}=\frac{1}{k_2+1}e_{(0,k_2)}
$$
and
$$
[T_1^*,T_1]e_{(k_1,k_2)}=\frac{k_2}{(k_1+k_2)(k_1+k_2+1)}e_{(k_1,k_2)},
$$
so that in $\mathcal{P}_n$ we have
$$
\left\|[T_1^*,T_1]e_{(k_1,n-k_1)}\right\|=\frac{n-k_1}{n(n+1)}.
$$
It follows that the norm of $[T_1^*,T_1]$ restricted to $\mathcal{P}_n$ is bounded by $\frac{1}{n+1}$. \ Since $[T_1^*,T_1]$ is unitarily equivalent to the orthogonal direct sum of its restrictions to the subspaces $\mathcal{P}_n$, we easily conclude that $[T_1^*,T_1]$ is compact. \ The calculation for $[T_2^*,T_2]$ is identical. \ For $[T_2^*,T_1]$, one again first computes the action on a generic basis vector in $\mathcal{P}_n$, that is,
$$
[T_2^*,T_1]e_{(n,0)}=0
$$
and
$$
[T_2^*,T_1]e_{(k_1,n-k_1)}=-\frac{1}{n(n+1)}\sqrt{(k_1+1)(n-k_1)}e_{(k_1+1,n-k_1-1)} \; \; \; (0 \le k_1 \le n-1).
$$
It follows that
$$
\left\|[T_2^*,T_1]e_{(k_1,n-k_1)}\right\| \le \frac{1}{n(n+1)}\sqrt{(k_1+1)(n-k_1)} \le \frac{1}{2n} \; \; \; (0 \le k_1 \le n).
$$
As before, $[T_2^*,T_1]$ is an orthogonal direct sum of its restrictions to the subspaces $\mathcal{P}_n$, so the previous estimate proves that $[T_2^*,T_1]$ is compact. \ As a result, we know that $DA$ is essentially normal.

We will now study how much the Aluthge transforms of $DA$ differ from $DA$.

\begin{theorem}
(i) \ $\widetilde{DA}$ is a compact perturbation of $DA$. \newline
(ii) \ $\widehat{DA}$ is a compact perturbation of $DA$.
\end{theorem}

\begin{proof}
(i) \ We first note that the weight sequences $\alpha$ and $\beta$ of $DA$ satisfy (\ref{commuting1}); that is, $\widetilde{DA}$ is commuting. \ Next, we observe that $\widetilde{DA}$ maps $\mathcal{P}_n$ into $\mathcal{P}_{n+1}$ (cf. Lemma \ref{CartAlu}), just as $DA$ does.
\ As a result, the compactness of $DA-\widetilde{DA}$ will be established once we prove that $\left\|(DA-\widetilde{DA})|_{\mathcal{P}_n}\right\|$ tends to zero as $n \rightarrow \infty$. \ Toward this end, we calculate
$$
(T_1-\widetilde{T}_1)e_{(k_1,n-k_1)}=(\alpha_{(k_1,n-k_1)}-\sqrt{\alpha_{(k_1,n-k_1)} \alpha_{(k_1+1,n-k_1)}})e_{(k_1+1,n-k_1)}.
$$
Since the weights involved are uniformly bounded, we may focus instead on the expression
$$
\Delta_{\textrm{toral}}:=(\alpha_{(k_1,n-k_1)})^4-(\sqrt{\alpha_{(k_1,n-k_1)} \alpha_{(k_1+1,n-k_1)}})^4.
$$
With the aid of \textit{Mathematica} \cite{Wol}, we obtain
\begin{eqnarray*}
\left|\Delta_{\textrm{toral}}\right| &=& \left|(\alpha_{(k_1,n-k_1)})^4-(\sqrt{\alpha_{(k_1,n-k_1)} \alpha_{(k_1+1,n-k_1)}})^4\right| \\
&=& \frac{(k_1+1)(n-k_1)}{(n+1)^2(n+2)} \le \frac{1}{4(n+2)}.
\end{eqnarray*}
Thus, $\lim_{n \rightarrow \infty} \left\|(DA-\widetilde{DA})|_{\mathcal{P}_n} \right\| = 0$, and therefore $DA-\widetilde{DA}$ is compact.

(ii) \ As in (i) above, it suffices to prove that $\left\|(DA-\widehat{DA})|_{\mathcal{P}_n}\right\|$ tends to zero as $n \rightarrow \infty$. \ Since
\begin{eqnarray*}
&&(T_1-\widehat{T}_1)e_{(k_1,n-k_1)} \\
&=&(\alpha_{(k_1,n-k_1)}-\alpha_{(k_1,n-k_1)} (\frac{\alpha_{(k_1+1, n-k_1)}^2 + \beta_{(k_1+1, n-k_1)}^2} {\alpha_{(k_1, n-k_1)}^2 + \beta_{(k_1, n-k_1)}^2})^{1/4})e_{(k_1+1,n-k_1)},
\end{eqnarray*}
we can again focus on the expression
$$
\Delta_{\textrm{spherical}}:=(\alpha_{(k_1,n-k_1)})^4-(\alpha_{(k_1,n-k_1)} (\frac{\alpha_{(k_1+1, n-k_1)}^2 + \beta_{(k_1+1, n-k_1)}^2} {\alpha_{(k_1, n-k_1)}^2 + \beta_{(k_1, n-k_1)}^2})^{1/4})^4.
$$
A computation using \textit{Mathematica} \cite{Wol} shows that
$$
\left|\Delta_{\textrm{spherical}}\right|=\frac{(k_1+1)(n-k_1)(2n+1)}{n^2(n+1)^2} \le \frac{2n+1}{4n^2}.
$$
We thus conclude, as before, that $DA-\widehat{DA}$ is compact.
\end{proof}

\begin{corollary}
The $2$-variable weighted shifts $DA$, $\widetilde{DA}$ and $\widehat{DA}$ all share the same Taylor spectral picture.
\end{corollary}

\begin{proof}
Since the Taylor essential spectrum and the Fredholm index are invariant under compact perturbations (cf. \cite{Appl}), the result follows from the well-known spectral picture of $DA$; that is, $\sigma_T(DA)=\bar{\mathbb{B}^2}$, $\sigma_{Te}(DA)=\partial{\mathbb{B}^2}$, and $\textrm{index} \; DA = \textrm{index} \; \widetilde{DA} = \textrm{index} \; \widehat{DA}$. \ (Here $\mathbb{B}^2$ denotes the unit ball in $\mathbb{C}^2$, and $\partial \mathbb{B}^2$ its topological boundary.)
\end{proof}

\begin{remark}
It is an easy application of the Six-Point Test to show that neither $\widetilde{DA}$ nor $\widehat{DA}$ is jointly hyponormal.
\end{remark}

\section{Fixed Points of the Spherical Aluthge Transform: \\ Spherically Quasinormal Pairs}\label{Spherquasi}

In this section we discuss the structure of $2$-variable weighted shifts which are fixed points for the spherical Aluthge transform. \ We believe this notion provides the proper generalization of quasinormality to several variables. \ As we noted in the Introduction, a Hilbert space operator $T$ is quasinormal if and only if $T=\widetilde{T}$. \ We use this as our point of departure for the $2$-variable case.

\smallskip

\begin{definition}
A commuting pair $\mathbf{T} \equiv (T_1,T_2)$ is {\it spherically quasinormal} if $\widehat{\mathbf{T}}=\mathbf{T}$.
\end{definition}

We now recall the class of spherically isometric commuting pairs of operators (\cite{Ath1}, \cite{AtPo}, \cite{AtLu}, \cite{EsPu}, \cite{Gle}, \cite{Gle2}).
\begin{definition} \label{spherisom}
A commuting $n$-tuple $\mathbf{T} \equiv (T_1,\cdots,T_n)$ is a spherical isometry if $T_1^*T_1+\cdots+T_n^*T_n=I$.
\end{definition}

In the literature, spherical quasinormality of a commuting $n$-tuple $\mathbf{T} \equiv (T_1,\cdots,T_n)$ is associated with the commutativity of each $T_i$ with $P^2$. \ It is not hard to prove that, for $2$-variable weighted shifts, this is equivalent to requiring that $W_{(\alpha ,\beta )}\equiv (T_1,T_2)$ be a fixed point of the spherical Aluthge transform, that is, $\widehat{W}_{(\alpha ,\beta )}=W_{(\alpha ,\beta )}$. \ A straightforward calculation shows that this is equivalent to requiring that each $U_i$ commutes with $P$. \ In particular, $(U_1,U_2)$ is commuting whenever $(T_1,T_2)$ is commuting. \ Also, recall from Section 1 that a commuting pair $\mathbf{T}$ is a spherical isometry if $P^{2}=I$. \ Thus, in the case of spherically quasinormal $2$-variable weighted shifts, we always have $U_1^*U_1+U_2^*U_2=I$. \ In the results below, the key new ingredient is the equivalence between spherical quasinormality and being a constant multiple of a spherical isometry.

\smallskip

As we noted in the Introduction, the operator $Q:=\sqrt{V_{1}^{\ast }V_{1}+V_{2}^{\ast }V_{2}}$ is a (joint) partial isometry; for, $PQ^2P=P^2$, from which it follows that $Q$ is isometric on the range of $P$. In the case when $P$ is injective, we see that a commuting pair $\mathbf{T}\equiv (T_1,T_2) \equiv (V_1P,V_2P)$ is spherically quasinormal if and only if each $T_{i}$ commutes with $P^2$, and if and only if each $V_{i}$ commutes with $P^2$ ($i=1,2$); in particular, $(V_1,V_2)$ is commuting whenever $(T_1,T_2)$ is commuting. \ Observe also that when $P$ is injective, we always have $V_1^*V_1+V_2^*V_2=I$.

The proof of the following result is a straightforward application of Definition \ref{spherisom}.

\begin{lemma}
A $2$-variable weighted shift $W_{(\alpha,\beta)}$ is a spherical isometry if and only if
$$
\alpha_{\bf{k}}^2+\beta_{\bf{k}}^2=1
$$
for all $\bf{k} \in \mathbb{Z}_+^2$.
\end{lemma}

\begin{lemma} \label{Quasinormal3}
A 2-variable weighted shift $\mathbf{T}$ is spherically quasinormal if and only if there exists $C>0$ such that $\frac{1}{C}\mathbf{T}$ is a spherical isometry, that is, $T_1^*T_1+T_2^*T_2=C^2 I$.
\end{lemma}

\begin{proof}
Assume that $\mathbf{T}\equiv (T_{1},T_{2})$ is commuting and spherically quasinormal. \ Then $T_{1}$ and $T_{2}$ commute with $P$. \ We now consider the following

\noindent \textbf{Claim: \ }For all $\mathbf{k}\equiv (k_{1},k_{2})\in \mathbb{Z}_{+}^{2}$, $\alpha _{\mathbf{k}}^{2}+\beta _{\mathbf{k}}^{2}$ is constant.

\noindent For the proof of the Claim, if we fix an orthonormal basis vector $e_{\mathbf{k}}$, then
\[
T_{1}e_{\mathbf{k}}=\alpha _{\mathbf{k}}e_{\mathbf{k}+\mathbf{\varepsilon }_{1}}\text{ and }T_{2}e_{\mathbf{k}}=\beta _{\mathbf{k}}e_{\mathbf{k}+\mathbf{\varepsilon }_{2}}\text{,}
\]
where $\mathbf{\varepsilon }_{1}:=(1,0)$ and $\mathbf{\varepsilon }_{2}:=(0,1)$.
\ We thus obtain
\begin{eqnarray*}
PT_{1}e_{\mathbf{k}} &=&\alpha _{(k_{1},k_{2})}\sqrt{\alpha _{(k_{1}+1,k_{2})}^{2}+\beta _{(k_{1}+1,k_{2})}^{2}}\,e_{\mathbf{k}+\mathbf{\varepsilon }_{1}}\text{ and} \\
T_{1}Pe_{\mathbf{k}} &=&\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}\,\alpha _{(k_{1},k_{2})}\,e_{\mathbf{k}+\mathbf{\varepsilon }_{1}}\text{.}
\end{eqnarray*}
It follows that
\begin{equation}
\sqrt{\alpha _{(k_{1}+1,k_{2})}^{2}+\beta _{(k_{1}+1,k_{2})}^{2}}=\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}\text{.} \label{equa1}
\end{equation}
We also have
\begin{eqnarray*}
PT_{2}e_{\mathbf{k}} &=&\beta _{(k_{1},k_{2})}\sqrt{\alpha _{(k_{1},k_{2}+1)}^{2}+\beta _{(k_{1},k_{2}+1)}^{2}}\,e_{\mathbf{k}+\mathbf{\varepsilon }_{2}}\text{ and} \\
T_{2}Pe_{\mathbf{k}} &=&\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}\,\beta _{(k_{1},k_{2})}\,e_{\mathbf{k}+\mathbf{\varepsilon }_{2}}\text{.}
\end{eqnarray*}
Hence, we have
\begin{equation}
\sqrt{\alpha _{(k_{1},k_{2}+1)}^{2}+\beta _{(k_{1},k_{2}+1)}^{2}}=\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}\text{.} \label{equa2}
\end{equation}
Therefore, by (\ref{equa1}) and (\ref{equa2}), for all $\mathbf{k}\equiv (k_{1},k_{2})\in \mathbb{Z}_{+}^{2}$, we obtain
\begin{equation}
\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}=\sqrt{\alpha _{(k_{1}+1,k_{2})}^{2}+\beta _{(k_{1}+1,k_{2})}^{2}}=\sqrt{\alpha _{(k_{1},k_{2}+1)}^{2}+\beta _{(k_{1},k_{2}+1)}^{2}}\text{.} \label{equa3}
\end{equation}
We have thus established the Claim. \ \medskip

It follows that $C:=\sqrt{\alpha _{(k_{1},k_{2})}^{2}+\beta _{(k_{1},k_{2})}^{2}}$ is independent of $\mathbf{k}$. \ As a result, $\frac{1}{C}\mathbf{T}$ is a spherical isometry, as desired.
\end{proof}

By the proof of Lemma \ref{Quasinormal3}, we remark that once the zeroth row of $T_{1}$, call it $W_{0}$, is given, the entire $2$-variable weighted shift is fully determined. \ We shall return to this in Subsection \ref{last}.

We now recall a result of A. Athavale.

\begin{theorem}
(\cite{Ath1}) \ A spherical isometry is always subnormal.
\end{theorem}

By Lemma \ref{Quasinormal3}, we immediately obtain

\begin{corollary} \label{Qua-sub}
A spherically quasinormal $2$-variable weighted shift is subnormal.
\end{corollary}

We mention in passing two more significant features of spherical isometries.

\begin{theorem}
(\cite{MuPt}) \ Spherical isometries are hyporeflexive.
\end{theorem}

\begin{theorem}
(\cite{EsPu}) \ For every $n \ge 3$ there exists a non-normal spherical isometry $\mathbf{T}$ such that the polynomially convex hull of $\sigma_T(\mathbf{T})$ is contained in the unit sphere.
\end{theorem}

\begin{remark}
(i) \ A. Athavale and S. Poddar have recently proved that a commuting spherically quasinormal pair is always subnormal \cite[Proposition 2.1]{AtPo}; this provides a different proof of Corollary \ref{Qua-sub}. \newline
(ii) \ In a different direction, let $Q_{\mathbf{T}}(X):=T_1^*XT_1+T_2^*XT_2$. \ By induction, it is easy to prove that if $\mathbf{T}$ is spherically quasinormal, then $Q_{\mathbf{T}}^n(I)=(Q_{\mathbf{T}}(I))^n \; (n \ge 0)$; by \cite[Remark 4.6]{ChSh}, $\mathbf{T}$ is subnormal.
\end{remark}

\subsection{Construction of spherical isometries} \label{last}

In the class of $2$-variable weighted shifts, there is a simple description of spherical isometries, in terms of the weight sequences $\alpha$ and $\beta$, which we now present. \ Since spherical isometries are (jointly) subnormal, we know that the zeroth row must be subnormal. \ Start then with a subnormal unilateral weighted shift, and denote its weights by $(\alpha_{(k,0)})_{k=0,1,2,\cdots}$.
\ Using the identity
\begin{equation} \label{sphericalidentity}
\alpha_{\mathbf{k}}^2+\beta_{\mathbf{k}}^2=1 \; \; \; (\mathbf{k} \in \mathbb{Z}_+^2),
\end{equation}
and the above mentioned zeroth row, we can compute $\beta_{(k,0)}$ for $k=0,1,2,\cdots$. \ With these new values available, we can use the commutativity property (\ref{commuting}) to generate the values of $\alpha$ in the first row (see Figure \ref{Figure 1}); that is,
$$
\alpha_{(k,1)}:=\alpha_{(k,0)}\beta_{(k+1,0)}/\beta_{(k,0)}.
$$
We can now repeat the algorithm, and calculate the weights $\beta_{(k,1)}$ for $k=0,1,2,\cdots$, again using the identity (\ref{sphericalidentity}). \ This in turn leads to the $\alpha$ weights for the second row, and so on.

This simple construction of spherically isometric $2$-variable weighted shifts will allow us to study properties like recursiveness (tied to the existence of finitely atomic Berger measures) and propagation of recursive relations (see also the numerical sketch below). \ We pursue these ideas in an upcoming manuscript.

\bigskip

\textit{Acknowledgments.} \ Preliminary versions of some of the results in this paper have been announced in \cite{CR3}. \ Some of the calculations in this paper were obtained using the software tool \textit{Mathematica} \cite{Wol}.
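\bigskip

\subsection*{Appendix: a numerical sketch of the construction}

For concreteness, we include a short \textit{Python} sketch of the algorithm of Subsection \ref{last}. \ This is an illustration only: the choice of the Bergman shift as the zeroth row, and the closed form checked at the end, are ours and are not part of the general construction.

\begin{verbatim}
import numpy as np

K, ROWS = 12, 6   # finite portion of the weight diagram to generate
k = np.arange(K, dtype=float)

# Seed: zeroth row = Bergman shift, a subnormal unilateral weighted shift
alpha = [np.sqrt((k + 1.0) / (k + 2.0))]

for r in range(ROWS):
    # spherical identity: alpha^2 + beta^2 = 1 at every lattice point
    b = np.sqrt(1.0 - alpha[r] ** 2)
    # commutativity: alpha_{(k,r+1)} = alpha_{(k,r)} beta_{(k+1,r)} / beta_{(k,r)}
    alpha.append(alpha[r][:-1] * b[1:] / b[:-1])

# For this particular seed the recursion closes:
# alpha_{(k1,k2)}^2 = (k1+1)/(k1+k2+2), a shifted Drury-Arveson-type diagram
for r in range(ROWS):
    kk = np.arange(len(alpha[r]), dtype=float)
    assert np.allclose(alpha[r] ** 2, (kk + 1.0) / (kk + r + 2.0))
\end{verbatim}

Each generated row loses one column, since the commutativity relation uses $\beta_{(k+1,r)}$; the asserted closed form can be verified by a short induction on the row index.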
\section{Introduction} \label{sec:intro} Of all particles in the Standard Model (SM), the top quark is the one that couples most strongly to the Higgs boson. As such it is the particle that most severely contributes to the Higgs hierarchy problem. For this reason, natural extensions of the SM generically predict modifications of the Higgs and top sectors of the theory, either in the form of new weakly coupled states or new strong dynamics. On the other hand, all measurements performed at the Large Hadron Collider (LHC) seem to show agreement with the predictions of the SM\@. With no decisive indication of New Physics (NP) emerging from the data, a promising way to organize the available results is provided by the SM Effective Field Theory~\cite{Buchmuller:1985jz,Feruglio:1992wf,Grzadkowski:2010es,Alonso:2012px,Contino:2013kra,Davidson:2013fxa,Biekoetter:2014jwa,deVries:2014apa,Falkowski:2015wza}. The effects of new particles and phenomena that are too heavy to be directly accessed at the LHC can be described in full generality by adding operators of dimension larger than 4 to the SM Lagrangian, \begin{equation} \mathscr L = \mathscr L_{SM}+\sum_i C^{(5)}_i O^{(5)}_i+\sum_i C^{(6)}_i O^{(6)}_i+\ldots \, \, . \end{equation} Partly motivated by the special role the top quark has in relation to the hierarchy problem, we consider operators modifying the high energy tails of kinematic distributions arising from $t\bar t$ pair production at the LHC\@. Similar shape analyses for tails of other differential distributions have been performed by Refs.~\cite{Cirigliano:2012ab, deBlas:2013qqa,Falkowski:2016cxu,Farina:2016rws,Raj:2016aky,Alioli:2017jdo,Greljo:2017vvb,Bellazzini:2017bkb,Azatov:2017kzw,Panico:2017frx,Franceschini:2017xkh,Alioli:2017nzr}. We focus on the $t\bar t$ invariant mass distribution and we consider those operators for which the leading order effect at high energies comes from the interference between the QCD SM amplitude and the amplitude generated by a single insertion of a dimension 6 operator. We also require such corrections to be nonvanishing in the limit in which all SM masses are much smaller than the typical energy scale of the process that is considered. These requirements single out the set of gauge invariant dimension six operators shown in Table~(\ref{tab4f}). Using Lorentz, $SU(3)_c$, and $SU(2)_{EW}$ Fierz identities, all the operators can be written as four fermion operators involving the product of two color octet currents: a $t\bar t$ one and a light quark one. This is indeed the same structure as that of the $q\bar q\to t\bar t$ amplitude in the SM\@. The full set of four fermion operators contributing to the $pp\to t\bar t$ cross section is shown in Appendix~\ref{operators}. Other groups have studied the impact of precise measurements of top quark observables on the SM EFT~\cite{AguilarSaavedra:2010zi,AguilarSaavedra:2011vw,Kamenik:2011dk,Zhang:2012cd,Rontsch:2014cca,Degrande:2014tta,Durieux:2014xla,Rontsch:2015una,Franzosi:2015osa,Buckley:2015nca,Dror:2015nkp,Buckley:2015lku,Zhang:2016omx,Bylund:2016phk,Schulze:2016qas,Cirigliano:2016nyn,Maltoni:2016yxb,Englert:2017dev,Zhang:2017mls,AguilarSaavedra:2018nen,Chala:2018agk}. Here we focus on the most recent data measuring the $t\bar t$ differential production cross section, and state of the art theoretical calculations, to extract reliable bounds on the dimension 6 operators appearing in Table~(\ref{tab4f}).
\begin{table}[t] \caption*{95\%~CL bounds on operator coefficients ($\times10^{3}$) at $\sqrt s=13$\,TeV\vspace{-0.3cm}} \begin{center} {\small \begin{tabular}{c|c|c|c|c} & Operator & \makecell{35.8\,fb$^{-1}$\,(CMS) \\ {\footnotesize Observed $\ \ $ Expected}} & 300\,fb$^{-1}$&3\,ab$^{-1}$ \\ \hline \hline $ O_{Qq}^{(3)}$ & $\bar Q\gamma^\mu T^A\tau^aQ\, \bar q\gamma_\mu T^A\tau^aq$& $[-25,19]\ \ \ \ [-27,23]$ &$[-6.9,5.5]$ &$[-5.4,4.1]$\\ $O_{Qq}$ & $\bar Q\gamma^\mu T^AQ\, \bar q\gamma_\mu T^Aq$&$[-32,12]\ \ \ \ [-32,19]$&$[-7.1,5.2]$&$[-5.3,4.1]$\\ $O_{Qu}$ & $\bar Q\gamma^\mu T^AQ\, \bar u\gamma_\mu T^Au$&$[-37,17]\ \ \ \ [-39,24]$&$[-7.6,5.8]$&$[-5.5,4.2]$\\ $O_{Qd}$ & $\bar Q\gamma^\mu T^AQ\, \bar d\gamma_\mu T^Ad$&$[-46,26]\ \ \ \ [-49,36]$&$[-15.,13.]$&$[-13.,11.]$ \\ $O_{Uq}$ & $\bar U\gamma^\mu T^AU\, \bar q\gamma_\mu T^Aq$&$[-32,12]\ \ \ \ [-32,19]$&$[-7.1,5.2]$& $[-5.3,4.1]$\\ $O_{Uu}$ & $\bar U\gamma^\mu T^AU\, \bar u\gamma_\mu T^Au$&$[-37,17]\ \ \ \ [-39,25]$&$[-7.6,5.8]$&$[-5.5,4.2]$ \\ $O_{Ud}$ & $\bar U\gamma^\mu T^AU\, \bar d\gamma_\mu T^Ad$&$[-46,26]\ \ \ \ [-49,36]$&$[-15.,13.]$&$[-13.,11.] $\\ \end{tabular} } \vspace{0.3cm} \caption{\label{tab4f}\footnotesize Gauge and Lorentz structure of dimension 6 four fermion operators leading to nonvanishing interference with the SM QCD $q\bar q\to t\bar t$ amplitude at leading order and neglecting quark masses. We use capital letters $Q$ and $U$ to denote the third generation quark doublet and up-type singlet, while lowercase $q, u$, and $d$ denote quarks from the first two generations. $SU(3)_c$ and $SU(2)_{EW}$ generators are denoted by $T^A$ and $\tau^a$. In all the operators we sum over light quark flavors. We report 95\%\,CL bounds on the coefficients $c_i$, with the operators $\mathcal{O}_i$ normalized so that they enter the Lagrangian as $(c_i g_s^2/m_t^2)\,\mathcal{O}_i$. Current bounds are extracted from CMS data~\cite{Sirunyan:2018wem} (both observed and expected), while projections correspond to the higher luminosities of 300 and 3000~fb$^{-1}$ at $\sqrt s = 13$\,TeV. } \end{center} \end{table} This paper is organized as follows. In Sec.~\ref{sec:precision} we describe the theory calculations that are available for the $t\bar t$ invariant mass distribution and their uncertainties. We describe the experimental measurements and the statistical methods that we use to extract bounds and future projections. In Sec.~\ref{sec:bounds} bounds on a set of dimension 6 four fermion operators are presented and their validity in the framework of the Effective Field Theory is discussed. In Sec.~\ref{sec:implications} we discuss the implications of such bounds on relevant NP models such as composite Higgs models and flavor models with $U(3)$ or $U(2)$ flavor symmetries. In Sec.~\ref{sec:conclusions} we present our conclusions. \section{Precision measurements in high energy $t\bar t$ observables} \label{sec:precision} The differential cross section of top quark pair production at the LHC is one of the most accurately known hadronic observables. This is due to the groundbreaking work of~Refs.~\cite{Czakon:2013goa, Czakon1,Czakon2,Czakon3,Czakon4}, which achieve full NNLO QCD and NLO EW accuracy for (undecayed) final state top quarks. From the experimental side, both CMS~\cite{Sirunyan:2018wem} and ATLAS~\cite{Aaboud:2017fha} provide measurements of the differential $t\bar t$ cross section in the lepton plus jets final state, and ATLAS also provides one in the fully hadronic final state~\cite{Aaboud:2018eqg}, at 13\,TeV center of mass energy.
In this paper we use the CMS result, which uses a luminosity of 35.8\,fb$^{-1}$. The differential NNLO predictions of Ref.~\cite{Czakon4} are not available for the kinematic cuts of ATLAS~\cite{Aaboud:2018eqg}, while Ref.~\cite{Aaboud:2017fha} does not provide unfolded values for the parton level cross section. Previous measurements of the differential $t\bar t$ cross section have been performed by ATLAS~\cite{Aad:2012hg,Aad:2014zka,Aad:2015eia,Aad:2015hna,Aad:2015mbv,Aaboud:2016iot,Aaboud:2016syx} and CMS~\cite{Chatrchyan:2012saa,Khachatryan:2015oqa,Khachatryan:2016gxp,Khachatryan:2016mnb,Sirunyan:2017azo}. While we use differential cross section measurements to look for smooth effects parameterized by the Effective Field Theory, we note that top pair production measurements have also been used to search for sharp resonances by ATLAS~\cite{Aad:2015fna,Aaboud:2017hnm,Aaboud:2018mjh} and CMS~\cite{Khachatryan:2015sma,Sirunyan:2017uhk,Sirunyan:2018ryr}. The implications of using NNLO QCD theoretical predictions for such bump hunts in the $t\bar t$ invariant mass spectrum were studied by Ref.~\cite{Czakon:2016vfr}. In the left panel of Fig.~(\ref{CMSvstheory}) we compare the measurement of the unfolded parton level $t\bar t$ invariant mass distribution from Ref.~\cite{Sirunyan:2018wem} with the theory calculation from Ref.~\cite{Czakon4}. We include experimental uncertainties and their correlations from Ref.~\cite{Sirunyan:2018wem}. Theory uncertainties, including QCD scale variation and PDF uncertainties, are taken from Ref.~\cite{Czakon4}, in which PDF uncertainties are calculated using the {\texttt{PDF4LHC15}}~\cite{Butterworth:2015oua} set extended with {\texttt{LUXqed}}~\cite{Manohar:2016nzj}. This PDF set includes a combination of the results from Refs.~\cite{Ball:2014uwa,Harland-Lang:2014zoa,Dulat:2015mca}, where the only top observables included in the fits are the total $t\bar t$ production cross sections at 7 and 8\,TeV, which we do not expect to be significantly contaminated by the EFT operators that we consider below, which produce effects that grow with energy. We take scale uncertainties and PDF uncertainties to be uncorrelated from each other. On the right panel of Fig.~(\ref{CMSvstheory}), we show the relative size of experimental and theory uncertainties. The largest source of uncertainty is experimental systematics, which is as large as 20\% in the last invariant mass bin. Note that CMS measures the cross section times branching fraction of semileptonic $t\bar t$ events, $\sigma_{t \bar t} \times \textrm{BR}_l$, where at parton level $\textrm{BR}_l \approx 0.29$.\footnote{The partonic semileptonic branching fraction includes decays where one top decays to an electron or muon while the other top decays to hadrons (and neither top decays to tau leptons). Therefore, $\textrm{BR}_l \approx 4 \, \textrm{BR}_{W \rightarrow l \nu} (1-3 \, \textrm{BR}_{W \rightarrow l \nu}) \approx 0.29$, where $\textrm{BR}_{W \rightarrow l \nu} \approx 0.109$~\cite{Tanabashi:2018oca}.} Goodness of fit is evaluated by constructing a $\chi^2$ statistic, \begin{equation}\label{chi2fit} \chi^2 = \sum_{I,J}({\textrm{th}}^{(I)} - {\textrm{exp}}^{(I)})\left(\Sigma^{-1}\right)_{I,J}({\textrm{th}}^{(J)} - {\textrm{exp}}^{(J)}) \, \, , \end{equation} where ${\textrm{th}}^{(I)}$ and ${\textrm{exp}}^{(I)}$ are the theory prediction and the experimental measurement in the $I$-th $t\bar t$ invariant mass bin, and $\Sigma$ is the total covariance matrix including all uncertainties described above.
Assuming the usual asymptotic behavior of $\chi^2$, we can associate a p-value to the SM fit, \begin{equation} {\textrm{p}} = 1- {\textrm{cdf}}_{\chi^2_{n}}(\chi^2) \, \, , \end{equation} where ${\textrm{cdf}}_{\chi^2_{n}}$ is the cumulative chi-squared distribution with $n=10$ degrees of freedom. The p-value we obtain for the fit is decent, ${\textrm{p}}=0.10$, which we take as an indication that both the uncertainties and the theory prediction are under control. \begin{figure}[t] \begin{center} ~~\includegraphics[width=0.475\textwidth]{figs/theory-data.pdf}~~~~~\includegraphics[width=0.475\textwidth]{figs/errNow.pdf} \end{center} \vspace{-.3cm} \caption{ \footnotesize \emph{Left}: comparison of theory prediction to experimental data from Ref.~\cite{Sirunyan:2018wem}. \emph{Right}: summary of theory uncertainties from Ref.~\cite{Czakon4} and experimental uncertainties from Ref.~\cite{Sirunyan:2018wem}. The uncertainties on both plots are $1 \sigma$. } \label{CMSvstheory} \end{figure} In order to make projections for future measurements of the $t\bar t$ invariant mass distribution, which will benefit from more luminosity and therefore higher statistics at higher energies, we extend the invariant mass range up to $m_{t \bar t} = 6$\,TeV\@. We write the full covariance matrix for the uncertainties as \begin{equation}\label{covproj} \Sigma = \Sigma_{\textrm{theory}}+\Sigma_{\textrm{stat}}+\Sigma_{\textrm{syst}} \, \, . \end{equation} Theory uncertainties and correlations, $\Sigma_{\textrm{theory}}$, including scale variation and PDF uncertainties, are evaluated in the new mass range using Ref.~\cite{Czakon4}, as shown in Fig.~(\ref{errorproj}). For the statistical uncertainty contribution to the full covariance, $\Sigma_{\textrm{stat}}$, we use the Gaussian limit, \begin{equation} \left(\Sigma_{\textrm{stat}}\right)_{I,J} = \frac{\sigma^{(I)}}{{\textrm{BR}_l}\times\epsilon\times\mathcal{L}}\,\delta_{IJ} \, \, , \end{equation} where as above $\textrm{BR}_l = 0.29$. The current measurement of Ref.~\cite{Sirunyan:2018wem} has an overall selection efficiency of about 4 and 5\% at the parton and particle levels, respectively. For our future projections we take $\epsilon=0.05$, independent of the invariant mass. Experimental systematic uncertainties are modeled by including two fractional sources of uncertainty, \begin{equation} \left(\Sigma_{\textrm{syst}}\right)_{I,J} =(\delta_C^2+\delta_U^2\delta_{IJ})\sigma^{(I)}\sigma^{(J)} \, \, , \end{equation} with $\delta_C$ being completely correlated and $\delta_U$ fully uncorrelated. We choose $\delta_C=\delta_{U}=7\%$ to roughly match current experimental uncertainties~\cite{Sirunyan:2018wem}.\footnote{We validated this simplified treatment of experimental uncertainties, for future projections, by verifying that we produce similar bounds on operators from the CMS measurement~\cite{Sirunyan:2018wem} when using $\delta_C=\delta_{U}=7\%$ and when using the full experimental covariance matrix.} \begin{figure}[t] \begin{center} ~~\includegraphics[width=0.475\textwidth]{figs/errLater1.pdf}~~~~~\includegraphics[width=0.375\textwidth]{figs/correlation.pdf} \end{center} \vspace{-.3cm} \caption{ \footnotesize Projected theory and statistical uncertainties ($1 \sigma$) for the $t\bar t$ invariant mass distribution at the 13\,TeV LHC\@. Statistical uncertainties are evaluated for the two different integrated luminosities of 0.3 and 3\,ab$^{-1}$.
The left panel shows the size of the uncertainty in each invariant mass bin, and the right panel shows the correlation of the PDF uncertainty across different invariant masses. } \label{errorproj} \end{figure} \section{Bounds}\label{sec:bounds} We are now ready to derive the bounds shown in Table~(\ref{tab4f}). To do so we normalize the operators according to \begin{equation}\label{deltaL4f} \mathscr L\supset \sum_i \frac{g_s^2 c_i}{m_t^2} O_i \, \, , \end{equation} where the sum is over the operators in Table~(\ref{tab4f}), $g_s=1.22$ is the strong coupling constant taken here as a fixed reference value, and $m_t=173.3$\,GeV is the top quark mass. The value of the $t\bar t$ cross section, $\sigma^{(I)}$, integrated over a range of invariant masses, $I$, is a quadratic polynomial in the coefficients $c_i$, \begin{equation}\label{sigmaeft} \sigma^{(I)} = \sigma^{(I)}_{SM}+ \sum_i c_i \sigma^{(I)}_{i}+ \sum_{i,j} c_i c_j \sigma^{(I)}_{i,j} \, \, . \end{equation} The linear term corresponds to interference between the SM amplitude and the NP one, and the quadratic terms are due to the square of the NP amplitude. The numerical values of these terms are obtained at leading order by integrating the squared amplitudes, shown in Appendix~(\ref{operators}), over the relevant phase space. \begin{figure}[t] \begin{center} \includegraphics[width=0.475\textwidth]{figs/barre.pdf}~~~~~~\includegraphics[width=0.475\textwidth]{figs/mttcutCMS.pdf} \end{center} \vspace{-.3cm} \caption{ \footnotesize \emph{Left}: 95\% CL limits on various dimension 6 operators, in terms of a reference mass scale $M\equiv m_t/\sqrt{|c|}$ (see Eq.~(\ref{deltaL4f})), from CMS data (observed and expected bounds) and our high luminosity projection study of the $t\bar t$ invariant mass distribution. For each operator, the shaded region is excluded. \emph{Right}: We show how the bounds degrade by only including in the fit those events for which the reconstructed $t\bar t$ invariant mass is below a certain value $m_{t\bar t}^{\textrm{max}}$. Thick (thin) lines correspond to positive (negative) operator coefficients, which in turn correspond to constructive (destructive) interference with the SM\@. } \label{barplot} \end{figure} In order to define confidence intervals for the coefficients of the operators in Eq.~(\ref{deltaL4f}) using CMS data, we assume Gaussian uncertainties and construct the following statistic, \begin{equation}\label{chi2bounds} \chi^2({\bf c}) = \sum_{I,J}({\sigma}^{(I)}({\bf c}) - {\sigma}^{(I)}_{\textrm{exp}})\left(\Sigma^{-1}\right)_{I,J}({\sigma}^{(J)}({\bf c}) - {\sigma}^{(J)}_{\textrm{exp}}) \, \, , \end{equation} where ${\bf c}$ is a subset of the coefficients $c_i$, ${\sigma}^{(I)}_{\textrm{exp}}$ are the cross section measurements, ${\sigma}^{(I)}({\bf c})$ their theory prediction, and $\Sigma$ is as in Eq.~(\ref{chi2fit}). Defining ${\bf c}^* ={\textrm{argmin}}_{\bf c}~ \chi^2({\bf c})$, Wilks' theorem guarantees that the quantity $\Delta \chi^2({\bf c})\equiv \chi^2({\bf c}) -\chi^2({\bf c}^*)$ has a chi-squared distribution with number of degrees of freedom equal to the number of components of ${\bf c}$. The 95\%\,CL intervals from the CMS measurement~\cite{Sirunyan:2018wem} are shown in Table~(\ref{tab4f}). We compare our results with other bounds present in the literature. Our limits are stronger than those derived by Ref.~\cite{Zhang:2017mls} using the 8\,TeV differential $m_{t\bar t}$ distribution measured by ATLAS~\cite{Aaboud:2016iot}.
Notice, however, that Ref.~\cite{Zhang:2017mls} does not include experimental correlations, which were not available. Another set of limits was obtained in the global fit of Refs.~\cite{Buckley:2015nca,Buckley:2015lku}, which include 8\,TeV differential $m_{t\bar t}$ distributions as an ingredient. We note that Refs.~\cite{Buckley:2015nca,Buckley:2015lku} do not always include experimental covariances, and only include interference terms between the SM and NP, neglecting contributions that go as NP squared, which, as we discuss below, can impact the bounds within the regime of validity of the EFT\@. Four fermion operators have also been constrained using measurements of the charge asymmetry in top pair production; see for example Refs.~\cite{AguilarSaavedra:2011vw,Aguilar-Saavedra:2014kpa}. Ref.~\cite{Rosello:2015sck} uses the forward-backward asymmetry measured at Tevatron~\cite{Aaltonen:2012it,Abazov:2014cca}, and the charge asymmetry measured by CMS~\cite{Khachatryan:2015mna} and ATLAS~\cite{Aad:2015noh,Aad:2015lgx} at 8\,TeV, to constrain four fermion operator coefficients. When we consider the same linear combination of operators, our bounds are stronger. Projected bounds at higher luminosities are obtained by substituting ${\sigma}^{(I)}_{\textrm{exp}}$ with its expected SM value and the total covariance with the projected one, Eq.~(\ref{covproj}). The left panel of Fig.~(\ref{barplot}) displays these same bounds but in terms of an arbitrarily defined NP scale, \begin{equation}\label{massscale} M_i\equiv m_t/\sqrt{|c_i|}\, \, . \end{equation} Fig.~(\ref{barplot}) shows bounds on both individual operators and the following linear combinations of operators: \begin{align}\nonumber O_{VV}&\equiv (\bar Q\gamma^\mu T^AQ+\bar U\gamma^\mu T^AU)(\bar q\gamma_\mu T^Aq+\bar u\gamma_\mu T^Au+\bar d\gamma_\mu T^Ad)\,,\\\label{opscombo} O_{AA}&\equiv (\bar Q\gamma^\mu T^AQ-\bar U\gamma^\mu T^AU)(\bar q\gamma_\mu T^Aq-\bar u\gamma_\mu T^Au-\bar d\gamma_\mu T^Ad)\,,\\\nonumber O_{QV}&\equiv (\bar Q\gamma^\mu T^AQ)(\bar q\gamma_\mu T^Aq+\bar u\gamma_\mu T^Au+\bar d\gamma_\mu T^Ad)\, \, . \end{align} The effect on the bounds of the smaller down quark PDF, as compared to the up quark PDF, can be readily observed: operators which only contribute to $d\bar d\to t\bar t$ display a weaker limit. We note that the current CMS bound on the $O_{VV}$ operator shows the largest difference between the observed and expected limit. This is because $O_{VV}$ has the largest interference with the SM amplitude. The CMS data are lower than expected, leading to a stronger than expected limit when the operator interferes constructively with the SM, which happens when $c_{VV}>0$. When the operators interfere destructively, the current bounds are dominated by the NP squared contribution, which is a steeper function of the NP scale and therefore less sensitive to fluctuations in the data. In our study we only use information about the energy dependence of the $t\bar t$ cross section, but not its angular properties. This is because doubly-differential calculations of the $t\bar t$ cross sections are not yet available from Ref.~\cite{Czakon4}. This fact explains why the pairs of operators ($O_{Qq},O_{Uq}$), ($O_{Qu},O_{Uu}$), and ($O_{Qd},O_{Ud}$) have identical constraints. In order to understand the validity of our bounds within the EFT framework we proceed as in Refs.~\cite{Farina:2016rws,Alioli:2017jdo}.
We introduce a variable $m_{t\bar t}^{\textrm{max}}$ and repeat the fit to the $t\bar t$ invariant mass distribution while conservatively only including bins characterized by a smaller invariant mass than this cutoff, $m_{t \bar t} < m_{t\bar t}^{\textrm{max}}$. We show CMS bounds and projected ones as a function of $m_{t\bar t}^{\textrm{max}}$ in the right panel of Fig.~(\ref{barplot}) and in Fig.~(\ref{mttcut}) for a few representative combinations of the operators in Table~(\ref{tab4f}). The bounds are again expressed in terms of the mass scale $M_i(m_{t\bar t}^{\textrm{max}})\equiv m_t/\sqrt{|c_i|}$. We see that the strongest bound is for $O_{VV}$ and the weakest is for $O_{Qd}$. A negative sign of the operator coefficients implies destructive interference with the SM amplitude, leading to a weaker bound. The form factor $\textrm{Z}$ studied in Ref.~\cite{Alioli:2017jdo} gives a contribution to $O_{VV}$ \begin{equation} -\frac{\textrm{Z}}{2 m_W^2}(D_\mu G^{\mu\nu A})^2\rightarrow -\frac{{\textrm{Z}} g_s^2}{m_W^2} \,O_{VV} \, \, . \end{equation} The projected bounds on $\textrm{Z}$ from dijet physics extracted by Ref.~\cite{Alioli:2017jdo} can then be directly compared to the bounds on the NP mass scale associated with $O_{VV}$. These bounds from dijets are also shown in Fig.~(\ref{mttcut}). We find that the $t\bar t$ invariant mass distribution is not competitive with dijet physics to constrain $\textrm{Z}$. Among the operators in Table~(\ref{tab4f}), those involving the third generation quark doublet $Q$ can be constrained by the measurement of both the dijet and the $pp\to b\bar b$ invariant mass distributions. Repeating the analysis of~\cite{Alioli:2017nzr} (which does not use $b$-tags), we find that the constraints coming from available dijet measurements~\cite{Aad:2013tea,Aad:2014vwa,Chatrchyan:2012bja} and projections at 300\,fb$^{-1}$ and 3\,ab$^{-1}$ are not competitive with the limit obtained in this paper from the $t\bar t$ invariant mass distribution. Ref.~\cite{Aaboud:2016jed} uses $b$-tagging to measure the $b\bar b$ production cross section at 7\,TeV center of mass energy. Given the limited energy and the limited invariant mass range that is explored ($m_{b\bar b}<1$\,TeV), and taking into account the large size of both systematic and theoretical uncertainties (due to the absence of full NNLO calculations for bottom production), we expect the constraints on operators involving the third generation quark doublet $Q$ coming from Ref.~\cite{Aaboud:2016jed} to be subleading to the one derived here from the $t\bar t$ distribution. \begin{figure}[t] \begin{center} ~~\includegraphics[width=0.475\textwidth]{figs/mttcutp.pdf}~~~~~~~\includegraphics[width=0.475\textwidth]{figs/mttcutm.pdf} \end{center} \vspace{-.3cm} \caption{ \footnotesize Projected 95\% CL bounds on four fermion operator mass scales. We show the dependence of the bounds on the $t\bar t$ invariant mass cut $m_{t\bar t}^{\textrm{max}}$. On the left the bounds are shown for the case of positive operator coefficients, corresponding to constructive interference with the SM amplitude, while on the right we show the case of negative operator coefficients and destructive interference. We show the estimated region of validity of the EFT description (above the straight lines) for various assumptions about the coupling strength in the short distance model generating the effective operators.
} \label{mttcut} \end{figure} While we can bound the size of the operator coefficients without knowing about the physics that generates such operators, the validity of the bounds we obtain depends on such details. The reason is clear: if Eq.~(\ref{deltaL4f}) is obtained by integrating out some state of mass $m_{NP}$, Eq.~(\ref{deltaL4f}) cannot properly describe the $t\bar t$ invariant mass distribution for $m_{t\bar t}$ above $m_{NP}$. Assuming Eq.~(\ref{deltaL4f}) is obtained by integrating out, at tree level, states of mass $m_{NP}$ coupled with strength $g_{NP}$, one would approximately expect $M\sim (g_s/g_{NP})m_{NP}$. This implies that validity of the EFT description requires \begin{equation}\label{eftvalidity} M(m_{t\bar t}^{\textrm{max}}\sim m_{NP})\gtrsim\frac{g_s}{g_{NP}} m_{NP} \, \, . \end{equation} We display such limits in Fig.~(\ref{mttcut}) for various values of $g_{NP}$. It should be stressed that Eq.~(\ref{eftvalidity}) is by no means rigorous, and the exact regime of validity of the EFT description can only be evaluated by calculating the $t \bar t$ invariant mass distribution within a complete model and comparing to the EFT prediction. To conclude this section we would like to point out another aspect of the bounds we described. Even though the operators in Table~(\ref{tab4f}) can interfere with the SM $q\bar q\to t\bar t$ amplitude, the bounds we obtain correspond to parameter points with similar contributions from the interference and the quadratic term in Eq.~(\ref{sigmaeft}). As an example, the projected bound on $c_{VV}$ at 300\,fb$^{-1}$ changes from $[-3.9,2.2]\times 10^{-3}$ to $[-4.0,4.0]\times 10^{-3}$ by dropping quadratic terms in the amplitude. In this situation, one possible concern is the presence of operators of dimension 8, which we have not taken into account, potentially affecting our analysis. To address this concern, let us again consider the situation in which NP of mass $m_{NP}$ and coupling $g_{NP}$ has been integrated out to obtain Eq.~(\ref{deltaL4f}). Let us also imagine that operators of dimension 8 are generated at the same time. Corrections to the $t\bar t$ cross section, for $m_{t\bar t}\approx E$, are approximately given by \begin{equation}\label{dim8} \frac{\delta\sigma_{t\bar t}}{\sigma^{SM}_{t\bar t}}\sim \frac{g_{NP}^2}{g_s^2}\frac{E^2}{m_{NP}^2}+\frac{g_{NP}^4}{g_s^4}\frac{E^4}{m_{NP}^4}+\frac{g_{NP}^2}{g_s^2}\frac{E^4}{m_{NP}^4}+\ldots\, \, . \end{equation} The terms on the right hand side of Eq.~(\ref{dim8}) represent, from left to right, SM interference with dimension 6, dimension 6 squared, and SM interference with dimension 8. Under the assumption that $E\lesssim m_{NP}$, the third term never dominates over the second if $g_{NP}\gtrsim g_s$. This mild strong coupling requirement also singles out the region of parameter space where our bounds are most relevant, as suggested by Fig.~(\ref{mttcut}) and Eq.~(\ref{eftvalidity}). \section{Implications} \label{sec:implications} As our analysis in the previous section shows, our bounds are relevant for models with heavy new states, $m_{NP}\gg m_t$, with moderately large couplings, $g_{NP}\gtrsim g_s$. We now discuss motivated examples where these two features are realized. \subsection{Partially composite tops} Composite Higgs models~\cite{Contino:2010rs,Panico:2015jxa} stand out as particularly relevant for our bounds as they predict new heavy states sharing sizable interactions and mixings with SM states.
The mechanism through which fermion masses are generated in these models, partial compositeness, implies that one or both helicities of the top quark mix strongly with resonances from the composite sector. At low energy this leads to a particular power counting for the four fermion operators that are generated. Assuming for simplicity that the right-handed helicity of the top quark is a composite state, up to order one factors, \begin{equation}\label{comptr} \Delta \mathscr L \approx \frac{g_\rho^2}{m_\rho^2} P\left(\frac{g_{SM}}{g_\rho}\psi,t_R\right)\,, \end{equation} where $P$ is a gauge and Lorentz invariant polynomial of degree four, $m_\rho$ is the mass of the composite states, and $g_\rho$ represents their typical interaction strength, with $g_{SM}\lesssim g_\rho\lesssim 4\pi$. Finally, $g_{SM}\sim 1$ could represent one of the SM gauge couplings or the top Yukawa coupling $y_t$. Additional power counting rules can be found in Ref.~\cite{Panico:2015jxa}. A toy model realizing Eq.~(\ref{comptr}) is shown in Appendix~(\ref{partialcomp}). In that example the mass of a massive gluon, $m_{\mathcal G}$, can be identified with $m_\rho$, and its coupling to composite states, $g_{\mathcal G}$, can be identified with $g_\rho$. Up to $O(1)$ factors, $O_{UV}$ is generated with coefficient $c_{UV}\sim m_t^2/m_{\mathcal G}^2$.\footnote{Analogously to Eq.~(\ref{opscombo}) we define $O_{UV}\equiv (\bar U\gamma^\mu T^AU)(\bar q\gamma_\mu T^Aq+\bar u\gamma_\mu T^Au+\bar d\gamma_\mu T^Ad)$. Since no angular information is used to extract the bounds, constraints on $O_{UV}$ and $O_{QV}$ (Figs.~(\ref{barplot}) and~(\ref{mttcut})) are equivalent.} According to Fig.~(\ref{mttcut}), the projected bounds at 300\,fb$^{-1}$ imply $m_{\mathcal G}\gtrsim 3$\,TeV\@. Given that $g_{NP}\sim g_s$, this is marginally consistent with EFT validity. Other dimension 6 operators also appear. There is a contribution to $\textrm{Z}\sim (g_s^2/g_{\mathcal G}^2)(m_W^2/m_{\mathcal G}^2)$, so that the same model can be constrained by the dijet analysis of Ref.~\cite{Alioli:2017jdo}. This constraint is negligible for $g_{\mathcal G} \gtrsim 3-4$ (see Fig.~(\ref{mttcut})). The four top operator \begin{equation}\label{fourtr} -\frac{g_{\mathcal G}^2}{2 m_{\mathcal G}^2}\left(\bar t_R \gamma_\mu T^A t_R\right)^2 \end{equation} is also generated, and the size of its coefficient is enhanced for large $g_{\mathcal G}$. A bound on the coefficient of this operator corresponding to $m_{\mathcal G}/g_{\mathcal G} > 0.35$\,TeV was extracted by Ref.~\cite{Zhang:2017mls} by using the upper bound on the $pp\to t\bar t t \bar t$ cross section from CMS~\cite{Sirunyan:2017uyt}. Given the limit $m_{\mathcal G}\gtrsim 3$\,TeV from $O_{UV}$, the four top measurement is a subleading constraint when $4\lesssim g_{\mathcal G}\lesssim 10$. In this regime, measurements of the $t\bar t$ differential cross section are the leading constraint on the model. \subsection{Flavor models} Given the strong constraints that exist on four fermion operators involving only the light generations of quarks~\cite{Alioli:2017jdo}, bounds coming from $t\bar t$ production will be relevant only if some degree of flavor non-universality enhances third generation couplings. In this framework, flavor violation can remain under control if the underlying NP model respects flavor symmetries that the EFT then inherits.
One possibility is that the EFT respects Minimal Flavor Violation (MFV)~\cite{DAmbrosio:2002vsn}, such that the SM Yukawas, $Y_{U,D}$, are the only flavor violating spurions that enter effective operators. In this setup, all of the operators in Table~(\ref{tab4f}) can be generated by taking the product of one flavor singlet current and one of the following two bilinears: \begin{equation}\label{mfvbil} \bar q Y_U Y_U^\dagger \gamma^\mu T^A q,\quad \bar u Y_U^\dagger Y_U \gamma^\mu T^A u \, \, , \end{equation} where $q$ and $u$ are now three dimensional vectors in flavor space. The currents in Eq.~(\ref{mfvbil}) are singlets under the SM flavor group (because $Y_U$ transforms as a $\mathbf{3} \otimes \mathbf{\bar 3}$ under $U(3)_q \otimes U(3)_u$). Neglecting light quark masses, $(Y_U Y_U^\dagger)_{ij}=(Y_U^\dagger Y_U)_{ij}=y_t^2\delta_{i3}\delta_{j3}$. Generic UV completions will also generate flavor universal operators that are the product of two flavor singlet currents. The bounds from dijets on flavor universal operators are a factor of $\sim10$ stronger than the bounds on operators including tops (see for example Fig.~(\ref{mttcut})). Bounds from tops can still be relevant if a mild tuning suppresses the flavor universal operators. \begin{figure}[t] \begin{center} ~~\includegraphics[width=0.475\textwidth]{figs/phiup.pdf}~~~~~~~\includegraphics[width=0.475\textwidth]{figs/phidown.pdf} \end{center} \vspace{-.3cm} \caption{ \footnotesize \emph{Left}: constraints on the mass and coupling of a color octet electroweak doublet with hypercharge $Y=-1/2$, transforming as a doublet under the $U(2)_u$ flavor group acting on the first two generation right-handed up quarks, see Eq.~(\ref{phiu}). \emph{Right}: constraints on a color octet electroweak doublet with hypercharge $Y=1/2$, this time coupling to the first two generation right-handed down quarks with a $U(2)_d$ invariant coupling. We show 95\% CL constraints from CMS measurements of the $t\bar t$ invariant mass distribution~\cite{Sirunyan:2018wem} and high luminosity projections. Shaded regions show the constraints extracted from the low energy EFT description as in Eq.~(\ref{eftphiu}), while solid contours show the bounds obtained by calculating the corrections to the $t\bar t$ differential cross section using the full model. Expected CMS exclusions are also displayed (green dashed contours). The dashed gray contours show the bounds on direct pair production of $\Phi_{u,d}$ followed by decay to top plus jet~\cite{Sirunyan:2017yta}, while the shaded gray regions show the bounds on pair production of $\Phi_{u,d}$ followed by decay to bottom plus jet~\cite{Aaboud:2017nmi}. } \label{scalarmodel} \end{figure} Alternatively, operators with tops can naturally dominate if flavor violation respects a reduced symmetry group such as $U(2)^3$~\cite{Barbieri:2011ci}. In this case it is straightforward to identify UV completions where only operators such as those in Table~(\ref{tab4f}) are generated. As an example we extend the SM by including a complex color octet scalar $\Phi_u$ of mass $m_\Phi$. We take $\Phi_u$ to be a doublet under $SU(2)_{EW}$ with hypercharge $Y=-1/2$, transforming as a doublet under the flavor $U(2)_u$ corresponding to the right-handed up type quarks from the first two generations. We consider the following interactions \begin{equation}\label{phiu} y_{\Phi_u} \,\Phi_u^A\,\bar Q T^A u +{\textrm{h.c.}}\, \, . \end{equation} It is straightforward to vary the quantum numbers of the scalar mediator, so that it can couple to different quark bilinears.
Integrating out $\Phi_u$ leads to a single four fermion operator in the low energy theory, \begin{equation}\label{eftphiu} \Delta \mathscr L = \frac{y^2_{\Phi_u}}{m_\Phi^2} \left(\bar Q T^A u\right)\left( \bar u T^A Q\right) = -\frac{1}{6}\frac{y^2_{\Phi_u}}{m_\Phi^2} \left(\bar Q \gamma^\mu T^A Q\right)\left( \bar u \gamma_\mu T^A u\right)+\frac{2}{9}\frac{y^2_{\Phi_u}}{m_\Phi^2} \left(\bar Q \gamma^\mu Q\right)\left( \bar u \gamma_\mu u\right) \, \, . \end{equation} In the last equality we use color and Lorentz Fierz identities to bring the operator to the canonical form used in Table~(\ref{tab4f}). Both octet and singlet color structures are generated. This model is particularly interesting in relation to the bounds we have derived, since, while the scalar $\Phi_u$ cannot be resonantly produced, it can contribute to the $pp\to t\bar t$ differential cross section. Bounds on $\Phi_u$ are shown in the left panel of Fig.~(\ref{scalarmodel}). Constraints from measurements of the $t\bar t$ invariant mass distribution are extracted both in the full model and by using the EFT description of Eq.~(\ref{eftphiu}). For the EFT bounds we fit to masses $m_{t \bar t} < m_\Phi$ to ensure validity of the EFT description (the jaggedness of the EFT bounds results from the binning of the $m_{t \bar t}$ spectrum). We find approximate agreement between the bound using the EFT and the full model, when $m_\Phi \gtrsim 2$\,TeV, verifying that operators of dimension larger than 6 do not play an important role in this regime. For lighter masses, the EFT gives a weaker bound than the full model because fitting to the low energy subset, $m_{t \bar t} < m_\Phi$, is conservative. We compare the above bounds from the $t \bar t$ invariant mass spectrum to bounds from direct pair production of $\Phi_u$. Bounds on pair production of the neutral component of $\Phi_u$ followed by its decay to $\bar tu(t\bar u)$ are extracted from Fig.~(3) of Ref.~\cite{Sirunyan:2017yta}, which uses 35.9\,fb$^{-1}$ at 13\,TeV\@. For pair production of the charged component of $\Phi_u$ followed by decay to $\bar bu(b\bar u)$, bounds are extracted from the coloron model in Fig.~(9) of Ref.~\cite{Aaboud:2017nmi}, which uses 36.7\,fb$^{-1}$ at 13\,TeV\@. In both cases we roughly adapt the bounds, neglecting a possible order one difference in acceptance between the simplified model used by the experimental collaboration and the model of Eq.~(\ref{phiu}). While at low masses and couplings the bounds are dominated by $\Phi_u$ pair production and decay, at larger masses and moderate to large couplings the constraint from the $t\bar t$ invariant mass distribution dominates. Bounds on an analogous model in which a scalar $\Phi_d$ couples to light right-handed down quarks through $y_{\Phi_d} \,\Phi_d^A\,\bar Q T^A d$ are shown in the right panel of Fig.~(\ref{scalarmodel}). \section{Conclusions} \label{sec:conclusions} Measurements of the $t\bar t$ invariant mass distribution at high energies, together with the high precision calculation of the $t\bar t$ cross section, significantly constrain the top quark sector of the SM EFT\@. In this paper we use the most recent data from the CMS collaboration with a luminosity of 35.8\,fb$^{-1}$, and NNLO QCD and NLO EW calculations of the $t\bar t$ differential cross section, to constrain dimension 6 four fermion operators modifying the shape of the $t\bar t$ invariant mass distribution at high energies. Our results are summarized in Table~(\ref{tab4f}) and Figs.~(\ref{barplot}) and~(\ref{mttcut}).
In terms of the mass scale defined in Eq.~(\ref{massscale}), the current CMS bound points to $M\gtrsim 1$\,TeV for the fully chiral operators in Table~(\ref{tab4f}) and $M\gtrsim 2$\,TeV for the combinations defined in Eq.~(\ref{opscombo}). The strongest current bound, $M\gtrsim 5.5$\,TeV, is obtained for the fully vectorlike operator $O_{VV}$, with a positive coefficient and constructive interference with the SM\@. This bound is stronger than expected due to the downward fluctuation in the CMS data that can be observed in Fig.~(\ref{CMSvstheory}). Projected bounds at high luminosity are stronger, of order 2\,TeV and 4\,TeV for the fully chiral operators and the operator combinations, respectively. Our bounds are applicable to NP scenarios in which the states generating the effective operators are heavy and moderately strongly coupled. We project that with more luminosity, measurements of differential top quark distributions will be sensitive to composite Higgs models with partial compositeness, in which the underlying strong sector delivers resonances with moderately large couplings, $g_\rho\gtrsim 3-4$. If we compare our results with those obtained from other hadronic observables like Drell-Yan~\cite{Farina:2016rws} and dijets~\cite{Alioli:2017jdo}, we see (for instance from Fig.~(\ref{CMSvstheory})) that for $t\bar t$ observables experimental systematics are a limiting factor. It would be worthwhile to explore statistical procedures that provide alternatives to unfolding~\cite{DAgostini:1994fjx} (for example see Ref.~\cite{cranmer_kyle_2017_1013926}), where systematic uncertainties may have a different size. It has been shown that measurements of top quark pair differential distributions can be used to constrain the gluon PDF~\cite{Czakon:2016olj}. There is a risk that nonzero operators from the SM EFT may bias future PDF fits, and that the resulting PDFs may lead to incorrect bounds on the size of these operators. It would be interesting to explore the interplay of PDF fits and the SM EFT, such as the possibility of using differential top measurements to perform simultaneous fits to EFT operators and PDFs. Finally, as a next step it would be interesting to perform a multidimensional kinematic fit that goes beyond this study by including angular observables. \section*{Acknowledgments} We thank Kyle Cranmer, Otto Hindrichs, Alex Mitov, and Juan Rojo for helpful conversations. CM and JTR acknowledge the CERN theory group for hospitality while part of this work was completed. MF is supported by the NSF grant PHY-1620628. CM is supported by the James Arthur Graduate Fellowship. JTR is supported by the NSF CAREER grant PHY-1554858. \newpage
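\section*{A numerical sketch of the bound extraction}

For illustration, the following short \textit{Python} sketch shows the structure of the bound extraction of Eqs.~(\ref{sigmaeft}) and~(\ref{chi2bounds}) for a single operator coefficient. All numerical inputs below are placeholders chosen for readability; they are not the measured cross sections or covariances used in our fits.

\begin{verbatim}
import numpy as np

# Placeholder per-bin inputs (NOT the real measurement): SM prediction,
# linear (interference) and quadratic (NP squared) terms, pseudo-data.
sm   = np.array([120.0, 45.0, 12.0, 3.0, 0.6])   # sigma_SM^(I)
lin  = np.array([  2.0,  1.5,  0.9, 0.5, 0.2])   # sigma_i^(I)
quad = np.array([  0.8,  0.9,  1.1, 1.3, 1.6])   # sigma_{i,i}^(I)
data = sm.copy()                                 # SM-like pseudo-data
cov_inv = np.linalg.inv(np.diag((0.07 * sm) ** 2))  # toy 7% errors

def chi2(c):
    # Eq. (sigmaeft) truncated to one operator, inserted in Eq. (chi2bounds)
    r = sm + c * lin + c ** 2 * quad - data
    return r @ cov_inv @ r

cs = np.linspace(-10.0, 10.0, 20001)
dchi2 = np.array([chi2(c) for c in cs])
dchi2 -= dchi2.min()
allowed = cs[dchi2 < 3.84]   # 95% CL for one degree of freedom (Wilks)
print("95% CL interval: [%.2f, %.2f]" % (allowed.min(), allowed.max()))
\end{verbatim}

In the actual analysis, the covariance also includes the correlated theory and systematic pieces of Eq.~(\ref{covproj}), and the minimization runs over the full subset of coefficients ${\bf c}$ that are allowed to float.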
\section{Introduction} \label{sec:intro} The design and fabrication of novel small devices require the synergy of experiment, mathematical modeling and numerical simulation. In epitaxial growth, crystal surface features such as thin films, which are building blocks of solid-state devices, are grown on a substrate by material deposition from above. Despite continued progress, the modeling and simulation of epitaxial phenomena remains challenging because it involves reconciling a wide range of length and time scales. An elementary process on solid surfaces is the hopping of atoms in the presence of line defects (``steps'') of atomic height~\cite{evansetal06,jeongwilliams99,williamsbartelt96}: atoms hop on terraces, and attach to and detach from step edges (or island boundaries). Burton, Cabrera and Frank (BCF)~\cite{bcf51} first described each step edge as a boundary moving by mass conservation of point defects (``adatoms'') which diffuse on terraces. In the BCF theory, the step motion occurs near thermodynamic equilibrium. Subsequent theories have accounted for far-from-equilibrium processes; for a review see section~\ref{sec:model}. The macroscale behavior of crystal surfaces is described by use of effective material parameters such as the step stiffness, $ {\tilde\beta}$~\cite{margetiskohn06}. In principle, $ {\tilde\beta}$ depends on the step edge orientation angle, $\theta$, and is viewed as a quantitative measure of step edge fluctuations~\cite{akutsu86,stasevichetal04}. Generally, effective step parameters such as $ {\tilde\beta}$ originate from atomistic processes whose inputs are hopping rates for atoms; in practice, however, the parameters are often provided by phenomenology. For example, the dependence of $ {\tilde\beta}$ on $\theta$ is usually conjectured by invoking the underlying crystal symmetry~\cite{banchetal05,hausservoigt05,siegeletal04,spencer04}.\looseness=-1 In this article we analyze a kinetic model for out-of-equilibrium processes~\cite{caflischetal99,caflischli03} in order to: (i) derive a Gibbs-Thomson (GT) type formula, which relates the adatom flux normal to a curved step edge and the step edge curvature~\cite{israelikandel99,jeongwilliams99}; and (ii) determine the step stiffness $ {\tilde\beta}$, which enters the GT relation, as a function of $\theta$. For this purpose, we apply perturbations of the kinetic (nonequilibrium) steady state of the model for small P\'eclet number $P$, which is the ratio of the material deposition flux to the diffusive flux along a step edge, i.e. \begin{equation} P = (2 a^3 \bar f) /D_E~, \label{eq:Pdef} \end{equation} in which $a$ is an atomic length, $\bar f$ is a characteristic size for the flux $f$ normal to the boundary from each side, and $D_E$ is the coefficient for diffusion along the boundary. A factor of $2$ is included in (\ref{eq:Pdef}) since the flux is two-sided and the total flux is of size $2\bar f$. For sufficiently small $\theta$ and $P$, we find that the stiffness has a behavior similar to that predicted by equilibrium-based calculations~\cite{stasevich06}.\looseness=-1 For the boundary of a two-dimensional material region, a definition of $ {\tilde\beta}$ can arise from linear kinetics. In the setting of atom attachment-detachment at an edge, this theory states that the material flux, $f$, normal to the curved boundary is linear in the difference of the material density, $\rho$, at the boundary from a reference or ``equilibrium'' density, $\rho_0$.
The GT formula connects $\rho_0$ to the boundary curvature, $\kappa$. For unit layer thickness and negligible step interactions~\cite{jeongwilliams99}, the normal flux reads\looseness=-1 \begin{equation} f = {D_A} (\rho-\rho_0),\label{eq:f-noperm} \end{equation} where $ {D_A}$ is the diffusion coefficient for attachment and detachment, and $\rho_0$ is defined by \begin{equation} \rho_0=\rho_*\,e^{\frac{ {\tilde\beta}\,\kappa}{k_BT}}\sim \rho_*\biggl(1+\frac{ {\tilde\beta}}{k_BT}\, \kappa\biggr),\qquad | {\tilde\beta} \kappa|\ll k_BT~. \label{eq:GT} \end{equation} The last equation is referred to as the GT formula, in accord with standard thermodynamics~\cite{biskupetal04,gibbs28,krishnamacharietal96,landau,rowlinsonwidom82}. In~(\ref{eq:GT}), $\rho_*$ is the equilibrium density near a straight step edge and $k_B T$ is the thermal energy ($k_B$ is Boltzmann's constant and $T$ is temperature); the condition $| {\tilde\beta}\kappa|\ll k_B T$ is satisfied in most experimental situations~\cite{tersoffetal97}. Equation~(\ref{eq:f-noperm}) does not account for step permeability, by which terrace adatoms hop directly to adjacent terraces~\cite{ozdemir90,tanakaetal97}. This process is discussed in section~\ref{sec:model}.\looseness=-1 For systems that are nearly in equilibrium, the exponent in~(\ref{eq:GT}) is derived from a thermodynamic driving force starting from the step line tension $\beta$, the free energy per unit length of the boundary~\cite{gurtin93}. The step stiffness $ {\tilde\beta}$ is related to $\beta$ by~\cite{akutsu86,fisher84,fisher82}\looseness=-1 \begin{equation} {\tilde\beta} = \beta + \beta_{\theta \theta}\qquad (\beta_\theta:=\partial_\theta\beta)~. \label{eq:tension-stiff} \end{equation} Evidently, the knowledge of $ {\tilde\beta}$ alone does not yield $\beta$ uniquely: by~(\ref{eq:tension-stiff}), \begin{equation} \beta(\theta)=C_1\,\cos\theta+C_2\,\sin\theta+\int_0^{\theta}d\vartheta\ {\tilde\beta}(\vartheta)\,\sin(\theta-\vartheta) \label{eq:beta-tdbeta} \end{equation}where $C_1$ and $C_2$ are in principle arbitrary constants. The parameters $\beta$ and $ {\tilde\beta}$ are important in the modeling and numerical simulation of epitaxial phenomena. In thermodynamic equilibrium, the angular dependence of the step line tension, $\beta(\theta)$, determines the equilibrium (two-dimensional) shape of step edges or islands, e.g. the macroscopic flat parts (``facets'') of the step are found by minimizing the step line energy through the Wulff construction~\cite{herring51,pengetal99,rottmanwortis84,szalmaetal05,taylor74,wulff1901}. Near thermodynamic equilibrium, the step stiffness, $ {\tilde\beta}(\theta)$, controls the temporal decay of fluctuations from equilibrium~\cite{akutsu86,jeongwilliams99}. The significance of $ {\tilde\beta}$ was pointed out by de Gennes in the context of polymer physics almost forty years ago~\cite{deGennes68,einstein03}: the energy of a polymer (or step edge) can be described by a kinetic energy term proportional to $ {\tilde\beta}\cdot (dx/dy)^2$, i.e., the stiffness times a ``velocity'' squared, where $x$ and $y$ are suitable space coordinates and $y$ loosely corresponds to ``time.'' Starting with a two-dimensional Ising model, Stasevich et al.~\cite{stasevich_thesis,stasevich06,stasevichetal04,stasevichetal05} carried out a direct derivation of $\beta(\theta)$ and $ {\tilde\beta}(\theta)$ from an equilibrium perspective based on atomistic key energies.
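Relations~(\ref{eq:tension-stiff})--(\ref{eq:beta-tdbeta}) are easy to exercise numerically: any trial stiffness, once integrated to a line tension via~(\ref{eq:beta-tdbeta}), must be recovered under the map $\beta\mapsto\beta+\beta_{\theta\theta}$, for any choice of $C_1$ and $C_2$. The following minimal sketch (in Python; the smooth trial function $ {\tilde\beta}(\vartheta)=1+\frac{1}{2}\cos 4\vartheta$ and the constants are arbitrary illustrations, not model predictions) verifies this to discretization accuracy.
\begin{verbatim}
import numpy as np

# Illustrative trial stiffness (arbitrary smooth, fourfold-symmetric choice).
def beta_tilde(th):
    return 1.0 + 0.5 * np.cos(4.0 * th)

def beta(th, C1=0.3, C2=-0.2, npts=4001):
    # Line tension from eq. (beta-tdbeta): homogeneous part plus the
    # particular integral int_0^theta beta_tilde(v) sin(th - v) dv.
    v = np.linspace(0.0, th, npts)
    y = beta_tilde(v) * np.sin(th - v)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v))  # trapezoid rule
    return C1 * np.cos(th) + C2 * np.sin(th) + integral

# Verify beta + beta_{theta theta} = beta_tilde by centered differences.
h = 1.0e-3
for th in (0.2, 0.5, 1.0):
    recovered = beta(th) + (beta(th + h) - 2.0 * beta(th) + beta(th - h)) / h**2
    print(th, recovered, beta_tilde(th))   # agreement to ~1e-5
\end{verbatim}
The homogeneous part $C_1\cos\theta+C_2\sin\theta$ is annihilated by $1+\partial_\theta^2$, which is why the recovered stiffness is insensitive to the choice of constants.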
For most systems, however, there has been no standard theoretical method for determining $\beta(\theta)$ and $ {\tilde\beta}(\theta)$.\looseness=-1 More generally, energetic principles such as a thermodynamic driving force are powerful as a means of describing the macroscopic effect of atomistic kinetics. The range of validity of energetic principles is not fully known and is an important unresolved issue. We believe that energetic arguments should be valid for systems that are nearly in local equilibrium, where the relevant processes approximately satisfy detailed balance. For systems that are far from equilibrium, however, energetic principles may serve as a valuable qualitative guide, even if they are not quantitatively accurate. The kinetic and atomistic origin of a material parameter that plays the role of the step stiffness is the subject of this article. For a step edge or an island boundary on an epitaxial crystal surface, we use the detailed kinetic model formulated by Caflisch et al.~\cite{caflischetal99,caflischli03} and further developed by Balykov and Voigt \cite{balykovvoigt05,balykovvoigt06} for the dynamics of the boundary. The basic ingredients are: (i) diffusion equations for adatom and edge-atom densities on terraces and along step edges; (ii) a convection equation for the kink density along step edges; and (iii) constitutive, algebraic laws for adatom fluxes, sources for kinks and the kink velocity by mean-field theory. This model admits a kinetic (nonequilibrium) steady state that allows for epitaxial growth via step flow. The model has been partly validated by kinetic Monte Carlo simulations~\cite{caflischetal99}. The detailed step model described in~\cite{caflischetal99,caflischli03} and section~\ref{subsec:noneq-kin} focuses on the kinetics of adatoms, edge-atoms and kinks at a step edge. As discussed by Kallunki and Krug~\cite{kallunkikrug03}, an edge-atom is {\it energetically} equivalent to two kinks. For example, the equilibrium density of kinks is proportional to $\exp[-\varepsilon/(k_B T)]$ while the equilibrium density of edge-atoms is proportional to $\exp[-2\varepsilon/(k_B T)]$, in which $\varepsilon$ is defined as the kink energy in~\cite{kallunkikrug03}, or identified with $-(k_B T/2)\log({D_K/D_E})$ in~\cite{caflischetal99}; $D_K$ and $D_E$ are diffusion coefficients for kinks and edge-atoms. On the other hand, the {\it kinetics} in~\cite{caflischetal99,caflischli03} are different for edge-atoms and kinks, since edge-atoms can hop at rate $D_E$, while kinks move through detachment of atoms at rate $D_K$. This situation is consistent with the kinetics described in~\cite{kallunkikrug03}, in which $D_E$ and $D_K$ are proportional to $\exp[-E_{st}/(k_B T)]$ and $\exp[-E_{det}/(k_B T)]$, respectively. We are aware that the mean-field laws applied here, although plausible and analytically tractable, pose a limitation: actual systems are characterized by atomic correlations, which can cause deviations from this mean-field approximation. In particular, the validity of the mean-field assumption may be limited to orientation angles $\theta$ in some neighborhood of $\theta=0$. Note also that the most interesting results of this analysis are for $\theta$ near zero. Determination of the range of validity for this model is an important endeavor but beyond the scope of this paper. An extension of this model, which could improve its range of validity, would be to explicitly track the kinks in a step edge.
This additional discreteness in the model would make the analysis of step stiffness more difficult. Our analysis is a systematic study of predictions from the mean-field approach only, and the conclusions presented here are all derived within the context of this approach. On the other hand, our analysis is more detailed than previous treatments of step stiffness, since it is based on kinetics rather than a thermodynamic driving force. Moreover, the model includes atomistic information, through a density of adatoms, edge-atoms and kinks. For evolution near the kinetic steady state, we derive for the mass flux, $f$, a term analogous to the Gibbs-Thomson formula~(\ref{eq:GT}), and subsequently find the corresponding angular dependence of the step stiffness, $ {\tilde\beta}(\theta)$. Our main assumptions are: (i) the motion of step edges or island boundaries is slower than the diffusion of adatoms and edge-atoms and the convection of kinks, which amounts to the ``quasi-steady approximation''; (ii) the mean step edge radius of curvature, $\kappa^{-1}$, is large compared to other length scales including the step height, $a$; and (iii) the edge P\'eclet number, $P$, given by~(\ref{eq:Pdef}) is sufficiently small, which signifies the usual regime for molecular beam epitaxy (MBE). To the best of our knowledge, the analysis in this paper offers the first kinetic derivation of a Gibbs-Thomson type relation and the step stiffness for all admissible values of the step edge orientation angle, $\theta$. (This approach is distinctly different from the one in e.g.~\cite{shenoy04} where classical elasticity is invoked.) Our results for the stiffness are summarized in section~\ref{sec:res}; see~(\ref{eq:stiff-smalltheta})--(\ref{eq:k0-asymp1}). A principal result of our analysis is that $ {\tilde\beta} =O(\theta^{-1})$ for $O(P^{1/3}) < \theta \ll 1$, which by~(\ref{eq:beta-tdbeta}) yields $\beta=O(\theta\ln\theta)$ for the step line tension. This result is in agreement with the independent analysis in~\cite{stasevich_thesis,stasevich06,stasevichetal04,stasevichetal05}, which makes use of equilibrium concepts. A detailed comparison of the two approaches is not addressed in our analysis. Our findings are expected to have significance for epitaxial islands, for example in predicting their facets, their roughness (e.g., fractal or smooth island boundaries) and their stability, as well as for the numerical simulation of epitaxial growth. More generally, our analysis can serve as a guide for kinetic derivations of the GT relation in other material systems. For example, it should be possible to derive the step stiffness for a step in local thermodynamic equilibrium within the context of the same model. This topic is discussed briefly in section~\ref{sec:conclusion}. \looseness=-1 The present work extends an earlier analysis by Caflisch and Li~\cite{caflischli03}, which addressed the stability of step edge models and the derivation of the GT relation. The analysis in~\cite{caflischli03}, however, only determined the value of $ {\tilde\beta}$ along the high-symmetry orientation, $\theta=0$. This restriction was due to a scaling regime used in~\cite{caflischli03} on the basis of mathematical rather than physical principles. In the present article we transcend the analytical limitations of~\cite{caflischli03} by applying perturbation theory guided by the physics of the step-edge evolution near the kinetic steady state. 
\looseness=-1 Our analysis also leads to formulas for {\it kinetic rates} in boundary conditions involving adatom fluxes. In particular, the attachment-detachment rates are derived as functions of the step edge orientation, and are shown to be different for up- and down-step edges. This asymmetry amounts to an Ehrlich-Schwoebel (ES) effect~\cite{ehrlichhudda66,schwoebelshipsey66}, due to geometric effects rather than a difference in energy barriers. In addition, if the terrace adatom densities are treated as input parameters, the adatom fluxes involve effective {\it permeability} rates, by which a fraction of adatoms directly hop to adjacent terraces (without attaching to or detaching from step edges)~\cite{filimonovhervieu04,ozdemir90,tanakaetal97}. Our main results for the kinetic rates are described by~(\ref{eq:ES-Dpm})--(\ref{eq:Apm-def}). In this article we do not address the effects of elasticity, which are due for instance to bulk stress. One reason is that elasticity requires a non-trivial modification of the kinetic model that we use here. This task lies beyond our present scope. Another reason is that, in many physically interesting situations, the influence of elasticity may be described well via long-range step-step interactions that do not affect the step stiffness. The study of elastic effects is the subject of work in progress.\looseness=-1 The remainder of this article is organized as follows. In section~\ref{sec:model} we review the relevant island dynamics model and the concept of step stiffness: In section~\ref{subsec:geom} we introduce the step geometry; in section~\ref{subsec:bcf} we outline elements of the BCF model, which highlight the GT formula; in section~\ref{subsec:noneq-kin} we describe the previous kinetic, nonequilibrium step-edge model~\cite{caflischetal99,caflischli03}, which is slightly revised here; and in section~\ref{subsec:stiff-calc} we outline our program for the stiffness, based on the perturbed kinetic steady state for small step edge curvature, $\kappa$. In section~\ref{sec:res} we provide a summary of our main results. In section~\ref{sec:steady} we derive analytic formulas pertaining to the kinetic steady state: In section~\ref{subsec:flux} we use the mass fluxes as inputs and derive the ES effect~\cite{ehrlichhudda66,schwoebelshipsey66}; and in section~\ref{subsec:density} we use the mass densities as inputs to derive asymmetric, $\theta$-dependent step-edge permeability rates. In section~\ref{sec:GT-stiff} we apply perturbation theory to find $ {\tilde\beta}(\theta)$ by using primarily the mass fluxes as inputs: In section~\ref{subsec:pertb} we carry out the perturbation analysis to first order for the edge-atom and kink densities as $\kappa\to 0$; in section~\ref{subsec:stiff} we derive the step stiffness as a function of $\theta$; and in section~\ref{subsec:extension} we discuss an alternative viewpoint on the stiffness. In section~\ref{sec:conclusion} we discuss our results, and outline possible limitations. The appendices provide derivations and proofs needed in the main text. \section{Background} \label{sec:model} In this section we provide the necessary background for the derivation of the step stiffness. First, we describe the step configuration. Second, we revisit briefly the constituents of the BCF theory with focus on the GT formula and the step stiffness, $ {\tilde\beta}$. Our review provides the introduction of $ {\tilde\beta}$ from a kinetic rather than a thermodynamic perspective. 
Third, we describe in detail the nonequilibrium kinetic model~\cite{caflischetal99,caflischli03} with emphasis on the mean-field constitutive laws for edge-atom and kink densities. Fourth, we set a perturbation framework for the derivation of $ {\tilde\beta}(\theta)$. \subsection{Step geometry and conventions} \label{subsec:geom} Following~\cite{caflischetal99,caflischli03} we consider a simple cubic crystal (solid-on-solid model) with lattice spacing $a$ and crystallographic directions identified with the $x$, $y$ and $z$ axes of the Cartesian system. The analysis of this paper is for a step edge or island boundary to which there is flux $f$ of atoms from the adjoining terraces. The flux $f$ may vary along the edge, as well as in time, and it comes from both sides of the edge, but it is characterized by a typical size $\bar f$ which has units of $(length \cdot time)^{-1}$. In~\cite{caflischetal99,caflischli03} the geometry was specialized to a step train with interstep distance $2L$ and deposition flux $F$, so that in steady state the flux to the step is $f=LF$. This global scenario is not necessary, however, since the analysis here is local and only requires a nonzero quasi-steady flux $f$. This could occur even with no deposition flux $F=0$; for example, in annealing. \begin{figure} \begin{tabular}{cc} \includegraphics[width=2.3in,height=3.4in]{macro_fig.pdf} & \includegraphics[width=4.0in,height=4.5in,angle=0]{step_fig.pdf} \end{tabular} \caption{The macroscopic (left) and microscopic (right) views of a step edge in the (high-symmetry) $xy$-plane of a crystal. In the macroscopic view, the step edge orientation relative to the $x$ axis is indicated by the angle $\theta$. The $+$ ($-$) sign indicates an upper (lower) terrace. The surface height decreases to the right. The microscopic view shows adatoms ($\rho$), edge-atoms ($\phi$), left-facing kinks ($k_\ell$) and right-facing kinks ($k_r$); $\Omega_{+}$ ($\Omega_{-}$) is the region of the upper (lower) terrace.\looseness=-1}\label{fig:Fig1} \end{figure} For algebraic convenience we adopt and extend the notation conventions of~\cite{caflischli03}. Specifically, we use the following symbols: ($x, y, z$) for dimensional spatial coordinates, $t$ for time, $D$ for any diffusion coefficient, $\rho$ for number density per area, and $\xi$ for number density per length; and define the corresponding nondimensional quantities $\tilde x, \tilde y, \tilde z, \tilde t, \tilde D, \tilde \rho, \tilde \xi$ by \begin{eqnarray} (\tilde x, \tilde y, \tilde z) &:=& (x/a, y/a, z/a)~, \label{eq:nond-space}\\ \tilde t &:=& (a\bar f)\, t~, \label{eq:nond-time}\\ \tilde D &:=& D/(a^3\bar f)~, \label{eq:nond-diff}\\ \tilde \rho &:=& a^2\rho~, \label{eq:nond-rho}\\ \tilde \xi &:=& a\xi~. \label{eq:nond-phi} \end{eqnarray} Now drop the tildes, so that $x, y, z, t, D, \rho, \xi$ are dimensionless. This choice amounts to measuring all distances in units of $a$ and all times in units of $(a\bar f)^{-1}$. Equivalently, (\ref{eq:nond-space})--(\ref{eq:nond-phi}) correspond to setting $a=1$ and $\bar f=1$. For our analysis, the single most important dimensionless parameter is the P\'eclet number $P$ from (\ref{eq:Pdef}), which is equal to $2 D_E^{-1}$ after nondimensionalization; i.e. \begin{equation} D_E=2 P^{-1}. \end{equation} Next, we describe the coordinates of the step geometry in more detail. We consider step boundaries that stem from perturbing a straight step edge coinciding with a fixed axis (e.g., the $x$-axis).
All steps are parallel to the high-symmetry (``basal''), $xy$-plane of the crystal. The projection of each edge on the basal plane is represented macroscopically by a smooth curve with a local tangent that forms the (signed) angle $\theta$ with the $x$-axis, where $-\theta_0< \theta <\theta_0$~\footnote{The definition of $\theta$ here is the same as that in~\cite{caflischetal99}, but different from the one in~\cite{caflischli03} where $\theta$ is the angle formed by the local tangent and the $y$ axis.}. Without loss of generality we take $0\le\theta< \theta_0$ and assume that $\theta_0 < \pi/4$ in our analysis. We take the upper terrace to be to the left of an edge so that all steps move to the right during the growth process. So, the projection of each step edge is represented by\looseness=-1 \begin{equation} y=Y(x,t)~, \label{eq:edge-repr} \end{equation} where $Y(x,t)$ is a sufficiently differentiable function of ($x, t$). It follows that the unit normal and tangential vectors to the step boundary are~\cite{caflischli03} \begin{equation} {\hat{\bf n}} = (\sin\theta,-\cos\theta)=(y_s,-x_s)~,\qquad {\hat\tg}=(\cos\theta,\sin\theta)=(x_s,y_s)~, \label{eq:unit-def} \end{equation} where $s$ is the arc length and lowercase subscripts denote partial differentiation (e.g., $x_s:=\partial_s x$) unless it is noted or implied otherwise. The step edge curvature is \begin{equation} \kappa=-\theta_s~. \label{eq:kappa} \end{equation} There is one more geometric relation that deserves attention. By denoting the densities of left- and right-facing kinks $k_{l}$ and $k_{r}$, respectively, we have~\cite{caflischetal99} \begin{equation} k_r-k_l=-\tan\theta~; \label{eq:krl-theta} \end{equation} see section~\ref{subsec:noneq-kin} for further discussion. This geometric relation poses a constraint on the total kink density, $k$ ($k\ge 0$). By \begin{equation} k:=k_{r}+k_{l}\label{eq:k-def} \end{equation}and~(\ref{eq:krl-theta}), $k$ must satisfy \begin{equation} k\ge |\tan\theta|~. \label{eq:k-ctr} \end{equation} The formulation of a nonequilibrium kinetic step edge model (section~\ref{subsec:noneq-kin}) requires the use of several coordinate systems for an island boundary; these are described in appendix~\ref{app:coods}. In the following analysis it becomes advantageous to use $\theta$ as the main local coordinate. Its importance as a dynamic variable along a step edge is implied by the steady-state limit $k\to |\tan\theta|$ as $\kappa\to 0$ and $P\to 0$; see~(\ref{eq:k0-asymp1-corr}). Some useful identities that enable transformations to the $(\theta, t)$ variables are provided in appendix~\ref{app:coods}.\looseness=-1 \subsection{BCF model} \label{subsec:bcf} In the standard BCF theory~\cite{bcf51} the projections of step edges on the basal plane are smooth curves that move by the attachment and detachment of atoms due to mass conservation. The BCF model comprises the following near-equilibrium evolution laws. (i) The adatom density solves the diffusion equation on terraces. (ii) The adatom flux and density satisfy (kinetic) boundary conditions for atom attachment-detachment at step edges. (iii) The step velocity equals the sum of the adatom fluxes normal to the edge. In this setting, the GT formula links the normal mass flux to the step edge curvature.\looseness=-1 We next describe the equations of motion in the BCF model for comparisons with the kinetic model of section~\ref{subsec:noneq-kin}.
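Before doing so, we note that the conventions~(\ref{eq:unit-def})--(\ref{eq:kappa}) admit a quick numerical check for a concrete test profile. In the sketch below (Python; the sinusoidal profile is an arbitrary illustration, not part of the model), the curvature $\kappa=-\theta_s$ obtained by arc-length differentiation of the tangent angle agrees with the direct graph formula $-Y_{xx}(1+Y_x^2)^{-3/2}$ for the representation~(\ref{eq:edge-repr}).
\begin{verbatim}
import numpy as np

# Illustrative step-edge profile y = Y(x) (arbitrary test function).
x = np.linspace(0.0, 2.0 * np.pi, 20001)
Y = 0.2 * np.sin(x)

Yx  = np.gradient(Y, x)            # Y_x
Yxx = np.gradient(Yx, x)           # Y_xx
theta = np.arctan(Yx)              # tangent angle, cf. eq. (unit-def)
s = np.concatenate(([0.0],
    np.cumsum(np.sqrt(1.0 + Yx[:-1]**2) * np.diff(x))))   # arc length

kappa_arc   = -np.gradient(theta, s)          # kappa = -theta_s, eq. (kappa)
kappa_graph = -Yxx / (1.0 + Yx**2) ** 1.5     # same quantity from the graph

# Interior maximum discrepancy: small (discretization error only).
print(np.max(np.abs(kappa_arc - kappa_graph)[100:-100]))
\end{verbatim}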
The density, $\rho$, of adatoms on each terrace solves \begin{equation} \partial_t \rho- {D_T}\,\Delta\rho=F~, \label{eq:rho-pde} \end{equation}where $ {D_T}$ is the terrace diffusion coefficient and $\Delta$ denotes the Laplacian in $(x,y)$. \looseness=-1 As an extension of the BCF model, the boundary conditions for~(\ref{eq:rho-pde}) are now formulated by linear kinetics with inclusion of both atom attachment-detachment {\it and} step permeability~\cite{jeongwilliams99,ozdemir90,tanakaetal97}: \begin{equation} f_{\pm}= {D_A}^{\pm}\, (\rho_{\pm}-\rho_0^\pm)\pm D_p^\pm\,(\rho_+-\rho_-)~; \label{eq:f-bcf} \end{equation} cf.~(\ref{eq:f-noperm}). Here, $f_\pm$ is the adatom flux normal to an edge from the upper ($+$) or lower ($-$) terrace, i.e., \begin{equation} \mp f_\pm:=v\rho_\pm+ {D_T} {\hat{\bf n}}\cdot(\nabla\rho)_\pm~,\label{eq:fpm-def} \end{equation} $\rho_{\pm}$ is the terrace adatom density restricted to the step edge, $ {D_A}^{\pm}$ is the attachment-detachment rate coefficient and $D_p^{\pm}$ is the permeability rate coefficient. These rates can account for different up- and down-step energy barriers, e.g. the ES effect in the case of $ {D_A}^\pm$~\cite{ehrlichhudda66,schwoebelshipsey66}. The reference density $\rho_0^\pm$ is given by~(\ref{eq:GT}) where $\rho_*$ is replaced by $\rho_*^\pm$ for up- and down-step edge asymmetry. Evidently,~(\ref{eq:f-bcf}) forms an extension of formula~(\ref{eq:f-noperm}) but still corresponds to near-equilibrium kinetics; it will be modified in section~\ref{subsec:noneq-kin}. Equations~(\ref{eq:rho-pde}) and~(\ref{eq:f-bcf}) provide the fluxes $f_\pm$ as functions of the step edge position and curvature. The step velocity, $v$, is then determined by mass conservation, \begin{equation} v=f_+ + f_-~.\label{eq:st-vel} \end{equation}In this formulation, step-edge diffusion and kink motion are neglected. In the next section, the BCF model is enriched with kinetic boundary conditions that account for the motion of edge-atoms and kinks. \subsection{Atomistic, nonequilibrium kinetic model} \label{subsec:noneq-kin} In this section we revisit the kinetic model by Caflisch et al.~\cite{caflischetal99,caflischli03}, which is an extension of the BCF model (section~\ref{subsec:bcf}) to nonequilibrium processes. We apply this kinetic model~\cite{caflischetal99} to step edges of arbitrary orientation; and further revise it to account for a step edge diffusion coefficient defined along the (fixed) crystallographic $x$-axis. This last feature, although not important for our present purpose of calculating the step stiffness, renders the model consistent with recent studies of the edge-atom migration along a step edge~\cite{kallunkikrug03}. The following processes are included. (i) Adatom diffusion on terraces, which is described by~(\ref{eq:rho-pde}) of the BCF theory, and edge-atom diffusion along step edges. (ii) Convection of kinks on step edges with sinks and sources to account for conversion of terrace adatoms and edge-atoms to kinks. (iii) Constitutive laws that relate mass fluxes, sources for kinks and the step velocity with densities via a mean-field theory, and modify the BCF laws~(\ref{eq:f-bcf}) and~(\ref{eq:st-vel}). In this model, kink densities are assumed sufficiently small, enabling the neglect of higher-order terms within the mean-field approach. Recently, extensions of this theory were developed~\cite{balykovvoigt05,balykovvoigt06,filimonovhervieu04}, including higher kink densities by Balykov and Voigt~\cite{balykovvoigt05,balykovvoigt06}. 
Next, we state the requisite equations of motion in addition to~(\ref{eq:rho-pde}) for adatom terrace diffusion.\looseness=-1 \subsubsection{Equations of motion along step edges} \label{sssec:eq-mot} An assumption inherent to the present model is the different kinetics of kinks and edge-atoms. Each of these species is of course not conserved separately, since edge-atoms can generate kinks, but can be described by a distinct density: $\phi(x,t)$ for edge-atoms and $k(x,t)$ for kinks. In addition, their motion is different: the edge-atom flux follows from gradients of the density $\phi$; while the kink flux stems from a velocity field, $w$. We proceed to describe the equations of motion. The edge-atom number density, $\phi(x,t)$, solves \begin{equation} \partial_t\phi - {D_E}\,\partial_x^2\phi=\frac{ f_++ f_-}{\cos\theta}-f_0~, \label{eq:phi-pde} \end{equation} where $ {D_E}$ is the step edge diffusivity defined along the high-symmetry ($x$-) axis and $f_0$ represents the loss of edge-atoms to kinks; see~(\ref{eq:fpm}) and~(\ref{eq:f0}) below. For later algebraic convenience, it is advantageous to transform~(\ref{eq:phi-pde}) to $(\theta, t)$ variables. By the formulas ~(\ref{eq:partx}) and~(\ref{eq:part-tx}) of appendix~\ref{app:coods}, (\ref{eq:phi-pde}) is thus recast to \begin{equation} \partial_t|_\theta\phi+\kappa(v_\theta+v\tan\theta)\partial_\theta\phi- {D_E}\frac{\kappa}{\cos\theta} \partial_\theta\frac{\kappa}{\cos\theta}\partial_\theta\phi=\frac{ f_++ f_-}{\cos\theta}-f_0~. \label{eq:phi-pde-th} \end{equation} We turn our attention to kinks. The total kink density, $k(x,t)$, of~(\ref{eq:k-def}) solves \begin{equation} \partial_t k+\partial_x[w(k_r-k_l)]=2(g-h)~, \label{eq:k-pde} \end{equation} where $w(k_r-k_l)=-w\tan\theta$ is the flux of kinks with respect to the $x$-axis, $g$ is the net gain in kink pairs due to nucleation and breakup, and $h$ is the net loss in kink pairs due to creation and annihilation~\cite{caflischetal99}. The terms $w$, $g$ and $h$ are described as functions of densities in~(\ref{eq:w-def})--(\ref{eq:h-def}) below. In the $(\theta,t)$ coordinates, (\ref{eq:k-pde}) reads \begin{equation} \partial_t|_\theta k+\kappa(v_\theta+v\tan\theta)\partial_\theta k+\frac{\kappa}{\cos\theta}\partial_\theta(w\tan\theta)=2(g-h)~. \label{eq:k-pde-th} \end{equation} Equations~(\ref{eq:phi-pde}) and~(\ref{eq:k-pde}) can be transformed to other coordinates, including the $(s, t)$ variables where $s$ is the arc length. For completeness, in appendix~\ref{app:id-edge} we provide relations that are needed in such transformations; and in appendix~\ref{app:eq-mot} we describe the ensuing equations of motion in the $(s,t)$ coordinates. Partial differential equations~(\ref{eq:phi-pde}) and (\ref{eq:k-pde}) are coupled with the motion of step edges. In the following analysis, we apply the quasi-steady approximation, neglecting the time derivative in~(\ref{eq:phi-pde-th}) and~(\ref{eq:k-pde-th}). For definiteness, the boundary conditions in $x$ can be taken to be periodic. It remains to prescribe boundary conditions for atom attachment-detachment, i.e., specify $f_\pm$ in~(\ref{eq:fpm-def}). In the present nonequilibrium context, $f_\pm$ are no longer given by~(\ref{eq:f-bcf}) of the BCF model, as discussed next. \subsubsection{Constitutive laws} \label{sssec:mf} Following~\cite{caflischetal99,caflischli03} we describe mean-field constitutive laws for fluxes related to a tilted step edge (at $\theta\neq 0$). 
We also provide a geometric relation for the step edge velocity, $v$, which in a certain sense replaces the BCF law~(\ref{eq:st-vel}). Because the explanations are given elsewhere~\cite{balykovvoigt05,caflischetal99}, we state the mean-field laws without a detailed discussion of their origin. By mean-field theory, the terrace adatom flux normal to the step edge is~\cite{caflischetal99} \begin{eqnarray} f_\pm&=&[ {D_T}\rho_\pm- {D_E}\phi+l_{j_\pm}( {D_T}\rho_\pm-D_K)k+m_{j_\pm}( {D_T}\rho_\pm\phi-D_Kk_rk_l)\nonumber\\ \mbox{}&&+n_{j_\pm}( {D_T}\rho_\pm k_rk_l-D_B)]\cos\theta,\quad j_+=2,\ j_-=3~, \label{eq:fpm} \end{eqnarray}where $l_j$, $m_j$ and $n_j$ are (effective) coordination numbers (positive integers) that count the number of possible paths in the kinetic processes, weighted by the relative probability of a particle to be at the corresponding position. Also, $D_K$ is the diffusion coefficient for an atom from a kink, and $D_B$ is the diffusion coefficient for an atom from a straight edge. By neglect of $D_K$ and $D_B$, (\ref{eq:fpm}) readily becomes \begin{equation} f_\pm=(1+l_{j_\pm} k+m_{j_\pm}\phi+n_{j_\pm} k_rk_l) {D_T}\rho_\pm\cos\theta-D_E\phi\cos\theta~. \label{eq:fpm-simp} \end{equation} Omitting $D_K$ and $D_B$ is inconsistent with detailed balance, but has little effect on the kinetic solutions described below. Similarly, the mean-field kink velocity reads~\cite{caflischetal99} \begin{equation} w= l_1 {D_E}\phi+ {D_T}(l_2\rho_++l_3\rho_-)- l_{123} D_K\sim l_1 {D_E}\phi+ {D_T}(l_2\rho_++l_3\rho_-)~. \label{eq:w-def} \end{equation} The gain in kink pairs from nucleation and breakup involving an edge-atom is~\cite{caflischetal99} \begin{eqnarray} g&=&\phi(m_1 {D_E}\phi+m_2 {D_T}\rho_++m_3 {D_T}\rho_-)- m_{123}\,D_K k_rk_l\nonumber\\ &\sim& \phi(m_1 {D_E}\phi+m_2 {D_T}\rho_++m_3 {D_T}\rho_-)~. \label{eq:g-def} \end{eqnarray} The respective loss of kink pairs by atom attachment-detachment is~\cite{caflischetal99} \begin{eqnarray} h&=&(n_1 {D_E}\phi+n_2 {D_T}\rho_++n_3 {D_T}\rho_-)k_rk_l- n_{123}\,D_B\nonumber\\ &\sim& (n_1 {D_E}\phi+n_2 {D_T}\rho_++n_3 {D_T}\rho_-)k_rk_l~. \label{eq:h-def} \end{eqnarray}In the above, \begin{equation} p_{ij}:=p_i+p_j,\quad p_{ijk}:=p_i+p_j+p_k;\qquad p=m,\,n,\,l~. \label{eq:sums-def} \end{equation} The constitutive laws are complemented by \begin{equation} f_0=wk+2g+h~, \label{eq:f0} \end{equation} which enters~(\ref{eq:phi-pde-th}). The step edge velocity, $v$, stems from a geometric relation; see appendix~\ref{app:vel} for details. Specifically, \begin{equation} v=\frac{f_0}{1+\phi\kappa\cos\theta}\cos\theta=\frac{wk+2g+h}{1+\phi\kappa\cos\theta}\cos\theta~. \label{eq:v-def} \end{equation} \subsection{Program for step stiffness} \label{subsec:stiff-calc} In this section we delineate a program for the calculation of the step stiffness from the model of section~\ref{subsec:noneq-kin}. The key idea is to reduce the nonequilibrium law~(\ref{eq:fpm}) to the linear kinetic law~(\ref{eq:f-bcf}) by treating the normal fluxes, $f_\pm$, as external, free to vary, $O(1)$ parameters of the equations of motion along a step edge. In this context, the diffusion equation~(\ref{eq:rho-pde}) is not invoked. Our method relies on the perturbation of a solution for the densities $\phi$ and $k$. The solution studied here is that of the kinetic steady state, under the assumption that it can be reached. 
Accordingly, we neglect the time derivative in the zeroth-order equations of motion; furthermore, we neglect this derivative to the next higher order by imposing the quasi-steady approximation. Another case, left for future work, is that of thermodynamic equilibrium; see section~\ref{sec:conclusion}. In summary, we apply the following procedure:\looseness=-1 (i) To extract the kinetic steady state, we set $\partial_t|_\theta \equiv 0$ and $\kappa=0$ (i.e., we consider straight edges). This leads to a system of algebraic equations for $(\phi, k)\equiv (\phi^{(0)}, k^{(0)})$~\footnote{In this context, the superscript in parentheses denotes the perturbation order in $\kappa$.}. The coefficients of this system depend on $\theta$ and $f_\pm$. In principle, $(\phi^{(0)}, k^{(0)})$ cannot be found in simple closed form at this stage. (ii) We assume that $P\ll 1$, and determine relatively simple expansions for $(\phi^{(0)}, k^{(0)})$ in powers of $P$ for $0\le\theta< O(P^{1/3})$ and $O(P^{1/3})<\theta<\pi/4$. (iii) We replace $(\phi, k)$ by $(\phi^{(0)}, k^{(0)})$ in the constitutive law~(\ref{eq:fpm}) and compare the result to~(\ref{eq:f-bcf}). Here, our analysis follows two mathematically equivalent but physically distinct routes. (a) By taking $f_\pm$ as input parameters, we derive formulas for the adatom reference densities, $\rho_*^{\pm}$, and attachment-detachment rates, $ {D_A}^\pm$, that depend on $f_\pm$; cf.~(\ref{eq:f-bcf}). Step permeability is not manifested in this setting ($D_p\equiv 0$). (b) By considering $\rho_\pm$ as inputs, we predict attachment-detachment rates and non-vanishing step permeability rates. (iv) We consider perturbations of the kinetic steady state by taking $0<|\kappa|\ll 1$, i.e. slightly curved step edges. Accordingly, we let \begin{equation} \phi\sim \phi^{(0)} +\phi^{(1)}\,\kappa~,\quad k\sim k^{(0)}+k^{(1)}\,\kappa~, \label{eq:phk-app} \end{equation} where $\kappa\phi^{(1)}$ and $\kappa k^{(1)}$ are deviations from the kinetic steady state and depend on $(\phi^{(0)}, k^{(0)})$. Expansion~(\ref{eq:phk-app}) is imposed on physical rather than mathematical grounds. Indeed, if the mean-field flux~(\ref{eq:fpm}) is expected to reduce to the linear kinetic law~(\ref{eq:f-bcf}), then $\phi$ must be linear in $\kappa$. The equations of motion along an edge and the constitutive laws are linearized in $\kappa\phi^{(1)}$ and $\kappa k^{(1)}$. (v) By treating $f_\pm$ as input external parameters, we replace $\phi$ and $k$ in the right-hand side of the constitutive law~(\ref{eq:fpm-simp}) by expansions~(\ref{eq:phk-app}). Subsequently, we determine the stiffness $ {\tilde\beta}(\theta; f_+, f_-)$ by comparison to~(\ref{eq:f-bcf}) in view of~(\ref{eq:GT}). The choice of fluxes $f_\pm$ or densities $\rho_\pm$ as input parameters is a physics modeling question. Although the mathematical results are equivalent for the two choices, the physical interpretation of these results is different, as stated above. \section{Main results} \label{sec:res} Here, we give the main formulas stemming from our analysis of the kinetic model described in section~\ref{subsec:noneq-kin}. A necessary condition for our perturbation analysis is $0\le \kappa<O(P)\ll 1$, to be shown via a plausibility argument in section~\ref{subsec:pertb}. Derivations and other related details are provided in sections~\ref{sec:steady} and~\ref{sec:GT-stiff}.
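Several of the closed-form results summarized next are checked most directly by numerical evaluation. For that purpose it is convenient to record the mean-field laws of section~\ref{subsec:noneq-kin} in executable form. The sketch below (Python) is a literal transcription of~(\ref{eq:fpm-simp}) and~(\ref{eq:w-def})--(\ref{eq:v-def}) in the nondimensional units of section~\ref{subsec:geom}, with $D_E=2P^{-1}$; the default coordination numbers are unit placeholders, to be replaced by values appropriate to the lattice at hand.
\begin{verbatim}
import numpy as np

def mean_field(phi, k, rho_p, rho_m, theta, P,
               l=(1.0, 1.0, 1.0), m=(1.0, 1.0, 1.0), n=(1.0, 1.0, 1.0),
               D_T=1.0):
    """Mean-field constitutive laws with D_K = D_B = 0 (nondimensional)."""
    l1, l2, l3 = l; m1, m2, m3 = m; n1, n2, n3 = n
    D_E = 2.0 / P                    # edge diffusivity, D_E = 2 P^{-1}
    t = np.tan(theta)
    krkl = 0.25 * (k**2 - t**2)      # k_r k_l, from eqs. (krl-theta), (k-def)
    c = np.cos(theta)
    # Terrace adatom fluxes, eq. (fpm-simp), with j_+ = 2 and j_- = 3:
    f_p = (1 + l2*k + m2*phi + n2*krkl) * D_T * rho_p * c - D_E * phi * c
    f_m = (1 + l3*k + m3*phi + n3*krkl) * D_T * rho_m * c - D_E * phi * c
    # Kink velocity, gain and loss, eqs. (w-def)-(h-def):
    w = l1*D_E*phi + D_T*(l2*rho_p + l3*rho_m)
    g = phi * (m1*D_E*phi + m2*D_T*rho_p + m3*D_T*rho_m)
    h = (n1*D_E*phi + n2*D_T*rho_p + n3*D_T*rho_m) * krkl
    f0 = w*k + 2.0*g + h             # eq. (f0)
    return f_p, f_m, w, g, h, f0

def velocity(f0, phi, kappa, theta):
    # Step edge velocity, eq. (v-def).
    return f0 * np.cos(theta) / (1.0 + phi * kappa * np.cos(theta))
\end{verbatim}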
\subsection{\em ES effect (section~$\ref{subsec:flux}$)} \label{ESeffect} When the fluxes $f_\pm$ are input parameters, the attachment-detachment of adatoms from a terrace to an edge is {\it asymmetric}. So, the related diffusion coefficients $ {D_A}^\pm$, or attachment and detachment kinetic rates, which enter~(\ref{eq:f-bcf}), are found to be different for an upper and lower terrace: \begin{eqnarray} {D_A}^+&=& {D_T}\bigl[1+l_2k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{1}{4}}n_2({k^{(0)}}^2-\tan^2\theta)\bigr]\cos\theta~,\nonumber\\ {D_A}^-&=& {D_T}\bigl[1+l_3k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{1}{4}}n_3({k^{(0)}}^2-\tan^2\theta)\bigr]\cos\theta~, \label{eq:ES-Dpm} \end{eqnarray}where $0\le\theta<\pi/4$ and $(l_2,m_2,n_2)\neq (l_3, m_3, n_3)$. For $0<P\ll 1$, we show that~(\ref{eq:ES-Dpm}) reduce to \begin{equation} {D_A}^\pm \sim {D_T} (1+l_{j_\pm}\tan\theta)\cos\theta~, \label{eq:Dpm-unif} \end{equation}where $j_+=2$ and $j_-=3$. In this description, there is no step permeability. Note that the results presented in this section and their derivations do not depend on the step edge curvature. \subsection{\em Step permeability (section~$\ref{subsec:density}$)} \label{stepPermeability} By using the adatom densities $\rho_\pm$ as input external parameters, we show that step permeability coexists with the ES effect; cf.~(\ref{eq:f-bcf}). For $O(P^{1/3})<\theta<\pi/4$ the diffusion coefficients for permeability are \begin{equation} D_p^\pm = {D_T}\,\frac{A_\mp (1+l_{j_\mp}\tan\theta)}{1+(A_+ +A_-)\cos\theta}\cos^2\theta~. \label{eq:perm-Dpm} \end{equation} The accompanying (asymmetric) attachment-detachment diffusion coefficients are \begin{equation} {D_A}^\pm= {D_T} \frac{1+l_{j_\pm}\tan\theta\pm A_\mp (l_2-l_3)\sin\theta}{1+(A_+ +A_-)\cos\theta}\cos\theta~, \label{eq:Dpm-ES-dens} \end{equation} where \begin{equation} A_+=\frac{1}{\sin\theta}\,\frac{1+l_3\tan\theta}{Q({\bf l})}~,\qquad A_-=\frac{1}{\sin\theta}\,\frac{1+l_2\tan\theta}{Q({\bf l})}~, \label{eq:Apm-def} \end{equation} \begin{equation} Q({\bf p})=p_1(1+l_2\tan\theta)(1+l_3\tan\theta)+p_2(1+l_3\tan\theta)+p_3(1+l_2\tan\theta)~,\label{eq:Q-def} \end{equation} with ${\bf p}:=(p_1,\,p_2,\,p_3)$ and $p=l,\,m,\,n$; in~(\ref{eq:Apm-def}), ${\bf l}=(l_1,\,l_2,\,l_3)$. Note that $D_p^\pm$ and $ {D_A}^\pm$ here are independent of $f_\pm$, as in~(\ref{eq:Dpm-unif}). The corresponding results for $0\le\theta< O(P^{1/3})$ are presented in section~\ref{subsec:density}. Again, the results presented in this section and their derivations do not depend on the step edge curvature. \subsection{\em Step stiffness (section~$\ref{subsec:stiff}$)} \label{stepStiffness} Let the adatom fluxes $f_\pm$ from an upper ($+$) and lower ($-$) terrace towards an edge be the input, independent parameters.
For sufficiently small angle $\theta$, the stiffness is found to be \begin{equation} \frac{ {\tilde\beta}}{k_B T}\sim 2\frac{l_{123}}{n_{123}}\,\frac{( f_++ f_-)_\theta}{ f_++ f_-}\,\frac{1}{\theta}= O\biggl(\frac{1}{\theta}\biggr)\qquad O(P^{1/3})<\theta\ll 1~, \label{eq:stiff-smalltheta} \end{equation} \begin{equation} \frac{ {\tilde\beta}}{k_B T}\sim P^{-2/3}\,\frac{4l_{123}}{n_{123} (\check C^k_0)^2+8m_{123}}=O(P^{-2/3}) \qquad 0\le\theta <O(P^{1/3})\ll 1~, \label{eq:stiff-rg2} \end{equation} where $p_{123}$ ($p=l,\,m,\,n$) is defined in~(\ref{eq:sums-def}) and~\footnote{The superscripts in $\check C^k$, $C^\phi$, $C^k$ and elsewhere below indicate the physical origin of these coefficients, and should not be confused with numerical exponents or perturbation orders.} \begin{equation} \check C^k_0 =\biggl[\frac{2m_{123}}{n_{123}\,l_{123}}( f_+ + f_-)\biggl]^{1/3}~.\label{eq:check-Ck} \end{equation} Matching the asymptotic results~(\ref{eq:stiff-smalltheta}) and~(\ref{eq:stiff-rg2}) is discussed near the end of section \ref{subsec:pertb}. For $\theta=O(1)$ the formula for $ {\tilde\beta}$ becomes more complicated; we give it here for completeness. Generally, \begin{equation} \frac{ {\tilde\beta}}{k_BT}=\frac{\phi^{(1)}}{\phi^{(0)}}~,\label{eq:stiff-intro} \end{equation} where $\phi^{(0)}$ and $\phi^{(1)}$ are expansion coefficients for $\phi$ and depend on $f_\pm$ and their derivatives in $\theta$; cf.~(\ref{eq:phk-app}). These coefficients are obtained explicitly for $P\ll 1$. In particular, for $O(P^{1/3})<\theta<\pi/4$, \begin{equation} \phi^{(0)}\sim C^\phi_0\,P~,\qquad \phi^{(1)}\sim C^\phi_1\,P~, \label{eq:ph01-asymp1} \end{equation} \begin{equation} C^\phi_0=\frac{1}{2\sin\theta}\,\frac{(1+l_3\tan\theta) f_++(1+l_2\tan\theta) f_-} {Q({\bf l})}~,\label{eq:Cphi-def} \end{equation} \begin{equation} C^\phi_1=\frac{(v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta\cos\theta+(w^{(0)}\tan\theta)_\theta}{2\sin\theta}\, \frac{W^k\tan\theta+w^{(0)}+H^k}{H^k\,W^\phi}~,\label{eq:Cph1-def} \end{equation} \begin{equation} v^{(0)}=f_++f_-~,\label{eq:v0-def} \end{equation} \begin{equation} w^{(0)}\sim 2l_1 C^\phi_0+l_2\frac{s_xf_+ +2C^\phi_0}{1+l_2\tan\theta}+l_3\frac{s_xf_- +2C^\phi_0}{1+l_3\tan\theta}~,\qquad s_x=(\cos\theta)^{-1}~, \label{eq:w0-def} \end{equation} \begin{equation} W^\phi= 2l_1+\frac{2l_2}{1+l_2\tan\theta}+\frac{2l_3}{1+l_3\tan\theta}~, \label{eq:Wphi} \end{equation} \begin{equation} W^k=-l_2\frac{l_2+\textstyle{\frac{n_2}{2}}\tan\theta}{(1+l_2\tan\theta)^2}(s_xf_+ +2C^\phi_0) -l_3\frac{l_3+\textstyle{\frac{n_3}{2}}\tan\theta}{(1+l_3\tan\theta)^2}(s_x f_- +2C^\phi_0)~, \label{eq:Wk} \end{equation} \begin{equation} H^k=\frac{\tan\theta}{2}\biggl(2n_1C^\phi_0+n_2\,\frac{s_xf_+ +2C^\phi_0}{1+l_2\tan\theta} +n_3\,\frac{s_xf_-+2C^\phi_0}{1+l_3\tan\theta}\biggr)~, \label{eq:Hk} \end{equation} \begin{equation} k\sim k^{(0)}\sim \tan\theta~, \label{eq:k0-asymp1} \end{equation} where $n_j$ and $l_j$ are coordination numbers. Recall definition~(\ref{eq:Q-def}) for $Q({\bf l})$. It is worthwhile noting that there is no asymmetry in the step stiffness $\tilde \beta$, in contrast to the attachment-detachment coefficients. The reason for this difference is that $\tilde \beta$ depends only on the edge-atom density, as shown in (\ref{eq:stiff-intro}).\looseness=-1 For the alternative approach in which the adatom densities $\rho_\pm$ are specified rather than the fluxes $f_\pm$, the analysis of the step stiffness is presented in section \ref{subsec:extension}. 
The corresponding result (\ref{eq:fpm-ss}) is not of the form (\ref{eq:f-noperm}) and (\ref{eq:GT}), however, since the coefficient $ \mbox{\ss}$ in (\ref{eq:ss-def}) is not proportional to $\rho_*$. \section{The kinetic steady state} \label{sec:steady} We analyze the kinetic steady state for a straight step, including its dependence on the P\'eclet number $P$ in section \ref{sec:steadyPdependence} and the ES effect and step permeability in sections \ref{subsec:flux} and \ref{subsec:density}. \subsection{Kinetic steady state and its dependence on $P$} \label{sec:steadyPdependence} In this section, we simplify the equations of motion for edge-atom and kink densities by imposing the kinetic steady state ($\partial_t\equiv 0$) for straight steps ($\kappa\equiv 0$). We find closed-form solutions for small P\'eclet number, $P\ll 1$, in two distinct ranges of $\theta$. For $\theta_c=O(P^{1/3})<\theta<\pi/4$, we show that $\phi=\phi^{(0)}$ is given by~(\ref{eq:ph01-asymp1}), and $k=k^{(0)}$ is given by~(\ref{eq:k0-asymp1}), or more precisely by \begin{equation} k^{(0)}\sim \tan\theta +C_0^k\,P,~\label{eq:k0-asymp1-corr} \end{equation} where \begin{equation} C_0^k=\frac{2C_0^\phi}{\tan\theta}\frac{2C_0^\phi Q({\bf m})\cos\theta+m_2(1+l_3\tan\theta) f_+ +m_3(1+l_2\tan\theta) f_-} {2C_0^\phi Q({\bf n})\cos\theta+n_2(1+l_3\tan\theta) f_+ +n_3(1+l_2\tan\theta) f_-}~; \label{eq:Ck-def} \end{equation} $Q({\bf p})$ and $C_0^\phi$ are defined by~(\ref{eq:Q-def}) and~(\ref{eq:Cphi-def}). Furthermore, \begin{equation} \phi^{(0)}\sim \check C^\phi_0\,P^{2/3}~,\quad k^{(0)}\sim \check C^k_0\,P^{1/3}\qquad 0\le\theta< \theta_c=O(P^{1/3})~, \label{eq:phik-exp-2} \end{equation} where $\check C^k_0$ is defined by~(\ref{eq:check-Ck}), \begin{equation} \check C^\phi_0=\biggl(\frac{n_{123}}{4m_{123}}\biggr)^{1/3}\ \biggl(\frac{f_+ +f_-}{2l_{123}}\biggr)^{2/3}~, \label{eq:check-Cphi} \end{equation} and $p_{123}$ ($p=l,\,m,\,n$) is given in~(\ref{eq:sums-def}); cf. equations~(4.27) and~(4.28) in~\cite{caflischli03}. In effect, we determine mesoscopic kinetic rates, including the attachment-detachment and permeability coefficients in~(\ref{eq:ES-Dpm})--(\ref{eq:Apm-def}). We proceed to describe the derivations. By $\partial_t|_\theta = 0$ and $\kappa=0$ in~(\ref{eq:phi-pde}), (\ref{eq:k-pde}) and~(\ref{eq:v-def}), we have $f_+ + f_- = f_0\cos\theta$, $g=h$ and $v=f_0\cos\theta$. Eliminate $ {D_T}\rho$ in terms of $ {D_E}\phi=2P^{-1}\phi$ using (\ref{eq:fpm-simp}).
Thus, we readily obtain~(\ref{eq:v0-def}) for $v^{(0)}:=v$, along with the following system of coupled algebraic equations: \begin{eqnarray} &&\bigl[m_1\phi^{(0)}-\textstyle{\frac{n_1}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\, \bigl[1+l_2k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\nonumber\\ \mbox{}&&\times\bigl[1+l_3k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]2P^{-1}\phi^{(0)} +\bigl[m_2\phi^{(0)}-\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\nonumber\\ \mbox{}&&\times\bigl[1+l_3 k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\, (s_x f_++2P^{-1}\phi^{(0)})\nonumber\\ &&+\bigl[m_3\phi^{(0)}-\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr] \bigl[1+l_2k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\nonumber\\ &&\hskip30pt \times(s_xf_-+2P^{-1}\phi^{(0)})=0~, \label{eq:steady-phik-1} \end{eqnarray} \begin{eqnarray} &&(l_1k^{(0)}+3m_1\phi^{(0)})\,\bigl[1+l_2 k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\nonumber\\ &&\times \bigl[1+l_3 k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]2P^{-1}\phi^{(0)}+ (l_2k^{(0)}+3m_2\phi^{(0)})\nonumber\\ &&\times \bigl[1+l_3k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr] (s_xf_++2P^{-1}\phi^{(0)})+(l_3k^{(0)}+3m_3\phi^{(0)})\nonumber\\ &&\times\bigl[1+l_2k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr](s_xf_-+2P^{-1}\phi^{(0)})\nonumber\\ \mbox{}&&= (f_+ +f_-)\,s_x\bigl[1+l_2k^{(0)}+m_2\phi^{(0)}+\textstyle{\frac{n_2}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]\nonumber\\ \mbox{}&&\hskip25pt \times\bigl[1+l_3 k^{(0)}+m_3\phi^{(0)}+\textstyle{\frac{n_3}{4}}({k^{(0)}}^2-\tan^2\theta)\bigr]~. \label{eq:steady-phik-2} \end{eqnarray} Once these equations are solved, the flux variables $w=:w^{(0)}$, $g=:g^{(0)}$ and $h=:h^{(0)}$ are determined in terms of $f_\pm$ by the constitutive laws~(\ref{eq:w-def})--(\ref{eq:h-def}). The substitution of $\phi$ and $k$ into~(\ref{eq:fpm-simp}) provides a relation between $f_\pm$ and $\rho_\pm$. Next, we simplify and explicitly solve~(\ref{eq:steady-phik-1}) and~(\ref{eq:steady-phik-2}) by enforcing $P\ll 1$. The ensuing scaling of $\phi^{(0)}$ and $k^{(0)}$ with $P$ depends on the range of $\theta$. We distinguish the cases $\theta_c(P)<\theta<\pi/4$ and $0\le \theta<\theta_c(P)$, where $\theta_c$ is estimated below; we expect that $\theta_c\to 0$ as $P\to 0$. \vskip5pt (i)\ $\theta=O(1)$. By seeking solutions that are regular at $P=0$, we observe that if $P=0$ then $(\phi^{(0)}, k^{(0)})=(0, \tan\theta)$ solves~(\ref{eq:steady-phik-1}) and~(\ref{eq:steady-phik-2}). Thus, the expansions \begin{equation} \phi^{(0)}\sim C^\phi_0\,P~,\qquad k^{(0)}\sim \tan\theta+C^k_0\,P~,\qquad C_0^{\phi,k}=O(1)~, \label{eq:phik-exp} \end{equation} form a reasonable starting point.
These expansions yield the simplified system \begin{eqnarray} \lefteqn{2C^\phi_0(2m_1C^\phi_0-n_1C^k_0\tan\theta) (1+l_2\tan\theta)(1+l_3\tan\theta)}\nonumber\\ &&+(2m_2C^\phi_0-n_2 C^k_0\tan\theta)(1+l_3\tan\theta)(s_xf_++2C^\phi_0)\nonumber\\ &&+(2m_3C^\phi_0-n_3C^k_0\tan\theta)(1+l_2\tan\theta)(s_x f_-+2C^\phi_0)=0~, \label{eq:Cphik-1} \end{eqnarray} \begin{eqnarray} \lefteqn{2C^\phi_0\, l_1\tan\theta(1+l_2\tan\theta)(1+l_3\tan\theta)}\nonumber\\ &&+l_2\tan\theta(1+l_3\tan\theta)(s_x f_+ +2C^\phi_0)+l_3\tan\theta(1+l_2\tan\theta)(s_x f_-+2C^\phi_0)\nonumber\\ \mbox{}&&=s_x(f_+ +f_-)(1+l_2\tan\theta)(1+l_3\tan\theta)~.\label{eq:Cphik-2} \end{eqnarray}The solution of this system leads to~(\ref{eq:Cphi-def}) and~(\ref{eq:Ck-def}). We now sketch an order-of-magnitude estimate for $\theta_c$, the lower bound for $\theta$ in the present range of interest. By~(\ref{eq:Ck-def}), $C^k_0=O(1/\theta^2)$ for $\theta_c<\theta\ll 1$. Hence, expansion~(\ref{eq:phik-exp}) for the kink density $k^{(0)}$ breaks down when its leading-order term, $\tan\theta$, is comparable to the correction term, $C^k_0 P$: $\theta_c=O(P/\theta_c^2)$ by which $\theta_c=O(P^{1/3})$. Thus,~(\ref{eq:phik-exp})--(\ref{eq:Cphik-2}) hold if $O(P^{1/3})<\theta<\pi/4$. A more accurate estimate of the lower bound requires the detailed solution of~(\ref{eq:steady-phik-1}) and (\ref{eq:steady-phik-2}) for $\theta=O(P^{1/3})$, and will not be pursued here. \vskip5pt (ii)\ $0\le\theta< O(P^{1/3})$. For all practical purposes we set $\theta=0$ in~(\ref{eq:steady-phik-1}) and~(\ref{eq:steady-phik-2}). We enforce the expansions \begin{equation} \phi^{(0)}\sim \check C^\phi_0\,P^{\nu}~,\qquad k^{(0)}\sim \check C^k_0\,P^{\sigma}\qquad \check C_0^{\phi,k}=O(1)\quad \mbox{as}\ P\to 0~, \label{eq:phik-exp-2-mnu} \end{equation} and find the exponents $\nu$ and $\sigma$ by {\it reductio ad absurdum}. The only values consistent with~(\ref{eq:steady-phik-1}) and~(\ref{eq:steady-phik-2}) readily turn out to be \begin{equation} \nu=2/3~,\qquad \sigma=1/3~. \label{eq:nu-sigma} \end{equation} These values are in agreement with the analysis in~\cite{caflischli03}. By dominant-balance arguments, the coefficients $\check C^\phi_0$ and $\check C^k_0$ satisfy \begin{equation} 4m_{123}\check C^\phi_0=n_{123}\,(\check C^k_0)^2~,\qquad 2l_{123}\,\check C^\phi_0\,\check C^k_0=f_+ +f_-~, \label{eq:check-Cphik} \end{equation} by which we readily obtain~(\ref{eq:check-Ck}) and~(\ref{eq:check-Cphi}). Note that the zeroth-order kink velocity becomes \begin{equation} w=w^{(0)}\sim 2l_{123}P^{-1/3} \check C^\phi_0=O(P^{-1/3})~. \label{eq:w0-rg2} \end{equation} \vskip5pt (iii)\ {\it Consistency of asymptotics for $\theta=O(P^{1/3})$}. As a check on the consistency of our asymptotics and the estimate of $\theta_c$, we study the limits of~(\ref{eq:phik-exp-2}) and~(\ref{eq:phik-exp}) in the transition region, as $\theta\to O(P^{1/3})$. It is expected that the two sets of formulas for $\phi$ and $k$, in $O(P^{1/3})<\theta<\pi/4$ and $0\le \theta<O(P^{1/3})$, should furnish the same orders of magnitude. Indeed, by letting $\theta\to O(P^{1/3})\ll 1$ in~(\ref{eq:phik-exp}) for $\phi$ we find $\phi=O(P/\theta)\to O(P^{2/3})$, in agreement with~(\ref{eq:phik-exp-2}) for $\nu=2/3$. Similarly, setting $\theta=O(P^{1/3})$ in~(\ref{eq:phik-exp}) for $k$ yields $k=O(\theta)\to O(P^{1/3})$, which is consistent with~(\ref{eq:phik-exp-2}) for $\sigma=1/3$.
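As an independent check of case (i), the closed forms~(\ref{eq:Cphi-def}) and~(\ref{eq:Ck-def}) can be confirmed by solving the reduced system~(\ref{eq:Cphik-1})--(\ref{eq:Cphik-2}) numerically. A minimal sketch follows (Python, assuming SciPy is available; the coordination numbers, fluxes and angle are arbitrary illustrative values).
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

l1, l2, l3 = 1.0, 2.0, 1.0                 # illustrative coordination numbers
m1, m2, m3 = 1.0, 1.5, 0.5
n1, n2, n3 = 2.0, 1.0, 1.0
fp, fm, theta = 0.7, 0.4, 0.3              # illustrative fluxes and angle
t, c = np.tan(theta), np.cos(theta); sx = 1.0 / c
B2, B3 = 1.0 + l2*t, 1.0 + l3*t

def residuals(u):
    Cphi, Ck = u
    r1 = (2*Cphi*(2*m1*Cphi - n1*Ck*t)*B2*B3                 # eq. (Cphik-1)
          + (2*m2*Cphi - n2*Ck*t)*B3*(sx*fp + 2*Cphi)
          + (2*m3*Cphi - n3*Ck*t)*B2*(sx*fm + 2*Cphi))
    r2 = (2*Cphi*l1*t*B2*B3 + l2*t*B3*(sx*fp + 2*Cphi)       # eq. (Cphik-2)
          + l3*t*B2*(sx*fm + 2*Cphi) - sx*(fp + fm)*B2*B3)
    return [r1, r2]

Cphi_num, Ck_num = fsolve(residuals, [1.0, 1.0])

Q = lambda p1, p2, p3: p1*B2*B3 + p2*B3 + p3*B2              # eq. (Q-def)
Cphi = (B3*fp + B2*fm) / (2*np.sin(theta)*Q(l1, l2, l3))     # eq. (Cphi-def)
Ck = (2*Cphi/t) * ((2*Cphi*Q(m1, m2, m3)*c + m2*B3*fp + m3*B2*fm)
                 / (2*Cphi*Q(n1, n2, n3)*c + n2*B3*fp + n3*B2*fm))  # (Ck-def)

print(Cphi_num - Cphi, Ck_num - Ck)   # both vanish up to solver tolerance
\end{verbatim}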
In section~\ref{subsec:stiff} we show that such a ``matching'' is not always achieved for the first-order corrections $\phi^{(1)}$ and $k^{(1)}$, since the corresponding asymptotic formulas involve derivatives in $\theta$. A sufficient condition on the $\theta$-behavior of the fluxes $f_\pm$ is sought in the latter case.\looseness=-1 In the following, we use the kinetic steady state in the mean-field law~(\ref{eq:fpm}) to derive mesoscopic kinetic rates as functions of $\theta$ by comparison to the BCF-type equation~(\ref{eq:f-bcf}). We adopt two approaches. In the first approach, $f_\pm$ are used as external, input parameters; the effective kinetic coefficients are thus allowed to depend on $f_\pm$. In the second approach, the densities $\rho_\pm$ are the primary variables instead. \subsection{Flux-driven kinetics approach: ES effect} \label{subsec:flux} In this subsection we treat the fluxes $f_\pm$ as given, input parameters. Accordingly, we derive~(\ref{eq:ES-Dpm}) and~(\ref{eq:Dpm-unif}), i.e. the attachment-detachment rates $ {D_A}^\pm$ for adatoms. In addition, we show that the reference densities $\rho_*^\pm$ entering~(\ref{eq:GT}) are \begin{equation} \rho_*^\pm\sim \frac{2C^\phi_0}{ {D_T}(1+l_{j_\pm}\tan\theta)}\qquad j_+=2,\,j_-=3~,\quad O(P^{1/3})<\theta<\pi/4~, \label{eq:rho-star-range1} \end{equation} \begin{equation} \rho_*^\pm\sim \frac{2\check C^\phi_0}{ {D_T}}P^{-1/3}\qquad 0\le\theta< O(P^{1/3})~, \label{eq:rho-star-range2} \end{equation} where $C^\phi_0$ and $\check C^\phi_0$ are defined by~(\ref{eq:Cphi-def}) and~(\ref{eq:check-Cphi}). The relevant derivations follow. The substitution of~(\ref{eq:phik-exp}) into~(\ref{eq:fpm-simp}) yields \begin{equation} f_\pm \sim \biggl[1+l_{j_\pm}k^{(0)}+m_{j_\pm}\phi^{(0)}+\frac{n_{j_\pm}}{4}({k^{(0)}}^2-\tan^2\theta)\biggr] {D_T}\rho_\pm\cos\theta- 2P^{-1}\phi^{(0)}\cos\theta~.\label{eq:f+0} \end{equation} Here, we view the linear-in-$\rho_\pm$ term of~(\ref{eq:f+0}) as the only physical contribution of the adatom densities to the mass flux towards an edge. Consequently, by comparison to~(\ref{eq:f-bcf}), the coefficient of this term must be identified with $ {D_A}^\pm$. Thus, we extract formulas~(\ref{eq:ES-Dpm}). In addition, we obtain $D_p^\pm\equiv 0$; so, step permeability is {\it not} manifested in this context. The reference density of~(\ref{eq:GT}) is\looseness=-1 \begin{equation} \rho_*^\pm = \frac{2P^{-1}\phi^{(0)}\cos\theta}{ {D_A}^\pm}~, \label{eq:rho-star} \end{equation}which is in principle {\it different} for an up- and down-step edge. Note that $ {D_A}^\pm$ and $\rho_*^\pm$ depend on $f_\pm$ within this approach. Further, the ratio of $ {D_A}^+$ and $ {D_A}^-$ depends on the values of $l_j$, $m_j$ and $n_j$. For suitable coordination numbers, it is possible to have $ {D_A}^+ > {D_A}^-$, i.e. a negative (vs. positive) ES effect~\cite{ehrlichhudda66,schwoebelshipsey66}, which can lead to instabilities in the step motion. Next, we derive simplified, explicit formulas for $ {D_A}^\pm$ and $\rho_*^\pm$ when $P\ll 1$. \vskip5pt (i)\ $O(P^{1/3})<\theta<\pi/4$. 
By substitution of~(\ref{eq:phik-exp}) with~(\ref{eq:Cphi-def}) and~(\ref{eq:Ck-def}) into~(\ref{eq:ES-Dpm}) we have \begin{eqnarray} {D_A}^+&\sim& {D_T}\bigl[1+l_2\tan\theta+\bigl(l_2+\textstyle{\frac{1}{2}}n_2\tan\theta\bigr)C^k_0 P+ m_2C^\phi_0 P\bigr]\cos\theta~,\nonumber\\ {D_A}^-&\sim& {D_T}\bigl[1+l_3\tan\theta+\bigl(l_3+\textstyle{\frac{1}{2}}n_3\tan\theta\bigr)C^k_0 P+ m_3C^\phi_0 P\bigr]\cos\theta~, \label{eq:DpmI-range1} \end{eqnarray}which reduce to~(\ref{eq:Dpm-unif}) as $P\to 0$. In the same vein, by~(\ref{eq:rho-star}) the reference densities $\rho_*^\pm$ are \begin{eqnarray} \rho_*^+&\sim&\frac{2C^\phi_0} { {D_T}\bigl[1+l_2\tan\theta+\bigl(l_2+\textstyle{\frac{1}{2}}n_2\tan\theta\bigr)C^k_0 P+ m_2C^\phi_0 P\bigr]}~,\nonumber\\ \rho_*^-&\sim&\frac{2C^\phi_0} { {D_T}\bigl[1+l_3\tan\theta+\bigl(l_3+\textstyle{\frac{1}{2}}n_3\tan\theta\bigr)C^k_0 P+ m_3C^\phi_0 P\bigr]}~, \label{eq:rho-starI-rangeI} \end{eqnarray} which readily yield~(\ref{eq:rho-star-range1}). \vskip5pt (ii)\ $0\le\theta< O(P^{1/3})$. In this case, we resort to~(\ref{eq:phik-exp-2}). Equation~(\ref{eq:ES-Dpm}) for the kinetic rates furnishes \begin{eqnarray} {D_A}^+&\sim& {D_T}\bigl\{ 1+l_2{\check C}^k_0\,P^{1/3}+\bigl[ m_2{\check C}^\phi_0+\textstyle{\frac{1}{4}}n_2 ({\check C}^k_0{})^2\bigr]P^{2/3}\bigr\} \cos\theta\nonumber\\ &\sim& {D_T}(1+l_2{\check C}^k_0\,P^{1/3})\cos\theta~,\nonumber\\ {D_A}^-&\sim& {D_T}\bigl\{ 1+l_3{\check C}^k_0\,P^{1/3}+\bigl[m_3{\check C}^\phi_0+\textstyle{\frac{1}{4}}n_3 ({\check C}^k_0{})^2\bigr]P^{2/3}\bigr\} \cos\theta\nonumber\\ &\sim& {D_T}(1+l_3{\check C}^k_0\,P^{1/3})\cos\theta~.\label{eq:Dpm-range2} \end{eqnarray} To leading order in $P$, these formulas connect smoothly with~(\ref{eq:DpmI-range1}) and, thus, justify~(\ref{eq:Dpm-unif}) for $0\le\theta<\pi/4$. Furthermore, $\rho_*^\pm$ are given by \begin{eqnarray} \rho_*^+ &\sim& \frac{2\check C^\phi_0\,P^{-1/3}}{ {D_T}(1+l_2\check C^k_0\,P^{1/3})} \sim \frac{2\check C^\phi_0}{ {D_T}}P^{-1/3}(1-l_2\check C^k_0\,P^{1/3})~,\nonumber\\ \rho_*^- &\sim& \frac{2\check C^\phi_0\,P^{-1/3}}{ {D_T}(1+l_3\check C^k_0\,P^{1/3})} \sim \frac{2\check C^\phi_0}{ {D_T}}P^{-1/3}(1-l_3\check C^k_0\,P^{1/3})~, \label{eq:rho-starI-range2} \end{eqnarray}which reduce to~(\ref{eq:rho-star-range2}). Notably, $\rho_*^\pm$ depend on the fluxes, $f_\pm$, through $\check C_0^\phi$. A few remarks are in order. First, by~(\ref{eq:Dpm-unif}) the ES effect is present for $O(P^{1/3})<\theta<\pi/4$ only if $l_2\neq l_3$. Accordingly, our formalism provides an explicit analytical relation between the number of transition paths for atomistic processes and the mesoscopic kinetic rates. Second, formulas~(\ref{eq:Dpm-range2}) show that for $P\ll 1$ the nonzero ES barrier is a corrective, $O(P^{1/3})$ effect for sufficiently small $\theta$, even when $l_2\neq l_3$.
To simplify the algebra while keeping the essential physics intact, we restrict attention to $O(P^{1/3})<\theta< \pi/4$. First, in view of~(\ref{eq:phik-exp}) we further simplify relation~(\ref{eq:fpm-simp}). By \begin{equation} \phi^{(0)}=\textstyle{\frac{1}{2}}(A_+ f_+ + A_- f_-) P~,\label{eq:phi0-alt} \end{equation} where $A_\pm$ are defined by~(\ref{eq:Apm-def}), the adatom fluxes at the step edge reduce to \begin{equation} f_\pm\sim (1+l_{j_\pm}\tan\theta) {D_T}\rho_\pm\cos\theta-(A_+ f_+ + A_- f_-)\cos\theta~,\qquad P\ll 1~. \label{eq:fpm-alt} \end{equation} Second, we invert~(\ref{eq:fpm-alt}) to obtain $f_\pm$ in terms of $\rho_\pm$. Equation (\ref{eq:fpm-alt}) reads \begin{eqnarray} (1+A_+\cos\theta)f_+ +\cos\theta\,A_- f_-&=&(1+l_2\tan\theta) {D_T}\rho_+\cos\theta~,\nonumber\\ \cos\theta\,A_+ f_+ + (1+\cos\theta\,A_-)f_-&=& (1+l_3\tan\theta) {D_T}\rho_-\cos\theta~. \label{eq:A+-alt} \end{eqnarray} The inversion of this system yields \begin{eqnarray} &&f_+ = \biggl[\frac{(1+l_2\tan\theta)(1+\cos\theta\,A_-)}{1+(A_+ +A_-)\cos\theta} {D_T}\rho_+ -\frac{A_- (1+l_3\tan\theta)\cos\theta}{1+(A_+ + A_-)\cos\theta} {D_T}\rho_-\biggr]\cos\theta~,\nonumber\\ &&f_- = \biggl[-\frac{(1+l_2\tan\theta)A_+\cos\theta}{1+(A_+ +A_-)\cos\theta} {D_T}\rho_+ +\frac{(1+A_+ \cos\theta)(1+l_3\tan\theta)}{1+(A_+ + A_-)\cos\theta} {D_T}\rho_-\biggr] \cos\theta~.\nonumber \end{eqnarray} These relations have the form of the kinetic law~(\ref{eq:f-bcf}); by comparison, the rates $D_p^\pm$ are given by~(\ref{eq:perm-Dpm}), while \begin{equation} \rho_0^\pm \equiv 0\Rightarrow \rho_*^\pm\equiv 0~. \label{eq:rho-star-perm} \end{equation}This value is expected since the system is homogeneous in this setting, i.e. $f_\pm=0$ only if $\rho_\pm=0$. The reference density $\rho_0$ becomes nonzero (but small in an appropriate sense) if we allow in the formulation nonzero values for $D_B$ and $D_K$, i.e. nonzero diffusion coefficients for an atom to hop from a kink and a straight edge. The study of these effects lies beyond our present scope. Equation (\ref{eq:rho-star-perm}) challenges the definition of the step stiffness; see section~\ref{subsec:extension}. Equations (\ref{eq:A+-alt}) also predict an ES effect. Indeed, by recourse to~(\ref{eq:f-bcf}), the related attachment-detachment rates are \begin{equation} {D_A}^\pm =\frac{(1+l_{j_\pm}\tan\theta)(1+A_\mp\cos\theta)}{1+(A_+ +A_-)\cos\theta} {D_T}\cos\theta-D_p^\pm~, \label{eq:Dpm-prim} \end{equation} which readily yields~(\ref{eq:Dpm-ES-dens}) by use of~(\ref{eq:perm-Dpm}). The behavior of the fluxes $f_\pm$ as functions of $\rho_\pm$ is dramatically different for $0\le\theta < O(P^{1/3})$. Indeed, by~(\ref{eq:phik-exp-2})--(\ref{eq:check-Cphi}) the density $\phi^{(0)}$ is a nonlinear algebraic function of $f_+ + f_-$ in this case. Thus, the mean-field constitutive equations in principle {\it cannot} reduce to kinetic laws that are linear in $\rho_\pm$. This approach does not lead to standard BCF-type conditions at a high-symmetry step edge orientation. The implications of this behavior warrant further studies. In the following analysis for the stiffness we emphasize the flux-driven approach. \section{Perturbation theory and step stiffness} \label{sec:GT-stiff} In this section we consider slightly curved step edges, and apply perturbation theory to find approximately the edge-atom and kink densities, $\phi$ and $k$, from the kinetic model of section~\ref{subsec:noneq-kin}. 
On the basis of the linear kinetic law~(\ref{eq:f-bcf}) along with~(\ref{eq:GT}) for $\rho_0$, we calculate the step stiffness, $ {\tilde\beta}$, as a function of the orientation angle, $\theta$; see formulas~(\ref{eq:stiff-smalltheta})--(\ref{eq:k0-asymp1}). The underlying perturbation scheme for the densities is outlined in appendix~\ref{app:lin-pert}. The starting point is expansion~(\ref{eq:phk-app}), which we assume to be valid for $0\le\theta < \pi/4$ and view as a Taylor series. The functions $\phi^{(0)}$ and $k^{(0)}$ correspond to the kinetic steady state of section~\ref{sec:steadyPdependence}. The first-order coefficients $\phi^{(1)}$ and $k^{(1)}$ are locally bounded and are evaluated below. Only the coefficient $\phi^{(1)}$ is needed for the calculation of the step stiffness, $ {\tilde\beta}$, by~(\ref{eq:fpm-simp}); for completeness, we also derive $k^{(1)}$. The relation of $ {\tilde\beta}$ to $\phi^{(0)}$ and $\phi^{(1)}$ is provided by the following argument. By substitution of~(\ref{eq:phk-app}) into~(\ref{eq:fpm-simp}) and treatment of $f_\pm$ as given external parameters (in the spirit of section~\ref{subsec:flux}), we obtain \begin{equation} \frac{f_\pm}{\cos\theta}=[1+l_{j_\pm}k^{(0)}+m_{j_\pm}\phi^{(0)}+\textstyle{\frac{n_{j_\pm}}{4}} ({k^{(0)}}^2-\tan^2\theta)] {D_T}\rho_\pm - {D_E}\phi^{(0)}-\kappa {D_E}\phi^{(1)}~, \label{eq:fpm-kappa} \end{equation}where $j_+=2$ and $j_-=3$. By comparison of~(\ref{eq:fpm-kappa}) to~(\ref{eq:GT}) and~(\ref{eq:f-bcf}), we have (using $2P^{-1}=D_E$) \begin{equation} {D_A}^\pm\rho_*^\pm\,\frac{ {\tilde\beta}}{k_B T}=2P^{-1}\,\phi^{(1)}\,\cos\theta~,\qquad \label{eq:lten} \end{equation} by which we assert~(\ref{eq:stiff-intro}) in view of~(\ref{eq:rho-star}). Our task is to calculate $\phi^{(1)}$ in terms of $\theta$ and $P$ when $P\ll 1$. \subsection{Linear perturbations} \label{subsec:pertb} In this subsection we derive formula~(\ref{eq:ph01-asymp1}) for $\phi^{(1)}$ along with (\ref{eq:Cph1-def}) and (\ref{eq:Wphi})--(\ref{eq:Hk}) when $O(P^{1/3})<\theta <\pi/4$. In addition, we show that in this regime \begin{equation} k^{(1)}\sim -\frac{(v^{(0)}_\theta+v^{(0)}\tan\theta) (\cos\theta)^{-1}+(w^{(0)}\tan\theta)_\theta}{2H^k\cos\theta}~. \label{eq:k1-app1} \end{equation}For $0\le\theta < O(P^{1/3})$, $\phi^{(1)}$ and $k^{(1)}$ are \begin{equation} \phi^{(1)}\sim 4l_{123}\,\frac{\check C^\phi_0}{n_{123} (\check C^k_0)^2+8m_{123}}=O(1)~, \label{eq:phi1-rg2} \end{equation} \begin{equation} k^{(1)}\sim -4P^{-1/3}l_{123}\,\frac{\check C^k_0}{n_{123}(\check C^k_0)^2+8m_{123}\check C^\phi_0} =O(P^{-1/3})~. \label{eq:k1-rg2} \end{equation} Recall that $\check C^k_0$ and $\check C^\phi_0$ are defined by~(\ref{eq:check-Ck}) and~(\ref{eq:check-Cphi}). Furthermore, we demonstrate that $|\kappa|$ should be bounded by $P$ for the perturbation theory to hold; see~(\ref{eq:restr-kappa}). We proceed to carry out the derivations. Following appendix~\ref{app:lin-pert}, we formulate a $2\times 2$ system of linear perturbations for $\phi$ and $k$. First, we linearize the algebraic, constitutive laws~(\ref{eq:w-def})--(\ref{eq:h-def}). 
Expansions~(\ref{eq:phk-app}) induce the approximations $w(\phi,k)\sim w(\phi^{(0)},k^{(0)})+\kappa\, w^{(1)}$, $g(\phi,k)\sim g(\phi^{(0)},k^{(0)})+\kappa\, g^{(1)}$ and $h(\phi,k)\sim h(\phi^{(0)},k^{(0)})+\kappa\, h^{(1)}$, where \begin{eqnarray} w^{(1)}&=&\phi^{(1)}\,w_\phi+k^{(1)}\,w_k\qquad [w_\phi:=\partial_\phi w(\phi,k)]~,\nonumber\\ g^{(1)}&=&\phi^{(1)}\,g_\phi+k^{(1)}\,g_k~,\quad h^{(1)}=\phi^{(1)}\,h_\phi+k^{(1)}\,h_k~. \label{eq:wgh1} \end{eqnarray}In addition, $v^{(0)}=f_+ +f_-$ and $g^{(0)}:=g(\phi^{(0)},k^{(0)})=h(\phi^{(0)},k^{(0)})=:h^{(0)}$. Second, we substitute the above expansions into the equations of motion~(\ref{eq:phi-pde-th}) and~(\ref{eq:k-pde-th}) and the constitutive law~(\ref{eq:f0}). Hence, we find the system \begin{eqnarray} \lefteqn{(w_\phi k^{(0)}+2g_\phi+h_\phi)\phi^{(1)}+(w_k k^{(0)}+w^{(0)}+2g_k+h_k)k^{(1)}=-(v^{(0)}_\theta+v^{(0)}\tan\theta) \phi^{(0)}_\theta,}\nonumber\\ &&\mbox{}\hskip10pt 2(g_\phi-h_\phi)\phi^{(1)}+2(g_k-h_k)k^{(1)}=(v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta+s_x (w^{(0)}\tan\theta)_\theta~, \label{eq:phi-k-1} \end{eqnarray} where $w^{(0)}:=w(\phi^{(0)}, k^{(0)})$ and $s_x=1/\cos\theta$. This system has the solution \begin{equation} \phi^{(1)}=\frac{\mathcal D^\phi}{\mathcal D}~,\qquad k^{(1)}=\frac{\mathcal D^k}{\mathcal D}~, \label{eq:phi-k-1-sol} \end{equation} where \begin{equation} \mathcal D=\left|\begin{array}{ll}w_\phi k^{(0)}+(2g+h)_\phi\ &\ w_k k^{(0)}+w^{(0)}+(2g+h)_k\\ 2(g-h)_\phi\ &\ 2(g-h)_k\end{array}\right|~,\label{eq:D-determ} \end{equation} \begin{equation} \mathcal D^\phi=\left|\begin{array}{ll}-(v^{(0)}_\theta+v^{(0)}\tan\theta)\phi^{(0)}_\theta & w_k k^{(0)}+w^{(0)}+(2g+h)_k\\ (v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta+s_x(w^{(0)}\tan\theta)_\theta\ & 2(g-h)_k\end{array}\right|,\label{eq:Dphi-determ} \end{equation} \begin{equation} \mathcal D^k=\left|\begin{array}{ll}w_\phi k^{(0)} +(2g+h)_\phi & -(v^{(0)}_\theta+v^{(0)}\tan\theta)\phi^{(0)}_\theta\\ 2(g-h)_\phi\ &\ (v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta+s_x (w^{(0)}\tan\theta)_\theta\end{array}\right|~. \label{eq:Dk-determ} \end{equation}Note that $\phi^{(1)}$ and $k^{(1)}$ depend on the $\theta$-derivatives of the zeroth-order (kinetic steady-state) solutions.
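Since~(\ref{eq:phi-k-1}) is a plain $2\times 2$ linear system, the Cramer's-rule solution~(\ref{eq:phi-k-1-sol}) can be verified mechanically. The short sketch below does so with hypothetical numerical values standing in for the partial derivatives $w_\phi,\dots,h_k$ and for the right-hand sides; these numbers are placeholders for illustration only, not model output.
\begin{verbatim}
import numpy as np

# Hypothetical values for the partial derivatives and source terms.
w_phi, w_k, g_phi, g_k, h_phi, h_k = 2.0, -0.4, 1.1, -0.1, 0.6, 0.9
k0, w0 = 0.5, 1.3                  # zeroth-order kink density and w^(0)
r1, r2 = -0.25, 0.40               # right-hand sides of the 2x2 system

A = np.array([[w_phi*k0 + 2*g_phi + h_phi, w_k*k0 + w0 + 2*g_k + h_k],
              [2*(g_phi - h_phi),          2*(g_k - h_k)]])
b = np.array([r1, r2])

D    = np.linalg.det(A)
Dphi = np.linalg.det(np.column_stack((b, A[:, 1])))  # replace 1st column
Dk   = np.linalg.det(np.column_stack((A[:, 0], b)))  # replace 2nd column
phi1, k1 = Dphi/D, Dk/D
# Cramer's rule must agree with a direct linear solve:
assert np.allclose(np.linalg.solve(A, b), [phi1, k1])
print("phi^(1) =", phi1, " k^(1) =", k1)
\end{verbatim}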
By~(\ref{eq:w-def})--(\ref{eq:h-def}), we calculate the $\phi$- and $k$-derivatives of $w$, $g$ and $h$: \begin{eqnarray} w_\phi&=&2l_1 P^{-1}+\sum_{q=+,-}l_{j_q}\biggl\{\frac{2P^{-1}}{1+l_{j_q} k^{(0)}+m_{j_q} \phi^{(0)}+ \frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)}\nonumber\\ &&\mbox{}-m_{j_q}\,\frac{(\cos\theta)^{-1}\,f_q+2P^{-1}\phi^{(0)}} {[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+ \frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)]^2}\biggr\}~,\quad j_+ =2,\,j_-=3~, \label{eq:wphi-deriv} \end{eqnarray} \begin{equation} w_k=-\sum_{q=+,-}l_{j_q}\bigl(l_{j_q}+{\textstyle\frac{n_{j_q}}{2}}k^{(0)}\bigr)\, \frac{(\cos\theta)^{-1}\,f_q +2P^{-1}\phi^{(0)}}{[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+ \frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)]^2}~, \label{eq:wk-deriv} \end{equation} \begin{eqnarray} g_\phi&=& 4m_1 P^{-1}\phi^{(0)}+\sum_{q=+,-}m_{j_q}\biggl\{\frac{(\cos\theta)^{-1}\,f_q +4P^{-1}\phi^{(0)}}{1+l_{j_q}k^{(0)}+m_{j_q} \phi^{(0)}+\frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)}\nonumber\\ \mbox{} && -m_{j_q}\phi^{(0)}\frac{(\cos\theta)^{-1}\,f_q +2P^{-1}\phi^{(0)}}{[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+ \frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)]^2}\biggr\}~,\label{eq:gphi-deriv} \end{eqnarray} \begin{equation} g_k = -\phi^{(0)}\sum_{q=+,-}m_{j_q}\frac{\bigl(l_{j_q}+{\textstyle\frac{n_{j_q}}{2}}k^{(0)}\bigr)[(\cos\theta)^{-1}\,f_q+2P^{-1}\phi^{(0)}]} {[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+\frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)]^2}~,\label{eq:gk-deriv} \end{equation} \begin{eqnarray} h_\phi&=& {\textstyle\frac{1}{4}}({k^{(0)}}^2-\tan^2\theta)\biggl\{2n_1 P^{-1} +\sum_{q=+,-}\biggl[\frac{2P^{-1}n_{j_q}}{1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+\frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)}\nonumber\\ &&\hskip20pt -m_{j_q}n_{j_q}\,\frac{(\cos\theta)^{-1}\,f_q+2P^{-1}\phi^{(0)}}{[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+\frac{n_{j_q}}{4} ({k^{(0)}}^2-\tan^2\theta)]^2}\biggr]\biggr\}~, \label{eq:hphi-deriv} \end{eqnarray} \begin{eqnarray} h_k&=& \frac{k^{(0)}}{2}\biggl[2n_1 P^{-1}\phi^{(0)}+\sum_{q=+,-}n_{j_q}\frac{(\cos\theta)^{-1}f_q+2P^{-1}\phi^{(0)}} {1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+\frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)}\biggr]\nonumber\\ &&\hskip20pt -{\textstyle\frac{1}{4}}({k^{(0)}}^2-\tan^2\theta)\sum_{q=+,-}n_{j_q}\frac{(l_{j_q}+ \frac{n_{j_q}}{2}k^{(0)})\, [(\cos\theta)^{-1}f_q+2P^{-1}\phi^{(0)}]}{[1+l_{j_q}k^{(0)}+m_{j_q}\phi^{(0)}+\frac{n_{j_q}}{4}({k^{(0)}}^2-\tan^2\theta)]^2}~. \label{eq:hk-deriv} \end{eqnarray} Equations~(\ref{eq:phi-k-1-sol})--(\ref{eq:hk-deriv}) are simplified under the condition $P\ll 1$, which we apply next. We distinguish two ranges for the angle $\theta$. \vskip5pt (i)\ $O(P^{1/3})<\theta< \pi/4$. We proceed to show~(\ref{eq:ph01-asymp1}) and~(\ref{eq:Cph1-def}) for $\phi^{(1)}$. By using~(\ref{eq:phik-exp}) with~(\ref{eq:Cphi-def}) and~(\ref{eq:Ck-def}), we replace $\phi^{(0)}$ and $k^{(0)}$ by their expansions in $P$. 
Thus, the derivatives of $w$, $g$ and $h$ are simplified to \begin{equation} w_\phi \sim P^{-1}\biggl(2l_1+\frac{2l_2}{1+l_2\tan\theta}+\frac{2l_3}{1+l_3\tan\theta}\biggr)=:P^{-1}\, W^\phi=O(P^{-1} )~, \label{eq:wphi-app1} \end{equation} \begin{equation} w_k\sim -\sum_{q=+,-}l_{j_q}\bigl(l_{j_q}+{\textstyle\frac{n_{j_q}}{2}}\tan\theta\bigr) \frac{(\cos\theta)^{-1}f_q+2C^\phi_0}{(1+l_{j_q}\tan\theta)^2}=O(1)~, \label{eq:wk-app1} \end{equation} \begin{equation} g_\phi\sim 4m_1 C^\phi_0+\sum_{q=+,-}m_{j_q}\,\frac{(\cos\theta)^{-1}f_q+4 C^\phi_0}{1+l_{j_q}\tan\theta}=O(1)~, \label{eq:gphi-app1} \end{equation} \begin{equation} g_k\sim -P C^\phi_0\sum_{q=+,-}m_{j_q}\bigl(l_{j_q}+{\textstyle\frac{n_{j_q}}{2}}\tan\theta\bigr) \frac{(\cos\theta)^{-1}f_q+2C^\phi_0}{(1+l_{j_q}\tan\theta)^2}=O(P)~, \label{eq:gk-app1} \end{equation} \begin{equation} h_\phi \sim C^k_0\,\tan\theta\biggl(n_1+\frac{n_2}{1+l_2\tan\theta}+\frac{n_3}{1+l_3\tan\theta}\biggr)=O(1)~, \label{eq:hphi-app1} \end{equation} \begin{equation} h_k \sim \frac{\tan\theta}{2}\biggl( 2n_1 C^\phi_0+\sum_{q=+,-}n_{j_q} \frac{(\cos\theta)^{-1}f_q+2C^\phi_0}{1+l_{j_q}\tan\theta}\biggr)=O(1)~. \label{eq:hk-app1} \end{equation} It follows that the determinants of~(\ref{eq:D-determ})--(\ref{eq:Dk-determ}) are \begin{equation} \mathcal D\sim -2P^{-1}\,h_kW^\phi\tan\theta~, \label{eq:D-determ-app1} \end{equation} \begin{equation} \mathcal D^\phi\sim -(w_k\tan\theta+w^{(0)}+h_k)[(v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta+(\cos\theta)^{-1}(w^{(0)}\tan\theta)_\theta]~, \label{eq:Dphi-determ-app1} \end{equation} \begin{equation} \mathcal D^k\sim P^{-1}W^\phi\frac{\tan\theta}{\cos\theta}[(v^{(0)}_\theta+v^{(0)}\tan\theta) (\cos\theta)^{-1}+(w^{(0)}\tan\theta)_\theta]~. \label{eq:Dk-determ-app1} \end{equation} Hence, in view of~(\ref{eq:phi-k-1-sol}), the coefficient $\phi^{(1)}$ is given by~(\ref{eq:ph01-asymp1}) with~(\ref{eq:Cph1-def}) and~(\ref{eq:v0-def})--(\ref{eq:k0-asymp1}) under the replacements $H^k:=h_k$, $W^\phi:= Pw_\phi$ and $W^k:=w_k$. By~(\ref{eq:phi-k-1-sol}), the corresponding coefficient $k^{(1)}$ is given by~(\ref{eq:k1-app1}). \vskip5pt (ii)\ $0\le\theta < O(P^{1/3})$. We now calculate the first-order corrections $\phi^{(1)}$ and $k^{(1)}$ by~(\ref{eq:phi-k-1-sol})--(\ref{eq:hk-deriv}) with recourse to formula~(\ref{eq:phik-exp-2}) with~(\ref{eq:check-Ck}) and~(\ref{eq:check-Cphi}). We start with~(\ref{eq:phi-k-1-sol}). The requisite derivatives of $w$, $g$ and $h$ in the present case (where practically $\theta=0$) reduce to \begin{equation} w_\phi \sim 2l_{123} P^{-1}=O(P^{-1})~,\label{eq:wphi-rg2} \end{equation} \begin{equation} w_k\sim -2P^{-1/3}\,(l_2^2+l_3^2)\check C^\phi_0=O(P^{-1/3})~, \label{eq:wk-rg2} \end{equation} \begin{equation} g_\phi\sim 4m_{123}P^{-1/3}\,\check C^\phi_0=O(P^{-1/3})~, \label{eq:gphi-rg2} \end{equation} \begin{equation} g_k\sim -2P^{1/3}(m_2 l_2+m_3 l_3) (\check C^\phi_0)^2=O(P^{1/3})~, \label{eq:gk-rg2} \end{equation} \begin{equation} h_\phi\sim {\textstyle\frac{1}{2}}n_{123}P^{-1/3} (\check C^k_0)^2=O(P^{-1/3})~, \label{eq:hphi-rg2} \end{equation} \begin{equation} h_k \sim n_{123}{\check C^k_0}\,{\check C^\phi_0}=O(1)~. \label{eq:hk-rg2} \end{equation}Note that $w^{(0)}$ is given by~(\ref{eq:w0-rg2}). 
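The quoted orders in $P$ can also be checked directly against the exact derivatives~(\ref{eq:wphi-deriv})--(\ref{eq:hk-deriv}). The sketch below does this for $w_\phi$, $w_k$ and $h_k$ in regime (ii), using $\phi^{(0)}\sim\check C^\phi_0 P^{2/3}$, $k^{(0)}\sim\check C^k_0 P^{1/3}$ and $\theta=0$; the coefficients and fluxes are purely illustrative placeholders, not values fixed by the model.
\begin{verbatim}
import numpy as np

# Hypothetical coefficients; theta = 0 (regime (ii)).
l = {2: 1.0, 3: 2.0}; m = {2: 0.3, 3: 0.4}; n = {2: 0.5, 3: 0.6}
l1 = m1 = n1 = 0.2
f = {2: 0.1, 3: 0.2}               # fluxes f_+ (j=2) and f_- (j=3)
Ck, Cph = 0.7, 0.6

def derivs(P):
    phi0, k0 = Cph*P**(2/3), Ck*P**(1/3)
    den = {j: 1 + l[j]*k0 + m[j]*phi0 + 0.25*n[j]*k0**2 for j in (2, 3)}
    w_phi = 2*l1/P + sum(l[j]*(2/(P*den[j])
              - m[j]*(f[j] + 2*phi0/P)/den[j]**2) for j in (2, 3))
    w_k = -sum(l[j]*(l[j] + 0.5*n[j]*k0)*(f[j] + 2*phi0/P)/den[j]**2
               for j in (2, 3))
    h_k = 0.5*k0*(2*n1*phi0/P + sum(n[j]*(f[j] + 2*phi0/P)/den[j]
              for j in (2, 3))) \
          - 0.25*k0**2*sum(n[j]*(l[j] + 0.5*n[j]*k0)*(f[j] + 2*phi0/P)
              /den[j]**2 for j in (2, 3))
    return w_phi, w_k, h_k

P = 1e-6  # compare values at P and P/10 to read off the power of P
for name, lo, hi, a in zip(("w_phi", "w_k", "h_k"),
                           derivs(P), derivs(P/10), (-1, -1/3, 0)):
    print(name, "measured exponent ~", -np.log10(abs(hi/lo)),
          "expected", a)
\end{verbatim}
The measured exponents reproduce the asserted scalings $O(P^{-1})$, $O(P^{-1/3})$ and $O(1)$.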
It follows that the determinants $\mathcal D$, $\mathcal D^\phi$ and $\mathcal D^k$ of (\ref{eq:D-determ})--(\ref{eq:Dk-determ}) become \begin{equation} \mathcal D\sim -P^{-2/3} {\check C^\phi_0} l_{123}[n_{123} (\check C^k_0)^2+8 m_{123}\check C^\phi_0]=O(P^{-2/3})~, \label{eq:D-rg2} \end{equation} \begin{equation} \mathcal D^\phi \sim -4P^{-2/3} l_{123}^2 {\check C^\phi_0}\ (\check C^\phi_0\theta)_\theta\bigl|_{\theta=0}=O(P^{-2/3})~, \label{eq:Dphi-rg2} \end{equation} \begin{equation} \mathcal D^k \sim w_\phi k^{(0)} w^{(0)}\sim 4P^{-1}l_{123}^2 \check C^k_0 \check C^\phi_0=O(P^{-1})~. \label{eq:Dk-rg2} \end{equation}Since $\partial_\theta(\check C^\phi_0)$ is finite at $\theta=0$,~(\ref{eq:phi1-rg2}) and~(\ref{eq:k1-rg2}) ensue directly via~(\ref{eq:phi-k-1-sol}). \vskip5pt (iii)\ {\it Transition region,} $\theta=O(P^{1/3})$. Next, we study the limits of $\phi^{(1)}$ and $k^{(1)}$ found above when $\theta$ enters the transition region, $\theta\to O(P^{1/3})$. First, we consider $\phi^{(1)}$ in the range $\theta >O(P^{1/3})$ and take $\theta\ll 1$. By~(\ref{eq:Cphi-def}) and~(\ref{eq:Q-def})--(\ref{eq:k0-asymp1}), we find $H^k=O(1)$, $W^k=O(1/\theta)$, $W^\phi=O(1)$, and \begin{equation} w^{(0)}= \frac{f_+ + f_-}{\theta}+O(\theta)=O\biggl(\frac{1}{\theta}\biggr)\Rightarrow (w^{(0)}\theta)_\theta= (f_+ +f_-)_{\theta}|_{\theta=0}+O(\theta)~, \label{eq:w0-smalltheta} \end{equation} \begin{equation} (v^{(0)}_\theta+v^{(0)}\tan\theta)k^{(0)}_\theta+(\cos\theta)^{-1}(w^{(0)}\tan\theta)_\theta= 2(f_+ +f_-)_\theta|_{\theta=0}+O(\theta)~. \label{eq:num-smalltheta} \end{equation} Hence, assuming $(f_+ +f_-)_\theta\neq 0$ at $\theta=0$, we have \begin{equation} \phi^{(1)}=O(P/\theta^2)\cdot O((f_++f_-)_\theta)\qquad O(P^{1/3})<\theta\ll 1~, \label{eq:phi1-ord-rg1} \end{equation} which becomes $O(P^{1/3}(f_+ +f_-)_\theta)$ as $\theta\to O(P^{1/3})$. On the other hand, by~(\ref{eq:phi1-rg2}) we get $\phi^{(1)}=O(1)$ when $0\le \theta < O(P^{1/3})$. This behavior is not in agreement with~(\ref{eq:phi1-ord-rg1}) unless $(f_+ + f_-)_\theta = O(P^{-1/3})$, i.e. the fluxes vary over angles $O(P^{1/3})$, $f_\pm=\breve f_\pm(P^{-1/3}\theta)$ for $\theta=O(P^{1/3})$. This behavior of $f_\pm$ is not compelling, since it is generally expected that the agreement in orders of magnitude is spoiled by the $\theta$-differentiation.\looseness=-1 We next consider $k^{(1)}$. By~(\ref{eq:k1-app1}) we find $k^{(1)}=O((f_+ +f_-)_\theta)$ for $O(P^{1/3})<\theta\ll 1$. On the other hand, by~(\ref{eq:k1-rg2}), $k^{(1)}=O(P^{-1/3})$ for $0\le\theta <O(P^{1/3})$. The two orders of magnitude agree if $(f_+ + f_-)_\theta = O(P^{-1/3})$, as above. \subsection{Condition on $\kappa$ and $P$} \label{sssec:limitn} Thus far, we have not provided any condition for the validity of our perturbation analysis. Such a condition would impose a constraint on $\kappa$ and $P$. In principle, $\kappa$ is a dynamic variable. For appropriate initial data, the step edges are assumed to evolve to the kinetic steady state with $\kappa=0$. Small deviations from this state can be treated within our perturbation framework if \begin{equation} |\kappa\phi^{(1)}|\ll \phi^{(0)},\qquad |\kappa\,k^{(1)}|\ll k^{(0)}~. \label{eq:phk-constr} \end{equation} By revisiting the formulas of sections~\ref{sec:steadyPdependence} and~\ref{sec:GT-stiff} for $\phi^{(j)}$ and $k^{(j)}$, we can give an order-of-magnitude estimate of an upper bound for $\kappa$.
By comparison of the $O(P)$ correction term for $k^{(0)}$ in~(\ref{eq:k0-asymp1-corr}) to $k^{(1)}$ in~(\ref{eq:k1-app1}), where $\theta=O(1)$, we obtain \begin{equation} |\kappa|< O(P)~. \label{eq:restr-kappa} \end{equation} \subsection{Step stiffness} \label{subsec:stiff} Once $\phi^{(0)}$ and $\phi^{(1)}$ have been derived, the step stiffness follows. We invoke the formulation of section~\ref{subsec:pertb} on the basis of formula~(\ref{eq:stiff-intro}) by using the fluxes $f_\pm$ as input external parameters. In particular, we show the limiting behaviors~(\ref{eq:stiff-smalltheta}) and~(\ref{eq:stiff-rg2}) for small $\theta$. In correspondence to section~\ref{subsec:pertb}, we use two distinct regimes. \vskip5pt (i)\ $O(P^{1/3})<\theta <\pi/4$. By~(\ref{eq:lten}) and the analysis in section~\ref{subsec:pertb}, $ {\tilde\beta}$ is given by~(\ref{eq:stiff-intro})--(\ref{eq:k0-asymp1}). Specifically, \begin{equation} \frac{ {\tilde\beta}}{k_B T}\sim\frac{C^\phi_1}{C^\phi_0}~,\label{eq:stiff-rg1} \end{equation} which is an $O(1)$ quantity in $P$ when $\theta=O(1)$. In order to compare this result to a recent equilibrium-based calculation for the stiffness~\cite{stasevich06}, we take $O(P^{1/3})<\theta\ll 1$. Then, by~(\ref{eq:Cphi-def}), \begin{equation} C_0^\phi\sim \frac{1}{\theta}\,\frac{ f_+ + f_-}{l_{123}}=O\biggl(\frac{1}{\theta}\biggr)~. \label{C0-phi-lim} \end{equation} In addition, if $(f_+ +f_-)_\theta \neq 0$ as $\theta\to 0^+$, by~(\ref{eq:Cph1-def}) and~(\ref{eq:num-smalltheta}) we find \begin{equation} C_1^\phi\sim \frac{( f_+ + f_-)_\theta|_{\theta=0}}{n_{123}\,\theta^2}=O\biggl(\frac{1}{\theta^2}\biggr)~.\label{eq:C1-phi-lim} \end{equation} Thus,~(\ref{eq:stiff-smalltheta}) follows from~(\ref{eq:stiff-rg1}). By contrast, if $(f_+ + f_-)_\theta$ vanishes in the limit $\theta\to 0$ then, by~(\ref{eq:num-smalltheta}), $ {\tilde\beta}/(k_B T)=O(1)$. \vskip5pt (ii)\ $0\le\theta < O(P^{1/3})$. In view of~(\ref{eq:stiff-intro}) with (\ref{eq:phik-exp-2}) and~(\ref{eq:phi1-rg2}), we readily obtain formula~(\ref{eq:stiff-rg2}) for $ {\tilde\beta}$. \vskip5pt (iii)\ $\theta\to O(P^{1/3})$. Formula~(\ref{eq:stiff-rg2}) is consistent with the $O(1/\theta)$ behavior of $ {\tilde\beta}$ for $O(P^{1/3})<\theta\ll 1$ provided that $(f_+ +f_-)_\theta =O(P^{-1/3})$. Indeed, from~(\ref{eq:stiff-rg1}) via~(\ref{eq:num-smalltheta}) we have $ {\tilde\beta}/(k_B T)= O((f_+ +f_-)_\theta/\theta)$, which properly reduces to~(\ref{eq:stiff-rg2}). Again, this ``matching'' is not compelling since $\theta$-derivatives are involved. \subsection{Alternative view} \label{subsec:extension} We consider $\theta=O(1)$ and focus briefly on the implications for the stiffness of treating the adatom densities $\rho_\pm$ as input parameters. This approach is mathematically equivalent to that of section~\ref{subsec:stiff}; only the physical definitions are altered in recognition of $\rho_\pm$ as the driving parameters. This viewpoint was partly followed in section~\ref{subsec:density} for straight step edges ($\kappa=0$).\looseness=-1 We show that the adatom fluxes have the form \begin{equation} f_\pm= {D_A}^{\pm}\rho_\pm\pm D_p^\pm (\rho_+ - \rho_-)- \mbox{\ss}(\theta;\rho_+,\rho_-)\cdot\kappa~. 
\label{eq:fpm-ss} \end{equation} The coefficients $D_p^\pm(\theta)$ and $ {D_A}^\pm(\theta)$ are defined by~(\ref{eq:perm-Dpm}) and~(\ref{eq:Dpm-ES-dens}); and \begin{equation} \mbox{\ss}=\frac{2 \,C^\phi_1}{1+(A_+ +A_-)\cos\theta}\,\cos\theta\qquad O(P^{1/3})<\theta<\pi/4~, \label{eq:ss-def} \end{equation} where $C^\phi_1$ and $A_\pm$ are defined by~(\ref{eq:Cph1-def}) and~(\ref{eq:Apm-def}). Furthermore, the $f_\pm$-dependent $C^\phi_1$ is now evaluated at $f_\pm= {D_A}^\pm\rho_\pm \pm D_p^\pm(\rho_+ -\rho_-)$; thus, $ \mbox{\ss}$ becomes $\rho$-dependent. Notably, \begin{equation} \mbox{\ss} = O(1/\theta)\qquad O(P^{1/3})<\theta\ll 1~. \label{eq:ss-1overth} \end{equation} As noted in section \ref{stepStiffness}, these results do not have the usual form since $ \mbox{\ss}$ is not proportional to $\rho_*$. We derive~(\ref{eq:fpm-ss})--(\ref{eq:ss-1overth}) directly from~(\ref{eq:fpm-kappa}) by treating the term $\kappa\,\phi^{(1)}$ as a perturbation. For $\kappa\phi^{(1)}=0$ (section~\ref{subsec:density}), (\ref{eq:f-bcf}) for $f_\pm$ is recovered with $\rho_0=0$; see~(\ref{eq:rho-star-perm}). For $\kappa\neq 0$, (\ref{eq:fpm-kappa}) reads \begin{equation} f_\pm \sim (1+l_{j_\pm}\tan\theta) {D_T}\rho_\pm\cos\theta-(A_+ f_+ + A_- f_-)\cos\theta-C^\phi_1\kappa\cos\theta~. \label{eq:fpm-kappa-2} \end{equation}By viewing $C^\phi_1$ as a given external parameter, we solve the linear equations~(\ref{eq:fpm-kappa-2}) for $f_\pm$ and find~(\ref{eq:fpm-ss}) with~(\ref{eq:ss-def}); $ \mbox{\ss}$ follows as a function of $\rho_\pm$ by a single iteration. We now take $\theta\ll 1$. By~(\ref{eq:Apm-def}), $A_\pm=O(1/\theta)$ while by~(\ref{eq:phi1-ord-rg1}) we have $C^\phi_1=O(1/\theta^2)$ assuming $( f_+ + f_-)_\theta=O(1)\neq 0$. Thus, (\ref{eq:ss-def}) leads to~(\ref{eq:ss-1overth}). Note that the standard Gibbs-Thomson formula~(\ref{eq:GT}) is not applicable here since $\rho_*=0$ (and hence $\rho_0=0$). However, a linear-in-$\kappa$ term in $f_\pm$ is present, giving rise to a ``generalized'' stiffness $ \mbox{\ss}$ that is not bound to a reference density $\rho_*$. \section{Conclusion} \label{sec:conclusion} The Gibbs-Thomson formula and stiffness of a step edge or island boundary were studied systematically from an atomistic, kinetic perspective. Our starting point was a kinetic model for out-of-equilibrium processes~\cite{caflischetal99,caflischli03}. The kinetic effects considered here include diffusion of edge-atoms and convection of kinks along step edges, supplemented with mean-field algebraic laws that relate mass fluxes to densities. Under the assumption that the model reaches a kinetic steady state with straight steps, the step stiffness is determined by perturbing this state for small edge curvature and P\'eclet number $P$ with $|\kappa| < O(P)$, and applying the quasi-steady approximation . A noteworthy result is that for sufficiently small $\theta$, $O(P^{1/3})<\theta\ll 1$, the step stiffness behaves as $ {\tilde\beta} = O(1/\theta)$. This behavior is in qualitative agreement with independent calculations based on equilibrium statistical mechanics~\cite{stasevich06,stasevichetal05}. Our analysis offers the first derivation of the step stiffness, a near-equilibrium concept, in the context of nonequilibrium kinetics. The results here are thus a step towards a better understanding of how evolution out of equilibrium can be reconciled with concepts of equilibrium thermodynamics for crystal surfaces. Furthermore, this analysis provides a linkage of microscopic parameters, e.g. 
atomistic transition rates and coordination numbers, to mesoscopic parameters of a BCF-type description. This simpler description is often a more attractive alternative for numerical simulations of epitaxial growth.\looseness=-1 There are various aspects of the problem that were not addressed in our analysis. For instance, it remains an open research direction to compare our predictions with results stemming from other kinetic models~\cite{balykovvoigt05,balykovvoigt06,filimonovhervieu04}. The existence of a kinetic steady state with straight edges, although expected intuitively for a class of initial data, should be tested with numerical computations. Germane is the assumption of linear-in-$\kappa$ corrections in expansions for the associated densities. Our perturbation analysis is limited by the magnitudes of $\kappa$ and $P$; specifically, $|\kappa| < O(P)$. The formal derivations need to be re-worked for $\kappa > O(P)$ as $P\to 0$. The kinetic steady state here forms a basis solution for our perturbation theory, and is different from an equilibrium state. At equilibrium, detailed balance implies that the fluxes $f_+$, $f_-$ and each of the physical contributions (terms with different coordination numbers) in~(\ref{eq:w-def})--(\ref{eq:h-def}) for $w$, $g$, and $h$ must vanish identically~\cite{caflischetal99}. An analysis based on this equilibrium approach and comparisons with the present results are the subjects of work in progress. Generally, it also remains a challenge to compare in detail kinetic models such as ours with predictions put forth by Kallunki and Krug with regard to the Einstein relation for atom migration along a step edge~\cite{kallunkikrug03}. Our underlying step edge model is based on a simple cubic lattice, and it does not include separate rates for kink or corner rounding. Lastly, we mention two limitations inherent to our model. The mean-field laws for the mass fluxes are probably inadequate in physical situations where atom correlations are crucial. The study of effects beyond mean field, a compelling but difficult task, lies beyond our present scope. In the same vein, we expect that the effects of elasticity~\cite{connelletal06,kuktabat02,shenoy04} will in principle modify the mesoscopic kinetic rates (attachment-detachment and permeability coefficients) and the step stiffness. The inclusion of elastic effects in the kinetic model and the study of their implications is a viable direction of near-future work. \section*{Acknowledgments}We thank T.~L. Einstein, J. Krug, M.~S. Siegel, T.~J. Stasevich, A. Voigt, and P.~W. Voorhees for useful discussions. One of us (DM) is grateful for the hospitality extended to him by the Institute for Pure and Applied Mathematics (IPAM) at the University of California, Los Angeles, in the Fall 2005, when part of this work was completed.\looseness=-1
\section{Introduction}\label{sec1} Very recently the LHCb Collaboration has reported the observation of an excess around 2.86 GeV in the $\bar{D}^0K^-$ invariant mass spectrum of $B_s^0\to \bar{D}^0K^-\pi^+$, which can be an admixture of spin-1 and spin-3 resonances corresponding to $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ \cite{Aaij:2014xza,Aaij:2014baa}, respectively. As indicated by LHCb \cite{Aaij:2014xza,Aaij:2014baa}, this is the first time that a spin-3 resonance has been identified. In addition, $D_{s2}^*(2573)$ also appears in the $\bar{D}^0K^-$ invariant mass spectrum. Before this observation, a charmed-strange state $D_{sJ}(2860)$ was reported by BaBar in the $DK$ channel \cite{Aubert:2006mh}, with mass $m=2856.6\pm1.5\pm5.0$ MeV and width $\Gamma=47\pm7\pm10$ MeV \cite{Aubert:2006mh}; it was later confirmed by BaBar in the $D^*K$ channel \cite{Aubert:2009ah}. The $D_{sJ}(2860)$ has stimulated extensive discussions on its underlying structure. In Ref.~\cite{Zhang:2006yj}, $D_{sJ}(2860)$ is suggested to be a $1^3D_3$ $c\bar{s}$ meson. This explanation was also supported by studies using the effective Lagrangian approach \cite{Colangelo:2006rq,Colangelo:2012xi}, Regge phenomenology \cite{Li:2007px}, the constituent quark model \cite{Zhong:2009sk} and the mass-loaded flux tube model \cite{Chen:2009zt}. The ratio $\Gamma(D_{sJ}(2860)\to D^*K)/\Gamma(D_{sJ}(2860)\to DK)$ was calculated to be 0.36 within the effective Lagrangian method \cite{Colangelo:2007ds}. However, a calculation using the QPC model shows that this ratio is about 0.8 \cite{Li:2009qu}, which is close to the experimental value $1.10\pm0.15\pm0.19$ \cite{Aubert:2009ah}. Thus, the $J^P=3^-$ assignment for $D_{sJ}(2860)$ is a possible explanation. In addition, an interpretation of $D_{sJ}(2860)$ as a mixture of charmed-strange states was given in Refs.~\cite{Li:2009qu,Zhong:2008kd,Zhong:2009sk}. $D_{sJ}(2860)$ could be a partner of $D_{s1}(2710)$, where both $D_{sJ}(2860)$ and $D_{s1}(2710)$ are mixtures of the $2^3S_1$ and $1^3D_1$ $c\bar{s}$ states. By introducing such a mixing mechanism, the obtained ratio of $D^*K/DK$ for $D_{sJ}(2860)$ and $D_{s1}(2710)$ \cite{Li:2009qu} is consistent with the experimental data \cite{Aubert:2009ah}. Reference \cite{vanBeveren:2009jq} indicates that there exist two overlapping resonances (radially excited $J^P=0^+$ and $J^P=2^+$ $c\bar{s}$ states) at 2.86 GeV. Besides the above explanations within the conventional charmed-strange meson framework, $D_{sJ}(2860)$ was also explained as a multiquark exotic state \cite{Vijande:2008zn}. {$D_{sJ}(2860)$ as a $J^P=0^+$ charmed-strange meson was suggested in Ref. \cite{Zhang:2006yj}. However, this scalar charmed-strange meson cannot decay into $D^*K$ \cite{Zhang:2006yj}, which contradicts BaBar's observation of $D_{sJ}(2860)$ in its $D^*K$ decay channel \cite{Aubert:2006mh}. After the observation of $D_{sJ}(2860)$, $D_{sJ}(3040)$ was reported by BaBar in the $D^*K$ channel \cite{Aubert:2009ah}, which can be explained as the first radial excitation of $D_{s1}(2460)$ with $J^P=1^+$ \cite{Sun:2009tg}. In addition, the decay behaviors of the other $2P$ charmed-strange mesons still missing in experiment were given in Ref. \cite{Sun:2009tg}. } Briefly reviewing the research status of $D_{sJ}(2860)$, we notice that more theoretical and experimental efforts are still necessary to clarify its properties.
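As a simple kinematic orientation for the decay channels discussed throughout this work, the following sketch computes the two-body thresholds and breakup momenta at $m=2860$ MeV from the standard relativistic two-body formula $|\mathbf{P}_B|=\sqrt{[m_A^2-(m_B+m_C)^2][m_A^2-(m_B-m_C)^2]}/(2m_A)$. The meson masses are those collected later in Table~\ref{table:Rvalue}; the script is an illustrative aside, not part of the QPC calculation itself.
\begin{verbatim}
import math

masses = {"D": 1867, "D*": 2008, "Ds": 1968, "Ds*": 2112,
          "K": 494, "K*": 896, "eta": 548}     # MeV

def p_break(mA, mB, mC):
    # Two-body breakup momentum |P_B| in the rest frame of A.
    s = mA*mA
    lam = (s - (mB + mC)**2) * (s - (mB - mC)**2)
    return math.sqrt(lam) / (2*mA) if lam > 0 else None

mA = 2860.0                                    # MeV, D_s1*/D_s3*(2860)
for B, C in [("D", "K"), ("D*", "K"), ("D", "K*"),
             ("Ds", "eta"), ("Ds*", "eta")]:
    thr = masses[B] + masses[C]
    p = p_break(mA, masses[B], masses[C])
    print(f"{B}{C:>4}: threshold {thr:4.0f} MeV, |P_B| = {p:5.0f} MeV")
\end{verbatim}
All five channels are open at 2860 MeV, with the largest breakup momentum for $DK$ and markedly smaller momenta for $D_s\eta$ and $D_s^*\eta$, consistent with the phase-space suppression of the latter channels discussed below.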
It is obvious that the recent precise measurement of LHCb \cite{Aaij:2014xza,Aaij:2014baa} provides us with a good opportunity to identify higher excitations in the charmed-strange meson family. The resonance parameters of the newly observed $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ by LHCb include \cite{Aaij:2014xza,Aaij:2014baa}: \begin{eqnarray} m_{D_{s1}^*(2860)}&=&2859\pm12\pm6\pm23\,\, {\mathrm{MeV}},\\ \Gamma_{D_{s1}^*(2860)}&=&159\pm23\pm27\pm72\,\, {\mathrm{MeV}},\\ m_{D_{s3}^*(2860)}&=&2860.5\pm2.6\pm2.5\pm6.0\,\,{\mathrm{MeV}},\\ \Gamma_{D_{s3}^*(2860)}&=&53\pm7\pm4\pm6\,\, {\mathrm{MeV}}, \end{eqnarray} where the errors are statistical, experimentally systematic, and due to model variations \cite{Aaij:2014xza,Aaij:2014baa}, respectively. At present, there are good candidates for the $1S$ and $1P$ states in the charmed-strange meson family (see Particle Data Group for more details \cite{Beringer:1900zz}). Thus, the two newly observed $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ can be categorized as $1D$ charmed-strange states when considering their spin quantum numbers and masses. Before the observation of these two resonances, there were several theoretical predictions \cite{Godfrey:1985xj, Matsuki:2007zza, Di Pierro:2001uu, Ebert:2009ua} of the mass spectrum of the $1D$ charmed-strange meson family, which are collected in Table \ref{prediction}. Comparing the experimental data of $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ with the theoretical results, we notice that the masses of $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ are comparable with the corresponding theoretical predictions, which further supports that it is reasonable to assign $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ as the $1D$ states of the charmed-strange meson family. \renewcommand{\arraystretch}{1.5} \begin{table}[htbp] \caption{Theoretical predictions for the charmed-strange meson spectrum and comparison with the experimental data. \label{prediction}} \begin{tabular}{lccccc} \toprule[1pt] $J^P(^{2s+1}L_J)$ & Expt. \cite{Beringer:1900zz} & GI \cite{Godfrey:1985xj} & MMS \cite{Matsuki:2007zza} & PE \cite{Di Pierro:2001uu} & EFG \cite{Ebert:2009ua} \\ \hline $0^-(^1S_0)$ & 1968 & 1979 & 1967 & 1965 & 1969 \\ $1^-(^3S_1)$ & 2112 & 2129 & 2110 & 2113 & 2111 \\ \hline $0^+(^3P_0)$ & 2318 & 2484 & 2325 & 2487 & 2509 \\ $1^+("^1P_1")$& 2460 & 2459 & 2467 & 2535 & 2536 \\ $1^+("^3P_1")$& 2536 & 2556 & 2525 & 2605 & 2574 \\ $2^+(^3P_2)$ & 2573 \cite{Aaij:2014xza, Aaij:2014baa} & 2592 & 2568 & 2581 & 2571 \\ \hline $1^-(^3D_1)$ & 2859 \cite{Aaij:2014xza, Aaij:2014baa} & 2899 & 2817 & 2913 & 2913 \\ $2^-("^1D_2")$& -- & 2900 & -- & 2900 & 2931 \\ $2^-("^3D_2")$& -- & 2926 & 2856 & 2953 & 2961 \\ $3^-(^3D_3)$ & 2860 \cite{Aaij:2014xza, Aaij:2014baa} & 2917 & -- & 2925 & 2871 \\ \bottomrule[1pt] \end{tabular} \end{table} Although both the mass spectrum analysis and the measurement of the spin quantum number support $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ as the $1D$ states, we still need to carry out a further test of this assignment through study of their decay behaviors. This study can provide more detailed information on the partial decay widths, which is valuable for future experimental investigation of $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$. In addition, there exist four $1D$ states in the charmed-strange meson family. At present, the spin partners of $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ are still missing in experiment. Thus, we will also predict the properties of the two missing $1D$ states in this work. This paper is organized as follows.
In Section \ref{sec2}, after a brief introduction of the method, we study the decay behaviors of $D_{s1}(2860)$ and $D_{s3}(2860)$. The paper ends with a discussion and conclusions in Sec. \ref{sec3}. \section{Decay behavior of $D_{s1}(2860)$ and $D_{s3}(2860)$}\label{sec2} Among all properties of these $1D$ states, their two-body Okubo-Zweig-Iizuka (OZI)-allowed strong decays are the most important and typical ones. Hence, in the following we mainly focus on the study of their OZI-allowed strong decays. For the $1D$ states studied in this work, the allowed decay channels are listed in Table \ref{table:channel}. Among the four $1D$ states in the charmed-strange meson family, there are two $J^P=2^-$ states, which are mixtures of the $1^1D_2$ and $1^3D_2$ states, i.e., \begin{eqnarray} \left(\begin{array}{c}1D(2^-) \\1D^\prime(2^-)\end{array}\right)=\left(\begin{array}{cc}\cos\theta_{1D} & \sin\theta_{1D} \\-\sin\theta_{1D} & \cos\theta_{1D}\end{array}\right) \left(\begin{array}{c}1^3D_2 \\1^1D_2\end{array}\right), \end{eqnarray} where $\theta_{1D}$ is a mixing angle. { In the heavy quark limit, a general mixing angle between $^3L_L$ and $^1L_L$ is $\theta_{L}=\arctan(\sqrt{L/(L+1)})$, which indicates $\theta_{1D}=39^\circ$ \cite{Matsuki:2010zy}.} \begin{table}[htb] \caption{The two-body OZI-allowed decay modes of $1D$ charmed-strange mesons. Here, we use the symbols $\circ$ and -- to mark the OZI-allowed and -forbidden decay modes, respectively. $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ are $1^3D_1$ and $1^3D_3$ states, respectively. \label{table:channel} } \centering \begin{tabular}{l c c c c c c c c c c c c c} \toprule[1pt] \multirow{2}{*}{Channels} & \multirow{2}{*}{$D^*_{s1}(2860)$} & $1D(2^-)$ & \multirow{2}{*}{$D_{s3}^*(2860)$}\\ & & $1D'(2^-)$ &\\ \hline $D K$ &$\circ$ & -- &$\circ$ \\ $D^{*}K$ &$\circ$ &$\circ$ &$\circ$ \\ $D_s\eta$ &$\circ$ & -- &$\circ$ \\ $D_{s}^*\eta$ &$\circ$ &$\circ$ &$\circ$ \\ $DK^*$ &$\circ$ &$\circ$ &$\circ$ \\ $D_0^*(2400)K$ & -- &$\circ$ & -- \\ $D_{s0}^*(2317)\eta$ & -- &$\circ$ & -- \\ \bottomrule[1pt] \end{tabular}\\ \label{table:decay} \end{table} In the following, we apply the quark pair creation (QPC) model \cite{Micu:1968mk,Le Yaouanc:1972ae, LeYaouanc:1988fx, vanBeveren:1979bd, vanBeveren:1982qb, Bonnaz:2001aj, roberts} to describe the OZI-allowed two-body strong decays shown in Table \ref{table:channel}; the QPC model has been extensively adopted to study the strong decays of hadrons \cite{Zhang:2006yj, Liu:2009fe, Sun:2009tg, Sun:2010pg, Yu:2011ta, Wang:2012wa, Ye:2012gu, He:2013ttg, Sun:2013qca, Sun:2014wea, Pang:2014laa, He:2014xna}. In the QPC model, a quark-antiquark pair is created from the QCD vacuum with the vacuum quantum numbers $J^{PC}=0^{++}$. For a decay process, i.e., an initial observed meson $A$ decaying into two observed mesons $B$ and $C$, the process can be expressed as \begin{eqnarray} \langle BC|T|A \rangle = \delta ^3(\mathbf{P}_B+\mathbf{P}_C)\mathcal{M}^{{M}_{J_{A}}M_{J_{B}}M_{J_{C}}},\label{hh1} \end{eqnarray} where $\mathbf{P}_B$ ($\mathbf{P}_C$) is the three-momentum of the final meson $B$ ($C$) in the rest frame of $A$. $M_{J_{i}}$ $(i=A,B,C)$ denotes the magnetic quantum number. Additionally, $\mathcal{M}^{{M}_{J_{A}}M_{J_{B}}M_{J_{C}}}$ is the helicity amplitude. The transition operator $T$ in Eq.
(\ref{hh1}) is written as \cite{Micu:1968mk,Le Yaouanc:1972ae,LeYaouanc:1988fx,vanBeveren:1979bd, vanBeveren:1982qb,Bonnaz:2001aj,roberts} \begin{eqnarray} T& = &-3\gamma \sum_{m}\langle 1m;1-m|00\rangle\int d \mathbf{p}_3d\mathbf{p}_4\delta ^3 (\mathbf{p}_3+\mathbf{p}_4) \nonumber \\ && \times \mathcal{Y}_{1m}\left(\frac{\mathbf{p}_3-\mathbf{p}_4}{2}\right)\chi _{1,-m}^{34}\phi _{0}^{34} \omega_{0}^{34}b_{3i}^{\dag}(\mathbf{p}_3)d_{4j}^{\dag}(\mathbf{p}_4), \end{eqnarray} which is introduced in a phenomenological way to describe the quark-antiquark pair (denoted by indices 3 and 4) created from the vacuum. $\mathcal{Y}_{lm}(\mathbf{p})=|\mathbf{p}|Y_{lm}(\mathbf{p})$ is the solid harmonic. $\chi$, $\phi$, and $\omega$ denote the spin, flavor, and color wave functions, respectively. By the Jacob-Wick formula \cite{Jacob:1959at}, the decay amplitude reads as \begin{eqnarray} \mathcal{M}^{JL}(\mathbf{P})&=&\frac{\sqrt{2L+1}}{2J_A+1}\sum_{M_{J_B}M_{J_C}}\langle L0;JM_{J_A}|J_AM_{J_A}\rangle \nonumber \\ &&\times \langle J_BM_{J_B};J_CM_{J_C}|{J_A}M_{J_A}\rangle \mathcal{M}^{M_{J_{A}}M_{J_B}M_{J_C}}. \end{eqnarray} Finally, the general decay width is \begin{eqnarray} \Gamma&=&\pi ^2\frac{|\mathbf{P}_B|}{m_A^2}\sum_{J,L}|\mathcal{M}^{JL}(\mathbf{P})|^2, \end{eqnarray} where $m_{A}$ is the mass of the initial state $A$. In the following, the helicity amplitudes $\mathcal{M}^{M_{J_A}M_{J_B}M_{J_C}}$ of the OZI-allowed strong decay channels in Table \ref{table:channel} are calculated, which is the main task of the whole calculation. { Here, we adopt the simple harmonic oscillator (SHO) wave function $\Psi_{n,\ell m}(\mathbf{k})$, where the value of the parameter $R$ appearing in the SHO wave function is fixed such that it reproduces the realistic root mean square (rms) radius, which can be calculated by the relativistic quark model \cite{Close:2005se} with a Coulomb plus linear confining potential as well as a hyperfine interaction term.} In Table \ref{table:Rvalue}, we list the $R$ values adopted in our calculation. {The strength of $q\bar{q}$ creation is taken as $\gamma=6.3$ \cite{Sun:2009tg}, while the strength of $s\bar{s}$ creation satisfies $\gamma_s=\gamma/\sqrt{3}$. We need to specify that our $\gamma$ value adopted in the present work is $\sqrt{96\pi}$ times larger than that adopted by other groups \cite{Godfrey:1986wj,Close:2005se}, where the $\gamma$ value as an overall factor can be obtained by fitting the experimental data (see Ref. \cite{Blundell:1996as} for more details of how to get the $\gamma$ value).} In addition, the constituent quark masses for charm, up/down, and strange quarks are 1.60 GeV, 0.33 GeV, and 0.55 GeV, respectively \cite{Close:2005se}. \begin{table}[ht] \caption{The $R$ values (in units of GeV$^{-1}$) \cite{Close:2005se} and masses (in units of MeV) \cite{Beringer:1900zz} of the mesons involved in the present calculation. \label{table:Rvalue} } \centering \begin{tabular}{ccccccccc} \toprule[1pt] States& $R$ & mass&States& $R$ &mass &States& $R$ &mass \\ \midrule[1pt] $D$ & 2.33 & 1867 &$D^\ast$ & 2.70 & 2008 &$D_0(2400)$ & 3.13 & 2318\\ $D_s$ & 1.92 & 1968 &$D_s^\ast$& 2.22 & 2112 &$D_{s0}(2317)$ & 2.70 & 2318\\ $K$ & 2.17 & 494 & $K^\ast$ & 3.13 & 896 & $\eta$ & 2.12 & 548 \\ \bottomrule[1pt] \end{tabular} \end{table} \begin{figure*}[htbp] \centering \begin{tabular}{cc} \scalebox{0.34}{\includegraphics{3D1.eps}} & \scalebox{0.34}{\includegraphics{3D3.eps}} \end{tabular} \caption{(Color online).
The total and partial decay widths of $1^3D_1$ (left panel) and $1^3D_3$ (right panel) charmed-strange mesons dependent on the $R$ value. Here, the dashed lines with the yellow bands are the experimental widths of $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$ from LHCb \cite{Aaij:2014xza,Aaij:2014baa}. \label{Fig:3D1D3}} \end{figure*} With the above preparation, we obtain the total and partial decay widths of $D_{s1}^*(2860)$, $D_{s3}^*(2860)$ and their spin partners, and comparison with the experimental data \cite{Aaij:2014xza,Aaij:2014baa}. As shown in Table \ref{table:Rvalue}, the $R$ value of the $P$-wave charmed-strange meson is about 2.70 $\mathrm{GeV}^{-1}$. For the $D$-wave state, the $R$ value is estimated to be around 3.00 $\mathrm{GeV}^{-1}$ \cite{Close:2005se}. In present calculation, we vary the $R$ value for the $D$-wave charmed-strange meson from 2.4 to 3.6 $\mathrm{GeV}^{-1}$. In Fig. \ref{Fig:3D1D3}, we present the $R$ dependence of the total and partial decay widths of $D_{s1}(2860)$ and $D_{s3}(2860)$. \subsection{$D_{s1}(2860)$} The total width of $D_{s1}(2860)$ as the $1^3D_1$ state is given in Fig. \ref{Fig:3D1D3}, where the total width is obtained as $128\sim$ 177 MeV corresponding to the selected $R$ range, which is consistent with the experimental width of $D_{s1}(2860)$ ($\Gamma=159 \pm 23 \pm 27\pm 72$ MeV \cite{Aaij:2014xza,Aaij:2014baa}). The information of the partial decay widths depicted in Fig. \ref{Fig:3D1D3} also shows that $DK$ is the dominant decay mode of the $1^3D_1$ charm strange meson, which explains why $D_{s1}(2860)$ was experimentally observed in its $DK$ decay channel \cite{Aaij:2014xza,Aaij:2014baa}). In addition, our study also indicates that the $D^\ast K$ and $D K^\ast$ channels are also important for the $1^3D_1$ state, which have partial widths $35 \sim 44$ MeV and $24 \sim 40$ MeV, respectively. The $D_s \eta$ and $D_s^\ast \eta$ channels have partial decay widths with several MeV, which is far smaller than the partial decay widths of the $DK$, $D^\ast K$ and $D K^\ast$ channels. This phenomena can be understood since the decays of the $1^3D_1$ state into $D_s \eta$ and $D_s^\ast \eta$ have the smaller phase space. The above results show that $D_{s1}(2860)$ can be a good candidate for the $1^3D_1$ charmed-strange meson. Besides providing the partial decay widths, we also predict several typical ratios, i.e., \begin{eqnarray} \frac{\Gamma(D_{s1}(2860) \to D^\ast K)}{\Gamma(D_{s1}(2860) \to D K)}&=& 0.46 \sim 0.70,\label{aa1}\\ \frac{\Gamma(D_{s1}(2860) \to D K^\ast)}{\Gamma(D_{s1}(2860) \to D K)} &=& 0.25 \sim 0.85,\\ \frac{\Gamma(D_{s1}(2860) \to D_s \eta)}{\Gamma(D_{s1}(2860) \to D K)} &=& 0.10 \sim 0.14, \end{eqnarray} which can be further tested in the coming experimental measurements. {The Belle Collaboration once reported $D_{s1}(2710)$ in the $DK$ invariant mass spectrum, which has mass $2708\pm9^{+11}_{-10}$ MeV and width $108\pm23^{+36}_{-31}$ MeV \cite{Brodzicka:2007aa}. The analysis of angular distribution indicates that $D_{s1}(2710)$ has the spin-parity $J^P=1^-$ \cite{Brodzicka:2007aa}. Later, the BaBar Collaboration confirmed $D_{s1}(2710)$ in a new $D^*K$ channel \cite{Aubert:2009ah}, where the ratio $\Gamma(D^*K)/\Gamma(DK)=0.91\pm0.13\pm0.12$ for $D_{s1}(2710)$. In Refs. \cite{Zhang:2006yj,Colangelo:2007ds}, the assignment of $D_{s1}(2710)$ to the $1^3D_1$ charmed-strange meson was proposed. 
However, the obtained $\Gamma(D^*K)/\Gamma(DK)=0.043$ \cite{Colangelo:2007ds} is deviated far from the experimental data, which does not support the $1^3D_1$ charmed-strange assignment to $D_{s1}(2710)$. Especially, the present study of newly observed $D_{s1}(2860)$ shows that $D_{s1}(2860)$ is a good candidate of the $1^3D_1$ charmed-strange meson.} {If $D_{s1}(2710)$ is not a $1^3D_1$ charmed-strange meson, we need to find the place in the charmed-strange meson family. In Ref. \cite{Zhang:2006yj}, the authors once indicated that $D_{s1}(2710)$ as the $2^3S_1$ charmed-strange meson is not completely excluded\footnote{We need to explain the reason why the $2^3S_1$ charmed-strange meson is not completely excluded in Ref. \cite{Zhang:2006yj}. In Ref. \cite{Zhang:2006yj}, the total decay width of $D_{s1}(2710)$ as the $2^3S_1$ charmed-strange meson was calculated, which is 32 MeV. This value is obtained with the typical $R=3.2$ GeV$^{-1}$ value. As shown in Fig. 4 (d) in \cite{Zhang:2006yj}, the total decay width is strongly dependent on the $R$ value due to node effects. If taking other typical $R$ values which are not far away from $3.2$ GeV$^{-1}$, the total decay width can reach up to the lower limit of the experimental width of $D_{s1}(2710)$.}. By the effective Lagrangian approach \cite{Colangelo:2007ds} and under the assignment of $D_{s1}(2710)$ to the $2^3S_1$ charmed-strange meson, the $\Gamma(D_{s1}(2710)\to D^*K)/\Gamma(D_{s1}(2710)\to DK)=0.91$ was obtained, which is close to the experimental value \cite{Brodzicka:2007aa}. We also notice the recent work of $D_{sJ}(2860)$ \cite{Godfrey:2014fga}, where they also proposed that $D_{s1}(2710)$ is the $2^3S_1$ $c\bar{s}$ meson. } {Besides the above exploration in the framework of a conventional charmed-strange meson, the exotic state explanation to $D_{s1}(2710)$ was given in Ref. \cite{Vijande:2008zn}.} \begin{figure*}[htb] \centerin \scalebox{1}{\includegraphics{sdmixing_new.eps}} \caption{(Color online). {The total and partial decay widths (in units of MeV) of $D_{s1}(2860)$ dependent on the $R$ value. The solid, dashed and dotted curves correspond to the typical $2S$-$1D$ mixing angles $\theta= 0^\circ $, $\theta =15^\circ$ and $\theta=30^\circ$, respectively. The band in the left-top panel is the total decay width with errors, which was reported by the LHCb Collaboration \cite{Aaij:2014baa, Aaij:2014xza}. }\label{Fig:SDMixing}} \end{figure*} In the following, we include the mixing between $2^3S_1$ and $1^3D_1$ states to further discuss the mixing angle dependence of the decay behavior of $D_{s1}(2860)$. Here, $D_{s1}(2S)$ and $D_{s1}(2860)$ are the mixtures of $2^3S_1$ and $1^3D_1$ states, which satisfy the relation below \begin{eqnarray} \left( \begin{array}{c} |D_{s1}(2S)\rangle \\ |D_{s1}(2860)\rangle \end{array} \right) = \left( \begin{array}{cc} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right) \left( \begin{array}{c} |2^3S_1\rangle\\ |1^3D_1 \rangle \end{array} \right). \end{eqnarray} { The $2S$-$1D$ mixing angle should be small due to the relative large mass gap between $2S$ and $1D$ state, where we take some typical values $\theta= 0^\circ$, $\theta=15^\circ$, and $\theta=30^\circ$ to present our results. $\theta=0^\circ$ denotes that there is no $2S$-$1D$ mixing. In Fig. \ref{Fig:SDMixing}, we present the total and partial decay widths, which depend on the $R$ value, where we show the results with three different typical $\theta$ values. In the left-top panel of Fig. 
\ref{Fig:SDMixing}, we list the total decay width of $D_{s1}(2860)$ and the comparison with the experimental data. When taking $2.4\ \mathrm{GeV}^{-1}<R<3.6\ \mathrm{GeV}^{-1}$, the calculated total decay width varies from 130 MeV to 235 MeV, which indicates the theoretically estimated total decay width of $D_{s1}(2860)$ can overlap with the experimental measurement. In addition, we also notice that the partial decay widths for the $ D_s \eta$ and $ D_s^\ast \eta $ channels are less than 10 MeV, while for the $DK$, $D^\ast K$ and $D K^\ast$ decay modes, the corresponding partial decay widths vary from several 20 MeV to more than one hundred MeV, which strongly depend on the value of a mixing angle. In Ref. \cite{Li:2009qu}, the authors adopted a different convention for the $2S$-$1D$ mixing, where their $2S$-$1D$ mixing angle has a sign opposite to our scenario. Taking the same $R$ value and mixing angle for $D_{s1}(2860)$, we obtain the partial decay widths of $DK$ and $D^\ast K$ are less than 10 MeV, while the calculated partial decay width of $DK^\ast$ is more than 20 MeV, which is consistent with the results in Ref. \cite{Li:2009qu}. } \subsection{$D_{s3}(2860)$} The two-body OZI-allowed decay behavior of $D_{s3}(2860)$ as the $1^3D_3$ charmed-strange meson is presented in the right panel of Fig. \ref{Fig:3D1D3}, where the obtained total width can reach up to $42 \sim 60$ MeV, which overlaps with the LHCb's data ($53 \pm 7 \pm 4 \pm 6$ MeV \cite{Aaij:2014xza,Aaij:2014baa}). This fact further reflects that $D_{s3}(2860)$ is suitable for the $1^3D_3$ charmed-strange meson. Similar to $D_{s1}(2860)$, the $DK$ channel is also the dominant decay mode of $D_{s3}(2860)$, where the partial decay width of $D_{s3}(2860) \to DK$ is $25 \sim 30$ MeV in the selected $R$ value range. Additionally, we also calculate the partial decay width of $D_{s3}(2860) \to D^\ast K$ and $D_{s3}(2860) \to D K^\ast $, which are $14 \sim 24$ MeV and $0.9 \sim 2.5$ MeV, respectively. The partial decay widths for $D_s \eta$ and $D_s^\ast \eta$ channel are of order of 0.1 MeV. The corresponding typical ratios for $D_{s3}(2860)$ are \begin{eqnarray} \frac{\Gamma(D_{s3}(2860) \to D^\ast K)}{\Gamma(D_{s3}(2860) \to D K)} &=& 0.55 \sim 0.80,\label{bb1}\\ \frac{\Gamma(D_{s3}(2860) \to D K^\ast)}{\Gamma(D_{s3}(2860) \to D K)} &=& 0.03 \sim 0.09,\\ \frac{\Gamma(D_{s3}(2860) \to D_s \eta)}{\Gamma(D_{s1}(2860) \to D K)} &=& 0.018 \sim 0.020, \end{eqnarray} which can be tested in future experiment. {In Ref. \cite{Zhang:2006yj}, the ratio $\Gamma(D^*K)/\Gamma(DK)=13/22=0.59$ was given for $D_{sJ}(2860)$ observed by Belle as the $1^3D_3$ state, where the QPC model was also adopted and this ratio is obtained by taking a typical value $R=2.94$ GeV$^{-1}$. In the present work, we consider the range $R=2.4\sim 3.6$ GeV$^{-1}$ to present the $D^*K/DK$ ratio for $D_{s3}(2860)$. If comparing the ratio given in Eq. (\ref{bb1}) with the corresponding one obtained in Ref. \cite{Zhang:2006yj}, we notice that their value $\Gamma(D^*K)/\Gamma(DK)=0.59$ \cite{Zhang:2006yj} just falls into our obtained range $\Gamma(D^*K)/\Gamma(DK)=0.55\sim 0.80$. } {In addition, we need to make a comment on the experimental ratio $\Gamma(D^*K)/\Gamma(DK)=1.10\pm0.15\pm0.19$ for $D_{sJ}(2860)$, which was extracted by the BaBar Collaboration \cite{Aubert:2006mh}. 
Since LHCb indicated that there exist two resonances $D_{s1}(2860)$ and $D_{s3}(2860)$ in the $DK$ invariant mass spectrum \cite{Aaij:2014xza,Aaij:2014baa}, the experimental ratio $\Gamma(D^*K)/\Gamma(DK)$ for $D_{sJ}(2860)$ must be changed, which means that we cannot simply apply the old $\Gamma(D^*K)/\Gamma(DK)$ data in Ref. \cite{Aubert:2006mh} to draw a conclusion on the structure of $D_{sJ}(2860)$. We expect further experimental measurements of this ratio, which will be helpful to reveal the properties of the observed $D_{sJ}(2860)$ states. } \begin{figure*}[htbp] \centering \scalebox{1}{\includegraphics{D2_new.eps}} \caption{(Color online). {The total and partial decay widths of $1D(2^-)$ (upper row) and $1D^\prime(2^-)$ (lower row) charmed-strange mesons dependent on the $R$ value. Here, a mixing angle of $39^\circ$ is chosen. The solid and dashed curves correspond to the two predicted masses of the $2^-$ states, 2850 MeV and 2950 MeV, respectively. }\label{Fig:D2}} \end{figure*} \subsection{$1D(2^-)$ and $1D^\prime(2^-)$} In the following, we discuss the decay behaviors of the two $1D$ states still missing in experiment, which is crucial to the experimental search for the $1D(2^-)$ and $1D^\prime(2^-)$ states. { We first fix the mixing angle $\theta_{1D}=39^\circ$ obtained in the heavy quark limit, and discuss the $R$ value dependence of the total and partial decay widths of the two missing $2^-$ states, which are presented in Fig. \ref{Fig:D2}.} Since these two $1D$ states have not yet been seen in experiment, we take the mass range $2850 \sim 2950$ MeV, which covers the theoretically predicted masses of the $1D(2^-)$ and $1D^\prime(2^-)$ states from the different groups listed in Table \ref{prediction}, to discuss the decay behaviors of the $1D(2^-)$ and $1D^\prime(2^-)$ states. The total and partial decay widths of $1D(2^-)$ are presented in the upper panel of Fig. \ref{Fig:D2}. Here, two typical masses of the $1D(2^-)$ state, 2850 MeV and 2950 MeV, are considered, which correspond to the solid and dashed curves in Fig. \ref{Fig:D2}. The estimated total decay width varies from 90 MeV to 190 MeV, which is due to the uncertainty of the predicted mass of the $1D(2^-)$ state and the $R$ value dependence mentioned above. If the mass of the $1D(2^-)$ state can be constrained by future experiments, the uncertainty of the total width for the $1D(2^-)$ state can be further reduced. In any case, our study indicates that the $1D(2^-)$ state has a broad width. Additionally, as shown in Fig. \ref{Fig:D2}, the $1D(2^-)$ state dominantly decays into $D^\ast K$, where $\mathcal{B}(1D(2^-)\to D^\ast K)=(77\sim 87)\%$, and $D K^\ast$ and $D_s^\ast \eta$ are its main decay modes. Comparing $D^\ast K$, $DK^\ast$ and $D_s^\ast \eta$ with each other, $D_s^\ast \eta$ is the weakest decay channel. Hence, we suggest that experiments first search for the $1D(2^-)$ state via the $D^\ast K$ channel. We also obtain two typical ratios, i.e., \begin{eqnarray} &&\frac{\Gamma(1D(2^-)\to D K^\ast)}{\Gamma(1D(2^-)\to D^\ast K) } =0.11\sim 0.36~,\\ &&\frac{\Gamma(1D(2^-)\to D_s^\ast \eta)}{\Gamma(1D(2^-)\to D^\ast K) } =0.11\sim 0.18, \label{Eq:1D2b} \end{eqnarray} which are accessible in experiment. As for the $1D^\prime(2^-)$ state, the total and partial decay widths are presented in the lower panel of Fig. \ref{Fig:D2}.
We predict its total decay width, $(80\sim240)$ MeV, which shows that the $1D^\prime(2^-)$ state is also a broad resonance, and $DK^\ast$ is its dominant decay mode with a branching ratio $\mathcal{B}(1D^\prime (2^-)\to D K^\ast)=(64\sim 73)\%$. Its main decay modes also include $D^\ast K$, while $1D^\prime \to D_s^\ast \eta$ and $1D^\prime \to D_0(2400) K$ have small partial decay widths. Besides the above information, two typical ratios are listed below: \begin{eqnarray} &&\frac{\Gamma(1D^\prime (2^-)\to D^\ast K)}{\Gamma(1D^\prime (2^-)\to D K^\ast)} =0.36 \sim 0.53,\\ &&\frac{\Gamma(1D^\prime (2^-)\to D_s^\ast \eta)}{\Gamma(1D^\prime (2^-)\to D K^\ast)} =0.004 \sim 0.013. \end{eqnarray} \renewcommand{\arraystretch}{1.5} \begin{table*}[htbp] \caption{{The total and partial decay widths of the two $2^-$ states (in units of MeV) dependent on the mixing angle $\theta_{1D}$ (the typical values are $\theta_{1D}=20^\circ,\ 30^\circ$ and $50^\circ$). Here, the $R$ value of $1D(2^-)$ and $1D'(2^-)$ is fixed as $R=2.85$ GeV$^{-1}$ \cite{Close:2005se}.}\label{Tab:D2} } \centering \begin{tabular}{l c c c c c c| c c c c c c }\toprule[1pt] &\multicolumn{6}{c|}{M=2850 MeV}&\multicolumn{6}{c}{M=2950 MeV}\\ &\multicolumn{2}{c}{$\theta_{1D}=20^\circ$} &\multicolumn{2}{c}{$\theta_{1D}=30^\circ$} & \multicolumn{2}{c|}{$\theta_{1D}=50^\circ$} & \multicolumn{2}{c}{$\theta_{1D}=20^\circ$} & \multicolumn{2}{c}{$\theta_{1D}=30^\circ$} & \multicolumn{2}{c}{$\theta_{1D}=50^\circ$} \\ Channels&$ 1D(2^-)$&$ 1D'(2^-) $&$ 1D(2^-)$&$ 1D'(2^-) $&$ 1D(2^-)$&$ 1D'(2^-) $&$ 1D(2^-)$&$ 1D'(2^-) $&$ 1D(2^-)$&$ 1D'(2^-) $&$ 1D(2^-)$&$ 1D'(2^-) $ \\ \midrule[1pt] $D^*K$&126.93&44.60&135.61&35.90&134.62&36.84&110.42&77.82&113.85 &74.39&113.46&74.76 \\ $DK^*$&26.06&70.04&13.54&82.59&1.99&94.18&56.19&118.06&38.58&135.69 &22.34&152.00 \\ $D_s^* \eta$&13.93&2.08&15.17&0.83&15.03&0.97&20.23&4.21&21.91&2.54 &21.72&2.72 \\ $D_0^*(2400)K$&0.012&0.011&0.008&0.015&0.002&0.022&0.50&0.08&0.42&0.16 &0.22&0.36\\ \midrule[1pt] Total& 166.93&116.73&164.33&119.34 &151.64 &132.01 &187.34 & 200.17& 174.76& 212.78 &157.74&229.84 \\ \bottomrule[1pt] \end{tabular}\\ \label{table:angle} \end{table*} It should be noticed that the threshold of $D_{s0}^\ast (2317) \eta$ is 2865 MeV and that the decays of the two $J^P=2^-$ $1D$ charmed-strange mesons into $D_{s0}^\ast(2317) \eta$ occur via a $D$-wave. Thus, $1D(2^-)/1D^\prime(2^-)\to D_{s0}^\ast (2317) \eta$ is suppressed, which is supported by our calculation since the corresponding partial decay widths are of the order of a few keV for $1D(2^-) \to D_{s0}^\ast \eta$ and of the order of 0.1 keV for $1D^\prime(2^-) \to D_{s0}^\ast \eta$. { In Table \ref{Tab:D2}, we fix the $R$ value of $1D(2^-)$ and $1D^\prime(2^-)$ to be 2.85 GeV$^{-1}$ \cite{Close:2005se} and further discuss the total and partial decay widths as functions of the mixing angle $\theta_{1D}$, where three typical values $\theta_{1D}=20^\circ$, $\theta_{1D}=30^\circ$ and $\theta_{1D} =50^\circ$ are adopted. For the $1D(2^-)$ state, the total decay width varies from 152 MeV to 187 MeV, which indicates that the total decay width depends only weakly on the mixing angle $\theta_{1D}$. Moreover, the $1D(2^-)$ state dominantly decays into $D^\ast K$, whose width varies from 110 MeV to 134 MeV, owing to the uncertainty in the mass of the $1D(2^-)$ state and the different mixing angles $\theta_{1D}$. In addition, the ratio of $1D(2^-) \to D_s^\ast \eta$ to $1D(2^-) \to D^\ast K$ is estimated to be $0.11 \sim 0.19$, which is consistent with that shown in Eq.
(\ref{Eq:1D2b}). However, the predicted partial decay width of $1D(2^-) \to D K^\ast$ is strongly dependent on the mixing angle, varying from 2 MeV to 56 MeV. The total decay width of $1D^\prime(2^-)$ varies from 117 MeV to 230 MeV depending on the different predicted masses and mixing angles. Its dominant decay modes are $D^\ast K $ and $DK^\ast$, and the ratio of $1D^\prime(2^-) \to D^\ast K$ to $1D^\prime(2^-) \to D K^\ast$ is predicted to be $0.39 \sim 0.66$. The partial decay widths of $1D^\prime(2^-) \to D_s^\ast \eta$ and $1D^\prime(2^-) \to D_0(2400) K$ are relatively small, a few MeV and less than 0.5 MeV, respectively. } \section{Discussion and conclusions}\label{sec3} With the observation of the two charmed-strange resonances $D_{s1}(2860)$ and $D_{s3}(2860)$, which was recently announced by the LHCb Collaboration \cite{Aaij:2014xza,Aaij:2014baa}, the spectrum of observed charmed-strange states becomes more and more abundant. In this work, we have carried out a study of the observed $D_{s1}(2860)$ and $D_{s3}(2860)$, which indicates that $D_{s1}(2860)$ and $D_{s3}(2860)$ can be well categorized as the $1^3D_1$ and $1^3D_3$ states in the charmed-strange meson family, since the experimental widths of $D_{s1}(2860)$ and $D_{s3}(2860)$ can be reproduced by the corresponding calculated total widths of the $1^3D_1$ and $1^3D_3$ states. In addition, the result for their partial decay widths shows that the $DK$ decay mode is dominant both for the $1^3D_1$ and $1^3D_3$ states, which naturally explains why $D_{s1}(2860)$ and $D_{s3}(2860)$ were first observed in the $DK$ channel. If $D_{s1}(2860)$ and $D_{s3}(2860)$ are the $1^3D_1$ and $1^3D_3$ states, respectively, our study also indicates that the $D^*K$ and $DK^*$ channels are the main decay modes of $D_{s1}(2860)$ and $D_{s3}(2860)$, respectively. Thus, we suggest that future experiments search for $D_{s1}(2860)$ and $D_{s3}(2860)$ in these main decay channels, which can not only test our predictions presented in this work but also provide more information on the properties of $D_{s1}(2860)$ and $D_{s3}(2860)$. As the spin partners of $D_{s1}(2860)$ and $D_{s3}(2860)$, two $1D$ charmed-strange mesons with $J^P=2^-$ are still missing in experiment. Thus, in this work we also predict the decay behaviors of these two missing $1D$ charmed-strange mesons. Our calculation by the QPC model shows that both the $1D(2^-)$ and $1D^\prime(2^-)$ states have very broad widths. For the $1D(2^-)$ and $1D^\prime(2^-)$ states, the dominant decay mode is $D^*K$ and $DK^*$, respectively. In addition, $DK^*$ and $D^*K$ are also important decay modes of the $1D(2^-)$ and $1D^\prime(2^-)$ states, respectively. This investigation provides valuable information for further experimental exploration of these two missing $1D$ charmed-strange mesons. In summary, the observed $D_{s1}(2860)$ and $D_{s3}(2860)$ provide us with a good opportunity to establish higher states in the charmed-strange meson family. Further experimental and theoretical efforts are still necessary to reveal the underlying properties of $D_{s1}(2860)$ and $D_{s3}(2860)$. Furthermore, it is a challenging research topic for future experiments to hunt for the two predicted missing $1D$ charmed-strange mesons with $J^P=2^-$.
Before closing this section, we would like to discuss the threshold effect, or coupled-channel effect, which was proposed to explain the puzzlingly low masses and narrow widths of $D_{s0}^\ast(2317)$ \cite{vanBeveren:2003kd} and $D_{s1}(2460)$ \cite{vanBeveren:2003jv}, and to understand the properties of $X(3872)$. We notice that the typical $D^*K$, $DK^*$, and $D^*K^*$ thresholds lie at $\sim2580$ MeV, $\sim2762$ MeV, and $\sim 2902$ MeV, respectively. The observed $D_{s1}(2860)$ and $D_{s3}(2860)$ are near the $D^*K^*$ threshold, while $D_{s1}(2715)$ is close to the $DK^*$ threshold. This fact suggests that the threshold effect or coupled-channel effect may be important for these observed charmed-strange states. For example, in Ref. \cite{Molina:2010tx}, the authors studied the $D^*K^*$ threshold effect on $D_{s2}^*(2573)$. Thus, a further theoretical study of $D_{s1}(2860)$ and $D_{s3}(2860)$ that takes the threshold effect or coupled-channel effect into account is an interesting research topic.

{In addition, the results presented in this work are calculated using SHO wave functions with an rms radius obtained within a relativistic quark model \cite{Godfrey:1985xj}, which provides a quantitative estimate of the decay behavior of hadrons. However, the profile of the SHO wave function differs slightly from the one obtained with the relativistic quark model; for example, the nodes may appear at different places in the two cases. Thus, adopting a numerical wave function from the relativistic quark model \cite{Godfrey:1985xj} may further improve the results, and comparing the results obtained with the SHO wave function and with the numerical wave function is an interesting research topic\footnote{We would like to thank the anonymous referee for his/her valuable suggestion.}.}

\section*{Acknowledgments} This project is supported by the National Natural Science Foundation of China under Grants No. 11222547, No. 11375240, No. 11175073, and No. 11035006, the Ministry of Education of China (SRFDP under Grant No. 2012021111000), and the Fok Ying Tung Education Foundation (No. 131006). \newpage
\chapter{Basics of Linear and Conic Programs} \label{App-A}

Mathematical programming theory has been thoroughly developed in breadth and depth since its birth in the 1940s, when George Dantzig invented the simplex algorithm for linear programming. The most influential findings in the field of optimization theory can be summarized as \cite{App-A-CVX-Book-Ben}:

1) Recognition of the fact that, under mild conditions, a convex optimization program is computationally tractable: the computational effort under a given accuracy grows moderately with the problem size, even in the worst case. In contrast, a non-convex program is generally computationally intractable: the computational effort of the best known methods grows prohibitively fast with respect to the problem size, and it is reasonable to believe that this is an intrinsic feature of such problems rather than a limitation of existing optimization techniques.

2) The discovery of interior-point methods, which were originally developed in the 1980s to solve LPs and can be generalized to solve convex optimization problems as well. Moreover, between these two extremes (LPs and general convex programs), there are many important and useful convex programs. Although nonlinear, they still possess nice structural properties, which can be utilized to develop more dedicated algorithms. These polynomial-time interior-point algorithms turn out to be considerably more efficient than those exploiting convexity alone.

The superiority of formulating a problem as a convex optimization problem is apparent. The most appealing advantage is that the problem can be solved reliably and efficiently. It is also convenient to build the associated dual problem, which offers sensitivity information and may help develop distributed algorithms for solving the problem. Convex optimization has been applied to a number of energy system operational issues and is well acknowledged for its computational superiority. We believe that it is imperative for researchers and engineers to develop a solid understanding of this important topic. As we have already learnt in previous chapters, many optimization problems in energy system engineering can be formulated as or converted into convex programs.

The goal of this chapter is to help readers develop the necessary background knowledge and skills to apply several well-structured convex optimization models, including LPs, SOCPs, and SDPs, i.e., to formulate or transform their problems as these specific convex programs. Certainly, convex transformation (or convexification) may be rather tricky and require special knowledge and skills. Nevertheless, the attempt often turns out to be worthwhile. We also pay special attention to nonconvex QCQPs, which can model various decision-making problems in engineering, such as optimal power flow and optimal gas flow. We discuss the convex relaxation technique based on SDP, which is very useful for obtaining a high-quality lower bound on the optimal value. We also present MILP formulations for some special QPs; because of the special problem structure, these MILP models can tackle practically sized problems in reasonable time.

Most materials regarding convex sets and functions come from \cite{App-A-CVX-Book-Boyd} and its solution manual \cite{App-A-CVX-Book-Solution}; the extension of duality theory from linear programming to conic programming follows \cite{App-A-CVX-Book-Ben}. We consolidate the necessary contents in a convenient way to make this book self-contained and easy to follow.
\section{Basic Notations} \label{App-A-Sect01}

\subsection{Convex Sets} \label{App-A-Sect01-01}

A set $C \subseteq \mathbb R^n$ is convex if the line segment connecting any two points in $C$ is contained in $C$, i.e., for any $x_1,x_2 \in C$, we have $\theta x_1 + (1-\theta)x_2 \in C$, $\forall \theta \in [0,1]$. Roughly speaking, standing anywhere in a convex set, one can see every other point of the set. Fig. \ref{fig:App-01-01} illustrates a simple convex set and a non-convex set in $\mathbb R^2$.

\begin{figure}[!htp] \centering \includegraphics[scale=0.60]{Fig-App-01-01} \caption{Left: the circle is convex; right: the ring is non-convex.} \label{fig:App-01-01} \end{figure}

The convex combination of $k$ points $x_1,\cdots,x_k$ is defined as $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_1,\cdots,\theta_k \ge 0$ and $\theta_1 + \cdots +\theta_k = 1$. A convex combination of points can be regarded as a weighted average of the points, with $\theta_i$ the weight of $x_i$ in the mixture. The convex hull of a set $C$, denoted conv$(C)$, is the smallest convex set that contains $C$. Particularly, if $C$ has finitely many elements, then
\begin{equation} \mbox{conv}(C) = \{ \theta_1 x_1 + \cdots + \theta_k x_k ~|~ x_i \in C,~ \theta_i \ge 0,~ i=1,\cdots,k,~ \theta_1 + \cdots + \theta_k = 1\} \notag \end{equation}
Fig. \ref{fig:App-01-02} illustrates the convex hulls of two sets in $\mathbb R^2$. Some useful convex sets are briefly introduced below.

\begin{figure}[!htp] \centering \includegraphics[scale=0.35]{Fig-App-01-02} \caption{Left: The convex hull of eighteen points. Right: The convex hull of a kidney shaped set.} \label{fig:App-01-02} \end{figure}

\vspace{12pt} {\noindent \bf 1. Cones}

A set $C$ is called a cone, or nonnegative homogeneous, if for any $x \in C$, we have $\theta x \in C$, $\forall \theta \ge 0$. A set $C$ is a convex cone if it is convex and a cone: for any $x_1, x_2 \in C$ and $\theta_1,\theta_2 \ge 0$, we have $\theta_1 x_1 + \theta_2 x_2 \in C$. The conic combination (or nonnegative linear combination) of $k$ points $x_1,\cdots,x_k$ is defined as $\theta_1 x_1 + \cdots + \theta_k x_k$, where $\theta_1,\cdots,\theta_k \ge 0$. If a finite set of points $\{x_i\}$, $i=1,2,\cdots$ resides in a convex cone $C$, then every conic combination of $\{x_i\}$ remains in $C$. Conversely, a set $C$ is a convex cone if and only if it contains all conic combinations of its elements. The conic hull of a set $C$ is the smallest convex cone that contains $C$. Fig. \ref{fig:App-01-03} illustrates the conic hulls of two sets in $\mathbb R^2$.

\begin{figure}[!htp] \centering \includegraphics[scale=0.80]{Fig-App-01-03} \caption{The conic hulls of the two sets \cite{App-A-CVX-Book-Boyd}.} \label{fig:App-01-03} \end{figure}

Some widely used cones are introduced below.

\vspace{12pt} {\noindent \bf a. The nonnegative orthant}

The nonnegative orthant is defined as
\begin{equation} \label{eq:App-01-Orthant-P} \mathbb R^n_+ = \{ x \in \mathbb R^n ~|~ x \ge 0 \} \end{equation}
It is the set of vectors with non-negative entries, and is clearly a convex cone.

\vspace{12pt} {\noindent \bf b. Second-order cone}

The unit second-order cone is defined as
\begin{equation} \label{eq:App-01-Unit-SOC} \mathbb L^{n+1}_C = \{(x,t) \in \mathbb R^{n+1} ~|~ \| x \|_2 \le t\} \end{equation}
It is also called the Lorentz cone or ice-cream cone. Fig. \ref{fig:App-01-04} exhibits $\mathbb L^3_C$.
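As a quick numerical illustration of definition (\ref{eq:App-01-Unit-SOC}), the following Python sketch tests whether a point $(x,t)$ belongs to $\mathbb L^{n+1}_C$; the helper function \texttt{in\_soc} is purely illustrative and not part of any library.

\begin{verbatim}
import numpy as np

def in_soc(v, tol=1e-12):
    # v = (x, t): the point lies in L^{n+1} iff ||x||_2 <= t
    x, t = np.asarray(v[:-1], dtype=float), float(v[-1])
    return np.linalg.norm(x) <= t + tol

print(in_soc([3.0, 4.0, 5.0]))  # True : ||(3,4)||_2 = 5 <= 5 (boundary point)
print(in_soc([3.0, 4.0, 4.9]))  # False: 5 > 4.9
\end{verbatim}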
\begin{figure}[!htp] \centering \includegraphics[scale=0.70]{Fig-App-01-04} \caption{$\mathbb L^3_C = \left\{ (x_1,x_2,t) ~\middle|~ \sqrt{x^2_1 + x^2_2} \le t \right\}$ in $\mathbb R^3$ \cite{App-A-CVX-Book-Boyd}.} \label{fig:App-01-04} \end{figure}

For any $(x,t) \in \mathbb L^{n+1}_C$, $(y,z) \in \mathbb L^{n+1}_C$, and $\theta_1,\theta_2 \ge 0$, we have
\begin{equation} \| \theta_1 x + \theta_2 y \|_2 \le \theta_1 \|x\|_2 + \theta_2 \|y\|_2 \le \theta_1 t + \theta_2 z \Rightarrow \theta_1 \begin{bmatrix} x \\ t \end{bmatrix} + \theta_2 \begin{bmatrix} y \\ z \end{bmatrix} \in \mathbb L^{n+1}_C \notag \end{equation}
which means that the unit second-order cone is a convex cone.

Sometimes, it is convenient to use the following inequality to represent a second-order cone in optimization problems
\begin{equation} \label{eq:App-01-General-SOC} \|A x + b \|_2 \le c^T x + d \end{equation}
where $A \in \mathbb R^{m \times n}$, $b \in \mathbb R^m$, $c \in \mathbb R^n$, $d \in \mathbb R$. It is the inverse image of the unit second-order cone under the affine mapping $f(x) =(A x + b, c^T x + d)$, and hence is convex. Second-order cones in the forms of (\ref{eq:App-01-Unit-SOC}) and (\ref{eq:App-01-General-SOC}) are interchangeable:
\begin{equation} \|A x + b \|_2 \le c^T x + d \Leftrightarrow \begin{bmatrix} A \\ c^T \end{bmatrix} x + \begin{bmatrix} b \\ d \end{bmatrix} \in \mathbb L^{m+1}_C \notag \end{equation}

\vspace{12pt} {\noindent \bf c. Positive semidefinite cone}

The set of symmetric $m \times m$ matrices is denoted by
\begin{equation} \mathbb S^m = \{X \in \mathbb R^{m \times m}~|~ X = X^T\} \notag \end{equation}
which is a vector space with dimension $m(m + 1)/2$. The set of symmetric positive semidefinite matrices is denoted by
\begin{equation} \mathbb S^m_+ = \{X \in \mathbb S^m ~|~ X \succeq 0 \} \notag \end{equation}
The set of symmetric positive definite matrices is denoted by
\begin{equation} \mathbb S^m_{++} = \{X \in \mathbb S^m ~|~ X \succ 0 \} \notag \end{equation}
Clearly, $\mathbb S^m_+$ is a convex cone: if $A, B \in \mathbb S^m_+$, then for any $x \in \mathbb R^m$ and nonnegative scalars $\theta_1,\theta_2 \ge 0$, we have
\begin{equation} x^T(\theta_1 A + \theta_2 B) x = \theta_1 x^T A x + \theta_2 x^T B x \ge 0 \notag \end{equation}
implying $\theta_1 A + \theta_2 B \in \mathbb S^m_+$.

A positive semidefinite cone in $\mathbb R^2$ can be expressed via three variables $x,y,z$ as
\begin{equation} \begin{bmatrix} x & y \\ y & z \end{bmatrix} \succeq 0 \Leftrightarrow x \ge 0,~ z \ge 0,~ xz \ge y^2 \notag \end{equation}
which is plotted in Fig. \ref{fig:App-01-05}. In fact, $\mathbb L^3_C$ and $\mathbb S^2_+$ are equivalent to each other.
To see this, note that the hyperbolic inequality $xz \ge y^2$ with $x \ge 0$, $z \ge 0$ defines the same feasible region in $\mathbb R^3$ as the following second-order cone
\begin{equation} \left\| \begin{gathered} 2y \\ x-z \end{gathered} \right\|_2 \le x + z,~ x \ge 0,~ z \ge 0 \notag \end{equation}
In higher dimensions, every second-order cone can be written as an LMI via the Schur complement as
\begin{equation} \label{eq:App-01-SOC-LMI} \|A x + b \|_2 \le c^T x + d \Rightarrow \begin{bmatrix} (c^T x + d)I & A x + b \\ (A x + b)^T & c^T x + d \end{bmatrix} \succeq 0 \end{equation}

\begin{figure}[!htp] \centering \includegraphics[scale=0.70]{Fig-App-01-05} \caption{Positive semidefinite cone in $\mathbb S^2$ (or in $\mathbb R^3$) \cite{App-A-CVX-Book-Boyd}.} \label{fig:App-01-05} \end{figure}

In this sense of representability, positive semidefinite cones are more general than second-order cones. However, the transformation in (\ref{eq:App-01-SOC-LMI}) may not be superior from the computational perspective, because SOCPs are more tractable than SDPs.

\vspace{12pt} {\noindent \bf d. Copositive cone}

A copositive cone $\mathbb C^n_+$ consists of symmetric matrices whose quadratic form is nonnegative over the nonnegative orthant $\mathbb R^n_+$:
\begin{equation} \label{eq:App-01-COP-Cone} \mathbb C^n_+ = \{ A ~|~ A \in \mathbb S^n,~ x^T A x \ge 0,~ \forall x \in \mathbb R^n_+ \} \end{equation}
The copositive cone $\mathbb C^n_+$ is closed, pointed, and convex \cite{App-A-COP-Cone-Property}. Clearly, $\mathbb S^n_+ \subseteq \mathbb C^n_+$, and every entry-wise nonnegative symmetric matrix $A$ belongs to $\mathbb C^n_+$. Actually, $\mathbb C^n_+$ is significantly larger than the positive semidefinite cone and the nonnegative symmetric matrix cone.

\vspace{12pt} {\noindent \bf 2. Polyhedra}

A polyhedron is defined as the solution set of a finite number of linear inequalities:
\begin{equation} \label{eq:App-01-Poly-H} P = \{ x~|~ Ax \le b \} \end{equation}
(\ref{eq:App-01-Poly-H}) is also called a hyperplane representation of a polyhedron. It is easy to show that polyhedra are convex sets. Sometimes, a polyhedron is also called a polytope; the two concepts are used interchangeably in this book. Because of the physical bounds on decision variables, the polyhedral feasible regions in practical energy system optimization problems are usually bounded, which means that there is no extreme ray.

Polyhedra can be expressed via convex combinations as well. The convex hull of a finite number of points
\begin{equation} \label{eq:App-01-Poly-V-1} \mbox{conv} \{v_1,\cdots,v_k\} = \{ \theta_1 v_1 + \cdots + \theta_k v_k ~|~ \theta \ge 0,~ 1^T \theta = 1 \} \end{equation}
defines a polyhedron. (\ref{eq:App-01-Poly-V-1}) is called a convex hull representation. If the polyhedron is unbounded, a generalization of the convex hull representation is
\begin{equation} \label{eq:App-01-Poly-V-2} \{ \theta_1 v_1 + \cdots + \theta_k v_k ~|~ \theta \ge 0,~ \theta_1 + \cdots + \theta_m = 1, m \le k \} \end{equation}
which considers nonnegative linear combinations of $v_i$, but only the first $m$ coefficients, whose sum is 1, are bounded; the remaining ones can take arbitrarily large values. In view of this, the convex hull of points $v_1,\cdots,v_m$ plus the conic hull of points $v_{m+1}, \cdots,v_k$ is a polyhedron. The reverse is also correct: any polyhedron can be represented by a convex hull and a conic hull.
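For bounded polyhedra, passing from the convex hull (vertex) representation to the hyperplane representation can be carried out numerically. A minimal sketch, assuming the scipy package is available (the unit square is made up for illustration):

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

# Vertices (convex hull representation) of the unit square in R^2
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

hull = ConvexHull(V)
# Each row (a1, a2, b) of hull.equations describes one facet a^T x + b <= 0;
# stacking the rows yields a hyperplane representation A x <= -b of conv(V)
print(hull.equations)
\end{verbatim}

The reverse switch, enumerating the vertices of $\{x ~|~ Ax \le b\}$, is considerably more expensive, which foreshadows the discussion below.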
How to represent a polyhedron depends on what information is available: if its boundaries are expressed via linear inequalities, the hyperplane representation is straightforward; if its extreme points and extreme rays are known in advance, the convex-conic hull representation is more convenient. As the dimension grows, it becomes more and more difficult to switch between the hyperplane representation and the hull representation (i.e., to derive one from the other).

\subsection{Generalized Inequalities} \label{App-A-Sect01-02}

A cone $K \subseteq \mathbb R^n$ is called a proper cone if it satisfies:

1) $K$ is convex and closed.

2) $K$ is solid, i.e., it has non-empty interior.

3) $K$ is pointed, i.e., $x \in K$, $-x \in K$ $\Rightarrow x = 0$.

A proper cone $K$ can be used to define a generalized inequality, a partial ordering on $\mathbb R^n$, as follows
\begin{equation} \label{eq:App-01-GNI} x \preceq_K y \Longleftrightarrow y - x \in K \end{equation}
We write $x \succeq_K y$ for $y \preceq_K x$. Similarly, a strict partial ordering can be defined by
\begin{equation} \label{eq:App-01-GNI-Strict} x \prec_K y \Longleftrightarrow y - x \in \mbox{int}(K) \end{equation}
where int$(K)$ stands for the interior of $K$, and we write $x \succ_K y$ for $y \prec_K x$.

The nonnegative orthant $\mathbb R^n_+$ is a proper cone. When $K = \mathbb R^n_+$, the partial ordering $\preceq_K$ comes down to the element-wise comparison between vectors: for $x,y \in \mathbb R^n$, $x \preceq_{\mathbb R^n_+} y$ means $ x_i \le y_i$, $i = 1,\cdots,n$, i.e., the traditional notation $x \le y$. The positive semidefinite cone $\mathbb S^n_+$ is a proper cone in $\mathbb S^n$. When $K = \mathbb S^n_+$, the partial ordering $\preceq_K$ comes down to a linear matrix inequality between symmetric matrices: for $X,Y \in \mathbb S^n$, $X \preceq_{\mathbb S^n_+} Y$ means $ Y - X$ is positive semidefinite. Because it arises so frequently, we drop the subscript $\mathbb S^n_+$ when we write a linear matrix inequality $Y \succeq X$ or $X \preceq Y$; it is understood that such a generalized inequality corresponds to the positive semidefinite cone without particular mention. A generalized inequality is equivalent to linear constraints when $K = \mathbb R^n_+$; for other cones, such as the second-order cone $\mathbb L^{n+1}_C$ or the positive semidefinite cone $\mathbb S^n_+$, the feasible region is nonlinear but remains convex.

\subsection{Dual Cones and Dual Generalized Inequalities} \label{App-A-Sect01-03}

Let $K$ be a cone in $\mathbb R^n$. Its dual is defined as the following set
\begin{equation} \label{eq:App-01-Dual-Cone} K^* = \{y ~|~ x^T y \ge 0,~ \forall x \in K\} \end{equation}
Because $K^*$ is the intersection of homogeneous half spaces (half spaces passing through the origin), it is a closed convex cone. The interior of $K^*$ is given by
\begin{equation} \label{eq:App-01-Interior-DCone} \mbox{int}(K^*) = \{y ~|~ x^T y > 0,~ \forall x \in K,~ x \ne 0 \} \end{equation}
To see this, if $y^T x > 0$, $\forall x \in K$, then $(y+u)^T x > 0$, $\forall x \in K$ holds for all $u$ that are sufficiently small; hence $y \in \mbox{int}(K^*)$. Conversely, if $y \in K^*$ and $\exists x \in K : y^T x = 0$, $x \ne 0$, then $(y - tx)^T x < 0$, $\forall t > 0$, indicating $y \notin \mbox{int}(K^*)$. If int$(K) \ne \emptyset$, then $K^*$ is pointed.
To see the latter claim, suppose on the contrary that $\exists y \ne 0$: $y \in K^*$, $-y \in K^*$, i.e., $y^T x \ge 0$, $\forall x \in K$ and $-y^T x \ge 0$, $\forall x \in K$; then $x^T y = 0$, $\forall x \in K$, so $K$ would lie in the hyperplane $\{x ~|~ y^T x = 0\}$, which contradicts int$(K) \ne \emptyset$. In conclusion, $K^*$ is a proper cone if the original cone $K$ is so, and $K^*$ is closed and convex regardless of the original cone $K$. Fig. \ref{fig:App-01-06} shows a cone $K$ (the region between $L_2$ and $L_3$) and its dual cone $K^*$ (the region between $L_1$ and $L_4$) in $\mathbb R^2$.

\begin{figure}[!htp] \centering \includegraphics[scale=0.50]{Fig-App-01-06} \caption{Illustration of a cone and its dual cone in $\mathbb R^2$.} \label{fig:App-01-06} \end{figure}

In light of the definition of $K^*$, a non-zero vector $y$ is the normal of a homogeneous half space containing $K$ if and only if $y \in K^*$. The intersection of all such half spaces containing $K$ constitutes the cone $K$ (if $K$ is closed); in view of this,
\begin{equation} \label{eq:App-01-Double-Dual} K = \bigcap_{y \in K^*} \left\{x ~|~ y^T x \ge 0 \right\} = \{x ~|~ y^T x \ge 0,~ \forall y \in K^* \} = K^{**} \end{equation}
This fact can also be understood in $\mathbb R^2$ from Fig. \ref{fig:App-01-06}. The extreme cases for the normal vector $y$ such that the corresponding half space contains $K$ are $L_1$ and $L_4$, and the intersection of these half spaces over all $y \in K^*$ turns out to be the original cone $K$. Next, we investigate the dual cones of three special proper cones, i.e., $\mathbb R^n_+$, $\mathbb L^{n+1}_C$, and $\mathbb S^n_+$.

\vspace{12pt} {\noindent \bf 1. The nonnegative orthant}

By observing the fact
\begin{equation} x^T y \ge 0,~ \forall x \ge 0 \Longleftrightarrow y \ge 0 \notag \end{equation}
we naturally have $(\mathbb R^n_+)^* = \mathbb R^n_+$; in other words, the nonnegative orthant is self-dual.

\vspace{12pt} {\noindent \bf 2. The second-order cone}

Now, we show that the second-order cone is also self-dual: $(\mathbb L^{n+1}_C)^* = \mathbb L^{n+1}_C$. To this end, we need to demonstrate
\begin{equation} x^T u + t v \ge 0,~ \forall (x,t) \in \mathbb L^{n+1}_C \Longleftrightarrow \|u\|_2 \le v \notag \end{equation}

$\Rightarrow$: Suppose the right-hand condition is false, i.e., $\exists (u,v):\|u\|_2 > v$. By recalling the Cauchy-Schwarz inequality $|a^T b| \le \|a\|_2 \|b\|_2$, we have
\begin{equation} \min_{x} \left\{ x^T u ~\middle|~ \left\| x \right\|_2 \le t \right\} = - t \| u\|_2 \notag \end{equation}
At such a minimizer, $x^T u + t v = t(v - \|u\|_2) < 0$, $\forall t > 0$, which contradicts the left-hand condition.

$\Leftarrow$: Again, according to the Cauchy-Schwarz inequality, we have
\begin{equation*} x^T u + t v \ge -\|x\|_2 \|u\|_2 + t v \ge -\|x\|_2 \|u\|_2 + \|x\|_2 v = \|x\|_2 \left( v - \|u\|_2 \right) \ge 0 \end{equation*}

\vspace{12pt} {\noindent \bf 3. The positive semidefinite cone}

We investigate the dual cone of $\mathbb S^n_+$.
The inner product of $X,Y \in \mathbb S^n$ is defined by the element-wise summation
\begin{equation} \langle X,Y \rangle = \sum_{i=1}^n \sum_{j=1}^n X_{ij} Y_{ij} = \mbox{tr}(X Y^T) \notag \end{equation}
We establish the fact that $(\mathbb S^n_+)^* = \mathbb S^n_+$, which boils down to
\begin{equation} \mbox{tr}(X Y^T) \ge 0,~ \forall X \succeq 0 \Longleftrightarrow Y \succeq 0 \notag \end{equation}

$\Rightarrow$: Suppose $Y \notin \mathbb S^n_+$; then $\exists q \in \mathbb R^n$ such that
\begin{equation} q^T Y q = \mbox{tr}(q q^T Y^T) < 0 \notag \end{equation}
which contradicts the left-hand condition because $X = q q^T \in \mathbb S^n_+$.

$\Leftarrow$: Now suppose $X, Y \in \mathbb S^n_+$. $X$ can be expressed via its eigenvalues $\lambda_i \ge 0$ and eigenvectors $q_i$ as $X = \sum_{i=1}^n \lambda_i q_i q^T_i$; then we arrive at
\begin{equation} \mbox{tr}(X Y^T) = \mbox{tr} \left( Y\sum_{i=1}^n \lambda_i q_i q^T_i \right)= \sum_{i=1}^n \lambda_i q^T_i Y q_i \ge 0 \notag \end{equation}
In summary, the positive semidefinite cone is self-dual.

\vspace{12pt} {\noindent \bf 4. The completely positive cone}

Following the same concept of matrix inner product, it can be shown that $(\mathbb C^n_+)^*$ is the cone of so-called completely positive matrices, which can be expressed as \cite{App-A-COP-Cone-Dual}
\begin{equation} \label{eq:App-01-COMPL-Cone} (\mathbb C^n_+)^* = \mbox{conv} \{ x x^T ~|~ x \in \mathbb R^n_+\} \end{equation}
In contrast to the previous three cones, the copositive cone $\mathbb C^n_+$ is not self-dual.

\vspace{12pt}

When the dual cone $K^*$ is proper, it induces a generalized inequality $\preceq_{K^*}$, which is called the dual generalized inequality of the one induced by cone $K$ (if $K$ is proper). According to the definition of the dual cone, an important fact relating a generalized inequality and its dual is

1) $x \preceq_K y$ if and only if $\lambda^T x \le \lambda^T y$, $\forall \lambda \in K^*$.

2) $x \prec_K y$ if and only if $\lambda^T x < \lambda^T y$, $\forall \lambda \in K^*$, $\lambda \ne 0$.

When $K = K^{**}$, the dual generalized inequality of $\preceq_{K^*}$ is $\preceq_K$, and the above properties hold with the positions of $K$ and $K^*$ swapped.

\subsection{Convex Function and Epigraph} \label{App-A-Sect01-04}

A function $f:\mathbb R^n \to \mathbb R$ is convex if its domain $X$ is a convex set and, for all $x_1,x_2 \in X$, the following condition holds
\begin{equation} \label{eq:App-01-Convex-Function} f(\theta x_1 + (1-\theta)x_2) \le \theta f(x_1) + (1-\theta) f(x_2),~ \forall \theta \in [0,1] \end{equation}
The geometrical interpretation of inequality (\ref{eq:App-01-Convex-Function}) is that the chord connecting points $(x_1, f(x_1))$ and $(x_2, f(x_2))$ always lies above the curve of $f$ between $x_1$ and $x_2$ (see Fig. \ref{fig:App-01-07}). Function $f$ is strictly convex if strict inequality holds in (\ref{eq:App-01-Convex-Function}) whenever $x_1 \ne x_2$ and $0 < \theta < 1$. Function $f$ is called (strictly) concave if $-f$ is (strictly) convex. An affine function is both convex and concave.

The graph of a function $f:\mathbb R^n \to \mathbb R$ is defined as
\begin{equation} \label{eq:App-01-Graph-f} \mbox{graph } f = \{(x,f(x))~|~ x \in X\} \end{equation}
which is a subset of $\mathbb R^{n+1}$. The epigraph of a function $f:\mathbb R^n \to \mathbb R$ is defined as
\begin{equation} \label{eq:App-01-Epigraph-f} \mbox{epi } f = \{(x,t)~|~ x \in X,~ f(x) \le t \} \end{equation}
which is a subset of $\mathbb R^{n+1}$.
These definitions are illustrated in Fig. \ref{fig:App-01-07}.

\begin{figure}[!htp] \centering \includegraphics[scale=0.60]{Fig-App-01-07} \caption{Illustration of the graph of a convex function $f(x)$ (the solid line) and its epigraph (the shaded area) in $\mathbb R^2$.} \label{fig:App-01-07} \end{figure}

The epigraph bridges the concepts of convex sets and convex functions: a function is convex if and only if its epigraph is a convex set. The epigraph is frequently used in formulating optimization problems: a nonlinear objective function can be replaced by a linear objective and an additional constraint in epigraph form. In this sense, we can assume that any optimization problem has a linear objective function. This alone does not facilitate solving the problem, as the non-convexity simply moves to the constraints if the objective function is not convex. Nonetheless, the solution to an optimization problem with a linear objective can always be found at the boundary of the convex hull of its feasible region, implying that a problem in epigraph form admits an exact convex hull relaxation if we can characterize the convex hull; in general, however, it is difficult to express a convex hull in analytical form.

Analyzing convex functions is a well-developed field. Going deeper into convex analysis can be mathematically demanding, especially for readers who are primarily interested in applications, so we will not pursue sophisticated theory in further depth. Readers are referred to the literature suggested at the end of this chapter for further information.

\section{From Linear to Conic Program} \label{App-A-Sect02}

Linear programming is one of the most mature and tractable classes of mathematical programming problems. In this section, we first investigate and explain the motivation of linear programming duality theory, and then provide a unified model for conic programming problems. LPs, SOCPs, and SDPs are special cases of conic programs associated with generalized inequalities $\preceq_K$ where $K=\mathbb R^n_+$, $\mathbb L^{n+1}_C$, and $\mathbb S^n_+$, respectively. Our aim is to help readers who are not familiar with conic programs build their decision-making problems in these formats with structured convexity, and write out their dual problems more conveniently. The presentation logic is consistent with \cite{App-A-CVX-Book-Ben}, and most of the presented materials in this section also come from \cite{App-A-CVX-Book-Ben}.

\subsection{Linear Program and its Duality Theory} \label{App-A-Sect02-01}

A linear program is an optimization program of the form
\begin{equation} \label{eq:App-01-LP-Compact} \min \{ c^T x ~|~ A x \ge b \} \end{equation}
where $x$ is the vector of decision variables, and $A$, $b$, $c$ are constant coefficient matrices with compatible dimensions. We assume LP (\ref{eq:App-01-LP-Compact}) is feasible, i.e., its feasible set $X = \{x~|~ Ax \ge b\}$ is a non-empty polyhedron; moreover, because of the limited ranges of decision variables representing physical quantities, we assume $X$ is bounded. In such circumstances, LP (\ref{eq:App-01-LP-Compact}) always has a finite optimum. LPs can be solved by mature algorithms, such as the simplex algorithm and interior-point algorithms, which are not the main focus of this book.

A question which is important both in theory and practice is: how can we bound the optimal value of (\ref{eq:App-01-LP-Compact}) in a systematic way? Clearly, if $x$ is a feasible solution, an instant upper bound is given by $c^T x$.
Lower bounding amounts to finding a value $a$ such that $c^T x \ge a$ holds for all $x \in X$. A trivial answer is to solve the problem and retrieve its optimal value, which is the tightest lower bound. However, there may be a smarter way to retrieve a valid lower bound at a much cheaper computational expense. To outline the basic motivation, let us consider the following example
\begin{equation} \label{eq:App-01-LP-Example} \min \left\{ \sum_{i=1}^6 x_i ~\middle|~ \begin{gathered} 2 x_1 + 1 x_2 + 3 x_3 + 8 x_4 + 5 x_5 + 3 x_6 \ge 5 \\ 6 x_1 + 2 x_2 + 6 x_3 + 1 x_4 + 1 x_5 + 4 x_6 \ge 2 \\ 2 x_1 + 7 x_2 + 1 x_3 + 1 x_4 + 4 x_5 + 3 x_6 \ge 1 \\ \end{gathered} \right\} \end{equation}
Although LP (\ref{eq:App-01-LP-Example}) is merely a toy case for modern solvers and computers, it is still a little bit complicated for mental arithmetic. In fact, we can claim that the optimal value is 0.8 at a glance without any sophisticated calculation: summing up the three constraints yields the inequality
\begin{equation} \label{eq:App-01-LP-Example-Weigh-Sum} 10(x_1 + x_2 + x_3 + x_4 + x_5 + x_6) \ge 8 \end{equation}
which immediately shows that the objective is bounded below by 0.8. Indeed, dividing both sides of (\ref{eq:App-01-LP-Example-Weigh-Sum}) by 10 shows that the objective function must take a value greater than or equal to 0.8 at any feasible point; moreover, to demonstrate that 0.8 is attainable, we can find a point $x^*$ which activates the three constraints simultaneously, so that (\ref{eq:App-01-LP-Example-Weigh-Sum}) becomes an equality.

LP duality is merely a formal generalization of this simple trick. Multiplying each constraint in $Ax \ge b$ by a non-negative weight $\lambda_i$ and adding all constraints together, we see
\begin{equation} \lambda^T A x \ge \lambda^T b \notag \end{equation}
If we choose $\lambda$ judiciously such that $\lambda^T A = c^T$, then $\lambda^T b$ is a valid lower bound on the optimal value of (\ref{eq:App-01-LP-Compact}). To improve the lower bound estimate, one may optimize the weighting vector $\lambda$, giving rise to the following problem
\begin{equation} \label{eq:App-01-LP-Dual} \max_{\lambda} \{ \lambda^T b ~|~ A^T \lambda = c,~ \lambda \ge 0 \} \end{equation}
where $\lambda$ is the vector of decision variables, or dual variables, and the feasible region $D=\{\lambda ~|~ A^T \lambda = c,~ \lambda \ge 0\}$ is a polyhedron. Clearly, (\ref{eq:App-01-LP-Dual}) is also an LP, and it is called the dual problem of LP (\ref{eq:App-01-LP-Compact}). Correspondingly, (\ref{eq:App-01-LP-Compact}) is called the primal problem. From the above construction, we immediately conclude $c^T x \ge \lambda^T b$.

\begin{proposition} \label{pr:App-01-LP-Weak-Duality} (Weak duality): The optimal value of (\ref{eq:App-01-LP-Dual}) is less than or equal to the optimal value of (\ref{eq:App-01-LP-Compact}). \end{proposition}

In fact, the bound offered by (\ref{eq:App-01-LP-Dual}) is tight.

\begin{proposition} \label{pr:App-01-LP-Strong-Duality} (Strong duality): The optimal values of (\ref{eq:App-01-LP-Dual}) and (\ref{eq:App-01-LP-Compact}) are equal. \end{proposition}

To see this, an explanation is given in \cite{App-A-CVX-Book-Ben}. If a real number $a$ is the optimal value of the primal LP (\ref{eq:App-01-LP-Compact}), the system of linear inequalities
\begin{equation} S^P: \left\{ \begin{gathered} -c^T x > - a : \lambda_0 \\ A x \ge b: \lambda \end{gathered} \right. \notag \end{equation}
must have an empty solution set, indicating that at least one of the following two systems does have a solution (called the separation property later)
\begin{equation} S^D_1: \left\{ \begin{gathered} -\lambda_0 c + A^T \lambda = 0 \\ -\lambda_0 a + b^T \lambda \ge 0 \\ ~~ \lambda_0 > 0,~~ \lambda \ge 0 \end{gathered} \right. \notag \end{equation}
\begin{equation} S^D_2: \left\{ \begin{gathered} -\lambda_0 c + A^T \lambda = 0 \\ -\lambda_0 a + b^T \lambda > 0 \\ ~~ \lambda_0 \ge 0,~~ \lambda \ge 0 \end{gathered} \right. \notag \end{equation}
We can show that $S^P$ has no solution if and only if $S^D_1$ has a solution.

That $S^D_1$ has a solution $\Rightarrow$ $S^P$ has no solution is seen as follows: suppose on the contrary that $S^P$ has a solution $x$; because $\lambda_0$ is strictly positive, the weighted summation of the inequalities in $S^P$ leads to
\begin{equation*} 0 = 0^T x =(-\lambda_0 c + A^T \lambda)^T x =-\lambda_0 c^T x + \lambda^T A x > -\lambda_0 a + \lambda^T b \end{equation*}
which contradicts the second inequality in $S^D_1$.

$S^P$ has no solution $\Rightarrow$ $S^D_1$ has a solution: Suppose $S^D_1$ has no solution; then $S^D_2$ must have a solution owing to the separation property (Theorem 1.2.1 in \cite{App-A-CVX-Book-Ben}). Moreover, if $\lambda_0 > 0 $, the solution of system $S^D_2$ also solves system $S^D_1$, so there must be $\lambda_0 = 0$. As a result, the solution of $S^D_2$ is independent of the values of $a$ and $c$. Letting $c = 0$ and $a = 0$, the solution $\lambda$ of $S^D_2$ satisfies $A^T \lambda = 0$, $b^T \lambda > 0$. Therefore, for any $x$ with a compatible dimension, $\lambda^T(Ax-b) = \lambda^T Ax - \lambda^T b <0$ holds. In addition, because $\lambda \ge 0$, we conclude that $Ax \ge b$ has no solution, a contradiction to the assumption that (\ref{eq:App-01-LP-Compact}) is feasible.

Now, consider the solution of $S^D_1$. Without loss of generality, we can assume $\lambda_0 = 1$; otherwise, if $\lambda_0 \ne 1$, ($1,\lambda/\lambda_0$) also solves $S^D_1$. In this normalized condition ($\lambda_0=1$), $S^D_1$ comes down to
\begin{equation} S^D_3: \left\{ \begin{gathered} A^T \lambda = c \\ b^T \lambda \ge a \\ \lambda \ge 0 \end{gathered} \right. \notag \end{equation}
Now we can see the strong duality: let $a^*$ be the optimal value of (\ref{eq:App-01-LP-Compact}). For any $a < a^*$, $S^P$ has no solution, so $S^D_1$ has a solution $(1,\lambda^*)$. According to $S^D_3$, the optimal value of (\ref{eq:App-01-LP-Dual}) is no smaller than $a$, i.e., $a \le b^T \lambda^* \le a^*$. Letting $a$ tend to $a^*$, we conclude that the primal and dual optimal values are equal. Since the primal problem always has a finite optimum (as we assumed before), so does the dual problem, as they share the same optimal value. Nevertheless, even if the primal feasible region is bounded, the dual feasible set $D$ may be unbounded; the dual objective, however, is always bounded above over $D$. Please refer to \cite{App-A-LP-Book-Dantzig,App-A-LP-Book-Bertsimas,App-A-LP-Book-Vanderbei} for more information on duality theory in linear programming.
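The toy LP (\ref{eq:App-01-LP-Example}) and its dual offer a convenient numerical check of these propositions. Below is a minimal sketch, assuming the scipy package is available; since \texttt{linprog} minimizes by default, the dual maximization is passed with a negated objective. Here the dual constraints $A^T \lambda = c$ happen to pin down the unique dual feasible point $\lambda = (0.1, 0.1, 0.1)^T$, so both problems report the optimal value 0.8.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

A = np.array([[2, 1, 3, 8, 5, 3],
              [6, 2, 6, 1, 1, 4],
              [2, 7, 1, 1, 4, 3]], dtype=float)
b = np.array([5, 2, 1], dtype=float)
c = np.ones(6)

# Primal: min c^T x  s.t.  A x >= b, x free  (rewritten as -A x <= -b)
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 6)

# Dual: max b^T lam  s.t.  A^T lam = c, lam >= 0  (minimize -b^T lam)
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)

print(primal.fun)   # approx. 0.8
print(-dual.fun)    # approx. 0.8: zero duality gap (strong duality)
\end{verbatim}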
\begin{proposition} \label{pr:App-01-LP-OCD-Primal-Dual} (Primal-dual optimality condition) If LP (\ref{eq:App-01-LP-Compact}) is feasible and $X$ is bounded, then any feasible solution ($x^*,\lambda^*$) to the following system
\begin{equation} \label{eq:App-01-LP-OCD-Primal-Dual} \begin{gathered} A x \ge b \\ A^T \lambda = c,~ \lambda \ge 0 \\ c^T x = b^T \lambda \end{gathered} \end{equation}
solves the original primal-dual pair of LPs: $x^*$ is an optimal solution of (\ref{eq:App-01-LP-Compact}), and $\lambda^*$ is an optimal solution of (\ref{eq:App-01-LP-Dual}). \end{proposition}

\noindent (\ref{eq:App-01-LP-OCD-Primal-Dual}) is also called the primal-dual optimality condition of LPs. It consists of linear inequalities and equalities, and there is no objective function to be optimized. Substituting $c = A^T \lambda$ into the last equation of (\ref{eq:App-01-LP-OCD-Primal-Dual}) gives $\lambda^T A x = \lambda^T b$, i.e.,
\begin{equation} \lambda^T (Ax-b) = 0 \notag \end{equation}
Since $\lambda \ge 0$ and $Ax \ge b$, the above equation is equivalent to
\begin{equation} \lambda_i (Ax-b)_i = 0,~ \forall i \notag \end{equation}
where the notations $(Ax-b)_i$ and $\lambda_i$ stand for the $i$-th components of the vectors $Ax-b$ and $\lambda$, respectively. This condition means that at most one of $\lambda_i$ and $(Ax-b)_i$ can take a strictly positive value. In other words, if the $i$-th inequality constraint is inactive, then its dual multiplier $\lambda_i$ must be 0; conversely, if $\lambda_i >0$, then the corresponding inequality constraint must be binding. This phenomenon is called complementary slackness. Applying the KKT optimality condition for general nonlinear programs to LP (\ref{eq:App-01-LP-Compact}), we have:

\begin{proposition} \label{pr:App-01-LP-OCD-KKT} (KKT optimality condition) If LP (\ref{eq:App-01-LP-Compact}) is feasible and $X$ is bounded, the following system
\begin{equation} \label{eq:App-01-LP-OCD-KKT} \begin{gathered} 0 \le \lambda \bot A x - b \ge 0 \\ A^T \lambda = c \end{gathered} \end{equation}
has a solution ($x^*,\lambda^*$), which may not be unique, where $a \bot b$ means $a^T b = 0$; $x^*$ solves (\ref{eq:App-01-LP-Compact}) and $\lambda^*$ solves (\ref{eq:App-01-LP-Dual}). \end{proposition}

The question of which of (\ref{eq:App-01-LP-OCD-Primal-Dual}) and (\ref{eq:App-01-LP-OCD-KKT}) is better can be subtle and has very different practical consequences. At first glance, the former seems more tractable because (\ref{eq:App-01-LP-OCD-Primal-Dual}) is a linear system while (\ref{eq:App-01-LP-OCD-KKT}) contains complementary slackness conditions. However, the actual situation in practice is more complicated. For example, to solve a bilevel program with an LP lower level, the LP is often replaced by its optimality condition. In a bilevel optimization structure, some of the coefficients $A$, $b$, and $c$ are optimized by the upper-level agent; say, the coefficient vector $c$ representing the price is controlled by the upper-level decision maker, while $A$ and $b$ are constants. If we use (\ref{eq:App-01-LP-OCD-Primal-Dual}), the term $c^T x$ in the single-level equivalence becomes a non-convex bilinear term, although $c$ is a constant in the lower level, preventing a global optimal solution from being found easily.
In contrast, if we use (\ref{eq:App-01-LP-OCD-KKT}) and linearize the complementary slackness condition via auxiliary binary variables, the single-level equivalent problem can be formulated as an MILP, whose global optimal solution can be procured with reasonable computational effort. For instance, given a sufficiently large constant $M$ and a binary vector $z$, the condition $0 \le \lambda \bot Ax - b \ge 0$ can be replaced by the linear constraints $0 \le \lambda \le M z$, $0 \le Ax - b \le M(\mathbf 1 - z)$, $z \in \{0,1\}^m$, where $m$ is the number of rows of $A$.

The dual problem of an LP which maximizes its objective can be derived in the same way. Consider the LP
\begin{equation} \label{eq:App-01-LP-Max-Primal} \max \{ c^T x ~|~ A x \le b \} \end{equation}
For this problem, we need an upper bound on the objective function. To this end, associating a non-negative dual vector $\lambda$ with the constraint and adding the weighted inequalities together, we have
\begin{equation} \lambda^T A x \le \lambda^T b \notag \end{equation}
If we intentionally choose $\lambda$ such that $\lambda^T A = c^T$, then $\lambda^T b$ is a valid upper bound on the optimal value of (\ref{eq:App-01-LP-Max-Primal}). The dual problem
\begin{equation} \label{eq:App-01-LP-Max-Dual} \min_{\lambda} \{ \lambda^T b ~|~ A^T \lambda = c,~ \lambda \ge 0 \} \end{equation}
optimizes the weighting vector $\lambda$ to offer the tightest upper bound. Constraints in the form of equalities and $\ge$ inequalities can be treated within the same paradigm. Bearing in mind that we are seeking an upper bound, we need a certificate for $c^T x \le a$: the dual variables for equalities are free in sign, and those for $\ge$ inequalities should be non-positive.

Sometimes it is useful to define the dual cone of a polyhedron, even though a bounded polyhedron is not a cone. Recalling its definition, the dual cone of a polyhedron $P$ can be defined as
\begin{equation} \label{eq:App-01-Dual-Polytope-1} P^* = \{y ~|~ x^T y \ge 0,~ \forall x \in P \} \end{equation}
where $P = \{x ~|~ Ax \ge b \}$. As we have demonstrated in Sect. \ref{App-A-Sect01-03}, the dual cone is always closed and convex; however, for a general set, the dual cone does not have an analytical expression. For polyhedral sets, the condition in (\ref{eq:App-01-Dual-Polytope-1}) holds if and only if the minimal value of $x^T y$ over $P$ is non-negative. For a given vector $y$, let us investigate the minimum of $x^T y$ through the LP
\begin{equation} \min_x \{y^T x ~|~ Ax \ge b \} \notag \end{equation}
It is known from Proposition \ref{pr:App-01-LP-Weak-Duality} that $y^T x \ge b^T \lambda$, $\forall \lambda \in D_P$, where $D_P =\{\lambda ~|~ A^T \lambda = y, \lambda \ge 0 \}$. Moreover, if $\exists \lambda \in D_P$ such that $b^T \lambda < 0$, Proposition \ref{pr:App-01-LP-Strong-Duality} certifies the existence of $x \in P$ such that $y^T x = b^T \lambda < 0$. In conclusion, the dual cone of the polyhedron $P$ can be cast as
\begin{equation} \label{eq:App-01-Dual-Polytope-2} P^* = \{ y ~|~ \exists \lambda: b^T \lambda \ge 0,~ A^T \lambda = y,~ \lambda \ge 0 \} \end{equation}
which is also a polyhedron. It can be observed from (\ref{eq:App-01-Dual-Polytope-2}) that all constraints in $P^*$ are homogeneous, so $P^*$ is indeed a polyhedral cone.

\subsection{General Conic Linear Program} \label{App-A-Sect02-02}

Linear programs cover vast topics in engineering optimization problems. The dual program provides informative quantifications and valuable insights into the problem at hand, which help develop efficient algorithms for the problem itself and facilitate building tractable reformulations for more complicated mathematical programming models, such as robust optimization, multi-level optimization, and equilibrium problems.
The algorithms for LPs, which are perfectly developed by now, can solve quite large instances (with up to hundreds of thousands of variables and constraints). Nevertheless, there are practical problems which cannot be modeled as LPs. To cope with these essentially nonlinear cases, one needs to explore new models and computational methods beyond the reach of LPs.

The broadest class of optimization problems which the LP can be compared with is the class of convex optimization problems. Convexity largely marks whether a problem can be solved efficiently, since any local optimizer of a convex program must be a global optimizer. Efficiency is quantified by the number of arithmetic operations required to solve the problem. Suppose that all we know about the problem is its convexity: its objective and constraints are convex functions in the decision variables $x \in \mathbb R^n$, and their values along with their derivatives at any given point can be evaluated within $M$ arithmetic operations. The best known complexity for finding an $\epsilon$-solution turns out to be \cite{App-A-CVX-Book-Ben}
\begin{equation} O(1) n(n^3+M) \ln \left( \frac{1}{\epsilon} \right) \notag \end{equation}
Although this bound grows polynomially with $n$, the computation time may still be unacceptable for a large $n$ like $n = 1,000$, in contrast to LPs, which are solvable with $n = 100,000$. The reason is that linearity is a much stronger property than convexity: the structure of an affine function $a^T x + b$ solely depends on its constant coefficients $a$ and $b$, so function values and derivatives are never evaluated in a state-of-the-art LP solver.

There are many classes of convex programs which are essentially nonlinear but still possess nice analytical structure, which can be used to develop more dedicated algorithms. These algorithms may perform much more efficiently than those exploiting only convexity. In what follows, we consider one such class of convex programs, i.e., the conic program, which is a simple extension of the LP. Its general form and mathematical model are briefly introduced here, while the details of interior-point algorithms are beyond the scope of this book and can be found in \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}.

\vspace{12pt} {\noindent \bf 1. Mathematical model}

When we consider adding some nonlinear factors to LP (\ref{eq:App-01-LP-Compact}), the most common way is to replace a linear function $a^T x$ with a nonlinear but convex function $f(x)$. As explained above, this may not be advantageous from a computational perspective. In contrast, we keep all functions linear, but inject nonlinearity into the comparison operators $\ge$ or $\le$. Recalling the definition of the generalized inequality $\succeq_K$ with cone $K$, we consider the following problem in this section
\begin{equation} \label{eq:App-01-CP-Primal} \min_x \{ c^T x ~|~ A x \succeq_K b \} \end{equation}
which is called a conic programming problem. An LP is a special case of the conic program with $K = \mathbb R^n_+$. With this generalization, we are able to formulate a much wider spectrum of optimization problems which cannot be modeled as LPs, while enjoying the nice properties of structured convexity.

\vspace{12pt} {\noindent \bf 2. Conic duality}

Aside from developing high-performance algorithms, the most important and elegant theoretical result in the area of LP is its duality theorem. In view of their similar mathematical appearance, how can the LP duality theorem be extended to conic programs?
Similarly, the motivation of duality is the desire for a systematic way to certify a lower bound on the optimal value of the conic program (\ref{eq:App-01-CP-Primal}). Let us try the same trick: multiplying both sides of $Ax \succeq_K b$ by a dual vector $\lambda$ and aggregating, we obtain $\lambda^T A x$ and $b^T \lambda$; moreover, if we are lucky enough to have $A^T \lambda = c$, we may guess that $b^T \lambda$ can serve as a lower bound on the optimum of (\ref{eq:App-01-CP-Primal}) under some condition. The condition can be translated into: what is the admissible region of $\lambda$, such that the inequality $\lambda^T A x \ge b^T \lambda$ is a consequence of $Ax \succeq_K b$? A nice answer has been given at the end of Sect. \ref{App-A-Sect01-03}. Let us approach the problem through some simple cases.

Particularly, when $K = \mathbb R^n_+$, the admissible region of $\lambda$ is also $\mathbb R^n_+$, because we already know that the dual variables of $\ge$ inequalities in a minimization LP should be non-negative. However, $\mathbb R^n_+$ is no longer the admissible region of $\lambda$ for conic programs with generalized inequality $\succeq_K$ if $K \ne \mathbb R^n_+$. To see this, consider $\mathbb L^3_C$ and the corresponding generalized inequality
\begin{equation} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \succeq_{\mathbb L^3_C} \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \Longleftrightarrow z \ge \sqrt{x^2+y^2} \notag \end{equation}
$(x,y,z)=(-1,-1,1.5)$ is a feasible solution. However, the weighted summation of both sides with $\lambda = [1,1,1]^T$ gives a false inequality $-0.5 \ge 0$.

To find the feasible region of $\lambda$, consider the condition
\begin{equation} \label{eq:App-01-CP-Lambda} \forall a \succeq_K 0 ~\Rightarrow~ \lambda^T a \ge 0 \end{equation}
If (\ref{eq:App-01-CP-Lambda}) is true, we have the following logical inferences
\begin{equation} \begin{aligned} & Ax \succeq_K b \\ \Leftrightarrow~ & Ax - b \succeq_K 0 \\ \Rightarrow~ & \lambda^T (Ax-b) \ge 0 \\ \Leftrightarrow~ & \lambda^T A x \ge \lambda^T b \end{aligned} \notag \end{equation}
Conversely, if $\lambda$ is an admissible vector certifying
\begin{equation} \forall (a,b: a \succeq_K b) ~\Rightarrow~ \lambda^T a \ge \lambda^T b \notag \end{equation}
then (\ref{eq:App-01-CP-Lambda}) is clearly true by letting $b=0$. Therefore, the admissible set of $\lambda$ for the generalized inequality $\succeq_K$ with cone $K$ can be written as
\begin{equation} \label{eq:App-01-CP-Dual-Cone} K^* =\{ \lambda ~|~ \lambda^T a \ge 0,~ \forall a \in K \} \end{equation}
which contains the vectors whose inner products with all vectors belonging to $K$ are nonnegative. Recalling the definition in (\ref{eq:App-01-Dual-Cone}), we observe that the set $K^*$ is exactly the dual cone of $K$.

Now we are ready to set up the dual problem of the conic program (\ref{eq:App-01-CP-Primal}). As in the case of LP duality, we try to recover the objective function from a linear combination of the constraints by choosing a proper dual variable $\lambda$, i.e., $\lambda^T A x= c^T x$; in addition, $\lambda \in K^*$ ensures $\lambda^T A x \ge \lambda^T b$, implying that $\lambda^T b$ is a valid lower bound on the objective function.
The best bound one can expect is the optimum of the problem
\begin{equation} \label{eq:App-01-CP-Dual} \max_\lambda \{ b^T \lambda ~|~ A^T \lambda = c,~ \lambda \succeq_{K^*} 0\} \end{equation}
which is also a conic program, called the dual problem of the conic program (\ref{eq:App-01-CP-Primal}). From the above construction, we already know that $c^T x \ge b^T \lambda$ holds for all feasible $x$ and $\lambda$, which is the weak duality of conic programs. In fact, the primal-dual pair of conic programs has the following properties:

\begin{proposition} \label{pr:App-01-CP-Conic-Duality} (Conic Duality Theorem) \cite{App-A-CVX-Book-Ben}: The following statements hold true for the conic program (\ref{eq:App-01-CP-Primal}) and its dual (\ref{eq:App-01-CP-Dual}).

1) Conic duality is symmetric: the dual problem is still a conic one, and the primal and dual problems are dual to each other.

2) Weak duality holds: the duality gap $c^T x - b^T \lambda$ is nonnegative over the primal and dual feasible sets.

3) If either the primal problem or the dual problem is strictly feasible and has a finite optimum, then the other is solvable, and the duality gap is zero: $c^T x^* = b^T \lambda^*$ for some $x^*$ and $\lambda^*$.

4) If either the primal problem or the dual problem is strictly feasible and has a finite optimum, then a pair of primal-dual feasible solutions ($x, \lambda$) solves the respective problems if and only if
\begin{equation} \label{eq:App-01-CP-OCD-PD} \begin{gathered} Ax \succeq_K b \\ A^T \lambda = c \\ \lambda \succeq_{K^*} 0 \\ c^T x = b^T \lambda \end{gathered} \end{equation}
or
\begin{equation} \label{eq:App-01-CP-OCD-KKT} \begin{gathered} 0 \preceq_{K^*} \lambda \bot Ax - b \succeq_K 0 \\ A^T \lambda = c \end{gathered} \end{equation}
where (\ref{eq:App-01-CP-OCD-PD}) is called the primal-dual optimality condition, and (\ref{eq:App-01-CP-OCD-KKT}) is called the KKT optimality condition. \end{proposition}

The proof can be found in \cite{App-A-CVX-Book-Ben} and is omitted here. To highlight the role of strict feasibility in Proposition \ref{pr:App-01-CP-Conic-Duality}, consider the following example
\begin{equation} \min_x \left\{ x_2 ~\middle|~ \begin{bmatrix} x_1 \\ x_2 \\ x_1 \end{bmatrix} \succeq_{\mathbb L^3_C} 0 \right\} \notag \end{equation}
The feasible region is
\begin{equation*} \sqrt{x^2_1 + x^2_2} \le x_1 \Leftrightarrow x_2 = 0,~ x_1 \ge 0 \end{equation*}
so the optimal value is 0. As explained before, second-order cones are self-dual: $(\mathbb L^3_C)^* = \mathbb L^3_C$, and it is easy to see that the dual problem is
\begin{equation} \max_\lambda \left\{ 0 ~\middle|~ \lambda_1 + \lambda_3 = 0,~ \lambda_2 = 1,~ \lambda \succeq_{\mathbb L^3_C} 0 \right\} \notag \end{equation}
The feasible region is
\begin{equation} \left\{ \lambda ~\middle|~ \sqrt{\lambda_1^2 + \lambda^2_2} \le \lambda_3,~ \lambda_3 \ge 0,~ \lambda_2 = 1,~ \lambda_1 = - \lambda_3 \right\} \notag \end{equation}
which is empty, because $ \sqrt{(-\lambda_3)^2 + 1} > \lambda_3$. This example demonstrates that the existence of a strictly feasible point is indispensable for conic duality; since no such condition is needed in LP duality, strong duality in conic programming holds only under stronger assumptions. Several classes of conic programs with particular cones are of special interest.
The cones in these problems are self-dual, so we can set up the dual program directly, which allows us to explore the original problem more deeply or convert it into equivalent formulations which are more computationally friendly. The structure of these relatively simple cones also helps develop efficient algorithms for the corresponding conic programs. In what follows, we will investigate two extremely important classes of conic programs.

\subsection{Second-order Cone Program} \label{App-A-Sect02-03}

{\bf 1. Mathematical models of the primal and dual problems}

The second-order cone program is a special class of conic problem whose cone is built from second-order cones $\mathbb L^{m_i}_C$. It minimizes a linear function over the intersection of a polytope and the Cartesian product of second-order cones, and can be formulated as
\begin{equation} \label{eq:App-01-SOCP-Conic-Primal} \min_x \left\{ c^T x ~\middle|~ Ax-b \succeq_K 0 \right\} \end{equation}
where $x \in \mathbb R^n$, and $K = \mathbb L^{m_1}_C \times \cdots \times \mathbb L^{m_k}_C \times \mathbb R^{m_p}_+$; in other words, the conic constraints in (\ref{eq:App-01-SOCP-Conic-Primal}) can be expressed as $k$ second-order cones $A_i x - b_i \succeq_{\mathbb L^{m_i}_C} 0$, $i = 1,\cdots,k$, plus one polyhedron $A_p x - b_p \ge 0$, with the following matrix partition
\begin{equation} \begin{bmatrix} A; b\end{bmatrix} = \begin{bmatrix} [A_1;b_1] \\ \vdots \\ [A_k;b_k] \\ [A_p;b_p] \end{bmatrix} \notag \end{equation}
Recalling the definition of the second-order cone, we further partition the sub-matrices $A_i,b_i$ into
\begin{equation} \begin{bmatrix} A_i; b_i \end{bmatrix} = \left[ \begin{gathered} D_i \\ p^T_i \end{gathered} ~~ \begin{gathered} d_i \\ q_i \end{gathered} ~ \right],~ i = 1,\cdots,k \notag \end{equation}
where $D_i \in \mathbb R^{(m_i-1) \times n}$, $p_i \in \mathbb R^n$, $d_i \in \mathbb R^{m_i-1}$, $q_i \in \mathbb R$. Then we can write (\ref{eq:App-01-SOCP-Conic-Primal}) as
\begin{equation} \label{eq:App-01-SOCP-MP-Primal} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.}~~ & A_p x \ge b_p \\ & \|D_i x - d_i\|_2 \le p^T_i x -q_i,~ i=1,\cdots,k \end{aligned} \end{equation}
(\ref{eq:App-01-SOCP-MP-Primal}) is often more convenient for model builders. It is easy to see that the cone $K$ in (\ref{eq:App-01-SOCP-Conic-Primal}) is self-dual, as both the second-order cone and the non-negative orthant are self-dual.
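Before turning to the dual, we note that form (\ref{eq:App-01-SOCP-MP-Primal}) maps directly onto modeling packages. A minimal sketch with the cvxpy package, where all problem data below (one polyhedral block and one second-order cone block) are made up for illustration:

\begin{verbatim}
import cvxpy as cp
import numpy as np

# One polyhedral block (A_p, b_p) and one SOC block (D, d, p, q)
c  = np.array([1.0, 2.0])
Ap = np.eye(2);                bp = np.array([-5.0, -5.0])
D  = np.array([[1.0, -1.0]]);  d  = np.array([0.5])
p  = np.array([0.5, 0.5]);     q  = -1.0

x = cp.Variable(2)
constraints = [Ap @ x >= bp,
               cp.norm(D @ x - d, 2) <= p @ x - q]  # ||Dx-d||_2 <= p'x - q
problem = cp.Problem(cp.Minimize(c @ x), constraints)
problem.solve()
print(problem.value, x.value)  # approx. -3.25 at x = (-0.75, -1.25)
# Dual multipliers of each block are available as constraints[i].dual_value
\end{verbatim}

The reported dual multipliers can be checked against the dual form derived next.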
In this regard, the dual problem of SOCP (\ref{eq:App-01-SOCP-Conic-Primal}) can be expressed as \begin{equation} \label{eq:App-01-SOCP-Conic-Dual} \max_\lambda \left\{ b^T \lambda ~\middle|~ A^T \lambda = c,~ \lambda \succeq_K 0 \right\} \end{equation} Partitioning the dual vector as \begin{equation} \lambda = \begin{bmatrix} \lambda_1 \\ \vdots \\ \lambda_k \\ \lambda_p \end{bmatrix},~ \lambda_i \in \mathbb L^{m_i}_C,~ i = 1, \cdots, k,~ \lambda_p \ge 0 \notag \end{equation} We can write the dual problem as \begin{equation} \label{eq:App-01-SOCP-Conic-Dual-Decomp} \begin{aligned} \max_\lambda~ & \sum_{i=1}^k b^T_i \lambda_i + b^T_p \lambda_p \\ \mbox{s.t.}~~ & \sum_{i=1}^k A^T_i \lambda_i + A^T_p \lambda_p = c \\ & \lambda_i \in \mathbb L^{m_i}_C,~ i = 1, \cdots, k \\ & \lambda_p \ge 0 \end{aligned} \end{equation} We further partition $\lambda_i$ according to the norm representation in (\ref{eq:App-01-SOCP-MP-Primal}) \begin{equation} \lambda_i = \begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix},~ \mu_i \in \mathbb R^{m_i-1},~ \nu_i \in \mathbb R \notag \end{equation} all second-order cone constraints are associated with dual variables as \begin{equation} \begin{bmatrix} D_i x \\ p^T_i x \end{bmatrix} - \begin{bmatrix} d_i \\ q_i \end{bmatrix} \in \mathbb L^{m_i}_C : \begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix},~ i = 1, \cdots, k \notag \end{equation} so the admissible region of dual variables $(\mu_i,\nu_i)$ is \begin{equation} \begin{bmatrix} \mu_i \\ \nu_i \end{bmatrix} \in (\mathbb L^{m_i}_C)^* \Rightarrow \|\mu_i\|_2 \le \nu_i \notag \end{equation} Finally, we arrive at the dual form of (\ref{eq:App-01-SOCP-MP-Primal}) \begin{equation} \label{eq:App-01-SOCP-MP-Dual} \begin{aligned} \max_\lambda~ & \sum_{i=1}^k \left(\mu^T_i d_i + \nu_i q_i \right) + b^T_p \lambda_p \\ \mbox{s.t.}~~ & \sum_{i=1}^k \left(D^T_i \mu_i + \nu_i p_i \right) + A^T_p \lambda_p = c \\ & \|\mu_i\|_2 \le \nu_i,~ i = 1, \cdots, k \\ & \lambda_p \ge 0 \end{aligned} \end{equation} (\ref{eq:App-01-SOCP-MP-Primal}) and (\ref{eq:App-01-SOCP-MP-Dual}) are more convenient than (\ref{eq:App-01-SOCP-Conic-Primal}) and (\ref{eq:App-01-SOCP-Conic-Dual-Decomp}) respectively because norm constraints can be recognized by most commercial solvers, whereas generalized inequalities $\succeq_K$ and constraints with the form $\in \mathbb L^{m_i}_C$ are supported only in some dedicated packages. Strict feasibility can be expressed in a more straightforward manner via norm constraints: the primal problem is strictly feasible if $\exists x: \|D_i x - d_i\|_2 < p^T_i x -q_i,~ i=1,\cdots,k,~A_p x > b_p$; the dual problem is strictly feasible if $\|\mu_i\|_2 < \nu_i,~ i = 1, \cdots, k,~ \lambda_p > 0$. In view of this, (\ref{eq:App-01-SOCP-MP-Primal}) and (\ref{eq:App-01-SOCP-MP-Dual}) are treated as the standard forms of an SOCP and its dual by practitioners whose primary interests are applications. \vspace{12pt} {\noindent \bf 2. What can be expressed via SOCPs?} Mathematical programs raised in engineering applications may not always appear in standard convex forms, and convexity may be hidden in seemingly non-convex expressions. Therefore, an important step is to recognize the potential existence of a convex form that is equivalent to the original formulation. This task can be rather tricky. We introduce some frequently used functions and constraints that can be represented by second-order cone constraints. \vspace{6pt} {\noindent \bf a. 
Convex quadratic constraints} A convex quadratic constraint has the form \begin{equation} \label{eq:App-01-SOCP-Convex-QC-1} x^T P x + q^T x + r \le 0 \end{equation} where $P \in \mathbb S^n_+$, $q \in \mathbb R^n$, $r \in \mathbb R$ are constant coefficients. Let $t = q^Tx + r$; then \begin{equation} t = \frac{(t+1)^2}{4} - \frac{(t-1)^2}{4} \notag \end{equation} Performing the Cholesky factorization $P = D^T D$, (\ref{eq:App-01-SOCP-Convex-QC-1}) can be represented by \begin{equation} \|Dx\|^2_2 + \frac{(t+1)^2}{4} \le \frac{(t-1)^2}{4} \notag \end{equation} Since (\ref{eq:App-01-SOCP-Convex-QC-1}) implies $t \le -\|Dx\|^2_2 \le 0$, we have $|t-1| = 1-t$, so (\ref{eq:App-01-SOCP-Convex-QC-1}) is equivalent to the following second-order cone constraint \begin{equation} \label{eq:App-01-SOCP-Convex-QC-2} \left\| \begin{gathered} 2 D x \\ q^Tx + r + 1 \end{gathered} \right\|_2 \le 1 - q^Tx - r \end{equation} However, not every second-order cone constraint can be expressed via a convex quadratic constraint. By squaring $\|D x - d\|_2 \le p^T x -q$ we get an equivalent quadratic inequality \begin{equation} x^T (D^T D - p p^T ) x + 2(q p^T - d^T D)x +d^T d -q^2 \le 0 \end{equation} with $p^T x - q \ge 0$. The matrix $M = D^T D - p p^T$ is not always positive semidefinite. Indeed, $M \succeq 0$ if and only if $\exists u, \|u\|_2 \le 1: p=D^T u$. On this account, SOCPs are more general than convex QCQPs. \vspace{6pt} {\noindent \bf b. Hyperbolic constraints} Hyperbolic constraints are frequently encountered in engineering optimization problems. They are non-convex in their original forms but can be represented by a second-order cone constraint. A hyperbolic constraint has the form \begin{equation} \label{eq:App-01-SOCP-Hyper-1} x^T x \le yz,~ y > 0,~ z > 0 \end{equation} where $x \in \mathbb R^n$, $y,z \in \mathbb R_{++}$. Noticing the fact that $4yz = (y+z)^2 - (y-z)^2$, (\ref{eq:App-01-SOCP-Hyper-1}) is equivalent to the following second-order cone constraint \begin{equation} \label{eq:App-01-SOCP-Hyper-2} \left\| \begin{gathered} 2 x \\ y-z \end{gathered} \right\|_2 \le y+z,~ y > 0,~z > 0 \end{equation} However, a hyperbolic constraint cannot be expressed via a convex quadratic constraint, because the compact quadratic form of (\ref{eq:App-01-SOCP-Hyper-1}) is \begin{equation} \begin{bmatrix} x \\ y \\ z \end{bmatrix}^T P \begin{bmatrix} x \\ y \\ z \end{bmatrix} \le 0,~ P = \begin{bmatrix} 2I & 0 & 0 \\ 0 & 0 & -1 \\ 0 & -1 & 0 \end{bmatrix} \notag \end{equation} where the matrix $P$ is indefinite. Many instances can be regarded as special cases of hyperbolic constraints, such as the upper branch of a hyperbola \begin{equation} \{ (x,y) ~|~ xy \ge 1,~ x > 0 \} \notag \end{equation} and the epigraph of a fractional-quadratic function $g(x,s)=x^T x/s$, $s>0$ \begin{equation} \left\{ (x,s,t) ~\middle|~ t \ge \frac{x^T x}{s},~ s > 0 \right\} \notag \end{equation} \vspace{6pt} {\noindent \bf c. Composition of second-order cone representable functions} A function is called second-order cone representable if its epigraph can be represented by second-order cone constraints. Second-order cone representable functions are closed under composition \cite{App-A-SOCP-Boyd}.
Suppose two univariate convex functions $f_1(x)$ and $f_2(x)$ are second-order cone representable, and $f_1(x)$ is monotonically increasing; then the composition $g(x) = f_1(f_2(x))$ is also second-order cone representable, because its epigraph $\{(x,t) ~|~ g(x) \le t\}$ can be expressed by \begin{equation} \{(x,t) ~|~ \exists s: f_1(s) \le t,~ f_2(x) \le s \} \notag \end{equation} where $f_1(s) \le t$ and $f_2(x) \le s$ essentially come down to second-order cone constraints. \vspace{12pt} {\noindent \bf d. Maximizing the product of concave functions} Suppose two functions $f_1(x)$ and $f_2(x)$ are concave with $f_1(x) \ge 0,~ f_2(x) \ge 0$, and $-f_1(x)$ and $-f_2(x)$ are second-order cone representable [which means $f_1(x) \ge t_1$ and $f_2(x) \ge t_2$ are (equivalent to) second-order cone constraints]. Consider the maximization of their product \begin{equation} \label{eq:App-01-SOCP-Bargain-1} \max_x \{ f_1(x) f_2(x) ~|~ x \in X, f_1(x) \ge 0, f_2(x) \ge 0 \} \end{equation} where the feasible region $X$ is the intersection of a polyhedron and second-order cones. It is not instantly clear whether problem (\ref{eq:App-01-SOCP-Bargain-1}) is a convex optimization problem or not. This formulation frequently arises in engineering applications, such as the Nash bargaining problem and multi-objective optimization problems. By introducing auxiliary variables $t,t_1,t_2$, it is immediately seen that problem (\ref{eq:App-01-SOCP-Bargain-1}) is equivalent to the following SOCP \begin{equation} \label{eq:App-01-SOCP-Bargain-2} \begin{aligned} \max_{x,t,t_1,t_2}~~ & t \\ \mbox{s.t.}~~ & x \in X \\ & t_1 \ge 0,~ t_2 \ge 0,~ t_1t_2 \ge t^2 \\ & f_1(x) \ge t_1,~ f_2(x) \ge t_2 \end{aligned} \end{equation} where $t_1 t_2 \ge t^2$ is a hyperbolic constraint. At the optimal solution, $f_1(x)f_2(x)=t^2$. \vspace{12pt} {\noindent \bf 3. Polyhedral approximation of second-order cones} Although SOCPs can be solved very efficiently, the state of the art in numerical computation of SOCPs is still not comparable to that of LPs. The salient computational superiority of LPs inspires a question: can we approximate an SOCP by an LP without dramatically increasing the problem size? There have been other reasons to explore LP approximations of SOCPs. For example, to solve a bilevel program with an SOCP lower level, the SOCP should be replaced by its optimality conditions. However, the primal-dual optimality condition (\ref{eq:App-01-CP-OCD-PD}) may introduce bilinear terms, while the second-order cone complementarity constraints in the KKT optimality condition (\ref{eq:App-01-CP-OCD-KKT}) cannot be linearized easily. If the SOCP can be approximated by an LP, then the KKT optimality condition can be linearized and the original bilevel program can be reformulated as an MILP. Clearly, if we only work in the original variables, the number of additional constraints would quickly grow unacceptably large with increasing problem dimension and required accuracy. In this section, we introduce the technique developed in \cite{App-A-SOCP-LP}, which lifts the problem into a higher-dimensional space with a moderate number of auxiliary variables and constraints. We start with the basic question: find a polyhedral $\epsilon$-approximation ${\rm \Pi}$ of $\mathbb L^3_C$ such that: 1) If $x \in \mathbb L^3_C$, then $\exists u: (x,u) \in {\rm \Pi}$. 2) If $(x,u) \in {\rm \Pi}$ for some $u$, then $\sqrt{x^2_1+x^2_2} \le (1+\epsilon) x_3$.
Geometrically, the polyhedral cone $\rm \Pi$ includes a system of homogeneous linear equalities and inequalities in variables $x,u$; its projection onto the $x$-space is an $\epsilon$-outer approximation of $\mathbb L^3_C$, and the error bound is quantified by $\epsilon x_3$. The answer to this question is given in \cite{App-A-SOCP-LP}. It is shown that ${\rm \Pi}$ can be expressed by \begin{equation} \label{eq:App-01-SOCP-Polyhedral} \begin{aligned} (a) \quad & \left\{ \begin{lgathered} \xi^0 \ge |x_1| \\ \eta^0 \ge |x_2| \end{lgathered} \right. \\ (b) \quad & \left\{ \begin{lgathered} \xi^j = \cos \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} + \sin \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \\ \eta^j \ge \left| -\sin \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} + \cos \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \right| \end{lgathered} \right. ,~ j=1,\cdots,v \\ (c) \quad & \left\{ \begin{lgathered} \xi^v \le x_3 \\ \eta^v \le \tan \left( \frac{\pi}{2^{v+1}} \right) \xi^v \end{lgathered} \right. \end{aligned} \end{equation} Formulation (\ref{eq:App-01-SOCP-Polyhedral}) can be understood from a geometric point of view: 1) Given $x \in \mathbb L^3_C$, set $\xi^0 = |x_1|$, $\eta^0 = |x_2|$, which satisfies (a) in (\ref{eq:App-01-SOCP-Polyhedral}), and point $P^0 = (\xi^0,\eta^0)$ belongs to the first quadrant. Let \begin{equation} \left\{ \begin{lgathered} \xi^j = \cos \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} + \sin \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \\ \eta^j = \left| -\sin \left( \frac{\pi}{2^{j+1}} \right) \xi^{j-1} + \cos \left( \frac{\pi}{2^{j+1}} \right) \eta^{j-1} \right| \end{lgathered} \right. \notag \end{equation} which ensures (b). Point $P^j = (\xi^j,\eta^j)$ is obtained from $P^{j-1}$ according to the following operation: rotate $P^{j-1}$ by the angle $\phi_j = \pi/2^{j+1}$ clockwise and get an intermediate point $Q^{j-1}$; if $Q^{j-1}$ resides in the upper half-plane, $P^j = Q^{j-1}$; otherwise $P^j$ is the reflection of $Q^{j-1}$ with respect to the $x$-axis. By this construction, it is clear that all vectors from the origin to $P^j$ have the same Euclidean norm, i.e., $\|[x_1,x_2]\|_2$. Moreover, as $P^0$ belongs to the first quadrant, the angle of $Q^0$ must satisfy $-\pi/4 \le \arg(Q^0) \le \pi/4$, and $0 \le \arg(P^1) \le \pi/4$. As the procedure goes on, we have $|\arg(Q^j)| \le \pi/2^{j+1}$, and $0 \le \arg(P^{j+1}) \le \pi/2^{j+1}$, for $j=1,\cdots,v$. In the last step, $\xi^v \le \|P^v\|_2 = \|[x_1,x_2]\|_2 \le x_3$ and $ 0 \le \arg(P^v) \le \pi/2^{v+1}$ hold, ensuring condition (c). In this manner, a point in $\mathbb L^3_C$ has been extended to a solution of (\ref{eq:App-01-SOCP-Polyhedral}). 2) Given $(x,u) \in {\rm \Pi}$, where $u = \{\xi^j,\eta^j\},j=0,\cdots,v$, define $P^j = [\xi^j, \eta^j]$; it directly follows from (a) and (b) that all $P^j$ belong to the first quadrant, and $\left\|P^0\right\|_2 \ge \sqrt{x^2_1+x^2_2}$. Moreover, recalling the construction of $Q^j$ in the previous analysis, it is seen that $\|P^j\|_2 = \|Q^j\|_2$; the absolute value of the vertical coordinate of $P^{j+1}$ is no less than that of $Q^j$; therefore, $\|P^{j+1}\|_2 \ge \|Q^j\|_2 = \|P^j\|_2$.
Finally, \begin{equation} \left\| P^v \right\|_2 \le \dfrac{x_3}{\cos \left( \dfrac{\pi}{2^{v+1}} \right)}\notag \end{equation} so we arrive at $\sqrt{x^2_1+x^2_2} \le (1+\epsilon) x_3$, where \begin{equation} \epsilon = \dfrac{1}{\cos \left( \dfrac{\pi}{2^{v+1}} \right)} - 1 \end{equation} In this way, a solution of (\ref{eq:App-01-SOCP-Polyhedral}) has been shown to satisfy the cone condition up to the relative error $\epsilon$. Now, let us consider the general case: approximating \begin{equation} \mathbb L^{n+1}_C = \left\{(y,t)~\middle|~ \sqrt{y^2_1+\cdots+y^2_n} \le t \right\} \notag \end{equation} via a polyhedral cone. Without loss of generality, we assume $n=2^K$. To make use of the outcome in (\ref{eq:App-01-SOCP-Polyhedral}), $y$ is split into $2^{K-1}$ pairs $(y_1,y_2),\cdots,(y_{n-1},y_n)$, which are called variables of generation 0. A successor variable is associated with each pair, which is called a variable of generation 1; these are further divided into $2^{K-2}$ pairs and associated with variables of generation 2, and so on. After $K-1$ steps of dichotomy, we complete the variable splitting with two variables of generation $K-1$. The only variable of generation $K$ is $t$. For notational convenience, let $y^l_i$ be the $i$-th variable of generation $l$, the original vector $y=[y^0_1,\cdots,y^0_n]$, and $t=y^K_1$. The ``parents'' of $y^l_i$ are the variables $y^{l-1}_{2i-1},y^{l-1}_{2i}$. The total number of variables in the ``tower'' is $2n-1$. Using the tower of variables $y^l$, $\forall l$, the system of constraints \begin{equation} \label{eq:App-01-SOCP-Tower-Cone} \sqrt{(y^{l-1}_{2i-1})^2 + (y^{l-1}_{2i})^2} \le y^l_i,~ i = 1,\cdots,2^{K-l},~ l = 1,\cdots,K \end{equation} gives the same feasible region for $(y,t)$ as $\mathbb L^{n+1}_C$, and each three-dimensional second-order cone in (\ref{eq:App-01-SOCP-Tower-Cone}) can be approximated by the polyhedral cone given in (\ref{eq:App-01-SOCP-Polyhedral}). The size of this polyhedral approximation is revealed in \cite{App-A-SOCP-LP}: 1) The dimension of the lifted variable is $p \le n+O(1)\sum_{l=1}^K 2^{K-l} v_l$. 2) The number of constraints is $q \le O(1) \sum_{l=1}^K 2^{K-l} v_l$. The quality of the approximation is \cite{App-A-SOCP-LP} \begin{equation} \beta = \prod_{l=1}^K \dfrac{1}{\cos \left( \dfrac{\pi}{2^{v_l+1}} \right)} - 1 \notag \end{equation} Given a desired tolerance $\epsilon$, choosing \begin{equation} v_l = \lfloor O(1) l \ln \frac{2}{\epsilon} \rfloor \notag \end{equation} with a proper constant $O(1)$, we can guarantee the following bounds: \begin{equation} \begin{aligned} \beta & \le \epsilon \\ p & \le O(1)n \ln \frac{2}{\epsilon} \\ q & \le O(1)n \ln \frac{2}{\epsilon} \\ \end{aligned} \notag \end{equation} which implies that the required numbers of variables and constraints grow linearly with the dimension of the target second-order cone. \subsection{Semidefinite Program} \label{App-A-Sect02-04} {\bf 1. Notation clarification} In this section, variables appear in the form of symmetric matrices, so some notation should be clarified first.
The Frobenius inner product of two matrices $A,B \in \mathbb M^n$ is defined by \begin{equation} \label{eq:App-01-SDP-Frobenius} \langle A, B \rangle = \mbox{tr}(AB^T) = \sum_{i=1}^n \sum_{j=1}^n A_{ij} B_{ij} \end{equation} The Euclidean norm of a matrix $X \in \mathbb M^n$ can be defined through the Frobenius inner product as follows \begin{equation} \label{eq:App-01-SDP-Matrix-Norm} \left\| X \right\|_2 = \sqrt{\langle X,X \rangle} = \sqrt{\mbox{tr}(X^T X)} \end{equation} Equipped with the Frobenius inner product, the dual cone of a given cone $K \subset \mathbb S^n$ is defined by \begin{equation} \label{eq:App-01-SDP-Dual-Cone} K^*=\{Y \in \mathbb S^n ~|~ \langle Y, X \rangle \ge 0,~ \forall X \in K\} \end{equation} Among the cones in $\mathbb S^n$, this section focuses on the positive semidefinite cone $\mathbb S^n_+$. As demonstrated in Sect. \ref{App-A-Sect01-03}, $\mathbb S^n_+$ is self-dual, i.e., $(\mathbb S^n_+)^* = \mathbb S^n_+$. The interior of the cone $\mathbb S^n_+$ consists of all $n \times n$ matrices that are positive definite, and is denoted by $\mathbb S^n_{++}$. \vspace{12pt} {\noindent \bf 2. Primal and dual formulations of SDPs} When $K=\mathbb S^n_+$, conic program (\ref{eq:App-01-CP-Primal}) boils down to an SDP \begin{equation} \min_x \{ c^T x ~|~ A x - b \in \mathbb S^n_+ \} \notag \end{equation} which minimizes a linear objective over the intersection of the affine plane $y = Ax -b$ and the positive semidefinite cone $\mathbb S^n_+$. However, the notation in such a form is a little confusing: $Ax-b$ is a vector, which is not dimensionally compatible with the cone $\mathbb S^n_+$. In fact, we have met a similar difficulty at the very beginning: the vector inner product does not apply to matrices, and was consequently replaced with the Frobenius inner product. There are two prevalent ways to resolve this dimensional conflict, leading to different formulations which will be discussed below. \vspace{12pt} {\noindent \bf a. Formulation based on vector decision variables} In this formulation, $b$ is replaced with a matrix $B \in \mathbb S^n$, and $Ax$ is replaced with a linear mapping $\mathcal A x: \mathbb R^n \to \mathbb S^n$. In this way, $\mathcal A x - B$ becomes an element of $\mathbb S^n$. A simple way to specify the linear mapping $\mathcal A x$ is \begin{equation} \mathcal A x = \sum_{j=1}^n x_j A_j,~ x = [x_1,\cdots,x_n]^T, ~ A_1,\cdots,A_n \in \mathbb S^n \notag \end{equation} With all these input matrices, an SDP can be written as \begin{equation} \label{eq:App-01-SDP-LMI-Primal} \min_x \{ c^T x ~|~ x_1 A_1 + \cdots + x_n A_n - B \succeq 0 \} \end{equation} where the cone $\mathbb S^n_+$ is omitted in the operator $\succeq$ without causing confusion. The constraint in (\ref{eq:App-01-SDP-LMI-Primal}) is an LMI. This formulation is general enough to capture the situation in which multiple LMIs exist, because \begin{equation} \mathcal A_i x - B_i \succeq 0,~ i=1,\cdots,k \Leftrightarrow \mathcal A x - B \succeq 0 \notag \end{equation} with $\mathcal A x = \mbox{Diag} (\mathcal A_1x,\cdots,\mathcal A_k x)$ and $B = \mbox{Diag} (B_1,\cdots,B_k)$. The general form of conic duality can be specified to the case when the cone $K = \mathbb S^n_+$.
Associating a matrix dual variable $\rm \Lambda$ with the LMI constraint, and recalling the fact that $(\mathbb S^n_+)^* = \mathbb S^n_+$, the dual program of SDP (\ref{eq:App-01-SDP-LMI-Primal}) reads: \begin{equation} \label{eq:App-01-SDP-LMI-Dual} \max_{\rm \Lambda} \{ \langle B, {\rm \Lambda} \rangle ~|~ \langle A_i, {\rm \Lambda} \rangle =c_i,~ i = 1, \cdots,n, ~ {\rm \Lambda} \succeq 0 \} \end{equation} which remains an SDP. Applying the conic duality theorem given in Proposition \ref{pr:App-01-CP-Conic-Duality} to SDPs (\ref{eq:App-01-SDP-LMI-Primal}) and (\ref{eq:App-01-SDP-LMI-Dual}), suppose that: 1) $A_1,\cdots,A_n$ are linearly independent, i.e., no nontrivial linear combination of $A_1,\cdots,A_n$ gives an all-zero matrix; 2) the primal SDP (\ref{eq:App-01-SDP-LMI-Primal}) is strictly feasible, i.e., $\exists x: x_1 A_1 + \cdots + x_n A_n \succ B$, and is solvable (the minimum is attainable); 3) the dual SDP (\ref{eq:App-01-SDP-LMI-Dual}) is strictly feasible, i.e., $\exists {\rm \Lambda} \succ 0: \langle A_i, {\rm \Lambda} \rangle =c_i,i = 1,\cdots,n$, and is solvable (the maximum is attainable). Then the optimal values of (\ref{eq:App-01-SDP-LMI-Primal}) and (\ref{eq:App-01-SDP-LMI-Dual}) are equal, and the complementary slackness condition \begin{equation} \label{eq:App-01-SDP-Complement} \langle {\rm \Lambda}, x_1 A_1 + \cdots + x_n A_n - B \rangle = 0 \end{equation} is necessary and sufficient for a pair of primal and dual feasible solutions ($x,{\rm \Lambda})$ to be optimal for their respective problems. For a pair of positive semidefinite matrices, it can be shown that \begin{equation} \langle X, Y \rangle = 0 \Leftrightarrow XY = YX = 0 \notag \end{equation} indicating that the eigenvalues of these two matrices in a certain common basis are “complementary”: for every common eigenvector, at most one of the two eigenvalues of $X$ and $Y$ can be strictly positive. \vspace{12pt} {\noindent \bf b. Formulation based on matrix decision variables} This formulation directly incorporates a matrix decision variable $X \in \mathbb S^n_+$, and imposes other restrictions on $X$ through linear equations. In the objective function, the vector inner product $c^T x$ is replaced by a Frobenius inner product $\langle C, X \rangle$. In this way, an SDP can be written as \begin{equation} \label{eq:App-01-SDP-Hybrid-Primal} \begin{aligned} \min_X ~~ & \langle C, X \rangle \\ \mbox{s.t.}~~ & \langle A_i, X \rangle = b_i : \lambda_i,~ i=1,\cdots,m \\ & X \succeq 0 : {\rm \Lambda} \end{aligned} \end{equation} By introducing dual variables (following the colons) for the individual constraints, the dual program of (\ref{eq:App-01-SDP-Hybrid-Primal}) can be constructed as \begin{equation} \begin{aligned} \max_{\lambda,\rm \Lambda} ~~ & b^T \lambda + \langle 0, {\rm \Lambda} \rangle \\ \mbox{s.t.}~~ & {\rm \Lambda} + \lambda_1 A_1 + \cdots + \lambda_m A_m = C \\ & {\rm \Lambda} \succeq 0 \end{aligned} \notag \end{equation} Eliminating $\rm \Lambda$, we obtain \begin{equation} \label{eq:App-01-SDP-Hybrid-Dual} \begin{aligned} \max_\lambda ~~ & b^T \lambda \\ \mbox{s.t.}~~ & C - \lambda_1 A_1 - \cdots - \lambda_m A_m \succeq 0 \end{aligned} \end{equation} It is observed that (\ref{eq:App-01-SDP-Hybrid-Primal}) and (\ref{eq:App-01-SDP-Hybrid-Dual}) are of the same form as (\ref{eq:App-01-SDP-LMI-Dual}) and (\ref{eq:App-01-SDP-LMI-Primal}), respectively, except for the signs of some coefficients.
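As a quick illustration of formulation (\ref{eq:App-01-SDP-Hybrid-Primal}) and complementary slackness, the following hedged CVXPY sketch synthesizes the data so that both the primal and the dual are strictly feasible; the particular construction of $b$ and $C$ below is our own device for guaranteeing strict feasibility, not part of the formulation.
\begin{verbatim}
# A hedged sketch of the matrix-form SDP (Hybrid-Primal) in CVXPY; the
# data are built so that both the primal and its dual are strictly
# feasible, hence strong duality and complementary slackness hold.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 4, 2
A = [(Ai + Ai.T) / 2
     for Ai in (rng.standard_normal((n, n)) for _ in range(m))]
b = np.array([np.trace(Ai) for Ai in A])   # X0 = I is strictly feasible
C = np.eye(n) + sum(rng.standard_normal() * Ai for Ai in A)
# with this C, Lambda = I is a strictly feasible dual point

X = cp.Variable((n, n), symmetric=True)
cons = [cp.trace(A[i] @ X) == b[i] for i in range(m)] + [X >> 0]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons)
prob.solve()

Lam = cons[-1].dual_value                  # dual matrix of X >> 0
print("min eigenvalue of Lambda (~ >= 0):", np.linalg.eigvalsh(Lam).min())
print("complementarity <Lambda, X> (~ 0):", np.trace(Lam @ X.value))
\end{verbatim}
At the solution, the returned dual matrix is positive semidefinite and $\langle {\rm \Lambda}, X \rangle \approx 0$, in line with the complementary slackness condition discussed above.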
SDP handles positive semidefinite matrices, so it is especially powerful in eigenvalue related problems, such as Lyapunov stability analysis and controller design, which are of central interest to control theorists. Moreover, every SOCP can be formulated as an SDP because \begin{equation} \| y \|_2 \le t \Leftrightarrow \begin{bmatrix} tI & y \\ y^T & t\end{bmatrix} \succeq 0 \notag \end{equation} Nevertheless, solving SOCPs via SDP may not be a good idea: interior-point algorithms for SOCPs have much better worst-case complexity than those for SDPs. In fact, SDPs are extremely popular in the convex relaxation technique for non-convex quadratic optimization problems, owing to their ability to offer a nearly global optimal solution in many practical applications, such as the OPF problem in power systems. The SDP based convex relaxation method for non-convex QCQPs will be discussed in the next section. Here we discuss some special cases involving homogeneous quadratic functions or at most two non-homogeneous quadratic functions. \vspace{12pt} {\noindent \bf 3. Homogeneous quadratic programs} Consider the following quadratic program \begin{equation} \label{eq:App-01-SDP-Homo-QP} \begin{aligned} \min~~ & x^T B x \\ \mbox{s.t.}~~ & x^T A_i x \ge 0,~ i = 1,\cdots,m \end{aligned} \end{equation} where $A_1,\cdots,A_m,B \in \mathbb S^n$ are constant coefficients. Suppose that problem (\ref{eq:App-01-SDP-Homo-QP}) is feasible. Due to its homogeneity, the optimal value is clear: $-\infty$ or 0, depending on whether there is a feasible solution $x$ such that $x^T B x < 0$ or not. But it is unclear a priori which situation takes place, i.e., whether $x^T B x \ge 0$ holds over the intersection of the homogeneous inequalities $x^T A_i x \ge 0$, $i=1,\cdots,m$, or in other words, whether the implication \begin{equation} \label{eq:App-01-SDP-H-SLemma-1} x^T A_i x \ge 0,~ i = 1,\cdots,m \Rightarrow x^T B x \ge 0 \end{equation} holds. \begin{proposition} \label{pr:App-01-SDP-H-SLemma-1} If there exist $\lambda_i \ge 0$, $i=1,\cdots,m$ such that $B \succeq \sum_i \lambda_i A_i$, then the implication in (\ref{eq:App-01-SDP-H-SLemma-1}) is true. \end{proposition} To see this, note that $B \succeq \sum_i \lambda_i A_i$ means $x^T (B - \sum_i \lambda_i A_i ) x \ge 0$ for all $x$, i.e., $x^T B x \ge \sum_i \lambda_i x^T A_i x$; therefore, $x^T B x \ge 0$ is a direct consequence of $x^T A_i x \ge 0, i = 1,\cdots,m$, as the right-hand side of the last inequality is non-negative. Proposition \ref{pr:App-01-SDP-H-SLemma-1} provides a sufficient condition for (\ref{eq:App-01-SDP-H-SLemma-1}), and necessity is generally not guaranteed. Nevertheless, if $m=1$, the condition is both necessary and sufficient. \begin{proposition} \label{pr:App-01-SDP-H-SLemma-2} (S-Lemma) Let $A,B \in \mathbb S^n$ and suppose the homogeneous quadratic inequality \begin{equation} (a) \quad x^T A x \ge 0 \notag \end{equation} is strictly feasible. Then the homogeneous quadratic inequality \begin{equation} (b) \quad x^T B x \ge 0 \notag \end{equation} is a consequence of (a) if and only if $\exists \lambda \ge 0: B \succeq \lambda A$. \end{proposition} Proposition \ref{pr:App-01-SDP-H-SLemma-2} is called the S-Lemma or S-Procedure. It can be proved by many means. The most instructive one, to our taste, is based on semidefinite relaxation, and can be found in \cite{App-A-CVX-Book-Ben}. \vspace{12pt} {\noindent \bf 4.
Non-homogeneous quadratic programs with a single constraint} Consider the following quadratic program \begin{equation} \label{eq:App-01-SDP-Heter-QP} \begin{aligned} \min~~ & f_0 (x) = x^T A_0 x + 2 b^T_0 x + c_0 \\ \mbox{s.t.}~~ & f_1(x) = x^T A_1 x + 2 b^T_1 x + c_1 \le 0 \end{aligned} \end{equation} Let $f^*$ denote the optimal value, so $f_0(x) - f^* \ge 0$ is a consequence of $-f_1(x) \ge 0$. A sufficient condition for this implication is $\exists \lambda \ge 0: f_0(x) - f^* + \lambda f_1(x) \ge 0,~ \forall x$. The left-hand side is a quadratic function with the matrix form \begin{equation} \left[ \begin{gathered} x \\ 1 \end{gathered} \right]^T \left[ \begin{gathered} A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T \end{gathered}\quad \begin{gathered} b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f^* \end{gathered}\right] \left[ \begin{gathered} x \\ 1 \end{gathered} \right] \notag \end{equation} Its non-negativeness is equivalent to \begin{equation} \left[ \begin{gathered} A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T \end{gathered} \quad \begin{gathered} b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f^* \end{gathered} \right] \succeq 0 \end{equation} Similar to the homogeneous case with a single constraint, this condition is also necessary. In view of this, when the constraint in (\ref{eq:App-01-SDP-Heter-QP}) is strictly feasible, the optimal value $f^*$ of (\ref{eq:App-01-SDP-Heter-QP}) solves the following SDP \begin{equation} \begin{aligned} \max_{\lambda,f}~~ & f \\ \mbox{s.t.} ~~ & \lambda \ge 0 \\ & \left[ \begin{gathered} A_0 + \lambda A_1 \\ (b_0 + \lambda b_1)^T \end{gathered} \quad \begin{gathered} b_0 + \lambda b_1 \\ c_0 + \lambda c_1 - f \end{gathered} \right] \succeq 0 \end{aligned} \label{eq:App-01-SDP-Heter-QP-LMI} \end{equation} This conclusion is known as the non-homogeneous S-Lemma: \begin{proposition} \label{pr:App-01-SDP-N-SLemma-2} (Non-homogeneous S-Lemma) Let $A_i \in \mathbb S^n$, $b_i \in \mathbb R^n$, and $c_i \in \mathbb R$, $i=0,1$, if $\exists x: x^T A_1 x + 2 b^T_1 x + c_1 < 0$, the implication \begin{equation} x^T A_1 x + 2 b^T_1 x + c_1 \le 0 \Rightarrow x^T A_0 x + 2 b^T_0 x + c_0 \le 0 \notag \end{equation} holds if and only if \begin{equation} \exists \lambda \ge 0 : \left[ \begin{gathered} A_0 \\ b_0^T \end{gathered} ~~ \begin{gathered} b_0 \\ c_0 \end{gathered} \right] \preceq \lambda \left[ \begin{gathered} A_1 \\ b_1^T \end{gathered} ~~ \begin{gathered} b_1 \\ c_1 \end{gathered} \right] \end{equation} \end{proposition} Because the implication boils down to the maximum of the quadratic function $x^T A_0 x + 2 b^T_0 x + c_0$ being non-positive over the set $\{x|x^T A_1 x + 2 b^T_1 x + c_1 \le 0 \}$, which is a special case of (\ref{eq:App-01-SDP-Heter-QP}) with $f_0(x) = -x^T A_0 x - 2 b^T_0 x - c_0$, Proposition \ref{pr:App-01-SDP-N-SLemma-2} is a particular case of (\ref{eq:App-01-SDP-Heter-QP-LMI}) corresponding to the certificate $f^* \ge 0$. A formal proof based on semidefinite relaxation is given in \cite{App-A-CVX-Book-Ben}. Since a convex quadratic inequality describes an ellipsoid, Proposition \ref{pr:App-01-SDP-N-SLemma-2} can be used to test whether one ellipsoid is contained in another. As a short conclusion, we summarize the relations among the discussed convex programs in Fig. \ref{fig:App-01-08}. \begin{figure}[!htp] \centering \includegraphics[scale=0.50]{Fig-App-01-08} \caption{Relations of the discussed convex programs.} \label{fig:App-01-08} \end{figure} \section{Convex Relaxation Methods for Non-convex QCQPs} \label{App-A-Sect03} One of the most prevalent and promising applications of SDP is to build tractable approximations of computationally intractable optimization problems.
One of the most quintessential applications is the convex relaxation of quadratically constrained quadratic programs (QCQPs), which cover vast engineering optimization problems. QCQPs are generally non-convex and could have more than one locally optimal solution, and different local solutions may yield significantly different objective values. However, gradient based algorithms can only find a local solution, which largely depends on the initial point. One primary interest is to identify the globally optimal solution or determine a high-quality bound for the optimum, which can be used to quantify the optimality gap of a given local optimal solution. The SDP relaxation technique for solving non-convex QCQPs is briefly reviewed in this section. \subsection{SDP Relaxation and Valid Inequalities} \label{App-A-Sect03-01} A standard fact of quadratic expressions is \begin{equation} \label{eq:App-01-SDPr-Quad} x^T Q x = \langle Q, x x^T \rangle \end{equation} where $\langle \cdot \rangle$ stands for the Frobenius inner product. Following the logic in \cite{App-A-SDP-Relaxation-Tutor}, we focus our attention on QCQPs of the following form \begin{equation} \label{eq:App-01-SDPr-QCQP} \min~~ \{x^T C x + c^T x ~|~ x \in F \} \end{equation} where \begin{equation} \label{eq:App-01-SDPr-QCQP-Fea} F = \left\{ x \in \mathbb R^n ~\middle|~ x^T A_k x + a^T_k x \le b_k,~ k = 1, \cdots, m,~ l \le x \le u \right\} \end{equation} All coefficient matrices and vectors have compatible dimensions. If $A_k = 0$ in all constraints, then the feasible set $F$ is a polyhedron, and (\ref{eq:App-01-SDPr-QCQP}) reduces to a quadratic program (QP); if $A_k \succeq 0$, $k=1,\cdots,m$ and $C \succeq 0$, (\ref{eq:App-01-SDPr-QCQP}) is a convex QCQP, which is easy to solve. Without loss of generality, we assume $A_k$, $k=1,\cdots,m$ and $C$ are indefinite, $F$ is a non-convex set, and the objective is a non-convex function. In fact, a number of hard optimization problems can be cast as non-convex QCQPs (\ref{eq:App-01-SDPr-QCQP}). For example, a polynomial optimization problem can be reduced to a QCQP by introducing a tower of condensing variables, e.g., $x_1 x_2 x_3 x_4$ could be replaced by the quadratic term $x_{12} x_{34}$ with $x_{12} = x_1 x_2$ and $x_{34}= x_3 x_4$. Moreover, a binary constraint $x \in \{0,1\}$ is equivalent to the quadratic equality $x(x-1)=0$, where $x$ is continuous. A common idea to linearize the non-convex terms $x^T A_k x$ is to define new variables $X_{ij}=x_i x_j$, $i=1,\cdots,n$, $j=1,\cdots,n$. In this way, $x^T A_k x = \sum_i \sum_j A_{ij} x_i x_j = \sum_{ij} A_{ij}X_{ij}$, and the last term is linear. Recalling (\ref{eq:App-01-SDPr-Quad}), this fact can be written in the compact form \begin{equation} x^T A_k x = \langle A_k, X\rangle, ~ X = x x^T \notag \end{equation} With this transformation, QCQP (\ref{eq:App-01-SDPr-QCQP}) becomes \begin{equation} \label{eq:App-01-SDPr-QCQP-Ext} \min~~ \{ \langle C, X \rangle + c^T x ~|~ (x,X) \in \hat F \} \end{equation} where \begin{equation} \label{eq:App-01-SDPr-QCQP-Fea-Ext} \hat F = \left\{ (x,X) \in \mathbb R^n \times \mathbb S^n ~\middle|~ \begin{gathered} \langle A_k, X \rangle + a^T_k x \le b_k,~ k = 1, \cdots, m~ \\ l \le x \le u,~~ X = x x^T \end{gathered} \right\} \end{equation} In problem (\ref{eq:App-01-SDPr-QCQP-Ext}), the non-convexity is concentrated in the relation between the lifting variable $X$ and the original variable $x$, whereas all other constraints are linear.
Moreover, if we replace $\hat F$ with its convex hull conv($\hat F$), the optimal solution of (\ref{eq:App-01-SDPr-QCQP-Ext}) will not change, because its objective function is linear. However, conv($\hat F$) does not have a closed-form expression. Convex relaxation approaches can be interpreted as attempts to approximate conv($\hat F$) through structured convex constraints which can be recognized by existing solvers. We define the following linear relaxation \begin{equation} \label{eq:App-01-SDPr-QCQP-Fea-LPr} \hat L = \left\{ (x,X) \in \mathbb R^n \times \mathbb S^n ~\middle|~ \begin{gathered} \langle A_k, X \rangle + a^T_k x \le b_k \\ k = 1, \cdots, m~ \\ l \le x \le u \end{gathered} \right\} \end{equation} which contains only linear constraints. Now, let us consider the lifting constraint \begin{equation} \label{eq:App-01-SDPr-X=xxT} X = x x^T \end{equation} which is called a rank-1 constraint. Such a rank constraint is non-convex and cannot be accepted by most solvers. Notice the fact that if (\ref{eq:App-01-SDPr-X=xxT}) holds, then \begin{equation} \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} = \begin{bmatrix} 1 & x^T \\ x & x x^T \end{bmatrix} = \begin{bmatrix} 1 \\ x \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix}^T \succeq 0 \notag \end{equation} Define an LMI constraint \begin{equation} \label{eq:App-01-SDPr-QCQP-Fea-LMI} \mbox{LMI} = \left\{ (x,X) ~\middle|~ Y= \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \succeq 0 \right\} \end{equation} The positive semidefiniteness condition holds over conv($\hat F$). The basic SDP relaxation of (\ref{eq:App-01-SDPr-QCQP-Ext}) replaces the rank-1 constraint in $\hat F$ with the weaker but convex constraint (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}), giving rise to the following SDP \begin{equation} \label{eq:App-01-SDPr-QCQP-Basic} \begin{aligned} \min~~ & \langle C, X \rangle + c^T x \\ \mbox{s.t.}~~ & (x,X) \in \hat L \cap \mbox{LMI} \end{aligned} \end{equation} Clearly, the LMI constraint enlarges the feasible region defined by (\ref{eq:App-01-SDPr-X=xxT}), so the optimal solution to (\ref{eq:App-01-SDPr-QCQP-Basic}) may not be feasible for the original QCQP, in which case the optimal value is a strict lower bound and the SDP relaxation is inexact. Conversely, if the matrix $Y$ is indeed rank-1 at the optimal solution, then the SDP relaxation is exact and $x$ solves the original QCQP (\ref{eq:App-01-SDPr-QCQP}). The basic SDP relaxation model (\ref{eq:App-01-SDPr-QCQP-Basic}) can be further improved by enforcing additional linkages between $x$ and $X$, which are called valid inequalities. Suppose linear inequalities $\alpha^T x \le \alpha_0$ and $\beta^T x \le \beta_0$ are chosen from $\hat L$; then the quadratic inequality \begin{equation} (\alpha_0 - \alpha^T x) (\beta_0 - \beta^T x) = \alpha_0 \beta_0 - \alpha_0 \beta^T x -\beta_0 \alpha^T x + x^T \alpha \beta^T x \ge 0 \notag \end{equation} holds for all $x \in \hat L$. The last quadratic term can be linearized via the lifting variable $X$, resulting in the following linear inequality \begin{equation} \label{eq:App-01-SDPr-QCQP-VIN} \alpha_0 \beta_0 - \alpha_0 \beta^T x - \beta_0 \alpha^T x + \langle \beta \alpha^T, X \rangle \ge 0 \end{equation} Any linear inequalities in $\hat L$ (possibly identical) can be used to construct valid inequalities. Because additional constraints are imposed on $X$, the relaxation could be tightened, and the feasible region shrinks but may still be larger than conv($\hat F$).
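To make the relaxation concrete, the hedged sketch below assembles (\ref{eq:App-01-SDPr-QCQP-Basic}) for a two-variable toy instance in CVXPY and checks the rank-1 exactness condition numerically. The data are purely illustrative; we also append the bound-based valid inequalities derived in the next paragraph (the RLT constraints (\ref{eq:App-01-SDPr-QCQP-VIN-RLT})), since without them this particular toy relaxation would be unbounded.
\begin{verbatim}
# A hedged sketch of the SDP relaxation of a toy nonconvex QCQP;
# C, c, A1, a1, b1, l, u are illustrative data only.
import numpy as np
import cvxpy as cp

n = 2
C = np.array([[0., 1.], [1., 0.]])      # indefinite objective matrix
c = np.array([-1., 0.])
A1 = np.array([[1., 0.], [0., -1.]])    # indefinite constraint matrix
a1, b1 = np.array([0., 1.]), 1.0
l, u = -np.ones(n), np.ones(n)
lT, uT = l.reshape(1, n), u.reshape(1, n)

Y = cp.Variable((n + 1, n + 1), symmetric=True)  # Y = [[1, x^T], [x, X]]
x, X = Y[0, 1:], Y[1:, 1:]
xc = cp.reshape(x, (n, 1))
cons = [Y[0, 0] == 1, Y >> 0,                    # LMI relaxation of X = x x^T
        cp.trace(A1 @ X) + a1 @ x <= b1,
        x >= l, x <= u,
        # RLT bounds (derived in the sequel) keep the toy relaxation bounded:
        X >= xc @ lT + (xc @ lT).T - np.outer(l, l),
        X >= xc @ uT + (xc @ uT).T - np.outer(u, u),
        X <= xc @ uT + (xc @ lT).T - np.outer(l, u)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X) + c @ x), cons)
prob.solve()

eigs = np.linalg.eigvalsh(Y.value)      # relaxation exact iff Y is rank-1
print("lower bound:", prob.value, " 2nd-largest eigenvalue of Y:", eigs[-2])
\end{verbatim}
The printed second-largest eigenvalue of $Y$ reveals whether the relaxation happens to be exact for this instance.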
If we construct valid inequalities (\ref{eq:App-01-SDPr-QCQP-VIN}) from the side constraint $l \le x \le u$, we get \begin{equation} \label{eq:App-01-SDPr-QCQP-VIN-Quad} \left. \begin{gathered} (x_i - l_i) (x_j - l_j) \ge 0 \\ (x_i - l_i) (u_j - x_j) \ge 0 \\ (u_i - x_i) (x_j - l_j) \ge 0 \\ (u_i - x_i) (u_j - x_j) \ge 0 \\ \end{gathered} \right\},~~ \forall i,j = 1,\cdots,n,~ i \le j \end{equation} Expanding these quadratic inequalities and collecting each product $x_i x_j$ into the lifting variable $X_{ij}$, we obtain simple bounds on $X_{ij}$ \begin{equation} \begin{gathered} x_i l_j + x_j l_i - l_i l_j \le X_{ij} \\ x_i u_j + x_j l_i - l_i u_j \ge X_{ij} \\ u_i x_j - u_i l_j + x_i l_j \ge X_{ij} \\ u_i x_j + x_i u_j - u_i u_j \le X_{ij} \end{gathered} \notag \end{equation} or, in the compact matrix form \cite{App-A-SDP-Relaxation-Tutor}, \begin{equation} \label{eq:App-01-SDPr-QCQP-VIN-RLT} \mbox {RLT} = \left\{ (x,X) ~\middle|~ \begin{gathered} l x^T + x l^T - l l^T \le X \\ u x^T + x u^T - u u^T \le X \\ x u^T + l x^T - l u^T \ge X \end{gathered} \right\} \end{equation} Constraint set (\ref{eq:App-01-SDPr-QCQP-VIN-RLT}) is known as the reformulation-linearization technique (RLT), after the term coined in \cite{App-A-APP-RLT}. These constraints have been extensively studied since they were first proposed in \cite{App-A-Convex-Concave-Envelop}, owing to their simple structure and satisfactory performance in various applications. The improved SDP relaxation with valid inequalities can be written as \begin{equation} \label{eq:App-01-SDPr-QCQP-LMI+RLT} \begin{aligned} \min~~ & \langle C, X \rangle + c^T x \\ \mbox{s.t.}~~ & (x,X) \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \end{aligned} \end{equation} From the construction of $\hat L$, LMI, and RLT, it is directly concluded that \begin{equation} \label{eq:App-01-SDPr-QCQP-Inclusion} \mbox{conv}(\hat F) \subseteq \hat L \cap \mbox{LMI} \cap \mbox{RLT} \end{equation} The inclusion becomes tight only in some very special situations, such as those encountered in the homogeneous and non-homogeneous S-Lemma. Nevertheless, what we really need is the equivalence between the optimal solution of the relaxed problem (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) and that of the original problem (\ref{eq:App-01-SDPr-QCQP}): if the optimal matrix variable of (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) allows a rank-1 decomposition \begin{equation} \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} = \begin{bmatrix} 1 \\ x \end{bmatrix} \begin{bmatrix} 1 \\ x \end{bmatrix}^T \notag \end{equation} which indicates that $X$ has the rank-1 decomposition $X = x x^T$, then $x$ is optimal in (\ref{eq:App-01-SDPr-QCQP}), and the SDP relaxation is said to be exact, although $\mbox{conv}(\hat F)$ may be a strict subset of $\hat L \cap \mbox{LMI} \cap \mbox{RLT}$. \subsection{Successively Tightening the Relaxation} \label{App-A-Sect03-02} If the matrix $X$ has a rank higher than 1, the corresponding optimal solution $x$ of (\ref{eq:App-01-SDPr-QCQP-LMI+RLT}) may be infeasible for (\ref{eq:App-01-SDPr-QCQP}). The rank-1 constraint on $X$ can be exactly described by a pair of LMIs $X \succeq x x^T$ and $X \preceq x x^T$. The former is redundant given (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}), as indicated by the Schur complement theorem; the latter is non-convex and is simply neglected in the SDP relaxation. \vspace{12pt} {\noindent \bf 1.
A dynamic valid inequality generation approach} An approach is proposed in \cite{App-A-SDP-Relaxation-Tutor} to generate valid inequalities dynamically by harnessing the constraint violations in $X \preceq x x^T$. The motivation comes from the fact that \begin{equation} X -x x^T \preceq 0 \Leftrightarrow \langle X, v_i v^T_i \rangle \le (v^T_i x)^2,~ i=1,\cdots,n \notag \end{equation} where $\{v_1,\cdots,v_n\}$ is an orthogonal basis of eigenvectors of $X - x x^T$. To see this, any vector $h \in \mathbb R^n$ can be expressed as a linear combination of the basis vectors, $h = \sum_{i=1}^n \lambda_i v_i$; since the eigenvectors diagonalize $X - x x^T$, the cross terms vanish and $h^T (X-x x^T) h = \sum_{i=1}^n \lambda^2_i v_i^T (X - x x^T) v_i = \sum_{i=1}^n \lambda^2_i [\langle X, v_i v^T_i \rangle - (v^T_i x)^2] \le 0$. In view of this, \begin{equation} \hat F = \hat L \cap \mbox{LMI} \cap \mbox{NSD} \notag \end{equation} where \begin{equation} \begin{aligned} \mbox{NSD} & = \{(x,X)~|~ X -x x^T \preceq 0 \} \\ & = \{ (x,X) ~|~ \langle X, v v^T \rangle \le (v^T x)^2,~ \forall v \in \mathbb R^n \} \end{aligned} \notag \end{equation} Restricting $v$ to the standard orthogonal basis yields the simple necessary conditions \begin{equation} \label{eq:App-01-SDPr-QCQP-NSD-1} X_{ii} \le x_i^2,~i=1,\cdots,n \end{equation} It is proposed in \cite{App-A-SDP-Relaxation-Tutor} to generate the deeper cuts \begin{equation} \label{eq:App-01-SDPr-QCQP-NSD-2} \langle X, \eta_i \eta^T_i \rangle \le (\eta^T_i x)^2,~i=1,\cdots,n \end{equation} where $\{\eta_1,\cdots,\eta_n\}$ are the eigenvectors of the matrix $X - x x^T$ at the incumbent relaxation solution, because they exclude infeasible points with respect to $X -x x^T \preceq 0$ most effectively. The non-convex constraints (\ref{eq:App-01-SDPr-QCQP-NSD-1}) and (\ref{eq:App-01-SDPr-QCQP-NSD-2}) can be handled by the special disjunctive programming technique derived in \cite{App-A-QCQP-Extended} or by the convex-concave procedure investigated in \cite{App-A-CCP-Boyd}. The former is an exact approach which requires binary variables to formulate disjunctive constraints; the latter is a heuristic approach which only solves convex optimization problems. We do not further detail these techniques here. \vspace{12pt} {\noindent \bf 2. A rank penalty method \cite{App-A-SDP-Rank-CCP}} In view of the rank-1 exactness condition, another way to tighten the SDP relaxation is to work on the rank of the optimal solution. A successive rank penalty approach is proposed in \cite{App-A-SDP-Rank-CCP}. We consider problem (\ref{eq:App-01-SDPr-QCQP-Ext}) as a rank-constrained SDP \begin{equation} \label{eq:App-01-SDP-Rank} \min~~ \{ \langle {\rm \Omega}, Y \rangle ~|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT},~ \mbox{rank}(Y) = 1 \} \end{equation} where \begin{equation} {\rm \Omega} = \begin{bmatrix} 0 & 0.5 c^T \\ 0.5 c & C \end{bmatrix},~~ Y = \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \notag \end{equation} and the constraints $\hat L$, LMI, and RLT (rearranged for the variable $Y$) are defined in (\ref{eq:App-01-SDPr-QCQP-Fea-LPr}), (\ref{eq:App-01-SDPr-QCQP-Fea-LMI}), and (\ref{eq:App-01-SDPr-QCQP-VIN-RLT}), respectively. The last constraint in (\ref{eq:App-01-SDP-Rank}) ensures that $Y$ has a rank-1 decomposition such that $X = x x^T$. Actually, LMI and RLT are redundant given the rank-1 constraint, but they provide a high quality convex relaxation when the rank constraint is relaxed.
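In practice, the cut directions $\eta_i$ in (\ref{eq:App-01-SDPr-QCQP-NSD-2}) are extracted from an eigendecomposition of $X - x x^T$ at the incumbent relaxation solution. The following hedged numpy sketch performs this extraction; the data, the tolerance, and the routine name are our own illustrative choices.
\begin{verbatim}
# A sketch of generating eigenvector cuts for the NSD condition
# X - x x^T <= 0: every direction eta with eta^T (X - x x^T) eta > 0
# certifies violation and yields a cut <X, eta eta^T> <= (eta^T x)^2.
import numpy as np

def nsd_cut_directions(x, X, tol=1e-8):
    """Return eigenvectors of X - x x^T with positive eigenvalues."""
    w, V = np.linalg.eigh(X - np.outer(x, x))
    return [V[:, i] for i in range(len(w)) if w[i] > tol]

# toy relaxation output: X exceeds x x^T in one direction
x = np.array([1.0, 0.5])
X = np.outer(x, x) + np.diag([0.3, 0.0])
for eta in nsd_cut_directions(x, X):
    print("violated cut: <X, eta eta^T> =", eta @ X @ eta,
          "> (eta^T x)^2 =", (eta @ x) ** 2)
\end{verbatim}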
To treat the rank-1 constraint in a soft manner, we introduce a dummy variable $Z$, and penalize the matrix rank in the objective function, giving rise to the following problem \begin{equation} \label{eq:App-01-SDP-Rank-Penalty} \begin{aligned} \min_{Y}~~ & \left\{ \langle {\rm \Omega}, Y \rangle + \min_{Z} \frac{\rho}{2} \| Y-Z \|^2_2 \right\} \\ \mbox{s.t.} ~~ & Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT},~ \mbox{rank}(Z) = 1 \end{aligned} \end{equation} If the penalty parameter $\rho$ is sufficiently large, the penalty term will be zero at the optimal solution, so $Y=Z$ and rank$(Z)=1$. One advantage of this treatment is that the constraints on $Y$ and $Z$ are decoupled, and the inner rank minimization problem has a closed-form solution. To see this, if rank$(Y) = k > 1$, the singular value decomposition of $Y$ has the form $Y = U {\rm \Sigma} V^T$, where \begin{equation} {\rm \Sigma} = \mbox{diag}(S,0),~ S = \mbox{diag}(\sigma_1,\cdots,\sigma_k),~ \sigma_1 \ge \cdots \ge \sigma_k > 0 \notag \end{equation} and $U$ and $V$ are orthogonal matrices. Let matrix $D$ have the same dimension as $Y$, $D_{11}=\sigma_1$, and $D_{ij}=0$, $\forall (i,j) \ne (1,1)$; by the Eckart-Young theorem, the best rank-1 approximation of $\rm \Sigma$ is $D$, so we have \begin{equation} \begin{aligned} & \quad \min_{Z}~ \left\{ \frac{\rho}{2} \| Y - Z \|^2_2 ~\middle|~ \mbox{rank}(Z) = 1 \right\} \\ & = \min_{Z}~ \left\{ \frac{\rho}{2} \| U^T(Y - Z)V \|^2_2 ~\middle|~ \mbox{rank}(Z) = 1 \right\} \\ & = \min_{Z}~ \left\{ \frac{\rho}{2} \| {\rm \Sigma} - U^T Z V \|^2_2 ~\middle|~ \mbox{rank}(Z) = 1 \right\} \\ & = \frac{\rho}{2} \| {\rm \Sigma} - D \|^2_2 = \frac{\rho}{2} \sum_{i=2}^k \sigma_i^2 \\ & = \frac{\rho}{2} \| Y \|^2_2 - \frac{\rho}{2} \sigma_1^2 (Y) \end{aligned} \notag \end{equation} where the second line uses the orthogonal invariance of the Frobenius norm. To represent the latter term via a convex function, let matrix $\rm \Theta$ have the same dimension as $Y$, ${\rm \Theta}_{11} = 1$, and ${\rm \Theta}_{ij}=0$, $\forall (i,j) \ne (1,1)$; we have \begin{equation} \mbox{tr}(Y^T U {\rm \Theta} U^T Y) = \mbox{tr}(V {\rm \Sigma} U^T U {\rm \Theta} U^T U {\rm \Sigma} V^T) = \mbox{tr}(V {\rm \Sigma} {\rm \Theta} {\rm \Sigma} V^T) = \mbox{tr}({\rm \Sigma} {\rm \Theta} {\rm \Sigma}) = \sigma^2_1 (Y) \notag \end{equation} Define two functions $f(Y) = \langle {\rm \Omega}, Y \rangle + \frac{\rho}{2} \|Y\|^2_2$ and $g(Y) = \mbox{tr}(Y^T U {\rm \Theta} U^T Y)$. Because $\|Y\|_2$ is convex in $Y$ (Example 3.11, \cite{App-A-CVX-Book-Boyd}), so is $\|Y\|^2_2$ (composition rule, page 84, \cite{App-A-CVX-Book-Boyd}); clearly, $f(Y)$ is a convex function of $Y$, as it is the sum of a linear function and a convex function. For the latter, the Hessian matrix of $g(Y)$ is \begin{equation*} \nabla^2_Y g(Y) = U {\rm \Theta} U^T = U {\rm \Theta}^T {\rm \Theta} U^T = ({\rm \Theta} U^T)^T {\rm \Theta} U^T \succeq 0 \end{equation*} so $g(Y)$ is also convex in $Y$. Substituting the above results into problem (\ref{eq:App-01-SDP-Rank-Penalty}), the rank-constrained SDP (\ref{eq:App-01-SDP-Rank}) boils down to \begin{equation} \label{eq:App-01-SDP-Rank-DCP} \min_{Y} \left\{ \langle {\rm \Omega}, Y \rangle + \frac{\rho}{2} \|Y\|^2_2 - \frac{\rho}{2} \mbox{tr}(Y^T U {\rm \Theta} U^T Y) ~\middle|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \right\} \end{equation} The objective function is a DC (difference of convex) function, and the feasible region is convex, so (\ref{eq:App-01-SDP-Rank-DCP}) is a DC program. One can employ the convex-concave procedure discussed in \cite{App-A-CCP-Boyd} to solve this problem. The flowchart is summarized in Algorithm \ref{Ag:App-01-SDP-Rank-CCP}.
\begin{algorithm}[!htp] \normalsize \caption{\bf : Sequential SDP} \begin{algorithmic}[1] \STATE Choose an initial penalty parameter $\rho^0$, a penalty growth rate $\tau > 0$, and solve the following SDP relaxation model \begin{equation} \min~~ \{ \langle {\rm \Omega}, Y \rangle ~|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \} \notag \end{equation} The optimal solution is $Y^*$. \STATE Construct the linear approximation of $g(Y)$ as \begin{equation} \label{eq:App-01-SDP-DCP-Concave-Lin} g_L (Y,Y^*) = g(Y^*) + \langle \nabla g(Y^*),Y-Y^* \rangle \end{equation} Solve the following SDP \begin{equation} \label{eq:App-01-SDP-DCP-Master} \min_{Y} \left\{ f(Y) - \frac{\rho}{2} g_L(Y,Y^*) ~\middle|~ Y \in \hat L \cap \mbox{LMI} \cap \mbox{RLT} \right\} \end{equation} The optimal solution is $Y^*$. \STATE If rank$(Y^*)=1$, terminate and report the optimal solution $Y^*$; otherwise, update $\rho \leftarrow (1+\tau) \rho$, and go to step 2. \end{algorithmic} \label{Ag:App-01-SDP-Rank-CCP} \end{algorithm} For the convergence of Algorithm \ref{Ag:App-01-SDP-Rank-CCP}, we have the following properties. \begin{proposition} \label{pr:App-01-SDP-Rank-CCP-1} \cite{App-A-SDP-Rank-CCP} The objective value sequence $F(Y^i)$ generated by Algorithm \ref{Ag:App-01-SDP-Rank-CCP} is monotonically decreasing. \end{proposition} Denote by $F(Y) = f(Y) - \frac{\rho}{2} g(Y)$ the objective function of (\ref{eq:App-01-SDP-Rank-DCP}) in the DC form, and by $H(Y,Y^i) = f(Y) - \frac{\rho}{2} g_L(Y,Y^i)$ the convexified objective function in (\ref{eq:App-01-SDP-DCP-Master}) obtained by linearizing the concave term in $F(Y)$. Two basic facts help explain this proposition: 1) $g_L(Y^*,Y^*) = g(Y^*)$, $\forall Y^*$, which directly follows from the definition in (\ref{eq:App-01-SDP-DCP-Concave-Lin}). 2) For any given $Y^*$, $g(Y) \ge g_L(Y,Y^*)$, $\forall Y$, because the graph of a convex function must lie above its tangent plane at any fixed point. First, we can assert the inequality $H(Y^{i+1},Y^i) \le H(Y^i,Y^i)$: because $H(Y,Y^i)$ is minimized in problem (\ref{eq:App-01-SDP-DCP-Master}), the optimum $H(Y^{i+1},Y^i)$ attains a value no greater than that at any feasible point. Furthermore, with the definition of $H(Y^i,Y^i)$, we have \begin{equation} H(Y^{i+1},Y^i) \le H(Y^i,Y^i) = f(Y^i) - \frac{\rho}{2} g_L(Y^i,Y^i) = f(Y^i) - \frac{\rho}{2} g(Y^i) = F(Y^i) \notag \end{equation} On the other hand, \begin{equation} H(Y^{i+1},Y^i)=f(Y^{i+1}) - \frac{\rho}{2} g_L(Y^{i+1},Y^i) \ge f(Y^{i+1}) - \frac{\rho}{2} g(Y^{i+1})=F(Y^{i+1}) \notag \end{equation} Consequently, we arrive at the monotonic property \begin{equation} F(Y^{i+1}) \le F(Y^i) \notag \end{equation} \begin{proposition} \label{pr:App-01-SDP-Rank-CCP-2} \cite{App-A-SDP-Rank-CCP} The solution sequence $Y^i$ generated by Algorithm \ref{Ag:App-01-SDP-Rank-CCP} converges to the optimal solution of problem (\ref{eq:App-01-SDP-Rank}) as $\rho \rightarrow \infty$. \end{proposition} It is easy to understand that whenever $\rho$ is sufficiently large, the penalty term tends to 0, and the rank-1 constraint in (\ref{eq:App-01-SDP-Rank}) is met. A formal proof can be found in \cite{App-A-SDP-Rank-CCP}. A few more remarks are given below. 1) The convex-concave procedure in \cite{App-A-SDP-Boyd} is a local algorithm under mild conditions and needs a manually supplied initial point.
Algorithm \ref{Ag:App-01-SDP-Rank-CCP}, however, is deliberately initialized at the solution offered by the SDP relaxation model, which is usually close to the global optimum for many engineering optimization problems. Therefore, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} generally performs well and will identify the global optimal solution, although a provable guarantee is non-trivial. 2) In practical applications, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} can converge without the penalty parameter approaching infinity, because when some constraint qualification holds, there exists an exact penalty parameter $\rho^*$ such that the optimal solution leads to a zero penalty term for any $\rho \ge \rho^*$ \cite{App-A-Exact-Penalty-1,App-A-Exact-Penalty-2}, and Algorithm \ref{Ag:App-01-SDP-Rank-CCP} converges in a finite number of steps. If the exact penalty parameter does not exist, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} may fail to converge. In such circumstances, one can impose an upper bound on $\rho$ and use an alternative convergence criterion: the change of the objective value $F(Y)$ in two consecutive steps is less than a given threshold. As a result, Algorithm \ref{Ag:App-01-SDP-Rank-CCP} will be able to find an approximate solution of problem (\ref{eq:App-01-SDP-Rank}), in which the rank-1 constraint may not be enforced exactly. 3) From the numerical computation perspective, a very large $\rho$ may render the problem ill-conditioned and lead to numerical instability, so it is useful to gradually increase $\rho$ from a small value. Another reason for the moderate growth of $\rho$ is that it avoids dramatic changes of the optimal solution between two successive iterations; as a result, $g_L(Y,Y^*)$ provides a relatively accurate approximation of $g(Y)$ in every iteration. 4) The penalty term $\rho_i p(Y^i)/2 = \rho_i \left[ \|Y^i\|^2_2 - \mbox{tr}(Y^{iT} U {\rm \Theta} U^T Y^i) \right]/2$ gives an upper bound on the optimality gap induced by the rank relaxation. To see this, let $\rho^*$ and $Y^*$ be the exact penalty parameter and the corresponding optimal solution of (\ref{eq:App-01-SDP-DCP-Master}), i.e., $p(Y^*)=0$, and let $\rho_i$ and $Y^i$ be the penalty parameter and optimal solution in the $i$-th iteration. According to Proposition \ref{pr:App-01-SDP-Rank-CCP-1}, we have $\langle {\rm \Omega}, Y^* \rangle \le \langle {\rm \Omega}, Y^i \rangle + \rho_i p(Y^i)/2$; moreover, since the rank-1 constraint is relaxed before Algorithm \ref{Ag:App-01-SDP-Rank-CCP} converges, $\langle {\rm \Omega}, Y^i \rangle \le \langle {\rm \Omega}, Y^* \rangle$ holds. Therefore, $\langle {\rm \Omega}, Y^i \rangle$ and $\langle {\rm \Omega}, Y^i \rangle + \rho_i p(Y^i)/2$ are lower and upper bounds for the optimal value of problem (\ref{eq:App-01-SDP-Rank}), and $\rho_i p(Y^i)/2$ is an estimate of the optimality gap. \subsection{Completely Positive Program Relaxation} \label{App-A-Sect03-03} Inspired by the convex hull expression in (\ref{eq:App-01-COMPL-Cone}), researchers have shown that most non-convex QCQPs can be modeled as linear programs over the intersection of a completely positive cone and a polyhedron \cite{App-A-COPr-1,App-A-COPr-2,App-A-COPr-3}. For example, consider minimizing a quadratic function over the standard simplex \begin{equation} \label{eq:App-01-COPr-Example-1} \begin{aligned} \min~~ & x^T Q x \\ \mbox{s.t.} ~~ & e^T x = 1 \\ & x \ge 0 \end{aligned} \end{equation} where $Q \in \mathbb S^n$, and $e$ denotes the all-one vector with $n$ entries.
Following a paradigm similar to (\ref{eq:App-01-SDPr-QCQP-Ext}), let $X = x x^T$; then we can construct the valid equality \begin{equation*} 1 = x^T e e^T x = x^T E x = \langle E, X \rangle \end{equation*} where $E=ee^T$ is the all-one matrix. According to (\ref{eq:App-01-COMPL-Cone}), conv$\{xx^T| x \in \mathbb R^n_+ \}$ is given by $(\mathbb C^n_+)^*$. Therefore, problem (\ref{eq:App-01-COPr-Example-1}) transforms to \begin{equation} \label{eq:App-01-COPr-Example-2} \begin{aligned} \min~~ & \langle Q, X \rangle \\ \mbox{s.t.} ~~ & \langle E, X \rangle = 1 \\ & X \in (\mathbb C^n_+)^* \end{aligned} \end{equation} Problem (\ref{eq:App-01-COPr-Example-2}) is a convex relaxation of (\ref{eq:App-01-COPr-Example-1}). Because the objective is linear, the optimal solution must be located at an extremal point of the convex hull of the feasible region. In view of the representation in (\ref{eq:App-01-COMPL-Cone}), the extremal points are exactly rank-1, so the convex relaxation (\ref{eq:App-01-COPr-Example-2}) is always exact. A much more general result is demonstrated in \cite{App-A-COPr-2}: every quadratic program with linear and binary constraints can be rewritten as a completely positive program. More precisely, the mixed-integer quadratic program \begin{equation} \label{eq:App-01-COPr-Example-3} \begin{aligned} \min~~ & x^T Q x + 2 c^T x \\ \mbox{s.t.} ~~ & a^T_i x = b_i,~ i = 1,\cdots,m \\ & x \ge 0,~ x_j \in \{0,1\}, j \in B \end{aligned} \end{equation} and the following completely positive program \begin{equation} \label{eq:App-01-COPr-Example-4} \begin{aligned} \min~~ & \langle Q, X \rangle + 2 c^T x \\ \mbox{s.t.} ~~ & a^T_i x = b_i,~ i = 1,\cdots,m \\ & \langle a_i a^T_i, X \rangle = b^2_i,~ i = 1,\cdots,m \\ & x_j = X_{jj},~ j \in B \\ & X \in (\mathbb C^n_+)^* \end{aligned} \end{equation} attain the same optimal value, as long as problem (\ref{eq:App-01-COPr-Example-3}) satisfies: $a^T_i x = b_i$, $\forall i$ and $x \ge 0$ imply $x_j \le 1$, $\forall j \in B$. Actually, this is a relatively mild condition \cite{App-A-COPr-2}. Complementarity constraints can be handled in a similar way. Whether problems with general quadratic constraints can be restated as completely positive programs in a similar way remains an open question. The NP-hardness of problem (\ref{eq:App-01-COPr-Example-3}) makes (\ref{eq:App-01-COPr-Example-4}) NP-hard as well; the complexity has been encapsulated in the last cone constraint. The reformulation is still interesting due to its convexity. Furthermore, it can be approximated via a sequence of SDPs with growing sizes \cite{App-A-COPr-SOS} given an arbitrarily small error bound. \subsection{MILP Approximation} \label{App-A-Sect03-04} The SDP relaxation technique introduces a squared matrix variable that contains $n(n+1)/2$ independent entries. Although exploiting the sparsity pattern of $X$ via graph theory helps expedite the solution, the computational burden is still high, especially when the initial relaxation is inexact and a sequence of SDPs has to be solved. Inspired by difference-of-convex programming, an alternative choice is to express the non-convexity of a QCQP through univariate concave functions, and to approximate these concave functions via PWL functions compatible with mixed-integer programming solvers. This approach has been expounded in \cite{App-A-QCQP-MILP}.
Consider the nonconvex QCQP \begin{equation} \label{eq:App-01-QCQP-MILP-Appr-1} \begin{aligned} \min~~ & x^T A_0 x + a^T_0 x \\ \mbox{s.t.} ~~ & x^T A_k x + a^T_k x \le b_k,~ k = 1, \cdots, m \end{aligned} \end{equation} We can always find $\delta_0$, $\delta_1$, $\cdots$, $\delta_m \in \mathbb R^+$ such that $A_k + \delta_k I \succeq 0$, $k=0,\cdots,m$. For example, $\delta_k$ can take the absolute value of the most negative eigenvalue of $A_k$, and $\delta_k=0$ if $A_k \succeq 0$. Then, problem (\ref{eq:App-01-QCQP-MILP-Appr-1}) can be cast as \begin{equation} \label{eq:App-01-QCQP-MILP-Appr-2} \begin{aligned} \min~~ & x^T (A_0+\delta_0I) x + a^T_0 x -\delta_0 1^T y \\ \mbox{s.t.} ~~ & x^T (A_k + \delta_k I) x + a^T_k x - \delta_k 1^T y \le b_k,~ k = 1, \cdots, m \\ & y_i = x^2_i, i=1, \cdots, n \end{aligned} \end{equation} Problem (\ref{eq:App-01-QCQP-MILP-Appr-2}) is actually a difference-of-convex program; however, the nonconvexity is consolidated in much simpler parabolic equalities, which can be linearized via the SOS2 based PWL approximation technique discussed in Appendix \ref{App-B-Sect01}. Except for the last $n$ quadratic equalities, the remaining constraints and the objective function of problem (\ref{eq:App-01-QCQP-MILP-Appr-2}) are all convex, so the linearized problem gives rise to a mixed-integer convex quadratic program. Alternatively, we can first perform convex relaxation by replacing $y_i = x^2_i$ with $y_i \ge x^2_i$, $i=1, \cdots, n$; if strict inequality holds at the optimal solution, a disjunctive cut is generated to remove this point from the feasible region. However, the initial convex relaxation can be very weak ($y=+\infty$ is usually an optimal solution), so predefined disjunctive cuts can be added \cite{App-A-QCQP-MILP}. Finally, nonconvex QCQP is a hard optimization problem, and an efficient algorithm should leverage the specific problem structure. For example, SDP relaxation is suitable for OPF problems, while MILP approximation can be used for small and dense problems. Unlike SDP relaxation, which works on a squared matrix variable, the MILP approximation built on (\ref{eq:App-01-QCQP-MILP-Appr-2}) involves only a moderate number of auxiliary variables. Therefore, this approach is promising for practical problems, whose coefficient matrices are usually sparse. Furthermore, no particular assumption is needed to guarantee the exactness of the relaxation, so this method is general enough to tackle a wide spectrum of engineering optimization problems. \section{MILP Formulation of Nonconvex QPs} \label{App-A-Sect04} In a non-convex QCQP, if the constraints are all linear, the problem is called a nonconvex QP. There is no doubt that the convex relaxation methods presented in the previous section can be applied to nonconvex QPs; however, the relaxation is generally inexact. In this section, we introduce exact MILP formulations to globally solve such a nonconvex optimization problem. Unlike the mixed-integer programming approximation method in Sect. \ref{App-A-Sect03-04}, in which approximation error is inevitable, the MILP models derived here via duality theory are completely equivalent to the original QP. Thanks to the advent of powerful MILP solvers, this method is becoming increasingly competitive compared to existing global solution methods and is attracting growing attention from the research community. \subsection{Nonconvex QPs over polyhedra} The presented approach is devised in \cite{App-A-QP-MILP}.
\section{MILP Formulation of Nonconvex QPs} \label{App-A-Sect04} If all constraints of a non-convex QCQP are linear, the problem is called a nonconvex QP. The convex relaxation methods presented in the previous section can certainly be applied to nonconvex QPs, but the relaxation is generally inexact. In this section, we introduce exact MILP formulations for globally solving such a nonconvex optimization problem; unlike the mixed-integer programming approximation method in Sect. \ref{App-A-Sect03-04}, in which approximation error is inevitable, the MILP models obtained via duality theory are completely equivalent to the original QP. Thanks to the advent of powerful MILP solvers, this method is becoming increasingly competitive compared to existing global solution methods and is attracting more attention from the research community. \subsection{Nonconvex QPs over polyhedra} The approach presented here is devised in \cite{App-A-QP-MILP}. A nonconvex QP with linear constraints has the form \begin{equation} \label{eq:App-01-NC-QP} \begin{aligned} \min~~ & \frac{1}{2} x^T Q x + c^T x \\ \mbox{s.t.} ~~ & Ax \le b \end{aligned} \end{equation} where $Q$ is a symmetric but indefinite matrix, and $A$, $b$, $c$ are constant coefficients with compatible dimensions. We assume that finite lower and upper limits on the decision variable $x$ have been included, and thus the feasible region is a bounded polyhedron. The KKT conditions of (\ref{eq:App-01-NC-QP}) can be written as \begin{equation} \label{eq:App-01-NC-QP-KKT} \begin{gathered} c + Qx + A^T \xi = 0 \\ 0 \le \xi \bot b - Ax \ge 0 \end{gathered} \end{equation} If there is a multiplier $\xi$ such that the pair $(x,\xi)$ of primal and dual variables satisfies KKT condition (\ref{eq:App-01-NC-QP-KKT}), then $x$ is said to be a KKT point or a stationary point. The complementarity and slackness condition in (\ref{eq:App-01-NC-QP-KKT}) gives $b^T \xi = x^T A^T \xi$. Hence, for any primal-dual pair $(x,\xi)$ that satisfies (\ref{eq:App-01-NC-QP-KKT}), the following relations hold \begin{equation} \label{eq:App-01-NC-QP-Lin} \begin{aligned} \frac{1}{2} x^T Qx + c^T x &= \frac{1}{2} c^T x +\frac{1}{2} x^T (c + Qx)\\ & = \frac{1}{2} c^T x - \frac{1}{2} x^T A^T \xi = \frac{1}{2} \left( c^T x - b^T \xi \right) \end{aligned} \end{equation} As such, the non-convex quadratic objective function is equivalently stated as a linear function in the primal and dual variables without loss of accuracy. Thus, if problem (\ref{eq:App-01-NC-QP}) has an optimal solution, the solution can be retrieved by solving the LPCC \begin{equation} \label{eq:App-01-NC-QP-LPCC} \begin{aligned} \min~~ & \frac{1}{2} \left( c^T x - b^T \xi \right) \\ \mbox{s.t.} ~~ & c + Qx + A^T \xi = 0 \\ &0 \le \xi \bot b - Ax \ge 0 \end{aligned} \end{equation} which is equivalent to the following MILP \begin{equation} \label{eq:App-01-NC-QP-MILP} \begin{aligned} \min~~ & \frac{1}{2} \left( c^T x - b^T \xi \right) \\ \mbox{s.t.} ~~ & c + Qx + A^T \xi = 0 \\ & 0 \le \xi \le M(1-z) \\ & 0 \le b - Ax \le Mz \\ & z ~\mbox{ binary} \end{aligned} \end{equation} where $M$ is a sufficiently large constant and $z$ is a vector of binary variables. Whichever value $z_i$ takes, at most one of $\xi_i$ and $(b-Ax)_i$ can be strictly positive. For a more rigorous discussion of this method, please see \cite{App-A-QP-MILP}, in which an unbounded feasible region is also considered. More tricks of the MILP reformulation technique can be found in the next chapter. It should be pointed out that the set of optimal solutions of (\ref{eq:App-01-NC-QP}) is a subset of the stationary points described by (\ref{eq:App-01-NC-QP-KKT}), because (\ref{eq:App-01-NC-QP-KKT}) is only a necessary, not a sufficient, condition for optimality. Nevertheless, since we assumed that the feasible region is a bounded polytope (thus compact), QP (\ref{eq:App-01-NC-QP}) must have a finite optimum; then, according to \cite{App-A-QP-LPCC-Opt-Eqv}, the optimal value is equal to the minimum of the objective function values attained at the stationary points. Therefore, MILP (\ref{eq:App-01-NC-QP-MILP}) provides an exact solution to (\ref{eq:App-01-NC-QP}).
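A compact numerical sketch of (\ref{eq:App-01-NC-QP-MILP}) is given below. It assumes cvxpy with a mixed-integer LP solver (here GLPK\_MI) and a manually chosen big-M; both are illustrative assumptions, not part of the method itself.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Sketch of MILP (NC-QP-MILP) for min 0.5 x'Qx + c'x s.t. Ax <= b.
# Q = [[0,-1],[-1,0]] gives objective -x1*x2 over the unit box; the
# stationary point of least objective value is x = (1,1), value -1.
Q = np.array([[0.0, -1.0], [-1.0, 0.0]])
c = np.zeros(2)
A = np.vstack([np.eye(2), -np.eye(2)])       # encodes 0 <= x <= 1
b = np.array([1.0, 1.0, 0.0, 0.0])
M = 10.0                                     # big-M, problem dependent

m = b.size
x, xi = cp.Variable(2), cp.Variable(m)
z = cp.Variable(m, boolean=True)
cons = [c + Q @ x + A.T @ xi == 0,
        xi >= 0, xi <= M * (1 - z),
        b - A @ x >= 0, b - A @ x <= M * z]
prob = cp.Problem(cp.Minimize(0.5 * (c @ x - b @ xi)), cons)
prob.solve(solver=cp.GLPK_MI)
print(x.value, prob.value)
\end{verbatim}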
Finally, we shed some light on the selection of $M$, since it has a notable impact on the computational efficiency of (\ref{eq:App-01-NC-QP-MILP}). An LP based bound preprocessing method is thoroughly discussed in \cite{App-A-QP-LPCC-Bounding}, where it is used in a finite branch-and-bound method for solving LPCC (\ref{eq:App-01-NC-QP-LPCC}). Here we briefly introduce the bounding method. For the primal variable $x$, which represents physical quantities or measures, the bounds depend on practical situations and security considerations, and we assume the bound is $0 \le x \le U$. The bound can be tightened by solving \begin{equation} \label{eq:App-01-NC-QP-LPCC-Primal-Bounding} \min (\max)~ \{x_j ~|~ Ax \le b,~ 0 \le x \le U \} \end{equation} Through (\ref{eq:App-01-NC-QP-LPCC-Primal-Bounding}) we can obtain individual bounds for the components of vector $x$, which never cut off the optimal solution and can be supplemented in (\ref{eq:App-01-NC-QP-MILP}). For the dual variables, we consider (\ref{eq:App-01-NC-QP}) again with explicit bounds on the primal variable $x$ \begin{equation*} \begin{aligned} \min~~ & \frac{1}{2} x^T Q x + c^T x \\ \mbox{s.t.} ~~ & Ax \le b: \xi \\ & 0 \le x \le U: \lambda,\rho \end{aligned} \end{equation*} where $\xi$, $\lambda$, $\rho$ following the colons are dual variables. Its KKT condition reads \begin{subequations} \label{eq:App-01-NC-QP-KKT-Primal-Bound} \begin{gather} c + Qx + A^T \xi - \lambda + \rho = 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-1}\\ 0 \le \xi \bot b - Ax \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-2}\\ 0 \le x \bot \lambda \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-3} \\ 0 \le U - x \bot \rho \ge 0 \label{eq:App-01-NC-QP-KKT-Primal-Bound-4} \end{gather} Multiplying both sides of (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-1}) by a feasible solution $x^T$ gives \begin{equation} \label{eq:App-01-NC-QP-KKT-Primal-Bound-5} c^T x + x^T Q x + x^T A^T \xi - x^T \lambda + x^T \rho = 0 \end{equation} Substituting $\xi^T A x = \xi^T b$, $x^T \lambda = 0$, and $x^T \rho = \rho^T U$, which follow from (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-2})-(\ref{eq:App-01-NC-QP-KKT-Primal-Bound-4}), into (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-5}) yields \begin{equation} \label{eq:App-01-NC-QP-KKT-Primal-Bound-6} c^T x + x^T Q x + b^T \xi + U^T \rho = 0 \end{equation} \end{subequations} The upper bounds (the lower bounds are 0) on the dual variables required by MILP (\ref{eq:App-01-NC-QP-MILP}) can be computed from the following LP: \begin{subequations} \label{eq:App-01-NC-QP-KKT-Dual-Bounding} \begin{align} \max~~ & \lambda_j \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Obj} \\ \mbox{s.t.}~~ & c + Qx + A^T \xi - \lambda + \rho = 0 \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-1} \\ & \mbox{tr}(Q^T X) + c^T x + b^T \xi + U^T \rho = 0 \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2} \\ & \mbox{Cons-RLT}=\{ (x,X)~|~(\ref{eq:App-01-SDPr-QCQP-VIN-RLT})\} \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-3} \\ & 0 \le x \le U,~ Ax \le b,~ \lambda, \xi, \rho \ge 0 \label{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-4} \end{align} \end{subequations} In (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2}), the quadratic equality (\ref{eq:App-01-NC-QP-KKT-Primal-Bound-6}) is linearized by letting $X = xx^T$, and (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-3}) is a linear relaxation of this rank-1 condition, as explained in Sect. \ref{App-A-Sect03-01}. By exploiting the relaxed quadratic identity (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding-Cons-2}), it has been proved that problem (\ref{eq:App-01-NC-QP-KKT-Dual-Bounding}) always has a finite optimum, because the recession cone of the set comprising the primal and dual variables together with their associated valid inequalities is empty; see the proof of Proposition 3.1 in \cite{App-A-QP-LPCC-Bounding}. This is a pivotal theoretical guarantee: other bounding techniques which utilize only the KKT conditions can hardly ensure a finite optimum.
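The primal bounding step (\ref{eq:App-01-NC-QP-LPCC-Primal-Bounding}) amounts to a pair of LPs per coordinate; a minimal sketch follows (cvxpy and the tiny polyhedron are assumed for illustration only).
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Sketch of (NC-QP-LPCC-Primal-Bounding): tighten the box 0 <= x <= U
# coordinate-wise over the feasible polyhedron {Ax <= b, 0 <= x <= U}.
A = np.array([[1.0, 1.0]])
b = np.array([1.5])
U = np.array([1.0, 1.0])

x = cp.Variable(2)
feas = [A @ x <= b, x >= 0, x <= U]
for j in range(2):
    lo = cp.Problem(cp.Minimize(x[j]), feas)
    hi = cp.Problem(cp.Maximize(x[j]), feas)
    lo.solve(); hi.solve()
    print(j, lo.value, hi.value)   # valid individual bounds for x_j
\end{verbatim}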
\subsection{Standard Nonconvex QPs} The approach presented here is devised in \cite{App-A-Standard-QP}. A standard nonconvex QP entails minimizing a nonconvex quadratic function over the unit probability simplex \begin{equation} \label{eq:App-01-Stand-QP} \begin{aligned} v(Q) = \min~ & x^T Q x \\ \mbox{s.t.} ~ & x \in {\rm \Delta}_n \end{aligned} \end{equation} where $Q$ is a symmetric matrix, and the unit simplex is \begin{equation*} {\rm \Delta}_n = \{x \in \mathbb R^n_+~|~e^T x = 1 \} \end{equation*} where $e$ is the all-one vector. A nonhomogeneous objective can always be transformed into a quadratic form given the simplex constraint ${\rm \Delta}_n$: \begin{equation*} x^T Q x + 2c^T x = x^T (Q + e c^T + c e^T ) x,~ \forall x \in {\rm \Delta}_n \end{equation*} Standard nonconvex QPs have wide applications in portfolio optimization, quadratic resource allocation, graph theory, and so on. In addition, for a given symmetric matrix $Q$, a necessary and sufficient condition for $Q$ to be copositive is $v(Q) \ge 0$. Copositive programming is a young and active research field that supports research on convex relaxation; a fundamental problem is the copositivity test, which entails solving (\ref{eq:App-01-Stand-QP}) globally. Problem (\ref{eq:App-01-Stand-QP}) is a special case of the nonconvex QP (\ref{eq:App-01-NC-QP}), so the methods in the previous subsection also work for (\ref{eq:App-01-Stand-QP}). The core trick is to select a big-M parameter for linearizing the complementarity and slackness conditions. Due to its specific structure, a valid big-M parameter for problem (\ref{eq:App-01-Stand-QP}) can be chosen in a much more convenient way. To see this, the KKT condition of (\ref{eq:App-01-Stand-QP}) reads \begin{subequations} \label{eq:App-01-Stand-QP-KKT} \begin{align} Qx - \lambda e - \mu & = 0 \label{eq:App-01-Stand-QP-KKT-1} \\ e^T x & = 1 \label{eq:App-01-Stand-QP-KKT-2} \\ x & \ge 0 \label{eq:App-01-Stand-QP-KKT-3} \\ \mu & \ge 0 \label{eq:App-01-Stand-QP-KKT-4} \\ x_j \mu_j & = 0,~ j=1,\cdots,n \label{eq:App-01-Stand-QP-KKT-5} \end{align} \end{subequations} where $\lambda$ and $\mu$ are the dual variables associated with the equality constraint $e^T x = 1$ and the inequality constraint $x \ge 0$, respectively. Because the feasible region is polyhedral, constraint qualification always holds, and any optimal solution of (\ref{eq:App-01-Stand-QP}) must solve the KKT system (\ref{eq:App-01-Stand-QP-KKT}). Multiplying both sides of (\ref{eq:App-01-Stand-QP-KKT-1}) by $x^T$ results in $x^T Q x = \lambda x^T e + x^T \mu$; substituting (\ref{eq:App-01-Stand-QP-KKT-2}) and (\ref{eq:App-01-Stand-QP-KKT-5}) into the right-hand side concludes $x^T Q x = \lambda$. Provided with an eligible big-M parameter, problem (\ref{eq:App-01-Stand-QP}) is (exactly) equivalent to the following MILP \begin{equation} \label{eq:App-01-Stand-QP-MILP-1} \begin{aligned} \min~~ & \lambda \\ \mbox{s.t.} ~~ & Qx - \lambda e - \mu = 0 \\ & e^T x = 1,~ 0 \le x \le y \\ &0 \le \mu_j \le M_j(1-y_j),~ j = 1, \cdots, n \end{aligned} \end{equation} where $y \in \{0,1\}^n$, and $M_j$ is the big-M parameter, an upper bound of the dual variable $\mu_j$. To estimate such a bound, according to (\ref{eq:App-01-Stand-QP-KKT-1}), \begin{equation*} \mu_j = e^{T}_j Q x - \lambda,~ j=1,\cdots,n \end{equation*} where $e_j$ is the $j$-th column of the $n \times n$ identity matrix.
For the first term, since $x \in {\rm \Delta}_n$, \begin{equation*} x^T Q e_j \le \max_{i \in \{1,\cdots,n\}} Q_{ij},~ j=1,\cdots,n \end{equation*} As for the second term, we know $\lambda \ge v(Q)$, so any known lower bound of $v(Q)$ can be used to construct a valid $M_j$. One possible lower bound of $v(Q)$ is suggested in \cite{App-A-Standard-QP} as \begin{equation*} l(Q) = \min_{1 \le i,j \le n} Q_{ij} + \dfrac{1}{\sum_{k=1}^n \left(Q_{kk}- \min\limits_{1 \le i,j \le n} Q_{ij} \right)^{-1}} \end{equation*} If the minimal element of $Q$ lies on the main diagonal, the second term vanishes and $l(Q) = \min_{1 \le i,j \le n} Q_{ij}$. In summary, a valid choice of $M_j$ is \begin{equation} \label{eq:App-01-Stand-QP-Big-M} M_j = \max_{i \in \{1,\cdots,n\}} Q_{ij} - l(Q),~ j = 1,\cdots,n \end{equation} It is found in \cite{App-A-Standard-QP} that if we relax (\ref{eq:App-01-Stand-QP-KKT-1}) to an inequality and solve the following MILP \begin{equation} \label{eq:App-01-Stand-QP-MILP-2} \begin{aligned} \min~~ & \lambda \\ \mbox{s.t.} ~~ & Qx - \lambda e - \mu \le 0 \\ & e^T x = 1,~ 0 \le x \le y \\ &0 \le \mu_j \le M_j(1-y_j),~ j = 1, \cdots, n \end{aligned} \end{equation} which is a relaxed version of (\ref{eq:App-01-Stand-QP-MILP-1}), the optimal solution does not change. However, in some instances, solving (\ref{eq:App-01-Stand-QP-MILP-2}) is significantly faster than solving (\ref{eq:App-01-Stand-QP-MILP-1}). More thorough theoretical analysis can be found in \cite{App-A-Standard-QP}.
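The bound (\ref{eq:App-01-Stand-QP-Big-M}) is cheap to evaluate; the sketch below computes $l(Q)$ and the resulting $M_j$ with numpy (the test matrix is an arbitrary example).
\begin{verbatim}
import numpy as np

# Sketch of l(Q) and the big-M choice (Stand-QP-Big-M).
def big_m(Q):
    q_min = Q.min()
    diag = np.diag(Q)
    if np.isclose(diag.min(), q_min):
        l = q_min                        # minimum lies on the diagonal
    else:
        l = q_min + 1.0 / np.sum(1.0 / (diag - q_min))
    return Q.max(axis=0) - l             # M_j = max_i Q_ij - l(Q)

Q = np.array([[1.0, -2.0],
              [-2.0, 3.0]])
print(big_m(Q))                          # [1.125 3.125] for this Q
\end{verbatim}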
\section{Further Reading} \label{App-A-Sect05} Decades of wonderful research have resulted in elegant theoretical developments and sophisticated computational software, which have brought convex optimization to an unprecedented, dominant position, where it serves as the baseline and reference model for optimization problems in almost every discipline; only problems which can be formulated as convex programs are regarded as theoretically solvable. We suggest the following materials for readers who want to build a solid mathematical background or learn more about applications of convex optimization.

1. Convex analysis and convex optimization. Convex analysis is a classic topic in mathematics, focusing on basic concepts and topological properties of convex sets and convex functions. We recommend the monographs \cite{App-A-Convex-Analysis-1,App-A-Convex-Analysis-2,App-A-Convex-Analysis-3}. The last one sheds more light on optimization related topics, including DC programming, polynomial programming, and equilibrium constrained programming, which are originally non-convex. The most popular textbooks on convex optimization include \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}; they contain important material that everyone who wants to apply this technique should know.

2. Special convex optimization problems. The most mature convex optimization problems are LPs, SOCPs, and SDPs. We recommend \cite{App-A-LP-Book-Dantzig,App-A-LP-Book-Bertsimas,App-A-LP-Book-Vanderbei} for basic knowledge of duality theory, the simplex algorithm, the interior-point algorithm, and applications of LPs. The modeling abilities of SOCPs and SDPs have been well discussed in \cite{App-A-CVX-Book-Ben,App-A-CVX-Book-Boyd}. A geometric program is a type of optimization problem whose objective and constraints are characterized by special monomials and posynomial functions; through a logarithmic variable transformation, a geometric program can be mechanically converted to a convex optimization problem. Geometric programming is relatively restrictive in structure, and it may not be apparent whether a given problem can be expressed as a geometric program. We recommend the tutorial paper \cite{App-A-GOP-Boyd} and references therein on this topic. Copositive programming is a relatively young field in operations research. It is a special class of conic programming which is more general than SDP. Basic information on copositive/completely positive programs is introduced in \cite{App-A-Copositive-1,App-A-Copositive-2,App-A-Copositive-3}; they are particularly useful in combinatorial and quadratic optimization. Though very similar to SDPs in appearance, copositive programs are NP-hard. Algorithms and applications of copositive and completely positive programs continue to be highly active research fields \cite{App-A-COP-New-1,App-A-COP-New-2,App-A-COP-New-3}.

3. General convex optimization problems. Besides the above mature convex optimization models, which can be specified without a high level of expertise, recognizing the convexity of a general mathematical programming problem may be rather tricky, and a deep understanding of convex analysis is unavoidable. Furthermore, to solve the problem using off-the-shelf solvers, a user must find a way to transform it into one of the standard forms (if a general purpose NLP solver fails to solve it). The so-called disciplined convex programming method is proposed in \cite{App-A-Disp-CVX} to lower this expertise barrier. The method consists of a set of rules and conventions that one must follow when setting up the problem, such that convexity is naturally sustained. This methodology has been implemented in the cvx toolbox in the Matlab environment.

4. Convex relaxation methods. One major application of convex optimization is to derive tractable approximations of non-convex programs, so as to facilitate problem resolution in terms of computational efficiency and robustness. A general QCQP is a quintessential non-convex optimization problem. Among various convex relaxation approaches, SDP relaxation has been shown to offer high quality solutions for many QCQPs arising in signal processing \cite{App-A-SDPr-Signal-1,App-A-SDPr-Signal-2} and power system energy management \cite{App-A-SDPr-Power-1,App-A-SDPr-Power-2}. Decades of excellent studies on SDP relaxation methods for QCQPs are comprehensively reviewed in \cite{App-A-SDP-Relaxation-Tutor,App-A-SDPr-QCQP-Rev-1,App-A-SDPr-QCQP-Rev-2}. Some recent advances are reported in \cite{App-A-SDPr-QCQP-1,App-A-SDPr-QCQP-2,App-A-SDPr-QCQP-3,App-A-SDPr-QCQP-4,App-A-SDPr-QCQP-5,App-A-SDPr-QCQP-6,App-A-SDPr-QCQP-7}. The rank of the matrix variable has a decisive impact on the exactness (or tightness) of the SDP relaxation. Low-rank SDP methods are attracting increasing attention from researchers, and many approaches have been proposed to recover a low-rank solution; more information can be found in \cite{App-A-SDP-Rank-1,App-A-SDP-Rank-2,App-A-SDP-Rank-3,App-A-SDP-Rank-4,App-A-SDP-Rank-5} and references therein.

5. Sum-of-squares (SOS) programming was originally devised in \cite{App-A-SOS-1} to decompose a polynomial $f(x)$ as the square of another polynomial $g(x)$ (if one exists), such that $f(x)=[g(x)]^2$ must be non-negative. Non-negativity of a polynomial over a semi-algebraic set can be certified in a similar way via Positivstellensatz refutations. This can be done by solving a structured SDP \cite{App-A-SOS-1}, and has been implemented in a Matlab based toolbox \cite{App-A-SOS-2}.
Based on these outcomes, a promising methodology has quickly been developed for polynomial programs, which cover a broader class of optimization problems than QCQPs. It has been proved that the global solution of a polynomial program can be found by solving a hierarchy of SDPs under mild conditions. This is very inspiring, since polynomial programs are generally non-convex while SDPs are convex. We recommend \cite{App-A-Poly-SDP-1} for a very detailed discussion of this approach, and \cite{App-A-Poly-SDP-2,App-A-Poly-SDP-3,App-A-Poly-SDP-4,App-A-Poly-SDP-5} for some recent advances. However, users should be aware that this approach may be impractical, because the size of the relaxed SDP quickly becomes unacceptable after a few steps of the hierarchy. Nonetheless, the elegant theory still marks a milestone in this research field. \input{ap01ref} \chapter{Formulation Recipes in Integer Programming} \label{App-B} As stated in Appendix \ref{App-A}, convex optimization problems can, generally speaking, be solved efficiently. However, the majority of optimization problems encountered in practical engineering are non-convex, and gradient based NLP solvers terminate at a local optimum, which may be far away from the global one. In fact, any nonlinear function can be approximated by a PWL function with adjustable error by controlling the granularity of the partition. A PWL function can be expressed in a logic form or by incorporating integer variables. Thanks to the latest progress in branch-and-cut algorithms and the development of state-of-the-art MILP solvers, a large-scale MILP can often be solved globally within reasonable computational effort \cite{App-MILP-Solver-Perform}, although MILP itself is provably NP-hard. In view of this fact, PWL/MILP approximation serves as a viable option for tackling real-world non-convex optimization problems, especially those with special structures. This chapter introduces PWL approximation methods for nonlinear functions, as well as linear representations of special non-convex constraints via integer programming techniques. When the majority of a problem at hand is linear or convex, while the non-convexity arises from nonlinear functions of only one or two variables, linear complementarity constraints, logical inferences, and so on, it is worth trying the methods in this chapter, in view of the fact that MILP solvers are becoming increasingly efficient at retrieving a solution with a pre-defined optimality gap. \section{Piecewise Linear Approximation of Nonlinear Functions} \label{App-B-Sect01} \subsection{Univariate Continuous Function} \label{App-B-Sect01-01} Considering a nonlinear continuous function $f(x)$ of a single variable $x$, we can evaluate the function values $f(x_0)$, $f(x_1)$, $\cdots$, $f(x_k)$ at given breakpoints $x_0$, $x_1$, $\cdots$, $x_k$, and replace $f(x)$ with the following PWL function \begin{equation} \label{eq:App-02-PWL-Logic} f(x) = \begin{cases} m_1 x + c_1, & x \in [x_0,x_1] \\ m_2 x + c_2, & x \in [x_1,x_2] \\ \qquad \vdots & \qquad \vdots \\ m_k x + c_k, & x \in [x_{k-1},x_k] \end{cases} \end{equation} \begin{figure}[!t] \centering \includegraphics[scale=0.45]{Fig-App-02-01} \caption{Piecewise linear and piecewise constant approximations.} \label{fig:App-02-01} \end{figure} As an illustrative example, the curves of an original nonlinear function and its PWL approximation are portrayed in part (a) of Fig. \ref{fig:App-02-01}. The PWL function in (\ref{eq:App-02-PWL-Logic}) is a finite union of line segments, but it is still non-convex.
Moreover, the logic representation in (\ref{eq:App-02-PWL-Logic}) is not compatible with commercial solvers. Given the fact that any point on a line segment can be expressed as a convex combination of its two endpoints, (\ref{eq:App-02-PWL-Logic}) can be written as \begin{equation} \label{eq:App-02-PWL-CC} \begin{aligned} x & = \sum_i \lambda_i x_i \\ y & = \sum_i \lambda_i f(x_i) \\ \lambda & \ge 0,~ \sum_i \lambda_i =1 \\ \lambda & \in \mathbb{SOS}_2 \end{aligned} \end{equation} where $\mathbb{SOS}_2$ stands for a special ordered set of type 2, i.e., a vector of variables in which at most two adjacent entries can take nonzero values. The $\mathbb{SOS}_2$ constraint on $\lambda$ can be declared via the built-in modules of commercial solvers such as CPLEX or GUROBI. Please note that if $f(x)$ is convex and is to be minimized, then the last $\mathbb{SOS}_2$ requirement is naturally met (and thus can be relaxed), because the epigraph of $f(x)$ is a convex region. Otherwise, relaxing the last $\mathbb{SOS}_2$ constraint in (\ref{eq:App-02-PWL-CC}) gives rise to the convex hull of the sampled points $(x_0,f(x_0))$, $\cdots$, $(x_k,f(x_k))$, and in general the relaxation is inexact. Branch-and-bound algorithms that work directly on SOS variables exhibit good performance \cite{App-MILP-SOS}, but it is desirable to explore equivalent MILP formulations so as to leverage the superiority of state-of-the-art solvers. To this end, we first provide an explicit form using additional integer variables. \begin{equation} \label{eq:App-02-SOS2-MILP} \begin{aligned} \lambda_0 & \le z_1 \\ \lambda_1 & \le z_1 + z_2 \\ \lambda_2 & \le z_2 + z_3 \\ \vdots & \\ \lambda_{k-1} & \le z_{k-1} + z_k \\ \lambda_k & \le z_k \\ z_i & \in \{ 0,1 \},~ \forall i,~ \sum\nolimits_{i=1}^k z_i = 1 \\ \lambda_i & \ge 0,~ \forall i,~ \sum\nolimits_{i=0}^k \lambda_i = 1 \end{aligned} \end{equation} Formulation (\ref{eq:App-02-SOS2-MILP}) illustrates how integer variables can be used to enforce the $\mathbb{SOS}_2$ requirement on the weighting coefficients. This formulation does not involve any manually supplied parameter, and often gives stronger bounds when the integrality of the binary variables is relaxed.
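The following sketch assembles formulation (\ref{eq:App-02-SOS2-MILP}) for a small nonconvex example; cvxpy, the GLPK\_MI solver, and the test function $f(t)=t\sin t$ are assumed purely for illustration.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Sketch of (App-02-SOS2-MILP): minimize the PWL surrogate of the
# nonconvex f(t) = t*sin(t) on [0, 4] with k = 8 intervals.
f = lambda t: t * np.sin(t)
xs = np.linspace(0.0, 4.0, 9)                  # breakpoints x_0..x_8
k = xs.size - 1
lam = cp.Variable(k + 1, nonneg=True)
z = cp.Variable(k, boolean=True)
cons = [cp.sum(lam) == 1, cp.sum(z) == 1,
        lam[0] <= z[0], lam[k] <= z[k - 1]]
cons += [lam[i] <= z[i - 1] + z[i] for i in range(1, k)]
x = xs @ lam                                   # x = sum_i lambda_i x_i
y = f(xs) @ lam                                # y = sum_i lambda_i f(x_i)
prob = cp.Problem(cp.Minimize(y), cons)
prob.solve(solver=cp.GLPK_MI)
print(x.value, prob.value)                     # PWL minimizer and value
\end{verbatim}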
Sometimes it is more convenient to use a piecewise constant approximation, especially when the original function $f(x)$ is not continuous. An example is exhibited in part (b) of Fig. \ref{fig:App-02-01}. In this approach, the feasible interval of $x$ is partitioned into $S-1$ segments (associated with binary variables $\theta_s$, $s=1$, $\cdots$, $S-1$) by $S$ breakpoints $x_1$, $\cdots$, $x_S$ (associated with $S$ continuous weight variables $\lambda_s$, $s=1$, $\cdots$, $S$). In the $s$-th interval, between $x_s$ and $x_{s+1}$, the function value $f(x)$ is approximated by the arithmetic mean $f_s = 0.5[f(x_s)+f(x_{s+1})]$, $s=1,\cdots,S-1$, which is a constant, as illustrated in Fig. \ref{fig:App-02-01}. With an appropriate number of partitions, an arbitrary function $f(x)$ can be approximated by a piecewise constant function as follows \begin{subequations} \label{eq:App-02-PWC-CC} \begin{equation} x = \sum_{s=1}^S \lambda_s x_s,~ y = \sum_{s=1}^{S-1} \theta_s f_s \label{eq:App-02-PWC-CC-1} \end{equation} \begin{equation} \lambda_1 \le \theta_1,~ \lambda_S \le \theta_{S-1} \label{eq:App-02-PWC-CC-2} \end{equation} \begin{equation} \lambda_s \le \theta_{s-1} + \theta_s,~ s = 2,\cdots, S-1 \label{eq:App-02-PWC-CC-3} \end{equation} \begin{equation} \lambda_s \ge 0,~ s = 1, \cdots, S,~ \sum\nolimits_{s=1}^S \lambda_s = 1 \label{eq:App-02-PWC-CC-4} \end{equation} \begin{equation} \theta_s \in \{0,1\},~ s=1,\cdots,S-1,~ \sum\nolimits_{s=1}^{S-1} \theta_s = 1 \label{eq:App-02-PWC-CC-5} \end{equation} \end{subequations} In (\ref{eq:App-02-PWC-CC}), binary variable $\theta_s=1$ indicates that interval $s$ is activated, and constraint (\ref{eq:App-02-PWC-CC-5}) ensures that exactly one interval is activated; furthermore, constraints (\ref{eq:App-02-PWC-CC-2})-(\ref{eq:App-02-PWC-CC-4}) enforce the weight coefficients $\lambda_s$, $s=1$, $\cdots$, $S$ to be $\mathbb{SOS}_2$; finally, constraint (\ref{eq:App-02-PWC-CC-1}) expresses $y$ and $x$ via linear combinations of the sampled values. The advantage of the piecewise constant formulation (\ref{eq:App-02-PWC-CC}) lies in the binary expression of the function value $y$, such that the product of $y$ and another continuous variable can easily be linearized via integer programming techniques, as will be seen in Sect. \ref{App-B-Sect02-03}. Clearly, the number of binary variables introduced in formulation (\ref{eq:App-02-SOS2-MILP}) is $k$, which grows linearly with the number of breakpoints, so the final MILP model may suffer from computational overhead due to the presence of a large number of binary variables when more breakpoints are involved for improving accuracy. In what follows, we present a useful formulation that engages only a logarithmic number of binary variables and constraints. This technique is proposed in \cite{App-MILP-SOS2-LogCC-1,App-MILP-SOS2-LogCC-2,App-MILP-SOS2-LogCC-3}. Consider the following constraints: \begin{equation} \label{eq:App-02-SOS2-Log} \begin{aligned} \sum_{i \in L_n} \lambda_i & \le z_n,~ \forall n \in N \\ \sum_{i \in R_n} \lambda_i & \le 1 - z_n,~ \forall n \in N \\ z_n & \in \{0, 1\},~ \forall n \in N \\ \lambda & \ge 0,~ \sum\nolimits^k_{i=0} \lambda_i = 1 \end{aligned} \end{equation} where $L_n$ and $R_n$ are index sets of the weights $\lambda_i$, and $N$ is an index set corresponding to the number of binary variables. The dichotomy sequences $\{L_n, R_n \}_{n \in N}$ constitute a branching scheme on the indices of the weights, such that constraint (\ref{eq:App-02-SOS2-Log}) guarantees that at most two adjacent elements of $\lambda$ can take strictly positive values, so as to meet the $\mathbb{SOS}_2$ requirement. The required number of binary variables $z_n$ is $\lceil \log_2 k \rceil$, which is significantly smaller than the number involved in formulation (\ref{eq:App-02-SOS2-MILP}). \begin{figure}[!t] \centering \includegraphics[scale=0.50]{Fig-App-02-02} \caption{Gray codes and sets $L_n$, $R_n$ for two and three binary variables.} \label{fig:App-02-02} \end{figure} Next, we demonstrate how to design the sets $L_n$ and $R_n$ based on the concept of Gray codes.
For brevity, we restrict the discussion to instances with 2 and 3 binary variables (shown in Fig. \ref{fig:App-02-02}), corresponding to 5 and 9 breakpoints (4 and 8 intervals), respectively. As shown in Fig. \ref{fig:App-02-02}, the Gray codes G$_1$--G$_8$ form a binary system in which any two adjacent codes differ in only one bit. For example, G$_4$ and G$_5$ differ in the first bit, and G$_5$ and G$_6$ differ in the second bit. Such Gray codes are used to describe which two adjacent weights are activated. In general, the sets $L_n$ and $R_n$ are constructed as follows: the index $v \in L_n$ if the $n$-th bits of two successive codes $G_v$ and $G_{v+1}$ are both equal to 1, and $v \in R_n$ if they are both equal to 0. This principle can be formally defined as \begin{equation} \label{eq:App-02-SOS2-Log-L} L_n = \left\{v~\middle|~ \begin{aligned} (G^n_v &= 1 \mbox{ and } G^n_{v+1} = 1) \\ \cup~ (v & = 0 \mbox{ and } G^n_{1} =1) \\ \cup~ (v & = k \mbox{ and } G^n_{k} =1) \end{aligned} \right\} \end{equation} \begin{equation} \label{eq:App-02-SOS2-Log-R} R_n = \left\{v~\middle|~ \begin{aligned} (G^n_v &= 0 \mbox{ and } G^n_{v+1} = 0) \\ \cup~ (v & = 0 \mbox{ and } G^n_{1} = 0) \\ \cup~ (v & = k \mbox{ and } G^n_{k} = 0) \end{aligned} \right\} \end{equation} where $G^n_v$ stands for the $n$-th bit of code $G_v$. For example, the sets R$_1$, R$_2$, R$_3$ and L$_1$, L$_2$, L$_3$ for Gray codes G$_1$--G$_8$ are shown in Fig. \ref{fig:App-02-02}. In this way, the rule that only two adjacent weights can be activated is established via (\ref{eq:App-02-SOS2-Log}). To see this, suppose that $\lambda_i > 0$ for $i=4,5$ and $\lambda_i = 0$ for the other indices; we let $z_1=1$, $z_2=1$, $z_3 = 0$, which leads to the following constraint set: \begin{equation*} \left\{ \begin{lgathered} \lambda_0 + \lambda_1 + \lambda_2 + \lambda_3 \le 1 - z_1 = 0 \\ \lambda_5 + \lambda_6 + \lambda_7 + \lambda_8 \le z_1 = 1 \\ \lambda_0 + \lambda_1 + \lambda_6 \le 1 - z_2 = 0 \\ \lambda_3 + \lambda_4 + \lambda_8 \le z_2 = 1 \\ \lambda_0 + \lambda_4 + \lambda_5 \le 1 - z_3 = 1 \\ \lambda_2 + \lambda_7 + \lambda_8 \le z_3 = 0 \\ \lambda_i \ge 0,~ \forall i,~ \sum\nolimits^8_{i=0} \lambda_i = 1 \end{lgathered} \right. \end{equation*} Thus we can conclude that \begin{gather*} \lambda_4 + \lambda_5 = 1,~ \lambda_4 \ge 0,~ \lambda_5 \ge 0, \\ \lambda_0=\lambda_1=\lambda_2=\lambda_3=\lambda_6=\lambda_7=\lambda_8=0 \end{gather*} This mechanism can be interpreted as follows: $z_1=1$ enforces $\lambda_i = 0$, $i=0,1,2,3$ through set $R_1$; $z_2=1$ further enforces $\lambda_6 = 0$ through set $R_2$; finally, $z_3=0$ enforces $\lambda_7 = \lambda_8 = 0$ through set $L_3$. The remaining weights $\lambda_4$ and $\lambda_5$ then constitute the positive coefficients. In this case, only $\log_2 8 =3$ binary variables and $2 \log_2 8 = 6$ additional constraints are involved. Compared with formulation (\ref{eq:App-02-SOS2-MILP}), the Gray code can be regarded as an extra branching operation enabled by the problem structure, so the number of binary variables in expression (\ref{eq:App-02-SOS2-Log}) is greatly reduced for large values of $k$.
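The dichotomy sequences can also be generated programmatically. The sketch below builds $L_n$ and $R_n$ per (\ref{eq:App-02-SOS2-Log-L})-(\ref{eq:App-02-SOS2-Log-R}) from the standard reflected Gray code; this particular code (and its 0-indexed bit ordering) is an assumption, and the code tabulated in Fig. \ref{fig:App-02-02} may order bits differently.
\begin{verbatim}
# Sketch: index sets L_n, R_n of (App-02-SOS2-Log-L)/(SOS2-Log-R)
# from the reflected Gray code, for k = 2**n_bits intervals.
def index_sets(n_bits):
    k = 1 << n_bits
    G = [i ^ (i >> 1) for i in range(k)]       # G[v-1] = code G_v
    bit = lambda v, n: (G[v - 1] >> n) & 1     # n-th bit of G_v
    L, R = [], []
    for n in range(n_bits):
        Ln = {v for v in range(1, k) if bit(v, n) and bit(v + 1, n)}
        Rn = {v for v in range(1, k)
              if not bit(v, n) and not bit(v + 1, n)}
        (Ln if bit(1, n) else Rn).add(0)       # boundary v = 0
        (Ln if bit(k, n) else Rn).add(k)       # boundary v = k
        L.append(sorted(Ln)); R.append(sorted(Rn))
    return L, R

L, R = index_sets(3)                           # 8 intervals, 9 weights
print(L); print(R)
\end{verbatim}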
As a special case, consider the following problem \begin{equation} \label{eq:App-02-Example-1-NLP} \min \left\{ \sum_i f_i(x_i) ~\middle|~ x \in X \right\} \end{equation} where $f_i(x_i)$, $i=1,2,\cdots$ are convex univariate functions and $X$ is a polytope. This problem is convex but nonlinear. The DCOPF problem, a fundamental issue in power market clearing, takes this form, with $f_i(x_i)$ a convex quadratic function. Although any local NLP algorithm can find the global optimal solution of (\ref{eq:App-02-Example-1-NLP}), there are still reasons to seek approximate LP formulations. One is that problem (\ref{eq:App-02-Example-1-NLP}) may be embedded in another optimization problem and serve as its constraint. This is a pervasive modeling paradigm for studying the strategic behaviors and market power of energy providers, where the electricity market is cleared according to a DCOPF, and the delivered energy of the generation companies and the nodal electricity prices are extracted from the optimal primal variables and the dual variables associated with the power balancing constraints, respectively. An LP representation allows one to exploit the elegant LP duality theory for further analysis, and helps characterize the optimal solution through primal-dual or KKT optimality conditions. To this end, we can opt to solve the following LP \begin{equation} \label{eq:App-02-Example-1-LP-Approx} \begin{aligned} \min_{x,y,\lambda} ~~ & \sum_i y_i \\ \mbox{s.t.}~~ & y_i = \sum_{k} \lambda_{ik} f_i(x_{ik}),~ \forall i \\ & x_i = \sum_k \lambda_{ik} x_{ik},~\forall i,~ x \in X \\ & \lambda \ge 0,~ \sum_k \lambda_{ik} =1,~ \forall i \end{aligned} \end{equation} where $x_{ik}$, $k=1,2,\cdots$ are breakpoints (constants) for variable $x_i$, and the associated weights are $\lambda_{ik}$. Because the $f_i(x_i)$ are convex functions, the $\mathbb{SOS}_2$ requirement on the weight variable $\lambda$ is naturally met, so it is relaxed from the constraints.
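For a quick check of (\ref{eq:App-02-Example-1-LP-Approx}), the sketch below approximates $\min x_1^2+x_2^2$ subject to $x_1+x_2=2$; since the $f_i$ are convex, no binary variables are needed. The use of cvxpy and the toy instance are assumptions for illustration.
\begin{verbatim}
import cvxpy as cp
import numpy as np

# Sketch of LP (App-02-Example-1-LP-Approx) with f_i(x_i) = x_i^2
# on [0, 2] and X = {x : x_1 + x_2 = 2}; the exact optimum is (1,1).
xs = np.linspace(0.0, 2.0, 5)              # breakpoints x_{ik}
F = xs ** 2                                # f_i(x_{ik})
lam = cp.Variable((2, xs.size), nonneg=True)
x = lam @ xs                               # x_i = sum_k lam_ik x_ik
y = lam @ F                                # y_i = sum_k lam_ik f(x_ik)
cons = [cp.sum(lam, axis=1) == 1, cp.sum(x) == 2.0]
prob = cp.Problem(cp.Minimize(cp.sum(y)), cons)
prob.solve()
print(x.value, prob.value)                 # ~(1, 1) with value ~2
\end{verbatim}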
\subsection{Bivariate Continuous Nonlinear Function} \label{App-B-Sect01-02} Consider a continuous nonlinear function $f(x,y)$ of two variables $x$ and $y$. The entire feasible region is partitioned into $M \times N$ disjoint sub-rectangles by the $M+N+2$ breakpoints $x_n$, $n=0,1,\cdots,N$ and $y_m$, $m=0,1,\cdots,M$, as illustrated in Fig. \ref{fig:App-02-03}, and the corresponding function values are $f_{mn} = f(x_n,y_m)$. By introducing a planar weighting coefficient matrix $\{ \lambda_{mn}\}$, with one weight per grid point, that satisfies \begin{subequations} \begin{equation} \label{eq:App-02-f(x,y)-Weight} \begin{gathered} \lambda_{mn} \ge 0,~ \forall m, \forall n \\ \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} =1 \end{gathered} \end{equation} we can represent any point $(x,y)$ in the feasible region as a convex combination of the extreme points of the sub-rectangle in which it resides: \begin{equation} \label{eq:App-02-f(x,y)-Variable} \begin{gathered} x = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} x_n = \sum^N_{n=0} \left( \sum^M_{m=0} \lambda_{mn} \right) x_n \\ y = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} y_m = \sum^M_{m=0} \left( \sum^N_{n=0} \lambda_{mn} \right) y_m \end{gathered} \end{equation} and its function value \begin{equation} \label{eq:App-02-f(x,y)-Value} f(x,y) = \sum^M_{m=0} \sum^N_{n=0} \lambda_{mn} f_{mn} \end{equation} \end{subequations} is also a convex combination of the function values at the corner points. \begin{figure}[!t] \centering \includegraphics[scale=0.60]{Fig-App-02-03} \caption{Breakpoints and active rectangle for PWL approximation.} \label{fig:App-02-03} \end{figure} As we can see from Fig. \ref{fig:App-02-03}, in a valid representation, if $(x^*, y^*)$ belongs to a sub-rectangle, only the weights associated with its four corner points can be positive, while the others must be forced to 0. In such a pattern, the column sums and the row sums of the matrix ${\rm \Lambda} = [\lambda_{mn}]$, each of which forms a vector, should each constitute an $\mathbb{SOS}_2$; $\rm \Lambda$ is then called a planar $\mathbb{SOS}_2$, which can be implemented via two $\mathbb{SOS}_2$ constraints on the marginal weight vectors. In fact, at most three of the four corner points can be associated with uniquely determined non-negative weights. Consider point O and the active rectangle ABCD shown in Fig. \ref{fig:App-02-03}. The location of O can be expressed as a linear combination of the coordinates of the corner points $x_A,x_B,x_C,x_D$ with non-negative weights $\lambda_A,\lambda_B,\lambda_C,\lambda_D$: \begin{subequations} \begin{equation} \label{eq:App-02-Rectangle} x_O = \lambda_A x_A + \lambda_B x_B + \lambda_C x_C + \lambda_D x_D \end{equation} In the first case \begin{equation} \label{eq:App-02-Rectangle-Weight-Case1} \lambda^1_A,~ \lambda^1_B,~ \lambda^1_C,~ \lambda^1_D \ge 0,~ \lambda^1_A + \lambda^1_B + \lambda^1_C + \lambda^1_D = 1 \end{equation} In the second case \begin{equation} \label{eq:App-02-Rectangle-Weight-Case2} \lambda^2_A, \lambda^2_C, \lambda^2_D \ge 0,~ \lambda^2_B = 0,~ \lambda^2_A + \lambda^2_C + \lambda^2_D = 1 \end{equation} In the third case \begin{equation} \label{eq:App-02-Rectangle-Weight-Case3} \lambda^3_B, \lambda^3_C, \lambda^3_D \ge 0,~ \lambda^3_A = 0,~ \lambda^3_B + \lambda^3_C + \lambda^3_D = 1 \end{equation} We use superscripts 1, 2, 3 to distinguish the values of the weights in the different representations. According to the Caratheodory theorem, the non-negative weights are uniquely determined in (\ref{eq:App-02-Rectangle-Weight-Case2}) and (\ref{eq:App-02-Rectangle-Weight-Case3}), and in the former (latter) case we say that $\rm \Delta$ACD ($\rm \Delta$BCD) is activated or selected. Denote the function values in these three cases by \begin{gather} f_1(x_O) = \lambda^1_A f(x_A) + \lambda^1_B f(x_B) + \lambda^1_C f(x_C) + \lambda^1_D f(x_D) \label{eq:App-02-Rectangle-Value-Case1} \\ f_2(x_O) = \lambda^2_A f(x_A) + \lambda^2_C f(x_C) + \lambda^2_D f(x_D) \label{eq:App-02-Rectangle-Value-Case2} \\ f_3(x_O) = \lambda^3_B f(x_B) + \lambda^3_C f(x_C) + \lambda^3_D f(x_D) \label{eq:App-02-Rectangle-Value-Case3} \end{gather} \end{subequations} Suppose $f(x_A)< f(x_B)$; then the plane defined by points B, C, D lies above that defined by points A, C, D, hence $f_2(x_O) < f_1(x_O) < f_3(x_O)$. If a smaller (larger) function value is in favor, then $\rm \Delta$ACD ($\rm \Delta$BCD) will be activated at the optimal solution. Please bear in mind that as long as A, B, C, D are not in the same plane, $f_1(x_O)$ will be strictly less (greater) than $f_3(x_O)$ ($f_2(x_O)$); therefore, the representation (\ref{eq:App-02-Rectangle-Value-Case1}) will not be active at the optimal solution, and the weights of the active corners are uniquely determined. If rectangle ABCD is small enough, this discrepancy can be neglected. Moreover, the non-uniqueness of the corner weights does little harm in applications, because the optimal solution $x_O$ and the optimal value are consistent with those of the original problem. The weights do not correspond to physical strategies that need to be deployed, and the linearization method can be considered a black box by the decision maker, who provides the function values at $x_A,x_B,x_C,x_D$ and receives a unique solution $x_O$. Detecting the active sub-rectangle in which $(x^*, y^*)$ resides requires additional constraints on the weights $\lambda_{mn}$.
The integer formulations introduced above can be used to impose the planar $\mathbb{SOS}_2$ constraints. Let $\lambda^n$ and $\lambda^m$ be the aggregated weights for $x$ and $y$, respectively, i.e., \begin{equation} \label{eq:App-02-f(x,y)-fmn} \begin{gathered} \lambda^n = \sum^M_{m=0} \lambda_{mn},~ \forall n \\ \lambda^m = \sum^N_{n=0} \lambda_{mn},~ \forall m \end{gathered} \end{equation} which are also called the marginal weight vectors, and introduce the following constraints: \begin{equation} \label{eq:App-02-f(x,y)-MILP-x} \mbox{For $x$: } \left\{ \begin{aligned} \sum_{n \in L^1_k} \lambda^n & \le z^1_k \\ \sum_{n \in R^1_k} \lambda^n & \le 1 - z^1_k \\ z^1_k & \in \{0, 1\} \end{aligned} \right\},~ \forall k \in K_1 \end{equation} \begin{equation} \label{eq:App-02-f(x,y)-MILP-y} \mbox{For $y$: } \left\{ \begin{aligned} \sum_{m \in L^2_k} \lambda^m & \le z^2_k \\ \sum_{m \in R^2_k} \lambda^m & \le 1 - z^2_k \\ z^2_k & \in \{0, 1\} \end{aligned} \right\},~ \forall k \in K_2 \end{equation} where $L^1_k$, $R^1_k$ and $L^2_k$, $R^2_k$ are index sets of the weights $\lambda^n$ and $\lambda^m$, and $K_1$ and $K_2$ are index sets of the binary variables. The dichotomy sequences $\{L^1_k,R^1_k \}_{k \in K_1}$ and $\{L^2_k,R^2_k\}_{k \in K_2}$ constitute branching schemes on the indices of the weights, such that constraints (\ref{eq:App-02-f(x,y)-MILP-x}) and (\ref{eq:App-02-f(x,y)-MILP-y}) guarantee that at most two adjacent elements of $\lambda^n$ and of $\lambda^m$ can take strictly positive values, so as to detect the active sub-rectangle. In this approach, the required number of binary variables is $\lceil \log_2 M \rceil + \lceil \log_2 N \rceil$. The construction of these index sets has been explained in the univariate case. Likewise, the discussion of problems (\ref{eq:App-02-Example-1-NLP}) and (\ref{eq:App-02-Example-1-LP-Approx}) extends easily to objective functions that are sums of bivariate convex functions, for which the planar $\mathbb{SOS}_2$ condition is naturally met.
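The marginal weight vectors in (\ref{eq:App-02-f(x,y)-fmn}) are simple row and column sums; the sketch below illustrates them for a weight matrix supported on a single sub-rectangle (the numerical values are arbitrary).
\begin{verbatim}
import numpy as np

# Sketch of the marginal weights (App-02-f(x,y)-fmn): Lambda has
# shape (M+1, N+1); a valid planar SOS2 puts all mass on the four
# corners of one sub-rectangle, so both marginals are SOS2 vectors.
Lam = np.zeros((4, 5))                       # M = 3, N = 4
Lam[1, 2], Lam[1, 3] = 0.2, 0.3              # corners of the active
Lam[2, 2], Lam[2, 3] = 0.4, 0.1              # sub-rectangle
lam_n = Lam.sum(axis=0)                      # marginal over m (for x)
lam_m = Lam.sum(axis=1)                      # marginal over n (for y)
print(lam_n)                                 # adjacent entries 2,3 positive
print(lam_m)                                 # adjacent entries 1,2 positive
# (f(x,y)-MILP-x/y) impose SOS2 on these two marginals, reusing the
# Gray-code index sets from the univariate construction.
\end{verbatim}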
\subsection{Approximation Error} This section answers a basic question: for a given function, how many intervals (breakpoints) are needed to achieve a certain error bound $\varepsilon$? For ease of exposition, we restrict our attention to univariate functions, including the quadratic function $f(x)= ax^2$ and, more generally, continuous functions $f(x)$ that are three times continuously differentiable. Let $\phi_f(x)$ be the PWL approximation of the function $f(x)$ on $X=\{x~|~x_l \le x \le x_m \}$; the absolute maximum approximation error is defined by ${\rm \Delta} = \max_{x \in X} |f(x)-\phi_f (x)|$. First let us consider the quadratic function $f(x) = a x^2$ with $a > 0$, which has been thoroughly studied in \cite{App-MILP-PWL-Error-1}; the analysis is briefly introduced here. Choose an arbitrary interval $[x_{i-1},x_i] \subset X$; the PWL approximation can be parameterized by a single variable $t \in [0,1]$ as \begin{gather*} x(t) = x_{i-1} + t (x_i-x_{i-1}) \\ \phi_f(x(t)) = ax_{i-1}^2 + at (x_i^2 - x_{i-1}^2) \end{gather*} Clearly, $f(x(0))=\phi_f(x(0))=ax^2_{i-1}$, $f(x(1))=\phi_f(x(1))=ax^2_i$, and $\phi_f(x(t)) > f(x(t))$, $\forall t \in (0,1)$. The maximal approximation error in the interval must be attained at a critical point, which satisfies \begin{equation*} \begin{aligned} & \dfrac{d}{dt}\left( \phi_f(x(t)) - f(x(t)) \right) \\ =~ & \dfrac{d}{dt} a \left[ x_{i-1}^2 + t (x_i^2 - x_{i-1}^2) - ( x_{i-1} + t (x_i-x_{i-1}))^2 \right] \\ =~ & \dfrac{d}{dt} a (x_i - x_{i-1})^2 (t-t^2) \\ =~ & a (x_i - x_{i-1})^2 (1-2t) \\ =~ & 0 \Rightarrow t = \frac{1}{2} \end{aligned} \end{equation*} implying that $x(1/2)$ is always a critical point at which the approximation error reaches its maximum, regardless of the partition of the intervals, and the error is given by \begin{equation*} \begin{aligned} {\rm \Delta} & = \phi_f \left( x \left( \frac{1}{2} \right) \right) - f\left( x \left( \frac{1}{2} \right) \right) \\ & = a \left[ x_{i-1}^2 + \frac{1}{2} (x_i^2 - x_{i-1}^2) - \frac{1}{4} ( x_{i-1} + x_i)^2 \right] \\ & = \frac{a}{4} (x_i - x_{i-1})^2 \end{aligned} \end{equation*} which is quadratic in the length of the interval and independent of its location. In this regard, the intervals must be evenly distributed, with equal lengths, in order to achieve the best performance. If $X$ is divided into $n$ intervals, the absolute maximum approximation error is \begin{equation*} {\rm \Delta} = \frac{a}{4n^2} (x_m - x_l)^2 \end{equation*} Therefore, for a given tolerance $\varepsilon$, the number of intervals should satisfy \begin{equation*} n \ge \sqrt{\frac{a}{\varepsilon}} \frac{x_m-x_l}{2} \end{equation*} For the quadratic function $f(x)=a x^2$, the coefficient $a$ determines its second-order derivative. For more general situations, the above discussion implies that the number of intervals needed in a PWL approximation of a function $f(x)$ depends on its second-order derivative. This problem has been thoroughly studied in \cite{App-MILP-PWL-Error-2}. The conclusion is: for a three times continuously differentiable $f(x)$ on the interval $[x_l, x_m]$, the optimal number of segments $s(\varepsilon)$ under a given error tolerance $\varepsilon$ can be selected as \begin{equation*} s(\varepsilon) \propto \dfrac{c}{\sqrt{\varepsilon}},~ \varepsilon \to 0^+ \end{equation*} where \begin{equation*} c = \dfrac{1}{4} \int^{x_m}_{x_l} \sqrt{|f^{\prime \prime}(x)|}~ {\rm d}x \end{equation*} The conclusion still holds if $\sqrt{|f^{\prime \prime}(x)|}$ has integrable singularities at the endpoints.
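Both interval-count rules are one-liners in practice; the sketch below evaluates them for $f(x)=x^2$ and, via numerical quadrature, for $f(x)=e^x$ (numpy assumed; the example functions and intervals are arbitrary).
\begin{verbatim}
import numpy as np

# Sketch: intervals needed for f(x) = a x^2 on [xl, xm] so that
# Delta = a (xm - xl)^2 / (4 n^2) <= eps.
def n_intervals_quadratic(a, xl, xm, eps):
    return int(np.ceil(np.sqrt(a / eps) * (xm - xl) / 2.0))

# General rule: s(eps) ~ c / sqrt(eps), c = (1/4) * int sqrt(|f''|).
xs = np.linspace(0.0, 2.0, 10001)
c = 0.25 * np.trapz(np.sqrt(np.exp(xs)), xs)    # f'' = exp on [0, 2]
eps = 1e-3
print(n_intervals_quadratic(1.0, -1.0, 1.0, eps))   # quadratic case
print(c / np.sqrt(eps))                             # asymptotic count
\end{verbatim}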
\section{Linear Formulation of Product Terms} \label{App-B-Sect02} The product of two variables, or a bilinear term, naturally arises in optimization models from various disciplines. For one example, in economic studies, if the price $c$ and the quantity $q$ of a commodity are variables, then the cost $cq$ is a bilinear term. For another, in circuit analysis, if both the voltage $v$ and the current $i$ are variables, then the electric power $vi$ is a bilinear term. Bilinear terms are non-convex, and linearizing them using linear constraints and integer variables is a frequently used technique in the optimization community. This section presents several techniques for the central question of product linearization: how to enforce the constraint $z=xy$, depending on the types of $x$ and $y$. \subsection{Product of Two Binary Variables} \label{App-B-Sect02-01} If $x \in \mathbb {B}$ and $y \in \mathbb {B}$, then $z = x y$ is equivalent to the following linear inequalities \begin{equation} \label{eq:App-02-xy-BB} \begin{gathered} 0 \le z \le y \\ 0 \le x-z \le 1-y \\ x \in \mathbb{B},~y \in \mathbb{B},~ z \in \mathbb{B} \end{gathered} \end{equation} It can be verified that if $x=1$ and $y=1$, then $z = 1$ is achieved; if $x = 0$ or $y = 0$, then $z = 0$ is enforced, regardless of the value of $y$ or $x$. This is equivalent to the requirement $z = x y$. If $x \in \mathbb {Z}^+$ belongs to the interval $[x^L,x^U]$ and $y \in \mathbb {Z}^+$ belongs to the interval $[y^L,y^U]$, they admit the binary expansions \begin{equation} \label{eq:App-02-xy-ZZ-BE} \begin{aligned} x = x^L + \sum_{k=1}^{K_1} 2^{k-1} u_k \\ y = y^L + \sum_{k=1}^{K_2} 2^{k-1} v_k \end{aligned} \end{equation} where $K_1 = \lceil \log_2 (x^U-x^L) \rceil$ and $K_2 = \lceil \log_2 (y^U-y^L) \rceil$. To develop a vector expression, define the vectors $b^1 = [2^0, 2^1,\cdots,2^{K_1-1}]$, $u = [u_1,u_2,\cdots,u_{K_1}]^T$, $b^2 = [2^0, 2^1,\cdots,2^{K_2-1}]$, $v = [v_1,v_2,\cdots,v_{K_2}]^T$, and the matrices $B = (b^1)^T b^2$ and $z = u v^T$; then \begin{equation} x y = x^L y^L + x^L b^2 v + y^L b^1 u + \langle B, z \rangle \notag \end{equation} where $\langle B, z \rangle = \sum_i \sum_j B_{ij} z_{ij}$. The relation among $u$, $v$, and $z$ can be linearized via equation (\ref{eq:App-02-xy-BB}) element-wise; its compact form is given by \begin{equation} \label{eq:App-02-xy-ZZ-Comp} \begin{gathered} {\bf 0}^{K_1 \times K_2} \le z \le {\bf 1}^{K_1 \times 1} v^T \\ {\bf 0}^{K_1 \times K_2} \le u {\bf 1}^{1 \times K_2} - z \le {\bf 1}^{K_1 \times K_2} - {\bf 1}^{K_1 \times 1} v^T \\ u \in \mathbb{B}^{K_1 \times 1},~ v \in \mathbb{B}^{K_2 \times 1},~ z \in \mathbb{B}^{K_1 \times K_2} \end{gathered} \end{equation} \subsection{Product of Integer and Continuous Variables} \label{App-B-Sect02-02} We first consider the binary-continuous case. If $x \in \mathbb {R}$ belongs to the interval $[x^L,x^U]$ and $y \in \mathbb {B}$, then $z = x y$ is equivalent to the following linear inequalities \begin{equation} \label{eq:App-02-xy-BC} \begin{gathered} x^L y \le z \le x^U y \\ x^L (1-y) \le x-z \le x^U (1-y) \\ x \in \mathbb{R},~y \in \mathbb{B},~ z \in \mathbb{R} \end{gathered} \end{equation} It can be verified that if $y = 0$, then $z$ is enforced to be 0 and $x^L \le x \le x^U$ is naturally met; if $y = 1$, then $z = x$ and $x^L \le z \le x^U$ must be satisfied, indicating the same relationship among $x$, $y$, and $z$. As for the integer-continuous case, the integer variable can be represented as in (\ref{eq:App-02-xy-ZZ-BE}) using binary variables, yielding a linear combination of binary-continuous products.
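A direct transcription of (\ref{eq:App-02-xy-BC}) is shown below; cvxpy with a MIP solver is assumed, and the tiny objective is arbitrary, chosen only to exercise both branches of $y$.
\begin{verbatim}
import cvxpy as cp

# Sketch of (App-02-xy-BC): z = x*y, x in [xL, xU], y binary.
xL, xU = -1.0, 3.0
x, z = cp.Variable(), cp.Variable()
y = cp.Variable(boolean=True)
cons = [xL * y <= z, z <= xU * y,
        xL * (1 - y) <= x - z, x - z <= xU * (1 - y),
        x <= 2]
prob = cp.Problem(cp.Maximize(z - 0.1 * x), cons)
prob.solve(solver=cp.GLPK_MI)
print(x.value, y.value, z.value)   # y = 1 forces z = x; optimum x = 2
\end{verbatim}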
It should be mentioned that the upper bound $x^U$ and the lower bound $x^L$ are crucial for creating the linearization inequalities. If explicit bounds are not available at hand, one can incorporate a constant $M$ that is big enough. The value of $M$ has a notable impact on the computation time. For efficiency, the desired value would be the minimal $M$ that ensures the inequality $-M \le x \le M$ never becomes binding at the optimum, as it leads to the strongest bound when the integrality of the binary variables is neglected, expediting the convergence of the branch-and-bound procedure. However, such a value is generally unknown before the problem is solved. Nevertheless, we do not actually need to find the smallest value $M^{*}$: any $M \ge M^*$ produces the same optimal solution and is valid for the linearization. Please bear in mind that an over-large $M$ not only deteriorates the computation time, but may also cause numerical instability due to a large condition number, so a proper tradeoff must be made between efficiency and accuracy. A proper $M$ can be determined by estimating the bounds of $x$ through certain heuristics, which are problem-dependent. \subsection{Product of Two Continuous Variables} \label{App-B-Sect02-03} If $x \in \mathbb {R}$ belongs to the interval $[x^L,x^U]$ and $y \in \mathbb {R}$ belongs to the interval $[y^L,y^U]$, there are three options for linearizing their product $xy$. The first one treats $z = xy$ as a bivariate function $f(x,y)$ and applies the planar $\mathbb{SOS}_2$ method in Sect. \ref{App-B-Sect01-02}. The second one discretizes $y$, for example, as follows \begin{equation} \begin{gathered} y = y^L + \sum_{k=1}^{K} 2^{k-1} u_k {\rm \Delta} y \\ {\rm \Delta} y = \dfrac{y^U - y^L}{2^K},~ u_k \in \mathbb {B},~ \forall k \end{gathered} \label{eq:App-02-xy-CC-Discrete-y} \end{equation} and \begin{equation} xy = x y^L + \sum_{k=1}^{K} 2^{k-1} v_k {\rm \Delta} y \end{equation} where $v_k = u_k x$ can be linearized through equation (\ref{eq:App-02-xy-BC}) as \begin{equation} \label{eq:App-02-xy-CC-BE} \begin{gathered} x^L u_k \le v_k \le x^U u_k,~ \forall k \\ x^L (1 - u_k) \le x - v_k \le x^U (1 - u_k),~ \forall k \\ x \in \mathbb{R},~ u_k \in \mathbb{B},~ \forall k,~ v_k \in \mathbb{R},~ \forall k \end{gathered} \end{equation} In practical problems, bilinear terms often appear as the inner product of two vectors. For convenience, we present the compact linearization of $x^T y$ via binary expansion. Let $y$ be the candidate vector variable to be discretized; performing (\ref{eq:App-02-xy-CC-Discrete-y}) on each element of $y$ gives \begin{equation*} y_j = y^L_j + \sum_{k=1}^{K} 2^{k-1} u_{jk} {\rm \Delta} y_j,~ \forall j \end{equation*} and thus \begin{equation*} x_j y_j = x_j y^L_j + \sum_{k=1}^{K} 2^{k-1} v_{jk} {\rm \Delta} y_j,~ \forall j,~ v_{jk} = u_{jk} x_j,~ \forall j, \forall k \end{equation*} The relation $v_{jk} = u_{jk} x_j$ can be expressed via the linear constraints \begin{equation*} x^L_j u_{jk} \le v_{jk} \le x^U_j u_{jk},~ x^L_j(1-u_{jk}) \le x_j - v_{jk} \le x^U_j(1-u_{jk}),~ \forall j, \forall k \end{equation*} Denote by $V$ and $U$ the matrix variables consisting of $v_{jk}$ and $u_{jk}$, respectively; $1_K$ stands for the all-one column vector of dimension $K$; ${\rm \Delta}_Y$ is a diagonal matrix with the ${\rm \Delta} y_j$ as its diagonal entries; and the vector $\zeta = [2^0,2^1,\cdots,2^{K-1}]$. Combining all the above element-wise expressions, we have the linear formulation of $x^T y$ in the compact matrix form \begin{equation*} x^T y = x^T y^L + \zeta V^T {\rm \Delta} y \end{equation*} in conjunction with \begin{equation} \begin{gathered} y = y^L + {\rm \Delta}_Y U \zeta^T \\ (x^L \cdot 1_K^T) \otimes U \le V \le (x^U \cdot 1_K^T) \otimes U \\ (x^L \cdot 1_K^T) \otimes (1-U) \le x \cdot 1^T_K - V \le (x^U \cdot 1_K^T) \otimes (1-U) \\ x \in \mathbb{R}^J,~ y \in \mathbb R^J,~ U \in \mathbb{B}^{J \times K},~ V \in \mathbb{R}^{J \times K} \end{gathered} \label{eq:App-02-xy-CC-Vector-BE} \end{equation} where $\otimes$ represents the element-wise product of two matrices with the same dimensions. One possible drawback of this formulation is that the discretized variable is no longer continuous.
The approximation accuracy can be improved by increasing the number of breakpoints without introducing too many binary variables, whose number is given by $\lceil \log_2 (y^U-y^L)/{\rm \Delta} y \rceil$. Furthermore, the variable to be discretized must have clear upper and lower bounds. This is not restrictive, because the decision variables of engineering problems are subject to physical operating limitations, such as the maximum and minimum output of a generator. Nevertheless, if $x$, for example, is unbounded in the formulation but the problem has a finite optimum, we can replace $x^U$ ($x^L$) in (\ref{eq:App-02-xy-CC-BE}) with a large enough big-M parameter $M$ ($-M$), so that the true optimal solution remains feasible. It should be pointed out that the value of $M$ may influence the computational efficiency of the equivalent MILP, as mentioned previously. The optimal choice of $M$ in general cases remains an open problem, but heuristic methods exist for specific instances; for example, if $x$ stands for a marginal production cost, which is a dual variable whose bounds are unclear, one can determine a suitable bound from historical data or a price forecast. An alternative formulation for the second option deals with the product term $x f(y)$, of which $xy$ is the special case $f(y)=y$. By performing the piecewise constant approximation (\ref{eq:App-02-PWC-CC}) on the function $f(y)$, the product becomes $x f(y) = \sum_{s=1}^{S-1} x \theta_s f_s$, where $f_s$ is constant, $x$ is continuous, and $\theta_s$ is binary. The products $x \theta_s$, $s=1,\cdots,S-1$ can readily be linearized via the method in Sect. \ref{App-B-Sect02-02}. In this approach, the continuity of $x$ and $y$ is retained; however, the number of binary variables in the piecewise constant approximation of $f(y)$ grows linearly with the number of samples of $y$. The third option converts the product into a separable form and then performs PWL approximations of univariate nonlinear functions. To see this, consider a bilinear term $x y$ and introduce two continuous variables $u$ and $v$ defined as \begin{equation} \label{eq:App-02-xy-CC-Decomp-1} \begin{gathered} u = \frac{1}{2} (x+y) \\ v = \frac{1}{2} (x-y) \end{gathered} \end{equation} Now we have \begin{equation} \label{eq:App-02-xy-CC-Decomp-2} x y = u^2 - v^2 \end{equation} In (\ref{eq:App-02-xy-CC-Decomp-2}), $u^2$ and $v^2$ are univariate nonlinear functions, and can be approximated by the PWL method presented in Sect. \ref{App-B-Sect01-01}. Furthermore, if $x_l \le x \le x_u$ and $y_l \le y \le y_u$, then the lower and upper bounds of $u$ and $v$ are given by \begin{equation*} \begin{gathered} \frac{1}{2} (x_l + y_l) \le u \le \frac{1}{2} (x_u + y_u) \\ \frac{1}{2} (x_l - y_u) \le v \le \frac{1}{2} (x_u - y_l) \end{gathered} \end{equation*} Formulation (\ref{eq:App-02-xy-CC-Decomp-2}) has a latent advantage: if $xy$ appears in the objective function, which is to be minimized, and is not involved in the constraints, we only need to approximate $v^2$, because $u^2$ is convex and $-v^2$ is concave. The minimum number of binary variables in this method is a logarithmic function of the number of breakpoints, as explained in Sect. \ref{App-B-Sect01-01}.
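A two-line numerical check of the separable identity (\ref{eq:App-02-xy-CC-Decomp-1})-(\ref{eq:App-02-xy-CC-Decomp-2}) and of the induced bounds (the sample values are arbitrary):
\begin{verbatim}
import numpy as np

# Sketch: x*y = u^2 - v^2 with u = (x+y)/2, v = (x-y)/2; the two
# squares are univariate and can then be PWL-linearized.
x, y = 1.7, -0.4
u, v = 0.5 * (x + y), 0.5 * (x - y)
assert np.isclose(x * y, u**2 - v**2)
xl, xu, yl, yu = 0.0, 2.0, -1.0, 1.0
print(0.5 * (xl + yl), 0.5 * (xu + yu))   # bounds on u
print(0.5 * (xl - yu), 0.5 * (xu - yl))   # bounds on v
\end{verbatim}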
The bilinear term $xy$ can be replaced by a single variable $z$ in the following situation: 1) the lower bounds $x_l$ and $y_l$ are nonnegative; 2) either $x$ or $y$ is not referenced anywhere else except in $xy$. For instance, if $y$ is such a variable, then the bilinear term $xy$ can be replaced by the variable $z$ together with the constraint $x y_l \le z \le x y_u$. Once the problem is solved, $y$ can be recovered as $y = z/x$ if $x > 0$, and the inequality constraint on $z$ guarantees $y \in [y_l,y_u]$; otherwise, if $x = 0$, then $y$ is undetermined and has no impact on the optimum. \subsection{Monomial of Binary Variables} \label{App-B-Sect02-04} The previous cases discuss linearizing the product of two variables. Now we consider a binary monomial with $n$ variables \begin{equation} \label{eq:App-02-Monomial-Binary} z = x_1 x_2 \cdots x_n,~ x_i \in \{0,1\},~ i=1,2,\cdots,n \end{equation} Clearly, this monomial takes a binary value. Since the product of two binary variables can be expressed by a single one in light of (\ref{eq:App-02-xy-BB}), the monomial could be linearized recursively. Nevertheless, by making full use of the binary nature of $z$, a smarter and more concise way to represent (\ref{eq:App-02-Monomial-Binary}) is given by \begin{align} z & \in \{0,1\} \label{eq:App-02-Monomial-BL-1} \\ z & \le \dfrac{x_1 + x_2 + \cdots + x_n}{n} \label{eq:App-02-Monomial-BL-2}\\ z & \ge \dfrac{x_1 + x_2 + \cdots + x_n -n +1}{n} \label{eq:App-02-Monomial-BL-3} \end{align} If at least one $x_i$ is equal to 0, then because $\sum_{i=1}^n x_i - n + 1 \le 0$, (\ref{eq:App-02-Monomial-BL-3}) becomes redundant; moreover, $\sum^n_{i=1} x_i /n \le 1 - 1/n$, which removes $z=1$ from the feasible region, so $z$ takes the value 0. Otherwise, if all $x_i$ are equal to 1, then $\sum^n_{i=1} x_i /n = 1$, and the right-hand side of (\ref{eq:App-02-Monomial-BL-3}) is $1/n$, which removes $z=0$ from the feasible region; hence $z$ is forced to be 1. In conclusion, the linear constraints (\ref{eq:App-02-Monomial-BL-1})-(\ref{eq:App-02-Monomial-BL-3}) have the same effect as (\ref{eq:App-02-Monomial-Binary}). In view of the above transformation technique, a binary polynomial program can always be reformulated as a binary linear program. Moreover, if a single continuous variable appears in the monomial, the problem can be reformulated as an MILP.
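Constraints (\ref{eq:App-02-Monomial-BL-1})-(\ref{eq:App-02-Monomial-BL-3}) transcribe directly; in the sketch below the objective rewards $z$ just enough that switching all $x_i$ on is optimal (cvxpy, the MIP solver, and the toy objective are illustrative assumptions).
\begin{verbatim}
import cvxpy as cp

# Sketch of (App-02-Monomial-BL-1)-(BL-3): z = x1 x2 x3, all binary.
n = 3
x = cp.Variable(n, boolean=True)
z = cp.Variable(boolean=True)
cons = [z <= cp.sum(x) / n,
        z >= (cp.sum(x) - n + 1) / n]
prob = cp.Problem(cp.Maximize(z - 0.1 * cp.sum(x)), cons)
prob.solve(solver=cp.GLPK_MI)
print(x.value, z.value)            # all x_i = 1 and z = 1 is optimal
\end{verbatim}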
\subsection{Product of Functions in Integer Variables}
\label{App-B-Sect02-05}

First, let us consider $z = f_1(x_1)f_2(x_2)$, where the decision variables are positive integers, i.e., $x_i \in \{d_{i,1},d_{i,2},\cdots,d_{i,r_i}\},i=1,2$. Without particular tricks, $f_1$ and $f_2$ can be expressed as \begin{gather*} f_1 = \sum_{j=1}^{r_1} f_1(d_{1,j})u_{1,j},~ u_{1,j} \in \{0,1\},~ \sum_{j=1}^{r_1} u_{1,j} = 1 \\ f_2 = \sum_{j=1}^{r_2} f_2(d_{2,j})u_{2,j},~ u_{2,j} \in \{0,1\},~ \sum_{j=1}^{r_2} u_{2,j} = 1 \end{gather*} and the product of two binary variables can be linearized via (\ref{eq:App-02-xy-BB}). The above formulation introduces many intermediary binary variables, and is not well suited to representing a product of more functions recursively. Ref. \cite{App-MILP-Fun-Prod} suggests another choice \begin{equation} z = \sum_{j=1}^{r_2} f_2(d_{2,j}) \sigma_{2,j},~ \sum_{j=1}^{r_2} \sigma_{2,j} = f_1(x_1),~ \sigma_{2,j} = f_1(x_1) u_{2,j} \label{eq:App-02-Fun-Prod} \end{equation} where $u_{2,j} \in \{0,1\}$ and $\sum_{j=1}^{r_2} u_{2,j} = 1$. Although $f_1(x_1) u_{2,j}$ remains nonlinear because of decision variable $x_1$, (\ref{eq:App-02-Fun-Prod}) can be used to linearize a product of more than two nonlinear functions. To see this, denote $z_1 = f_1(x_1)$ and $z_i = z_{i-1}f_i(x_i)$, $i = 2,\cdots,n$, with integer variables $x_i \in \{d_{i,1},d_{i,2},\cdots,d_{i,r_i}\}$ and $f_i(x_i)>0$, $i=1,\cdots,n$. By using (\ref{eq:App-02-Fun-Prod}), $z_i,i=1,\cdots,n$ have the following expressions \cite{App-MILP-Fun-Prod} \begin{equation} \begin{aligned} & z_1 = \sum_{j=1}^{r_1} f_1(d_{1,j})u_{1,j} \\ & z_2 = \sum_{j=1}^{r_2} f_2(d_{2,j})\sigma_{2,j},~ \sum_{j=1}^{r_2} \sigma_{2,j} = z_1,~ \cdots \\ & z_n = \sum_{j=1}^{r_n} f_n(d_{n,j})\sigma_{n,j},~ \sum_{j=1}^{r_n} \sigma_{n,j} = z_{n-1} \\ & \left. \begin{lgathered} 0 \le z_{i-1} - \sigma_{i,j} \le \bar z_{i-1}(1-u_{i,j}) \\ 0 \le \sigma_{i,j} \le \bar z_{i-1} u_{i,j},~ u_{i,j} \in \{0,1\} \end{lgathered} \right\},~ \begin{lgathered} j = 1, \cdots, r_i, \\ i = 2, \cdots, n \end{lgathered} \\ & x_i = \sum_{j=1}^{r_i} d_{i,j} u_{i,j}, ~ \sum_{j=1}^{r_i} u_{i,j} = 1,~ i = 1,2, \cdots,n \end{aligned} \label{eq:App-02-Funs-Prod} \end{equation} In (\ref{eq:App-02-Funs-Prod}), the number of binary variables is $\sum_{i=1}^n r_i$, which grows linearly in the dimension of $x$ and in the number of candidate values of each $x_i$. To reduce the number of auxiliary binary variables $u_{i,j}$, the dichotomy procedure in Sect. \ref{App-B-Sect01-01} for SOS2 can be applied, which is discussed in \cite{App-MILP-Fun-Prod}.

\subsection{Log-sum Functions}
\label{App-B-Sect02-06}

We consider the log-sum function $\log(x_1+x_2+\cdots+x_n)$, which arises from solving signomial geometric programming problems. The basic element in such a problem has the form \begin{equation} \label{eq:App-02-Signomial} c_k \prod_{j=1}^l y_j^{a_{jk}} \end{equation} where $y_j > 0$, $c_k$ is a constant, and $a_{jk} \in \mathbb R$. Non-integer values of $a_{jk}$ make signomial geometric programs even harder than polynomial programs. Under a suitable variable transformation, the non-convexity of a signomial geometric program can be concentrated in some log-sum functions \cite{App-MILP-Signomial}, which is why we discuss this function class in Sect. \ref{App-B-Sect02}. We aim to represent the function $\log(x_1+x_2+\cdots+x_n)$ in terms of $\log x_1$, $\log x_2$, $\cdots$, $\log x_n$. Following the method in \cite{App-MILP-Signomial}, define a univariate function $F(X) = \log(1+e^X)$ and let $X_i=\log x_i$, ${\rm \Gamma}_i=\log(x_1+\cdots+x_i)$, $i = 1,\cdots,n$. The relation between $X_i$ and ${\rm \Gamma}_i$ can be revealed as follows. Because \begin{equation*} \begin{aligned} F(X_{i+1}-{\rm \Gamma}_i) & = \log\left( 1+ e^{\log x_{i+1} - \log(x_1+\cdots+x_i)} \right) \\ & = \log \left( 1+ \dfrac{x_{i+1}}{x_1+\cdots+x_i}\right) = {\rm \Gamma}_{i+1} - {\rm \Gamma}_i \end{aligned} \end{equation*} by stipulating $W_i = X_{i+1}-{\rm \Gamma}_i$ we obtain the following recursive equations \begin{equation} \label{eq:App-02-Signomial-Recursive} \begin{aligned} {\rm \Gamma}_{i+1} & = {\rm \Gamma}_i + F(W_i),~ i = 1,\cdots,n-1 \\ W_i & = X_{i+1}-{\rm \Gamma}_i,~ i = 1,\cdots,n-1 \end{aligned} \end{equation} The function $F(W_i)$ can be linearized using the method in Sect. \ref{App-B-Sect01-01}. Based on this technique, an outer-approximation approach is proposed in \cite{App-MILP-Signomial} to solve signomial geometric programming problems via MILP.
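The recursion (\ref{eq:App-02-Signomial-Recursive}) is easily validated numerically before any PWL approximation is introduced. The following Python sketch (with hypothetical data) confirms that it reproduces $\log(x_1+\cdots+x_n)$ exactly when $F$ is evaluated exactly.

\begin{verbatim}
import math

def log_sum_via_recursion(x):
    """Compute log(x1+...+xn) from the logs X_i using the recursion."""
    F = lambda w: math.log(1.0 + math.exp(w))   # F(X) = log(1 + e^X)
    X = [math.log(xi) for xi in x]              # X_i = log x_i
    gamma = X[0]                                # Gamma_1 = log x_1
    for i in range(1, len(x)):
        w = X[i] - gamma                        # W_i = X_{i+1} - Gamma_i
        gamma = gamma + F(w)                    # Gamma_{i+1} = Gamma_i + F(W_i)
    return gamma

x = [0.7, 2.5, 1.3, 4.2]                        # hypothetical positive data
assert math.isclose(log_sum_via_recursion(x), math.log(sum(x)))
print("recursion reproduces log(x1+...+xn)")
\end{verbatim}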
\section{Other Frequently used Formulations}
\label{App-B-Sect03}

\subsection{Minimum Values}
\label{App-B-Sect03-01}

Let $x_1$, $x_2$, $\cdots$, $x_n$ be continuous variables with known lower bounds $x^L_i$ and upper bounds $x^U_i$, and let $L = \min \{x^L_1, x^L_2, \cdots, x^L_n\}$; then their minimum $y = \min \{x_1, x_2, \cdots, x_n\}$ can be expressed via the linear constraints \begin{equation} \label{eq:App-02-Minimum-PWL} \begin{gathered} x^L_i \le x_i \le x^U_i,~ \forall i \\ y \le x_i,~ \forall i \\ x_i-(x^U_i - L)(1-z_i) \le y,~ \forall i \\ z_i \in \mathbb {B},~ \forall i,~ \sum_i z_i = 1 \end{gathered} \end{equation} The second inequality guarantees $y \le \min \{x_1, x_2, \cdots, x_n\}$; in addition, if $z_i =1$, then $y \ge x_i$, hence $y$ achieves the minimal value of $\{x_i\}$. According to the definition of $L$, $x_i - y \le x^U_i - L, \forall i$ holds, thus the third inequality is inactive for the remaining $n-1$ variables with $z_i = 0$.

\subsection{Maximum Values}
\label{App-B-Sect03-02}

Let $x_1$, $x_2$, $\cdots$, $x_n$ be continuous variables with known lower bounds $x^L_i$ and upper bounds $x^U_i$, and let $U = \max \{x^U_1, x^U_2, \cdots, x^U_n\}$; then their maximum $y = \max \{x_1, x_2, \cdots, x_n\}$ can be expressed via the linear constraints \begin{equation} \label{eq:App-02-Maximum-PWL} \begin{gathered} x^L_i \le x_i \le x^U_i,~ \forall i \\ y \ge x_i,~ \forall i \\ x_i + (U - x^L_i)(1-z_i) \ge y,~ \forall i \\ z_i \in \mathbb {B},~ \forall i,~ \sum_i z_i = 1 \end{gathered} \end{equation} The second inequality guarantees $y \ge \max \{x_1, x_2, \cdots, x_n\}$; in addition, if $z_i =1$, then $y \le x_i$, hence $y$ achieves the maximal value of $\{x_i\}$. According to the definition of $U$, $y - x_i \le U - x^L_i, \forall i$ holds, thus the third inequality is inactive for the remaining $n-1$ variables with $z_i = 0$.

\subsection{Absolute Values}
\label{App-B-Sect03-03}

Suppose $x \in \mathbb {R}$ and $|x| \le U$; the absolute value function $y = |x|$, which is nonlinear, can be expressed via mixed-integer linear constraints as \begin{equation} \label{eq:App-02-Abs-PWL} \begin{gathered} 0 \le y - x \le 2 U z,~ U (1-z) \ge x \\ 0 \le y + x \le 2 U (1-z),~ -U z \le x \\ -U \le x \le U,~ z \in \mathbb{B} \end{gathered} \end{equation} When $x > 0$, the first line yields $z = 0$ and $y = x$, while the second line is inactive. When $x < 0$, the second line yields $z = 1$ and $y = -x$, while the first line is inactive. When $x = 0$, either $z = 0$ or $z = 1$ gives $y = 0$. In conclusion, (\ref{eq:App-02-Abs-PWL}) has the same effect as $y=|x|$.
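The encoding (\ref{eq:App-02-Abs-PWL}) can be checked by enumerating the binary variable $z$ and intersecting the resulting intervals for $y$. The following Python sketch (with a hypothetical bound $U$) confirms that every feasible pair $(y,z)$ pins $y$ down to $|x|$.

\begin{verbatim}
import numpy as np

U = 5.0
rng = np.random.default_rng(1)

def feasible_y(x: float, z: int, U: float):
    """Interval of y satisfying the constraints for fixed x and z."""
    if not (-U * z <= x <= U * (1 - z)):
        return None                       # side conditions on x violated
    lo = max(x, -x)                       # from 0 <= y - x and 0 <= y + x
    hi = min(x + 2 * U * z, -x + 2 * U * (1 - z))
    return (lo, hi) if lo <= hi else None

for x in rng.uniform(-U, U, size=1000):
    intervals = [feasible_y(x, z, U) for z in (0, 1)]
    intervals = [iv for iv in intervals if iv is not None]
    # Every feasible (y, z) pair forces y = |x|
    assert all(np.isclose(lo, abs(x)) and np.isclose(hi, abs(x))
               for lo, hi in intervals)
print("the encoding admits exactly y = |x|")
\end{verbatim}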
\subsection{Linear Fractional of Binary Variables}
\label{App-B-Sect03-04}

A linear fractional of binary variables takes the form of \begin{equation} \label{eq:App-02-BLF-1} \dfrac{a_0+\sum_{i=1}^n a_i x_i}{b_0+\sum_{i=1}^n b_i x_i} \end{equation} We assume $b_0+\sum_{i=1}^n b_i x_i \ne 0$ for all $x \in \{0,1\}^n$. Define a new continuous variable \begin{equation} \label{eq:App-02-BLF-2} y = \dfrac{1}{b_0+\sum_{i=1}^n b_i x_i} \end{equation} The lower and upper bounds of $y$ can be easily computed. Then the linear fractional shown in (\ref{eq:App-02-BLF-1}) can be replaced with a linear expression \begin{equation} \label{eq:App-02-BLF-3} a_0 y + \sum_{i=1}^n a_i z_i \end{equation} with constraints \begin{equation} \label{eq:App-02-BLF-4} b_0 y + \sum_{i=1}^n b_i z_i = 1 \end{equation} \begin{equation} \label{eq:App-02-BLF-5} z_i = x_i y,~ \forall i \end{equation} where (\ref{eq:App-02-BLF-5}) describes a product of a binary variable and a continuous variable, which can be linearized through equation (\ref{eq:App-02-xy-BC}).

\subsection{Disjunctive Inequalities}
\label{App-B-Sect03-05}

Let $\{P^i\}, i = 1, 2, \cdots, m$ be a finite set of bounded polyhedra. Disjunctive inequalities usually arise when the solution space is characterized by the union $S = \cup_{i=1}^m P^i$ of these polyhedra. Unlike the intersection operator, which preserves convexity, disjunctive inequalities form a non-convex region, which can nonetheless be represented by an MILP model using binary variables. We introduce three representative formulations, followed by an important application.

\vspace{12pt}
{\noindent \bf 1. Big-M formulation}

The hyperplane representations of the polyhedra are given by $P^i = \{ x \in \mathbb{R}^n | A^i x \le b^i \}, i = 1, 2, \cdots, m$. By introducing binary variables $z_i, i = 1, 2, \cdots, m $, an MILP formulation for $S$ can be written as \begin{equation} \label{eq:App-02-Disj-Big-M} \begin{gathered} A^i x \le b^i + M^i(1-z_i) ,~ \forall i \\ z_i \in \mathbb{B},~\forall i,~ \sum^m_{i=1} z_i = 1 \end{gathered} \end{equation} where $M^i$ is a vector chosen large enough such that when $z_i = 0$, $A^i x \le b^i + M^i$ holds. To show the impact of the value of $M$ on the tightness of formulation (\ref{eq:App-02-Disj-Big-M}) when the integrality constraints $z_i \in \mathbb{B}, \forall i$ are relaxed to $z_i \in [0,1], \forall i$, we deliberately construct four polyhedra in $\mathbb {R}^2$, which are depicted in Fig. \ref{fig:App-02-04}. The continuous relaxations of (\ref{eq:App-02-Disj-Big-M}) with different values of $M$ are illustrated in the same graph, showing that the smaller the value of $M$, the tighter the relaxation of (\ref{eq:App-02-Disj-Big-M}). \begin{figure}[!t] \centering \includegraphics[scale=0.52]{Fig-App-02-04} \caption{Big-M formulation and their relaxed regions.} \label{fig:App-02-04} \end{figure} From a computational perspective, the elements of $M$ should be as small as possible, because a huge constant chosen without any insight into the problem data leads to poor numerical conditioning. Furthermore, the continuous relaxation of the MILP model will be very weak, resulting in poor objective value bounds and excessive branch-and-bound computation. The goal of big-M parameter selection is to create a model whose continuous relaxation is close to the convex hull of the original constraint, i.e., the smallest convex set that contains the original feasible region. A possible selection of the big-M parameter is \begin{equation} \label{eq:App-02-Value-Big-M} \begin{gathered} M^i_l = \left(\max_{j \ne i} M^{ij}_l \right) - b^i_l \\ M^{ij}_l = \max_x \left\{ [A^i x]_l: A^j x \le b^j \right\} \end{gathered} \end{equation} where subscript $l$ stands for the $l$-th element of a vector or the $l$-th row of a matrix. As the polyhedra $P^i$ are bounded, all bound parameters in (\ref{eq:App-02-Value-Big-M}) are well defined. However, even the tightest big-M parameter will yield a relaxed solution space that is generally larger than the convex hull of the original feasible set. In many applications, good variable bounds can also be estimated from heuristic methods which exploit specific problem data and structure.
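The bound selection (\ref{eq:App-02-Value-Big-M}) only requires solving small LPs. The following Python sketch computes row-wise big-M parameters for two hypothetical boxes in $\mathbb{R}^2$ using \texttt{scipy.optimize.linprog}; the data are purely illustrative.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Two illustrative boxes, each written as A^i x <= b^i (hypothetical data)
A1 = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b1 = np.array([1., 0., 1., 0.])          # P1: 0 <= x1 <= 1, 0 <= x2 <= 1
A2 = A1.copy()
b2 = np.array([3., -2., 3., -2.])        # P2: 2 <= x1 <= 3, 2 <= x2 <= 3
polys = [(A1, b1), (A2, b2)]

def tight_big_m(i: int) -> np.ndarray:
    """Row-wise big-M for polyhedron i: M^i_l = max_{j!=i} M^{ij}_l - b^i_l."""
    Ai, bi = polys[i]
    M = np.full(len(bi), -np.inf)
    for j, (Aj, bj) in enumerate(polys):
        if j == i:
            continue
        for l in range(Ai.shape[0]):
            # maximize [A^i x]_l over A^j x <= b^j (linprog minimizes: negate)
            res = linprog(-Ai[l], A_ub=Aj, b_ub=bj,
                          bounds=[(None, None)] * 2)
            M[l] = max(M[l], -res.fun)
    return M - bi

for i in range(2):
    print(f"M^{i+1} =", tight_big_m(i))
\end{verbatim}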
\vspace{12pt}
{\noindent \bf 2. Convex hull formulation}

Let $\mbox{vert}(P^i) = \{ v^i_l \}, l = 1,2, \cdots,L^i$ denote the set of vertices of polyhedron $P^i$, $i = 1,2,\cdots,m$, where $L^i$ is the number of vertices of $P^i$. The set of extreme rays is empty since $P^i$ is bounded. By introducing binary variables $z_i, i = 1, 2, \cdots, m $, an MILP formulation for $S$ is given by \begin{equation} \label{eq:App-02-Disj-Conv} \begin{gathered} \sum_{i=1}^m \sum_{l=1}^{L^i} \lambda^i_l v^i_l = x \\ \sum_{l=1}^{L^i} \lambda^i_l = z_i,~ \forall i \\ \lambda^i_l \ge 0,~ \forall i,~ \forall l \\ z_i \in \mathbb{B},~\forall i,~ \sum^m_{i=1} z_i = 1 \end{gathered} \end{equation} Formulation (\ref{eq:App-02-Disj-Conv}) does not rely on manually supplied parameters. Instead, it requires enumerating all extreme points of the polyhedra $P^i$. Although the vertex representation and the hyperplane representation of a polyhedron are interchangeable, vertex enumeration is time consuming for high-dimensional polyhedra, so (\ref{eq:App-02-Disj-Conv}) is useful only if the $P^i$ are originally represented by extreme points.

\vspace{12pt}
{\noindent \bf 3. Lifted formulation}

A smarter formulation exploits the fact that the bounded polyhedra $P^i$ share the same recession cone $\{ 0 \}$, i.e., the inequality system $A^i x \le 0$ has no non-zero solution. Otherwise, suppose $A^i x^* \le 0$, $x^* \ne 0$, and $y \in P^i$; then $y + \lambda x^* \in P^i$, $\forall \lambda > 0$, because $A^i(y + \lambda x^*) = A^i y + \lambda A^i x^* \le b^i$, and as a result $P^i$ would be unbounded. Bearing this in mind, an MILP formulation for $S$ is given by \begin{equation} \label{eq:App-02-Disj-Lift} \begin{gathered} A^i x^i \le b^i z_i,~ \forall i \\ \sum_{i=1}^m x^i = x \\ z_i \in \mathbb{B},~\forall i \\ \sum^m_{i=1} z_i = 1 \end{gathered} \end{equation} Formulation (\ref{eq:App-02-Disj-Lift}) is also parameter-free. Since it incorporates an additional continuous variable for each polytope, we call it a lifted formulation. It is easy to see that the feasible region of $x$ is the union of the $P^i$: if $z_i = 0$, then $x^i = 0$ as analyzed before; otherwise, if $z_i = 1$, then $x = x^i \in P^i$.

\vspace{12pt}
{\noindent \bf 4. Complementarity and slackness condition}

Complementarity and slackness conditions naturally arise in the KKT optimality conditions of mathematical programming problems, equilibrium problems, hierarchical optimization problems, and so on. They are a quintessential way to characterize the logical conditions that a rational decision-making process must obey. Here we pay attention to the linear case and its equivalent MILP formulation, because nonlinear cases give rise to MINLPs, which are challenging to solve and not superior from the computational point of view. A linear complementarity and slackness condition can be written as \begin{equation} \label{eq:App-B-LCP-MILP-1} 0 \le y \bot Ax - b \ge 0 \end{equation} where vectors $x$ and $y$ are decision variables; $A$ and $b$ are constant coefficients with compatible dimensions; the notation $\bot$ stands for the orthogonality of two vectors.
In fact, (\ref{eq:App-B-LCP-MILP-1}) encompasses the following nonlinear constraints in traditional form \begin{equation} \label{eq:App-B-LCP-MILP-2} y \ge 0,~ Ax - b \ge 0,~ y^T(Ax-b) = 0 \end{equation} In view of the non-negativeness of $y$ and $Ax-b$, the orthogonality condition is equivalent to the element-wise logic form $y_i = 0$ or $a_i x-b_i=0$, $\forall i$, where $a_i$ is the $i$-th row of $A$; in other words, at most one of $y_i$ and $a_i x-b_i$ can take a strictly positive value, implying that the feasible region is either the slice $y_i = 0$ or the slice $a_i x-b_i=0$. Therefore, (\ref{eq:App-B-LCP-MILP-1}) can be regarded as a special case of disjunctive constraints. In practical applications, (\ref{eq:App-B-LCP-MILP-1}) usually serves as a constraint in an optimization problem. For example, in sequential decision making or in a linear bilevel program, the KKT conditions of the lower-level LP appear in the form of (\ref{eq:App-B-LCP-MILP-1}) and act as constraints of the upper-level optimization problem. The main computational challenge arises from the orthogonality condition, which is nonlinear and non-convex, and violates the linear independence constraint qualification; see Appendix \ref{App-D-Sect03} for an example. Nonetheless, in view of the switching logic between $y_i$ and $a_i x-b_i$, we can introduce a binary variable $z_i$ to select which slice is active \cite{App-MILP-Fortuny-Amat} \begin{equation} \label{eq:App-B-LCP-MILP-3} \begin{gathered} 0 \le a_i x-b_i \le M z_i,~ \forall i \\ 0 \le y_i \le M(1-z_i),~ \forall i \end{gathered} \end{equation} where $M$ is a large enough constant. According to (\ref{eq:App-B-LCP-MILP-3}), if $z_i=0$, then $(Ax - b)_i=0$ must hold, and the second inequality is redundant; otherwise, if $z_i=1$, then we have $y_i=0$, and the first inequality becomes redundant. (\ref{eq:App-B-LCP-MILP-3}) can be written in the compact form \begin{equation} \label{eq:App-B-LCP-MILP-4} \begin{gathered} 0 \le A x - b \le M z \\ 0 \le y \le M(1-z) \end{gathered} \end{equation} It is worth mentioning that the big-M parameter $M$ has a notable impact on the feasible region of the relaxed problem as well as on the computational efficiency of the MILP model, as illustrated in Fig. \ref{fig:App-02-04}. One should make sure that (\ref{eq:App-B-LCP-MILP-4}) does not remove the optimal solution from the feasible set. If both $x$ and $y$ have clear bounds, then $M$ can be easily estimated; otherwise, we may prudently employ a large $M$, at the cost of computational efficiency. Furthermore, if we aim to solve (\ref{eq:App-B-LCP-MILP-1}) without an objective function and other constraints, such a problem is called a linear complementarity problem (under some proper transformation), for which we can build parameter-free MILP models. More details can be found in Appendix \ref{App-D-Sect04-02}.
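The logic of (\ref{eq:App-B-LCP-MILP-3}) can be verified componentwise: for given values of $s_i = a_i x - b_i$ and $y_i$ within the big-M range, a feasible binary choice $z_i$ exists if and only if the pair is complementary. A minimal Python sketch with hypothetical data:

\begin{verbatim}
import numpy as np

M = 10.0
rng = np.random.default_rng(2)

def encodable(s_i: float, y_i: float, M: float) -> bool:
    """Does some z in {0,1} satisfy 0 <= s_i <= M z and 0 <= y_i <= M(1-z)?"""
    return any(0 <= s_i <= M * z and 0 <= y_i <= M * (1 - z)
               for z in (0, 1))

for _ in range(2000):
    s, y = rng.uniform(0, M, size=2)
    # A strictly positive pair violates complementarity and is excluded
    assert encodable(s, y, M) == (s == 0.0 or y == 0.0)
    # Pairs lying on either slice are retained
    assert encodable(0.0, y, M) and encodable(s, 0.0, M)
print("binary selection reproduces 0 <= y _|_ s >= 0 within the big-M range")
\end{verbatim}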
\subsection{Logical Conditions}

Logical conditions are associated with indicator constraints carrying a statement like ``if event A then event B''. An event can be described in many ways. For example, a binary variable $a=1$ can indicate that event A happens, and $a=0$ otherwise; membership $x \in X$ can denote that a system is under secure operating conditions, with $x \notin X$ otherwise. In view of this, the disjunctive constraints discussed above are a special case of logical conditions. In this section, we explain how some common logical conditions can be expressed via linear constraints.

Let A, B, C, $\cdots$, associated with binary variables $a$, $b$, $c$, $\cdots$, represent events. Main results for linearizing typical logical conditions are summarized in Table \ref{tab:App-B-Logic-MILP} \cite{App-MILP-Logic-Cons}. \begin{table}[htp] \small \renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{1em} \caption{Linear form of some typical logic conditions} \centering \begin{tabular}{ll} \hline If A then B & $b \ge a$ \\ Not B & $1-b$ \\ If A then not B & $a+b \le 1$ \\ If not A then B & $a+b \ge 1$ \\ A if and only if B & $a=b$ \\ If A then B and C & $b+c \ge 2a$ \\ If A then B or C & $b+c \ge a$ \\ If B or C then A & $2a \ge b+c$ \\ If B and C then A & $a \ge b+c-1$ \\ If M or more of N events then A & $(N-M+1)a \ge b+c+\cdots-M+1$ \\ \hline \end{tabular} \label{tab:App-B-Logic-MILP} \end{table} Logical AND is formulated as a function of two binary inputs. Specifically, $c = a$ AND $b$ can be expressed as $c=\min\{a,b\}$ or $c=ab$. The former can be linearized via (\ref{eq:App-02-Minimum-PWL}) and the latter through (\ref{eq:App-02-xy-BB}), and both of them render \begin{equation} \label{eq:App-B-Logic-AND-2} c \le a,~ c \le b,~ c \ge a+b-1,~ c \ge 0 \end{equation} For the case with multiple binary inputs, i.e., $c=\min\{c_1,\cdots,c_n\}$ or $c = \prod_{i=1}^n c_i$, (\ref{eq:App-B-Logic-AND-2}) can be generalized as \begin{equation} \label{eq:App-B-Logic-AND-N} c \le c_i,~ \forall i,~ c \ge \sum\nolimits_{i=1}^n c_i -n +1,~ c \ge 0 \end{equation} Logical OR is formulated as a function of two binary inputs, i.e., $c=\max \{a,b\}$, which can be linearized via (\ref{eq:App-02-Maximum-PWL}), yielding \begin{equation} \label{eq:App-B-Logic-OR-2} c \ge a,~ c \ge b,~ c \le a+b,~ c \le 1 \end{equation} For the case with multiple binary inputs, i.e., $c=\max \{c_1,\cdots,c_n\}$, (\ref{eq:App-B-Logic-OR-2}) can be generalized as \begin{equation} \label{eq:App-B-Logic-OR-N} c \ge c_i,~ \forall i,~ c \le \sum\nolimits_{i=1}^n c_i,~ c \le 1 \end{equation}
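Since all variables are binary, each rule in Table \ref{tab:App-B-Logic-MILP} can be verified by exhausting its truth table, as the following Python sketch does for several representative rows.

\begin{verbatim}
from itertools import product

# (name, logical predicate, linear-constraint predicate) pairs
rules2 = [
    ("if A then B",     lambda a, b: (not a) or b,  lambda a, b: b >= a),
    ("if A then not B", lambda a, b: not (a and b), lambda a, b: a + b <= 1),
    ("if not A then B", lambda a, b: a or b,        lambda a, b: a + b >= 1),
]
rules3 = [
    ("if A then B and C", lambda a, b, c: (not a) or (b and c),
                          lambda a, b, c: b + c >= 2 * a),
    ("if A then B or C",  lambda a, b, c: (not a) or b or c,
                          lambda a, b, c: b + c >= a),
    ("if B or C then A",  lambda a, b, c: (not (b or c)) or a,
                          lambda a, b, c: 2 * a >= b + c),
    ("if B and C then A", lambda a, b, c: (not (b and c)) or a,
                          lambda a, b, c: a >= b + c - 1),
]

for name, logic, lin in rules2:
    for a, b in product((0, 1), repeat=2):
        assert bool(logic(a, b)) == bool(lin(a, b)), name
for name, logic, lin in rules3:
    for a, b, c in product((0, 1), repeat=3):
        assert bool(logic(a, b, c)) == bool(lin(a, b, c)), name
print("all listed linearizations match their truth tables")
\end{verbatim}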
\section{Further Reading}
\label{App-B-Sect04}

Throughout the half-century-long research and development, MILP has become an indispensable and unprecedentedly powerful modeling tool in mathematics and engineering, thanks to the advent of efficient solvers that encapsulate many state-of-the-art techniques \cite{App-MILP-History}. This chapter aims to provide an overview of formulation recipes that transform complicated conditions into MILPs, so as to take full advantage of off-the-shelf solvers. The paradigm is able to deal with a fairly broad class of hard optimization problems. Readers who are interested in the strength of MILP models may find in-depth discussions in \cite{App-MILP-Strength} and references therein. For those interested in the PWL approximation of nonlinear functions, we refer to \cite{App-MILP-PWL-Function-1,App-MILP-PWL-Function-2,App-MILP-PWL-Function-3} and references therein for various models and methods. The most promising one may be the convex combination model with a logarithmic number of binary variables, whose implementation has been thoroughly discussed in \cite{App-MILP-SOS2-LogCC-1,App-MILP-SOS2-LogCC-2,App-MILP-SOS2-LogCC-3}. Those who are interested in the polyhedral study of single-term bilinear sets and MILP-based methods for bilinear programs may find extensive information in \cite{App-MILP-MIBLP-1,App-MILP-MIBLP-2} and references therein. For those who need more knowledge about mathematical programs with disjunctive constraints, in which constraint activity is controlled by logical conditions, we recommend \cite{App-MILP-Disj-Review}; specifically, the choice of the big-M parameter is discussed in \cite{App-MILP-Disj-Big-M}. For those who wish to learn more about integer programming techniques, we refer to \cite{App-MILP-Union} for the formulation of unions of polyhedra, \cite{App-MILP-Representability} for the representability of MILP, and \cite{App-MILP-MICQP,App-MILP-Duality} for the more general mixed-integer conic programming as well as its duality theory. To the best of our knowledge, dissertation \cite{App-MILP-Dissertation-MIT} offers the most comprehensive and in-depth study on MILP approximation of non-convex optimization problems. State-of-the-art MILP formulations which balance problem size, strength, and branching behavior are developed and compared, including those mentioned above. The discussions in \cite{App-MILP-Dissertation-MIT} offer insights on designing efficient MILP models that perform extremely well in practice, despite their theoretically non-polynomial complexity in the worst case.

\input{ap02ref}

\chapter{Basics of Robust Optimization}
\label{App-C}

Real-world decision-making models often involve unknown data. Data uncertainty may come from inexact measurements or forecast errors. For example, in power system operation, the wind power generation and system loads are hardly known exactly at the time when the generation schedule must be made; in inventory management, market price and demand volatility are the main sources of financial risk. In fact, optimal solutions to mathematical programming problems can be highly sensitive to parameter perturbations \cite{RO-Detail-1}. The optimal solution to the nominal problem may be highly suboptimal or even infeasible in reality due to parameter inaccuracy. Consequently, there is a great need for a systematic methodology that is capable of quantifying the impact of data inexactness on the solution quality, and is able to produce robust solutions that are insensitive to data uncertainty.

Optimization under uncertainty has been a focus of the operational research community for a long time. Two approaches are prevalent for dealing with uncertain data in optimization, namely stochastic optimization (SO) and robust optimization (RO). They differ in the ways of modeling uncertainty. The former assumes that the true probability distribution of uncertain data is known or can be estimated from available information, and minimizes the expected cost in its objective function. SO provides strategies that are optimal in the statistical sense. However, the probability distribution itself may be inexact owing to the lack of data, and the performance of the optimal solution can be sensitive to the probability distribution chosen in the SO model. The latter assumes that the uncertain data reside in a pre-defined uncertainty set, and minimizes the cost in the worst-case scenario in its objective function. Constraint violation is not allowed for any data realization in the uncertainty set. RO is popular because it relies on simple data and is distribution-free. From the computational perspective, it is equivalent to convex optimization problems for a variety of uncertainty sets and problem types; for the intractable cases, it can be solved via systematic iteration algorithms.
For more technical details about RO, we refer to \cite{RO-Detail-1, RO-Detail-2, RO-Guide,RO-Convex}, the survey articles \cite{RO-Survey,RO-Survey-2018}, and many references therein. Recently, distributionally robust optimization (DRO), an emerging methodology that inherits the advantages of SO and RO, has attracted wide attention. In DRO, uncertain data are described by probability distribution functions which are not known exactly and are restricted to a functional ambiguity set constructed from available information and structural properties. The expected cost associated with the worst-case distribution is minimized, and the probability of constraint violation can be controlled via robust chance constraints. In many cases, the DRO can be reformulated as a convex optimization problem, or solved iteratively via convex optimization. RO and DRO are young and active research fields, and the challenge is to explore tractable reformulations for various kinds of uncertainties. SO is a relatively mature technique, and current research is focusing on probabilistic modeling of uncertainty, chance constrained programming, multi-stage SO such as stochastic dual dynamic programming, as well as more efficient computational methods.

There are several ways to categorize robust optimization methods. According to how uncertainty is dealt with, they can be classified into static (single-stage) RO and dynamic (multi-stage) RO. According to how uncertainty is modeled, they can be divided into RO and DRO. In the latter category, the ambiguity set for the probability distribution can be further classified into the moment-based one and the divergence-based one. We will shed light on each of them in this chapter. Specifically, RO will be discussed in Sect. \ref{App-C-Sect01} and Sect. \ref{App-C-Sect02}, moment-based DRO will be presented in Sect. \ref{App-C-Sect03}, and divergence-based DRO, also called robust SO, will be illuminated in Sect. \ref{App-C-Sect04}. In the operations research community, DRO and robust SO refer to the same thing: optimization problems with distributional uncertainty, and the terms can be used interchangeably, although DRO is preferred by the majority of researchers. In this book, we intentionally distinguish them, because the moment ambiguity set can be set up with little information and is more akin to RO, while the divergence-based set relies on an empirical distribution (which may be inexact), so it is more similar to SO. In fact, the gap between SO and RO has been significantly narrowed by recent research progress in data-driven optimization.

\section{Static Robust Optimization}
\label{App-C-Sect01}

For the purpose of clarity, we begin explaining the paradigm of static RO with LPs, the best known and most frequently used class of mathematical programming problems in engineering applications, for which it is relatively easy to derive tractable robust counterparts with various uncertainty sets. Nevertheless, most results can be readily generalized to robust conic programs. The general form of an LP with uncertain parameters can be written as follows: \begin{equation} \label{eq:App-03-SRO-ULP} \min_x \left\{ c^T x ~\middle|~ Ax \le b \right\}:(A,b,c) \in W \end{equation} where $x$ is the decision variable, $A$, $b$, $c$ are coefficient matrices with compatible dimensions, and $W$ denotes the set of all possible data realizations, constructed from available information or historical data, or merely a rough estimation.
Without loss of generality, we can assume that the objective function and the constraint right-hand side in (\ref{eq:App-03-SRO-ULP}) are certain, and uncertainty only exists in the coefficient matrix $A$. To see this, observe that problem (\ref{eq:App-03-SRO-ULP}) can be written in epigraph form \begin{equation*} \min_{t,x,y} \{t~|~c^T x - t \le 0,~ Ax-by \le 0,~ y=1 \}: (A,b,c) \in W \end{equation*} By introducing the additional scalar variables $t$ and $y$, the coefficients appearing in the objective function and constraint right-hand side become constants. With this transformation, it will be more convenient to define the feasible solution and the optimal solution to (\ref{eq:App-03-SRO-ULP}). Hereinafter, we neglect the uncertainty in the cost coefficient vector $c$ and the constraint right-hand side vector $b$ without particular mention, and consider the problem \begin{equation} \label{eq:App-03-SRO-LP-UA} \min_x \left\{ c^T x ~\middle|~ Ax \le b \right\}: A \in W \end{equation} Next we present solution concepts of static RO under uncertain data.

\subsection{Basic Assumptions and Formulations}
\label{App-C-Sect01-01}

Basic assumptions and definitions in static RO \cite{RO-Detail-1} are summarized as follows. \begin{assumption} \label{ap:App-03-SRO-1} Vector $x$ represents ``here-and-now'' decisions: they should be determined without knowing the exact values of the uncertain parameters. \end{assumption} \begin{assumption} \label{ap:App-03-SRO-2} Once the decisions are made, the constraints must be feasible when the actual data is within the uncertainty set $W$, and may be either feasible or not when the actual data steps outside the uncertainty set $W$. \end{assumption} These assumptions bring about the definition of a feasible solution of (\ref{eq:App-03-SRO-LP-UA}). \begin{definition} \label{df:App-03-SRO-Feasibility} A vector $x$ is called a robust feasible solution to (\ref{eq:App-03-SRO-LP-UA}) if the following condition holds: \begin{equation} \label{eq:App-03-SRO-Robust-Fea} A x \le b,~ \forall A \in W \end{equation} \end{definition} To prescribe an optimal solution, the worst-case criterion is widely accepted in RO studies, leading to the following definition: \begin{definition} \label{df:App-03-SRO-Optimality} The robust optimal value of (\ref{eq:App-03-SRO-LP-UA}) is the minimum value of the objective function over all possible $x$ that satisfy (\ref{eq:App-03-SRO-Robust-Fea}). \end{definition} After we have agreed on the meanings of feasibility and optimality of (\ref{eq:App-03-SRO-LP-UA}), we can seek the optimal solution among all robust feasible solutions to the problem. Now, the robust counterpart (RC) of the uncertain LP (\ref{eq:App-03-SRO-LP-UA}) can be described as: \begin{equation} \label{eq:App-03-SRO-RC} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & a^T_i x \le b_i,~ \forall i,~ \forall A \in W \end{aligned} \end{equation} where $a^T_i$ is the $i$-th row of matrix $A$, and $b_i$ is the $i$-th element of vector $b$. We have two observations on the formulation of the robust constraints in (\ref{eq:App-03-SRO-RC}). \begin{proposition} \label{pr:App-03-Invariance-1} Robust feasible solutions of (\ref{eq:App-03-SRO-RC}) remain the same if we replace $W$ with the Cartesian product $\hat W = W_1 \times \cdots \times W_n$, where $W_i = \{a_i | \exists A \in W \}$ is the projection of $W$ on the coefficient space of the $i$-th row of $A$. \end{proposition} This is called the constraint-wise property in static RO \cite{RO-Detail-1}.
The reason is \begin{equation} a^T_i x \le b_i,~ \forall A \in W~ \Leftrightarrow~ \max_{A \in W} a^T_i x \le b_i \Leftrightarrow~ \max_{a_i \in W_i} a^T_i x \le b_i \notag \end{equation} As a result, problem (\ref{eq:App-03-SRO-RC}) comes down to \begin{equation} \label{eq:App-03-SRO-RC-Hull} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & a^T_i x \le b_i,\forall a_i \in W_i,~ \forall i \end{aligned} \end{equation} Proposition \ref{pr:App-03-Invariance-1} seems rather counter-intuitive. One may perceive that (\ref{eq:App-03-SRO-RC}) will be less conservative with uncertainty set $W$ since it is a subset of $\hat W$. In fact, later we will see that this intuition is true for adjustable robustness. \begin{proposition} \label{pr:App-03-Invariance-2} Robust feasible solutions of (\ref{eq:App-03-SRO-RC-Hull}) remain the same if we replace $W_i$ with its convex hull ${\rm conv}(W_i)$. \end{proposition} To see this, let the vectors $a^j_i$, $j=1,2,\cdots$ be the extreme points of $W_i$; then any point $\bar a_i \in$ conv$(W_i)$ can be expressed by $\bar a_i = \sum_j \lambda_j a^j_i$, where $\lambda_j \ge 0$, $\sum_j \lambda_j = 1$ are weight coefficients. If $x$ is feasible for all extreme points $a^j_i$, i.e., $(a^j_i)^T x \le b_i$, $\forall j$, then \begin{equation} \bar a^T_i x = \sum_j \lambda_j (a^j_i)^T x \le \sum_j \lambda_j b_i = b_i \notag \end{equation} which indicates that the constraint remains intact for all uncertain parameters residing in conv$(W_i)$. Combining Propositions \ref{pr:App-03-Invariance-1} and \ref{pr:App-03-Invariance-2}, we can conclude that the robust counterpart of an uncertain LP with a certain objective remains intact even if the sets $W_i$ of uncertain data are extended to their closed convex hulls, and $W$ to the Cartesian product of the resulting sets. In other words, we can make a further assumption on the uncertainty set without loss of generality. \begin{assumption} \label{ap:App-03-SRO-3} The uncertainty set $W$ is the Cartesian product of closed and convex sets. \end{assumption}

\subsection{Tractable Reformulations}
\label{App-C-Sect01-02}

The constraint-wise property enables us to analyze the robustness of each constraint $a^T_i x \le b_i$, $\forall a_i \in W_i$ separately. Without particular mention, we will omit the subscript $i$ for brevity. To facilitate the discussion, it is convenient to parameterize the uncertain vector as $a = \bar a + P \zeta$, where $\bar a$ is the nominal value of $a$, $P$ is a constant matrix, and $\zeta$ is a new variable that is uncertain. This section will focus on how to derive tractable reformulations for robust constraints of the form \begin{equation} \label{eq:App-03-SRO-RC-Single} (\bar a + P \zeta)^T x \le b,~ \forall \zeta \in Z \end{equation} where $Z$ is the uncertainty set of variable $\zeta$. For the same reasons, we can assume that $Z$ is closed and convex. A ``computationally tractable'' problem means that there are known solution algorithms which can solve the problem with polynomial running time in its input size even in the worst case. It has been shown in \cite{RO-Detail-1} that problem (\ref{eq:App-03-SRO-RC-Hull}) is generally intractable even if each $W_i$ is closed and convex. Nevertheless, tractability can be preserved for some special classes of uncertainty sets. Some well-known results are summarized in the following. Condition (\ref{eq:App-03-SRO-RC-Single}) contains an infinite number of constraints due to the enumeration over set $Z$.
Later we will see that for some particular uncertainty sets, the $\forall$ quantifier as well as the uncertain parameter $\zeta$ can be eliminated by using duality theory, and the resulting constraint in variable $x$ is still convex.

\vspace{12pt}
{\noindent \bf 1. Polyhedral uncertainty set}

We start with a commonly used uncertainty set: a polyhedron \begin{equation} \label{eq:App-03-SRO-US-Polyhedron} Z = \{ \zeta ~|~ D \zeta + q \ge 0\} \end{equation} where $D$ and $q$ are constant matrices with compatible dimensions. To exclude the $\forall$ quantifier for variable $\zeta$, we investigate the worst case of the left-hand side and require \begin{equation} \label{eq:App-03-SRO-Poly-1} \bar a^T x + \max_{\zeta \in Z} (P^T x)^T \zeta \le b \end{equation} For a fixed $x$, the second term is the optimum of an LP in variable $\zeta$. Duality theory of LP says that the following relation holds \begin{equation} \label{eq:App-03-SRO-Poly-2} (P^T x)^T \zeta \le q^T u,~ \forall \zeta \in Z,~ \forall u \in U \end{equation} where $u$ is the dual variable, and $U = \{ u ~|~ D^T u + P^T x =0,~ u \ge 0\}$ is the feasible region of the dual problem. Note the sign convention for $u$: we have actually replaced $u$ with $-u$ in the original dual LP. Therefore, a necessary condition to validate (\ref{eq:App-03-SRO-RC-Single}) is \begin{equation} \label{eq:App-03-SRO-Poly-3} \exists u \in U: \bar a^T x + q^T u \le b \end{equation} It is also sufficient if the second term takes its minimum value over $U$, because strong duality always holds for LPs, i.e., $(P^T x)^T \zeta = q^T u$ is satisfied at the optimal solution. In this regard, (\ref{eq:App-03-SRO-Poly-1}) is equivalent to \begin{equation} \label{eq:App-03-SRO-Poly-4} \bar a^T x + \min_{u \in U}~ q^T u \le b \end{equation} In fact, the ``min'' operator in (\ref{eq:App-03-SRO-Poly-4}) can be omitted in an RC optimization problem that minimizes the objective function, which then renders polyhedral constraints, although (\ref{eq:App-03-SRO-Poly-1}) is not given in closed form and seems non-convex. In summary, the RC problem of an uncertain LP with polyhedral uncertainty \begin{equation} \label{eq:App-03-SRO-RC-LP-1} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & (\bar a_i + P_i \zeta_i)^T x \le b_i,~ \forall \zeta_i \in Z_i,~\forall i\\ & Z_i = \{ \zeta_i ~|~ D_i \zeta_i + q_i \ge 0\},~ \forall i \end{aligned} \end{equation} can be equivalently formulated as \begin{equation} \label{eq:App-03-SRO-RC-LP-2} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & \bar a^T_i x + q^T_i u_i \le b_i,~\forall i \\ & D^T_i u_i + P^T_i x = 0,~ u_i \ge 0,~\forall i \end{aligned} \end{equation} which is still an LP.
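The following Python sketch assembles and solves (\ref{eq:App-03-SRO-RC-LP-2}) for a tiny hypothetical instance with a box-shaped $Z$ using \texttt{scipy.optimize.linprog}, and then confirms that the worst-case left-hand side indeed does not exceed $b$.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative instance (hypothetical data):
#   min -x1 - x2  s.t.  (abar + I zeta)^T x <= 1 for all |zeta_j| <= 0.2, x >= 0
abar = np.array([1.0, 1.0])
P = np.eye(2)
b = 1.0
D = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
q = np.array([0.2, 0.2, 0.2, 0.2])       # D zeta + q >= 0 <=> |zeta_j| <= 0.2
c = np.array([-1.0, -1.0])

# Robust counterpart variables: v = (x1, x2, u1..u4), with x >= 0, u >= 0
c_rc = np.concatenate([c, np.zeros(4)])
A_ub = np.concatenate([abar, q])[None, :]          # abar^T x + q^T u <= b
A_eq = np.hstack([P.T, D.T])                       # P^T x + D^T u = 0
res = linprog(c_rc, A_ub=A_ub, b_ub=[b], A_eq=A_eq, b_eq=np.zeros(2),
              bounds=[(0, None)] * 6)
x = res.x[:2]

# Worst case of (abar + P zeta)^T x over the box is abar^T x + 0.2*||P^T x||_1
worst = abar @ x + 0.2 * np.abs(P.T @ x).sum()
print("robust x =", x.round(4), "worst-case lhs =", round(worst, 4), "<= b =", b)
\end{verbatim}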
\vspace{12pt}
{\noindent \bf 2. Cardinality constrained uncertainty set}

The cardinality constrained uncertainty set is a special class of polyhedral uncertainty set which incorporates a budget constraint; it is defined as follows \begin{equation} \label{eq:App-03-SRO-US-Card} Z({\rm \Gamma}) = \left\{ \zeta ~\middle|~ -1 \le \zeta_j \le 1,~ \forall j,~ \sum_j |\zeta_j| \le \rm \Gamma \right \} \end{equation} where $\rm \Gamma$ is called the budget of uncertainty \cite{RO-Price-Robust}. Motivated by the fact that all entries $\zeta_j$ are unlikely to reach $1$ or $-1$ simultaneously, the budget constraint controls the total data deviation from the forecast values. In other words, the decision maker can achieve a compromise between the level of solution robustness and the optimal cost by adjusting the value of $\rm \Gamma$, which should be less than the dimension of $\zeta$; otherwise the budget constraint will be redundant. Although the cardinality constrained uncertainty set $Z({\rm \Gamma})$ is essentially a polyhedron, the number of its facets, i.e., the number of linear constraints in (\ref{eq:App-03-SRO-US-Polyhedron}), grows exponentially in the dimension of $\zeta$, leading to a huge and dense coefficient matrix for the uncertainty set. To circumvent this difficulty, we can lift the set into a higher dimensional space as follows by introducing auxiliary variables \begin{equation} \label{eq:App-03-SRO-US-Card-Lift} Z({\rm \Gamma}) = \left\{ \zeta,\sigma ~\middle|~ - \sigma_j \le \zeta_j \le \sigma_j,~ \sigma_j \le 1,~\forall j,~ \sum_j \sigma_j \le \rm \Gamma \right \} \end{equation} The first inequality naturally implies $\sigma_j \ge 0$, $\forall j$. It is easy to see the equivalence of (\ref{eq:App-03-SRO-US-Card}) and (\ref{eq:App-03-SRO-US-Card-Lift}), and the numbers of variables and constraints in the latter grow linearly in the dimension of $\zeta$. Following a similar paradigm, certifying constraint robustness with a cardinality constrained uncertainty set requires the optimal value of the following LP in the variables $\zeta$ and $\sigma$ representing the uncertainty \begin{equation} \label{eq:App-03-SRO-Card-1} \begin{aligned} \max_{\zeta,\sigma}~~ (P^T & x)^T \zeta \\ \mbox{s.t.}~~ - \zeta_j - \sigma_j & \le 0,~ \forall j : u^n_j \\ \zeta_j - \sigma_j & \le 0,~ \forall j : u^m_j \\ \sigma_j & \le 1,~ \forall j : u^b_j \\ \sum_j \sigma_j & \le {\rm \Gamma} : u_r \end{aligned} \end{equation} where $u^n_j$, $u^m_j$, $u^b_j$, $\forall j$, and $u_r$ following a colon are the dual variables associated with each constraint. The dual problem of (\ref{eq:App-03-SRO-Card-1}) is given by \begin{equation} \label{eq:App-03-SRO-Card-2} \begin{aligned} \min_{u^n,u^m,u^b,u_r}~~ & u_r {\rm \Gamma} + \sum_j u^b_j \\ \mbox{s.t.}~~ & u^m_j - u^n_j = (P^T x)_j,~ \forall j \\ & -u^m_j - u^n_j + u^b_j + u_r = 0,~ \forall j \\ & u^m_j,~ u^n_j,~ u^b_j \ge 0,~ \forall j,~ u_r \ge 0 \end{aligned} \end{equation} In summary, the RC problem of an uncertain LP with cardinality constrained uncertainty \begin{equation} \label{eq:App-03-SRO-RC-Card-1} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & (\bar a_i + P_i \zeta_i)^T x \le b_i,~ \forall \zeta_i \in Z_i({\rm \Gamma}_i),~\forall i \end{aligned} \end{equation} can be equivalently formulated as \begin{equation} \label{eq:App-03-SRO-RC-Card-2} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & \bar a^T_i x + u_{ri} {\rm \Gamma}_i + \sum_j u^b_{ij} \le b_i,~\forall i \\ & u^m_{ij} - u^n_{ij} = (P^T_i x)_j,~ \forall i,~ \forall j \\ &-u^m_{ij} - u^n_{ij} + u^b_{ij} + u_{ri} = 0,~ \forall i,~ \forall j \\ & u^m_{ij},~ u^n_{ij},~ u^b_{ij} \ge 0,~ \forall i,~ \forall j,~ u_{ri} \ge 0,~ \forall i \end{aligned} \end{equation} which is still an LP.
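For the budgeted set $Z({\rm \Gamma})$, the worst case of $(P^T x)^T \zeta$ has a simple combinatorial form: fill the budget with the largest entries of $|P^T x|$. The following Python sketch (hypothetical data) checks this against an LP over the lifted set (\ref{eq:App-03-SRO-US-Card-Lift}).

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, Gamma = 6, 2.5
a = rng.normal(size=n)               # stands for P^T x in the text

# Direct worst case: put full weight on the largest |a_j| up to the budget
mag = np.sort(np.abs(a))[::-1]
k = int(np.floor(Gamma))
direct = mag[:k].sum() + (Gamma - k) * (mag[k] if k < n else 0.0)

# Same value from an LP over the lifted polyhedron in variables (zeta, sigma)
c = np.concatenate([-a, np.zeros(n)])        # linprog minimizes, so negate
I = np.eye(n)
A_ub = np.vstack([
    np.hstack([-I, -I]),                     # -zeta - sigma <= 0
    np.hstack([ I, -I]),                     #  zeta - sigma <= 0
    np.hstack([np.zeros((n, n)), I]),        #  sigma <= 1
    np.concatenate([np.zeros(n), np.ones(n)])[None, :],   # sum sigma <= Gamma
])
b_ub = np.concatenate([np.zeros(2 * n), np.ones(n), [Gamma]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
assert np.isclose(-res.fun, direct)
print("worst case over Z(Gamma):", round(direct, 4))
\end{verbatim}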
\vspace{12pt}
{\noindent \bf 3. Several other uncertainty sets}

Equivalent convex formulations of the uncertain constraint (\ref{eq:App-03-SRO-RC-Single}) with some other uncertainty sets are summarized in Table \ref{tab:App-03-SRO-RCs} \cite{RO-Guide}. These outcomes are derived using methods similar to those described previously. \begin{table}[!t] \scriptsize \renewcommand{\arraystretch}{2.0} \renewcommand{\tabcolsep}{1em} \caption{Equivalent convex formulations with different uncertainty sets} \centering \begin{tabular}{|c|c|c|c|} \hline Uncertainty & $Z$ & Robust reformulation & Tractability \\ \hline Box & $\|\zeta\|_\infty \le 1$ & $\bar a^T x + \| P^T x \|_1 \le b$ & LP\\ Ellipsoidal & $\|\zeta\|_2 \le 1$ & $\bar a^T x+\| P^T x \|_2 \le b$ & SOCP\\ $p$-norm & $\|\zeta\|_p \le 1$ & $\bar a^T x + \| P^T x \|_q \le b$ & Convex program \\ Proper cone & $D \zeta + q \in K$ & $ \left\{ \begin{lgathered} \bar a^T x + q^T u \le b \\ D^T u + P^T x = 0 \\ u \in K^* \end{lgathered} \right.$ & Conic LP \\ Convex constraints & $h_k(\zeta) \le 0, \forall k$ & $ \left\{ \begin{lgathered} \bar a^T x + \sum_k \lambda_k h^*_k \left( \frac{u^k}{\lambda_k} \right) \le b \\ \sum_k u^k = P^T x \\ \lambda_k \ge 0, \forall k \end{lgathered} \right.$ & Convex program \\ \hline \end{tabular} \label{tab:App-03-SRO-RCs} \end{table} Table \ref{tab:App-03-SRO-RCs} includes three general cases: the $p$-norm uncertainty, the conic uncertainty, and general convex uncertainty. In the $p$-norm case, H{\"o}lder's inequality is used, i.e.: \begin{equation} (P^T x)^T \zeta \le \|P^T x\|_q \| \zeta \|_p \end{equation} where $\|\cdot\|_p$ and $\|\cdot\|_q$ with $p^{-1} + q^{-1} = 1$ are a pair of dual norms. Since the norm function of any order is convex \cite{CVX-Book-Boyd}, the resulting RC is a convex program. Moreover, if $q$ is a positive rational number, the $q$-order cone constraint can be represented by a set of SOC inequalities \cite{SOCP-p-norm}, which is computationally more friendly. Box ($\infty$-norm) and ellipsoidal (2-norm) uncertainty sets are special kinds of $p$-norm ones. In the general conic case, conic duality theory \cite{CVX-Book-Ben} is used; $K^*$ stands for the dual cone of $K$, and polyhedral uncertainty is a special kind of this case where $K$ is the nonnegative orthant. In the general convex case, Fenchel duality, a basic theory in convex analysis, is needed. The notation $h^*$ stands for the convex conjugate function, i.e., $h^*(x) = \sup_y x^T y -h(y)$. The detailed proofs of the RC reformulations and more examples can be found in \cite{SRO-CVX-RCs}.
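The $p$-norm row of Table \ref{tab:App-03-SRO-RCs} can be illustrated numerically: the sketch below (hypothetical data) samples points of the unit $p$-norm ball and confirms that $c^T \zeta$ never exceeds $\|c\|_q$, which is attained by the analytic maximizer.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
p = 3.0
q = p / (p - 1.0)                    # Hoelder conjugate: 1/p + 1/q = 1
c = rng.normal(size=5)               # stands for P^T x in the text

cq = np.linalg.norm(c, ord=q)
# Random feasible points never beat the dual norm ...
for _ in range(10000):
    z = rng.normal(size=5)
    z /= max(1.0, np.linalg.norm(z, ord=p))   # project into ||zeta||_p <= 1
    assert c @ z <= cq + 1e-9
# ... and the analytic maximizer attains it with ||zeta||_p = 1
z_star = np.sign(c) * (np.abs(c) / cq) ** (q - 1.0)
assert np.isclose(np.linalg.norm(z_star, ord=p), 1.0)
assert np.isclose(c @ z_star, cq)
print("max over ||zeta||_p <= 1 of c^T zeta equals ||c||_q =", round(cq, 4))
\end{verbatim}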
The above analysis focuses on the situation in which the problem functions are linear in the decision variables and the problem data are affine in the uncertain parameters, as in the form $a = \bar a + P \zeta$. For robust quadratic optimization, robust semidefinite optimization, robust conic optimization, and robust discrete optimization, in which the optimization problem is nonlinear or discrete, please refer to \cite{RO-Detail-1} and \cite{RO-Detail-2}; for quadratic-type uncertainty, please refer to \cite{RO-Detail-1} (Sect. 1.4) and \cite{SRO-CVX-RCs}.

\subsection{Formulation Issues}
\label{App-C-Sect01-03}

To help practitioners build a well-defined and easy-to-solve robust optimization model, some important modeling issues and deeper insights are discussed in this section.

\vspace{12pt}
{\noindent \bf 1. Choosing the uncertainty set}

Since a robust solution remains feasible as long as the uncertain data do not step outside the uncertainty set, the level of robustness mainly depends on the shape and size of the uncertainty set. The more reliable the solution, the higher the cost. One may wish to seek a trade-off between reliability and economy. This inspires the development of smaller uncertainty sets with a probabilistic guarantee that constraint violation is unlikely to happen. Such guarantees are usually described via a chance constraint \begin{equation} \label{eq:App-03-SRO-Chance-Cons} \Pr\nolimits _\zeta [a(\zeta)^T x \le b] \ge 1 - \varepsilon \end{equation} For $\varepsilon = 0$, chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) is protected in the traditional sense of RO. When $\varepsilon >0$, it becomes challenging to derive a tractable reformulation for (\ref{eq:App-03-SRO-Chance-Cons}), especially when the probability distribution of the uncertain data is unclear or inaccurate. In fact, this issue is closely related to the DRO that will be discussed later on. Here we provide some simple results which help the decision maker choose the parameters of the uncertainty set. It is revealed that if $\mathbb E [\zeta] = 0$, the components of $\zeta$ are independent, and the uncertainty set takes the form \begin{equation} \label{eq:App-03-SRO-US-BOX-ELP} Z = \{\zeta ~|~ \|\zeta\|_2 \le {\rm \Omega},~ \|\zeta\|_\infty \le 1 \} \end{equation} then chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) holds with a probability of at least $1-\exp(-{\rm \Omega^2}/2)$ (see \cite{RO-Detail-1}, Proposition 2.3.3). Moreover, if the uncertainty set takes the form \begin{equation} \label{eq:App-03-SRO-US-BOX-Budget} Z = \{\zeta ~|~ \|\zeta\|_1 \le {\rm \Gamma},~ \|\zeta\|_\infty \le 1 \} \end{equation} then chance constraint (\ref{eq:App-03-SRO-Chance-Cons}) holds with a probability of at least $1-\exp(-{\rm \Gamma^2}/2L)$, where $L$ is the dimension of $\zeta$ (see \cite{RO-Detail-1}, Proposition 2.3.4, and \cite{RO-Price-Robust}). It has also been proposed to construct uncertainty sets based on the central limit theorem. If each component of $\zeta$ is independent and identically distributed with mean $\mu$ and variance $\sigma^2$, the uncertainty set can be built as \cite{SRO-US-CLT} \begin{equation} \label{eq:App-03-SRO-US-CLT} Z = \left\{ \zeta ~\middle|~ \left|\sum_{i=1}^L \zeta_i - L \mu \right| \le \rho \sqrt{L} \sigma,~ \|\zeta\|_\infty \le 1 \right\} \end{equation} where parameter $\rho$ is used to control the probability guarantee. Variations of this formulation can take other distributional information into account, such as data correlation and long-tail effects. This is a special kind of polyhedral uncertainty set; without the box constraint $\|\zeta\|_\infty \le 1$ it would be unbounded for $L > 1$, since the components could be arbitrarily large as long as their summation remained relatively small, and unboundedness may prevent establishing tractable RCs. Additional references are introduced in the further reading part.
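The guarantee attached to (\ref{eq:App-03-SRO-US-BOX-ELP}) is typically conservative. A quick Monte Carlo sketch in Python (hypothetical data, with uniform perturbations merely as one admissible zero-mean bounded distribution) compares the empirical violation frequency of the inner inequality $\zeta^T c > {\rm \Omega} \|c\|_2$ with the bound $\exp(-{\rm \Omega}^2/2)$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
L, Omega, N = 20, 2.0, 200_000
c = rng.normal(size=L)          # a fixed coefficient vector (P^T x in the text)

# Independent, zero-mean perturbations bounded in [-1, 1]
zeta = rng.uniform(-1.0, 1.0, size=(N, L))
violation = (zeta @ c > Omega * np.linalg.norm(c)).mean()
bound = np.exp(-Omega**2 / 2.0)
print(f"empirical violation {violation:.2e} <= theoretical bound {bound:.2e}")
assert violation <= bound
\end{verbatim}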
\vspace{12pt}
{\noindent \bf 2. How to solve a problem without a clear tractable reformulation?}

The existence of a tractable reformulation for a static RO problem largely depends on the type of the uncertainty set. If the robust counterpart cannot be written as a tractable convex program, a smart remedy is to use an adaptive scenario generation procedure: first solve the problem with a smaller uncertainty set $Z_S$, a subset of the original set $Z$, for which a tractable reformulation is known. If the optimal solution $x^*$ is robust against all scenarios in $Z$, it is also an optimal solution of the original problem. Otherwise, we have to identify a scenario $\zeta^* \in Z$ that leads to the most severe violation, which can be done by solving \begin{equation} \label{eq:App-03-SRO-Scen-Gen-Sub} \max~ \left\{ (P^T x^*)^T \zeta ~|~{\zeta \in Z} \right\} \end{equation} where $Z$ is a closed and convex set as validated in Assumption \ref{ap:App-03-SRO-3}, and then append a cutting plane \begin{equation} \label{eq:App-03-SRO-Scenario-Cut} a(\zeta^*)^T x \le b \end{equation} to the reformulated problem. (\ref{eq:App-03-SRO-Scenario-Cut}) removes $x$ that would cause infeasibility in scenario $\zeta^*$, so it is called a feasibility cut. It is linear and does not alter tractability. Then the updated problem is solved again. According to Proposition \ref{pr:App-03-Invariance-2}, the new solution $x^*$ will be robust for uncertain data in the convex hull of $Z_S \cup \{\zeta^*\}$. The above procedure continues until robustness is certified over the original uncertainty set $Z$. This simple approach often converges within a small number of iterations. Its advantage is that tractability is preserved. When we choose $Z_S = \{\zeta^0\}$, where $\zeta^0$ is the nominal scenario or forecast, it can be more efficient than using convex reformulations, because only LPs (whose sizes are almost equal to that of the problem without uncertainty, and grow slowly) and simple convex programs (\ref{eq:App-03-SRO-Scen-Gen-Sub}) are solved; see \cite{SRO-Cut-Generation} for a comparison. This paradigm is an essential strategy for solving the adjustable RO problems in the next section.
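The following Python sketch implements this loop for a small hypothetical instance with an ellipsoidal uncertainty set, for which the pricing problem (\ref{eq:App-03-SRO-Scen-Gen-Sub}) has the closed-form solution $\zeta^* = P^T x / \|P^T x\|_2$.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (hypothetical data):
#   min -x1 - 2 x2  s.t.  (abar + P zeta)^T x <= b
#   for all ||zeta||_2 <= 1, with 0 <= x <= 10
c = np.array([-1.0, -2.0])
abar = np.array([1.0, 1.0])
P = 0.3 * np.eye(2)
b = 1.0

scenarios = [np.zeros(2)]                # start from the nominal scenario
for it in range(50):
    # Master problem: enforce the constraint only for sampled scenarios
    A_ub = np.array([abar + P @ z for z in scenarios])
    res = linprog(c, A_ub=A_ub, b_ub=np.full(len(scenarios), b),
                  bounds=[(0.0, 10.0)] * 2)
    x = res.x
    # Pricing problem: worst zeta has the closed form P^T x / ||P^T x||_2
    g = P.T @ x
    z_star = g / np.linalg.norm(g) if np.linalg.norm(g) > 0 else np.zeros(2)
    violation = (abar + P @ z_star) @ x - b
    if violation <= 1e-8:
        break
    scenarios.append(z_star)             # feasibility cut for worst scenario
print(f"converged in {it+1} iterations, x = {x.round(4)}")
\end{verbatim}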
\vspace{12pt}
{\noindent \bf 3. How to deal with equality constraints?}

Although the theory of static RO is relatively mature, it encounters difficulties in dealing with equality constraints. For example, consider $x + a = 1$ where $a \in [0,0.1]$ is uncertain. One can seldom find a solution that makes the equality hold true for multiple values of $a$. The problem remains if the equality is written as a pair of opposite inequalities. In fact, this issue is inevitable in the static setting. In addition, this limitation leads to completely different robust counterpart formulations for originally equivalent deterministic problems. Consider the inequality $a x \le 1$, which is equivalent to $a x + s = 1,~ s \ge 0$. Suppose $a$ is uncertain and belongs to the interval $[1,2]$; their respective robust counterparts are given by \begin{equation} \label{eq:App-03-SRO-Example-1} a x \le 1,~ \forall a \in [1,2] \end{equation} and \begin{equation} \label{eq:App-03-SRO-Example-2} a x + s = 1,~ \forall a \in [1,2],~ s \ge 0 \end{equation} The feasible set for (\ref{eq:App-03-SRO-Example-1}) is $x \le 1/2$, but it is $x = 0$ for (\ref{eq:App-03-SRO-Example-2}). In view of this difference, a static RO model should avoid using slack variables in constraints with uncertain parameters.

Sometimes, the optimization problem may contain state variables which can respond to parameter changes by adjusting their values. In such circumstances, equality constraints can be used to eliminate state variables. Nevertheless, such an action may lead to a problem that contains nonlinear uncertainties, which is challenging to solve. An example is taken from \cite{RO-Guide} to illustrate this issue. The constraints are \begin{equation} \label{eq:App-03-SRO-Example-3} \begin{lgathered} \zeta_1 x_1 + x_2 + x_3 = 1 \\ x_1 + x_2 + \zeta_2 x_3 \le 5 \end{lgathered} \end{equation} where $\zeta_1$ and $\zeta_2$ are uncertain. If $x_1$ is a state variable and $\zeta_1 \ne 0$, substituting $x_1 = (1 - x_2 - x_3)/\zeta_1$ into the second inequality results in \begin{equation} \left( 1-\frac{1}{\zeta_1} \right) x_2 + \left( \zeta_2 - \frac{1}{\zeta_1} \right) x_3 \le 5 - \frac{1}{\zeta_1} \notag \end{equation} in which the uncertainty becomes nonlinear in the coefficients. If $x_2$ is a state variable, substituting $x_2 = 1 - \zeta_1 x_1 - x_3$ into the inequality yields \begin{equation} (1 - \zeta_1) x_1 + (\zeta_2 -1) x_3 \le 4 \notag \end{equation} in which the uncertainty remains linear in the coefficients. If $x_3$ is a state variable, substituting $x_3 = 1 - \zeta_1 x_1 - x_2$ into the inequality gives \begin{equation} (1 - \zeta_1 \zeta_2) x_1 + (1 - \zeta_2) x_2 \le 5 - \zeta_2 \notag \end{equation} in which the uncertainty is nonlinear in the coefficients. In conclusion, in the case that $x_2$ is a state variable, the problem is easier from a computational perspective. It is important to note that the physical interpretation of variable elimination is to determine the adjustable variable with exact information on the uncertain data. If no adjustment were allowed in (\ref{eq:App-03-SRO-Example-3}), the only robust feasible solution would be $x_1 = x_3 =0$, $x_2 = 1$, which is rather restrictive. Adjustable RO will be elaborated in detail in the next section.

\vspace{12pt}
{\noindent \bf 4. Pareto efficiency of the robust solution}

The concept of Pareto efficiency in RO problems is proposed in \cite{SRO-Pareto-1}. If the optimal solution under the worst-case data realization is not unique, it is rational to compare the performances of the candidate solutions in non-worst-case scenarios: an alternative solution may give an improvement in the objective value for at least one data scenario without deteriorating the objective performance in all other scenarios. To present the related concepts tersely, we restrict the discussion to the following robust LP with objective uncertainty \begin{equation} \label{eq:App-03-RLP-Obj} \max\{p^T x~|~ \mbox{s.t. } x \in X,~ \forall p \in W \} = \max_{x \in X} \left\{ \min_{p \in W} p^T x \right\} \end{equation} where $W = \{p ~|~ D p \ge d \}$ is a polyhedral uncertainty set for the price vector $p$, and $X = \{ x~|~A x \le b \}$ is the feasible region, which is independent of the uncertainty. More general cases are elaborated in \cite{SRO-Pareto-1}. We consider this form because it makes the related issues easy to discuss, although objective uncertainty can be moved into the constraints. For a given strategy $x$, the worst-case uncertainty is \begin{equation} \label{eq:App-03-RLP-Dual} \min \{ p^T x ~|~ \mbox{s.t.}~ p \in W \} = \max \{ d^T y ~|~ \mbox{s.t.}~ y \in Y \} \end{equation} where $Y =\{ y ~|~ D^T y = x,~ y \ge 0\}$ is the feasible set of the dual variable $y$. Substituting (\ref{eq:App-03-RLP-Dual}) into (\ref{eq:App-03-RLP-Obj}) gives \begin{equation} \label{eq:App-03-RLP-Obj-RC} \max\{d^T y~|~ \mbox{s.t. } D^T y = x,~ y \ge 0,~x \in X\} \end{equation} which is an LP. Its solution $x$ is the robust optimal one to (\ref{eq:App-03-RLP-Obj}), and the worst-case price $p$ can be found by solving the left-hand side LP in (\ref{eq:App-03-RLP-Dual}).
Let $z^{RO}$ be the optimal value of (\ref{eq:App-03-RLP-Obj-RC}); then the set of robust optimal solutions for (\ref{eq:App-03-RLP-Obj}) can be expressed via \begin{equation} \label{eq:App-03-RLP-Obj-XRO} X^{RO} = \{x~|~ x \in X: \exists y \in Y \mbox{ such that } y^T d \ge z^{RO} \} \end{equation} If (\ref{eq:App-03-RLP-Obj-RC}) has a unique optimal solution, $X^{RO}$ is a singleton; otherwise, a Pareto optimal robust solution can be formally defined. \begin{definition} \label{df:App-03-Pareto-Efficiency} \cite{SRO-Pareto-1} $x \in X^{RO}$ is a Pareto optimal solution for problem (\ref{eq:App-03-RLP-Obj}) if there is no other $\bar x \in X$ such that $p^T \bar x \ge p^T x,~ \forall p \in W$ and $\bar p^T \bar x > \bar p^T x$ for some $\bar p \in W$. \end{definition} The terminology ``Pareto optimal'' is borrowed from multi-objective optimization theory: RO problem (\ref{eq:App-03-RLP-Obj}) is viewed as a multi-objective LP with infinitely many objectives, each of which corresponds to a particular $p \in W$. Some interesting problems are elaborated below.

\vspace{12pt}
{\noindent \bf a. Pareto efficiency test}

In general, it is not clear whether $X^{RO}$ contains multiple solutions, at least before a solution $x \in X^{RO}$ is found. To test whether a given robust optimal solution $x$ is Pareto optimal or not, it is proposed to solve a new LP \begin{equation} \label{eq:App-03-PRO-Test-LP} \begin{aligned} \max_{y} ~~ & \bar p^T y \\ \mbox{s.t.} ~~ & y \in W^* \\ & x + y \in X \end{aligned} \end{equation} where $\bar p$ is a relative interior point of the polyhedral uncertainty set $W$, usually set to the nominal scenario, and $W^* = \{ y~|~\exists \lambda:d^T \lambda \ge 0,~ D^T \lambda = y,~ \lambda \ge 0\}$ is the dual cone of $W$. Please refer to Sect. \ref{App-A-Sect02-01} and equation (\ref{eq:App-01-Dual-Polytope-2}) for the dual cone of a polyhedral set. Since $y = 0$, $\lambda = 0$ is always feasible in (\ref{eq:App-03-PRO-Test-LP}), the optimal value is either zero or strictly positive. In the former case, $x$ is a Pareto optimal solution; in the latter case, $\bar x = x + y^*$ dominates $x$ and is itself Pareto optimal, for any $y^*$ that solves LP (\ref{eq:App-03-PRO-Test-LP}) \cite{SRO-Pareto-1}. The interpretation of (\ref{eq:App-03-PRO-Test-LP}) is clear: since $y \in W^*$, $y^T p$ must be non-negative for all $p \in W$. If we can find a $y$ that leads to a strict objective improvement for $\bar p$, then $x+y$ is Pareto optimal. In view of the above interpretation, it is a direct conclusion that for an arbitrary relative interior point $\bar p \in W$, the optimal solutions to the problem \begin{equation} \label{eq:App-03-PRO-Find-LP} \max~ \left\{ \bar p^T x ~|~ x \in X^{RO} \right\} \end{equation} are Pareto optimal.

\vspace{12pt}
{\noindent \bf b. Characterizing the set of Pareto optimal solutions}

It is interesting to characterize the Pareto optimal solution set $X^{PRO}$. After we obtain $z^{RO}$ and $X^{RO}$, solve the following LP \begin{equation} \label{eq:App-03-XPRO} \begin{aligned} \max_{x,y,\lambda} ~~ & \bar p^T y \\ \mbox{s.t.} ~~ & d^T \lambda \ge 0,~ D^T \lambda = y,~ \lambda \ge 0 \\ & x \in X^{RO},~ x + y \in X \end{aligned} \end{equation} and we can conclude that $X^{PRO}=X^{RO}$ if and only if the optimal value of (\ref{eq:App-03-XPRO}) is equal to 0 \cite{SRO-Pareto-1}. If this is true, the decision maker does not have to worry about Pareto efficiency, as any solution in $X^{RO}$ is also Pareto optimal.
More broadly, the set $X^{PRO}$ is shown to be possibly non-convex and contained in the boundary of $X^{RO}$. \vspace{12pt} {\noindent \bf c. Optimization over Pareto optimal solutions} In the case that $X^{PRO}$ is not a singleton, one may consider optimizing a linear secondary objective over $X^{PRO}$, i.e.: \begin{equation} \label{eq:App-03-OPTI-XPRO} \max \{ r^T x ~|~ \mbox{s.t. } x \in X^{PRO} \} \end{equation} It is demonstrated in \cite{SRO-Pareto-1} that if $r$ lies in the relative interior of $W$, the decision maker can simply replace $X^{PRO}$ with $X^{RO}$ in (\ref{eq:App-03-OPTI-XPRO}) without altering the problem solution, due to the property revealed in (\ref{eq:App-03-PRO-Find-LP}). In more general cases, problem (\ref{eq:App-03-OPTI-XPRO}) can be formulated as an MILP \cite{SRO-Pareto-1} \begin{equation} \label{eq:App-03-OPTI-XPRO-MILP} \begin{aligned} \max_{x,\mu,\eta,z}~~ & r^T x \\ \mbox{s.t.}~~ & x \in X^{RO} \\ & \mu \le M(1-z) \\ & b - A x \le Mz \\ & DA^T \mu - d \eta \ge D \bar p \\ & \mu, \eta \ge 0, z \in \{0,1\}^{m} \end{aligned} \end{equation} where $M$ is a sufficiently large number and $m$ is the dimension of vector $z$. To show the equivalence, it is revealed that the feasible set of (\ref{eq:App-03-OPTI-XPRO-MILP}) encodes an optimal solution of (\ref{eq:App-03-PRO-Test-LP}) with a zero objective value \cite{SRO-Pareto-1}. In other words, the constraints of (\ref{eq:App-03-OPTI-XPRO-MILP}) contain the KKT optimality condition of (\ref{eq:App-03-PRO-Test-LP}). To see this, note that the binary vector $z$ imposes the complementary slackness condition $\mu^T (b-Ax)=0$, which ensures that $\lambda$, $\mu$, $\eta$ are optimal solutions of the following primal-dual LP pair \begin{equation} \mbox{Primal : } \begin{aligned} \max_{\lambda}~~ & \bar p^T D^T \lambda \\ \mbox{s.t.}~~ & \lambda \ge 0 \\ & d^T \lambda \ge 0 \\ & A D^T \lambda \le b - A x \end{aligned} \quad \mbox{Dual : } \begin{aligned} \min_{\mu, \eta}~~ & \mu^T (b-Ax) \\ \mbox{s.t.}~~ & \mu \ge 0 \\ & \eta \ge 0 \\ & D A^T \mu - d \eta \ge D \bar p \end{aligned} \notag \end{equation} The original variable $y$ in (\ref{eq:App-03-PRO-Test-LP}) is eliminated via the equality $D^T \lambda = y$ in the dual cone $W^*$. According to strong duality, the optimal value of the primal LP (\ref{eq:App-03-PRO-Test-LP}) is $\bar p^T D^T \lambda = \bar p^T y = 0$, and Pareto optimality is guaranteed. In practice, Pareto inefficiency is not a contrived phenomenon; see various examples in \cite{SRO-Pareto-1} and power market examples in \cite{SRO-Pareto-2,SRO-Pareto-3}. \vspace{12pt} {\noindent \bf 5. On max-min and min-max formulations} In much of the literature, the robust counterpart of (\ref{eq:App-03-SRO-LP-UA}) is written in a min-max form \begin{equation} \label{eq:App-03-SRO-RC-min-max} \begin{aligned} \mbox{Opt-1} = \min_x \max_{A \in W} \left\{c^T x ~|~ \mbox{s.t. } A x \le b \right\} \end{aligned} \end{equation} which means that $x$ is determined before $A$ takes a value in $W$, and the decision maker can foresee the worst consequence of deploying $x$ brought about by the perturbation of $A$. To make a prudent decision that is insensitive to data perturbation, the decision maker resorts to minimizing the maximal objective. The max-min formulation \begin{equation} \label{eq:App-03-SRO-RC-max-min} \begin{aligned} \mbox{Opt-2} = \max_{A \in W} \min_x \left\{c^T x ~|~ \mbox{s.t. 
} A x \le b \right\} \end{aligned} \end{equation} has a different interpretation: the decision maker first observes the realization of the uncertainty, and then recovers the constraints by deploying a corrective action $x$ in response to the observed $A$. Certainly, this specific $x$ may not be feasible for other $A \in W$. On the other hand, the uncertainty, like a rational player, can foresee the optimal action taken by the human decision maker, and select a strategy that yields a maximal objective value even if an optimal corrective action is deployed. From the above analysis, the feasible region of $x$ in (\ref{eq:App-03-SRO-RC-min-max}) is a subset of that in (\ref{eq:App-03-SRO-RC-max-min}), because (\ref{eq:App-03-SRO-RC-max-min}) only accounts for a single scenario in $W$. As a result, their optimal values satisfy Opt-1 $\ge$ Opt-2. Consider the following problem in which the uncertainty is not constraint-wise \begin{equation} \label{eq:App-03-SRO-RC-Toy} \begin{aligned} \min_x ~~ & x_1 + x_2 \\ \mbox{s.t.} ~~ & x_1 \ge a_1,~ x_2 \ge a_2,~ \forall a \in W \end{aligned} \end{equation} where $W = \{a ~|~ a \ge 0,~ \|a\|_2 \le 1 \}$. For the min-max formulation, since $x$ should be feasible for all possible values of $a$, it is necessary to require $x_1 \ge 1$ and $x_2 \ge 1$, and Opt-1 $=2$ for problem (\ref{eq:App-03-SRO-RC-Toy}). As for the max-min formulation, since $x$ is determined in response to the value of $a$, the optimal choice is clearly $x_1 = a_1$ and $x_2 = a_2$, so the problem becomes \begin{equation} \begin{aligned} \max_a ~~ & a_1 + a_2 \\ \mbox{s.t.} ~~ & a^2_1 + a^2_2 \le 1 \end{aligned} \notag \end{equation} whose optimal value is Opt-2 $=\sqrt{2} <$ Opt-1. In short, the static RO models discussed in this section are used to immunize against constraint violation or objective volatility caused by data perturbations, without jeopardizing computational tractability. The general approach is to reformulate the original uncertainty-dependent constraints into deterministic convex ones that are free of uncertain data, such that feasible solutions of the robust counterpart program remain feasible for all data realizations in the pre-specified uncertainty set, which is precisely the meaning of robustness. \section{Adjustable Robust Optimization} \label{App-C-Sect02} Several reasons call for developing new decision-making mechanisms to overcome limitations of the static RO approach: 1) equality constraints often give rise to infeasible robust counterpart problems in the static setting; 2) real-world decision-making processes may involve multiple stages, in which some decisions indeed can be made after the uncertain data have been observed or can be predicted accurately. Take power system operation as an example: the on-off status of generating units must be determined several hours before real-time dispatch, when the renewable power output is still unknown; however, the output of some units (called AGC units) can change in response to the real values of system demands and renewable generations. This section is devoted to adjustable robust optimization (ARO) with two stages, which leverages the adaptability in the second stage. We still focus our attention on the linear case. \subsection{Basic Assumptions and Formulations} \label{App-C-Sect02-01} The essential difference between the static RO and ARO approaches stems from the manner of decision making.
\begin{assumption} \label{ap:App-03-ARO-1} In an ARO problem, some variables are ``here-and-now'' decisions, whereas the rest are ``wait-and-see'' decisions: they can be made at a later moment according to the observed data. \end{assumption} In analogy to the static case, the decision-making mechanism can be explained as follows. \begin{assumption} \label{ap:App-03-ARO-2} Once the here-and-now decisions are made, there must be at least one valid wait-and-see decision which is able to recover the constraints in response to the observed data realization, if the actual data is within the uncertainty set. \end{assumption} In this regard, we can say that here-and-now decisions are robust against the uncertainty, whereas wait-and-see decisions are adaptive to the uncertainty. These terminologies are borrowed from two-stage SO models. In fact, there is a close relation between two-stage SO and two-stage RO \cite{ARO-TSSO-Relation-1,ARO-TSSO-Relation-2}. Now we are ready to present the compact form of a linear ARO problem with an uncertain constraint right-hand side: \begin{equation} \label{eq:App-03-ARO} \min_{x \in X} \left\{ c^T x + \max_{w \in W} \min_{y(w) \in Y(x,w)} d^T y(w) \right\} \end{equation} where $x$ is the here-and-now decision variable (or the first-stage decision variable), and $X$ is the feasible region of $x$; $w$ is the uncertain parameter, and $W$ is the uncertainty set, which has been discussed in the previous section; $y(w)$ is the wait-and-see decision variable (or second-stage decision variable), which can be adjusted according to the actual data of $w$, so it is represented as a function of $w$; $Y$ is the feasible region of $y$ given the values of $x$ and $w$, because the here-and-now decision is not allowed to change in this stage, and the exact value of $w$ is known. It has a polyhedral form \begin{equation} \label{eq:App-03-ARO-Y(x,w)} Y(x,w) = \{y ~|~ Ax + By + Cw \le b \} \end{equation} where $A$, $B$, $C$, and $b$ are constant matrices and a constant vector with compatible dimensions. It is clear that both the here-and-now decision $x$ and the uncertain data $w$ can influence the feasible region $Y$ in the second stage. We define $w = 0$ as the nominal scenario and assume $0$ is a relative interior point of $W$. Otherwise, we can decompose the uncertainty as $w = w^0 + \Delta w$ and merge the constant term $Cw^0$ into the right-hand side as $b \to b - C w^0$, where $w^0$ is the predicted or expected value of $w$, and ${\rm \Delta} w$ is the forecast error, which is the real uncertain parameter. It should be pointed out that $x$, $w$, and $y$ may contain discrete decision variables. As we will see later, integer variables in $x$ and $w$ do not significantly alter the solution algorithm of ARO. However, because integrality in $y$ prevents the use of LP duality theory, the computation becomes much more challenging. Although we assume constant coefficient matrices in (\ref{eq:App-03-ARO-Y(x,w)}), most results in this section can be generalized if matrix $A$ is a linear function in $w$; the situation becomes complicated if matrix $B$ is uncertainty-dependent. The reason for adopting the specific form in (\ref{eq:App-03-ARO-Y(x,w)}) is that it suits the problems considered in this book: uncertainty originating from renewable/load volatility can be modeled by the term $Cw$ in (\ref{eq:App-03-ARO-Y(x,w)}), while the coefficients representing component and network parameters are constants. Assumption \ref{ap:App-03-ARO-2} inspires the definition of a feasible solution of ARO (\ref{eq:App-03-ARO}).
\begin{definition} \label{df:App-03-ARO-XR} A first-stage decision $x$ is called robust feasible in (\ref{eq:App-03-ARO}) if the feasible region $Y(x,w)$ is non-empty for all $w \in W$, and the set of robust feasible solutions is given by: \begin{equation} \label{eq:App-03-ARO-XR} X_R = \{x ~|~ x \in X : \forall w \in W,~ Y(x,w) \ne \emptyset \} \end{equation} \end{definition} Please be aware of the sequence in (\ref{eq:App-03-ARO-XR}): $x$ takes its value first, and then the parameter $w$ chooses a value in $W$ before some $y \in Y$ does. The non-emptiness of $Y$ must be guaranteed by the selection of $x$ for an arbitrary $w$. If we swapped the latter two terms and wrote $Y(x,w) \ne \emptyset$, $\forall w \in W$, like the form in a static RO, it might cause the confusion that both $x$ and $y$ are here-and-now decisions; in that reading the adaptiveness vanishes, and $X_R$ may become empty if uncertainty appears in an equality constraint, as analyzed in the previous section. The definition of an optimal solution depends on the decision maker's attitude towards the cost in the second stage. In (\ref{eq:App-03-ARO}), we adopt the following definition. \begin{definition} \label{df:App-03-SRO-Optimality-MMC} (Min-max cost criterion) An optimal solution of (\ref{eq:App-03-ARO}) is a pair of here-and-now decision $x \in X_R$ and wait-and-see decision $y(w^*)$ corresponding to the worst-case scenario $w^* \in W$, such that the total cost in scenario $w^*$ is minimal, where the worst-case scenario $w^*$ means that for the fixed $x$, the optimal second-stage cost is maximized over $W$. \end{definition} Other criteria give rise to different robust formulations, for example, the minimum nominal cost formulation and the min-max regret formulation. \begin{definition} \label{df:App-03-SRO-Optimality-MNC} (Minimum nominal cost criterion) An optimal solution under the minimum nominal cost criterion is a pair of here-and-now decision $x \in X_R$ and wait-and-see decision $y^0$ corresponding to the nominal scenario $w^0 = 0$, such that the total cost in scenario $w^0$ is minimal. \end{definition} The minimum nominal cost criterion leads to the following robust formulation \begin{equation} \label{eq:App-03-ARO-V1} \begin{aligned} \min~~ & c^T x + d^T y \\ \mbox{s.t.} ~~ & x \in X_R \\ & y \in Y(x,w^0) \end{aligned} \end{equation} where robustness is guaranteed by $X_R$. To explain the concept of regret, the minimum perfect-information total cost is \begin{equation*} C_P(w) = \min \left\{ c^T x + d^T y ~|~ x \in X,~ y \in Y(x,w) \right\} \end{equation*} where $w$ is known to the decision maker. For a fixed first-stage decision $x$, the maximum regret is defined as \begin{equation*} \mbox{Reg}(x) = \max_{w \in W} \left\{ \min_{y \in Y(x,w)} \{c^T x + d^T y\} - C_P(w) \right\} \end{equation*} \begin{definition} \label{df:App-03-SRO-Optimality-MMR} (Min-max regret criterion) An optimal solution under the min-max regret criterion is a pair of here-and-now decision $x$ and wait-and-see decision $y(w)$, such that the worst-case regret over all possible scenarios $w \in W$ is minimized. \end{definition} The min-max regret criterion leads to the following robust formulation \begin{equation} \label{eq:App-03-ARO-MMR} \min_{x \in X} \left\{ c^T x + \max_{w \in W} \left\{ \min_{y \in Y(x,w)} d^T y - \min_{x^\prime \in X,y^\prime \in Y(x^\prime,w)} \left\{ c^T x^\prime + d^T y^\prime \right\} \right\} \right\} \end{equation} \vspace{12pt} In an ARO problem, we can naturally assume that the uncertainty set is a polyhedron.
To see this, if $x$ is a robust solution under an uncertainty set consisting of discrete scenarios, i.e., $W=\{w^1,w^2,\cdots w^S\}$, then according to Definition \ref{df:App-03-ARO-XR}, there exist corresponding $\{y^1,y^2,\cdots y^S\}$ such that \begin{equation} \begin{gathered} B y^1 \le b - A x -C w^1 \\ B y^2 \le b - A x -C w^2 \\ \vdots \\ B y^S \le b - A x -C w^S \end{gathered} \notag \end{equation} For non-negative weighting parameters $\lambda_1, \lambda_2,\cdots, \lambda_S \ge 0$, $\sum_{s=1}^S \lambda_s =1$, we have \begin{equation} \sum_{s=1}^S \lambda_s ( B y^s ) \le \sum_{s=1}^S \lambda_s (b-Ax-Cw^s) \notag \end{equation} or equivalently \begin{equation} B \sum_{s=1}^S \lambda_s y^s \le b - Ax - C \sum_{s=1}^S \lambda_s w^s \notag \end{equation} indicating that for any $w = \sum_{s=1}^S \lambda_s w^s \in \mbox{conv}(\{w^1,w^2,\cdots w^S\})$, the wait-and-see decision $y = \sum_{s=1}^S \lambda_s y^s$ can recover all constraints, and thus $Y(x,w) \ne \emptyset$. This property inspires the following proposition, which is in analogy to Proposition \ref{pr:App-03-Invariance-2}. \begin{proposition} \label{pr:App-03-Invariance-3} Suppose $x$ is a robust feasible solution for a discrete uncertainty set $\{w^1,w^2,\cdots w^S\}$; then it remains robust feasible if we replace the uncertainty set with its convex hull. \end{proposition} Proposition \ref{pr:App-03-Invariance-3} also implies that in order to ensure the robustness of $x$, it is sufficient to consider the extreme points of a bounded polyhedral uncertainty set. Suppose the vertices of the polyhedral uncertainty set are $w^s$, $s=1,2,\cdots,S$. Consider the following set \begin{equation} \label{eq:App-03-ARO-XR-Lift} {\rm \Xi} = \{x,y^1,y^2,\cdots,y^S ~|~ A x + B y^s \le b -C w^s,~ s = 1,2,\cdots,S \} \end{equation} The robust feasible region $X_R$ is the projection of the polyhedron $\rm \Xi$ onto the $x$-space, which is also a polyhedron (Theorem B.2.5 in \cite{CVX-Book-Ben}). \begin{proposition} \label{pr:App-03-XR} If the uncertainty set has a finite number of extreme points, the set $X_R$ is a polyhedron. \end{proposition} Despite these nice theoretical properties, it is still difficult to solve an ARO problem in its general form (\ref{eq:App-03-ARO}). Considerable effort has been devoted to developing approximations and algorithms to tackle the computational challenges. We leave the solution methods of ARO problems to the next subsection. Here we demonstrate the benefit of postponing some decisions to the second stage via a simple example taken from \cite{RO-Detail-1}. Consider an uncertain LP \begin{equation} \begin{aligned} \min_x ~~ & x_1 \\ \mbox{s.t.}~~ & x_2 \ge 0.5 \xi x_1 + 1 ~~ (a_\xi) \\ & x_1 \ge (2 - \xi) x_2 ~~~~ (b_\xi) \\ & x_1 \ge 0,~ x_2 \ge 0 ~~~~ (c_\xi) \end{aligned} \notag \end{equation} where $\xi \in [0, \rho]$ is an uncertain parameter and $\rho$ is a constant (the level of uncertainty) which may take a value in the open interval $(0,1)$. In the static setting, both $x_1$ and $x_2$ must be independent of $\xi$. When $\xi = \rho$, constraint $(a_{\xi})$ suggests $x_2 \ge 0.5 \rho x_1 +1$; when $\xi = 0$, constraint $(b_\xi)$ indicates $x_1 \ge 2 x_2$; as a result, we arrive at the conclusion $x_1 \ge \rho x_1 + 2$, so the optimal value in the static case satisfies \begin{equation} \mbox{Opt} \ge x_1 \ge \dfrac{2}{1-\rho} \notag \end{equation} Thus the optimal value tends to infinity when $\rho$ approaches 1. Now consider the adjustable case, in which $x_2$ is a wait-and-see decision.
Let $x_2 = 0.5 \xi x_1 + 1$, so that ($a_\xi$) is always satisfied with equality; substituting $x_2$ in constraint ($b_\xi$) yields: \begin{equation} x_1 \ge (2 - \xi) (\dfrac{1}{2} \xi x_1 + 1),~ \forall \xi \in [0, \rho] \notag \end{equation} Substituting $x_1=4$ into the above inequality, we have \begin{equation} 4 \ge 2(2 - \xi)\xi + 2-\xi, \forall \xi \in [0, \rho] \notag \end{equation} This inequality can be certified by the facts that $\xi \ge 0$ and $\xi(2-\xi) \le 1$, $\forall \xi \in \mathbb R$, indicating that $x_1=4$ is a robust feasible solution. Therefore, the optimal value is no greater than 4 in the adjustable case for any $\rho$. The difference between the optimal values in the two cases can grow arbitrarily large, depending on the value of $\rho$. \subsection{Affine Policy Based Approximation Model} \label{App-C-Sect02-02} ARO problem (\ref{eq:App-03-ARO}) is difficult to solve because the functional dependence of the wait-and-see decision on $w$ is arbitrary, and there is no closed-form formula to characterize the optimal solution function $y(w)$ or to certify whether $Y(x,w)$ is empty or not. At this point, we consider approximating the recourse function $y(w)$ by a simpler one, naturally an affine function \begin{equation} \label{eq:App-03-ARO-Affine-Policy} y(w) = y^0 + G w \end{equation} where $y^0$ is the action in the second stage for the nominal scenario $w=0$, and $G$ is the gain matrix to be designed. (\ref{eq:App-03-ARO-Affine-Policy}) is called a linear decision rule or affine policy. It explicitly characterizes the wait-and-see decisions as an affine function of the revealed uncertain data. The rationale for employing an affine policy instead of other parametric ones is that it yields computationally tractable robust counterpart reformulations. This finding was first reported in \cite{ARO-Affine-Policy}. To validate (\ref{eq:App-03-ARO-XR}) under the linear decision rule, substitute (\ref{eq:App-03-ARO-Affine-Policy}) in (\ref{eq:App-03-ARO-Y(x,w)}): \begin{equation} \label{eq:App-03-AARO-RC-1} Ax + By^0 + (BG + C) w \le b,~ \forall w \in W \end{equation} In (\ref{eq:App-03-AARO-RC-1}), the decision variables are $x$, $y^0$, and $G$, which should be made before $w$ is known, and thus are here-and-now decisions. The wait-and-see decision (or the incremental part) is naturally determined from (\ref{eq:App-03-ARO-Affine-Policy}) without further optimization, and cost reduction is considered in the determination of the gain matrix $G$. (\ref{eq:App-03-AARO-RC-1}) is in the form of (\ref{eq:App-03-SRO-RC-Single}), and hence its robust counterpart can be derived via the methods in Appendix \ref{App-C-Sect01-02}. Here we just provide the result for polyhedral uncertainty as an example. Suppose the uncertainty set is described by \begin{equation} W = \{w ~|~ S w \le h \} \notag \end{equation} First, since $0 \in W$, setting $w = 0$ in (\ref{eq:App-03-AARO-RC-1}) requires \begin{equation} Ax + By^0 \le b \notag \end{equation} Furthermore, (\ref{eq:App-03-AARO-RC-1}) must hold if \begin{equation} \label{eq:App-03-AARO-RC-2} \max_{w \in W} (BG + C)_i w \le 0,~ \forall i \end{equation} where $(\cdot)_i$ stands for the $i$-th row of the input matrix.
According to LP duality theory, \begin{equation} \label{eq:App-03-AARO-RC-3} \max_{w \in W}~ (BG + C)_i w = \min_{{\rm \Lambda}_i \in {\rm \Pi}_i}~ {\rm \Lambda_i} h, ~ \forall i \end{equation} where $\rm \Lambda$ is a matrix consisting of the dual variables, ${\rm \Lambda_i}$ is the $i$-th row of $\rm \Lambda$ and also the dual variable of the $i$-th LP in (\ref{eq:App-03-AARO-RC-3}), and the set \begin{equation} {\rm \Pi}_i = \{ {\rm \Lambda}_i ~|~ {\rm \Lambda}_i \ge 0,~ {\rm \Lambda}_i S = (BG + C)_i \} \notag \end{equation} is the feasible region of the $i$-th dual LP. The minimization operator on the right-hand side of (\ref{eq:App-03-AARO-RC-3}) can be omitted, because by weak duality it suffices to exhibit one feasible ${\rm \Lambda}_i \in {\rm \Pi}_i$ with ${\rm \Lambda}_i h \le 0$ to certify (\ref{eq:App-03-AARO-RC-2}). Moreover, if we adopt the minimum nominal cost criterion, the ARO problem with a linear decision rule in the second stage can be formulated as an LP \begin{equation} \label{eq:App-03-AARO-RC-LP} \begin{aligned} \min~~ & c^T x + d^T y^0 \\ \mbox{s.t.} ~~ & Ax + By^0 \le b, {\rm \Lambda} h \le 0 \\ & {\rm \Lambda} \ge 0,~ {\rm \Lambda} S = BG + C \end{aligned} \end{equation} In (\ref{eq:App-03-AARO-RC-LP}), the decision variables are the vectors $x$ and $y^0$, the gain matrix $G$, and the dual matrix $\rm \Lambda$. The constraints actually constitute a lifted inner approximation of $X_R$ in (\ref{eq:App-03-ARO-XR}). If the min-max cost criterion is employed, the objective can be transformed into a linear inequality constraint with uncertainty via an epigraph form, whose robust counterpart can be derived using procedures similar to those shown above. The affine policy based method is attractive because it reduces the conservatism of the static RO approach by incorporating corrective actions, while preserving computational tractability. In theory, the affine assumption more or less restricts the adaptability in the recourse stage. Nevertheless, research work in \cite{AARO-Opt-1,AARO-Opt-2,AARO-Opt-3} shows that linear decision rules are indeed optimal or near optimal for many practical problems. For more information on other decision rules and their reformulations, please see \cite{RO-Detail-1} (Chapter 14.3) for the quadratic decision rule, \cite{ARO-Extend-Affine-Policy} for the extended linear decision rule, \cite{ARO-Finite-Adapt-1,ARO-Finite-Adapt-2} for the piecewise constant decision rule (finite adaptability), \cite{ARO-PWL-DR-1,ARO-PWL-DR-2} for the piecewise linear decision rule, and \cite{ARO-General-DR} for generalized decision rules. The methods in \cite{ARO-Finite-Adapt-1,ARO-PWL-DR-2} can be used to cope with integer wait-and-see decision variables. See also \cite{RO-Guide}. \subsection{Algorithms for Fully Adjustable Models} \label{App-C-Sect02-03} Fully adjustable models are generally NP-hard \cite{ARO-Benders-Decomposition}. To find the solution in Definition \ref{df:App-03-SRO-Optimality-MMC}, the model is decomposed into a master problem and a subproblem, which are solved iteratively; a sequence of lower and upper bounds on the optimal value is generated until they are close enough to each other. To explain the algorithm for ARO problems, we discuss two cases. \vspace{12pt} {\noindent \bf 1. Second-stage problem is an LP} Now we consider problem (\ref{eq:App-03-ARO}) without specific functional assumptions on the wait-and-see variables.
We start from the second-stage LP with fixed $x$ and $w$: \begin{equation} \label{eq:App-03-ARO-Inner-LP} \begin{aligned} \min_{y}~~ & d^T y \\ \mbox{s.t.}~~ & By \le b - Ax - Cw : u \end{aligned} \end{equation} where $u$ is the dual variable, and the dual LP of (\ref{eq:App-03-ARO-Inner-LP}) is \begin{equation} \label{eq:App-03-ARO-Inner-Dual} \begin{aligned} \max_{u}~~ & u^T (b - Ax - Cw) \\ \mbox{s.t.}~~ & B^T u = d,~ u \le 0 \end{aligned} \end{equation} If the primal LP (\ref{eq:App-03-ARO-Inner-LP}) has a finite optimum, the dual LP (\ref{eq:App-03-ARO-Inner-Dual}) is also feasible and has the same optimum; otherwise, if (\ref{eq:App-03-ARO-Inner-LP}) is infeasible, then (\ref{eq:App-03-ARO-Inner-Dual}) will be unbounded. Sometimes, an improper choice of $x$ indeed leads to an infeasible second-stage problem. To detect infeasibility, consider the following LP with slack variables \begin{equation} \label{eq:App-03-ARO-Inner-LP-Slack} \begin{aligned} \min_{y,s}~~ & 1^T s \\ \mbox{s.t.}~~ & s \ge 0 \\ & B y - I s \le b - Ax - Cw : u \end{aligned} \end{equation} Its dual LP is \begin{equation} \label{eq:App-03-ARO-Inner-Dual-Slack} \begin{aligned} \max_{u}~~ & u^T (b - Ax - Cw) \\ \mbox{s.t.}~~ & B^T u = 0,~ -1 \le u \le 0 \end{aligned} \end{equation} (\ref{eq:App-03-ARO-Inner-LP-Slack}) and (\ref{eq:App-03-ARO-Inner-Dual-Slack}) are always feasible and share the same finite optimal value. If the optimal value is equal to 0, then LP (\ref{eq:App-03-ARO-Inner-LP}) is feasible; otherwise, if the optimal value is strictly positive, then LP (\ref{eq:App-03-ARO-Inner-LP}) is infeasible. For notational brevity, define the feasible sets for the dual variable \begin{equation} \begin{aligned} U_O & = \{ u ~|~ B^T u = d,~ u \le 0 \} \\ U_F &= \{ u ~|~ B^T u = 0,~ -1 \le u \le 0 \} \end{aligned} \notag \end{equation} The former is associated with the dual form (\ref{eq:App-03-ARO-Inner-Dual}) of the second-stage optimization problem (\ref{eq:App-03-ARO-Inner-LP}); the latter corresponds to the dual form (\ref{eq:App-03-ARO-Inner-Dual-Slack}) of the second-stage feasibility test problem (\ref{eq:App-03-ARO-Inner-LP-Slack}). Next, we proceed to the middle level with fixed $x$: \begin{equation} \label{eq:App-03-ARO-Middle-Linear-max-min} R(x)=\max_{w \in W} \min_{y \in Y(x,w)} d^T y \end{equation} which is a linear max-min problem that identifies the worst-case uncertainty. If LP (\ref{eq:App-03-ARO-Inner-LP}) is feasible for every $w \in W$, then we conclude $x \in X_R$ as defined in (\ref{eq:App-03-ARO-XR}); otherwise, if LP (\ref{eq:App-03-ARO-Inner-LP}) is infeasible for some $w \in W$, then $x \notin X_R$ and $R(x) = +\infty$. To check whether $x \in X_R$ or not, we investigate the following problem \begin{equation} \label{eq:App-03-ARO-Middle-Linear-max-min-Fea} \begin{aligned} \max_{w}~~ & \min_{y,s} 1^T s \\ \mbox{s.t.}~~ & w \in W,~ s \ge 0 \\ & B y - I s \le b - Ax - Cw : u \end{aligned} \end{equation} It maximizes the minimum of (\ref{eq:App-03-ARO-Inner-LP-Slack}) over all possible values of $w \in W$.
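The equality between the optimal values of (\ref{eq:App-03-ARO-Inner-LP-Slack}) and (\ref{eq:App-03-ARO-Inner-Dual-Slack}) can be checked numerically; the minimal sketch below does so for a one-dimensional hypothetical second stage whose right-hand side $b - Ax - Cw$ is already fixed and encodes the contradictory requirements $y \le 1$ and $y \ge 2$:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.], [-1.]])
rhs = np.array([1., -2.])          # b - A x - C w: y <= 1 and y >= 2

# Primal feasibility test: min 1^T s  s.t.  B y - I s <= rhs, s >= 0
m = B.shape[0]
c = np.concatenate([np.zeros(1), np.ones(m)])
A_ub = np.hstack([B, -np.eye(m)])
res_p = linprog(c, A_ub=A_ub, b_ub=rhs,
                bounds=[(None, None)] + [(0, None)] * m)

# Dual: max u^T rhs  s.t.  B^T u = 0, -1 <= u <= 0
res_d = linprog(-rhs, A_eq=B.T, b_eq=np.zeros(1), bounds=[(-1, 0)] * m)
print(res_p.fun, -res_d.fun)       # both 1.0 > 0: second stage infeasible
\end{verbatim}
Both problems return $1 > 0$, certifying that this pair $(x,w)$ makes the second stage infeasible, i.e., such an $x$ does not belong to $X_R$.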
Since the optimal values of (\ref{eq:App-03-ARO-Inner-LP-Slack}) and (\ref{eq:App-03-ARO-Inner-Dual-Slack}) are equal, problem (\ref{eq:App-03-ARO-Middle-Linear-max-min-Fea}) is equivalent to maximizing the optimal value of (\ref{eq:App-03-ARO-Inner-Dual-Slack}) over the uncertainty set $W$, leading to a bilinear program \begin{equation} \label{eq:App-03-ARO-Middle-BLP-Fea} \begin{aligned} r(x) =\max_{u,w}~~ & u^T (b - Ax - Cw) \\ \mbox{s.t.}~~ & w \in W,~ u \in U_F \end{aligned} \end{equation} Because both $W$ and $U_F$ are bounded, (\ref{eq:App-03-ARO-Middle-BLP-Fea}) must have a finite optimum. Clearly, $0 \in U_F$, so $r(x)$ must be non-negative. In fact, if $r(x) = 0$, then $x \in X_R$; if $r(x)> 0$, then $x \notin X_R$. With this duality transformation, the opposed optimization operators in (\ref{eq:App-03-ARO-Middle-Linear-max-min-Fea}) are merged into a traditional NLP. For similar reasons, by replacing the second-stage LP (\ref{eq:App-03-ARO-Inner-LP}) with its dual LP (\ref{eq:App-03-ARO-Inner-Dual}), problem (\ref{eq:App-03-ARO-Middle-Linear-max-min}) is equivalent to the following bilinear program \begin{equation} \label{eq:App-03-ARO-Middle-BLP-Opt} \begin{aligned} R(x) =\max_{u,w}~~ & u^T (b - Ax - Cw) \\ \mbox{s.t.}~~ & w \in W,~ u \in U_O \end{aligned} \end{equation} The fact that a linear max-min problem can be transformed into a bilinear program using LP duality is reported in \cite{Linear-max-min-BLP}. Bilinear programs can be locally solved by general purpose NLP solvers, but the non-convexity prevents a global optimal solution from being found easily. In what follows, we introduce some methods that exploit specific features of the uncertainty set and are widely used by the research community. Since (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) differ only in the dual feasibility set, we will use the set $U$ to refer to either $U_F$ or $U_O$ in the unified solution method. \vspace{12pt} {\noindent \bf a. General polytope} Suppose that the uncertainty set is described by \begin{equation} W = \{w ~|~ S w \le h \} \notag \end{equation} An important feature of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) is that the constraint sets $W$ and $U$ are separate, and there is no constraint that involves $w$ and $u$ simultaneously, so the bilinear program can be considered in the following format \begin{equation} \label{eq:App-03-ARO-BLP-Poly-1} \begin{aligned} \max_{u \in U}~~ u^T (b - A x) + \max_{w} ~~ & (- u^T C w ) \\ \mbox{s.t.}~~ & S w \le h : \xi \end{aligned} \end{equation} The bilinear term $u^T C w$ is non-convex. Treat the second part $\max _{w \in W} (- u^T C w )$ as an LP in $w$ in which $u$ is a parameter; its KKT optimality condition is given by \begin{equation} \label{eq:App-03-ARO-BLP-Poly-2} \begin{gathered} 0 \le \xi \bot h - S w \ge 0 \\ S^T \xi + C^T u = 0 \end{gathered} \end{equation} A solution of the LCP (\ref{eq:App-03-ARO-BLP-Poly-2}) gives the optimal primal and dual solutions simultaneously. As the uncertainty set is a bounded polyhedron, the optimal solution must be bounded, and strong duality holds, so we can replace $- u^T C w$ in the objective with the linear term $h^T \xi$ and the additional constraints in (\ref{eq:App-03-ARO-BLP-Poly-2}). Moreover, the complementary slackness condition in (\ref{eq:App-03-ARO-BLP-Poly-2}) can be linearized via the method in Appendix \ref{App-B-Sect03-05}.
In summary, problem (\ref{eq:App-03-ARO-BLP-Poly-1}) can be solved via an equivalent MILP \begin{equation} \label{eq:App-03-ARO-BLP-Poly-MILP} \begin{aligned} \max_{u,w,\xi}~~ & u^T (b - A x) + h^T \xi \\ \mbox{s.t.}~~ & u \in U,~ \theta \in \{0,1\}^m \\ & S^T \xi + C^T u = 0 \\ & 0 \le \xi \le M(1 - \theta) \\ & 0 \le h - S w \le M \theta \end{aligned} \end{equation} where $m$ is the dimension of $\theta$, and $M$ is a large enough constant. Compared with (\ref{eq:App-03-ARO-BLP-Poly-1}), the non-convexity migrates from the objective function to the constraints with binary variables. The number of binary variables in (\ref{eq:App-03-ARO-BLP-Poly-MILP}) depends only on the number of constraints in the set $W$, and is independent of the dimension of $x$. Another heuristic method for bilinear programs in the form of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) is the mountain climbing method in \cite{BLP-Mountain-Climbing}, which is summarized in Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing}. \begin{algorithm}[!htp] \normalsize \caption{\bf : Mountain climbing} \begin{algorithmic}[1] \STATE Choose a convergence tolerance $\varepsilon>0$, and an initial $w^* \in W$; \STATE Solve the following LP with current $w^*$ \begin{equation} \label{eq:App-03-BLP-Mountain-Climbing-LP1} R_1 = \max_{u \in U} ~ u^T ( b - A x - C w^*) \end{equation} The optimal solution is $u^*$ and the optimal value is $R_1$; \STATE Solve the following LP with current $u^*$ \begin{equation} \label{eq:App-03-BLP-Mountain-Climbing-LP2} R_2 = \max_{w \in W} ~ (b - A x - C w)^T u^* \end{equation} The optimal solution is $w^*$ and the optimal value is $R_2$; \STATE If $R_2 - R_1 \le \varepsilon$, report the optimal value $R_2$ as well as the optimal solution $w^* ,u^*$, and terminate; otherwise, go to step 2. \end{algorithmic} \label{Ag:App-03-BLP-Mountain-Climbing} \end{algorithm} The optimal solution of an LP can be found at one of the vertices of its feasible region, hence $w^* \in {\rm{vert}}(W)$ and $u^* \in {\rm{vert}}(U)$ hold. As its name implies, the sequence of objective values generated by Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} is monotonically increasing, until a local maximum is found \cite{BLP-Mountain-Climbing}. The convergence is guaranteed by the finiteness of $\mbox{vert}(U)$ and $\mbox{vert}(W)$. If we try multiple carefully chosen initial points and pick the best one among the returned results, the solution quality is often satisfactory. The key point is that these initial points should span most directions of the $w$-subspace. For example, one may search the $2m$ points on the boundary of $W$ in directions $\pm e^m_i$, $i=1,2,\cdots,m$, where $m$ is the dimension of $w$, and $e^m_i$ is the $i$-th column of an $m \times m$ identity matrix. As LPs can be solved very efficiently, Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} is especially suitable for instances with very complicated $U$ and $W$, and usually outperforms general NLP solvers for bilinear programs with disjoint constraints. Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} is also valid if $W$ is another convex set, say, an ellipsoid, and converges to a local optimum in a finite number of iterations for a given precision \cite{BLP-Mountain-Climbing-BCVX}.
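As an illustration of Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing}, the following minimal sketch alternates the two LPs on a hypothetical disjoint bilinear program $\max_{u \in U, w \in W} u^T(r - Cw)$, where $r$ stands for $b - Ax$ and both $U$ and $W$ are boxes, so each LP reduces to bound constraints:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

r = np.array([1., -2.])            # r = b - A x (hypothetical data)
C = np.eye(2)
bounds_u = [(-1., 0.)] * 2         # U, as in the feasibility subproblem
bounds_w = [(-1., 1.)] * 2         # W, a box uncertainty set

w = np.zeros(2)                    # initial scenario w*
for it in range(50):
    # Step 2: LP in u with w fixed: R1 = max_u u^T (r - C w)
    res_u = linprog(-(r - C @ w), bounds=bounds_u)
    u, R1 = res_u.x, -res_u.fun
    # Step 3: LP in w with u fixed: R2 = max_w (r - C w)^T u
    res_w = linprog(C.T @ u, bounds=bounds_w)
    w = res_w.x
    R2 = u @ (r - C @ w)
    if R2 - R1 <= 1e-8:            # Step 4: convergence test
        break
print(R2, u, w)                    # reaches the global maximum 3 here
\end{verbatim}
Starting from $w^* = 0$, the iteration terminates after two rounds at the value $3$, which happens to be the global maximum for this toy instance; in general only a local maximum is guaranteed, whence the multi-start strategy discussed above.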
\vspace{12pt} {\noindent \bf b. Cardinality constrained uncertainty set} A continuous cardinality constrained uncertainty set in the form of (\ref{eq:App-03-SRO-US-Card}) is a special class of the polyhedral case; see the transformation in (\ref{eq:App-03-SRO-US-Card-Lift}). Therefore, the previous method can be applied, and the number of inequalities in the polyhedral form is $3m+1$, which is equal to the number of binary variables in MILP (\ref{eq:App-03-ARO-BLP-Poly-MILP}). As revealed in Proposition \ref{pr:App-03-Invariance-3}, for a polyhedral uncertainty set, we only need to consider the extreme points. Consider a discrete cardinality constrained uncertainty set \begin{subequations} \label{eq:App-03-ARO-US-Card} \begin{equation} \label{eq:App-03-ARO-W-Card} W = \left\{ w \middle| \begin{gathered} w_j = w^0_j + w^+_j z^+_j - w^-_j z^-_j, \forall j \\ \exists ~ z^+,z^- \in Z \end{gathered} \right\} \end{equation} \begin{equation} \label{eq:App-03-ARO-Z-Card} Z = \left\{ z^+,z^- \middle| \begin{gathered} z^+,~ z^- \in \{0,1\}^m \\ z^+_j + z^-_j \le 1,~ \forall j \\ 1^T ( z^+ + z^-) \le {\rm \Gamma} \end{gathered} \right\} \end{equation} \end{subequations} where the budget of uncertainty ${\rm \Gamma} \le m$ is an integer. In (\ref{eq:App-03-ARO-W-Card}), each element $w_j$ takes one of three possible values: $w^0_j$, $w^0_j + w^+_j$, and $w^0_j - w^-_j$, and at most $\rm \Gamma$ of the $m$ elements $w_j$ can take a value that is not equal to $w^0_j$. If the forecast error is symmetric, i.e., $w^+_j = w^-_j$, then (\ref{eq:App-03-ARO-US-Card}) is called symmetric, as the nominal scenario lies at the center of $W$. We discuss this case separately because this representation allows us to linearize the non-convexity in (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) with fewer binary variables. Expanding the bilinear term $u^T C w$ in an element-wise form \begin{equation} u^T C w = u^T C w^0 + \sum_i \sum_j ( c_{ij} w^+_j u_i z^+_j - c_{ij} w^-_j u_i z^-_j ) \notag \end{equation} where $c_{ij}$ is the element of matrix $C$. Letting \begin{equation} v^+_{ij} = u_i z^+_j,~ v^-_{ij} = u_i z^-_j,~ \forall i,~ \forall j \notag \end{equation} the bilinear term can be expressed via a linear function. The product of a binary variable and a continuous variable can be linearized via the method illustrated in Appendix \ref{App-B-Sect02-02}. In conclusion, the bilinear subproblems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) can be solved via the MILP \begin{equation} \label{eq:App-03-ARO-W-Card-MILP} \begin{aligned} \max ~~ & u^T ( b - A x ) - u^T C w^0 - \sum_i \sum_j ( c_{ij} w^+_j v^+_{ij} - c_{ij} w^-_j v^-_{ij} ) \\ \mbox{s.t.} ~~ & u \in U,~ \{z^+,z^- \} \in Z \\ & 0 \le v^+_{ij} - u_i \le M (1 - z^+_j), -M z^+_j \le v^+_{ij} \le 0,~ \forall i, \forall j \\ & 0 \le v^-_{ij} - u_i \le M (1 - z^-_j), -M z^-_j \le v^-_{ij} \le 0,~ \forall i, \forall j \end{aligned} \end{equation} where $M=1$ for problem (\ref{eq:App-03-ARO-Middle-BLP-Fea}) since $-1 \le u \le 0$, and $M$ is a sufficiently large number for problem (\ref{eq:App-03-ARO-Middle-BLP-Opt}), because there are no obvious bounds on the dual variable $u$. The number of binary variables in MILP (\ref{eq:App-03-ARO-W-Card-MILP}) is $2m$, which is less than that in (\ref{eq:App-03-ARO-BLP-Poly-MILP}) if the uncertainty set is replaced by its convex hull. The number of additional continuous variables $v^+_{ij}$ and $v^-_{ij}$ is also moderate since the matrix $C$ is sparse.
Finally, we are ready to give the decomposition algorithm proposed in \cite{ARO-CCG}. In light of Proposition \ref{pr:App-03-Invariance-3}, it is sufficient to consider the extreme points $w^1$, $w^2$, $\cdots$, $w^S$ of the uncertainty set, inspiring the following epigraph formulation which is equivalent to (\ref{eq:App-03-ARO}) \begin{equation} \label{eq:App-03-ARO-Epigraph} \begin{aligned} \min_{x,y^s,\eta } ~~ & c^T x + \eta \\ \mbox{s.t.} ~~ & x \in X \\ & \eta \ge d^T y^s,~ \forall s \\ & A x + B y^s \le b - C w^s, \forall s \end{aligned} \end{equation} Recalling (\ref{eq:App-03-ARO-XR-Lift}), the last constraint is in fact a lifted formulation of $X_R$. For polytope and cardinality constrained uncertainty sets, the number of extreme points is finite, but may grow exponentially in the dimension of the uncertainty. Actually, it is difficult and also unnecessary to enumerate every extreme point, because most of them provide redundant constraints. A smarter method is to identify the active scenarios which contribute binding constraints in $X_R$. This motivation has been widely used in complex optimization problems and is formalized in Sect. \ref{App-C-Sect01-03}. The procedure of the adaptive scenario generation algorithm for ARO is summarized in Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}. \begin{algorithm}[!htp] \normalsize \caption{\bf : Adaptive scenario generation} \begin{algorithmic}[1] \STATE Choose a tolerance $\varepsilon > 0$, set $LB = -\infty$, $UB = +\infty$, iteration index $k = 0$, and the critical scenario set $O = \{w^0\}$; \STATE Solve the following master problem \begin{equation} \label{eq:App-03-ARO-ASG-Master} \begin{aligned} \min_{x,y^s,\eta } ~~ & c^T x + \eta \\ \mbox{s.t.} ~~ & x \in X,~ \eta \ge d^T y^s,~ s = 0,\cdots, k \\ & A x + B y^s \le b - C w^s, \forall w^s \in O \end{aligned} \end{equation} The optimal solution is $x^{k+1}$, $\eta^{k+1}$; update $LB = c^T x^{k+1} + \eta^{k+1} $; \STATE Solve the bilinear feasibility testing problem (\ref{eq:App-03-ARO-Middle-BLP-Fea}) with $x^{k+1}$, whose optimal solution is $w^{k+1}$, $u^{k+1}$; if the optimal value $r^{k+1} > 0$, update $O = O \cup \{w^{k+1}\}$, and add a scenario cut \begin{equation} \label{eq:App-03-ARO-ASG-Cut} \eta \ge d^T y^{k+1},~ A x + B y^{k+1} \le b - C w^{k+1} \end{equation} with a new variable $y^{k+1}$ to the master problem (\ref{eq:App-03-ARO-ASG-Master}), update $k \leftarrow k+1$, and go to step 2; \STATE Solve the bilinear optimality testing problem (\ref{eq:App-03-ARO-Middle-BLP-Opt}) with $x^{k+1}$, whose optimal solution is $w^{k+1}$, $u^{k+1}$ and optimal value is $R^{k+1}$; update $O = O \cup \{w^{k+1}\}$ and $UB = c^T x^{k+1} + R^{k+1}$, and create scenario cut (\ref{eq:App-03-ARO-ASG-Cut}) with a new variable $y^{k+1}$; \STATE If $UB - LB \le \varepsilon$, report the optimal solution and terminate; otherwise, add the scenario cut in step 4 to the master problem (\ref{eq:App-03-ARO-ASG-Master}), update $k \leftarrow k+1$, and go to step 2; \end{algorithmic} \label{Ag:App-03-ARO-Adaptive-Scenario-Generation} \end{algorithm} Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} converges in a finite number of iterations, which is bounded by the number of extreme points of the uncertainty set. In practice, this algorithm often converges in a few iterations, because problems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) always identify the most critical scenario that should be considered.
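The minimal sketch below traces the master-subproblem loop of Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} on a tiny hypothetical instance, $\min_{x \ge 0} \{x + \max_{w \in [-1,1]} \min_{y \ge 0} \{y : y \ge w - x\}\}$; because $W$ has only two extreme points, the bilinear subproblems are replaced by brute-force enumeration, which plays the same role as (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# min_x c x + max_w min_y d y  s.t.  A x + B y + C w <= b, x, y >= 0
c, dcost = 1.0, 1.0
A, B = np.array([[-1.]]), np.array([[-1.]])
C, b = np.array([[1.]]), np.array([0.])
vertices = [np.array([-1.]), np.array([1.])]   # extreme points of W

def second_stage(x, w):            # min d^T y s.t. B y <= b - A x - C w
    res = linprog([dcost], A_ub=B, b_ub=b - A @ x - C @ w,
                  bounds=[(0, None)])
    return res.fun if res.status == 0 else np.inf   # inf <=> infeasible

O, LB, UB = [np.array([0.])], -np.inf, np.inf       # O = {w^0}
while UB - LB > 1e-6:
    k = len(O)                     # master over [x, eta, y^0..y^{k-1}]
    cm = np.concatenate([[c, 1.0], np.zeros(k)])
    rows, rhs = [], []
    for s, ws in enumerate(O):
        e = np.zeros(2 + k); e[1] = -1.0; e[2 + s] = dcost
        rows.append(e); rhs.append(0.0)             # eta >= d^T y^s
        g = np.zeros(2 + k); g[0] = A[0, 0]; g[2 + s] = B[0, 0]
        rows.append(g); rhs.append(b[0] - (C @ ws)[0])
    res = linprog(cm, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, None), (None, None)] + [(0, None)] * k)
    x, LB = res.x[:1], res.fun
    costs = [second_stage(x, w) for w in vertices]  # enumeration subproblem
    worst = int(np.argmax(costs))
    UB = min(UB, c * x[0] + costs[worst])
    O.append(vertices[worst])                       # scenario cut next pass
print(x, LB, UB)                   # LB = UB = 1.0 after two iterations
\end{verbatim}
Both bounds meet at the optimal value $1$ after two iterations; in a realistic implementation the enumeration step is replaced by solving (\ref{eq:App-03-ARO-BLP-Poly-MILP}) or (\ref{eq:App-03-ARO-W-Card-MILP}).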
Because the subproblems adaptively identify the critical scenarios, we name the algorithm ``adaptive scenario generation''. It is called the ``constraint-and-column generation algorithm'' in \cite{ARO-CCG}, because the numbers of decision variables (columns) and constraints increase simultaneously. Please note that the scenario cut unifies the feasibility cut and the optimality cut used in the existing literature. The bilinear subproblems (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) can be solved by the methods discussed previously, according to the form of the uncertainty set. In Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}, we utilize $w$ to create scenario cuts, which are also called primal cuts. In fact, the optimal dual variable $u$ of (\ref{eq:App-03-ARO-Middle-BLP-Fea}) and (\ref{eq:App-03-ARO-Middle-BLP-Opt}) provides sensitivity information, and can be used to construct dual cuts, each of which is a single inequality in the first-stage variable $x$; see the Benders decomposition algorithm in \cite{ARO-Benders-Decomposition}. Since scenario cuts are much tighter than Benders cuts, Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} is the most prevalent method for solving ARO problems. If matrix $A$ is uncertainty-dependent, the scenario constraints in the master problem (\ref{eq:App-03-ARO-ASG-Master}) become $A(w^s) x + B y^s \le b - C w^s$, $\forall w^s$, where $A(w^s)$ is constant but varies in different scenarios; the objective function of the bilinear subproblems changes to $u^T [b - A(w)x - Cw]$, where $x$ is given in the subproblem. If $A$ can be expressed as a linear function in $w$, the problem structure remains the same, and the previous methods are still valid. Even if the second-stage problem is an SOCP, the adaptive scenario generation framework remains applicable, and the key procedure is to solve a max-min SOCP. Such a model originates from the robust operation of a power distribution network with uncertain generation and demand. By dualizing the inner-most SOCP, the max-min SOCP is cast as a bi-convex program, which can be globally or locally solved via an MISOCP or the mountain climbing method. Recently, a duality theory for fully-adjustable robust optimization problems has been proposed in \cite{Duality-ARO}. It has been shown that this kind of problem is self-dual, i.e., the dual problem remains an ARO; however, solving the dual problem may be more efficient. An extended CCG algorithm which always produces a feasible first-stage decision (if one exists) is proposed in \cite{Ext-CCG-ARO}. \vspace{12pt} {\noindent \bf 2. Second-stage problem is an MILP} Now we consider the case in which some of the wait-and-see decisions are discrete. As can be observed from the previous case, the most important tasks in solving an ARO problem are to validate feasibility and optimality, which boil down to solving a linear max-min problem. When the wait-and-see decisions are continuous and the second-stage problem is linear, LP duality theory applies, and the linear max-min problem is cast as a traditional bilinear program. However, discrete variables appearing in the second stage make the recourse problem a mixed-integer linear max-min problem with a non-convex inner level, preventing the use of LP duality theory. As a result, validating feasibility and optimality becomes more challenging.
The compact form of an ARO problem with integer wait-and-see decisions can be written as \begin{equation} \label{eq:App-03-ARO-MIP-Recourse} \min_{x \in X} \left\{ c^T x + \max_{w \in W} \min_{y,z \in Y(x,w)} d^T y + g^T z \right\} \end{equation} where $z$ is binary and depends on the exact value of $w$; the feasible region \begin{equation} Y(x,w) = \left\{ y,z ~\middle|~ \begin{gathered} By + Gz \le b - Ax - Cw \\ y \in \mathbb{R}^{m_1},~ z \in {\rm \Phi} \end{gathered} \right\} \notag \end{equation} where the feasible set ${\rm \Phi} = \{z | z \in \mathbb{B}^{m_2}, T z \le v\}$; $m_1$ and $m_2$ are the dimensions of $y$ and $z$; $T$ and $v$ are constant coefficients; all coefficient matrices have compatible dimensions. We assume that the uncertainty set $W$ can be represented by a finite number of extreme points. This kind of problem is studied in \cite{ARO-MIP-Nested-CCG}, where a nested constraint-and-column generation algorithm is proposed. Different from the mainstream idea that directly solves a linear max-min program as a bilinear program, the mixed-integer max-min program in (\ref{eq:App-03-ARO-MIP-Recourse}) is expanded into a tri-level problem \begin{equation} \label{eq:App-03-MILP-max-min} \begin{aligned} \max_{w \in W} \min_{z \in {\rm \Phi}} g^T z + \min_{y}~~ & d^T y \\ \mbox{s.t.}~~ & By \le b - Ax - Cw -G z \end{aligned} \end{equation} For ease of discussion, we assume all feasible sets are bounded, because the decision variables of practical problems have physical bounds. By replacing the innermost LP in the variable $y$ with its dual LP, problem (\ref{eq:App-03-MILP-max-min}) becomes \begin{equation} \label{eq:App-03-MILP-Recoure-Trilevel} \max_{w \in W} \left\{ \min_{z \in {\rm \Phi}} \left\{ g^T z + \max_{u \in U} u^T (b - Ax - Cw -G z) \right\} \right\} \end{equation} where $u$ is the dual variable, and the set $U = \{ u ~|~ u \le 0,~ B^T u = d \}$. Because both $w$ and $z$ are expressed via binary variables, the bilinear terms $u^T C w$ and $u^T G z$ have linear representations by using the method in Appendix \ref{App-B-Sect02-02}. Since $\rm \Phi$ has a finite number of elements, problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) (in its linearized version) has the same form as ARO problem (\ref{eq:App-03-ARO}), and can be solved by Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation}. More exactly, write (\ref{eq:App-03-MILP-Recoure-Trilevel}) in an epigraph form by enumerating all possible elements $z \in {\rm \Phi}$, then perform Algorithm \ref{Ag:App-03-ARO-Adaptive-Scenario-Generation} and identify the binding elements. In this way, the minimization operator in the middle level is eliminated. The nested adaptive scenario generation algorithm for ARO problem (\ref{eq:App-03-ARO-MIP-Recourse}) with mixed-integer recourse is summarized in Algorithm \ref{Ag:App-03-ARO-Nested-ASG}. Because both $W$ and $\rm \Phi$ are finite sets, Algorithm \ref{Ag:App-03-ARO-Nested-ASG} converges in a finite number of iterations. Notice that we do not distinguish feasibility and optimality subproblems in the above algorithm due to their similarities. One can also introduce slack variables in the second stage and penalty terms in the objective function, such that the recourse problem is always feasible. It should be pointed out that Algorithm \ref{Ag:App-03-ARO-Nested-ASG} incorporates double loops, and an MILP should be solved in each iteration of the inner loop, so one should not expect too much of its efficiency.
Nonetheless, it is the first systematic method to solve an ARO problem with integer variables in the second stage. Another concept which should be clarified is that although the second-stage discrete variable $z$ is treated as a scenario and enumerated on the fly when solving problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) in step 3 (the inner loop), it is a decision variable of the master problem (\ref{eq:App-03-ARO-Nested-ASG-Master}) in the outer loop. \begin{algorithm}[!htp] \normalsize \caption{\bf : Nested adaptive scenario generation} \begin{algorithmic}[1] \STATE Choose a tolerance $\varepsilon > 0$, set $LB = -\infty$, $UB = +\infty$, iteration index $k = 0$, and the critical scenario set $O = \{w^0\}$; \STATE Solve the following master problem \begin{equation} \label{eq:App-03-ARO-Nested-ASG-Master} \begin{aligned} \min_{x,y,z,\eta } ~~ & c^T x + \eta \\ \mbox{s.t.} ~~ & x \in X \\ & \eta \ge d^T y^s + g^T z^s,~ z^s \in {\rm \Phi},~ s = 0,\cdots,k \\ & A x + B y^s + G z^s \le b - C w^s,~ \forall w^s \in O \end{aligned} \end{equation} The optimal solution is $x^{k+1}$, $\eta^{k+1}$; update $LB = c^T x^{k+1} + \eta^{k+1} $; \STATE Solve problem (\ref{eq:App-03-MILP-Recoure-Trilevel}) with $x^{k+1}$, whose optimal solution is $(z^{k+1},w^{k+1},u^{k+1})$ and optimal value is $R^{k+1}$; update $O = O \cup \{w^{k+1}\}$, $UB = \min \{UB, c^T x^{k+1} + R^{k+1}\}$, and create new variables $(y^{k+1},z^{k+1})$ and scenario cuts \begin{equation} \label{eq:App-03-ARO-Nested-ASG-Cut} \begin{gathered} \eta \ge d^T y^{k+1} + g^T z^{k+1},~ z^{k+1} \in {\rm \Phi} \\ A x + B y^{k+1} + G z^{k+1} \le b - C w^{k+1} \end{gathered} \end{equation} \STATE If $UB - LB \le \varepsilon$, terminate and report the optimal solution and optimal value; otherwise, add the scenario cuts (\ref{eq:App-03-ARO-Nested-ASG-Cut}) to the master problem (\ref{eq:App-03-ARO-Nested-ASG-Master}), update $k \leftarrow k+1$, and go to step 2; \end{algorithmic} \label{Ag:App-03-ARO-Nested-ASG} \end{algorithm} As a short conclusion, to overcome the limitation of traditional static RO approaches, which require that all decisions be made without exact information on the underlying uncertainty, ARO employs a two-stage decision-making framework and allows a subset of decision variables to be determined after the uncertain data are revealed. Under some special decision rules, computational tractability can be preserved. In fully adjustable cases, the ARO problem can be solved by a decomposition algorithm. The subproblem comes down to a (mixed-integer) linear max-min problem, which is generally challenging to solve. We have introduced MILP reformulations for special classes of uncertainty sets, which are compatible with commercial solvers, and help solve engineering optimization problems in a systematic way. \section{Distributionally Robust Optimization} \label{App-C-Sect03} The static and adjustable RO models presented in Sect. \ref{App-C-Sect01} and Sect. \ref{App-C-Sect02} do not rely on specifying probability distributions of the uncertain data, which are used in SO approaches for generating scenarios, evaluating the probability of constraint violation, or deriving analytic solutions for some specific problems. Instead, the RO design principle aims to cope with the worst-case scenario in a pre-defined uncertainty set in the space of uncertain variables, which is a salient distinction between these two approaches.
If the exact probability distribution were precisely known, optimal solutions to SO models would be less conservative than the robust ones from the statistical perspective. However, the optimal solution to an SO model could have poor statistical performance if the actual distribution is not identical to the designated one \cite{Bertsimas-2006}. As for the RO approach, since it hedges against the worst-case scenario, which rarely happens in reality, the robust strategy could be conservative and thus suboptimal in most cases. A method which aims to build a bridge connecting the SO and RO approaches is DRO, whose optimal solutions are designed for the worst-case probability distribution within a family of candidate distributions described by statistical information, such as moments, and structural properties, such as symmetry, unimodality, and so on. This approach is generally less conservative than traditional RO because the dispersion effect of the uncertainty is taken into account, i.e., the probability of an extreme event is low. Meanwhile, the statistical performance of the solution is less sensitive to perturbations in the probability distribution than that of an SO model, as it hedges against the worst distribution. Publications on this method have been proliferating rapidly in the past few years. This section only sheds light on some of the most representative methods which have been used in energy system studies. \subsection{Static Distributionally Robust Optimization} \label{App-C-Sect03-01} In analogy with the terminology used in Sect. \ref{App-C-Sect01}, ``static'' means that all decision variables are of the here-and-now type. Theoretical outcomes in this part mainly come from \cite{Static-DRO}. A static DRO problem can be formulated as \begin{equation} \label{eq:App-03-DRO-Model-1} \begin{aligned} \min_x ~~ & c^T x \\ \mbox{s.t.} ~~ & x \in X \\ & \Pr \left( a_i (\xi)^T x \le b_i(\xi),~ i=1,\cdots,m \right) \ge 1 - \varepsilon,~ \forall f(\xi) \in {\mathcal P} \end{aligned} \end{equation} where $x$ is the decision variable, $X$ is a closed and convex set that is independent of the uncertain parameter, $c$ is a deterministic vector, and $\xi$ is the uncertain data, whose probability density function $f(\xi)$ is not known exactly and belongs to $\mathcal P$, a set of candidate distributions. The robust chance constraint in (\ref{eq:App-03-DRO-Model-1}) requires a finite number of linear inequalities depending on $\xi$ to be met with a probability of at least $1-\varepsilon$, regardless of the true probability density function of $\xi$. We assume the uncertain coefficients $a_i$ and $b_i$ are linear functions in $\xi$, i.e. \begin{equation} \begin{gathered} a_i(\xi) = a^0_i + \sum_{j=1}^k a^j_i \xi_j \\ b_i(\xi) = b^0_i + \sum_{j=1}^k b^j_i \xi_j \end{gathered} \notag \end{equation} where $a^0_i$, $a^j_i$ are constant vectors and $b^0_i$, $b^j_i$ are constant scalars. Defining \begin{equation} y^j_i (x) = (a^j_i)^T x - b^j_i,~\forall i,~ \forall j \notag \end{equation} the chance constraint in (\ref{eq:App-03-DRO-Model-1}) can be expressed as \begin{equation} \label{eq:App-03-DRO-RCC} \Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0,~ i=1,\cdots,m \right) \ge 1 - \varepsilon,~ \forall f(\xi) \in {\mathcal P} \end{equation} where the vector $y_i(x)=[y^1_i(x),\cdots,y^k_i(x)]^T$ is affine in $x$. Since the objective is deterministic and the probability of constraint violation is bounded by a small $\varepsilon$, problem (\ref{eq:App-03-DRO-Model-1}) is also called a robust chance-constrained program.
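Before discussing tractable reformulations, a small Monte Carlo sketch illustrates why the worst case over $\mathcal P$ matters: for a fixed $x$ (so fixed $y^0(x)$ and $y(x)$, with hypothetical values below), two candidate distributions sharing the same mean and covariance can produce very different violation probabilities:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
y0, y = -1.0, np.array([1.0, 1.0])   # fixed y^0(x), y(x) (hypothetical)
mu, Sigma = np.zeros(2), 0.25 * np.eye(2)

def violation(samples):              # empirical Pr(y0 + y^T xi > 0)
    return np.mean(y0 + samples @ y > 0)

# Two members of P with identical first- and second-order moments
xi_gauss = rng.multivariate_normal(mu, Sigma, 100_000)
xi_twopoint = rng.choice([-0.5, 0.5], size=(100_000, 2))
print(violation(xi_gauss), violation(xi_twopoint))   # ~0.079 vs 0.0
\end{verbatim}
With $\varepsilon = 0.05$, the chance constraint holds under the two-point distribution but fails under the Gaussian one, so certifying a single member of $\mathcal P$ is not enough; hence the infimum in (\ref{eq:App-03-DRO-DRCC}) below.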
Chance constraints can be transformed into tractable ones that are convex in the variable $x$ only in a few special cases. For example, if $\xi$ follows a Gaussian distribution, $\varepsilon \le 0.5$, and $m=1$, then the individual chance constraint without distribution uncertainty is equivalent to a single SOC constraint \cite{CCO-Gauss}. For $m >1$, joint chance constraints form a convex feasible region when the right-hand side terms $b_i(\xi)$ are uncertain and follow a log-concave distribution \cite{Static-DRO,CCP-RHS-Log-Concave}, while the coefficients $a_i$, $i=1,\cdots,m$ are deterministic. Constraint (\ref{eq:App-03-DRO-RCC}) is even more challenging at first sight: not only the random vector $\xi$, but also the probability distribution function $f(\xi)$ itself is uncertain. This is because in many practical situations the probability distribution must be estimated from historical data, which may not be available in sufficient quantity. Typically, one may only have access to some statistical indicators of $f(\xi)$, e.g. its mean value, covariance, and support set. Using a specific $f(\xi) \in \mathcal P$ may lead to over-optimistic solutions which fail to satisfy the probability guarantee under the true distribution. Similar to the paradigm in static RO, a prudent way to immunize a chance constraint against an uncertain probability distribution is to investigate the worst case, inspiring the following distributionally robust chance constraint, which is equivalent to (\ref{eq:App-03-DRO-RCC}) \begin{equation} \label{eq:App-03-DRO-DRCC} \inf_{f(\xi) \in \mathcal P} \Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0,~ i=1,\cdots,m \right) \ge 1 - \varepsilon \end{equation} Clearly, if $x$ satisfies (\ref{eq:App-03-DRO-DRCC}), the probability of constraint violation is upper bounded by $\varepsilon$ under the true probability distribution of $\xi$. This section introduces convex optimization models for approximating robust chance constraints under uncertain probability distributions, whose first- and second-order moments as well as the support set (or equivalently the feasible region) of the random variable are known. More precisely, we let $\mathbb E_P (\xi) = \mu \in \mathbb R^k$ be the mean value and $\mathbb E_P ((\xi-\mu)(\xi-\mu)^T) = {\rm \Sigma} \in \mathbb S^k_{++}$ be the covariance matrix of the random variable $\xi$ under the true distribution $P$. We define the moment matrix \begin{equation} {\rm \Omega} = \begin{bmatrix} {\rm \Sigma} + \mu \mu^T & \mu \\ \mu^T & 1 \end{bmatrix} \notag \end{equation} for ease of notation. To help readers understand the fundamental ideas in DRO, we briefly introduce the worst-case expectation problem, which will be used throughout this section. Recall that $\mathcal P$ represents the set of all probability distributions on $\mathbb R^k$ with mean vector $\mu$ and covariance matrix ${\rm \Sigma} \succ 0$. The problem is formulated as \begin{equation} \theta^m_P = \sup_{f(\xi) \in \mathcal P} \mathbb E \left[ (g(\xi))^+ \right] \notag \end{equation} where $g:\mathbb R^k \to \mathbb R$ is a function of $\xi$, and $(g(\xi))^+$ means the maximum of 0 and $g(\xi)$.
Writing the problem in an integral form gives \begin{equation} \label{eq:App-03-DRO-Worst-Expectation-Primal} \begin{aligned} \theta^m_P = \sup_{f(\xi) \in \mathcal P} ~~ & \int_{\xi \in \mathbb R^k} \max \{0, g(\xi) \} f(\xi) \mbox{d} \xi \\ \mbox{s.t.} ~~ & f(\xi) \ge 0,~ \forall \xi \in \mathbb R^k \\ & \int_{\xi \in \mathbb R^k} f(\xi) \mbox{d} \xi = 1 :~ \lambda_0 \\ & \int_{\xi \in \mathbb R^k} \xi f(\xi) \mbox{d} \xi = \mu :~ \lambda \\ & \int_{\xi \in \mathbb R^k} \xi \xi^T f(\xi) \mbox{d} \xi = {\rm \Sigma} + \mu \mu^T :~ {\rm \Lambda} \end{aligned} \end{equation} In problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the decision variables are the values of $f(\xi)$ over all possible $\xi \in \mathbb R^k$, so there are infinitely many decision variables, and problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) is an infinite-dimensional LP. The first two constraints force $f(\xi)$ to be a valid distribution function; the last two ensure consistent first- and second-order moments. The optimal solution gives the worst-case distribution. However, it is difficult to solve (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) in its primal form. We now associate dual variables $\lambda_0 \in \mathbb R$, $\lambda \in \mathbb R^k$, and ${\rm \Lambda} \in \mathbb S^k$ with each integral constraint, and the dual problem of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) can be constructed following the duality theory of conic LP, which is given by \begin{equation} \label{eq:App-03-DRO-Worst-Expectation-Dual} \begin{aligned} \theta^m_D = \inf_{\lambda_0,\lambda,{\rm \Lambda} } ~~ & \lambda_0 + \mu^T \lambda + \mbox{tr} [{\rm \Lambda}^T({\rm \Sigma} + \mu \mu^T)] \\ \mbox{s.t.} ~~ & \lambda_0 + \xi^T \lambda + \mbox{tr} [{\rm \Lambda}^T (\xi \xi^T)] \\ & \quad \ge \max \{ 0,g(\xi) \}, \forall \xi \in \mathbb R^k \end{aligned} \end{equation} To understand the dual form in (\ref{eq:App-03-DRO-Worst-Expectation-Dual}), we can imagine a discrete version of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), in which $\xi_1$, $\cdots$, $\xi_n$ are sampled scenarios of the uncertain parameter, and their associated probabilities $f(\xi_1)$, $\cdots$, $f(\xi_n)$ are the decision variables of (\ref{eq:App-03-DRO-Worst-Expectation-Primal}). Moreover, if we replace the integrals in the constraints with summations, (\ref{eq:App-03-DRO-Worst-Expectation-Primal}) reduces to a traditional LP, and its dual is also an LP, where the constraint becomes \begin{equation*} \lambda_0 + \xi^T_i \lambda + \mbox{tr} [{\rm \Lambda}^T (\xi_i \xi^T_i)] \ge \max \{ 0,g(\xi_i) \},~ i = 1, \cdots, n \end{equation*} Letting $n \to +\infty$ so that the samples spread over $\mathbb R^k$, we obtain the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}). Unlike the primal problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), which has infinitely many decision variables, the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) has finitely many variables and an infinite number of constraints. In fact, we are optimizing over the coefficients of a quadratic polynomial in $\xi$. Because ${\rm \Sigma} \succ 0$, the Slater condition is met, and thus strong duality holds (see, e.g., \cite{Zero-Gap-GPI}), i.e., $\theta^m_P = \theta^m_D$. In the following, we eliminate $\xi$ and reduce the semi-infinite constraint to convex ones in the dual variables $\lambda_0$, $\lambda$, and $\rm \Lambda$.
Recalling the definition of matrix $\rm \Omega$, problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) can be expressed in the compact form \begin{equation} \label{eq:App-03-DRO-Worst-Expectation-Dual-Comp} \begin{aligned} \inf_{M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\ \mbox{s.t.} ~~ & \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge 0,~ \forall \xi \in \mathbb R^k \\ & \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge g(\xi),~ \forall \xi \in \mathbb R^k \\ \end{aligned} \end{equation} where the matrix decision variable is \begin{equation} M = \begin{bmatrix} {\rm \Lambda} & \dfrac{\lambda}{2} \\ \dfrac{\lambda^T}{2} & \lambda_0 \end{bmatrix} \notag \end{equation} and the first constraint is equivalent to an LMI $M \succeq 0$. A special case of the worst-case expectation problem is \begin{equation} \label{eq:App-03-DRO-GPI-Primal} \theta^m_P = \sup_{f(\xi) \in \mathcal P} \Pr [ \xi \in S ] \end{equation} which quantifies the maximum probability of the event $\xi \in S$, where $S$ is a Borel measurable set. This problem has a close relationship with the generalized probability inequalities discussed in \cite{Zero-Gap-GPI} and the generalized moments problem studied in \cite{Moment-Book}. By defining the indicator function \begin{equation} \mathbb I_{S} (\xi) = \left\{ \begin{gathered} 1 \\ 0 \end{gathered} \quad \begin{lgathered} \mbox{if } \xi \in S \\ \mbox{otherwise} \end{lgathered} \right. \notag \end{equation} the dual problem of (\ref{eq:App-03-DRO-GPI-Primal}) can be written as \begin{equation} \label{eq:App-03-DRO-GPI-Dual} \begin{aligned} \inf_{M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\ \mbox{s.t.} ~~ & M \succeq 0,~ \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge 1,~ \forall \xi \in S \\ \end{aligned} \end{equation} which is a special case of (\ref{eq:App-03-DRO-Worst-Expectation-Dual-Comp}) when $g(\xi) = \mathbb I_S(\xi)$. Next we present how to formulate a robust chance constraint (\ref{eq:App-03-DRO-DRCC}) as convex constraints that can be recognized by convex optimization solvers. \vspace{12pt} {\noindent \bf 1. Individual chance constraints} Consider a single robust chance constraint \begin{equation} \label{eq:App-03-DRO-DRCC-Single} \inf_{f(\xi) \in \mathcal P} \Pr \left( y^0 (x) + y(x)^T \xi \le 0 \right) \ge 1 - \varepsilon \end{equation} Its feasible set in $x$ is denoted by $X^S_R$. To eliminate the optimization over the function $f(\xi)$, we leverage the concept of conditional value-at-risk (CVaR) introduced in \cite{CVaR}. For a given loss function $L(\xi)$ and tolerance $\varepsilon \in (0,1)$, the CVaR at level $\varepsilon$ is defined as \begin{equation} \label{eq:App-03-CVaR} \mbox{CVaR}(L(\xi), \varepsilon) = \inf_{\beta \in \mathbb R} \beta + \frac{1}{\varepsilon} \mathbb E_{f(\xi)} \left( \left[ L(\xi) - \beta \right]^+ \right) \end{equation} where the expectation is taken over a given probability distribution $f(\xi)$. CVaR is the conditional expectation of the loss exceeding the $(1-\varepsilon)$-quantile of the loss distribution. Indeed, the condition \begin{equation} \Pr \left[ L(\xi) \le \mbox{CVaR}(L(\xi), \varepsilon) \right] \ge 1- \varepsilon \notag \end{equation} holds regardless of the probability distribution and the loss function $L(\xi)$ \cite{Static-DRO}.
Therefore, to certify $\Pr(L(\xi) \le 0) \ge 1-\varepsilon$, a sufficient condition without probability evaluation is $\mbox{CVaR}(L(\xi), \varepsilon) \le 0$, or more precisely: \begin{equation} \label{eq:App-03-CVaR-CC} \begin{aligned} & \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \le 0 \\ \Longrightarrow & \inf_{f(\xi) \in \mathcal P} \Pr \left( y^0(x) + y(x)^T \xi \le 0 \right) \ge 1 - \varepsilon \end{aligned} \end{equation} According to (\ref{eq:App-03-CVaR}), the above worst-case CVaR can be expressed as \begin{equation} \begin{lgathered} \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \\ = \sup_{f(\xi) \in \mathcal P} \inf_{\beta \in \mathbb R} \left\{ \beta + \frac{1}{\varepsilon} \mathbb E_{f(\xi)} \left( \left[ y^0(x) + y(x)^T \xi - \beta \right]^+ \right) \right\} \\ = \inf_{\beta \in \mathbb R} \left\{ \beta + \frac{1}{\varepsilon} \sup_{f(\xi) \in \mathcal P} \mathbb E_{f(\xi)} \left( \left[ y^0(x) + y(x)^T \xi - \beta \right]^+ \right) \right\} \end{lgathered} \end{equation} The maximization and minimization operators are interchangeable because of the saddle point theorem in \cite{Saddle-Point}. Recalling the previous analysis, the worst-case expectation can be computed from the problem \begin{equation} \begin{aligned} \inf_{\beta,M \in \mathbb S^{k+1}} ~~ & \mbox{tr} [ {\rm \Omega}^T M ] \\ \mbox{s.t.} ~~ & M \succeq 0,~ \\ & \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge y^0(x) + y(x)^T \xi - \beta,~ \forall \xi \in \mathbb R^k \\ \end{aligned} \notag \end{equation} The semi-infinite constraint has a matrix quadratic form \begin{equation} \begin{bmatrix} \xi \\ 1 \end{bmatrix}^T \left( M - \begin{bmatrix} 0 & \dfrac{y(x)}{2} \\ \dfrac{y(x)^T}{2} & y^0(x) - \beta \end{bmatrix} \right) \begin{bmatrix} \xi \\ 1 \end{bmatrix} \ge 0,~ \forall \xi \in \mathbb R^k \notag \end{equation} which is equivalent to \begin{equation} M - \begin{bmatrix} 0 & \dfrac{y(x)}{2} \\ \dfrac{y(x)^T}{2} & y^0(x) - \beta \end{bmatrix} \succeq 0 \notag \end{equation} As a result, the worst-case CVaR can be calculated from an SDP \begin{equation} \label{eq:App-03-Worst-CVaR-SDP} \begin{aligned} \sup_{f(\xi) \in \mathcal P} ~~ & \mbox{CVaR} \left( y^0(x) + y(x)^T \xi, \varepsilon \right) \\ = \inf_{\beta, M} ~~ & \beta + \frac{1}{\varepsilon} \mbox{tr}({\rm \Omega}^T M) \\ \mbox{s.t.} ~~ & M \succeq 0 \\ & M \succeq \begin{bmatrix} 0 & \dfrac{y(x)}{2} \\ \dfrac{y(x)^T}{2} & y^0(x) - \beta \end{bmatrix} \end{aligned} \end{equation} It is shown in \cite{Static-DRO} that the implication $\Longrightarrow$ in (\ref{eq:App-03-CVaR-CC}) is in fact an equivalence $\Longleftrightarrow$ in static DRO. In conclusion, the robust chance constraint (\ref{eq:App-03-DRO-DRCC-Single}) can be written as a convex set in variables $x$, $\beta$, and $M$ as follows \begin{equation} \label{eq:App-03-DRO-XSR} X^S_R = \left\{ x ~\middle|~ \begin{lgathered} \exists \beta \in \mathbb R,~ M \succeq 0 \mbox{ such that} \\ \beta + \frac{1}{\varepsilon} \mbox{tr}({\rm \Omega}^T M) \le 0 \\ M \succeq \begin{bmatrix} 0 & \dfrac{y(x)}{2} \\ \dfrac{y(x)^T}{2} & y^0(x) - \beta \end{bmatrix} \end{lgathered} \right\}
\end{equation}
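For a fixed candidate $x$, membership in $X^S_R$ can be checked numerically by solving the SDP (\ref{eq:App-03-Worst-CVaR-SDP}). The following Python sketch uses the CVXPY modeler; the inputs ($y^0$, $y$ for a fixed $x$, the moments $\mu$, $\rm \Sigma$, and the risk level $\varepsilon$) are assumptions for illustration:
\begin{verbatim}
import cvxpy as cp
import numpy as np

def worst_case_cvar(y0, y, mu, Sigma, eps):
    """Worst-case CVaR of y0 + y^T xi over all distributions with
    mean mu and covariance Sigma (cf. eq. App-03-Worst-CVaR-SDP)."""
    k = len(mu)
    Omega = np.block([[Sigma + np.outer(mu, mu), mu[:, None]],
                      [mu[None, :], np.ones((1, 1))]])
    M = cp.Variable((k + 1, k + 1), symmetric=True)
    beta = cp.Variable()
    N = cp.bmat([[np.zeros((k, k)), (y / 2)[:, None]],
                 [(y / 2)[None, :], cp.reshape(y0 - beta, (1, 1))]])
    prob = cp.Problem(cp.Minimize(beta + cp.trace(Omega.T @ M) / eps),
                      [M >> 0, M - N >> 0])
    prob.solve()
    return prob.value        # x belongs to X_R^S iff this value <= 0

# Example with standardized moments and eps = 0.05 (assumed data):
val = worst_case_cvar(-1.0, np.array([0.2, 0.1]),
                      np.zeros(2), np.eye(2), 0.05)
print(val <= 1e-6)           # True: the chance constraint is certified
\end{verbatim}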
\vspace{12pt} {\noindent \bf 2. Joint chance constraints} Now consider the joint robust chance constraints \begin{equation} \label{eq:App-03-DRO-DRCC-Joint} \inf_{f(\xi) \in \mathcal P} \Pr \left( y^0_i (x) + y_i(x)^T \xi \le 0, i = 1,\cdots,m \right) \ge 1 - \varepsilon \end{equation} The feasible set in $x$ is denoted by $X^J_R$. Let $\alpha$ be a vector of strictly positive scaling parameters, and $\mathcal A = \{ \alpha ~|~ \alpha > 0 \}$. It is clear that the constraint \begin{equation} \label{eq:App-03-DRO-DRCC-Joint-Para} \inf_{f(\xi) \in \mathcal P} \Pr \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\} \le 0 \right] \ge 1 - \varepsilon \end{equation} imposes the same feasible region in the variable $x$ as (\ref{eq:App-03-DRO-DRCC-Joint}). Moreover, it turns out that the parameters $\alpha_i$ can be co-optimized to improve the quality of the convex approximation of $X^J_R$. (\ref{eq:App-03-DRO-DRCC-Joint-Para}) is a single robust chance constraint, and can be conservatively approximated by a worst-case CVaR constraint \begin{equation} \label{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR} \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\}, \varepsilon \right] \le 0 \end{equation} It defines a feasible region in the variable $x$ with auxiliary parameter $\alpha \in \mathcal A$, which is denoted by $X^J_R(\alpha)$. Clearly, $X^J_R(\alpha) \subseteq X^J_R$, $\forall \alpha \in \mathcal A$. Unlike (\ref{eq:App-03-Worst-CVaR-SDP}), condition (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) is $\alpha$-dependent. By observing the fact that \begin{equation} \begin{aligned} & \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge \max_{i=1,\cdots,m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\} - \beta,~ \forall \xi \in \mathbb R^k \\ \Longleftrightarrow & \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) - \beta,~ \forall \xi \in \mathbb R^k,~ i = 1,\cdots,m \\ \Longleftrightarrow & M - \begin{bmatrix} 0 & \dfrac{\alpha_i y_i(x)}{2} \\ \dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta \end{bmatrix} \succeq 0, ~ i = 1,\cdots, m \end{aligned} \notag \end{equation} and employing the optimization formulation of the worst-case expectation problem, the worst-case CVaR in (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) can be calculated by \begin{equation} \label{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR-SDP} \begin{aligned} & J(x,\alpha) = \sup_{f(\xi) \in \mathcal P} \mbox{CVaR} \left[ \max_{i=1,\cdots, m} \left\{ \alpha_i \left( y^0_i (x) + y_i(x)^T \xi \right) \right\}, \varepsilon \right] \\ = & \inf_{\beta \in \mathbb R} \left\{ \beta + \frac{1}{\varepsilon} \sup_{f(\xi) \in \mathcal P} \mathbb E_{f(\xi)} \left( \left[ \max_{i=1,\cdots,m} \left\{ \alpha_i \left( y^0_i(x) + y_i(x)^T \xi \right) \right\} - \beta \right]^+ \right) \right\} \\ = & \inf_{\beta, M} \left\{ \beta + \frac{1}{\varepsilon} \mbox{tr}({\rm \Omega}^T M) ~\middle|~ \mbox{s.t.} ~ M \succeq 0,~ M \succeq \begin{bmatrix} 0 & \dfrac{\alpha_i y_i(x)}{2} \\ \dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta \end{bmatrix}, \forall i \right\} \end{aligned} \end{equation} In conclusion, for any fixed $\alpha \in \mathcal A$, the worst-case CVaR constraint (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR}) can be written as a convex set in variables $x$, $\beta$, and $M$ as follows \begin{equation}
\label{eq:App-03-DRO-XJR-alpha} X^J_R(\alpha) = \left\{ x ~\middle|~ \begin{lgathered} \exists \beta \in \mathbb R,~ M \succeq 0 \mbox{ such that} \\ \beta + \frac{1}{\varepsilon} \mbox{tr}({\rm \Omega}^T M) \le 0 \\ M \succeq \begin{bmatrix} 0 & \dfrac{\alpha_i y_i(x)}{2} \\ \dfrac{\alpha_i y_i(x)^T}{2} & \alpha_i y^0_i(x) - \beta \end{bmatrix}, \forall i \end{lgathered} \right\} \end{equation} Moreover, it is revealed in \cite{Static-DRO} that the union $\bigcup_{\alpha \in \mathcal A} X^J_R(\alpha)$ gives an exact description of $X^J_R$, which indicates that the original robust chance-constrained program \begin{equation} \label{eq:App-03-DRO-Model-2} \min_x \left\{ c^T x ~\middle|~ \mbox{s.t. } x \in X \cap X^J_R \right\} \end{equation} and the worst-case CVaR formulation \begin{equation} \min_{x,\alpha} \left\{ c^T x ~\middle|~ \mbox{s.t. } x \in X \cap X^J_R(\alpha),~ \alpha \in \mathcal A \right\} \notag \end{equation} or equivalently \begin{equation} \label{eq:App-03-DRO-Model-3} \min_{x,\alpha} \left\{ c^T x ~\middle|~ \mbox{s.t. } x \in X,~ \alpha \in \mathcal A,~ J(x,\alpha) \le 0 \right\} \end{equation} have the same optimal value. The constraints of (\ref{eq:App-03-DRO-Model-3}) contain bilinear matrix inequalities: if either $x$ or $\alpha$ is fixed, $J(x,\alpha) \le 0$ in (\ref{eq:App-03-DRO-Model-3}) reduces to LMIs; however, when both $x$ and $\alpha$ are variables, the constraint is non-convex, making problem (\ref{eq:App-03-DRO-Model-3}) difficult to solve. In view of this biconvex feature \cite{BLP-Mountain-Climbing-BCVX}, a sequential convex optimization procedure is presented to find an approximate solution. \begin{algorithm}[!htp] \normalsize \caption{\bf } \begin{algorithmic}[1] \STATE Choose a convergence tolerance $\epsilon > 0$; let the iteration counter $k=1$, let $x^0 \in X \cap X^J_R(\alpha)$ be a feasible solution for some $\alpha$, and set $f^0 = c^T x^0$; \STATE Solve the following subproblem with input $x^{k-1}$ \begin{equation} \label{eq:App-03-BMI-Sub} \min_\alpha ~ \left\{ J(x^{k-1},\alpha) ~|~ \mbox{s.t. } \alpha \ge \delta {\bf 1} \right\} \end{equation} where $\bf 1$ denotes the all-one vector with a compatible dimension, and $\delta > 0$ is a small constant; the worst-case CVaR functional is defined in (\ref{eq:App-03-DRO-DRCC-Joint-Para-Worst-CVaR-SDP}). The optimal solution is $\alpha^k$; \STATE Solve the following master problem with input $\alpha^k$ \begin{equation} \label{eq:App-03-BMI-Master} \min_x ~ \left\{ c^T x ~|~ \mbox{s.t. } x \in X,~ J(x,\alpha^k) \le 0\right\} \end{equation} The optimal solution is $x^k$ and the optimal value is $f^k$; \STATE If $|f^k - f^{k-1}| / |f^{k-1}| \le \epsilon$, terminate and report the optimal solution $x^k$; otherwise, update $k \leftarrow k+1$, and go to step 2. \end{algorithmic} \label{Ag:App-03-BMI-Mountain-Climbing} \end{algorithm} The main idea of this algorithm is to identify the best feasible region $X^J_R(\alpha)$ by successively solving the subproblem (\ref{eq:App-03-BMI-Sub}), thereby improving the objective value. A minimal implementation sketch is given below; the performance of Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing} is then intuitively explained.
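The sketch uses CVXPY and alternates between the subproblem (\ref{eq:App-03-BMI-Sub}) and the master problem (\ref{eq:App-03-BMI-Master}); the affine maps $y^0_i(x) = C_i x + d_i$, $y_i(x) = F_i x + g_i$, the box set $X$, and all numerical data are assumptions chosen purely for illustration (scaled so that $x=0$ is feasible):
\begin{verbatim}
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, k, m, eps, delta = 3, 2, 2, 0.05, 1e-3     # eps: risk level
C = rng.normal(size=(m, n)); d = -np.ones(m)  # y_i^0(x) = C[i] @ x + d[i]
F = [rng.normal(size=(k, n)) for _ in range(m)]
g = [0.1 * rng.normal(size=k) for _ in range(m)]
mu, Sigma = np.zeros(k), np.eye(k)
Omega = np.block([[Sigma + np.outer(mu, mu), mu[:, None]],
                  [mu[None, :], np.ones((1, 1))]])
c_obj = rng.normal(size=n)

def lmi(M, beta, ay0, ay):
    # M >= [[0, ay/2], [ay^T/2, ay0 - beta]], cf. eq. App-03-DRO-XJR-alpha
    N = cp.bmat([[np.zeros((k, k)), cp.reshape(ay, (k, 1)) / 2],
                 [cp.reshape(ay, (1, k)) / 2,
                  cp.reshape(ay0 - beta, (1, 1))]])
    return M - N >> 0

def solve_sub(x_val):                          # Step 2: optimize alpha
    alpha, beta = cp.Variable(m), cp.Variable()
    M = cp.Variable((k + 1, k + 1), symmetric=True)
    cons = [M >> 0, alpha >= delta]
    for i in range(m):
        cons += [lmi(M, beta, alpha[i] * float(C[i] @ x_val + d[i]),
                     alpha[i] * (F[i] @ x_val + g[i]))]
    cp.Problem(cp.Minimize(beta + cp.trace(Omega.T @ M) / eps), cons).solve()
    return alpha.value

def solve_master(alpha_val):                   # Step 3: optimize x
    x, beta = cp.Variable(n), cp.Variable()
    M = cp.Variable((k + 1, k + 1), symmetric=True)
    cons = [M >> 0, x >= -1, x <= 1,           # X: a box (assumption)
            beta + cp.trace(Omega.T @ M) / eps <= 0]
    for i in range(m):
        cons += [lmi(M, beta, alpha_val[i] * (C[i] @ x + d[i]),
                     alpha_val[i] * (F[i] @ x + g[i]))]
    prob = cp.Problem(cp.Minimize(c_obj @ x), cons)
    prob.solve()
    return x.value, prob.value

x_val, f_old = np.zeros(n), np.inf             # x = 0 assumed feasible
for _ in range(20):
    alpha_val = solve_sub(x_val)
    x_val, f_new = solve_master(alpha_val)
    if abs(f_new - f_old) <= 1e-6 * max(abs(f_old), 1.0):
        break
    f_old = f_new
\end{verbatim}
Each iteration solves two small SDPs, mirroring Steps 2 and 3 of Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing}.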
Because the parameter $\alpha$ is optimized in the subproblem (\ref{eq:App-03-BMI-Sub}) given the value $x^k$, there must be $J(x^k,\alpha^{k+1}) \le J(x^k,\alpha^k) \le 0$, $\forall k$, demonstrating that $x^k$ is a feasible solution of the master problem (\ref{eq:App-03-BMI-Master}) in iteration $k+1$; therefore, the optimal values of (\ref{eq:App-03-BMI-Master}) in two consecutive iterations satisfy $c^T x^{k+1} \le c^T x^k$, as the objective evaluated at the optimal solution $x^{k+1}$ in iteration $k+1$ attains a value no greater than that incurred at any feasible solution. Consequently, the sequence of optimal values $f^k$, $k=1,2,\cdots$ is monotonically decreasing. If $X$ is bounded, the sequence of optimal solutions $x^k$ is also bounded, and the optimal value converges. Algorithm \ref{Ag:App-03-BMI-Mountain-Climbing} does not necessarily find the global optimum of problem (\ref{eq:App-03-DRO-Model-3}). Nevertheless, it is attractive in practice due to its robustness, since it involves only convex optimization. In many practical applications, the uncertain data $\xi$ is known to lie within a strict subset of $\mathbb R^k$, which is called the support set. We briefly outline how to incorporate the support set into the distributionally robust chance constraints. We assume the support set $\rm \Xi$ is the intersection of a finite number of ellipsoids, i.e. \begin{equation} \label{eq:App-03-Support-Set} {\rm \Xi} = \left\{ \xi \in \mathbb R^k ~\middle|~ \xi^T W_i \xi \le 1,~ i = 1,\cdots,l \right\} \end{equation} where $W_i \in \mathbb S^k_+$, $i=1,\cdots,l$, and we have $\Pr(\xi \in {\rm \Xi}) = 1$. Let $\mathcal P_{\rm \Xi}$ be the set of all candidate probability distributions supported on $\rm \Xi$ which have identical first- and second-order moments. Consider the worst-case expectation problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}).
If we replace $\mathcal P$ with $\mathcal P_{\rm \Xi}$, the constraints of the dual problem (\ref{eq:App-03-DRO-Worst-Expectation-Dual}) become \begin{alignat}{2} \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T & \ge 0,~ & \quad & \forall \xi \in {\rm \Xi} \label{eq:App-03-S-Lemma-1} \\ \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T & \ge g(\xi),~ & & \forall \xi \in {\rm \Xi} \label{eq:App-03-S-Lemma-2} \end{alignat} According to (\ref{eq:App-03-Support-Set}), $1-\xi^T W_i \xi$ is non-negative for all $i$ if and only if $\xi \in {\rm \Xi}$; hence a sufficient condition for (\ref{eq:App-03-S-Lemma-1}) is the existence of constants $\tau_i \ge 0$, $i = 1,\cdots,l$, such that \begin{equation} \label{eq:App-03-S-Lemma-3} \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T - \sum_{i=1}^l \tau_i \left( 1-\xi^T W_i \xi \right) \ge 0,~ \forall \xi \in \mathbb R^k \end{equation} Under this condition, as long as $\xi \in {\rm \Xi}$, we have \begin{equation*} \begin{bmatrix} \xi^T & 1 \end{bmatrix} M \begin{bmatrix} \xi^T & 1 \end{bmatrix}^T \ge \sum_{i=1}^l \tau_i \left( 1-\xi^T W_i \xi \right) \ge 0 \end{equation*} Arranging (\ref{eq:App-03-S-Lemma-3}) as a matrix quadratic form \begin{equation*} \begin{bmatrix} \xi^T & 1 \end{bmatrix} \left(M - \sum_{i=1}^l \tau_i \begin{bmatrix} -W_i & {\bf 0} \\ {\bf 0}^T & 1 \end{bmatrix} \right) \begin{bmatrix} \xi \\ 1 \end{bmatrix} \ge 0,~ \forall \xi \in \mathbb R^k \end{equation*} shows that (\ref{eq:App-03-S-Lemma-1}) can be conservatively reduced to an LMI in variables $M$ and $\tau$ \begin{equation} \label{eq:App-03-S-Lemma-4} M - \sum_{i=1}^l \tau_i \begin{bmatrix} -W_i & {\bf 0} \\ {\bf 0}^T & 1 \end{bmatrix} \succeq 0 \end{equation} For similar reasons, by letting $g(\xi) = y^0(x) + y(x)^T \xi - \beta$, (\ref{eq:App-03-S-Lemma-2}) can be conservatively approximated by the following LMI \begin{equation} \label{eq:App-03-S-Lemma-5} M - \sum_{i=1}^l \tau_i \begin{bmatrix} -W_i & {\bf 0} \\ {\bf 0}^T & 1 \end{bmatrix} \succeq \begin{bmatrix} 0 & \frac{1}{2} y(x) \\ \frac{1}{2} y(x)^T & y^0(x) - \beta \end{bmatrix} \end{equation} In fact, (\ref{eq:App-03-S-Lemma-4}) and (\ref{eq:App-03-S-Lemma-5}) are special cases of the S-Lemma. Based upon these outcomes, most formulations in this section can be extended to consider the bounded support set $\rm \Xi$ in the form of (\ref{eq:App-03-Support-Set}). For polyhedral and some special classes of convex support sets, one may utilize the nonlinear Farkas lemma (Lemma 2.2 in \cite{Static-DRO}) to derive tractable reformulations. \subsection{Adjustable Distributionally Robust Optimization} \label{App-C-Sect03-02} As explained in Appendix \ref{App-C-Sect01}, the traditional static RO encounters difficulties in dealing with equality constraints. This plight remains in the DRO approach under a static setting. Consider $x + \xi = 1$ where $\xi \in [0,0.1]$ is uncertain, while its mean and variance are known. For any given $x^*$, the worst-case probability $\inf_{f(\xi) \in \mathcal P} \Pr[x^* + \xi = 1] =0$, because one can always find a feasible probability distribution function $f(\xi)$ that satisfies the first- and second-order moment constraints while assigning zero density at the point $\xi = 1-x^*$, i.e., $f(1-x^*) = 0$. To overcome this difficulty, it is necessary to incorporate wait-and-see decisions.
A simple remedy is to impose an affine recourse policy without involving optimization in the second stage, giving rise to an affine-adjustable RO with distributional uncertainty and a linear decision rule, which can be solved by the method in Appendix \ref{App-C-Sect03-01}. This section aims to investigate the following adjustable DRO with completely flexible wait-and-see decisions \begin{equation} \label{eq:App-03-ADRO-Model-1} \min_{x \in X} \left\{ c^T x + \sup_{f(w) \in \mathcal P} \mathbb E_{f(w)} Q(x,w) \right\} \end{equation} where $x$ is the first-stage (here-and-now) decision, and $X$ is its feasible set; the uncertain parameter is denoted by $w$; the probability distribution $f(w)$ belongs to the Chebyshev ambiguity set (whose first- and second-order moments are known) \begin{equation} \label{eq:App-03-ADRO-Ambiguity-Set} \mathcal P =\left\{ f(w) \middle| \begin{gathered} f(w) \ge 0,~\forall w \in W \\ \int_{w \in W} f(w) \mbox{d} w = 1 \\ \int_{w \in W} w f(w)\mbox{d} w = \mu \\ \int_{w \in W} w w^T f(w)\mbox{d}w = {\rm \Theta} \end{gathered} \right\} \end{equation} supported on $W = \{w~|~ (w - \mu)^T Q (w-\mu) \le {\rm \Gamma} \}$, where the matrix ${\rm \Theta} = {\rm \Sigma} + \mu \mu^T$ represents the second-order moment, $\mu$ is the mean value, and $\rm \Sigma$ is the covariance matrix. The expectation in (\ref{eq:App-03-ADRO-Model-1}) is taken over the worst-case $f(w)$ in $\mathcal P$, and the second-stage problem under fixed $x$ and $w$ is the LP \begin{equation} \label{eq:App-03-ADRO-Model-2} Q(x,w) = \min_{y \in Y(x,w)} d^T y \end{equation} whose optimal value function is $Q(x,w)$. The feasible set of the second-stage problem is \begin{equation} Y(x,w) = \{ y ~|~ B y \le b - A x -C w \} \notag \end{equation} Matrices $A$, $B$, $C$ and vectors $b$, $c$, $d$ are constant coefficients in the model. We assume that the second-stage problem is always feasible and bounded, i.e., $Y(x,w) \ne \emptyset$ for all $x \in X$ and $w \in W$, and thus $Q(x,w)$ has a finite optimal value. This can be implemented by introducing wait-and-see type slack variables and adding penalties in the objective of (\ref{eq:App-03-ADRO-Model-2}). The difference between problems (\ref{eq:App-03-ARO}) and (\ref{eq:App-03-ADRO-Model-1}) stems from the descriptions of uncertainty and the criteria in the objective function: more information on the dispersion effect, such as the covariance matrix, is taken into account in the latter, and the objective function in (\ref{eq:App-03-ADRO-Model-1}) is an expectation reflecting the statistical behavior of the second-stage cost, rather than the one in (\ref{eq:App-03-ARO}), which is associated with only a single worst-case scenario and leaves the performance in all other scenarios unoptimized. Because the probability distribution is uncertain, it is prudent to investigate the worst-case outcome in which the expected cost of the second stage is maximized.
This formulation is advantageous in several ways: first, exact knowledge of the probability distribution is not required, and the optimal solution is insensitive to the choice among distributions with common mean and covariance; second, the dispersion of the uncertainty is also taken into account, which helps reduce model conservatism: since the variance is fixed, a scenario that lies far away from the forecast has a low probability; finally, it is often important to tackle the tail effect, i.e., the occurrence of a rare event may induce heavy losses in spite of its low probability. Such a phenomenon is naturally taken into account in (\ref{eq:App-03-ADRO-Model-1}). In what follows, we outline the method proposed in \cite{App03-Sect3-ADRO-1} to solve the adjustable DRO problem (\ref{eq:App-03-ADRO-Model-1}). A slight modification is that an ellipsoidal support set is considered. \vspace{12pt} {\noindent \bf 1. The worst-case expectation problem} We consider the following worst-case expectation problem with a fixed $x$ \begin{equation} \label{eq:App-03-ADRO-Sub-Worst-Expectation} \sup_{f(w) \in \mathcal P} \mathbb E_{f(w)} Q(x,w) \end{equation} According to the discussion of problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the dual problem of (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}) is \begin{equation} \label{eq:App-03-ADRO-Sub-Worst-Expectation-Dual} \begin{aligned} \min_{H,h,h_0}~~ & \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\ \mbox{s.t.}~~ & w^T H w + h^T w + h_0 \ge Q(x,w),~\forall w \in W \end{aligned} \end{equation} where $H$, $h$, $h_0$ are dual variables. Nevertheless, the optimal value function $Q(x,w)$ is not given in closed form. By LP duality, \begin{equation} Q(x,w) = \max_{u \in U} ~ u^T (b - A x - C w) \notag \end{equation} where $u$ is the dual variable of the LP (\ref{eq:App-03-ADRO-Model-2}), and its feasible set is given by \begin{equation} U = \{ u ~|~ B^T u = d,~ u \le 0 \} \notag \end{equation} Because we have assumed that $Q(x,w)$ is bounded, the optimal solution of the dual problem can be found at one of the extreme points of $U$, i.e., \begin{equation} \label{eq:App-03-ADRO-Sub-Critical-Vertex} \exists u^* \in \mbox{vert}(U): \ Q(x,w) = (b-Ax-Cw)^T u^* \end{equation} where $\mbox{vert}(U) = \{u^1,u^2,\cdots,u^{N_E }\}$ stands for the vertices of the polyhedron $U$, and $N_E = |\mbox{vert}(U)|$ is the cardinality of vert($U$). In view of this, the constraint of (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) can be expressed as \begin{equation} w^T H w + h^T w + h_0 \ge (b-Ax-Cw)^T u^i,~ \forall w \in W,~ i=1,\cdots,N_E \notag \end{equation} Recalling the definition of $W$, a certificate for the above condition is the existence of $\lambda \ge 0$ such that \begin{equation} \begin{aligned} w^T H w & + h^T w + h_0 - (b-Ax-Cw)^T u^i \\ & \ge \lambda [{\rm \Gamma} - (w-\mu)^T Q (w-\mu)],~ \forall w \in \mathbb R^k,~ i=1,\cdots,N_E \end{aligned} \notag \end{equation} which has the following compact matrix form \begin{equation} \label{eq:App-03-ADRO-Sub-Cons-LMI} \begin{bmatrix} w \\ 1 \end{bmatrix}^T M^i \begin{bmatrix} w \\ 1 \end{bmatrix} \ge 0, \forall w \in \mathbb R^k,~ i=1,\cdots,N_E \end{equation} where \begin{equation} \label{eq:App-03-ADRO-Sub-Cons-LMI-MI} M^i = \begin{bmatrix} H + \lambda Q & \dfrac{h+C^T u^i}{2} - \lambda Q \mu \\ \dfrac{h^T+ (u^i)^T C}{2}-\lambda \mu^T Q & h_0 -(b-Ax)^T u^i - \lambda ({\rm \Gamma} - \mu^T Q \mu) \end{bmatrix} \end{equation} and (\ref{eq:App-03-ADRO-Sub-Cons-LMI}) simply reduces to $M^i \succeq 0$, $i=1,\cdots,N_E$. A small numerical helper for assembling $M^i$ is sketched below.
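The following numpy helper (illustrative only; all arguments are the quantities defined above) assembles $M^i$ from (\ref{eq:App-03-ADRO-Sub-Cons-LMI-MI}); verifying the LMI $M^i \succeq 0$ then amounts to an eigenvalue test on a $(\dim(w)+1) \times (\dim(w)+1)$ matrix:
\begin{verbatim}
import numpy as np

def build_Mi(H, h, h0, lam, Q, Gamma, mu, u_i, A, b, C, x):
    """Assemble M^i of eq. (App-03-ADRO-Sub-Cons-LMI-MI)."""
    r = (h + C.T @ u_i) / 2 - lam * (Q @ mu)          # off-diagonal block
    s = h0 - (b - A @ x) @ u_i - lam * (Gamma - mu @ Q @ mu)
    return np.block([[H + lam * Q, r[:, None]],
                     [r[None, :], np.array([[s]])]])

# M^i is PSD iff its smallest eigenvalue is (numerically) non-negative:
# np.linalg.eigvalsh(build_Mi(...)).min() >= -1e-9
\end{verbatim}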
Finally, problem (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) comes down to the following SDP \begin{equation} \label{eq:App-03-ADRO-Sub-Worst-Expectation-Dual-SDP} \begin{aligned} \min_{H,h,h_0,\lambda}~~ & \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\ \mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ i=1,\cdots,N_E \\ & \lambda \in \mathbb R^+ \end{aligned} \end{equation} where $M^{i} (H,h,h_0,\lambda)$ is defined in (\ref{eq:App-03-ADRO-Sub-Cons-LMI-MI}). The above results can be readily extended if the support set is an intersection of ellipsoids. \vspace{12pt} {\noindent \bf 2. Adaptive constraint generation algorithm} Due to the positive definiteness of the covariance matrix $\rm \Sigma$, the duality gap between problems (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}) and (\ref{eq:App-03-ADRO-Sub-Worst-Expectation-Dual}) is zero \cite{App03-Sect3-ADRO-1}, and hence we can replace the worst-case expectation in (\ref{eq:App-03-ADRO-Model-1}) with its dual form, yielding \begin{equation} \label{eq:App-03-ADRO-Model-3} \begin{aligned} \min ~~ & c^T x + \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\ \mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ i=1,\cdots,N_E \\ & x \in X,~ \lambda \in \mathbb R^+ \end{aligned} \end{equation} Problem (\ref{eq:App-03-ADRO-Model-3}) is an SDP. However, the number of vertices of $U$, $|\mbox{vert}(U)|$, may grow exponentially with the dimension of $U$, and it is non-trivial to enumerate all of them. Fortunately, only the vertex that is optimal in the dual problem provides an active constraint, as shown in (\ref{eq:App-03-ADRO-Sub-Critical-Vertex}); the rest correspond to redundant inequalities. To identify the critical vertex in (\ref{eq:App-03-ADRO-Sub-Critical-Vertex}), we solve problem (\ref{eq:App-03-ADRO-Model-3}) in iterations: in the master problem, a subset of $\mbox{vert}(U)$ is used to formulate a relaxation; then we check whether the following constraint \begin{equation} \label{eq:App-03-ADRO-Sub-Double-Enumeration} w^T H w + h^T w + h_0 \ge (b-Ax-Cw)^T u,~ \forall w \in W,~ \forall u \in U \end{equation} is fulfilled. If yes, the relaxation is exact and the optimal solution is found; otherwise, we find a new vertex of $U$ at which constraint (\ref{eq:App-03-ADRO-Sub-Double-Enumeration}) is violated, and then add a cut to the master problem so as to tighten the relaxation, until constraint (\ref{eq:App-03-ADRO-Sub-Double-Enumeration}) is satisfied. The flowchart is summarized in Algorithm \ref{Ag:App-03-ADRO-Delayed-Vertex-Generation}. \begin{algorithm}[!htp] \normalsize \caption{\bf } \begin{algorithmic}[1] \STATE Choose a convergence tolerance $\epsilon > 0$ and an initial vertex set $V_E \subseteq \mbox{vert}(U)$. \STATE Solve the following master problem \begin{equation} \label{eq:App-03-ADRO-Model-AVG-Master} \begin{aligned} \min ~~ & c^T x + \mbox{tr}(H^T {\rm \Theta}) + \mu^T h + h_0 \\ \mbox{s.t.}~~ & M^{i} (H,h,h_0,\lambda) \succeq 0,~ \forall u^i \in V_E \\ & x \in X,~ \lambda \in \mathbb R^+ \end{aligned} \end{equation} The optimal value is $R^*$, and the optimal solution is $(x^*, H, h, h_0)$. \STATE Solve the following sub-problem with the obtained $(x^*, H, h, h_0)$ \begin{equation} \label{eq:App-03-ADRO-Model-AVG-Sub} \begin{aligned} \min_{w,u}~~ & w^T H w + h^T w+h_0 - (b-Ax^* -Cw)^T u \\ \mbox{s.t.}~~ & w \in W,~ u \in U \end{aligned} \end{equation} The optimal value is $r^*$, and the optimal solution is $u^*$ and $w^*$.
\STATE If $r^* \ge - \epsilon$, terminate and report the optimal solution $x^*$ and the optimal value $R^*$; otherwise, update $V_E = V_E \cup \{u^*\}$, add an LMI cut $M (H,h,h_0,\lambda) \succeq 0$ associated with the current $u^*$ to the master problem (\ref{eq:App-03-ADRO-Model-AVG-Master}), and go to step 2. \end{algorithmic} \label{Ag:App-03-ADRO-Delayed-Vertex-Generation} \end{algorithm} Algorithm \ref{Ag:App-03-ADRO-Delayed-Vertex-Generation} terminates in a finite number of iterations, which is bounded by $|\mbox{vert}(U)|$. In practice, it usually converges within a few iterations, because the sub-problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) in step 3 always identifies the most critical vertex of $\mbox{vert}(U)$. It is worth mentioning that the subproblem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) is a non-convex program. Although it can be solved by general NLP solvers, we suggest three approaches with different computational complexities and optimality guarantees. 1. If the support set $W = \mathbb R^k$, it can be verified that matrix $M^i$ becomes \begin{equation} \begin{bmatrix} H & \dfrac{h + C^T u^i}{2} \\ \dfrac{(h + C^T u^i)^T}{2} & h_0 - (b - Ax)^T u^i \end{bmatrix} \succeq 0 \notag \end{equation} Then there must be $H \succeq 0$, and the non-convexity appears only in the bilinear term $u^T C w$. In such a circumstance, problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) can be solved via a mountain climbing method similar to Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} (but here the mountain is actually a pit because the objective is to be minimized). 2. In the case that $W$ is an ellipsoid, the above iterative approach is still applicable; however, the $w$-subproblem in which $w$ is to be optimized may become non-convex because $H$ may be indefinite. Since $u$ is fixed in the $w$-subproblem, the non-convex term $w^T H w$ can be decomposed as a difference of two convex functions, $w^T (H + \alpha I) w - \alpha w^T w$, where $\alpha$ is a constant such that $H + \alpha I$ is positive-definite, and the $w$-subproblem can be solved by the convex-concave procedure elaborated in \cite{CCP-Boyd}, or any existing NLP solver. 3. As a non-convex QP, problem (\ref{eq:App-03-ADRO-Model-AVG-Sub}) can be globally solved by the MILP method presented in Appendix \ref{App-A-Sect04}. This method could be time-consuming as the problem size grows. \section{Data-driven Robust Stochastic Program} \label{App-C-Sect04} Most classical SO methods assume that the probability distribution of the uncertain factors is exactly known, which is an input of the problem. However, such information heavily relies on historical data, and may not be available at hand or accurate enough. Using an inaccurate distribution in a classical SO model could lead to biased results. To cope with ambiguous probability distributions, a natural way is to consider a set of possible candidates derived from available data, instead of a single distribution, just as the moment-inspired ambiguity set used in DRO. In this section, we investigate some useful SO models with distributional uncertainty described by divergence ambiguity sets, which we refer to as robust SO. When the distribution is discrete, the distributional uncertainty is interpreted as the perturbation of the probability value associated with each scenario; when the distribution is continuous, a distance between two density functions should be specified first. In this section, we consider $\phi$-divergence and Wasserstein metric based ambiguity sets.
\subsection{Robust Chance Constrained Stochastic Program} \label{App-C-Sect04-01} We introduce chance-constrained stochastic programs with distributional robustness. The ambiguous PDF is modeled based on $\phi$-divergence, and the optimal solution provides a constraint feasibility guarantee with the desired probability even under the worst-case distribution. In short, the underlying problem possesses the following features: 1) The PDF is continuous and the constraint violation probability is a functional. 2) Uncertain parameters do not explicitly appear in the objective function. Main results of this section come from \cite{App03-Sect4-RCCP}. \vspace{12pt} {\noindent \bf 1. Problem formulation} In a traditional chance-constrained stochastic linear program, the decision maker seeks a minimum-cost solution at which certain constraints are met with a given probability, yielding: \begin{equation} \label{eq:App-03-CCP-Model} \begin{aligned} \min ~~ & c^T x \\ \mbox{s.t.}~~ & \Pr [C(x,\xi)] \ge 1-\alpha \\ & x \in X \end{aligned} \end{equation} where $x$ is the vector of decision variables; $\xi$ is the vector of uncertain parameters, whose exact (joint) probability distribution is known to the decision maker; the vector $c$ represents the cost coefficients; $X$ is a polyhedron that is independent of $\xi$; $\alpha$ is the risk level or the maximum allowed probability of constraint violation; $C(x,\xi)$ collects all uncertainty dependent constraints, whose general form is given by \begin{equation} \label{eq:App-03-CCP-Cons-Uncertain} C(x,\xi) = \{\xi~|~ \exists y : A(\xi) x + B(\xi) y \le b(\xi) \} \end{equation} where $A$, $B$, $b$ are coefficient matrices that may depend on the uncertain parameter; $y$ is a recourse action that can be taken after $\xi$ is known. In the presence of $y$, we call (\ref{eq:App-03-CCP-Model}) a two-stage problem; otherwise, if $y$ is absent, it is a single-stage problem. In its current form, the cost of recourse actions is not considered in the objective function. If needed, we can add the second-stage cost $d^T y(\xi)$ to the objective function, where $\xi$ is the specific scenario to which $y(\xi)$ corresponds; for instance, robust optimization may consider a worst-case cost or worst-case regret scenario, while traditional SO often tackles the expected second-stage cost $\mathbb E[d^T y(\xi)]$. We leave it to the end of this section to discuss how to deal with a second-stage cost in the form of a worst-case expectation like (\ref{eq:App-03-ADRO-Sub-Worst-Expectation}), and show that the problem can be convexified under some technical assumptions. In the chance constraint, for a given $x$, the probability of constraint satisfaction can be evaluated for a particular probability distribution of $\xi$. Traditional studies on chance-constrained programs often assume that the distribution of $\xi$ is perfectly known. However, this assumption can be very strong because it requires a lot of historical data. Moreover, the optimal solution may be sensitive to the true distribution and thus highly suboptimal in practice.
To overcome these difficulties, a prudent method is to consider a set of probability distributions belonging to a pre-specified ambiguity set $D$, and require that the chance constraint be satisfied under all possible distributions in $D$, resulting in the following robust chance-constrained programming problem: \begin{equation} \label{eq:App-03-RCCP-Model} \begin{aligned} \min ~~ & c^T x \\ \mbox{s.t.}~~ & \inf_{f(\xi) \in D} \Pr[C(x,\xi)] \ge 1-\alpha \\ & x \in X \end{aligned} \end{equation} where $f(\xi)$ is the probability density function of the random variable $\xi$. The ambiguity set $D$ in (\ref{eq:App-03-RCCP-Model}), which encodes distributional information, can be constructed in a data-driven fashion, such as the moment based ones used in Appendix \ref{App-C-Sect03}. Please see \cite{Am-Set-Overview} for more information on establishing $D$ based on moment data and other structural properties, such as symmetry and unimodality. The tractability of (\ref{eq:App-03-RCCP-Model}) largely depends on the form of $D$. For example: if $D$ is built on the mean value and covariance matrix (which is called a Chebyshev ambiguity set), a single robust chance constraint can be reformulated as an LMI and a set of joint robust chance constraints can be approximated by BMIs \cite{Static-DRO}; the probability of constraint violation under more general moment based ambiguity sets can be evaluated by solving conic optimization problems \cite{Am-Set-Overview}. A shortcoming of the moment description is that it does not provide a direct measure of the distance between the candidate PDFs in $D$ and a reference distribution. Two PDFs with the same moments may differ a lot in other aspects. Furthermore, the worst-case distribution corresponding to a Chebyshev ambiguity set always puts more weight away from the mean value, subject to the variance. As such, the long-tail effect is a source of conservatism. In this section, we consider a confidence set built around a reference distribution. The motivation is that the decision maker may have some knowledge of the distribution the uncertainty follows; although such a distribution could be inexact, the true density function should not deviate far from it. To describe distributional ambiguity in terms of a PDF, the first problem is how to characterize the distance between two functions. One common measure of the distance between density functions is the $\phi$-divergence, which is defined as \cite{Am-Set-Phi-Divergence} \begin{equation} \label{eq:App-03-RCCP-Phi-Div} D_\phi(f \| f_0) = \int_{\rm \Omega} \phi \left( \dfrac{f(\xi)}{f_0(\xi)} \right) f_0(\xi) d\xi \end{equation} where $f$ and $f_0$ stand for the particular density function and the estimated one (or the reference distribution), respectively; the function $\phi$ satisfies: \begin{equation*} \begin{lgathered} \mbox{(C1)}~~ \phi(1) = 0 \\ \mbox{(C2)}~~ 0\phi(x/0) = \begin{cases} x \lim_{p \to +\infty} \phi(p)/p & \mbox{if } x > 0 \\ 0 & \mbox{if } x = 0 \end{cases} \\ \mbox{(C3)}~~ \phi(x)= +\infty~ \mbox{for } x<0 \\ \mbox{(C4)}~~ \phi(x)~ \mbox{is a convex function on}~ \mathbb R^+ \end{lgathered} \end{equation*} It is proposed in \cite{Am-Set-Phi-Divergence} that the ambiguity set can be built as: \begin{equation} \label{eq:App-03-Conf-Set-Phi-Div} D = \{P: D_\phi(f \| f_0) \le d, f={\rm d} P/{\rm d} \xi\} \end{equation} where the tolerance $d$ can be adjusted by the decision maker according to their attitude towards risk.
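To make the definition concrete, the following small Python sketch evaluates $\phi$-divergences between two discrete (histogram) distributions, which approximates the integral in (\ref{eq:App-03-RCCP-Phi-Div}) when densities are binned; the example histograms and $\phi$ choices are for illustration only, and bins with $f_0 > 0$ are assumed to satisfy $f > 0$ as well (condition (C2) handles the boundary cases):
\begin{verbatim}
import numpy as np

def phi_divergence(f, f0, phi):
    """Discrete approximation of D_phi(f || f0) = sum phi(f/f0) * f0."""
    f, f0 = np.asarray(f, float), np.asarray(f0, float)
    mask = f0 > 0                      # skip bins where f0 = 0
    return np.sum(phi(f[mask] / f0[mask]) * f0[mask])

kl = lambda x: x * np.log(x) - x + 1          # KL-divergence
hellinger = lambda x: (np.sqrt(x) - 1) ** 2   # Hellinger distance

f0 = np.array([0.25, 0.25, 0.25, 0.25])       # reference histogram
f  = np.array([0.30, 0.20, 0.30, 0.20])       # candidate histogram
print(phi_divergence(f, f0, kl), phi_divergence(f, f0, hellinger))
\end{verbatim}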
The ambiguity set in (\ref{eq:App-03-Conf-Set-Phi-Div}) is denoted by $D_{\phi}$, which should not be confused with the $\phi$-divergence $D_\phi(f \| f_0)$ itself. Compared to the moment-based ambiguity sets, especially the Chebyshev ambiguity set, where only the first- and second-order moments are involved, the density based description captures the overall profile of the ambiguous distribution, and thus may provide less conservative solutions. However, it does not guarantee consistent moments. Which one is better depends on data availability: if we are more confident in the reference distribution, (\ref{eq:App-03-Conf-Set-Phi-Div}) may be better; otherwise, if we only have limited statistical information such as the mean and variance, then the moment-based ones are more straightforward. \begin{table}[!htp] \footnotesize \renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{1em} \caption{Instances of $\phi$-divergences} \centering \begin{tabular}{cc} \toprule Divergence & function $\phi(x)$ \\ \midrule KL-divergence & $x \log x - x + 1$ \\ reverse KL-divergence & $- \log x$ \\ Hellinger distance & $({\sqrt x} -1)^2$ \\ Variation distance & $|x-1|$ \\ J-divergence & $(x-1)\log x$ \\ $\chi^2$ divergence & $(x-1)^2$ \\ $\alpha$-divergence & $\begin{cases} \dfrac{4}{1-\alpha^2} \left( 1-x^{(1+\alpha)/2} \right) & \mbox{if}~ \alpha \ne \pm 1 \\ x \ln x & \mbox{if}~ \alpha = 1 \\ - \ln x & \mbox{if}~ \alpha = -1 \end{cases}$ \\ \bottomrule \end{tabular} \label{tab:App03-RCCP-01} \end{table} Many commonly seen divergence measures are special cases of $\phi$-divergence, coinciding with a particular choice of the function $\phi$. Some examples are given in Table \ref{tab:App03-RCCP-01} \cite{App03-Sect4-Example-Phi-Div}. In what follows, we will use the KL-divergence. According to its corresponding function $\phi$, the KL-divergence is given by \begin{equation} \label{eq:App-03-RCCP-KL-Div} D_\phi(f \| f_0) = \int_{\rm \Omega} \log \left( \dfrac{f(\xi)}{f_0(\xi)} \right) f(\xi) d\xi \end{equation} Before presenting the main results in \cite{App03-Sect4-RCCP}, the definition of conjugate duality is given. For a univariate function $g: \mathbb R \to \mathbb R \cup \{+\infty\}$, its conjugate function $g^*: \mathbb R \to \mathbb R \cup \{+\infty\}$ is defined as \begin{equation*} g^*(t) = \sup_{x \in \mathbb R} \{tx - g(x)\} \end{equation*} For a valid function $\phi$ for $\phi$-divergence satisfying (C1)-(C4), its conjugate function $\phi^*$ is convex and nondecreasing, and the following condition holds \cite{App03-Sect4-RCCP} \begin{equation} \label{eq:App-03-DRSO-Conjugate-Ineq} \phi^*(x) \ge x \end{equation} Besides, if $\phi^*$ is a finite constant on a closed interval $[a, b]$, then it is a finite constant on the interval $(-\infty,b]$. \vspace{12pt} {\noindent \bf 2. Equivalent formulation} It is revealed in \cite{App03-Sect4-RCCP} that when the confidence set $D$ is constructed based on $\phi$-divergence, the robust chance constrained program (\ref{eq:App-03-RCCP-Model}) can be easily transformed into a traditional chance-constrained program (\ref{eq:App-03-CCP-Model}) at the reference distribution by calibrating the confidence tolerance $\alpha$.
\begin{theorem} \label{thm:App03-RCCP-Phi-Div} {\rm \cite{App03-Sect4-RCCP}} Let $\mathbb P_0$ be the cumulative distribution function generated by the density function $f_0$. Then the robust chance constraint \begin{equation} \label{eq:App-03-Thm-1} \inf_{\mathbb P(\xi) \in \{D_{\phi}(f\|f_0) \le d\} } \Pr[C(x,\xi)] \ge 1-\alpha \end{equation} constructed based on $\phi$-divergence is equivalent to the traditional chance constraint \begin{equation} \label{eq:App-03-Thm-2} \Pr\nolimits_0[C(x,\xi)] \ge 1-\alpha^\prime_+ \end{equation} where $\Pr_0$ means that the probability is evaluated at the reference distribution $\mathbb P_0$, $\alpha^\prime_+ = \max\{\alpha^\prime,0\}$, and $\alpha^\prime$ can be computed by \begin{equation*} \alpha^\prime = 1 - \inf_{z \in Z} \left\{ \dfrac{\phi^*(z_0+z)-z_0-\alpha z +d}{\phi^*(z_0+z)-\phi^*(z_0)} \right\} \end{equation*} where \begin{equation*} Z = \left\{ z \middle| \begin{lgathered} z>0,~ z_0 + \pi z \le l_\phi \\ \underline m(\phi^*) \le z+z_0 \le \overline m(\phi^*) \end{lgathered} \right\} \end{equation*} In the above formulas, the constants are $l_\phi = \lim_{x \to +\infty} \phi(x)/x$, $\overline m(\phi^*) = \inf \{ m: \phi^*(m)=+\infty\}$, and $\underline m(\phi^*) = \sup \{ m: \phi^*$ is a finite constant on $(-\infty,m]\}$; Table \ref{tab:App03-RCCP-02} summarizes their values for typical $\phi$-divergence measures. The parameter $\pi$ is given by \begin{equation*} \pi = \begin{cases} -\infty & \mbox{if Leb}\{[f_0=0]\}=0 \\ 0 & \mbox{if Leb $\{[f_0=0]\}>0$ and Leb$\{[f_0=0] \backslash C(x,\xi) \}=0$} \\ 1 & \mbox{otherwise} \end{cases} \end{equation*} where Leb$\{\cdot\}$ denotes the Lebesgue measure on $\mathbb R^{Dim(\xi)}$. \end{theorem} \begin{table}[!htp] \footnotesize \renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{1em} \caption{Values of $l_\phi$, $\underline m(\phi^*)$, and $\overline m(\phi^*)$ for $\phi$-divergences} \centering \begin{tabular}{cccc} \toprule $\phi$-Divergence & $l_\phi$ & $\underline m(\phi^*)$ & $\overline m(\phi^*)$\\ \midrule KL-divergence & $+\infty$ & $-\infty$ & $+\infty$ \\ Hellinger distance & $1$ & $-\infty$ & $1$ \\ Variation distance & $1$ & $-1$ & $1$ \\ J-divergence & $+\infty$ & $-\infty$ & $+\infty$ \\ $\chi^2$ divergence & $+\infty$ & $-2$ & $+\infty$ \\ \bottomrule \end{tabular} \label{tab:App03-RCCP-02} \end{table} The values of $\alpha^\prime$ for the Variation distance and the $\chi^2$ divergence have analytical expressions; for the KL divergence, $\alpha^\prime$ can be computed from a one-dimensional line search. The results are shown in Table \ref{tab:App03-RCCP-03}.
\begin{table}[!htp] \footnotesize \renewcommand{\arraystretch}{1.3} \renewcommand{\tabcolsep}{1em} \caption{Values of $\alpha^\prime$ for some $\phi$-divergences} \centering \begin{tabular}{cc} \toprule $\phi$-Divergence & $\alpha^\prime$ \\ \midrule $\chi^2$ divergence & $\alpha^\prime = \alpha- \dfrac{\sqrt{d^2 + 4d (\alpha-\alpha^2)}-(1-2\alpha)d}{2d+2}$ \\ & \\ Variation distance & $\alpha^\prime = \alpha - \dfrac{1}{2}d$ \\ & \\ KL-divergence & $\alpha^\prime = 1- \inf_{x \in (0,1)} \left\{ \dfrac{{\rm e}^{-d}x^{1-\alpha}-1}{x-1} \right\}$ \\ \bottomrule \end{tabular} \label{tab:App03-RCCP-03} \end{table} For the KL divergence, calculating $\alpha^\prime$ entails solving $\inf_{x\in (0,1)} h(x)$ where \begin{equation*} h(x) = \dfrac{{\rm e}^{-d}x^{1-\alpha}-1}{x-1} \end{equation*} Its first-order derivative is given by \begin{equation*} h^\prime (x) = \dfrac{1-\alpha{\rm e}^{-d}x^{1-\alpha}-(1-\alpha){\rm e}^{-d} x^{-\alpha}}{(x-1)^2},~~ \forall x \in (0,1) \end{equation*} To claim the convexity of $h(x)$, we need to show that $h^\prime (x)$ is an increasing function in $x \in (0,1)$. To this end, first notice that the denominator $(x-1)^2$ is a decreasing function of $x$ on the open interval $(0,1)$; then we can show the numerator is an increasing function of $x$, because its first-order derivative gives \begin{equation*} (1-\alpha{\rm e}^{-d}x^{1-\alpha}-(1-\alpha){\rm e}^{-d} x^{-\alpha})^\prime _x = \alpha (1-\alpha){\rm e}^{-d} (x^{-\alpha-1}-x^{-\alpha}) >0,~ \forall x \in (0,1) \end{equation*} Hence $h^\prime(x)$ is monotonically increasing, and $h(x)$ is a convex function of $x$. Moreover, because $h^\prime(x)$ is continuous in $(0,1)$, and $\lim_{x \to 0^+} h^\prime(x) = -\infty$, $\lim_{x \to 1^-} h^\prime(x) = +\infty$, there must be some $x^* \in (0,1)$ such that $h^\prime(x^*)=0$, i.e., the infimum of $h(x)$ is attainable. The minimum of $h(x)$ can be calculated by solving the nonlinear equation $h^\prime(x)=0$ via Newton's method, or by a derivative-free line search, such as the golden section search algorithm. Either scheme is computationally inexpensive. Finally, we discuss the connection between the modified tolerance $\alpha^\prime$ and its original value $\alpha$. Because a set of distributions is considered in (\ref{eq:App-03-Thm-1}), the threshold in (\ref{eq:App-03-Thm-2}) should be greater than the original one, i.e., $1-\alpha^\prime \ge 1-\alpha$ must hold. To see this, recalling inequality (\ref{eq:App-03-DRSO-Conjugate-Ineq}) of the conjugate function, we have \begin{equation*} \alpha \phi^* (z_0 + z) + (1 - \alpha) \phi^* (z_0) \ge \alpha (z_0 + z) + (1 - \alpha) z_0 \end{equation*} The right-hand side equals $\alpha z + z_0$; in the ambiguity set (\ref{eq:App-03-Conf-Set-Phi-Div}), $d$ is strictly positive, therefore \begin{equation*} \alpha \phi^* (z_0 + z) + (1 - \alpha) \phi^* (z_0) \ge \alpha z + z_0 -d \end{equation*} which gives \begin{equation*} \phi^* (z_0 + z) - z_0 - \alpha z + d \ge (1 - \alpha) (\phi^* (z_0 + z) - \phi^* (z_0) ) \end{equation*} i.e., \begin{equation*} \dfrac{\phi^* (z_0+z)-z_0-\alpha z+d}{\phi^* (z_0+z)-\phi^*(z_0)} \ge 1 - \alpha \end{equation*} for every feasible $z$. Taking the infimum over $z \in Z$ and recalling the expression of $\alpha^\prime$ in Theorem \ref{thm:App03-RCCP-Phi-Div}, we arrive at $1-\alpha^\prime \ge 1 - \alpha$, which is the desired conclusion.
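The line search for the KL divergence is easy to implement. The following Python sketch uses SciPy's bounded scalar minimizer (the golden section search mentioned above would work equally well); the values of $\alpha$ and $d$ are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def alpha_prime_kl(alpha, d):
    """Compute alpha' for the KL-divergence (Table of alpha' values).
    h is convex on (0, 1), so a bounded scalar minimization suffices."""
    h = lambda x: (np.exp(-d) * x ** (1 - alpha) - 1) / (x - 1)
    res = minimize_scalar(h, bounds=(1e-9, 1 - 1e-9), method='bounded')
    return 1 - res.fun       # may be negative; use max(alpha', 0) as in
                             # the theorem (alpha'_+)

print(alpha_prime_kl(alpha=0.05, d=0.01))   # strictly smaller than 0.05
\end{verbatim}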
Theorem \ref{thm:App03-RCCP-Phi-Div} concludes that the complexity of handling a robust chance constraint is almost the same as that of tackling a traditional chance constraint associated with the reference distribution $\mathbb P_0$, except for the effort of computing $\alpha^\prime$. If $\mathbb P_0$ belongs to the family of log-concave distributions, then the chance constraint is convex. As a special case, if $\mathbb P_0$ is the Gaussian distribution or a uniform distribution on an ellipsoidal support, a single chance constraint reduces to a second-order cone constraint \cite{App03-Sect4-Example-Q-Distribution}. For more general cases, the chance constraint is non-convex in $x$. In such a circumstance, we will use a risk based reformulation and the sample average approximation (SAA) approach. \vspace{12pt} {\noindent \bf 3. Risk and SAA based reformulation} Owing to the different description of distributional ambiguity and the presence of the wait-and-see decision $y$, constraint (\ref{eq:App-03-Thm-1}) is treated differently from the DRO problem (\ref{eq:App-03-DRO-Model-1}) with the static robust chance constraint (\ref{eq:App-03-DRO-DRCC}), which can be transformed into an SDP. As demonstrated in Theorem \ref{thm:App03-RCCP-Phi-Div}, it comes down to a traditional chance constraint (\ref{eq:App-03-Thm-2}), while the distributional ambiguity is taken into account by a modification of the confidence level. The remaining task is to express (\ref{eq:App-03-Thm-2}) in a solver-compatible form. \vspace{12pt} {\noindent \bf 1) Loss function} For given $x$ and $\xi$, the constraints in $C(x,\xi)$ cannot be met if no $y$ satisfying $A(\xi) x + B(\xi) y \le b(\xi)$ exists. To quantify the constraint violation under scenario $\xi$ and first-stage decision $x$, define the following loss function $L(x,\xi)$ \begin{equation} \label{eq:App-03-RCCP-Loss-Function} \begin{aligned} L(x,\xi) = \min_{y,\sigma} ~~ & \sigma \\ \text{s.t.}~~ & A(\xi) x + B(\xi) y \le b(\xi) + \sigma {\bf 1} \end{aligned} \end{equation} where {\bf 1} is an all-one vector of compatible dimension. If $L(x,\xi) \ge 0$, the minimal slackness $\sigma$ achievable with the help of the recourse action $y$ is defined as the loss; otherwise, the constraints are satisfiable after the uncertain parameter becomes known. As we assume $C(x,\xi)$ is a bounded polytope, problem (\ref{eq:App-03-RCCP-Loss-Function}) is always feasible and bounded below. Therefore, the loss function $L(x,\xi)$ is well-defined, and the chance constraint (\ref{eq:App-03-Thm-2}) can be written as \begin{equation} \label{eq:App-03-RCC-Loss-Fun} \Pr\nolimits_0 [ L(x,\xi) \leq 0 ] \geq 1 - \alpha^\prime_+ \end{equation} In this way, the joint chance constraints are consolidated into a single one, just like what has been done in (\ref{eq:App-03-DRO-DRCC-Joint}) and (\ref{eq:App-03-DRO-DRCC-Joint-Para}). \vspace{12pt} {\noindent \bf 2) VaR based reformulation: An MILP} For a given probability tolerance $\beta$ and a first-stage decision $x$, the $\beta$-VaR of the loss function $L(x,\xi)$ under the reference distribution $\mathbb P_0$ is defined as \begin{equation} \label{eq:App-03-RCC-VaR-Def} \beta \mbox{-VaR}(x) = \min \left\{ a \in \mathbb {R} \middle| \int_{L(x,\xi) \leq a} f_0(\xi) \mbox{d} \xi \ge \beta \right\} \end{equation} which gives the smallest threshold $a$ such that the loss is no greater than $a$ with probability at least $\beta$.
According to (\ref{eq:App-03-RCC-VaR-Def}), an equivalent expression of chance constraint (\ref{eq:App-03-RCC-Loss-Fun}) is \begin{equation} \label{eq:App-03-RCC-CC-VaR} (1 - \alpha^\prime_+) \mbox{-VaR} (x) \le 0 \end{equation} so that the probability evaluation is obviated. Furthermore, if SAA is used, (\ref{eq:App-03-RCC-Loss-Fun}) and (\ref{eq:App-03-RCC-CC-VaR}) indicate that the scenarios which lead to $L(x,\xi) > 0$ account for a fraction of at most $\alpha^\prime_+$ among all sampled data. Let $\xi_1,\xi_2, \cdots, \xi_q$ be $q$ scenarios sampled from the random variable $\xi$. We use $q$ binary variables $z_1, z_2, \cdots, z_q$ to identify possible infeasibility: $z_k = 1$ implies that the constraints cannot be satisfied in scenario $\xi_k$. To this end, let $M$ be a sufficiently large constant and consider the inequality \begin{equation} \label{eq:App-03-RCC-Loss-SSA} A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + M z_k \end{equation} In (\ref{eq:App-03-RCC-Loss-SSA}), if $z_k = 0$, the recourse action $y_k$ recovers all constraints in scenario $\xi_k$, and thus $C(x,\xi_k)$ is non-empty; otherwise, if no such recourse action $y_k$ exists, constraint violation takes place; in that case $z_k = 1$, so that (\ref{eq:App-03-RCC-Loss-SSA}) becomes redundant and effectively no constraint is imposed for scenario $\xi_k$. The fraction of sampled scenarios which incur inevitable constraint violations is counted by $\sum_{k=1}^q z_k/q$. So we can write out the following MILP reformulation of the robust chance-constrained program (\ref{eq:App-03-RCCP-Model}) based on VaR and SAA \begin{equation} \label{eq:App-03-RCCP-VaR-MILP} \begin{aligned} \min ~~ & c^T x \\ \mbox{s.t.}~~ & x \in X \\ & A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + M z_k,~ k=1,\cdots,q \\ & \sum_{k = 1}^{q} z_k \leq q\alpha^\prime_+,~ z_k \in \{ 0, 1 \},~ k=1,\cdots,q \end{aligned} \end{equation} In MILP (\ref{eq:App-03-RCCP-VaR-MILP}), constraint violation can happen in at most $q \alpha^\prime_+$ out of $q$ scenarios under the reference distribution; according to Theorem \ref{thm:App03-RCCP-Phi-Div}, the reliability requirement (\ref{eq:App-03-Thm-1}) under all possible distributions in the ambiguity set $D_\phi$ can be guaranteed by the selection of $\alpha^\prime_+$. Improved MILP formulations of chance constraints which do not rely on the specific big-M parameter are comprehensively studied in \cite{App03-Sect4-CC-SAA-MILP}, and some structural properties of the feasible region are revealed. \vspace{12pt} {\noindent \bf 3) CVaR based reformulation: An LP} The number of binary variables in MILP (\ref{eq:App-03-RCCP-VaR-MILP}) is equal to the number of sampled scenarios. To guarantee the accuracy of SAA, a large number of scenarios is required, preventing MILP (\ref{eq:App-03-RCCP-VaR-MILP}) from being solved efficiently. To ameliorate this plight, we provide a conservative LP approximation for problem (\ref{eq:App-03-RCCP-Model}) based on the properties of CVaR revealed in \cite{CVaR}.
\vspace{12pt} {\noindent \bf 3) CVaR based reformulation: An LP} The number of binary variables in MILP (\ref{eq:App-03-RCCP-VaR-MILP}) is equal to the number of sampled scenarios. To guarantee the accuracy of SAA, a large number of scenarios is required, preventing MILP (\ref{eq:App-03-RCCP-VaR-MILP}) from being solved efficiently. To alleviate this difficulty, we provide a conservative LP approximation for problem (\ref{eq:App-03-RCCP-Model}) based on the properties of CVaR revealed in \cite{CVaR}. The $\beta$-CVaR for the loss function $L(x,\xi)$ is defined as \begin{equation} \label{eq:App-03-VaR-Def} \beta \mbox{-CVaR} (x) = \frac{1} {1 - \beta} \int_{L(x,\xi) \ge \beta \text{-VaR} (x)} L(x,\xi) f_0(\xi) \mbox{d} \xi \end{equation} i.e., the conditional expectation of the loss exceeding $\beta$-VaR; therefore, the relation \begin{equation} \label{eq:App-03-VaR-CVaR} \beta \mbox{-VaR} \leq \beta \mbox{-CVaR} \end{equation} always holds, and a conservative approximation of constraint (\ref{eq:App-03-RCC-CC-VaR}) is \begin{equation} (1 - \alpha^\prime_+) \mbox{-CVaR} (x) \le 0 \label{eq:App-03-RCC-CC-CVaR} \end{equation} Inequality (\ref{eq:App-03-RCC-CC-CVaR}) is a sufficient condition for (\ref{eq:App-03-RCC-CC-VaR}) and (\ref{eq:App-03-RCC-Loss-Fun}). This conservative replacement is in line with the spirit of robust optimization. In what follows, we reformulate (\ref{eq:App-03-RCC-CC-CVaR}) in a solver-compatible form. According to \cite{CVaR}, the left-hand side of (\ref{eq:App-03-RCC-CC-CVaR}) is equal to the optimum of the following minimization problem \begin{equation} \min_{\gamma} \left\{ \gamma + \frac{1}{\alpha^\prime_+} \int_{\xi \in \mathbb {R}^K} \max \{ L(x,\xi) - \gamma, 0 \} f_0(\xi) \mbox{d} \xi \right\} \label{eq:App-03-CVaR-Opt-Int} \end{equation} By performing SAA, the integral in (\ref{eq:App-03-CVaR-Opt-Int}) becomes a summation over the discrete sampled scenarios $\xi_1,\xi_2, \cdots, \xi_q$, resulting in \begin{equation} \label{eq:App-03-CVaR-Opt-Discrete} \min_{\gamma} \left\{ \gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q \max \left\{ L(x,\xi_k) - \gamma, 0 \right\} \right\} \end{equation} By introducing auxiliary variables $s_k$, the feasible region defined by (\ref{eq:App-03-RCC-CC-CVaR}) can be expressed via \begin{gather*} \exists \gamma \in \mathbb R, s_k \in \mathbb R^+,~\sigma_k \in \mathbb R,~ k = 1,\cdots,q \\ \sigma_k - \gamma \le s_k,~ k = 1,\cdots,q \\ A(\xi_k) x + B(\xi_k) y_k \le b(\xi_k) + \sigma_k {\bf 1}, ~ k = 1,\cdots,q \\ \gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q s_k \leq 0 \end{gather*} Now we can write out the conservative LP reformulation for the robust chance-constrained program (\ref{eq:App-03-RCCP-Model}) based on CVaR and SAA \begin{equation} \label{eq:App-03-RCCP-CVaR-LP} \begin{aligned} \min_{x,y,s,\gamma} ~~ & c^T x \\ \mbox{s.t.}~~ & x \in X,~ \gamma + \frac{1}{q \alpha^\prime_+} \sum_{k=1}^q s_k \leq 0,~s_k \geq 0,~ k = 1,\cdots,q \\ & A(\xi_k) x + B(\xi_k) y_k - b(\xi_k) \le (\gamma + s_k) {\bf 1}, ~ k = 1,\cdots,q \end{aligned} \end{equation} where $\sigma_k$ has been eliminated. According to (\ref{eq:App-03-VaR-CVaR}), condition (\ref{eq:App-03-RCC-CC-CVaR}) guarantees (\ref{eq:App-03-RCC-CC-VaR}) as well as (\ref{eq:App-03-RCC-Loss-Fun}), so the chance constraint in (\ref{eq:App-03-Thm-1}) holds with a probability no less than (and usually higher than) $1-\alpha$, regardless of the true distribution in the ambiguity set $D_\phi$. Since (\ref{eq:App-03-VaR-CVaR}) is usually a strict inequality, this introduces a certain degree of conservatism in the CVaR based LP model (\ref{eq:App-03-RCCP-CVaR-LP}). Relations among the different mathematical models discussed in this section are summarized in Fig. \ref{fig:Fig-App03-01}. \begin{figure}[!htp] \centering \includegraphics[scale = 0.40]{Fig-App-03-01} \caption{Relations of the models discussed in this section.} \label{fig:Fig-App03-01} \end{figure}
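For a fixed $x$, the one-dimensional minimization in (\ref{eq:App-03-CVaR-Opt-Discrete}) admits a well-known minimizer: the $(1-\alpha^\prime_+)$-quantile of the sampled losses. A minimal sketch, assuming the losses $L(x,\xi_k)$ have been evaluated beforehand, e.g., via (\ref{eq:App-03-RCCP-Loss-Function}):
\begin{verbatim}
import numpy as np

def empirical_cvar(losses, alpha):
    # (1-alpha)-CVaR = min_gamma  gamma + mean(max(L-gamma,0))/alpha,
    # cf. eq. (App-03-CVaR-Opt-Discrete); any (1-alpha)-quantile of the
    # sampled losses minimizes the expression
    L = np.asarray(losses, dtype=float)
    gamma = np.quantile(L, 1.0 - alpha)    # VaR level
    return gamma + np.maximum(L - gamma, 0.0).mean() / alpha
\end{verbatim}
Constraint (\ref{eq:App-03-RCC-CC-CVaR}) holds for the given $x$ if and only if the returned value is non-positive.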
\vspace{12pt} {\noindent \bf 4. Considering second-stage cost} Finally, we elaborate on how to solve problem (\ref{eq:App-03-RCCP-Model}) with a second-stage cost in the sense of worst-case expectation, i.e. \begin{equation} \label{eq:App-03-RCCP-MEdy-1} \begin{aligned} \min_x~ & \left\{c^T x + \max_{P(\xi) \in D_{KL}} \mathbb E_P [Q(x,\xi)] \right\} \\ \mbox{s.t.}~~ & x \in X \\ & \sup_{P(\xi) \in D^\prime} \Pr[C(x,\xi)] \ge 1-\alpha \end{aligned} \end{equation} where $Q(x,\xi)$ is the optimal value function of the second-stage problem \begin{equation*} \begin{aligned} Q(x,\xi) = \min ~~ & q^T y \\ \mbox{s.t.}~~ & B(\xi) y \le b(\xi) - A(\xi) x \end{aligned} \end{equation*} which is an LP for a fixed first-stage decision $x$ and a given parameter $\xi$; \begin{equation*} D_{KL}=\{P(\xi)~|~ D^{KL}_\phi(f \| f_0) \le d_{KL}(\alpha^*),~ f = {\rm d} P/ {\rm d} \xi\} \end{equation*} is the KL-divergence based ambiguity set, where $d_{KL}(\alpha^*)$ is a threshold which determines the size of the ambiguity set and $\alpha^*$ reflects the confidence level: the real distribution is contained in $D_{KL}$ with a probability no less than $\alpha^*$. For discrete distributions, the KL-divergence measure takes the form \begin{equation*} D^{KL}_\phi(f \parallel f_0) = \sum_s \rho_s \log \dfrac{\rho_s}{\rho^0_s} \end{equation*} In either case, there are infinitely many PDFs satisfying the inequality in the ambiguity set $D_{KL}$ when $d_{KL} > 0$. Otherwise, when $d_{KL}=0$, the ambiguity set $D_{KL}$ becomes a singleton, and the model (\ref{eq:App-03-RCCP-MEdy-1}) degenerates to a traditional SO problem. In practice, the user can specify the value of $d_{KL}$ according to the attitude towards risk. Nevertheless, a proper value of $d_{KL}$ can be obtained from probability theory. Intuitively, the more historical data we possess, the closer the reference PDF $f_0$ is to the true one, and the smaller $d_{KL}$ should be set. Suppose we have $M$ samples in total, with equal probabilities, sorted into $N$ bins, and $M_1$, $M_2$, $\cdots$, $M_N$ samples fall into the respective bins; then the discrete reference PDF for the histogram is $\{\pi_1, \cdots,\pi_N\}$, where $\pi_i = M_i/M$, $i = 1, \cdots, N$. Let $\pi^r_1$, $\cdots$, $\pi^r_N$ be the real probabilities of the bins; according to the discussions in \cite{Am-Set-Phi-Divergence}, the random variable $2M \sum_{i=1}^N \pi^r_i \log (\pi^r_i/\pi_i)$ follows the $\chi^2$ distribution with $N-1$ degrees of freedom. Therefore, the confidence threshold can be calculated from \begin{equation*} d_{KL}(\alpha^*) = \dfrac{1}{2M}\chi_{N-1,\alpha^*}^2 \end{equation*} where $\chi_{N-1,\alpha^*}^2$ stands for the $\alpha^*$ upper quantile of the $\chi^2$ distribution with $N-1$ degrees of freedom. For other divergence based ambiguity sets, please see more discussions in \cite{Am-Set-Phi-Divergence}.
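A one-line evaluation of this threshold (a sketch using scipy; the quantile convention below follows the confidence-level interpretation above, i.e., the true distribution is contained in $D_{KL}$ with probability no less than $\alpha^*$):
\begin{verbatim}
from scipy.stats import chi2

def kl_radius(M, N, alpha_star):
    # d_KL(alpha*) = chi^2_{N-1, alpha*} / (2M): quantile of the chi-square
    # distribution with N-1 degrees of freedom at confidence level alpha*
    return chi2.ppf(alpha_star, df=N - 1) / (2.0 * M)

# e.g. M = 1000 samples, N = 20 bins, 95% confidence:
# kl_radius(1000, 20, 0.95) is approximately 0.015
\end{verbatim}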
Robust chance constraints in (\ref{eq:App-03-RCCP-MEdy-1}) are tackled using the method presented previously, and the objective function is treated independently. The ambiguity sets in the objective function and in the chance constraints could be the same or different, and are thus distinguished by $D_{KL}$ and $D^\prime$. Sometimes, it is imperative to jointly optimize the costs of both stages. For example, in the facility planning problem, the first stage represents the investment decision and the second stage describes the operation management. If we only optimize the first-stage cost, then facilities with lower investment costs will be preferred, but they may suffer from higher operating costs and may not be the optimal choice from a long-term perspective. To solve (\ref{eq:App-03-RCCP-MEdy-1}), we need a tractable reformulation for the worst-case expectation problem under the KL-divergence ambiguity set \begin{equation} \label{eq:App-03-MEdy-KL-Div} \max_{P(\xi) \in D_{KL}} \mathbb E_P [Q(x,\xi)] \end{equation} for fixed $x$. It is proved in \cite{App03-Sect4-RCCP-mEdy,Am-Set-Phi-Divergence} that problem (\ref{eq:App-03-MEdy-KL-Div}) is equivalent to \begin{equation} \label{eq:App-03-MEdy-KL-Div-Dual} \min_{\alpha \ge 0} ~ \alpha \log \mathbb E_{P_0} [{\rm e}^{Q(x,\xi)/\alpha}] + \alpha d_{KL} \end{equation} where $\alpha$ is the dual variable. Formulation (\ref{eq:App-03-MEdy-KL-Div-Dual}) has two advantages: first, the expectation is evaluated with respect to the reference distribution $P_0$, which is much easier than optimizing over the ambiguity set $D_{KL}$; second, the maximum operator switches to a minimum operator, which is consistent with the objective function of the decision-making problem. We will use SAA to express the expectation, giving rise to a discrete version of problem (\ref{eq:App-03-MEdy-KL-Div-Dual}). In fact, in the discrete case, (\ref{eq:App-03-MEdy-KL-Div-Dual}) can be derived from (\ref{eq:App-03-MEdy-KL-Div}) using Lagrange duality. The following interpretation is given in \cite{App03-Sect4-KL-Div-UC}. Denote by $\xi_1,\cdots,\xi_s$ the representative scenarios of the discrete distribution; their corresponding probabilities in the reference PDF and the actual PDF are given by $P_0 = \{p^0_1,\cdots,p^0_s\}$ and $P = \{p_1,\cdots,p_s\}$, respectively. Then problem (\ref{eq:App-03-MEdy-KL-Div}) can be written in the discrete form \begin{equation} \label{eq:App-03-MEdy-KL-Div-Discrete} \begin{aligned} \max_p~~ & \sum_{i=1}^s p_i Q(x,\xi_i) \\ \mbox{s.t.} ~~ & \sum_{i=1}^s p_i \log \left( \dfrac{p_i}{p^0_i} \right) \le d_{KL} \\ & p \ge 0,~ 1^T p =1 \end{aligned} \end{equation} where the vector $p=[p_1,\cdots,p_s]^T$ is the decision variable. According to Lagrange duality theory, the objective function of the dual problem is \begin{equation} \label{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-1} g(\alpha,\mu) = \alpha d_{KL} + \mu + \sum_{i=1}^s \max_{p_i \ge 0} p_i \left( Q(x,\xi_i) - \mu - \alpha \log \left( \dfrac{p_i}{p^0_i} \right) \right) \end{equation} where $\mu$ is the dual variable associated with the equality constraint $1^T p =1$, and $\alpha$ is the dual variable associated with the KL-divergence inequality.
Substituting $t_i = p_i / p^0_i$ into (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-1}) and eliminating $p_i$, we get \begin{equation*} g(\alpha,\mu) = \alpha d_{KL} + \mu + \sum_{i=1}^s \max_{t_i \ge 0}~ p^0_i t_i \left( Q(x,\xi_i) - \mu - \alpha \log t_i \right) \end{equation*} Calculating the first-order derivative of $t_i ( Q(x,\xi_i) - \mu - \alpha \log t_i )$ with respect to $t_i$, the optimal solution is \begin{equation*} t_i = {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}} > 0 \end{equation*} and the maximum is \begin{equation*} \alpha {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}} \end{equation*} As a result, the dual objective reduces to \begin{equation} \label{eq:App-03-MEdy-KL-Div-Discrete-Dual-Obj-2} g(\alpha,\mu) = \alpha d_{KL} + \mu + \alpha \sum_{i=1}^s p^0_i {\rm e} ^{\frac{ Q(x,\xi_i) - \mu - \alpha}{\alpha}} \end{equation} and the dual problem of (\ref{eq:App-03-MEdy-KL-Div-Discrete}) can be rewritten as \begin{equation} \label{eq:App-03-MEdy-KL-Div-Discrete-Dual-1} \min_{\alpha \ge 0,\mu}~ g(\alpha,\mu) \end{equation} The optimal solution $\mu^*$ must satisfy $\partial g / \partial \mu = 0$, yielding \begin{equation*} \sum_{i=1}^s p^0_i {\rm e} ^{\frac{ Q(x,\xi_i) - \mu^* - \alpha}{\alpha}}=1 \end{equation*} or \begin{equation*} \mu^* = \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{Q(x,\xi_i)/\alpha} - \alpha \end{equation*} Substituting the above relations into $g(\alpha,\mu)$ results in the following dual problem \begin{equation} \label{eq:App-03-MEdy-KL-Div-Discrete-Dual-2} \min_{\alpha \ge 0}~ \left\{ \alpha d_{KL} + \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{Q(x,\xi_i)/\alpha} \right\} \end{equation} which is a discrete form of (\ref{eq:App-03-MEdy-KL-Div-Dual}).
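For a fixed $x$, the discrete dual (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-2}) is a one-dimensional convex problem and can be evaluated directly. A minimal sketch (the bracketing interval for $\alpha$ is an illustrative assumption; the logsumexp routine is used to avoid numerical overflow):
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize_scalar

def worst_case_expectation(Q, p0, d_kl):
    # min_{alpha>0} alpha*d_KL + alpha*log sum_i p0_i exp(Q_i/alpha),
    # cf. eq. (App-03-MEdy-KL-Div-Discrete-Dual-2), for fixed x
    Q, p0 = np.asarray(Q, float), np.asarray(p0, float)
    g = lambda a: a * d_kl + a * logsumexp(Q / a, b=p0)
    res = minimize_scalar(g, bounds=(1e-6, 1e6), method="bounded")
    return res.fun
\end{verbatim}
As $\alpha \to 0^+$ the objective tends to $\max_i Q(x,\xi_i)$, and for $d_{KL}>0$ it grows without bound as $\alpha \to \infty$, so the bounded search is well posed.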
In (\ref{eq:App-03-RCCP-MEdy-1}), replacing the inner problem (\ref{eq:App-03-MEdy-KL-Div}) with its Lagrangian dual form (\ref{eq:App-03-MEdy-KL-Div-Discrete-Dual-2}), we obtain the equivalent mathematical program \begin{equation} \label{eq:App-03-RCCP-MEdy-2} \begin{aligned} \min~ & \left\{c^T x + \alpha d_{KL} + \alpha \log \sum_{i=1}^s p^0_i ~{\rm e}^{\theta_i/\alpha} \right\} \\ \mbox{s.t.}~~ & x \in X,~ \alpha \ge 0,~ \theta_i = q^T y_i,~ \forall i \\ & A(\xi_i) x + B(\xi_i) y_i \le b(\xi_i),~ \forall i \\ & \mbox{Cons-RCC} \end{aligned} \end{equation} where Cons-RCC stands for the LP based formulation of the robust chance constraints, so the constraints of problem (\ref{eq:App-03-RCCP-MEdy-2}) are all linear, and the only nonlinearity rests in the last term of the objective function. In what follows, we show that it is actually a convex function of $\theta_i$ and $\alpha$. In the first step, we claim that the following function is convex (\cite{CVX-Book-Boyd}, page 87, Example 3.14) \begin{equation*} h_1(\theta) = \log \left( \sum_{i=1}^s {\rm e}^{\theta_i} \right) \end{equation*} Since composition with an affine mapping preserves convexity (\cite{CVX-Book-Boyd}, Sect. 3.2.2), the new function \begin{equation*} h_2(\theta) = h_1 (A \theta + b) \end{equation*} remains convex under the linear mapping $\theta \to A \theta + b$. Letting $A$ be the identity matrix and \begin{equation*} b = \begin{bmatrix} \log p^0_1 \\ \vdots \\ \log p^0_s \end{bmatrix} \end{equation*} we find that \begin{equation*} h_2(\theta) = \log \left( \sum_{i=1}^s p^0_i {\rm e}^{\theta_i} \right) \end{equation*} is a convex function; finally, the function \begin{equation*} h_3(\alpha,\theta) = \alpha h_2(\theta/\alpha) \end{equation*} is the perspective of $h_2(\theta)$ and is therefore also convex (\cite{CVX-Book-Boyd}, page 89, Sect. 3.2.6). In view of this convex structure, (\ref{eq:App-03-RCCP-MEdy-2}) is essentially a convex program, and a local minimum is also the global one. However, according to our experiments, general-purpose NLP solvers still have difficulty in solving (\ref{eq:App-03-RCCP-MEdy-2}). Therefore, we employ the outer approximation method \cite{App03-Sect4-OA-1,App03-Sect4-OA-2}. The idea is to solve the epigraph form of (\ref{eq:App-03-RCCP-MEdy-2}), in which the nonlinearity is moved into the constraints, and then linearize the feasible region with an increasing number of cutting planes generated iteratively, until a certain convergence criterion is met. In this way, the hard problem (\ref{eq:App-03-RCCP-MEdy-2}) can be solved via a sequence of LPs. The outer approximation method is outlined in Algorithm \ref{Ag:App-03-DRO-Outer-Approximation}. Because (\ref{eq:App-03-RCCP-MEdy-2}) is a convex program, the cutting planes do not remove any feasible point, and Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} finds the global optimal solution in a finite number of steps, regardless of the initial point. Of course, the number of iterations is affected by the quality of the initial guess. A proper initial point can be obtained by solving a traditional SO problem without considering distributional uncertainty. \begin{algorithm}[!htp] \normalsize \caption{\bf Outer Approximation} \begin{algorithmic}[1] \STATE Choose an initial point $(\theta^1,\alpha^1)$ and a convergence tolerance $\varepsilon > 0$; the initial objective value is $R^1=0$, and the iteration index is $k=1$. \STATE Solve the following master problem which is an LP \begin{equation} \label{eq:App-03-RCCP-MEdy-OA-Lin} \begin{aligned} \min_{\alpha,\theta,\gamma,x}~~ & c^T x + \alpha d_{KL} + \gamma \\ \mbox{s.t.}~~ & h_3(\alpha^j,\theta^j) + \nabla h_3(\alpha^j,\theta^j) \begin{bmatrix} \alpha - \alpha^j \\ \theta - \theta^j \end{bmatrix} \le \gamma,~ j=1,\cdots,k \\ & x \in X,~ \alpha \ge 0,~ \theta_i = q^T y_i,~ \forall i \\ & A(\xi_i) x + B(\xi_i) y_i \le b(\xi_i),~ \forall i \\ & \mbox{Cons-RCC} \end{aligned} \end{equation} The optimal value is $R^{k+1}$, and the optimal solution is $(x^{k+1},\theta^{k+1},\alpha^{k+1})$. \STATE If $R^{k+1} - R^k \le \varepsilon$, terminate and report the optimal solution $(x^{k+1},\theta^{k+1},\alpha^{k+1})$; otherwise, update $k \leftarrow k+1$, calculate the gradient $\nabla h_3$ at the obtained solution $(\alpha^k,\theta^k)$, add the following cut to problem (\ref{eq:App-03-RCCP-MEdy-OA-Lin}), and go to step 2. \begin{equation} \label{eq:App-03-RCCP-MEdy-OA-Cut} h_3(\alpha^k,\theta^k) + \nabla h_3(\alpha^k,\theta^k) \begin{bmatrix} \alpha - \alpha^k \\ \theta - \theta^k \end{bmatrix} \le \gamma \end{equation} \end{algorithmic} \label{Ag:App-03-DRO-Outer-Approximation} \end{algorithm} \begin{figure}[!htp] \centering \includegraphics[scale = 0.60]{Fig-App-03-02} \caption{Illustration of the outer approximation algorithm.} \label{fig:Fig-App03-02} \end{figure} The idea behind Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} is illustrated in Fig.~\ref{fig:Fig-App03-02}. The original objective function is nonlinear but convex. In the epigraph form (\ref{eq:App-03-RCCP-MEdy-OA-Lin}), a set of linear cuts (\ref{eq:App-03-RCCP-MEdy-OA-Cut}) is generated dynamically according to the optimal solution found in step 2; the convex region can thus be approximated with arbitrarily high accuracy around the optimal solution.
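The cut (\ref{eq:App-03-RCCP-MEdy-OA-Cut}) only requires the value and gradient of $h_3$, which are available in closed form. A minimal sketch (the function name and data layout are illustrative assumptions):
\begin{verbatim}
import numpy as np

def h3_and_grad(alpha, theta, p0):
    # h3(alpha, theta) = alpha * log sum_i p0_i * exp(theta_i / alpha)
    # and its gradient, used to build cut (App-03-RCCP-MEdy-OA-Cut)
    theta, p0 = np.asarray(theta, float), np.asarray(p0, float)
    e = p0 * np.exp(theta / alpha)
    S = e.sum()
    w = e / S                         # softmax-type weights
    h = alpha * np.log(S)
    grad_theta = w                    # d h3 / d theta_i = w_i
    grad_alpha = np.log(S) - (w @ theta) / alpha
    return h, grad_alpha, grad_theta
\end{verbatim}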
The convergence of the very basic version of the outer approximation method has been analyzed in \cite{App03-Sect4-OA-3,App03-Sect4-OA-4}. In fact, Algorithm \ref{Ag:App-03-DRO-Outer-Approximation} is very efficient for solving problem (\ref{eq:App-03-RCCP-MEdy-2}), because problem (\ref{eq:App-03-RCCP-MEdy-OA-Lin}) is an LP, the objective function is smooth, and the algorithm often converges within a small number of iterations. \subsection{Stochastic Program with Discrete Distributions} \label{App-C-Sect04-02} In the ARO approach discussed in Appendix \ref{App-C-Sect02}, the uncertain parameter is assumed to reside in the so-called uncertainty set. Every element in this set is treated equally, so the worst-case scenario must be one of the extreme points of the uncertainty set, which is the main source of conservatism in the traditional RO paradigm. In contrast, in the classic two-stage SO, the uncertain parameter $\xi$ is modeled through a certain probability distribution $P$, and the expected cost is minimized, giving rise to \begin{equation} \label{eq:App-03-TTSP} \begin{aligned} \min ~~ & c^T x + \mathbb E_P [Q(x,\xi)] \\ \mbox{s.t.}~~ & x \in X \end{aligned} \end{equation} where the bounded polyhedron $X$ is the feasible region of the first-stage decision $x$, $\xi$ is the uncertain parameter, and $Q(x,\xi)$ is the optimal value function of the second-stage problem, which is an LP for fixed $x$ and $\xi$ \begin{equation} \label{eq:App-03-TTSP-Stage2} \begin{aligned} Q(x,\xi) = \min ~~ & q^T y \\ \mbox{s.t.}~~ & B(\xi) y \le b(\xi) - A(\xi) x \end{aligned} \end{equation} where $q$ is the vector of cost coefficients, $A(\xi)$, $B(\xi)$, and $b(\xi)$ are constant matrices affected by the uncertain data, and $y(\xi)$ is the second-stage decision, which is the reaction to the realization of uncertainty. Since the true PDF of $\xi$ is difficult to obtain in some circumstances, in this section we do not require perfect knowledge of the probability distribution $\mathbb P$ of the random variable $\xi$; instead, we let it be ambiguous around a reference distribution and reside in an ambiguity set $D$, which can be constructed from limited historical data. We take all possible distributions in the ambiguity set into consideration, so as to minimize the expected cost under the worst-case distribution, resulting in the following model \begin{equation} \label{eq:App-03-RTSP} \begin{aligned} \min ~~ & c^T x + \max_{P(\xi) \in D} \mathbb E_P [Q(x,\xi)] \\ \mbox{s.t.}~~ & x \in X \end{aligned} \end{equation} Compared with (\ref{eq:App-03-RCCP-Model}), constraint violation is not allowed in problem (\ref{eq:App-03-RTSP}), and the second-stage expected cost under the worst-case distribution is considered. It is a particular case of (\ref{eq:App-03-RCCP-MEdy-1}) without chance constraints. Specifically, we will utilize discrete distributions in this section. This formulation enjoys several benefits. One is the explicit exposition of the density function. In previous sections, the candidate in the moment or divergence based ambiguity sets is not given in an analytical form, and vanishes during the dual transformation. As a result, we do not have clear knowledge of the worst-case distribution. For discrete distributions, the density function is a vector of real entries associated with the probability of each representative scenario. We can easily construct the ambiguity set and optimize an expectation over discrete distributions. The other benefit is computational, as will be seen later.
The main results in this section come from \cite{App03-Sect4-DRSO-Zhao,App03-Sect4-DRSO-Ding}. \vspace{12pt} {\noindent \bf 1. Modeling the confidence set} For a given set of historical data with $M$ elements, which can be regarded as $M$ samples of the random variable, we can draw a histogram with $K$ bins as an estimation of the reference distribution. Suppose that the numbers of samples falling into the bins are $M_1,M_2,\cdots,M_K$, where $\sum^K_{i=1} M_i=M$; then the reference (empirical) distribution of the uncertain data is given by $\mathbb P_0 = [p^0_1,\cdots,p^0_K]$, where $p^0_i = M_i/M$, $i=1,\cdots,K$. Since the data may not be enough to fit a PDF with high accuracy, the actual distribution should be close to, but might be different from, its reference. It is proposed in \cite{App03-Sect4-DRSO-Zhao} to construct the ambiguity set using statistical inference corresponding to a given tolerance. Two types of ambiguity sets are suggested, based on the $L_1$ norm and the $L_{\infty}$ norm \begin{equation} \label{eq:App-03-RTSP-D1} D_1 = \left\{ \mathbb P \in \mathbb R^K_+ \middle| \| \mathbb P - \mathbb P_0 \|_1 \le \theta \right\} = \left\{ p \in {\rm \Delta}_K \middle| \sum_{i=1}^K \left| p_i - p^0_i \right| \le \theta \right\} \end{equation} \begin{equation} \label{eq:App-03-RTSP-D2} D_\infty = \left\{ \mathbb P \in \mathbb R^K_+ \middle| \| \mathbb P - \mathbb P_0 \|_\infty \le \theta \right\} = \left\{ p \in {\rm \Delta}_K \middle| \max_{1\le i \le K} \left| p_i - p^0_i \right| \le \theta \right\} \end{equation} where ${\rm \Delta}_K = \{p \in [0,1]^K:{\bf 1}^T p = 1\}$. These two ambiguity sets can be easily expressed by polyhedral sets as follows \begin{equation} \label{eq:App-03-RTSP-D1-Poly} D_1 = \left\{ p \in {\rm \Delta}_K \middle| \begin{lgathered} \exists t \in \mathbb R^K_+ : \sum\nolimits_{k=1}^K t_k \le \theta \\ t_k \ge p_k - p^0_k,~ k = 1,\cdots,K \\ t_k \ge p^0_k - p_k,~ k = 1,\cdots,K \end{lgathered} \right\} \end{equation} \begin{equation} \label{eq:App-03-RTSP-D2-Poly} D_\infty = \left\{ p \in {\rm \Delta}_K \middle| \begin{lgathered} \theta \ge p_k - p^0_k,~ k=1,\cdots, K \\ \theta \ge p^0_k - p_k,~ k=1,\cdots, K \end{lgathered} \right\} \end{equation} where $p=[p_1,\cdots,p_K]^T$ is the variable in the ambiguity set; $t=[t_1,\cdots,t_K]^T$ is the lifting (auxiliary) variable in $D_1$; parameter $\theta$ reflects the decision maker's confidence level regarding the distance between the reference distribution and the true one. Clearly, the more historical data we utilize, the smaller this distance will be.
Provided with $M$ observations and $K$ bins, the quantitative relation between the value of $\theta$ and the number of samples is given by \cite{App03-Sect4-DRSO-Zhao} \begin{equation} \label{eq:App-03-RTSP-D1-Conf} \Pr \{ \|\mathbb P - \mathbb P_0\|_1 \le \theta \} \ge 1-2K {\rm e}^{-2M \theta/K} \end{equation} \begin{equation} \label{eq:App-03-RTSP-D2-Conf} \Pr \{ \|\mathbb P - \mathbb P_0\|_\infty \le \theta \} \ge 1-2K {\rm e}^{-2M \theta} \end{equation} According to (\ref{eq:App-03-RTSP-D1-Conf}) and (\ref{eq:App-03-RTSP-D2-Conf}), if we want to maintain (\ref{eq:App-03-RTSP-D1}) and (\ref{eq:App-03-RTSP-D2}) with a confidence level of $\beta$, parameter $\theta$ should be selected as \begin{equation} \label{eq:App-03-RTSP-D1-Theta} \mbox{For } D_1:~ \theta_1 = \dfrac{K}{2M} \ln \dfrac{2K}{1-\beta} \end{equation} \begin{equation} \label{eq:App-03-RTSP-D2-Theta} \mbox{For } D_\infty:~ \theta_\infty = \dfrac{1}{2M} \ln \dfrac{2K}{1-\beta} \end{equation} As the number of samples approaches infinity, $\theta_1$ and $\theta_\infty$ decrease to 0, and the reference distribution converges to the true one. Accordingly, problem (\ref{eq:App-03-RTSP}) becomes a traditional two-stage SO.
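For instance, the two radii can be computed as follows (a direct transcription of (\ref{eq:App-03-RTSP-D1-Theta}) and (\ref{eq:App-03-RTSP-D2-Theta}); the function name is an illustrative assumption):
\begin{verbatim}
import numpy as np

def ambiguity_radii(M, K, beta):
    # theta_1 and theta_inf from (App-03-RTSP-D1-Theta) and
    # (App-03-RTSP-D2-Theta) for confidence level beta
    t = np.log(2.0 * K / (1.0 - beta)) / (2.0 * M)
    return K * t, t        # (theta_1, theta_inf)
\end{verbatim}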
\vspace{12pt} {\noindent \bf 2. CCG based decomposition algorithm} Let $\xi^k$ denote the representative scenario of the $k$-th bin, $p_k$ the corresponding probability, and let $\mathbb P = [p_1,\cdots,p_K]$ belong to the ambiguity set in the form of (\ref{eq:App-03-RTSP-D1}) or (\ref{eq:App-03-RTSP-D2}); then problem (\ref{eq:App-03-RTSP}) can be written as \begin{equation} \label{eq:App-03-RTSP-Discrete} \begin{aligned} \min ~~ & c^T x + \max_{\mathbb P} \sum_{k=1}^K p_k \min q^T y^k \\ \mbox{s.t.}~~ & x \in X,~ \mathbb P \in D \\ & A(\xi^k) x + B(\xi^k) y^k \le b(\xi^k), \forall k \end{aligned} \end{equation} Problem (\ref{eq:App-03-RTSP-Discrete}) has a min-max-min structure and can be solved by the Benders decomposition method \cite{App03-Sect4-DRSO-Zhao} or the CCG method \cite{App03-Sect4-DRSO-Ding}. The latter will be introduced in the rest of this section. It decomposes problem (\ref{eq:App-03-RTSP-Discrete}) into a lower-bounding master problem and an upper-bounding subproblem, which are solved iteratively until the gap between the upper and lower bounds becomes smaller than a convergence tolerance. The basic idea has been explained in Appendix \ref{App-C-Sect02-03}. As shown in \cite{App03-Sect4-DRSO-Ding}, the second-stage problem can belong to a broader class of convex programs, such as SOCPs. \vspace{12pt} {\noindent \bf 1) Subproblem} For a given first-stage decision $x$, the subproblem aims to find the worst-case distribution, which comes down to the max-min program shown below \begin{equation} \label{eq:App-03-RTSP-SP-1} \max_{\mathbb P \in D} \sum_{k=1}^K p_k \min_{y^k \in Y_k(x)} q^T y^k \end{equation} where \begin{equation} \label{eq:App-03-RTSP-SP-Yk} Y_k = \{ y^k ~|~ B(\xi^k) y^k \le b(\xi^k) - A(\xi^k) x\},~ \forall k \end{equation} Problem (\ref{eq:App-03-RTSP-SP-1}) has some unique features that facilitate the computation: (1) the feasible sets $Y_k$ are decoupled; (2) the probability variables $p_k$ do not affect the feasible sets $Y_k$; (3) the ambiguity set $D$ and the feasible sets $Y_k$ are decoupled. Although (\ref{eq:App-03-RTSP-SP-1}) seems nonlinear due to the product of the scalar variable $p_k$ and the vector variable $y^k$ in the objective function, as we will see in the following discussion, it is equivalent to an LP or can be decomposed into several LPs, and thus can be solved efficiently. \vspace{6pt} {\emph{An equivalent LP} } Because $p_k \ge 0$, we can exchange the summation and minimization operators, and problem (\ref{eq:App-03-RTSP-SP-1}) can be written as \begin{equation} \label{eq:App-03-RTSP-SP-2} \max_{\mathbb P \in D} \min_{y^k \in Y_k(x)} \sum_{k=1}^K p_k q^T y^k \end{equation} For the inner minimization problem, $p_k$ is constant, so it is an LP, whose dual problem is \begin{equation*} \begin{aligned} \max_{\mu^k}~~ & \sum_{k=1}^K \left(b(\xi^k)-A(\xi^k) x \right)^T \mu^k \\ \mbox{s.t.}~~ & \mu^k \le 0,~ B^T (\xi^k) \mu^k = p_k q,~ \forall k \end{aligned} \end{equation*} where $\mu^k$ are dual variables. Substituting it into (\ref{eq:App-03-RTSP-SP-2}) and combining the two maximization operators, we obtain \begin{equation} \label{eq:App-03-RTSP-SP-3} \begin{aligned} \max_{p_k,\mu^k}~~ & \sum_{k=1}^K \left(b(\xi^k)-A(\xi^k) x \right)^T \mu^k\\ \mbox{s.t.}~~ & \mu^k \le 0,~ B^T (\xi^k) \mu^k = p_k q,~ \forall k \\ & (p_1,\cdots,p_K) \in D \end{aligned} \end{equation} Since $D$ is polyhedral, problem (\ref{eq:App-03-RTSP-SP-3}) is in fact an LP. The optimal solution offers the worst-case distribution $[p^*_1,\cdots,p^*_K]$, which will be used to generate cuts in the master problem. The recourse actions $y^k$ in each scenario are provided by the optimal solution of the master problem. Although LP is acknowledged as the most tractable class of mathematical programs, when $K$ is extremely large it is still challenging to solve (\ref{eq:App-03-RTSP-SP-3}) or even to store it in a computer. Nevertheless, the separability of the feasible regions allows solving (\ref{eq:App-03-RTSP-SP-1}) in a decomposed manner. \vspace{6pt} {\emph{A decomposition method} } As mentioned above, $p_k$ has no impact on $Y_k$, which are decoupled; moreover, because $p_k$ is a scalar in the objective function of each inner minimization problem, it does not affect the optimal solution $y^k$. In view of this convenience, problem (\ref{eq:App-03-RTSP-SP-1}) can be decomposed into $K+1$ smaller LPs, of which the first $K$ can be solved in parallel. To this end, for each $\xi^k$, solve the following LP: \begin{equation*} h^*_k = \min_{y^k \in Y_k(x)} q^T y^k,~ k=1,\cdots,K \end{equation*} After obtaining the optimal values $(h^*_1,\cdots,h^*_K)$ of the $K$ LPs, we can retrieve the worst-case distribution by solving one additional LP \begin{equation*} \max_{\mathbb P \in D}~~ \sum_{k=1}^K p_k h^*_k \end{equation*} In fact, if the second-stage problem is a conic program (in \cite{App03-Sect4-DRSO-Ding}, it is an SOCP), the above discussions remain valid as long as strong duality holds. It is interesting to notice that in the ARO problem in Sect. \ref{App-C-Sect02-03}, the subproblem comes down to a non-convex bilinear program after dualizing the inner minimization problem, and is generally NP-hard; in this section, the subproblem actually gives rise to LPs, whose complexity is polynomial in the problem size. The reason for this difference is that the uncertain parameter in (\ref{eq:App-03-RTSP-Discrete}) is expressed by sampled scenarios and thus is constant; the distributional uncertainty appearing in the objective function does not influence the constraints of the second-stage problem, and thus the linear max-min problem (\ref{eq:App-03-RTSP-SP-2}) reduces to an LP after a dual transformation.
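A minimal sketch of the second step of this decomposition, i.e., retrieving the worst-case distribution over $D_1$ in (\ref{eq:App-03-RTSP-D1-Poly}) once the scenario-wise optimal values $h^*_k$ are available (the dense data layout and function name are illustrative assumptions):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def worst_case_distribution(h, p0, theta):
    # max_p sum_k p_k h_k over D_1 in (App-03-RTSP-D1-Poly); the h_k are
    # the optimal values of the K scenario LPs, computable in parallel.
    # Decision variables are stacked as [p; t].
    K = len(h)
    I = np.eye(K)
    c = np.r_[-np.asarray(h, float), np.zeros(K)]    # maximize h^T p
    A_ub = np.block([[I, -I],                        #  p - t <=  p0
                     [-I, -I],                       # -p - t <= -p0
                     [np.zeros((1, K)), np.ones((1, K))]])  # sum t <= theta
    b_ub = np.r_[p0, -np.asarray(p0, float), theta]
    A_eq = np.r_[np.ones(K), np.zeros(K)][None, :]   # sum_k p_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (2 * K), method="highs")
    return res.x[:K]
\end{verbatim}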
\vspace{12pt} {\noindent \bf 2) The CCG algorithm} The motivation of the CCG algorithm has been thoroughly discussed in Appendix \ref{App-C-Sect02-03}. In this section, for a fixed $x$, the optimal value of subproblem (\ref{eq:App-03-RTSP-SP-1}) is denoted by $Q(x)$, and $c^T x + Q(x)$ gives an upper bound on the optimal value of (\ref{eq:App-03-RTSP-Discrete}), because the first-stage variable is fixed rather than optimized. Then a set of new variables and optimality cuts are generated and added to the master problem. If the subproblem is infeasible in some scenario, a set of feasibility cuts is added to the master problem instead. The master problem starts from a subset of $D$, which is updated by including the worst-case distributions identified by the subproblem. Hence, the master problem is a relaxed version of the original problem (\ref{eq:App-03-RTSP-Discrete}) and provides a lower bound on its optimal value. The CCG procedure for problem (\ref{eq:App-03-RTSP-Discrete}) is summarized in Algorithm \ref{Ag:App-03-RTSO-CCG}. This algorithm terminates in a finite number of iterations, as the ambiguity set $D$ has finitely many extreme points. \begin{algorithm}[!htp] \normalsize \caption{\bf CCG algorithm for problem (\ref{eq:App-03-RTSP-Discrete})} \begin{algorithmic}[1] \STATE Choose a convergence tolerance $\varepsilon>0$ and an initial probability vector $p^0 \in D$; set LB $=-\infty$, UB $=+\infty$, and the iteration index $s=0$. \STATE Solve the master problem \begin{equation} \label{eq:App-03-RTSO-CCG-MP} \begin{aligned} \min_{x,\eta,y^{k,m}} ~~ & c^T x + \eta \\ \mbox{s.t.} ~~ & x \in X,~~ \eta \ge \sum_{k=1}^K p^m_k q^T y^{k,m},~ m \in \mbox{Opt}\{ 0,1,\cdots,s\} \\ & A(\xi^k) x + B(\xi^k) y^{k,m} \le b(\xi^k),~ m \in \mbox{Opt}\{ 0,1,\cdots,s\},~ \forall k \\ & A(\xi^k) x + B(\xi^k) y^{k,m} \le b(\xi^k),~ m \in \mbox{Fea}\{ 0,1,\cdots,s\},~k \in I(s) \end{aligned} \end{equation} where Opt$\{*\}/$Fea$\{*\}$ selects the iterations in which an optimality (feasibility) cut is generated, and $I(s)$ collects the indices of scenarios in which the second-stage problem is infeasible in iteration $s$. The optimal solution is $(x^*,\eta^*)$; update LB $=c^T x^* + \eta^*$. \STATE Solve subproblem (\ref{eq:App-03-RTSP-SP-1}) with the current $x^*$. If there exists some $\xi^k$ such that $Y_k(x^*)=\emptyset$, then generate the new variable $y^{k,s}$, update $I(s)$, and add the following feasibility cut to the master problem \begin{equation} \label{eq:App-03-RTSO-CCG-Fea-Cut} A(\xi^k) x + B(\xi^k) y^{k,s} \le b(\xi^k),~ k \in I(s) \end{equation} Otherwise, if $Y_k(x^*) \ne \emptyset, \forall k$, subproblem (\ref{eq:App-03-RTSP-SP-1}) can be solved. The optimal solution is $p^{s+1}$, and the optimal value is $Q(x^*)$; update UB $=$ min$\{\mbox{UB}, c^T x^* + Q(x^*)\}$, create new variables $(y^{1,s+1},\cdots,y^{k,s+1})$, and add the following optimality cut to the master problem \begin{equation} \label{eq:App-03-RTSO-CCG-Opt-Cut} \begin{gathered} \eta \ge \sum_{k=1}^K p^{s+1}_k q^T y^{k,s+1} \\ A(\xi^k) x + B(\xi^k) y^{k,s+1} \le b(\xi^k),~ \forall k \end{gathered} \end{equation} \STATE If UB$-$LB$< \varepsilon$, terminate and report the optimal first-stage solution $x^*$ as well as the worst-case distribution $p^{s+1}$; otherwise, update $s \leftarrow s+1$, and go to step 2. \end{algorithmic} \label{Ag:App-03-RTSO-CCG} \end{algorithm} \subsection{Formulations based on Wasserstein Metric} Up to now, formulations based on the KL-divergence ambiguity set have received plenty of attention, because they enjoy some convenience in deriving the robust counterpart.
For example, it is already known from Sect. \ref{App-C-Sect04-01} that robust chance constraints under the KL-divergence ambiguity set reduce to traditional chance constraints under the empirical distribution with a rescaled confidence level, and the worst-case expectation problem under the KL-divergence ambiguity set is equivalent to a convex program. However, according to its definition, the KL-divergence ambiguity set may encounter theoretical difficulties in representing confidence sets for continuous distributions \cite{Am-Set-Wasserstein-1}, because the empirical distribution calibrated from finite data must be discrete, and any distribution in the KL-divergence ambiguity set must assign positive probability mass to each sampled scenario. As a continuous distribution has a density function, it must reside outside the KL-divergence ambiguity set regardless of the sampled scenarios. In contrast, Wasserstein metric based ambiguity sets contain both discrete and continuous distributions. They offer an explicit confidence level for the unknown distribution belonging to the set, and give the decision maker more informative guidance in controlling the conservativeness of the model. This section introduces state-of-the-art results in robust SO with Wasserstein metric based ambiguity sets. The most critical problems are the robust counterparts of the worst-case expectation problem and of robust chance constraints, which will be discussed respectively. They can be embedded in single- and two-stage robust SO problems without substantial barriers. The materials in this section mainly come from \cite{Am-Set-Wasserstein-1}. \vspace{12pt} {\noindent \bf 1. Wasserstein metric based ambiguity set} Let $\rm \Xi$ be the support set of the multi-dimensional random variable $\xi \in \mathbb R^m$, and let $M({\rm \Xi})$ represent all probability distributions $\mathbb Q$ supported on $\rm \Xi$ with $\mathbb E_{\mathbb Q}[\|\xi\|]=\int_{\rm \Xi} \|\xi\| \mathbb Q({\rm d}\xi) < \infty$, where $\|\cdot\|$ stands for an arbitrary norm on $\mathbb R^m$. \begin{definition} \label{def:App-03-Wass-Metric} Wasserstein metric $d_W: M({\rm \Xi}) \times M({\rm \Xi}) \to \mathbb R_+$ is defined as \begin{equation*} d_W(\mathbb Q,\mathbb Q_0) = \inf \left( \int_{\rm \Xi^2} \left\| \xi - \xi^0 \right\| {\rm \Pi} ({\rm d} \xi, {\rm d} \xi^0) \middle| \begin{gathered} \mbox{ $\rm \Pi$ is a joint distribution of $\xi$ and} \\ \mbox{ $\xi^0$ with marginals $\mathbb Q$ and $\mathbb Q_0$} \end{gathered} \right) \end{equation*} for two probability distributions $\mathbb Q, \mathbb Q_0 \in M({\rm \Xi})$. \end{definition} As a special case, for two discrete distributions, the Wasserstein metric is given by \begin{equation} d_W(\mathbb Q,\mathbb Q_0) = \inf_{\pi \ge 0} \left( \sum_i \sum_j \pi_{ij} \left\| \xi_j - \xi^0_i \right\| ~\middle|~ \begin{gathered} \sum\nolimits_j \pi_{ij} = p^0_i,~ \forall i \\ \sum\nolimits_i \pi_{ij} = p_j,~ \forall j \end{gathered}~~ \right) \label{eq:App-03-Wass-Def-Dis} \end{equation} where $p^0_i$ and $p_j$ denote the probabilities of the representative scenarios $\xi^0_i$ and $\xi_j$. In either case, the decision variable $\rm \Pi$ (or $\pi_{ij}$) represents the probability mass transported from $\xi^0_i$ to $\xi_j$; therefore, the Wasserstein metric can be viewed as the minimal cost of a transportation plan, where the distance $\| \xi_j - \xi^0_i \|$ encodes the transportation cost of unit mass.
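For discrete distributions, (\ref{eq:App-03-Wass-Def-Dis}) is exactly a transportation LP and can be solved directly. A minimal sketch (the row-wise scenario layout is an illustrative assumption):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def wasserstein_discrete(xi0, p0, xi, p):
    # transportation-LP form of (App-03-Wass-Def-Dis); xi0 (m x dim) and
    # xi (n x dim) hold scenarios row-wise, p0 and p their probabilities
    m, n = len(p0), len(p)
    D = np.linalg.norm(xi0[:, None, :] - xi[None, :, :], axis=2)
    A_eq = np.zeros((m + n, m * n))       # pi_ij stacked row-major
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0  # sum_j pi_ij = p0_i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0           # sum_i pi_ij = p_j
    res = linprog(D.ravel(), A_eq=A_eq, b_eq=np.r_[p0, p],
                  bounds=[(0, None)] * (m * n), method="highs")
    return res.fun
\end{verbatim}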
Alternatively, the Wasserstein metric can be represented in the dual form \begin{equation} d_W(\mathbb Q,\mathbb Q_0) = \sup_{f \in L} \left( \int_{\rm \Xi} f(\xi) \mathbb Q({\rm d} \xi) - \int_{\rm \Xi} f(\xi) \mathbb Q_0({\rm d} \xi) \right) \label{eq:App-03-Wass-Def-Dual} \end{equation} where $L=\{f:|f(\xi)-f(\xi^0)|\le \| \xi - \xi^0 \|,\forall \xi,\xi^0 \in {\rm \Xi} \}$ (Theorem 3.2 of \cite{Am-Set-Wasserstein-1}; this duality was first discovered by Kantorovich and Rubinstein \cite{Kantorovich-Rubinstein} for distributions with bounded support). With the above definition, the Wasserstein ambiguity set is the ball of radius $\epsilon$ centered at the empirical distribution $\mathbb Q_0$ \begin{equation} \label{eq:App-03-Wass-Ambiguity-Set} D_W = \left\{ \mathbb Q \in M({\rm \Xi}):d_W(\mathbb Q,\mathbb Q_0) \le \epsilon \right\} \end{equation} where $\mathbb Q_0$ is constructed from $N$ independent data samples \begin{equation*} \mathbb Q_0 = \frac{1}{N} \sum_{i=1}^N \delta_{\xi^0_i} \end{equation*} and $\delta_{\xi^0_i}$ stands for the Dirac distribution concentrating unit mass at $\xi^0_i$. In particular, we require the unknown distribution $\mathbb Q$ to satisfy a light-tail assumption, i.e., there exists $a > 1$ such that \begin{equation*} \int_{\rm \Xi} {\rm e}^{\|\xi\|^a} \mathbb Q({\rm d} \xi) < \infty \end{equation*} This assumption indicates that the tail of distribution $\mathbb Q$ decays at an exponential rate. If $\rm \Xi$ is bounded and compact, the assumption trivially holds. Under this assumption, modern measure concentration theory provides the following finite-sample guarantee for the unknown distribution belonging to the Wasserstein ambiguity set \begin{equation} \Pr\left[ d_W (\mathbb Q, \mathbb Q_0) \ge \epsilon \right] \le \begin{dcases} c_1 {\rm e}^{-c_2 N \epsilon^{\max\{m,2\}}} & \mbox{ if }\epsilon \le 1\\ c_1 {\rm e}^{-c_2 N \epsilon^a} & \mbox{ if } \epsilon > 1 \end{dcases} \label{eq:App-03-Wass-Conf-Level} \end{equation} where $c_1, c_2$ are positive constants depending on $a$, the value $A = \int_{\rm \Xi} {\rm e}^{\|\xi\|^a} \mathbb Q({\rm d} \xi)$, and the dimension $m$ (with $m \ne 2$ in the first case). Equation (\ref{eq:App-03-Wass-Conf-Level}) provides an a priori estimate of the confidence level for $\mathbb Q \notin D_W$. Conversely, we can utilize (\ref{eq:App-03-Wass-Conf-Level}) to select the parameter $\epsilon$ of the Wasserstein ambiguity set such that $D_W$ contains the unknown distribution $\mathbb Q$ with probability $1-\beta$ for some prescribed $\beta$. This requires solving for $\epsilon$ from the right-hand side of (\ref{eq:App-03-Wass-Conf-Level}) with the left-hand side set to $\beta$, resulting in \begin{equation} \epsilon = \begin{dcases} \left( \dfrac{\ln(c_1 \beta^{-1})}{c_2 N} \right)^{1/\max\{m,2\}} & \mbox{ if } N \ge \dfrac{\ln(c_1 \beta^{-1})}{c_2} \\ \left( \dfrac{\ln(c_1 \beta^{-1})}{c_2 N} \right)^{1/a} & \mbox{ if } N < \dfrac{\ln(c_1 \beta^{-1})}{c_2} \end{dcases} \label{eq:App-03-Wass-Radius} \end{equation} The Wasserstein ambiguity set with the above radius can be regarded as a confidence set for the unknown distribution $\mathbb Q$, as in statistical testing.
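A direct transcription of (\ref{eq:App-03-Wass-Radius}) follows; the constants $c_1, c_2$ must be supplied by the user, as they are problem-dependent, and the function name is an illustrative assumption.
\begin{verbatim}
import numpy as np

def wasserstein_radius(N, m, a, c1, c2, beta):
    # radius (App-03-Wass-Radius) so that Q lies in D_W with prob. 1 - beta;
    # c1, c2 are the constants of the bound (App-03-Wass-Conf-Level)
    t = np.log(c1 / beta) / c2
    return (t / N) ** (1.0 / max(m, 2)) if N >= t else (t / N) ** (1.0 / a)
\end{verbatim}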
\vspace{12pt} {\noindent \bf 2. Worst-case expectation problem} A robust SO problem under the Wasserstein metric naturally requires minimizing the worst-case expected cost: \begin{equation} \label{eq:App-03-Wass-RSO} \inf_{x \in X} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [h(x,\xi)] \end{equation} We demonstrate how to solve the core problem, the worst-case expectation \begin{equation} \label{eq:App-03-Wass-SupE} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] \end{equation} where $l(\xi) = \max_{1 \le k \le K} l_k(\xi)$ is the payoff function, consisting of the point-wise maximum of $K$ elementary functions. For notational brevity, the dependence on $x$ is suppressed and will be recovered later when necessary. We further assume that the support set $\rm \Xi$ is closed and convex; specific forms of $l(\xi)$ will be discussed below. Problem (\ref{eq:App-03-Wass-SupE}) is an infinite-dimensional optimization problem for continuous distributions. Nonetheless, the inspiring work in \cite{Am-Set-Wasserstein-1} shows that (\ref{eq:App-03-Wass-SupE}) can be reformulated as a finite-dimensional convex program for various payoff functions. To see this, expand the worst-case expectation as \begin{equation*} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] = \left\{ \begin{aligned} \sup_{\rm \Pi}~ & \int_{\rm \Xi} l(\xi) \mathbb Q ({\rm d} \xi) \\ \mbox{s.t.}~ & \int_{\rm \Xi^2} \left\| \xi - \xi^0 \right\| {\rm \Pi} ({\rm d} \xi, {\rm d} \xi^0) \le \epsilon \\ & \mbox{ $\rm \Pi$ is a joint distribution of $\xi$ and} \\ & \mbox{ $\xi^0$ with marginals $\mathbb Q$ and $\mathbb Q_0$} \end{aligned} \right. \end{equation*} According to the law of total probability, $\rm \Pi$ can be decomposed into the marginal distribution $\mathbb Q_0$ of $\xi^0$ and the conditional distributions $\mathbb Q_i$ of $\xi$ given $\xi^0 = \xi^0_i$: \begin{equation*} {\rm \Pi} = \frac{1}{N} \sum_{i=1}^N \delta_{\xi^0_i} \otimes \mathbb Q_i \end{equation*} and the worst-case expectation evolves into a generalized moment problem in the conditional distributions $\mathbb Q_i$, $i \le N$ \begin{equation*} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] = \left\{ \begin{aligned} \sup_{\mathbb Q_i \in M({\rm \Xi})} & \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} l(\xi) \mathbb Q_i ({\rm d} \xi) \\ \mbox{s.t.} ~~~ & \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} \left\| \xi - \xi^0_i \right\| \mathbb Q_i ({\rm d} \xi) \le \epsilon \end{aligned} \right. \end{equation*} Using standard Lagrangian duality, we obtain \begin{equation*} \begin{aligned} \sup_{\mathbb Q \in D_W} \mathbb E_{\mathbb Q} [l(\xi)] = \sup_{\mathbb Q_i \in M({\rm \Xi})} \inf_{\lambda \ge 0} & \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} l(\xi) \mathbb Q_i ({\rm d} \xi)\\ & + \lambda \left(\epsilon -\frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} \left\| \xi - \xi^0_i \right\| \mathbb Q_i ({\rm d} \xi) \right) \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} & \le \inf_{\lambda \ge 0} \sup_{\mathbb Q_i \in M({\rm \Xi})} \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N \int_{\rm \Xi} \left( l(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right) \mathbb Q_i ({\rm d} \xi) \\ & = \inf_{\lambda \ge 0} \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N \sup_{\xi \in {\rm \Xi}} \left( l(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right) \end{aligned} \end{equation*} The decision variables $\lambda$ and $\xi$ are finite-dimensional.
The last problem can be reformulated as \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~ & \sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \lambda \left\| \xi - \xi^0_i \right\| \right) \le s_i \\ & i=1,\cdots,N,~k=1,\cdots,K \\ & \lambda \ge 0 \end{aligned} \label{eq:App-03-Wass-SupE-Reduce-1} \end{equation} From the definition of the dual norm, we know $\lambda \left\| \xi - \xi^0_i \right\| = \max_{\|z_{ik}\|_* \le \lambda} \langle z_{ik}, \xi-\xi^0_i \rangle$, so the constraints give rise to \begin{equation*} \begin{aligned} \sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \lambda \left\| \xi - \xi^0_i \right\|\right) = & \sup_{\xi \in {\rm \Xi}} \left( l_k(\xi) - \max_{\|z_{ik} \|_* \le \lambda} \langle z_{ik}, \xi-\xi^0_i \rangle \right) \\ = & \sup_{\xi \in {\rm \Xi}} \min_{\|z_{ik}\|_* \le \lambda} l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle \\ \le & \min_{\|z_{ik}\|_* \le \lambda} \sup_{\xi \in {\rm \Xi}}~ l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle \end{aligned} \end{equation*} Substituting this into problem (\ref{eq:App-03-Wass-SupE-Reduce-1}) leads to a more restricted feasible set and a larger objective value, yielding \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~ & \min_{\|z_{ik}\|_* \le \lambda} \sup_{\xi \in {\rm \Xi}}~ l_k(\xi) - \langle z_{ik}, \xi-\xi^0_i \rangle \le s_i \\ & i=1,\cdots,N,~ k = 1,\cdots, K \\ & \lambda \ge 0 \end{aligned} \label{eq:App-03-Wass-SupE-Reduce-2} \end{equation} The constraints of (\ref{eq:App-03-Wass-SupE-Reduce-2}) imply that the feasible set of $\lambda$ is $\lambda \ge \|z_{ik}\|_*$, and the min operator in the constraints can be omitted because it complies with the minimization in the objective function. Therefore, we arrive at \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~ & \sup_{\xi \in {\rm \Xi}}~ \left( l_k(\xi) - \langle z_{ik}, \xi \rangle \right) + \langle z_{ik}, \xi^0_i \rangle \le s_i,~ \lambda \ge \|z_{ik}\|_* \\ & i=1,\cdots,N, ~ k = 1,\cdots,K \end{aligned} \label{eq:App-03-Wass-SupE-Reduce-3} \end{equation} It is proved in \cite{Am-Set-Wasserstein-1} that problems (\ref{eq:App-03-Wass-SupE}) and (\ref{eq:App-03-Wass-SupE-Reduce-3}) are actually equivalent. Next, we derive the concrete forms of (\ref{eq:App-03-Wass-SupE-Reduce-3}) under specific payoff functions $l(\xi)$ and uncertainty sets $\rm \Xi$. Unlike \cite{Am-Set-Wasserstein-1}, which relies on conjugate functions from convex analysis, we mainly exploit LP duality theory, which is more friendly to readers with an engineering background. {\bf Case 1:} Convex PWL payoff function $l(\xi) = \max_{1 \le k \le K} \{a^T_k \xi + b_k\}$ and bounded polyhedral uncertainty set ${\rm \Xi} = \{\xi \in \mathbb R^m:C \xi \le d\}$. The key point is the supremum over $\xi$ in the following constraint \begin{equation*} \sup_{\xi \in {\rm \Xi}} \left( a^T_k \xi - \langle z_{ik}, \xi \rangle \right) + b_k + \langle z_{ik}, \xi^0_i \rangle \le s_i \end{equation*} For each $k$, the supremum is an LP \begin{equation*} \begin{aligned} \max~~ & (a_k - z_{ik})^T \xi \\ \mbox{s.t.} ~~& C \xi \le d \end{aligned} \end{equation*} Its dual LP reads \begin{equation*} \begin{aligned} \min~~ & d^T \gamma_{ik} \\ \mbox{s.t.} ~~& C^T \gamma_{ik} = a_k - z_{ik} \\ & \gamma_{ik} \ge 0 \end{aligned} \end{equation*} Therefore, $z_{ik} = a_k - C^T \gamma_{ik}$.
Because of strong duality, we can replace the supremum by the objective of the dual LP, which gives rise to: \begin{equation*} d^T \gamma_{ik} + b_k + \langle a_k - C^T \gamma_{ik}, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N,~ k = 1,\cdots,K \end{equation*} Arranging all constraints together, we obtain a convex program which is equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 1: \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~ & b_k + a^T_k \xi^0_i + \gamma^T_{ik} (d - C \xi^0_i) \le s_i,~ i=1,\cdots,N,~ k = 1,\cdots,K \\ & \lambda \ge \|a_k - C^T \gamma_{ik}\|_*,~ \gamma_{ik} \ge 0,~ i=1,\cdots,N,~ k = 1,\cdots,K \end{aligned} \label{eq:App-03-Wass-SupE-Conic-Case1} \end{equation} In the absence of distributional uncertainty, i.e., $\epsilon = 0$, which implies that the Wasserstein ambiguity set $D_W$ is a singleton, $\lambda$ can take any non-negative value without changing the objective function. Because all sampled scenarios must belong to the support set, i.e., $d - C \xi^0_i \ge 0$ holds for all $i$, there must be $\gamma_{ik}=0$ at the optimal solution, leading to an optimal value of $\sum_{i=1}^N s_i/N$, where $s_i=\max_{1 \le k \le K} \{a^T_k \xi^0_i + b_k\}$; this represents the sample average of the payoff function under the empirical distribution. {\bf Case 2:} Concave PWL payoff function $l(\xi) = \min_{1 \le k \le K} \{a^T_k \xi + b_k\}$ and bounded polyhedral uncertainty set ${\rm \Xi} = \{\xi \in \mathbb R^m:C \xi \le d\}$. In such circumstances, the supremum over $\xi$ in the constraint becomes \begin{equation*} \max_{\xi \in {\rm \Xi}}~ \left\{ -z^T_i \xi + \min_{1 \le k \le K} \left\{ a^T_k \xi + b_k \right\} \right\} \end{equation*} which is equivalent to an LP \begin{equation*} \begin{aligned} \max~~ & -z^T_i \xi + \tau_i \\ \mbox{s.t.} ~~ & A \xi + b \ge \tau_i {\bf 1} \\ & C \xi \le d \end{aligned} \end{equation*} where the $k$-th row of $A$ is $a^T_k$, the $k$-th entry of $b$ is $b_k$, and $\bf 1$ is the all-one vector of compatible dimension. Its dual LP reads \begin{equation*} \begin{aligned} \min~~ & b^T \theta_i + d^T \gamma_i \\ \mbox{s.t.} ~~& - A^T \theta_i + C^T \gamma_i= -z_i \\ & {\bf 1}^T \theta_i = 1,~ \theta_i \ge 0,~ \gamma_i \ge 0 \end{aligned} \end{equation*} Therefore, $z_i = A^T \theta_i - C^T \gamma_i$. Because of strong duality, we can replace the supremum by the objective of the dual LP, which gives rise to: \begin{equation*} b^T \theta_i + d^T \gamma_i + \langle A^T \theta_i - C^T \gamma_i, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N \end{equation*} Arranging all constraints together, we obtain a convex program which is equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 2: \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~~ & \theta^T_i (b+A\xi^0_i) + \gamma^T_i ( d - C \xi^0_i) \le s_i,~ i=1,\cdots,N \\ & \lambda \ge \|A^T \theta_i - C^T \gamma_i \|_*,~ i=1,\cdots,N \\ &\gamma_i \ge 0,~ \theta_i \ge 0,~ {\bf 1}^T \theta_i = 1,~ i=1,\cdots,N \end{aligned} \label{eq:App-03-Wass-SupE-Conic-Case2} \end{equation} There is no $k$ index in the constraints, because it is absorbed in $A$ and $b$. An analogous analysis shows that if $\epsilon=0$, there must be $\gamma_i = 0$ and \begin{equation*} s_i = \min \{\theta^T_i (b+A\xi^0_i): \theta_i \ge 0,~ {\bf 1}^T \theta_i = 1 \} = \min_{1 \le k \le K} \{a^T_k \xi^0_i + b_k\} \end{equation*} implying that $\sum_{i=1}^N s_i/N$ is the sample average of the payoff function under the empirical distribution.
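A minimal sketch of (\ref{eq:App-03-Wass-SupE-Conic-Case1}) with the 1-norm Wasserstein metric (so the dual norm is the $\infty$-norm), written with the cvxpy modeler; the data layout and function name are illustrative assumptions.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def worst_case_pwl(a, b, C, d, xi0, eps):
    # LP (App-03-Wass-SupE-Conic-Case1) for l(xi) = max_k a_k^T xi + b_k,
    # support {xi : C xi <= d}; a is K x m, xi0 holds samples row-wise
    K, N = a.shape[0], xi0.shape[0]
    lam = cp.Variable(nonneg=True)
    s = cp.Variable(N)
    gam = {(i, k): cp.Variable(d.size, nonneg=True)
           for i in range(N) for k in range(K)}
    cons = []
    for i in range(N):
        for k in range(K):
            cons += [b[k] + a[k] @ xi0[i]
                     + gam[i, k] @ (d - C @ xi0[i]) <= s[i],
                     cp.norm(a[k] - C.T @ gam[i, k], "inf") <= lam]
    return cp.Problem(cp.Minimize(eps * lam + cp.sum(s) / N), cons).solve()
\end{verbatim}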
Now we focus our attention on the min-max problem (\ref{eq:App-03-Wass-RSO}), which frequently arises in two-stage robust SO and entails evaluating the expected recourse cost of an LP parameterized in $\xi$. We investigate two cases, depending on where $\xi$ appears. {\bf Case 3:} Uncertain cost coefficients: $l(\xi) = \min_y \{y^T Q \xi: Wy \ge h - Ax\}$, where $x$ is the first-stage decision variable, $y$ represents the recourse action, and the feasible region is always non-empty. In this case, the supremum over $\xi$ in the constraint becomes \begin{equation*} \begin{aligned} & \max_{\xi \in {\rm \Xi}}~ \left\{ -z^T_i \xi + \min_{y} \left\{ y^T Q \xi: Wy \ge h - Ax \right\} \right\} \\ =& \min_{y} \left\{\max_{\xi \in {\rm \Xi}} \left\{ \left(Q^T y-z_i\right)^T \xi \right\} : Wy \ge h - Ax\right\} \end{aligned} \end{equation*} Replacing the inner LP with its dual, we get an equivalent LP \begin{equation*} \begin{aligned} \min_{\gamma_i,y_i}~~ & d^T \gamma_i \\ \mbox{s.t.} ~~ & C^T \gamma_i = Q^T y_i - z_i,~ \gamma_i \ge 0 \\ & Wy_i \ge h - Ax \end{aligned} \end{equation*} Here we associate variable $y$ with a subscript $i$ to highlight its dependence on the value of $\xi$. Therefore, $z_i = Q^T y_i - C^T \gamma_i$, and we can replace the supremum by the objective of the dual LP, which gives rise to: \begin{equation*} d^T \gamma_i + \langle Q^T y_i - C^T \gamma_i, \xi^0_i \rangle \le s_i,~ i=1,\cdots,N \end{equation*} Arranging all constraints together, we obtain a convex program which is equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 3: \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~~ & y^T_i Q \xi^0_i + \gamma^T_i(d - C \xi^0_i) \le s_i,~ i=1,\cdots,N\\ & \lambda \ge \|Q^T y_i - C^T \gamma_i \|_*, ~\gamma_i \ge 0,~ i=1,\cdots,N\\ & W y_i \ge h-Ax,~ i=1,\cdots,N \end{aligned} \label{eq:App-03-Wass-SupE-Conic-Case3} \end{equation} Without distributional uncertainty ($\epsilon = 0$), $\lambda$ can take an arbitrary nonnegative value; for a similar reason, we have $\gamma_i=0$ and $s_i =y^T_i Q \xi^0_i$ at the optimum. So problem (\ref{eq:App-03-Wass-SupE-Conic-Case3}) is equivalent to the SAA problem under the empirical distribution \begin{equation*} \min_{y_i}~ \left\{ \frac{1}{N} \sum_{i=1}^N y^T_i Q \xi^0_i: W y_i \ge h-Ax \right\} \end{equation*} {\bf Case 4:} Uncertain constraint right-hand side: \begin{equation*} \begin{aligned} l(\xi) & = \min_y ~\{q^T y: Wy \ge H \xi + h - Ax\} \\ & = \max_{\theta} \left\{ \theta^T(H\xi + h - Ax):W^T \theta = q, \theta \ge 0 \right\} \\ & = \max_k ~ v^T_k (H\xi + h - Ax) = \max_k \left\{ v^T_k H \xi + v^T_k(h-Ax) \right\} \end{aligned} \end{equation*} where the $v_k$ are the vertices of the polyhedron $\{\theta:W^T \theta = q, \theta \ge 0\}$. In this way, $l(\xi)$ is expressed as a convex PWL function.
Applying the result in Case 1, we obtain a convex program which is equivalent to problem (\ref{eq:App-03-Wass-SupE-Reduce-3}) in Case 4: \begin{equation} \begin{aligned} \inf_{\lambda, s_i}~& \lambda \epsilon + \frac{1}{N} \sum_{i=1}^N s_i \\ \mbox{s.t.} ~ & v^T_k(h-Ax) + v^T_k H \xi^0_i + \gamma^T_{ik} (d - C \xi^0_i) \le s_i,~ i=1,\cdots,N,~ \forall k \\ & \lambda \ge \|H^T v_k - C^T \gamma_{ik}\|_*,~ \gamma_{ik} \ge 0,~ i=1,\cdots,N,~ \forall k \end{aligned} \label{eq:App-03-Wass-SupE-Conic-Case4} \end{equation} For a similar reason, without distributional uncertainty we have $\gamma_{ik}=0$ and $s_i = \max_k \{ v^T_k(h-Ax) + v^T_k H \xi^0_i \} = q^Ty_i$ at the optimum, where the last equality follows from strong duality. So problem (\ref{eq:App-03-Wass-SupE-Conic-Case4}) is equivalent to the SAA problem under the empirical distribution \begin{equation*} \min_{y_i}~ \left\{ \frac{1}{N} \sum_{i=1}^N q^T y_i: W y_i \ge H \xi^0_i + h - Ax \right\} \end{equation*} The following discussions are devoted to computational tractability. \begin{itemize} \item If the 1-norm or the $\infty$-norm is used to define the Wasserstein metric, their dual norms are the $\infty$-norm and the 1-norm, respectively; problems (\ref{eq:App-03-Wass-SupE-Conic-Case1})-(\ref{eq:App-03-Wass-SupE-Conic-Case4}) then reduce to LPs whose sizes grow with the number $N$ of sampled data. If the Euclidean norm is used, the resulting problems are SOCPs. \item For Cases 1, 2 and 3, the resulting equivalent LPs scale polynomially and can therefore be readily solved. As for Case 4, the number of vertices may grow exponentially with the problem size. However, one can adopt a decomposition algorithm similar to CCG, which iteratively identifies critical vertices without enumerating all of them. \item The computational complexity of all the equivalent convex programs is independent of the size of the Wasserstein ambiguity set. \item It is shown in \cite{Am-Set-Wasserstein-1} that the worst-case expectation can also be computed from the following problem \begin{equation} \label{eq:App-03-Wass-SupE-Extreme-Q} \begin{aligned} \sup_{\alpha_{ik},q_{ik}} ~~& \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K \alpha_{ik} l_k \left( \xi^0_i - \frac{q_{ik}}{\alpha_{ik}} \right) \\ \mbox{s.t.}~~ & \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K \|q_{ik}\| \le \epsilon\\ & \alpha_{ik} \ge 0, \forall i,\forall k,~ \sum_{k=1}^K \alpha_{ik} =1, \forall i \\ & \xi^0_i - \frac{q_{ik}}{\alpha_{ik}} \in {\rm \Xi},~ \forall i, \forall k \end{aligned} \end{equation} A seemingly non-convex term arises from the fraction $q_{ik}/\alpha_{ik}$; in fact, problem (\ref{eq:App-03-Wass-SupE-Extreme-Q}) is convex following the definition of the extended perspective function \cite{Am-Set-Wasserstein-1}. Moreover, if $[\alpha_{ik}(r),q_{ik}(r)]_{r \in \mathbb N}$ is a sequence of feasible solutions and the corresponding objective values converge to the supremum of (\ref{eq:App-03-Wass-SupE-Extreme-Q}), then the discrete distribution \begin{equation*} \mathbb Q_r = \frac{1}{N} \sum_{i=1}^N \sum_{k=1}^K \alpha_{ik}(r)\delta_{\xi_{ik}(r)},~ \xi_{ik}(r) = \xi^0_i - \frac{q_{ik}(r)}{\alpha_{ik}(r)} \end{equation*} approaches the worst-case distribution in $D_W$ \cite{Am-Set-Wasserstein-1}. \end{itemize} \vspace{12pt} {\noindent \bf 3. Static robust chance constraints} Another important issue in SO is the chance constraint.
Here we discuss robust joint chance constraints of the following form \begin{equation} \label{eq:App-03-Wass-RCC-Def} \inf_{\mathbb Q \in D_W} \Pr [a(x)^T \xi_i \le b_i(x), i=1,\cdots,I] \ge 1-\beta \end{equation} where $x$ is the decision variable; the chance constraint involves $I$ inequalities with uncertain parameters $\xi_i$ supported on sets ${\rm \Xi}_i \subseteq \mathbb R^n$. The joint probability distribution $\mathbb Q$ belongs to the Wasserstein ambiguity set. $a(x) \in \mathbb R^n$ and $b_i(x) \in \mathbb R$ are affine mappings of $x$, where $a(x)=\eta x + (1-\eta) {\bf 1}$, $\eta \in \{0,1\}$, and $b_i(x) = B^T_ix + b^0_i$. When $\eta=1$ ($\eta=0$), (\ref{eq:App-03-Wass-RCC-Def}) involves left-hand (right-hand) uncertainty. ${\rm \Xi}=\prod_i {\rm \Xi_i}$ is the support set of $\xi = [\xi^T_1,\cdots,\xi^T_I]^T$. The robust chance constraint (\ref{eq:App-03-Wass-RCC-Def}) requires that all inequalities be met, under all possible distributions in the Wasserstein ambiguity set $D_W$, with a probability of at least $1-\beta$, where $\beta \in (0,1)$ denotes a prescribed risk tolerance. The feasible region stipulated by (\ref{eq:App-03-Wass-RCC-Def}) is denoted by $X$. We introduce the main results from \cite{Am-Set-Wasserstein-2} while avoiding rigorous mathematical proofs. \begin{assumption} \label{ap:App-03-Wass-RCC} The support set $\rm \Xi$ is an $n \times I$-dimensional vector space, and the distance metric in the Wasserstein ambiguity set is $d(\xi,\zeta)=\|\xi - \zeta\|$. \end{assumption} \begin{theorem} \cite{Am-Set-Wasserstein-2} Under Assumption (\ref{ap:App-03-Wass-RCC}), $X=Z_1 \cup Z_2$, where \begin{equation} \label{eq:App-03-Wass-RCC-Z1} Z_1 = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered} \epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\ z_j + \gamma \le \max \left\{ b_i(x) - a(x)^T \zeta^j_i,0 \right\} \\ i = 1, \cdots I,~ j = 1,\cdots, N \\ z_j \le 0,~ j = 1,\cdots,N \\ \|a(x)\|_* \le v,~ \gamma \ge 0 \end{lgathered} \right\} \end{equation} where $\epsilon$ is the radius of the Wasserstein ambiguity set, $N$ is the number of sampled scenarios in the empirical distribution, and \begin{equation} \label{eq:App-03-Wass-RCC-Z2} Z_2 = \{x \in \mathbb R^n ~|~ a(x) = 0,~ b_i(x) \ge 0,~i=1,\cdots I \} \end{equation} \label{th:App-03-Wass-RCC-Exact} \end{theorem} In Theorem \ref{th:App-03-Wass-RCC-Exact}, $Z_2$ is trivial: if $ \eta = 1$, then $Z_2=\{x \in \mathbb R^n~|~ x=0,~b_i \ge 0,~ \forall i\}$; if $ \eta = 0$, then $Z_2=\emptyset$. $Z_1$ can be reformulated in an MILP-compatible form if it is bounded. By linearizing the second constraint, we have \begin{equation} Z_1 = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered} \epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\ z_j + \gamma \le s_{ij},~ \forall i, \forall j \\ b_i(x) - a(x)^T \zeta^j_i \le s_{ij} \le M_{ij} y_{ij},~\forall i, \forall j \\ s_{ij} \le b_i(x) - a(x)^T \zeta^j_i+M_{ij}(1-y_{ij}),~\forall i, \forall j \\ \|a(x)\|_* \le v,~ \gamma \ge 0,~z_j \le 0,~ \forall j \\ s_{ij} \ge 0,~ y_{ij} \in \{0,1\},~ \forall i,~ \forall j \end{lgathered} \right\} \label{eq:App-03-Wass-RCC-Z1-MILP} \end{equation} where $\forall i$ and $\forall j$ are short for $i=1,\cdots,I$ and $j=1,\cdots,N$, respectively, and \begin{equation*} M_{ij} \ge \max_{x \in Z_1} \left| b_i(x) - a(x)^T \zeta^j_i \right| \end{equation*} It is easy to see that if $b_i(x) - a(x)^T \zeta^j_i <0$, then $y_{ij}=0$ (otherwise $s_{ij} \le b_i(x) - a(x)^T \zeta^j_i <0$), hence $s_{ij} = 0=\max \{ b_i(x) - a(x)^T \zeta^j_i,0 \}$.
If $b_i(x) - a(x)^T \zeta^j_i >0$, then $y_{ij}=1$ (otherwise $b_i(x) - a(x)^T \zeta^j_i \le M_{ij}y_{ij} = 0$), hence $s_{ij} = b_i(x) - a(x)^T \zeta^j_i =\max\{ b_i(x) - a(x)^T \zeta^j_i,0\}$. If $b_i(x) - a(x)^T \zeta^j_i =0$, then $s_{ij} = 0$ regardless of the value of $y_{ij}$. In conclusion, (\ref{eq:App-03-Wass-RCC-Z1}) and (\ref{eq:App-03-Wass-RCC-Z1-MILP}) are equivalent. For right-hand uncertainty, in which $\eta=0$ and $a(x)={\bf 1}$, we have $X = Z_1$ because $Z_2 = \emptyset$. Moreover, variable $v$ in (\ref{eq:App-03-Wass-RCC-Z1-MILP}) can be fixed to 1 if the 1-norm is used in the Wasserstein ambiguity set $D_W$: in that case $\|a(x)\|_* = \|{\bf 1}\|_\infty = 1$, so $v \ge 1$ in $Z_1$, and the remaining constraints are easiest to satisfy at $v=1$. In (\ref{eq:App-03-Wass-RCC-Z1-MILP}), a total of $I \times N$ binary variables are introduced to linearize the $\max\{\cdot,0\}$ function, making the problem challenging to solve. An inner approximation of $Z_1$ is obtained by simply replacing $\max \{ b_i(x) - a(x)^T \zeta^j_i,0 \}$ with its first argument, yielding an approximation free of big-M parameters and binary variables \begin{equation} \label{eq:App-03-Wass-RCC-CVaR} Z = \left\{x \in \mathbb R^n ~\middle|~ \begin{lgathered} \epsilon v - \beta \gamma \le \frac{1}{N} \sum_{j=1}^N z_j \\ z_j + \gamma \le b_i(x) - a(x)^T \zeta^j_i,~ \forall i,\forall j \\ z_j \le 0,~ \forall j,~ \|a(x)\|_* \le v,~ \gamma \ge 0 \end{lgathered} \right\} \end{equation} This formulation can be derived from the CVaR model and enjoys better computational tractability. \vspace{12pt} {\noindent \bf 4. Adaptive robust chance constraints} Robust chance-constrained programming with the Wasserstein metric is studied in \cite{App03-Sect4-DRSO-Was-4} in a different but more general form. The problem is as follows \begin{equation} \label{eq:App-03-DRCC-Wass-1} \begin{aligned} \min_{x \in X}~~ & c^T x \\ \mbox{s.t.} ~~ & \inf_{\mathbb Q \in D_W} \Pr [F(x,\xi) \le 0] \ge 1-\beta \end{aligned} \end{equation} where $X$ is a bounded polyhedron and $F: \mathbb R^n \times {\rm \Xi} \to \mathbb R$ is a scalar function that is convex in $x$ for every $\xi$. This formulation is general enough to capture joint chance constraints: if $F$ contains $K$ individual constraints, then $F$ can be defined as their component-wise maximum, as in (\ref{eq:App-03-DRO-DRCC-Joint-Para}). Here we develop a technique to solve two-stage problems where $F(x,\xi)$ is the optimal value of another LP parameterized in $x$ and $\xi$. More precisely, we consider \begin{subequations} \label{eq:App-03-TSDRCC-Wass} \begin{equation} \label{eq:App-03-TSDRCC-Wass-1} \begin{aligned} \min_{x \in X}~~ & c^T_1 x \\ \mbox{s.t.} ~~ & \sup_{\mathbb Q \in D_W} \Pr [f(x,\xi) \ge c^T_2 \xi] \le \beta \end{aligned} \end{equation} \begin{equation} \label{eq:App-03-TSDRCC-Wass-2} \begin{aligned} f(x,\xi) = \min ~~ & c^T_3 y \\ \mbox{s.t.} ~~ & Ax + By + C\xi \le d \end{aligned} \end{equation} \end{subequations} where in (\ref{eq:App-03-TSDRCC-Wass-1}) the robust chance constraint can be regarded as a risk-limiting requirement whose threshold value depends on the uncertain parameter $\xi$. We assume LP (\ref{eq:App-03-TSDRCC-Wass-2}) is always feasible (relatively complete recourse) and has a finite optimum. The second-stage cost can be considered in the objective function of (\ref{eq:App-03-TSDRCC-Wass-1}) in the form of a worst-case expectation, which has been discussed in previous sections and is omitted here for brevity. Here we focus on coping with the second-stage LP in the robust chance constraint.
Define the loss function \begin{equation} \label{eq:App-03-TSDRCC-Wass-Loss-Fun} g(x,\xi) = f(x,\xi) - c^T_2 \xi \end{equation} Recalling the relation between chance constraints and CVaR discussed in Sect. \ref{App-C-Sect03-01}, a sufficient condition for the robust chance constraint in (\ref{eq:App-03-TSDRCC-Wass-1}) is CVaR$(g(x,\xi),\beta) \le 0$, $\forall \mathbb Q \in D_W$, or equivalently \begin{equation} \label{eq:App-03-TSDRCC-CVaR-1} \sup_{\mathbb Q \in D_W} \inf_{\gamma \in \mathbb R} \beta \gamma + \mathbb E_{\mathbb Q} (\max \{g(x,\xi)-\gamma,0\}) \le 0 \end{equation} According to \cite{App03-Sect4-DRSO-Was-4}, constraint (\ref{eq:App-03-TSDRCC-CVaR-1}) can be conservatively approximated by \begin{equation} \label{eq:App-03-TSDRCC-CVaR-2} \epsilon L + \inf_{\gamma \in \mathbb R} \left\{ \beta \gamma + \frac{1}{N} \sum_{i=1}^N \max \{g(x,\xi^i)-\gamma,0\} \right\} \le 0 \end{equation} where $\epsilon$ is the parameter of the Wasserstein ambiguity set $D_W$, $L$ is a constant satisfying $g(x,\xi) \le L\| \xi \|_1$, and $\xi^i$, $i=1,\cdots,N$ are samples of the uncertain data. Substituting (\ref{eq:App-03-TSDRCC-Wass-2}) and (\ref{eq:App-03-TSDRCC-Wass-Loss-Fun}) into (\ref{eq:App-03-TSDRCC-CVaR-2}), we obtain an LP that is a conservative approximation of problem (\ref{eq:App-03-TSDRCC-Wass}) \begin{equation} \label{eq:App-03-TSDRCC-CVaR-Eqv-LP} \begin{aligned} \min~~ & c^T_1 x \\ \mbox{s.t.} ~~ & x \in X,~ \epsilon L + \beta \gamma + \frac{1}{N} \sum_{i=1}^N s_{i} \le 0 \\ & s_i \ge 0,~ s_i \ge c_3^T y^i - c^T_2 \xi^i - \gamma,~ i = 1,\cdots,N \\ & Ax + By^i + C\xi^i \le d,~ i = 1,\cdots,N \\ \end{aligned} \end{equation} where $y^i$ is the second-stage decision associated with $\xi^i$. This formulation could be very conservative for three reasons: first, the worst-case distribution is considered; second, the CVaR constraint (\ref{eq:App-03-TSDRCC-CVaR-1}) is a pessimistic approximation of the chance constraint; finally, the sampled constraint (\ref{eq:App-03-TSDRCC-CVaR-2}) is a pessimistic approximation of (\ref{eq:App-03-TSDRCC-CVaR-1}). More discussions on robust chance constraints with the Wasserstein metric under various settings can be found in \cite{App03-Sect4-DRSO-Was-4}.
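As an illustration, the following CVXPY sketch assembles the LP (\ref{eq:App-03-TSDRCC-CVaR-Eqv-LP}) from samples; all data are hypothetical, the recourse matrix is chosen so that relatively complete recourse holds by construction, and the constant $L$ is treated as given.

\begin{verbatim}
# Conservative LP approximation of the two-stage robust chance constraint.
# Hypothetical data; B = -I gives relatively complete recourse (y can
# always be raised to satisfy A x + B y + C xi <= d).
import cvxpy as cp
import numpy as np

np.random.seed(1)
N, nx, ny, ndim = 10, 3, 2, 2
xi = np.random.randn(N, ndim)                  # samples xi^i
c1, c2, c3 = np.ones(nx), 0.1 * np.ones(ndim), np.ones(ny)
A = 0.2 * np.abs(np.random.randn(ny, nx))
B = -np.eye(ny)
Cm = 0.1 * np.random.randn(ny, ndim)
dvec = np.ones(ny)
eps, beta, L = 0.05, 0.1, 1.0                  # radius, risk level, constant L

x = cp.Variable(nx, nonneg=True)               # X = {x >= 0} for simplicity
gamma = cp.Variable()
s = cp.Variable(N, nonneg=True)
y = [cp.Variable(ny) for _ in range(N)]        # wait-and-see decision per sample
cons = [eps * L + beta * gamma + cp.sum(s) / N <= 0]
for i in range(N):
    cons += [s[i] >= c3 @ y[i] - c2 @ xi[i] - gamma,
             A @ x + B @ y[i] + Cm @ xi[i] <= dvec]
prob = cp.Problem(cp.Minimize(c1 @ x), cons)
prob.solve()
print(prob.status, prob.value)
\end{verbatim}

Each $y^i$ here is the wait-and-see decision associated with sample $\xi^i$, in line with the formulation above.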
\vspace{12pt} {\noindent \bf 5. Use of forecast data} The Wasserstein metric enjoys many advantages, such as finite-sample performance guarantees and the existence of tractable reformulations. However, moment information is not used, especially the first-order moment reflecting the forecast, which can be updated as time rolls on; as a result, the worst-case distribution generally has a mean value different from the forecast (if available). To incorporate forecast data, we propose the following Wasserstein ambiguity set with fixed mean \begin{equation} \label{eq:App-03-Wass-Mom} D^M_W = \left\{ \mathbb Q \in D_W \middle| \mathbb E_{\mathbb Q} [\xi] = \hat \xi \right\} \end{equation} and the worst-case expectation problem can be expressed as \begin{subequations} \label{eq:App-03-MaxE-Wass-Mom} \begin{align} \sup_{\mathbb Q \in D^M_W} ~~ & \mathbb E_{\mathbb Q} [l(\xi)] \\ = \sup_{f^n(\xi)} ~~ & \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} l(\xi) f^n(\xi) {\rm d} \xi \\ \mbox{s.t.} ~~ & \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} \| \xi - \xi^n \|_p f^n(\xi) {\rm d} \xi \le \epsilon : \lambda \\ & \int_{\rm \Xi} f^n(\xi) {\rm d} \xi = 1 : \theta_n,~ n = 1,\cdots, N \\ & \frac{1}{N} \sum_{n=1}^N \int_{\rm \Xi} \xi f^n(\xi) {\rm d} \xi = \hat \xi : \rho \end{align} \end{subequations} where $l(\xi)$ is a loss function similar to that in (\ref{eq:App-03-Wass-SupE}), $f^n(\xi)$ is the conditional density function associated with historical data sample $\xi^n$, and the dual variables $\lambda$, $\theta_n$, and $\rho$ are listed after the colons. Similar to the discussion of problem (\ref{eq:App-03-DRO-Worst-Expectation-Primal}), the dual problem of (\ref{eq:App-03-MaxE-Wass-Mom}) is \begin{subequations} \label{eq:App-03-Dual-MaxE-Wass-Mom} \begin{align} \min_{\lambda \ge 0,\theta_n,\rho} ~~ & \lambda \epsilon + \rho^T \hat \xi + \frac{1}{N} \sum_{n=1}^N \theta_n \label{eq:App-03-Dual-MaxE-Wass-Mom-Obj}\\ \mbox{s.t.} ~~ & \theta_n + \lambda \| \xi - \xi^n \|_p + \rho^T \xi \ge l(\xi), \forall \xi \in {\rm \Xi},~ \forall n \label{eq:App-03-Dual-MaxE-Wass-Mom-Cons} \end{align} \end{subequations} For $p=2$, a polyhedral $\rm \Xi$, and a PWL $l(\xi)$, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be transformed into an intersection of PSD cones, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) gives rise to an SDP; some examples can be found in Sect. \ref{App-C-Sect03-01}. If $\rm \Xi$ is described by a single quadratic constraint, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be reformulated by using the well-known S-lemma, which has been discussed in Sect. \ref{App-A-Sect02-04}, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) still comes down to an SDP. For $p=1$ or $p=+\infty$, a polyhedral $\rm \Xi$, and a PWL $l(\xi)$, constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) can be transformed into a polyhedron using duality theory, and problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) gives rise to an LP. Because the ambiguity set is more restrictive, problem (\ref{eq:App-03-Dual-MaxE-Wass-Mom}) is less conservative than problem (\ref{eq:App-03-Wass-SupE-Reduce-3}), in which the mean value of the uncertain data is free. A Wasserstein-moment metric with variance is exploited in \cite{App03-Sect4-DRSO-Was-5} and applied to wind power dispatch. Nevertheless, that ambiguity set neglects the first-order moment and considers only the second-order moment; such a formulation is useful when little historical data is at hand.
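For $p=1$, a polyhedral support, and a PWL loss $l(\xi)=\max_k\{a^T_k\xi + b_k\}$, the semi-infinite constraint (\ref{eq:App-03-Dual-MaxE-Wass-Mom-Cons}) dualizes piece by piece exactly as in the earlier worst-case expectation sketch, with $a_k$ replaced by $a_k - \rho$. The following CVXPY fragment (hypothetical data again; the forecast $\hat\xi$ is taken to be the empirical mean purely for illustration) builds the resulting LP, consistent with the dual stated above.

\begin{verbatim}
# Fixed-mean Wasserstein dual (p = 1, PWL loss, box support); a sketch.
import cvxpy as cp
import numpy as np

np.random.seed(2)
N, dim, K = 20, 2, 3
xi0 = np.random.uniform(-1, 1, (N, dim))
xihat = xi0.mean(axis=0)                    # forecast = empirical mean here
a, b = np.random.randn(K, dim), np.random.randn(K)
C = np.vstack([np.eye(dim), -np.eye(dim)])  # box support -2 <= xi <= 2
d = 2.0 * np.ones(2 * dim)
eps = 0.1

lam = cp.Variable(nonneg=True)
th = cp.Variable(N)                         # theta_n
rho = cp.Variable(dim)                      # multiplier of the mean constraint
cons = []
for n in range(N):
    for k in range(K):
        g = cp.Variable(2 * dim, nonneg=True)
        cons += [b[k] + (a[k] - rho) @ xi0[n] + g @ (d - C @ xi0[n]) <= th[n],
                 cp.norm(a[k] - rho - C.T @ g, "inf") <= lam]
prob = cp.Problem(cp.Minimize(lam * eps + rho @ xihat + cp.sum(th) / N), cons)
prob.solve()
print("fixed-mean worst-case expectation:", prob.value)
\end{verbatim}

Since $\rho = 0$ recovers the dual without the mean constraint, the optimal value here is never larger than that of the earlier sketch, reflecting the reduced conservatism noted above.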
\vspace{12pt} As a short conclusion, distributionally robust optimization and data-driven robust stochastic optimization leverage statistical information on the uncertain data and overcome the conservatism of traditional robust optimization approaches, which are built upon the worst-case scenario. The core issue is the equivalent convex reformulation of the worst-case expectation problem or of the robust chance constraint over the uncertain probability distribution restricted to the ambiguity set. Optimization over a moment-based ambiguity set can be formulated as a semi-infinite LP, whose dual problem gives rise to SDPs, and hence can be readily solved. When additional structural properties are taken into account, such as unimodality, more sophisticated treatment is needed. As for robust stochastic programming, tractable reformulation of the worst-case expectation and of robust chance constraints is the central issue. A robust chance constraint under a $\phi$-divergence based ambiguity set is equivalent to a traditional chance constraint under the empirical distribution but with a modified confidence level; it can be transformed into an MILP or approximated by an LP based on risk theory with the help of the sample average approximation technique, as can a robust chance constraint under a Wasserstein metric based ambiguity set, following somewhat different expressions. The worst-case expectation under a $\phi$-divergence based ambiguity set boils down to a convex program with linear constraints and a nonlinear objective function, which can be efficiently solved via an outer approximation algorithm. The worst-case expectation under a Wasserstein ambiguity set comes down to a conic program, which is convex and readily solvable. Unlike the max-min problem in the traditional robust optimization method, which identifies the worst-case scenario that the decision maker wishes to avoid, the worst-case expectation problem in distributionally robust optimization and robust stochastic programming is solved in its dual form, whose solution is less intuitive to the decision maker; moreover, it may not be easy to recover the primal optimal solution, i.e., the worst-case probability distribution. The worst-case distribution in robust chance-constrained stochastic programming is discussed in \cite{App03-Sect4-RCCP,Am-Set-Wasserstein-1}; the worst-case discrete distribution in a two-stage stochastic program with min-max expectation can be computed via a polynomial-complexity algorithm. Nonetheless, from a practical perspective, what the decision maker actually needs to deploy is merely the here-and-now decision; the worst-case probability distribution is usually less important, since corrective actions can be postponed to a later stage, when the uncertain data have been observed or can be predicted with high accuracy. \section{Further Reading} \label{App-C-Sect05} Uncertainty is ubiquitous in real-life decision-making problems, and the decision maker usually has limited information and statistical data on the uncertain factors, which makes robust optimization very attractive in practice, as it is tailored to the information available at hand and often gives rise to computationally tractable reformulations. Although the original idea dates back to \cite{RO-Soyster} in the 1970s, it is during the past two decades that the fundamental theory of robust optimization has been systematically developed, and the field has been even more active during the past five years. This chapter aims to help beginners get an overview of this method and understand how to apply robust optimization in practice. We provide basic models and tractable reformulations, called robust counterparts, for various robust optimization models under different assumptions on the uncertainty and the decision-making manner. The basic theory of robust optimization is provided in \cite{RO-Detail-1,RO-Detail-2}. Comprehensive surveys can be found in \cite{RO-Guide,RO-Survey}. Here we shed more light on several important topics in robust optimization.
Uncertainty sets play a decisive role in the performance of a robust solution. A larger set protects the system against a higher level of uncertainty, but increases the cost as well. However, the probability that uncertain data take their worst-case values is usually small, so the decision maker needs to make a trade-off between reliability and economy. Ambiguous chance constraints and their approximations are discussed in Chapter 2 of \cite{RO-Detail-1}, based on which the parameter in the uncertainty set can be selected. A data-driven approach is proposed in \cite{Un-Set-Data-Driven} to construct uncertainty sets from historical data and statistical hypothesis tests; the counterpart problems are shown to be tractable, and optimal solutions satisfy the constraints with a finite-sample probabilistic guarantee. The connection between uncertainty sets and coherent risk measures is revealed in \cite{Un-Set-Risk-Measure}, where it is shown that the distortion risk measure leads to a polyhedral uncertainty set. Specifically, the connection between CVaR and uncertainty sets is discussed in \cite{Un-Set-CVaR}. A reverse correspondence is reported in \cite{Risk-Measure-Un-Set}, demonstrating that robust optimization could generalize the concept of risk measures. Distributionally robust optimization integrates statistical information, the worst-case expectation, and robust probability guarantees in a holistic optimization framework, in which the uncertainty is modeled via an ambiguous probability distribution. The choice of the ambiguity set for candidate distributions affects not only the conservatism of the model, but also the existence of tractable reformulations. Various ambiguity sets have been proposed in the literature, which can be roughly classified into two categories: 1) Moment ambiguity sets. All PDFs share the same moment data, usually the first- and second-order moments, and structural properties, such as symmetry and unimodality. For example, the Markov ambiguity set contains all distributions with the same mean and support, and the worst-case expectation is shown to be equivalent to LPs \cite{Am-Set-Markov}. The Chebyshev ambiguity set is composed of all distributions with known expectation and covariance matrix and usually leads to SDP counterparts \cite{Static-DRO,Am-Set-Chebyshev-1,Am-Set-Chebyshev-2}; the Gauss ambiguity set contains all unimodal distributions in the Chebyshev ambiguity set and also gives rise to SDP reformulations \cite{Am-Set-Gauss-1}. 2) Divergence ambiguity sets. All PDFs are close to a reference distribution in terms of a specified measure. For example, the Wasserstein ambiguity set quantifies divergence via the Wasserstein metric \cite{Am-Set-Wasserstein-1,Am-Set-Wasserstein-2,Am-Set-Wasserstein-3}; the $\phi$-divergence ambiguity set \cite{Am-Set-Phi-Divergence,Am-Set-Phi-Div-1} characterizes the divergence of two probability distributions through the distance between special non-negative weights (for discrete distributions) or integrals (for continuous distributions). More information on the types of ambiguity sets and the reformulations of their distributionally robust counterparts can be found in \cite{Am-Set-Overview}.
According to the latest research progress, moment-based distributionally robust optimization is relatively mature and has been widely adopted in engineering, because the semi-infinite LP formulation and its dual for the worst-case expectation problem offer a systematic approach to analyze the impact of uncertain distributions. However, when more complicated ambiguity sets are involved, such as the Gauss ambiguity set, deriving a tractable reformulation requires more sophisticated approaches. The study of the latter category, which directly imposes uncertainty on the distributions, has attracted growing attention in the past two or three years, because it makes full use of historical data and can better capture the unique features of the uncertain factors under investigation. Data-driven robust stochastic programming, conceptually the same as distributionally robust optimization but preferred by some researchers, has been studied using $\phi$-divergence in \cite{App03-Sect4-DRSO-Phi-1,App03-Sect4-DRSO-Phi-2} and the Wasserstein metric in \cite{Am-Set-Wasserstein-1,Am-Set-Wasserstein-2,Am-Set-Wasserstein-3,App03-Sect4-DRSO-Was-1,App03-Sect4-DRSO-Was-2,App03-Sect4-DRSO-Was-3,App03-Sect4-DRSO-Was-4}, because tractable counterpart problems can be derived under such ambiguity sets. Many decision-making problems in engineering and finance require that a certain risk measure associated with random variables be limited below a threshold. However, the probability distribution of the random variables is not exactly known; therefore, the risk-limiting constraint must be able to withstand perturbations of the distribution within a reasonable range. This entails a tractable reformulation of the risk measure under distributional uncertainty. This problem has been comprehensively discussed in \cite{Am-Set-Wasserstein-3}. In more recent publications, CVaR under a moment ambiguity set with unimodality is studied in \cite{App03-Sect4-DR-Risk-1}; VaR and CVaR under a moment ambiguity set are discussed in \cite{App03-Sect4-DR-Risk-2}; the distortion risk measure under a Wasserstein ambiguity set is considered in \cite{App03-Sect4-DR-Risk-3}. In multi-stage decision making, causality is a pivotal issue for practical implementation: the wait-and-see decisions in the current stage cannot depend on information about the uncertainty in future stages. For example, in a unit commitment problem with 24 periods, the wind power output is observed period by period. It is shown in \cite{App03-Sect5-Causal-1} that the two-stage robust model in \cite{ARO-Benders-Decomposition} offers non-causal dispatch strategies, which are in fact not robust. A multi-stage causal unit commitment model is suggested in \cite{App03-Sect5-Causal-1,App03-Sect5-Causal-2} based on affine policies; causality is put into effect by imposing block-diagonal constraints on the gain matrix of the affine policy. Causality is also called non-anticipativity in some literature, such as \cite{App03-Sect5-Causal-3}, and is attracting attention from practitioners \cite{App03-Sect5-Causal-4,App03-Sect5-Causal-5}. For some other interesting topics in robust optimization, such as the connection with stochastic optimization, the connection with risk theory, and applications in engineering problems other than those in power systems, readers can refer to \cite{RO-Survey}. Nonlinear issues have been addressed in \cite{SRO-CVX-RCs,App03-Sect5-RNLP-1}. Optimization models with uncertain SOC and SDP constraints are discussed in \cite{App03-Sect5-RSDP-1,App03-Sect5-RSDP-2}.
The connection among robust optimization, data utilization, and machine learning has been reviewed in \cite{App03-Sect5-Opt-Data-ML}. \input{ap03ref} \chapter{Equilibrium Problems} \label{App-D} The concept of an equilibrium describes a state in which the system has no incentive to change. These incentives can be profit-driven, as in competitive markets, or a reflection of physical laws, such as energy flow equations. In this sense, equilibrium encompasses broader concepts than the solution of a game. Equilibrium is a fundamental notion appearing in various disciplines of economics and engineering. Identifying the equilibria allows eligible authorities to predict the system state at a future time or to design reasonable policies for regulating a system or a market. This is not to say that an equilibrium state must appear sooner or later, partly because decision makers in reality have only limited rationality and information. Nevertheless, awareness of such an equilibrium can be helpful for system design and operation. In this chapter, we restrict our attention to the field of game theory, which entails simultaneously solving multiple interactive optimization problems. We review the notions of some quintessential equilibrium problems and show how they can be solved via traditional optimization methods. These problems can be roughly categorized into two classes. The first one contains only one level: all players must make decisions simultaneously, which is referred to as a Nash-type game. The second one has two levels: decisions are made sequentially by two groups of players, called the leaders and the followers. This category is widely known as Stackelberg-type games, multi-leader-follower games, or equilibrium programs with equilibrium constraints (EPECs). Unlike a traditional mathematical programming problem, where the decision maker is unique, in an equilibrium problem or a game, multiple decision makers seek the optima of individual optimization problems parameterized in the optimal solutions of others. General notations used throughout this chapter are defined as follows; specific symbols are explained in the individual sections. In the game-theoretic language, a decision maker is called a player. Vector $x=(x_1,\cdots,x_n)$ refers to the joint decisions of all upper-level players or the so-called leaders in a bilevel setting, where $x_i$ stands for the decisions of leader $i$; $x_{-i}=(x_1,\cdots,x_{i-1},x_{i+1},\cdots,x_n)$ refers to the rivals' actions for leader $i$. Similarly, $y=(y_1,\cdots,y_m)$ refers to the joint decisions of all lower-level players or the so-called followers, where $y_j$ stands for the decisions of follower $j$; $y_{-j}=(y_1,\cdots,y_{j-1},y_{j+1},\cdots,y_m)$ refers to the rivals' actions for follower $j$. $\lambda$ and $\mu$ are Lagrangian dual multipliers associated with inequality and equality constraints. \section{Standard Nash Equilibrium Problem} \label{App-D-Sect01} After J. F. Nash published his work on the equilibrium of $n$-person non-cooperative games in the early 1950s \cite{App-04-Nash-1,App-04-Nash-2}, game theory quickly became a new branch of operations research. The Nash equilibrium problem (NEP) captures the interactive behavior of strategic players, in which each player's utility depends on the actions of the other players. Over decades of fruitful research, a variety of new concepts and algorithms for Nash equilibria have been proposed and applied to almost every area of knowledge.
This section reviews some basic concepts and the most prevalent best-response algorithms. \subsection{Formulation and Optimality Condition} \label{App-D-Sect01-01} In a standard $n$-person non-cooperative game, each player minimizes his payoff function $f_i(x_i,x_{-i})$, which depends on all players' actions. The strategy set $X_i=\{x_i \in \mathbb R^{k_i} ~|~ g_i(x_i) \le 0\}$ of player $i$ is independent of $x_{-i}$. The joint strategy set of the game is the Cartesian product of the $X_i$, i.e., $X = \prod_{i=1}^n X_i$, and $X_{-i} = \prod_{j \ne i} X_j$. Roughly speaking, the non-cooperative game is a collection of coupled optimization problems, where player $i$ chooses $x_i \in X_i$ that minimizes his payoff $f_i(x_i,x_{-i})$ given his rivals' strategies $x_{-i}$, or mathematically \begin{equation} \label{eq:App-04-NE-Problem} \left. \begin{aligned} \min_{x_i} ~~ & f_i(x_i,x_{-i}) \\ \mbox{s.t.}~~ & g_i(x_i) \le 0 : \lambda_i \end{aligned} \right\},~ i = 1,\cdots,n \end{equation} In the problem of player $i$, the decision variable is $x_i$, while $x_{-i}$ is regarded as a parameter; $\lambda_i$ is the dual variable. A Nash equilibrium is a strategy profile such that every player's strategy constitutes the best response to all other players' strategies; in other words, no player can further reduce his payoff by changing his action unilaterally. Therefore, a Nash equilibrium is a stable state which can sustain itself spontaneously. The mathematical definition is formally given below. \begin{definition} \label{df:App-04-NE} A strategy vector $x^* \in X$ is a Nash equilibrium if the condition \begin{equation} \label{eq:App-04-NE-Definition} f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}),~ \forall x_i \in X_i \end{equation} holds for all players. \end{definition} Condition (\ref{eq:App-04-NE-Definition}) naturally interprets the fact that at a Nash equilibrium, if any player chooses an alternative strategy, his payoff may grow, which is undesired. To characterize a Nash equilibrium, a usual approach is the fixed point of the best-response mapping. Let $B_i(x_{-i})$ be the set of optimal strategies of player $i$ given the strategies $x_{-i}$ of the others; then the set $B(x) = \prod_{i=1}^n B_i(x_{-i})$ is the best-response mapping of the game. It is clear that $x^*$ is a Nash equilibrium if and only if $x^* \in B(x^*)$, i.e., $x^*$ is a fixed point of $B(x)$. This fact establishes the foundation for analyzing Nash equilibria using well-developed fixed-point theory. However, conducting a fixed-point analysis usually requires the best-response mapping $B(x)$ in closed form. Moreover, to establish the existence and uniqueness of a Nash equilibrium, the mapping should be contractive \cite{App-04-Fixed-Point-1}. These strong assumptions inevitably limit the applicability of the fixed-point method. For example, in many instances the best-response mapping $B(x)$ is neither contractive nor continuous, but Nash equilibria may still exist. Another way to characterize Nash equilibria is the KKT system approach. Generally speaking, in a standard Nash game, each player faces an NLP parameterized in the rivals' strategies. If we consolidate the KKT optimality conditions of all these NLPs in (\ref{eq:App-04-NE-Problem}), we get the following KKT system \begin{equation} \label{eq:App-04-NE-KKT} \left.
\begin{gathered} \nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda_i^T \nabla_{x_i} g_i (x_i) = 0 \\ \lambda_i \ge 0,~g_i(x_i) \le 0,~ \lambda_i^T g_i (x_i) = 0 \end{gathered} \right\}~i = 1,\cdots,n \end{equation} If $x^*$ is a Nash equilibrium that satisfies (\ref{eq:App-04-NE-Definition}), and a standard constraint qualification holds for every player's problem in (\ref{eq:App-04-NE-Problem}), then $x^*$ must be a stationary point of the concatenated KKT system (\ref{eq:App-04-NE-KKT}) \cite{App-04-GNEP-KKT-1}; and vice versa: if all problems in (\ref{eq:App-04-NE-Problem}) meet a standard constraint qualification, and a point $x^*$ together with a proper vector of dual multipliers $\lambda = (\lambda_1,\cdots,\lambda_n)$ solves KKT system (\ref{eq:App-04-NE-KKT}), then $x^*$ is also a Nash equilibrium that satisfies (\ref{eq:App-04-NE-Definition}). Problem (\ref{eq:App-04-NE-KKT}) is an NCP and constitutes the optimality condition of the Nash equilibrium. It is a natural attempt to retrieve an equilibrium by solving NCP (\ref{eq:App-04-NE-KKT}) directly, without deploying an iterative algorithm that may suffer from divergence. To obviate the computational challenges brought by the complementarity and slackness constraints in KKT system (\ref{eq:App-04-NE-KKT}), a merit function approach and an interior-point method are comprehensively discussed in \cite{App-04-GNEP-KKT-1}. \subsection{Variational Inequality Formulation} \label{App-D-Sect01-02} An alternative perspective for studying the NEP is to formulate it as a variational inequality (VI) problem. This approach is pursued in \cite{App-04-GNEP-VI-1}. The advantage of the variational inequality approach is that it permits easy access to existence and uniqueness results without the best-response mapping. From a computational point of view, it naturally leads to easily implementable algorithms along with provable convergence performance. Given a closed and convex set $X \subseteq \mathbb R^n$ and a mapping $F:X \to \mathbb R^n$, a variational inequality problem, denoted by VI($X,F$), is to determine a point $x^* \in X$ satisfying \cite{App-04-VI-1} \begin{equation} \label{eq:App-04-VI} (x-x^*)^T F(x^*) \ge 0,~ \forall x \in X \end{equation} To see the connection between a VI problem and a traditional convex optimization problem that seeks a minimum of a convex function $f(x)$ over a convex set $X$, assume that the optimal solution is $x^*$; then the feasible region must not enter the half-space in which $f(x)$ decreases around $x^*$; geometrically, the line segment connecting any $x \in X$ with $x^*$ must form a non-obtuse angle with the gradient of $f$ at $x^*$, which can be described mathematically as $(x-x^*)^T \nabla f(x^*) \ge 0,~ \forall x \in X$. This condition can be concisely expressed by VI($X,\nabla f$) \cite{App-04-VI-2}. However, when the Jacobian matrix of $F$ is not symmetric, $F$ cannot be written as the gradient of a scalar function, and hence the variational inequality problem encompasses broader classes of problems than traditional mathematical programs. For example, when $X = \mathbb R^n$, problem (\ref{eq:App-04-VI}) degenerates into a system of equations $F(x^*)=0$; when $X = \mathbb R^n_+$, problem (\ref{eq:App-04-VI}) comes down to an NCP $0 \le x^* \bot F(x^*) \ge 0$. To see the latter case, note that $x^* \ge 0$ because it belongs to $X$; if any element of $F(x^*)$ were negative, say the first element $[F(x^*)]_1 < 0$, we could let $x_1 = x^*_1 +1$ and $x_i=x^*_i$, $i=2,\cdots,n$; then $(x-x^*)^T F(x^*) = [F(x^*)]_1 < 0$, which contradicts (\ref{eq:App-04-VI}).
Hence $F(x^*) \ge 0$ must hold. Letting $x=0$ in (\ref{eq:App-04-VI}), we have $(x^*)^T F(x^*) \le 0$. Because $x^* \ge 0$ and $F(x^*) \ge 0$, there must be $(x^*)^T F(x^*) = 0$, resulting in the target NCP. The monotonicity of $F$ plays a central role in the theoretical analysis of VI problems, just like the role of convexity in mathematical programming. It has a close relationship with the Jacobian matrix $\nabla F$ \cite{App-04-GNEP-VI-1,App-04-VI-3}: \begin{svgraybox} \normalsize \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{1em} \centering \begin{tabular}{lll} $F(x)$ is monotone on $X$ & $\Leftrightarrow$ & $\nabla F(x) \succeq 0$, $\forall x \in X$ \\ $F(x)$ is strictly monotone on $X$ & $\Leftarrow$ & $\nabla F(x) \succ 0$, $\forall x \in X$ \\ $F(x)$ is strongly monotone on $X$ & $\Leftrightarrow$ & $\nabla F(x) -c_{m}I \succeq 0$, $\forall x \in X$ \\ \end{tabular} \end{svgraybox} \noindent where $c_m$ is a strictly positive constant. As a correspondence to convexity, a differentiable function $f$ is convex (strictly convex, strongly convex) on $X$ if and only if $\nabla f$ is monotone (strictly monotone, strongly monotone) on $X$. Conceptually, monotonicity (convexity) is the weakest property, since the matrix $\nabla F(x)$ can have zero eigenvalues; strict monotonicity (strict convexity) is stronger, as all eigenvalues of the matrix $\nabla F(x)$ are strictly positive; strong monotonicity (strong convexity) is the strongest, because the smallest eigenvalue of the matrix $\nabla F(x)$ should be greater than a given positive number. Intuitively, a strongly convex function must be more convex than some convex quadratic function; for example, $f(x)=x^2$ is strongly convex on $\mathbb R$, while $f(x)=1/x$ is convex on $\mathbb R_{++}$ and strongly convex on $(0,1]$. To formulate an NEP as a VI problem and establish existence and uniqueness results for Nash equilibria, we list some assumptions on the convexity and smoothness of each player's problem. \begin{assumption} \label{ap:App-04-NE-Convex-Smooth} \cite{App-04-GNEP-VI-1} 1) The strategy set $X_i$ is non-empty, closed, and convex; 2) Function $f_i(x_i,x_{-i})$ is convex in $x_i \in X_i$ for fixed $x_{-i} \in X_{-i}$; 3) Function $f_i(x_i,x_{-i})$ is continuously differentiable in $x_i \in X_i$ for fixed $x_{-i} \in X_{-i}$; 4) Function $f_i(x_i,x_{-i})$ is twice continuously differentiable in $x \in X$ with bounded second derivatives. \end{assumption} \begin{proposition} \label{pr:App-04-NE-VI} \cite{App-04-GNEP-VI-1} In a standard NEP NE($X,f$), where $f = (f_1,\cdots,f_n)$, if conditions 1)-3) in Assumption \ref{ap:App-04-NE-Convex-Smooth} are met, then the game is equivalent to a variational inequality problem VI($X,F$) with \begin{equation*} X = X_1 \times \cdots \times X_n \end{equation*} and \begin{equation*} F(x) = (\nabla_{x_1} f_1 (x),\cdots,\nabla_{x_n} f_n (x)) \end{equation*} \end{proposition} In the VI problem corresponding to a traditional mathematical program, the Jacobian matrix $\nabla F$ is symmetric, because it is the Hessian matrix of a scalar function. In Proposition \ref{pr:App-04-NE-VI}, however, the Jacobian matrix $\nabla F$ of an NEP is generally non-symmetric. Building upon the VI reformulation, the standard results on solution properties of VI problems \cite{App-04-VI-1} can be extended to standard NEPs.
\begin{proposition} \label{pr:App-04-NE-Property} Given an NEP NE($X,f$) for which all conditions in Assumption \ref{ap:App-04-NE-Convex-Smooth} are met, the following statements hold: 1) If $F(x)$ is strictly monotone, then the game has at most one Nash equilibrium. 2) If $F(x)$ is strongly monotone, then the game has a unique Nash equilibrium. \end{proposition} Some sufficient conditions for $F(x)$ to be (strictly, strongly) monotone are given in \cite{App-04-GNEP-VI-1}. It should be pointed out that the equilibrium concept in the sense of Proposition \ref{pr:App-04-NE-Property} is termed the pure-strategy Nash equilibrium, so as to distinguish it from the mixed-strategy Nash equilibrium, which will appear later on. \subsection{Best Response Algorithms} \label{App-D-Sect01-03} A major benefit of the VI reformulation is that it leads to easily implementable solution algorithms. Here we list two of them. Readers interested in the proofs of their convergence properties can consult \cite{App-04-GNEP-VI-1}. \vspace{12pt} {\noindent \bf 1. Algorithms for strongly convex cases} The first algorithm is a totally asynchronous iterative one, in which players may update their strategies with different frequencies. Let $T=\{0,1,2,\cdots\}$ be the set of iteration indices, and $T_i \subseteq T$ the set of steps at which player $i$ updates his own strategy $x_i$. The notation $x^k_i$ implies that at steps $k \notin T_i$, $x^k_i$ remains unchanged. Let $t^i_j(k)$ be the latest step at which the strategy of player $j$ was received by player $i$ before step $k$. Therefore, if player $i$ updates his strategy at step $k$, he uses the following strategy profile offered by the other players: \begin{equation} \label{eq:App-04-NE-Strategy-Update} x^{t^i(k)}_{-i} = \left(x^{t^i_1(k)}_1,\cdots,x^{t^i_{i-1}(k)}_{i-1}, x^{t^i_{i+1}(k)}_{i+1},\cdots,x^{t^i_n(k)}_{n} \right) \end{equation} Using the above definitions, the totally asynchronous iterative algorithm is summarized in Algorithm \ref{Ag:App-04-NE-Asy-BR}. Some technical conditions that the schedules $T_i$ and $t^i_j(k)$ should satisfy in order to be implementable in practice are discussed in \cite{App-04-Update-Sequence-1,App-04-Update-Sequence-2}; they are assumed to be satisfied without particular mention. \begin{algorithm}[!htp] \normalsize \caption{\bf : Asynchronous best-response algorithm} \begin{algorithmic}[1] \STATE Choose a convergence tolerance $\varepsilon>0$ and a feasible initial point $x^0 \in X$; the iteration index is $k=0$; \STATE For player $i=1,\cdots,n$, update the strategy $x^{k+1}_i$ as \begin{equation} \label{eq:App-04-NE-Asy-BR} x^{k+1}_i = \begin{cases} x^*_i \in \arg \min_{x_i} \left\{ f_i\left( x_i,x^{t^i(k)}_{-i} \right) ~\middle|~ x_i \in X_i \right\} & \mbox{if } k \in T_i \\ x^k_i & \mbox{otherwise} \end{cases} \end{equation} \STATE If $\| x^{k+1} - x^k \|_2 \le \varepsilon$, terminate and report $x^{k+1}$ as the Nash equilibrium; otherwise, update $k \leftarrow k+1$, and go to step 2. \end{algorithmic} \label{Ag:App-04-NE-Asy-BR} \end{algorithm} A sufficient condition which guarantees the convergence of Algorithm \ref{Ag:App-04-NE-Asy-BR} is provided in \cite{App-04-GNEP-VI-1}. Roughly speaking, Algorithm \ref{Ag:App-04-NE-Asy-BR} converges if $f_i(x)$ is strongly convex in $x_i$. However, this is a strong assumption, which fails even if there is only one point at which the partial Hessian matrix $\nabla^2_{x_i} f_i(x)$ of player $i$ is singular.
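As a toy illustration of the best-response scheme, the following NumPy sketch runs the synchronous (simultaneous-update) special case of Algorithm \ref{Ag:App-04-NE-Asy-BR} on a hypothetical two-player quadratic game with box strategy sets; the data are chosen so that $F$ is strongly monotone and each best response is a simple projection.

\begin{verbatim}
# Synchronous best-response iteration for a two-player quadratic game.
# Player i minimizes f_i = 0.5*a_i*x_i^2 + b_i*x_i*x_j + c_i*x_i on [0, 10].
import numpy as np

a = np.array([2.0, 2.0])   # curvatures
b = np.array([0.5, 0.5])   # coupling; [[a1, b1], [b2, a2]] is positive definite
c = np.array([-4.0, -3.0])

def best_response(i, x_other):
    # unconstrained minimizer of f_i, projected onto the interval [0, 10]
    return np.clip(-(c[i] + b[i] * x_other) / a[i], 0.0, 10.0)

x = np.zeros(2)
for k in range(100):
    x_new = np.array([best_response(0, x[1]), best_response(1, x[0])])
    if np.linalg.norm(x_new - x) <= 1e-10:
        break
    x = x_new
print(k, x)   # converges to the unique Nash equilibrium (26/15, 16/15)
\end{verbatim}

Updating both players simultaneously as above corresponds to the Jacobi scheme mentioned next, while updating them one after another yields the Gauss-Seidel variant.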
Algorithm \ref{Ag:App-04-NE-Asy-BR} reduces to some classic algorithms by enforcing a special updating procedure, i.e., a particular selection of $T_i$ and $t^i_j(k)$. For example, if players update their strategies simultaneously (sequentially), Algorithm \ref{Ag:App-04-NE-Asy-BR} becomes the Jacobi (Gauss-Seidel) type iterative scheme. Interestingly, the asynchronous best-response algorithm is robust against missing or delayed data, and is guaranteed to find the unique Nash equilibrium. This feature greatly relaxes the requirement of data synchronization, simplifies the design of communication systems, and makes this class of algorithms very appealing in distributed system operation. \vspace{12pt} {\noindent \bf 2. Algorithms for convex cases} To relax the strong monotonicity assumption on $F(x)$, a second algorithm has been proposed in \cite{App-04-GNEP-VI-1}, which uses only the monotonicity property and is summarized below. Algorithm \ref{Ag:App-04-NE-Proximal} converges to a Nash equilibrium if each player's optimization problem is convex (or $F(x)$ is monotone), which significantly improves its applicability. \begin{algorithm}[!htp] \normalsize \caption{\bf : Proximal Decomposition Algorithm} \begin{algorithmic}[1] \STATE Given $\{ \rho_n\}_{n=0}^\infty$, $\varepsilon >0$, and $\tau > 0$, choose a feasible initial point $x^0 \in X$; \STATE Find an equilibrium $z^0$ of the following NEP using Algorithm \ref{Ag:App-04-NE-Asy-BR} \begin{equation} \label{eq:App-04-NE-Proximal} \left. \begin{aligned} \min_{x_i} ~~ & f_i( x_i,x_{-i}) + \tau \| x_i - x^0_i \|^2_2 \\ \mbox{s.t.}~~ & x_i \in X_i \end{aligned} \right\}, i=1,\cdots,n \end{equation} \STATE If $\| z^0 - x^0 \|_2 \le \varepsilon$, terminate and report $x^0$ as the Nash equilibrium; otherwise, update $x^0 \leftarrow (1-\rho_n) x^0 + \rho_n z^0$, and go to step 2. \end{algorithmic} \label{Ag:App-04-NE-Proximal} \end{algorithm} Algorithm \ref{Ag:App-04-NE-Proximal} is a double-loop method: the inner loop identifies a Nash equilibrium of the regularized game (\ref{eq:App-04-NE-Proximal}), in which $x^0$ is a parameter updated in each outer iteration, and the outer loop updates $x^0$ by selecting a new point along the line connecting $x^0$ and $z^0$. Notice that in step 2, as long as $\tau$ is large enough, the Hessian matrix $\nabla^2_{x_i} f_i (x) + 2 \tau I$ must be positive definite, and thus the best-response algorithm applied to (\ref{eq:App-04-NE-Proximal}) is guaranteed to converge to the unique Nash equilibrium of the regularized game. See \cite{App-04-GNEP-VI-1} for more details about parameter selection. The penalty term $\tau \| x_i - x^0_i \|^2_2$ limits the change of optimal strategies between two consecutive iterations and can be interpreted as a damping factor that attenuates possible oscillations during the computation. It is worth mentioning that the penalty parameter $\tau$ significantly impacts the convergence rate of Algorithm \ref{Ag:App-04-NE-Proximal} and should be carefully selected: if it is too small, the damping effect of the penalty term is limited, and oscillations may still take place; if it is too large, the increment of $x$ in each step is very small, and Algorithm \ref{Ag:App-04-NE-Proximal} may suffer from a slow convergence rate. The optimal value of $\tau$ is problem-dependent, and there is no universal way to determine its best value. Recently, single-loop distributed algorithms for monotone Nash games have been proposed in \cite{App-04-GNEP-ITR}, which the authors believe to be promising in practical applications.
In these two schemes, the regularization parameter is updated immediately after each iteration, rather than only after the regularized problem has been solved to sufficient accuracy, and players can select their parameters independently. \subsection{Nash Equilibrium of Matrix Games} \label{App-D-Sect01-04} As explained before, not all Nash games have an equilibrium, especially when the strategy sets or the payoff functions are non-convex or discrete. To widen the equilibrium notion and reveal deeper insights into the behavior of players in such instances, it is instructive to revisit some simple games, called matrix games, which are the primary research object of game theorists. A bimatrix game refers to a matrix game involving two players, P1 and P2. The numbers of possible strategies of P1 and P2 are $m$ and $n$, respectively. $A = \{a_{ij}\} \in \mathbb M^{m \times n}$ is the payoff matrix of P1: when P1 chooses strategy $i$ and P2 selects strategy $j$, the payoff of P1 is $a_{ij}$. The payoff matrix $B \in \mathbb M^{m \times n}$ of P2 is defined in the same way. In a matrix game, each player seeks a probability distribution over his actions such that his expected payoff is minimized. Let $x_i$ ($y_j$) be the probability that P1 (P2) will use strategy $i$ ($j$); the vectors $x=[x_1,\cdots,x_m]^T$ and $y = [y_1,\cdots,y_n]^T$ are called mixed strategies, and clearly \begin{equation} \label{eq:App-04-MG-Mixed-Strategy} \begin{gathered} x \ge 0,~ \sum_{i=1}^m x_i = 1 \quad \mbox{or} \quad x \in {\rm \Delta}_m\\ y \ge 0,~ \sum_{j=1}^n y_j = 1 \quad \mbox{or} \quad y \in {\rm \Delta}_n\\ \end{gathered} \end{equation} where ${\rm \Delta}_m$ and ${\rm \Delta}_n$ are probability simplices in $\mathbb R^m$ and $\mathbb R^n$. \vspace{12pt} {\noindent \bf 1. Two-person zero-sum games} The zero-sum game represents a totally competitive situation: P1's gain is P2's loss, so the sum of their payoff matrices is $A+B = 0$, as the name suggests. This type of game has been well studied in a vast literature since von Neumann established the famous minimax theorem in 1928. The game is revisited from a mathematical programming perspective in \cite{App-04-Minimax-LP}. The proposed linear programming method is especially powerful for instances with a high-dimensional payoff matrix. Next, we briefly introduce this method. Let us begin with a payoff matrix $A = \{a_{ij}\}$, $a_{ij} > 0$, $\forall i,j$, with strictly positive entries (otherwise, we can add a constant to every entry such that the smallest entry becomes positive; the equilibrium strategy remains the same). The expected payoff of P1 is given by \begin{equation} \label{eq:App-04-ZSG-Payoff} V_A = \sum_{i=1}^m \sum_{j=1}^n x_i a_{ij} y_j = x^T A y \end{equation} which must be positive because of the element-wise positivity assumption on $A$. Since $B=-A$ and minimizing $-x^T A y$ is equivalent to maximizing $x^T A y$, the two-person zero-sum game has the min-max form \begin{equation} \label{eq:App-04-ZSG-1} \min_{x \in {\rm \Delta}_m} \max_{y \in {\rm \Delta}_n} ~~ x^T A y \end{equation} or \begin{equation} \label{eq:App-04-ZSG-2} \max_{y \in {\rm \Delta}_n} \min_{x \in {\rm \Delta}_m} ~~ x^T A y \end{equation} The solution to the two-person zero-sum matrix game (\ref{eq:App-04-ZSG-1}) or (\ref{eq:App-04-ZSG-2}) is called a mixed-strategy Nash equilibrium, or the saddle point of a min-max problem.
It satisfies \begin{gather} (x^*)^T A y^* \le x^T A y^* ,~ \forall x \in {\rm \Delta}_m \notag \\ (x^*)^T A y^* \ge (x^*)^T A y ,~ \forall y \in {\rm \Delta}_n \notag \end{gather} To solve this game, consider (\ref{eq:App-04-ZSG-1}) in the following format \begin{equation} \label{eq:App-04-ZSG-3} \min_x \left\{ v_1(x) ~\middle|~ x \in {\rm \Delta}_m \right\} \end{equation} where $v_1(x)$ is the optimal value function of the problem faced by P2 for a fixed strategy $x$ of P1 \begin{equation} v_1(x) = \max_y \left\{ x^T A y ~\middle|~ y \in {\rm \Delta}_n \right\}\notag \end{equation} In view of the feasible region defined in (\ref{eq:App-04-MG-Mixed-Strategy}), $v_1(x)$ is equal to the maximal element of the vector $x^T A$, which is strictly positive, and the inequality \begin{equation} A^T x \le {\bf 1}^n v_1(x) \notag \end{equation} holds. Furthermore, introducing the normalized vector $\bar x = x/v_1(x)$, we have \begin{equation} \begin{gathered} \bar x \ge 0, ~~ A^T \bar x \le {\bf 1}^n \\ v_1(x) = ({\bar x}^T {\bf 1}^m)^{-1} \end{gathered} \notag \end{equation} Taking these relations into account, problem (\ref{eq:App-04-ZSG-3}) becomes \begin{equation} \label{eq:App-04-ZSG-4} \begin{aligned} \min_{\bar x} ~~ & ({\bar x}^T {\bf 1}^m)^{-1} \\ \mbox{s.t.}~~ & A^T \bar x \le {\bf 1}^n \\ & \bar x \ge 0 \end{aligned} \end{equation} Because the objective is strictly positive and monotone, the optimal solution of (\ref{eq:App-04-ZSG-4}) remains unchanged if we choose instead to maximize ${\bar x}^T {\bf 1}^m$ under the same constraints, giving rise to the following LP \begin{equation} \label{eq:App-04-ZSG-5} \begin{aligned} \max_{\bar x} ~~ & {\bar x}^T {\bf 1}^m \\ \mbox{s.t.}~~ & A^T \bar x \le {\bf 1}^n \\ & \bar x \ge 0 \end{aligned} \end{equation} Let $\bar x^*$ and $\bar v^*_1$ be the optimal solution and optimal value of LP (\ref{eq:App-04-ZSG-5}). According to the variable transformation above, the optimal expected payoff $v^*_1$ and the optimal mixed strategy $x^*$ of P1 in this game are given by \begin{equation} \label{eq:App-04-ZSG-6} v^*_1 = 1 / \bar v^*_1,~~ x^* = \bar x^* / \bar v^*_1 \end{equation} Considering (\ref{eq:App-04-ZSG-2}) in the same way, we obtain the following LP for P2: \begin{equation} \label{eq:App-04-ZSG-7} \begin{aligned} \min_{\bar y} ~~ & {\bar y}^T {\bf 1}^n \\ \mbox{s.t.}~~ & A \bar y \ge {\bf 1}^m \\ & \bar y \ge 0 \end{aligned} \end{equation} Denote by $\bar y^*$ and $\bar v^*_2$ the optimal solution and optimal value of LP (\ref{eq:App-04-ZSG-7}); then the optimal expected payoff $v^*_2$ and the optimal mixed strategy $y^*$ of P2 in this game are \begin{equation} \label{eq:App-04-ZSG-8} v^*_2 = 1 / \bar v^*_2,~~ y^* = \bar y^* / \bar v^*_2 \end{equation} In summary, the mixed-strategy Nash equilibrium of the two-person zero-sum matrix game (\ref{eq:App-04-ZSG-1}) is $(x^*,y^*)$, and the payoff of P1 is $v^*_1$. Interestingly, we notice that problems (\ref{eq:App-04-ZSG-5}) and (\ref{eq:App-04-ZSG-7}) constitute a pair of dual LPs, implying that their optimal values are equal and that the optimal solution $y^*$ in (\ref{eq:App-04-ZSG-8}) also solves the inner LP of (\ref{eq:App-04-ZSG-1}). This observation leads to two important conclusions: 1) The Nash equilibrium of a two-person zero-sum matrix game, or the saddle point, can be computed by solving a pair of dual LPs, as illustrated by the sketch following these conclusions.
In fact, if one player's strategy, say $x^*$, has been obtained from (\ref{eq:App-04-ZSG-5}) and (\ref{eq:App-04-ZSG-6}), the rival's strategy can be retrieved by solving (\ref{eq:App-04-ZSG-1}) with $x = x^*$. 2) The decision sequence of a two-person zero-sum game is interchangeable without influencing the saddle point.
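The following CVXPY sketch implements the pair of LPs (\ref{eq:App-04-ZSG-5}) and (\ref{eq:App-04-ZSG-7}) for a small hypothetical payoff matrix and recovers the mixed strategies via (\ref{eq:App-04-ZSG-6}) and (\ref{eq:App-04-ZSG-8}).

\begin{verbatim}
# Mixed-strategy saddle point of a zero-sum matrix game via dual LPs.
import cvxpy as cp
import numpy as np

A = np.array([[3.0, 1.0, 4.0],
              [2.0, 5.0, 1.0]])   # payoff matrix of P1 (positive entries)

# LP (ZSG-5):  max 1'xbar  s.t.  A'xbar <= 1, xbar >= 0
xbar = cp.Variable(A.shape[0], nonneg=True)
p1 = cp.Problem(cp.Maximize(cp.sum(xbar)), [A.T @ xbar <= 1])
p1.solve()
v1 = 1.0 / p1.value               # optimal expected payoff of P1, (ZSG-6)
x_star = xbar.value * v1          # optimal mixed strategy of P1

# LP (ZSG-7):  min 1'ybar  s.t.  A ybar >= 1, ybar >= 0
ybar = cp.Variable(A.shape[1], nonneg=True)
p2 = cp.Problem(cp.Minimize(cp.sum(ybar)), [A @ ybar >= 1])
p2.solve()
v2 = 1.0 / p2.value               # equals v1 by LP duality, (ZSG-8)
y_star = ybar.value * v2

print(x_star, y_star, v1, v2)
\end{verbatim}

By LP duality, the two printed game values coincide, which is consistent with conclusion 2): the order of play does not affect the saddle point.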
\vspace{12pt} {\noindent \bf 2. General bimatrix games} In more general two-person matrix games, the sum of the payoff matrices is not equal to zero, and each player wishes to minimize his own expected payoff, taking the other player's strategy as given. In the setting of mixed strategies, players select probability distributions over their available strategies rather than a single action (a pure strategy), and the respective optimization problems are \begin{equation} \label{eq:App-04-BMG-1} \begin{aligned} \min_x ~~ & x^T A y \\ \mbox{s.t.}~~ & x^T {\bf 1}^m =1 : \lambda \\ & x \ge 0 \end{aligned} \end{equation} \begin{equation} \label{eq:App-04-BMG-2} \begin{aligned} \min_y ~~ & x^T B y \\ \mbox{s.t.}~~ & y^T {\bf 1}^n =1 : \gamma \\ & y \ge 0 \end{aligned} \end{equation} The pair of probability distributions $(x^*,y^*)$ is called a mixed-strategy Nash equilibrium if \begin{gather} (x^*)^T A y^* \le x^T A y^* ,~ \forall x \in {\rm \Delta}_m \notag \\ (x^*)^T B y^* \le (x^*)^T B y ,~ \forall y \in {\rm \Delta}_n \notag \end{gather} Unlike the zero-sum case, there is no equivalent LP from which the Nash equilibrium can be extracted. Applying the KKT system approach, we write out the KKT condition of (\ref{eq:App-04-BMG-1}) \begin{gather*} A y - \lambda {\bf 1}^m - \mu = 0 \\ 0 \le \mu \bot x \ge 0 \\ x^T {\bf 1}^m = 1 \end{gather*} where $\mu$ is the dual variable associated with the non-negativity constraint and can be eliminated from the first equality. Concatenating the KKT conditions of LPs (\ref{eq:App-04-BMG-1}) and (\ref{eq:App-04-BMG-2}) gives \begin{equation} \label{eq:App-04-BMG-3} \begin{gathered} 0 \le A y - \lambda {\bf 1}^m \bot x \ge 0,~ x^T {\bf 1}^m = 1 \\ 0 \le B^T x - \gamma {\bf 1}^n \bot y \ge 0,~ y^T {\bf 1}^n = 1 \\ \end{gathered} \end{equation} Complementarity system (\ref{eq:App-04-BMG-3}) can be solved by setting $\lambda = \gamma = 1$, omitting the equality constraints, and recovering them in a later normalization step; i.e., we first solve \begin{equation} \label{eq:App-04-BMG-4} \begin{gathered} 0 \le A y - {\bf 1}^m \bot x \ge 0 \\ 0 \le B^T x - {\bf 1}^n \bot y \ge 0 \\ \end{gathered} \end{equation} Suppose that the solution is ($\bar x, \bar y$); then the Nash equilibrium is \begin{equation} \label{eq:App-04-BMG-5} \begin{gathered} x^* = \bar x / \bar x^T {\bf 1}^m \\ y^* = \bar y / \bar y^T {\bf 1}^n \end{gathered} \end{equation} and the corresponding multipliers are derived from (\ref{eq:App-04-BMG-3}) as \begin{equation} \label{eq:App-04-BMG-6} \begin{gathered} \lambda^* = (x^*)^T A y^* \\ \gamma^* = (x^*)^T B y^* \\ \end{gathered} \end{equation} Conversely, if ($x^*, y^*$) is a Nash equilibrium and solves (\ref{eq:App-04-BMG-3}) with multipliers ($\lambda^*,\gamma^*$), we can observe that ($x^*/\gamma^*,y^*/\lambda^*$) solves (\ref{eq:App-04-BMG-4}); therefore \begin{equation} \label{eq:App-04-BMG-7} \begin{gathered} \bar x = \dfrac{x^*}{(x^*)^T B y^*} \\ \bar y = \dfrac{y^*}{(x^*)^T A y^*} \end{gathered} \end{equation} Now we can see that identifying a mixed-strategy Nash equilibrium of a bimatrix game entails solving the KKT system (\ref{eq:App-04-BMG-3}) or (\ref{eq:App-04-BMG-4}), which is called a linear complementarity problem (LCP). A classical algorithm for LCPs is Lemke's method \cite{App-04-LCP-Lemke-1,App-04-LCP-Lemke-2}. Another systematic way to solve an LCP is to reformulate it as an MILP using the method described in Appendix \ref{App-B-Sect03-05}. Nonetheless, there are more tailored MILP models for LCPs, which will be detailed in Sect. \ref{App-D-Sect04-02}. Unlike the pure-strategy Nash equilibrium, whose existence relies on certain convexity assumptions, a mixed-strategy Nash equilibrium of a matrix game, which is a discrete probability distribution over the available actions, always exists \cite{App-04-Nash-2}. If a game with two players has no pure-strategy Nash equilibrium, and each player can choose actions from a finite strategy set, we can calculate the payoff matrices and then the mixed-strategy Nash equilibrium, which gives the likelihood that each player adopts each corresponding pure strategy. \subsection{Potential Games} \label{App-D-Sect01-05} Although a direct certification of the existence and uniqueness of a pure-strategy Nash equilibrium for a general game model is non-trivial, when the game possesses some special structure, such a certification becomes axiomatic. One such guarantee is the existence of an exact potential function, and the associated problem is known as a potential game \cite{App-04-Potential-Game-1}. Four types of potential games are listed in \cite{App-04-Potential-Game-1}, categorized by the type of the potential function. Other extensions of the potential game have been studied as well. For a complete introduction, we recommend \cite{App-04-Potential-Game-2}.
\begin{definition} \label{df:App-04-PG-1} (Exact potential game) A game is an exact potential game if there is a potential function $U(x)$ such that \begin{equation} \label{eq:App-04-PG-1} \begin{gathered} f_i(x_i,x_{-i}) - f_i(y_i,x_{-i}) = U(x_i,x_{-i}) - U(y_i,x_{-i}) \\ \forall x_i,y_i \in X_i,~ \forall x_{-i} \in X_{-i},~ i = 1,\cdots,n \end{gathered} \end{equation} \end{definition} In an exact potential game, the change in the payoff of any single player due to a unilateral strategy deviation leads to the same amount of change in the potential function. Among the variations of potential games defined by relaxing the strict equality (\ref{eq:App-04-PG-1}), the exact potential game is the most fundamental one and has attracted the majority of research interest. Throughout this section, the term potential game refers to the exact one without particular mention. The condition for a game to be a potential game and the method for constructing the potential function are given in the following proposition. \begin{proposition} \label{pr:App-04-PG-1} \cite{App-04-Potential-Game-1} Suppose the payoff functions $f_i$, $i=1,\cdots,n$ of a game are twice continuously differentiable; then a potential function exists if and only if \begin{equation} \label{eq:App-04-PG-2} \dfrac{\partial^2 f_i}{\partial x_i \partial x_j} = \dfrac{\partial^2 f_j}{\partial x_i \partial x_j},~~ \forall i,j = 1,\cdots,n \end{equation} and the potential function can be constructed as \begin{equation} \label{eq:App-04-PG-3} U(v) - U(z) = \sum_{i=1}^n \int_0^1 (x^\prime_i(t))^T \dfrac{\partial f_i}{\partial x_i} (x(t)) d t \end{equation} where $x(t): [0,1] \to X$ is a continuously differentiable path in $X$ connecting strategy profile $v$ and a fixed strategy profile $z$, such that $x(0)=z$, $x(1)=v$. \end{proposition} To obtain (\ref{eq:App-04-PG-3}), note first that a direct consequence of (\ref{eq:App-04-PG-1}) is \begin{equation} \label{eq:App-04-PG-4} \dfrac{\partial f_i}{\partial x_i} = \dfrac{\partial U}{\partial x_i},~ i=1,\cdots,n \end{equation} For any smooth curve $C(t): [0,1] \to X$ and any function $U$ with a continuous gradient $\nabla U$, the gradient theorem of calculus tells us \begin{equation} U(C_{end}) - U(C_{start}) = \int_C \nabla U(s) d s \notag \end{equation} where the vector $s$ represents points along the integration trajectory $C$, parameterized by a scalar variable. Introducing $s=x(t)$: when $t=0$, $s=C_{start}=z$; when $t=1$, $s=C_{end}=v$. By the chain rule, $ds = x^\prime(t) dt$, and hence we get \begin{equation} \begin{aligned} U(v) - U(z) & = \int_0^1 (x^\prime(t))^T \nabla U(x(t)) d t \\ & = \sum_{i=1}^n \int_0^1 (x_i^\prime(t))^T \dfrac{\partial U}{\partial x_i} (x(t)) d t \end{aligned} \notag \end{equation} Then, if $U$ is a potential function, substituting (\ref{eq:App-04-PG-4}) into the above equation gives equation (\ref{eq:App-04-PG-3}). In summary, for a standard NEP with continuous payoff functions, we can check whether it is a potential game, and further construct its potential function if the right-hand side of (\ref{eq:App-04-PG-3}) has a closed-form expression. Nevertheless, in some particular cases, the potential function can be read off without calculating an integral. 1. The payoff functions of the game can be decomposed as \begin{equation} f_i(x_i,x_{-i}) = p_i(x_i) + Q(x),~ \forall i = 1,\cdots,n \notag \end{equation} where the first term depends only on $x_i$, and the second term, which couples all players' strategies and appears in every payoff function, is identical across players.
In this circumstance, the potential function is immediately given by \begin{equation} U(x) = Q(x) + \sum_{i=1}^n p_i(x_i) \notag \end{equation} which can be verified through the definition in (\ref{eq:App-04-PG-1}). 2. The payoff functions of the game can be decomposed as \begin{equation} f_i(x_i,x_{-i}) = p_i(x_{-i}) + Q(x),~ \forall i = 1,\cdots,n \notag \end{equation} where the first term depends only on the joint actions $x_{-i}$ of the opponents, and the second term is common and identical for all players. In this circumstance, the potential function is $Q(x)$. This is easy to understand, because $x_{-i}$ is constant in the decision-making problem of player $i$, and thus the first term $p_i(x_{-i})$ can be omitted from the objective function. 3. The payoff function of each player has the form \begin{equation} f(x_i,x_{-i}) = \left( a+b \sum_{j=1}^n x_j \right) x_i + c_i(x_i),~ i=1,\cdots,n \notag \end{equation} Obviously, \begin{equation} \dfrac{\partial^2 f_i}{\partial x_i \partial x_j} = \dfrac{\partial^2 f_j}{\partial x_i \partial x_j} = b \notag \end{equation} therefore, a potential function exists and is given by \begin{equation} U(x) = a \sum_{i=1}^n x_i + \dfrac{b}{2} \sum_{i=1}^n \sum_{j \ne i} x_ix_j + b \sum_{i=1}^n x^2_i + \sum_{i=1}^n c_i(x_i) \notag \end{equation} The potential function provides a convenient way to analyze the Nash equilibria of potential games, since it aligns with the incentives of all players. \begin{proposition} \label{pr:App-04-PG-2} \cite{App-04-Potential-Game-2} Suppose game $G_1$ is a potential game with potential function $U(x)$, and $G_2$ is another game with the same number of players whose payoff functions are $F_1(x_1,x_{-1}) = \cdots = F_n(x_n,x_{-n}) = U(x)$. Then $G_1$ and $G_2$ have the same set of Nash equilibria. \end{proposition} This is easy to understand, because an equilibrium of $G_1$ satisfies \begin{equation} f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}), \forall x_i \in X_i,~ i = 1,\cdots,n \notag \end{equation} By the definition of the potential function in (\ref{eq:App-04-PG-1}), this gives \begin{equation} \label{eq:App-04-PG-5} U(x^*_i,x^*_{-i}) \le U(x_i,x^*_{-i}), \forall x_i \in X_i,~ i = 1,\cdots,n \end{equation} So any equilibrium of $G_1$ is an equilibrium of $G_2$; similarly, the reverse holds, too. In Proposition \ref{pr:App-04-PG-2}, the identical-interest game $G_2$ is closely related to an optimization problem: the potential function builds a bridge between an NEP and a mathematical programming problem. Let $X = X_1 \times \cdots \times X_n$. If $x^*$ is a global minimizer of $U$ over $X$, i.e., \begin{equation} \label{eq:App-04-PG-6} U(x^*) \le U(x), \forall x \in X \end{equation} then (\ref{eq:App-04-PG-5}) clearly holds. On this account, we have \begin{proposition} \label{pr:App-04-PG-3} \cite{App-04-Potential-Game-1} Every minimizer of the potential function $U(x)$ in $X$ is a (pure-strategy) Nash equilibrium of the potential game. \end{proposition} Proposition \ref{pr:App-04-PG-3} is very useful. It reveals the fact that computing a Nash equilibrium of a potential game reduces to solving a traditional mathematical program. Meanwhile, the existence and uniqueness results for Nash equilibria of potential games can be understood from the solution properties of NLPs. \begin{proposition} \label{pr:App-04-PG-4} \cite{App-04-Potential-Game-2} Every potential game with a continuous potential function $U(x)$ and a compact strategy space $X$ has at least one (pure-strategy) Nash equilibrium. If $U(x)$ is strictly convex, then the Nash equilibrium is unique.
Propositions \ref{pr:App-04-PG-3}-\ref{pr:App-04-PG-4} make no reference to the convexity of the individual payoff functions. Moreover, if the potential function $U(x)$ is non-convex and has multiple local minima, then each local minimizer corresponds to a local Nash equilibrium, in which $X_i$ in (\ref{eq:App-04-PG-5}) is replaced by the intersection of $X_i$ with a neighborhood of $x^*_i$. \section{Generalized Nash Equilibrium Problem} \label{App-D-Sect02} In the above development for standard NEPs, we assumed that the strategy sets are decoupled: the available strategies of each player do not depend on the other players' choices. However, there are many practical cases in which the strategy sets are coupled. For example, when players consume a common resource, the total consumption should not exceed the inventory quantity. The generalized Nash equilibrium problem (GNEP), introduced in \cite{App-04-GNEP-1}, relaxes the strategy-independence assumption of classic NEPs and allows the feasible set of each player's actions to depend on the rivals' strategies. For a comprehensive review, we recommend \cite{App-04-GNEP-2}. \subsection{Formulation and Optimality Condition} \label{App-D-Sect02-01} Denote by $X_i(x_{-i})$ the strategy set of player $i$ when the others select $x_{-i}$. In a GNEP, given the value of $x_{-i}$, each player $i$ determines a strategy $x_i \in X_i(x_{-i})$ that minimizes its payoff function $f_i(x_i,x_{-i})$. In this regard, a GNEP with $n$ players is the joint solution of $n$ coupled optimization problems \begin{equation} \label{eq:App-04-GENP-MP} \left. \begin{aligned} \min_{x_i} ~~ & f_i(x_i,x_{-i}) \\ \mbox{s.t.}~~ & x_i \in X_i(x_{-i}) \end{aligned} \right\},~~ i=1,\cdots,n \end{equation} In (\ref{eq:App-04-GENP-MP}), the coupling takes place not only in the objective functions but also in the constraints. \begin{definition} \label{df:App-04-GENP} A generalized Nash equilibrium (GNE), or a solution of the GNEP, is a feasible point $x^*$ such that \begin{equation} f_i(x^*_i,x^*_{-i}) \le f_i(x_i,x^*_{-i}),~ \forall x_i \in X_i (x^*_{-i}) \end{equation} holds for all players. \end{definition} In its full generality, the GNEP is much more difficult than an NEP due to the variability of the strategy sets. In this section, we restrict our attention to a particular class of GNEPs: the so-called GNEPs with shared convex constraints. In such a problem, the strategy sets can be expressed as \begin{equation} \label{eq:App-04-GENP-Convex-Xi} X_i(x_{-i}) = \left\{x_i ~\middle|~ x_i \in Q_i,~ g(x_i,x_{-i}) \le 0 \right\},~ i = 1,\cdots,n \end{equation} where $Q_i$ is a closed and convex set involving only $x_i$, and $g(x_i,x_{-i}) \le 0$ represents the shared constraints: a set of convex inequalities coupling all players' strategies, identical in every $X_i(x_{-i})$, $i=1,\cdots,n$. Sometimes, $Q_i$ and $g(x_i,x_{-i}) \le 0$ are also referred to as local and global constraints, respectively. In the absence of shared constraints, the GNEP reduces to a standard NEP. Define the feasible set of the strategy profile $x=(x_1,\cdots,x_n)$ of a GNEP as \begin{equation} \label{eq:App-04-GENP-Convex-X} X = \left\{x ~\middle|~ x \in \prod_{i=1}^n Q_i,~ g(x) \le 0 \right\} \end{equation} It is easy to see that $X_i(x_{-i})$ is a slice of $X$. A geometric interpretation of (\ref{eq:App-04-GENP-Convex-Xi}) is illustrated in Fig. \ref{fig:App-04-01}: the choice of $x_1$ influences the feasible interval $X_2(x_1)$ of Player 2.
\begin{figure}[!htp] \centering \includegraphics[scale=0.50]{Fig-App-04-01} \caption{Relations of $X$ and the individual strategy sets.} \label{fig:App-04-01} \end{figure} We make some smoothness and convexity assumptions for a GNEP with shared constraints. \begin{assumption} \label{ap:App-04-GNEP-Convex-Smooth} \ 1) The strategy set $Q_i$ of each player is nonempty, closed, and convex. 2) The payoff function $f_i(x_i,x_{-i})$ of each player is twice continuously differentiable in $x$ and convex in $x_i$ for every fixed $x_{-i}$. 3) The functions $g(x) = (g_1(x),\cdots,g_m(x))$ are differentiable and convex in $x$. \end{assumption} In analogy with the NEP, concatenating the KKT optimality conditions of the optimization problems in (\ref{eq:App-04-GENP-MP}) gives what is called the KKT condition of the GNEP. For notational brevity, we omit the local constraints ($Q_i=\mathbb R^{n_i}$) and assume that $X_i(x_{-i})$ contains only global constraints. The KKT condition of GNEP (\ref{eq:App-04-GENP-MP}) reads \begin{equation} \label{eq:App-04-GNEP-KKT} \left. \begin{gathered} \nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda_i^T \nabla_{x_i} g (x) = 0 \\ \lambda_i \ge 0,~g(x) \le 0,~ \lambda_i^T g (x) = 0 \end{gathered} \right\}~i = 1,\cdots,n \end{equation} where $\lambda_i$ is the Lagrange multiplier vector associated with the global constraints in the $i$-th player's problem. \begin{proposition} \label{pr:App-04-GNEP-KKT} \cite{App-04-GNEP-2} \ 1) Let $\bar x = (\bar x_1,\cdots,\bar x_n)$ be an equilibrium of a GNEP; then a multiplier vector $\bar \lambda = (\bar \lambda_1,\cdots,\bar \lambda_n)$ exists such that the pair $(\bar x, \bar \lambda)$ solves KKT system (\ref{eq:App-04-GNEP-KKT}). 2) If $(\bar x, \bar \lambda)$ solves KKT system (\ref{eq:App-04-GNEP-KKT}) and Assumption \ref{ap:App-04-GNEP-Convex-Smooth} holds, then $\bar x$ is an equilibrium of GNEP (\ref{eq:App-04-GENP-MP}) with shared convex constraints. \end{proposition} However, in contrast to an NEP, the solutions of a GNEP may be non-isolated and constitute a low-dimensional manifold, because $g(x)$ is a constraint shared by all players, and the Jacobian of the KKT system may be singular. A detailed explanation is provided in \cite{App-04-GNEP-2}. We give a graphical interpretation of this phenomenon. Consider a GNEP with two players: \begin{gather} \mbox{Player 1:} \quad \left\{ \begin{aligned} \max_{x_1} ~~ & x_1 \\ \mbox{s.t.}~~ & x_1 \in X_1(x_2) \end{aligned} \right. \notag \\ \mbox{Player 2:} \quad \left\{ \begin{aligned} \max_{x_2} ~~ & x_2 \\ \mbox{s.t.}~~ & x_2 \in X_2(x_1) \end{aligned} \right. \notag \end{gather} where \begin{gather} X_1(x_2) = \{x_1 ~|~ x_1 \ge 0,~ g(x_1,x_2) \le 0 \} \notag \\ X_2(x_1) = \{x_2 ~|~ x_2 \ge 0,~ g(x_1,x_2) \le 0 \} \notag \end{gather} and the global constraint set is \begin{equation*} \left\{ x ~\middle|~ \begin{gathered} g_1 = 2 x_1 + x_2 - 2 \le 0 \\ g_2 = x_1 + 2 x_2 - 2 \le 0 \end{gathered} \right\} \end{equation*} The feasible set $X$ of the strategy profile is plotted in Fig. \ref{fig:App-04-02}. It can be verified that any point on the line segments \begin{equation} L_1 = \left\{ (x_1,x_2) ~\middle|~ 0 \le x_1 \le \frac{2}{3},~ \frac{2}{3} \le x_2 \le 1,~ x_1 + 2 x_2 = 2 \right\} \notag \end{equation} and \begin{equation} L_2 = \left\{ (x_1,x_2) ~\middle|~ \frac{2}{3} \le x_1 \le 1,~ 0 \le x_2 \le \frac{2}{3},~ 2 x_1 + x_2 = 2 \right\} \notag \end{equation} is an equilibrium point that satisfies Definition \ref{df:App-04-GENP}.
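This claim can be verified numerically. The sketch below (using \texttt{scipy.optimize.linprog}) computes each player's best response at a few sample points on $L_1$ and $L_2$ and confirms that no player can improve unilaterally:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def best_response(i, xo):   # player i maximizes x_i given the rival's xo
    if i == 0:              # player 1: 2x_1 <= 2 - x_2, x_1 <= 2 - 2x_2
        A, b = [[2.0], [1.0]], [2.0 - xo, 2.0 - 2.0*xo]
    else:                   # player 2: x_2 <= 2 - 2x_1, 2x_2 <= 2 - x_1
        A, b = [[1.0], [2.0]], [2.0 - 2.0*xo, 2.0 - xo]
    return linprog([-1.0], A_ub=A, b_ub=b, bounds=[(0, None)]).x[0]

for x1, x2 in [(0.2, 0.9), (0.8, 0.4), (2/3, 2/3)]:
    print((x1, x2), best_response(0, x2), best_response(1, x1))
    # the best responses coincide with the current strategies:
    # every sampled point is a generalized Nash equilibrium
\end{verbatim}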
\begin{figure}[!htp] \centering \includegraphics[scale=0.40]{Fig-App-04-02} \caption{Illustration of the equilibria of a simple GNEP.} \label{fig:App-04-02} \end{figure} To refine a meaningful equilibrium from the infinitely many candidates, it has been proposed to impose additional conditions on the Lagrange multipliers associated with the shared constraints \cite{App-04-GNEP-3}. The outcome is called a restricted Nash equilibrium. Two special cases are discussed here. \vspace{12pt} {\noindent \bf 1. Normalized Nash equilibrium} The normalized Nash equilibrium was first introduced in \cite{App-04-NNE-Rosen}. It imposes a proportionality constraint on the dual multipliers \begin{equation} \label{eq:App-04-NNE} \lambda_i = \beta_i \lambda_0,~ \beta_i > 0,~ i = 1,\cdots,n \end{equation} where $\lambda_0 \in \mathbb R^m_+$. Solving KKT system (\ref{eq:App-04-GNEP-KKT}) with constraint (\ref{eq:App-04-NNE}) gives an equilibrium solution. It can be shown that for any given $\beta \in \mathbb R^n_{++}$, a normalized Nash equilibrium exists as long as the game is feasible. Moreover, if the mapping $F_\beta: \mathbb R^n \to \mathbb R^n$ defined by \begin{equation} F_\beta(x) = \left( \begin{gathered} \frac{1}{\beta_1} \nabla_{x_1} f_1(x_1,x_{-1}) \\ \vdots \\ \frac{1}{\beta_n} \nabla_{x_n} f_n(x_n,x_{-n}) \end{gathered} \right) \notag \end{equation} and parameterized in $\beta$ is strictly monotone (under convexity assumptions on the payoff functions), then the normalized Nash equilibrium is unique. The relation in (\ref{eq:App-04-NNE}) indicates that the dual variables $\lambda_i$ associated with the shared constraints are a common vector scaled by player-specific positive scalars. From an economic perspective, this means that at any normalized Nash equilibrium, the shadow prices of the common resources are proportional across players. \vspace{12pt} {\noindent \bf 2. Variational equilibrium} Recalling the variational inequality formulation of the NEP in Proposition \ref{pr:App-04-NE-VI}, a GNEP with shared convex constraints can be treated in the same way: let $F(x) = (\nabla_{x_1} f_1 (x),\cdots,\nabla_{x_n} f_n (x))$ be a mapping and let $X$ be the feasible region defined in (\ref{eq:App-04-GENP-Convex-X}); then every solution of the variational inequality problem VI($X,F$) is an equilibrium of the GNEP, called a variational equilibrium (VE). However, unlike an NEP and its associated VI problem, which have the same solutions, not all equilibria of a GNEP are preserved when it is passed to the corresponding VI problem; see \cite{App-04-GNEP-VI-2,App-04-GNEP-VI-3} for examples and further details. In fact, a solution $x^*$ of a GNEP is a VE if and only if it solves KKT system (\ref{eq:App-04-GNEP-KKT}) with the following constraint on the Lagrange dual multipliers \cite{App-04-GNEP-VI-1,App-04-GNEP-2,App-04-GNEP-VI-2}: \begin{equation} \label{eq:App-04-VE} \lambda_1 = \cdots = \lambda_n = \lambda_0 \in \mathbb R^m_+ \end{equation} implying that all players perceive the same shadow prices of the common resources at a VE. The VI approach has two important implications. First, it allows us to analyze a GNEP using the well-developed VI theory, including conditions that guarantee the existence and uniqueness of the equilibrium point; second, condition (\ref{eq:App-04-VE}) gives an interesting economic interpretation of the VE and inspires pricing-based distributed algorithms for computing an equilibrium solution, which will be discussed in the next section. The concept of a potential game for NEPs directly applies to GNEPs.
If a GNEP with shared convex constraints possesses a potential function $U(x)$ satisfying (\ref{eq:App-04-PG-1}), an equilibrium can be retrieved from a mathematical program that minimizes the potential function over the feasible set $X$ defined in (\ref{eq:App-04-GENP-Convex-X}). To reveal the connection between the optimal solution and the VE, we omit the constraints in the local strategy sets $Q_i$, $i=1,\cdots,n$ for notational simplicity, and write the mathematical program as \begin{equation} \label{eq:App-04-Potential-GNEP-1} \begin{aligned} \min_x ~~ & U(x) \\ \mbox{s.t.} ~~ & g(x) \le 0 \end{aligned} \end{equation} whose KKT optimality condition is given by \begin{equation} \label{eq:App-04-Potential-GNEP-2} \begin{gathered} \nabla_{x} U (x) + \lambda^T \nabla_{x} g (x) = 0 \\ \lambda \ge 0,~g(x) \le 0,~ \lambda^T g (x) = 0 \end{gathered} \end{equation} The first equality can be decomposed into $n$ sub-equations \begin{equation} \label{eq:App-04-Potential-GNEP-3} \nabla_{x_i} U (x) + \lambda^T \nabla_{x_i} g (x) = 0,~ i=1,\cdots,n \end{equation} Recalling (\ref{eq:App-04-PG-4}), $\nabla_{x_i} U (x) = \nabla_{x_i} f_i (x_i ,x_{-i})$; substituting this into (\ref{eq:App-04-Potential-GNEP-3}) we have \begin{gather} \nabla_{x_i} f_i (x_i ,x_{-i}) + \lambda^T \nabla_{x_i} g (x) = 0,~ i=1,\cdots,n \notag \\ \lambda \ge 0,~g(x) \le 0,~ \lambda^T g (x) = 0 \notag \end{gather} which is exactly KKT system (\ref{eq:App-04-GNEP-KKT}) with the identical-shadow-price constraint (\ref{eq:App-04-VE}). In this regard, we have \begin{proposition} \label{pr:App-04-Potential-GNEP} Optimizing the potential function of a GNEP with shared convex constraints gives a variational equilibrium. \end{proposition} Consider the example shown in Fig. \ref{fig:App-04-02} again: $(2/3,2/3)$ is the unique VE of the GNEP, plotted in Fig. \ref{fig:App-04-03}. The corresponding dual variables of the global constraints $g_1 \le 0$ and $g_2 \le 0$ are $(1/3,1/3)$. \begin{figure}[!htp] \centering \includegraphics[scale=0.40]{Fig-App-04-03} \caption{Illustration of the variational equilibrium.} \label{fig:App-04-03} \end{figure} \subsection{Best-Response Algorithm} \label{App-D-Sect02-02} The presence of shared constraints wrecks the Cartesian structure of $\prod_{i=1}^n Q_i$ in a standard Nash game and prevents a direct application of the best-response methods presented in Appendix \ref{App-D-Sect01-03} for solving an NEP. Moreover, even if an equilibrium can be found, it may depend on the initial point as well as on the optimization sequence, because the solutions of a GNEP are non-isolated. To illustrate this pitfall, take Fig. \ref{fig:App-04-03} as an example and pick an arbitrary point $x_0 \in X$ as the initial value. If we first maximize $x_1$ ($x_2$), the point moves to $B$ ($A$); in the second step, $x_2$ ($x_1$) does not change, because the point is already an equilibrium in the sense of Definition \ref{df:App-04-GENP}. In view of this, fixed-point iteration may give any outcome on the line segments connecting $(2/3,2/3)$ with $(0,1)$ or $(1,0)$, depending on the initiation. This section introduces the distributed algorithms proposed in \cite{App-04-GNEP-VI-1}, which identify a VE of GNEP (\ref{eq:App-04-GENP-MP}) with shared convex constraints. Motivated by the Lagrange decomposition framework, we can rewrite problem (\ref{eq:App-04-GENP-MP}) in a more convenient form.
Consider finding a pair ($x,\lambda$), where $x$ is the equilibrium of the following standard NEP $\mathcal G(\lambda)$ with a given vector $\lambda$ of Lagrange multipliers \begin{equation} \label{eq:App-04-GNEP-DIS-1} \mathcal G(\lambda): \quad \left\{ \begin{aligned} \min_{x_i} ~~ & f_i(x_i,x_{-i}) + \lambda ^T g(x) \\ \mbox{s.t.}~~ & x_i \in Q_i \end{aligned} \right\},~ i=1,\cdots,n \end{equation} together with the complementarity constraint \begin{equation} \label{eq:App-04-GNEP-DIS-2} 0 \le \lambda \bot -g(x) \ge 0 \end{equation} Problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) has a clear economic interpretation: if the shared constraints represent the availability of some common resources, the vector $\lambda$ can be viewed as the prices paid by the players for consuming these resources. When a resource is adequate, the corresponding inequality constraint is not binding and the dual multiplier is zero; the dual multiplier, or shadow price, is positive only if a resource becomes scarce, indicated by a binding inequality constraint. This relation is imposed by constraint (\ref{eq:App-04-GNEP-DIS-2}). The KKT conditions of $\mathcal G(\lambda)$ in (\ref{eq:App-04-GNEP-DIS-1}), in conjunction with condition (\ref{eq:App-04-GNEP-DIS-2}), turn out to be the VE condition of GNEP (\ref{eq:App-04-GENP-MP}). In view of this connection, a VE can be found by solving (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) in a distributed manner, based on the algorithms previously developed for NEPs. As before, we discuss the strongly convex case and the merely convex case separately, owing to their different convergence guarantees. \vspace{12pt} {\noindent \bf 1. Algorithms for strongly convex cases} Suppose that the game $\mathcal G(\lambda)$ in (\ref{eq:App-04-GNEP-DIS-1}) is strongly convex and has a unique Nash equilibrium $x(\lambda)$ for any given $\lambda \ge 0$. This uniqueness allows defining the map \begin{equation} \label{eq:App-04-GNEP-ALG-1} {\rm \Phi} (\lambda): \lambda \to -g(x(\lambda)) \end{equation} which quantifies the negative violation of the shared constraints at $x(\lambda)$. Based on (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}), the distributed algorithm is provided as follows. \begin{algorithm}[!htp] \normalsize \caption{\bf } \begin{algorithmic}[1] \STATE Choose an initial price vector $\lambda^0 \ge 0$. The iteration index is $k = 0$. \STATE Given $\lambda^k$, find the unique equilibrium $x(\lambda^k)$ of $\mathcal G(\lambda^k)$ using Algorithm \ref{Ag:App-04-NE-Asy-BR}. \STATE If $0 \le \lambda^k \bot {\rm \Phi} (\lambda^k) \ge 0$ is satisfied, terminate and report $x(\lambda^k)$ as the VE; otherwise, choose $\tau_k > 0$, and update the price vector according to \begin{equation} \lambda^{k+1}=\left[\lambda^k - \tau_k {\rm \Phi} (\lambda^k) \right]^+ \notag \end{equation} set $k \leftarrow k+1$, and go to step 2. \end{algorithmic} \label{Ag:App-04-GNE-1} \end{algorithm} Algorithm \ref{Ag:App-04-GNE-1} is a double-loop method. The admissible range of the parameter $\tau_k$ and the convergence proof are thoroughly discussed in \cite{App-04-GNEP-VI-1}, based on the monotonicity of the mapping $F+ \nabla g(x) \lambda$, where $F=(\nabla_{x_i} f_i)_{i=1}^n$ and $\nabla g(x)$ is the matrix whose $i$-th column equals $\nabla g_i$.
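A minimal sketch of Algorithm \ref{Ag:App-04-GNE-1} on a hypothetical toy GNEP whose inner equilibrium is available in closed form (two players with $f_i(x)=(x_i-1)^2$ and shared constraint $g(x)=x_1+x_2-1\le 0$; all numbers are illustrative):
\begin{verbatim}
import numpy as np

def inner_equilibrium(lam):
    # G(lam): min (x_i - 1)^2 + lam*g(x)  =>  x_i = 1 - lam/2
    return np.full(2, 1.0 - lam/2)

lam, tau = 0.0, 0.5
for k in range(200):
    x = inner_equilibrium(lam)        # step 2: equilibrium of G(lam)
    Phi = -(x.sum() - 1.0)            # Phi(lam) = -g(x(lam))
    lam = max(lam - tau*Phi, 0.0)     # step 3: projected price update
print(x, lam)   # -> x = [0.5, 0.5], lam = 1.0: the VE and its shadow price
\end{verbatim}
At the limit point, $\lambda=1>0$ and $g(x)=0$, so the complementarity condition (\ref{eq:App-04-GNEP-DIS-2}) holds; the termination test of step 3 is omitted here for brevity.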
\vspace{12pt} {\noindent \bf 2. Algorithms for convex cases} Now we consider the case in which the VI associated with problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}) is merely monotone (at least one problem in (\ref{eq:App-04-GNEP-DIS-1}) is not strongly convex). In this circumstance, the convergence of Algorithm \ref{Ag:App-04-GNE-1} is no longer guaranteed: not only may Algorithm \ref{Ag:App-04-NE-Asy-BR} for the inner-loop game $\mathcal G(\lambda)$ fail to converge, but the outer-loop price update may fail as well. To circumvent this difficulty, we convexify the game using regularization terms, as was done in Algorithm \ref{Ag:App-04-NE-Proximal}. To this end, we exploit an optimization reformulation of the complementarity constraint (\ref{eq:App-04-GNEP-DIS-2}), which is given by \begin{equation} \lambda \in \arg \min_{\bar \lambda} \left\{ - \bar \lambda^T g(x) ~\middle|~ \bar \lambda \ge 0 \right\} \notag \end{equation} Then, consider the following ordinary NEP with $n+1$ players, in which the last player controls the price vector $\lambda$: \begin{equation} \label{eq:App-04-GNEP-n+1} \begin{aligned} \min_{x_i} ~~ & \left\{ f_i(x_i,x_{-i}) + \lambda ^T g(x) ~\middle|~ x_i \in Q_i \right\},~ i = 1,\cdots,n \\ \min_{\lambda} ~~ & \left\{ -\lambda^T g(x) ~\middle|~ \lambda \ge 0 \right\} \end{aligned} \end{equation} where the last player solves an LP in the variable $\lambda$ parameterized in $x$. At the equilibrium, $g(x) \le 0$ is implicitly satisfied. To see this, note that because $Q_i$ is bounded, the problems of the first $n$ players have finite optima for arbitrary $\lambda$; if $g(x) \nleq 0$, the last problem has an unbounded optimum, imposing an arbitrarily large penalty on the violated constraint, so the first $n$ players will alter their strategies accordingly. Whenever $g(x) \le 0$ is met, the last LP attains a zero minimum, which satisfies (\ref{eq:App-04-GNEP-DIS-2}). In summary, the extended game (\ref{eq:App-04-GNEP-n+1}) has the same equilibria as problem (\ref{eq:App-04-GNEP-DIS-1})-(\ref{eq:App-04-GNEP-DIS-2}). Since the strategy sets of (\ref{eq:App-04-GNEP-n+1}) have a Cartesian structure, Algorithm \ref{Ag:App-04-NE-Proximal} can be applied to find an equilibrium. \begin{algorithm}[!htp] \normalsize \caption{\bf } \begin{algorithmic}[1] \STATE Given $\{ \rho_k\}_{k=0}^\infty$, $\varepsilon >0$, and $\tau > 0$, choose a feasible initial point $x^0 \in X$ and an initial price vector $\lambda^0$; the iteration index is $k=0$. \STATE Given $z^k = (x^k,\lambda^k)$, find a Nash equilibrium $z^{k+1} = (x^{k+1},\lambda^{k+1})$ of the following regularized NEP using Algorithm \ref{Ag:App-04-NE-Asy-BR} \begin{equation} \begin{aligned} \min_{x_i} ~~ & \left\{ f_i(x_i,x_{-i}) + \lambda ^T g(x) + \tau \left\| x_i - x^k_i \right\|^2_2 ~\middle|~ x_i \in Q_i \right\},~ i = 1,\cdots,n\\ \min_{\lambda} ~~ & \left\{ -\lambda^T g(x) + \tau \left\| \lambda - \lambda^k \right\|^2_2 ~\middle|~ \lambda \ge 0 \right\} \end{aligned} \notag \end{equation} \STATE If $\| z^{k+1} - z^k \|_2 \le \varepsilon$, terminate and report $x^{k+1}$ as the variational equilibrium; otherwise, update $k \leftarrow k+1$, $z^k \leftarrow (1-\rho_k) z^{k-1} + \rho_k z^k$, and go to step 2. \end{algorithmic} \label{Ag:App-04-GNE-2} \end{algorithm} The convergence of Algorithm \ref{Ag:App-04-GNE-2} is guaranteed for a sufficiently large $\tau$. More quantitative discussions on parameter selection and convergence conditions can be found in \cite{App-04-GNEP-VI-1}.
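For the same toy GNEP used above, the regularized game in step 2 of Algorithm \ref{Ag:App-04-GNE-2} can be solved by simple best-response iterations, since every regularized subproblem has a closed-form minimizer. The following sketch, with illustrative parameters $\tau$ and $\rho_k \equiv \rho$ and iteration counts chosen only for demonstration, recovers the VE:
\begin{verbatim}
import numpy as np

tau, rho = 1.0, 0.5
xk, lamk = np.zeros(2), 0.0          # z^k = (x^k, lam^k)
for k in range(300):
    x, lam = xk.copy(), lamk
    for _ in range(100):             # inner loop: best responses
        # players: min (x_i-1)^2 + lam*g(x) + tau*(x_i - xk_i)^2
        x = (2.0 - lam + 2*tau*xk)/(2 + 2*tau)
        # price player: min -lam*g(x) + tau*(lam - lamk)^2, lam >= 0
        lam = max(0.0, lamk + (x.sum() - 1.0)/(2*tau))
    xk = (1 - rho)*xk + rho*x        # averaging step
    lamk = (1 - rho)*lamk + rho*lam
print(xk, lamk)    # -> x = [0.5, 0.5], lam = 1.0
\end{verbatim}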
In practice, the value of $\tau$ should be carefully chosen to achieve satisfactory computational performance. \vspace{12pt} In NEPs and GNEPs, players make decisions simultaneously. In real-life decision-making problems, there are many situations in which players move sequentially. In the rest of this chapter, we consider three kinds of bilevel games, in which the upper-level (lower-level) players are called leaders (followers), and the leaders make decisions prior to the followers'. The simplest one is the Stackelberg game, also known as the single-leader-single-follower game, or simply the bilevel program. The Stackelberg game can be generalized by incorporating multiple players in the upper and lower levels. Players at the same level make decisions simultaneously, whereas the followers' actions are subject to the leaders' movements, forming an NEP parameterized in the leaders' decisions. When there is only one leader, the problem is called a mathematical program with equilibrium constraints (MPEC); when there are multiple leaders, the problem is referred to as an equilibrium program with equilibrium constraints (EPEC), which is essentially a bilevel GNEP among the leaders. \section{Bilevel Programs} \label{App-D-Sect03} A bilevel program is a special mathematical program with another optimization problem nested in its constraints. The main problem is called the upper-level problem, and its decision maker is the leader; the problem nested in the constraints is called the lower-level problem, and its decision maker is the follower. In game theory, a bilevel program is usually referred to as a Stackelberg game, which arises in many economic and engineering design problems. \subsection{Bilevel Programs with a Convex Lower Level} {\bf 1. Mathematical model and single-level equivalence} A bilevel program is the most basic instance of bilevel games. The leader moves first and chooses a decision $x$; then the follower selects its strategy $y$ by solving the lower-level problem parameterized in $x$ \begin{equation} \label{eq:App-04-BLP-LLP} \begin{aligned} \min_y ~~ & f(x,y) \\ \mbox{s.t.}~~ & g(x,y) \le 0: \lambda \\ & h(x,y) = 0: \mu \end{aligned} \end{equation} where $\lambda$ and $\mu$ following the colons are the dual variables associated with the inequality and equality constraints, respectively. We assume that problem (\ref{eq:App-04-BLP-LLP}) is convex and that the KKT condition is necessary and sufficient for a global optimum \begin{equation} \label{eq:App-04-BLP-LLP-KKT} \mbox{Cons-KKT} = \left\{(x,y,\lambda,\mu) ~\middle|~ \begin{gathered} \nabla_y f(x,y) + \lambda^T \nabla_y g(x,y) + \mu^T \nabla_y h(x,y) = 0 \\ 0 \le \lambda \bot - g(x,y) \ge 0, ~ h(x,y) = 0 \end{gathered} \right\} \end{equation} The set of optimal solutions of problem (\ref{eq:App-04-BLP-LLP}) is denoted by $S(x)$. If (\ref{eq:App-04-BLP-LLP}) is strictly convex, the optimal solution is unique, and $S(x)$ reduces to a singleton. When the leader minimizes its payoff function $F(x,y)$, the best response $y(x) \in S(x)$ is taken into account. The leader's problem is formally described as \begin{equation} \label{eq:App-04-BLP-ULP} \begin{aligned} \min_{x, \bar y} ~~ & F(x,\bar y) \\ \mbox{s.t.}~~ & x \in X \\ & \bar y \in S(x) \end{aligned} \end{equation} Notice that although $\bar y$ acts as a decision variable of the leader, it is actually controlled by the follower through the best-response mapping $S(x)$; when making its decision, the leader takes the follower's response into account.
When $S(x)$ is a singleton, the qualifier $\in$ reduces to $=$; otherwise, if $S(x)$ contains more than one element, (\ref{eq:App-04-BLP-ULP}) assumes that the follower chooses the solution preferred by the leader. Therefore, (\ref{eq:App-04-BLP-ULP}) is called the optimistic equivalence. On the contrary, the pessimistic equivalence assumes that the follower chooses the solution least favorable to the leader, and it is more difficult to solve. As for the optimistic case, replacing $\bar y \in S(x)$ with the KKT condition (\ref{eq:App-04-BLP-LLP-KKT}) leads to the NLP formulation of the bilevel program, or, more exactly, a mathematical program with complementarity constraints (MPCC) \begin{equation} \label{eq:App-04-BLP-MPCC} \begin{aligned} \min_{x,\bar y,\lambda,\mu} ~~ & F(x,\bar y) \\ \mbox{s.t.}~~ & x \in X,~ (x,\bar y,\lambda,\mu) \in \mbox{Cons-KKT} \end{aligned} \end{equation} Although the lower-level problem (\ref{eq:App-04-BLP-LLP}) is convex, the best-reaction map of the follower characterized by Cons-KKT is non-convex, so a bilevel program is intrinsically non-convex and generally difficult to solve. \vspace{12pt} {\noindent \bf 2. Why are bilevel programs difficult to solve?} Two difficulties prevent an MPCC from being solved reliably and efficiently. 1) The feasible region of (\ref{eq:App-04-BLP-ULP}) is non-convex: even if the objective functions and constraints of the leader and the follower are all linear, the complementarity and slackness condition in (\ref{eq:App-04-BLP-LLP-KKT}) is still non-convex. An NLP solver finds at best a local solution of a non-convex problem, if it succeeds at all, and global optimality can hardly be guaranteed. 2) Besides the non-convexity, the failure to meet ordinary constraint qualifications creates another barrier to solving an MPCC. NLP algorithms generally stop when a stationary point of the KKT conditions is found; however, due to the presence of the complementarity and slackness condition, the dual multipliers may not be well defined, because standard constraint qualifications are violated. Therefore, NLP solvers may fail to find a local optimum without special treatment of the complementarity constraints. To see how constraint qualifications are violated, consider the simplest linear complementarity constraint \begin{equation} x \ge 0,~ y \ge 0, ~ x^T y =0,~ x \in \mathbb R^5,~ y \in \mathbb R^5 \notag \end{equation} The Jacobian matrix of the active constraints at a point ($\bar x, \bar y$) is \begin{equation} J = \begin{bmatrix} e_{\bar x} & 0 \\ 0 & e_{\bar y} \\ \bar y^T & \bar x^T \end{bmatrix} \notag \end{equation} where $e_{\bar x}$ and $e_{\bar y}$ are zero-one matrices corresponding to the active constraints $x_i =0$, $i \in I$, and $y_j = 0$, $j \in J$, where $I \bigcup J = \{1,2,3,4,5\}$ and $I \bigcap J$ is not necessarily empty. Suppose that $I = \{1,2,4\}$ and $J = \{3,5\}$; then \begin{equation} J = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \bar y_1 & \bar y_2 & \bar y_3 & \bar y_4 & \bar y_5 & \bar x_1 & \bar x_2 & \bar x_3 & \bar x_4 & \bar x_5 \end{bmatrix} \notag \end{equation} Since $\bar x_1 = \bar x_2 = \bar x_4 = 0$ and $\bar y_3 = \bar y_5 = 0$, the row vectors of $J$ are linearly dependent at the point ($\bar x, \bar y$).
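This rank deficiency is easy to check numerically; a minimal sketch for the instance above:
\begin{verbatim}
import numpy as np

x = np.array([0.0, 0.0, 0.7, 0.0, 1.2])   # I = {1,2,4}
y = np.array([0.3, 0.5, 0.0, 0.8, 0.0])   # J = {3,5}

rows = []
for i in np.where(x == 0)[0]:
    r = np.zeros(10); r[i] = 1.0; rows.append(r)       # gradient of x_i = 0
for j in np.where(y == 0)[0]:
    r = np.zeros(10); r[5 + j] = 1.0; rows.append(r)   # gradient of y_j = 0
rows.append(np.concatenate([y, x]))                    # gradient of x^T y = 0
J = np.vstack(rows)
print(J.shape, np.linalg.matrix_rank(J))   # (6, 10), rank 5: dependent rows
\end{verbatim}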
The same applies to any ($\bar x, \bar y$), regardless of the index sets $I$ and $J$ of active constraints: whenever $\bar y_j > 0$, complementarity enforces $\bar x_j=0$, creating a binding inequality in $x \ge 0$ and a row in the matrix $J$ whose $j$-th element is 1; whenever $\bar x_i > 0$, complementarity enforces $\bar y_i=0$, creating a binding inequality in $y \ge 0$ and a row in the matrix $J$ whose $(i+5)$-th element is 1. Therefore, the last row of $J$ can always be represented as a linear combination of the other rows. The above discussion and conclusion for linear complementarity constraints also apply to the nonlinear case, because the Jacobian matrix $J$ has the same structure. In this regard, the difficulty is intrinsic to MPCCs. \begin{proposition} \label{pr:App-04-MPCC-MFCQ} Complementarity and slackness conditions violate the linear independence constraint qualification at every feasible solution. \end{proposition} From a geometric perspective, the feasible region of complementarity constraints consists of slices such as $x_i=0$ and $y_j = 0$; there is no strictly feasible point, and Slater's condition does not hold. In conclusion, general-purpose NLP solvers are not numerically reliable for solving MPCCs, although they were once used to carry out such tasks. \vspace{12pt} {\noindent \bf 3. Methods for solving MPCCs} In view of the limitations of standard NLP algorithms, refined stationarity concepts have been proposed for MPCCs so that they can be tackled by conventional NLP methods, including Bouligand-, Clarke-, Mordukhovich-, weak, and strong stationarity; see \cite{App-04-BLP-CQ-1,App-04-BLP-CQ-2,App-04-BLP-CQ-3,App-04-BLP-CQ-4} for further information. Through proper transformations, MPCCs can also be solved by standard NLP algorithms. Several approaches are available for this task. \vspace{12pt} {\noindent \bf a. Regularization method \cite{App-04-MPCC-Reg-1,App-04-MPCC-Reg-2,App-04-MPCC-Reg-3}} In this approach, the non-negativity and complementarity requirements \begin{equation} \label{eq:App-04-MPCC-Reg-1} x \ge 0,~ y \ge 0,~ xy =0 \end{equation} are approximated by \begin{equation} \label{eq:App-04-MPCC-Reg-2} x \ge 0,~ y \ge 0, ~ xy \le \varepsilon \end{equation} Note that $xy \ge 0$ follows naturally from the non-negativity of $x$ and $y$. When $\varepsilon=0$, (\ref{eq:App-04-MPCC-Reg-2}) is equivalent to (\ref{eq:App-04-MPCC-Reg-1}); when $\varepsilon > 0$, (\ref{eq:App-04-MPCC-Reg-2}) defines a larger feasible region than (\ref{eq:App-04-MPCC-Reg-1}), so this approach is sometimes called a relaxation method. The smaller $\varepsilon$ is, the closer a feasible point $(x,y)$ comes to complementarity. If $x$ and $y$ are vectors with non-negative elements, $x^T y =0$ is equivalent to $x_i y_i =0$, $i=1,2,\cdots$; the same procedure applies if $x$ and $y$ are replaced by nonlinear functions. Since Slater's condition holds for the feasible set defined by (\ref{eq:App-04-MPCC-Reg-2}) with $\varepsilon > 0$, NLP solvers can be used for the resulting optimization problem. In a regularization procedure for solving an MPCC, relaxation (\ref{eq:App-04-MPCC-Reg-2}) is applied with a gradually decreasing value of $\varepsilon$ for numerical reasons: if the initial value of $\varepsilon$ is too small, the solver may be numerically unstable and fail to find a feasible solution.
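A minimal sketch of this regularization loop on a tiny illustrative MPCC, $\min\,(x-1)^2+(y-1)^2$ subject to $x,y\ge 0$, $xy=0$ (whose global optima are $(1,0)$ and $(0,1)$ with value 1), using \texttt{scipy}'s SLSQP solver; like any NLP-based scheme, it may return only a local solution:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

z = np.array([1.0, 0.3])                  # asymmetric starting point
for eps in [1e-1, 1e-2, 1e-4, 1e-6]:      # gradually tighten xy <= eps
    res = minimize(lambda z: (z[0]-1)**2 + (z[1]-1)**2, z,
                   method='SLSQP', bounds=[(0, None)]*2,
                   constraints=[{'type': 'ineq',
                                 'fun': lambda z, e=eps: e - z[0]*z[1]}])
    z = res.x                             # warm start the next problem
print(z, res.fun)   # typically tracks the branch near (1, 0)
\end{verbatim}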
\vspace{12pt} {\noindent \bf b. Penalization method \cite{App-04-MPCC-Pen-1,App-04-MPCC-Pen-2,App-04-MPCC-Pen-3}} In this approach, the complementarity condition $xy=0$ is removed from the set of constraints; instead, an associated penalty term $xy/\varepsilon$ is added to the objective function to create an extra cost whenever complementarity is violated. Since $x$ and $y$ are non-negative, as indicated by (\ref{eq:App-04-MPCC-Reg-1}), the penalty term never takes a negative value. In this way, the feasible region becomes much simpler. In a penalization procedure for solving an MPCC, a sequence of NLPs is solved iteratively with a gradually decreasing value of $\varepsilon$, and the violation of the complementarity condition approaches 0 as the iterations proceed. If $\varepsilon$ is initialized too small, the penalty coefficient $1/\varepsilon$ is very large, which may cause an ill-conditioned problem and numerical instability. One advantage of this iterative procedure is that the optimal solution of iteration $k$ can be used as the initial guess of iteration $k+1$, since the feasible region does not change and the solution of every iteration is feasible in the next one. A downside of this approach is that the NLP solver generally identifies a local optimum; in consequence, a smaller $\varepsilon$ does not necessarily lead to a solution closer to satisfying complementarity. \vspace{12pt} {\noindent \bf c. Smoothing method \cite{App-04-MPCC-Smooth-1,App-04-MPCC-Smooth-2}} This approach employs the perturbed Fischer-Burmeister function \begin{equation} \label{eq:App-04-Fischer-Burmeister-1} \phi(x,y,\varepsilon) = x + y - \sqrt{x^2 + y^2 + \varepsilon} \end{equation} which was first introduced in \cite{App-04-MPCC-SQP-1} for LCPs and shown to be particularly useful in SQP methods for solving MPCCs in \cite{App-04-MPCC-SQP-2}. Clearly, when $\varepsilon= 0$, the function $\phi$ reduces to the standard Fischer-Burmeister function \begin{equation} \label{eq:App-04-Fischer-Burmeister-2} \phi(x,y,0) = 0 ~\Longleftrightarrow~ x \ge 0,~ y \ge 0,~ xy=0 \end{equation} $\phi(x,y,0)$ is not smooth at the origin $(0,0)$. When $\varepsilon > 0$, the function $\phi$ satisfies \begin{equation} \label{eq:App-04-Fischer-Burmeister-3} \phi(x,y,\varepsilon) = 0 ~\Longleftrightarrow~ x \ge 0,~ y \ge 0,~ xy = \varepsilon/2 \end{equation} and is smooth in $x$ and $y$. In view of this, the complementarity and slackness condition (\ref{eq:App-04-MPCC-Reg-1}) can be replaced by $\phi(x,y,\varepsilon) = 0$ and embedded in NLP models; as $\varepsilon$ tends to 0, (\ref{eq:App-04-MPCC-Reg-1}) is enforced with increasing accuracy.
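The smoothing property just stated is easy to check numerically; a short sketch:
\begin{verbatim}
import numpy as np

def phi(x, y, eps):
    # perturbed Fischer-Burmeister function (cf. the smoothing method above)
    return x + y - np.sqrt(x*x + y*y + eps)

print(phi(0.0, 0.7, 0.0), phi(1.2, 0.0, 0.0))  # 0 at complementary points
print(phi(0.5, 0.5, 0.0))                      # nonzero since xy != 0
eps = 1e-2
print(phi(1.0, eps/2, eps))                    # ~0, and x*y = eps/2
\end{verbatim}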
\vspace{12pt} {\noindent \bf d. Sequential quadratic programming (SQP) \cite{App-04-MPCC-SQP-1,App-04-MPCC-SQP-2,App-04-MPCC-SQP-3}} SQP is a general-purpose NLP method. In each iteration of SQP, the quadratic functions in the complementarity constraints are linearized, and the nonlinear objective function is replaced with its second-order Taylor expansion, constituting a quadratic program with linear constraints (possibly in conjunction with trust-region bounds). At the optimal solution, the nonlinear constraints are linearized again, the objective function is re-approximated, and the SQP algorithm proceeds to the next iteration. When applied to an MPCC, the SQP method is often capable of finding a local optimal solution without a sequence of user-specified $\varepsilon_k$ approaching 0, probably because the SQP solver itself is endowed with some softening ability; e.g., when a quadratic program encounters numerical issues, the SQP solver SNOPT automatically relaxes some hard constraints and penalizes their violation in the objective function. The aforementioned classical methods are discussed in \cite{App-04-BLP-NLP-1}, and numerical experience is reported in \cite{App-04-MPCC-NLP-Test}. \vspace{12pt} {\noindent \bf e. MINLP methods} Due to their wide applications in various engineering disciplines, solution methods for MPCCs continue to be an active research area. Recall that complementarity constraints of the form $g(x) \ge 0$, $h(x) \ge 0$, $g(x)h(x)=0$ are equivalent to \begin{equation} 0 \le g(x) \le M z,~ 0 \le h(x) \le M(1-z) \notag \end{equation} where $z$ is a binary variable and $M$ is a sufficiently large constant. Therefore, an MPCC can be converted into a mixed-integer nonlinear program (MINLP). The MINLP formulation removes the numerical difficulty of the MPCC; the computational complexity, however, remains. If all functions in (\ref{eq:App-04-BLP-MPCC}) are linear, or there are only a few complementarity constraints, the resulting MILP or MINLP model may be solved within reasonable time; otherwise, the branch-and-bound algorithm can still offer upper and lower bounds on the optimal value. \vspace{12pt} {\noindent \bf f. Convex relaxation/approximation methods} If all functions in MPCC (\ref{eq:App-04-BLP-MPCC}) are linear, it is a non-convex QCQP in which the non-convexity originates from the complementarity constraints. When the problem scale is large, the MILP method may be time-consuming. Inspired by the success of convex relaxation methods for non-convex QCQPs, there has been increasing interest in developing convex relaxation methods for MPCCs. An SDP relaxation method is proposed in \cite{App-04-MPCC-SDP-1}, where it is embedded in a branch-and-bound algorithm to solve the MPCC. For the MPCC derived from a bilevel polynomial program, it has been proposed to solve a sequence of SDPs of increasing size, so as to solve the original problem globally \cite{App-04-MPCC-SDP-2,App-04-MPCC-SDP-3}. Convex relaxation methods have been applied to power market problems in \cite{App-04-MPCC-SDP-4,App-04-MPCC-SDP-5}; numerical experiments show that the combination of MILP and SDP relaxation can greatly reduce the computation time. Nonetheless, bear in mind that the decision variable of the SDP relaxation model is an $n \times n$ matrix, so solving the SDP model may still be a challenging task, although it is convex. Recently, a DC programming approach was proposed in \cite{App-04-LPCC-DCP} to solve the penalized version of an LPCC. In this approach, the quadratic penalty term is decomposed into the difference of two convex quadratic functions, and the concave part is then linearized. The computational performance reported in \cite{App-04-LPCC-DCP} is very promising. \subsection{Special Bilevel Programs} Although general bilevel programs are difficult, there are special cases that can be solved relatively easily. One such class is the linear bilevel program, in which the objective functions are linear and the constraints are polyhedral. The linear max-min problem is a special case of the linear bilevel program, in which the leader and the follower have completely opposite objectives.
Furthermore, two special market models are studied. \vspace{12pt} {\noindent \bf 1. Linear bilevel program} A linear bilevel program can be written as \begin{equation} \label{eq:App-04-Linear-Bilevel-1} \begin{aligned} \max_x ~~ & c^T x + d^T y(x) \\ \mbox{s.t.} ~~& C x \le e \\ & \begin{aligned} y(x) \in \arg \min_y ~~ & f^T y \\ \mbox{s.t.} ~~ & By \le b - Ax \end{aligned} \end{aligned} \end{equation} In problem (\ref{eq:App-04-Linear-Bilevel-1}), the follower makes its decision $y$ after the leader deploys its action $x$, which influences the feasible region of $y$. Meanwhile, the leader can predict the follower's optimal response $y(x)$ and chooses a strategy that optimizes $c^T x + d^T y(x)$. All matrices and vectors are constant coefficients; we write the upper-level constraint as $Cx \le e$ to avoid a notational clash with the objective coefficient $d$. Given the upper-level decision $x$, the follower faces an LP whose KKT optimality condition is \begin{equation} \begin{gathered} B^T u = f \\ 0 \ge u ~\bot~ Ax + By - b \le 0 \end{gathered} \notag \end{equation} The last constraint is equivalent to the following linear constraints \begin{equation} \begin{gathered} -M(1-z) \le u \le 0 \\ -Mz \le Ax+By-b \le 0 \end{gathered} \notag \end{equation} where $z$ is a vector of binary variables and $M$ is a sufficiently large constant. Replacing the follower's LP with its KKT condition in problem (\ref{eq:App-04-Linear-Bilevel-1}) gives rise to the MILP \begin{equation} \label{eq:App-04-Linear-Bilevel-MILP} \begin{aligned} \max_{x,y,u,z} ~~ & c^T x + d^T y \\ \mbox{s.t.} ~~& C x \le e, ~ B^T u = f \\ & -M(1-z) \le u \le 0 \\ &-Mz \le Ax + By - b \le 0 \end{aligned} \end{equation} If the number of complementarity constraints is moderate, MILP (\ref{eq:App-04-Linear-Bilevel-MILP}) can often be solved efficiently, despite its worst-case NP-hardness. Since MILP solvers and computation hardware keep improving, this technique is always worth bearing in mind. Please also be aware that the big-M parameter notably impacts the performance of solving MILP (\ref{eq:App-04-Linear-Bilevel-MILP}). A heuristic method for determining such a parameter in linear bilevel programs is proposed in \cite{App-04-LPCC-BigM}. This method first solves two LPs and generates a feasible solution of the equivalent MPCC; it then solves a regularized version of the MPCC model using NLP solvers and identifies a local optimal solution near the obtained feasible point; finally, the big-M parameter and the binary variables are initialized according to the local optimal solution. In this way, no manually supplied parameter is needed, and the MILP model is properly strengthened. A minimal numerical sketch of reformulation (\ref{eq:App-04-Linear-Bilevel-MILP}) is given below.
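The following sketch solves a tiny hypothetical instance with \texttt{scipy}'s MILP interface (assuming SciPy $\ge$ 1.9): the leader maximizes $x+2y$ over $0\le x\le 1$, and the follower solves $\min_y\, -y$ subject to $y \le 1-x$ and $-y\le 0$, so that $y(x)=1-x$ and the bilevel optimum is $x=0$, $y=1$. All numbers are illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variable order: [x, y, u1, u2, z1, z2]; u: duals, z: binaries
M = 100.0
c = np.array([-1.0, -2.0, 0, 0, 0, 0])       # minimize -(x + 2y)
rows = LinearConstraint(
    np.array([[0, 0, 1, -1, 0,  0],          # B^T u = f: u1 - u2 = -1
              [0, 0, 1,  0, -M, 0],          # u1 >= -M(1 - z1)
              [0, 0, 0,  1, 0, -M],          # u2 >= -M(1 - z2)
              [1, 1, 0,  0, M,  0],          # x + y - 1 >= -M z1
              [1, 1, 0,  0, 0,  0],          # x + y - 1 <= 0
              [0, -1, 0, 0, 0,  M]]),        # -y >= -M z2 (-y <= 0: bounds)
    lb=[-1, -M, -M, 1, -np.inf, 0],
    ub=[-1, np.inf, np.inf, np.inf, 1, np.inf])
res = milp(c, constraints=rows, integrality=[0, 0, 0, 0, 1, 1],
           bounds=Bounds([0, 0, -M, -M, 0, 0], [1, np.inf, 0, 0, 1, 1]))
print(res.x)    # -> [0, 1, -1, 0, 0, 1]: leader earns 2
\end{verbatim}
At the optimum, $z_1=0$ forces the first follower constraint to bind ($y=1-x$) with dual $u_1=-1$, while $z_2=1$ leaves the second constraint slack with $u_2=0$, exactly as the complementarity encoding prescribes.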
Another optimality certificate for the follower's LP is the following primal-dual optimality condition \begin{equation} \begin{gathered} B^T u = f,~ u \le 0,~ Ax + By \le b \\ u^T (b-Ax) = f^T y \end{gathered} \notag \end{equation} The first line summarizes the feasible regions of the primal and dual variables. The last equation enforces equality between the optimal values of the primal and dual problems, which is known as the strong duality condition. Replacing the follower's LP with the primal-dual optimality condition gives an NLP: \begin{equation} \label{eq:App-04-Linear-Bilevel-NLP} \begin{aligned} \max_{x,y,u} ~~ & c^T x + d^T y \\ \mbox{s.t.} ~~& C x \le e, ~ Ax + By \le b \\ & u \le 0,~ B^T u = f \\ & u^T (b-Ax) = f^T y \end{aligned} \end{equation} The following discussion is divided into three cases according to the type of variable $x$. a. $x$ is {\bf continuous}. In such a general situation, there is no effective way to solve problem (\ref{eq:App-04-Linear-Bilevel-NLP}), due to the last bilinear equality. Noticing that $f^T y \ge u^T (b-Ax)$ always holds on the feasible region because of weak duality, the last constraint can be relaxed and penalized in the objective function, resulting in a bilinear program over a polyhedron \cite{App-04-LBLP-Pen-1,App-04-LBLP-Pen-2,App-04-LBLP-Pen-3} \begin{equation} \label{eq:App-04-Linear-Bilevel-NLP-Pen} \begin{aligned} \max_{x,y,u} ~~ & c^T x + d^T y - \sigma [f^T y - u^T (b-Ax)] \\ \mbox{s.t.} ~~& C x \le e, ~ Ax + By \le b \\ & u \le 0,~ B^T u = f \end{aligned} \end{equation} where $\sigma > 0$ is a penalty parameter. In problem (\ref{eq:App-04-Linear-Bilevel-NLP-Pen}), the constraints on $u$ and on $(x,y)$ are decoupled, so this problem can be solved by Algorithm \ref{Ag:App-03-BLP-Mountain-Climbing} (mountain climbing) in Appendix \ref{App-C-Sect02-03}, if global optimality is not mandatory. In some problems, the upper-level decision influences the lower-level cost function and has no impact on the lower-level feasible region; tax-rate design and retail market pricing belong to this category. The same procedure can be applied to solve this kind of bilevel problem. We recommend the MILP model, because in the penalized model both $f^T y$ and $u^T Ax$ are non-convex. A tailored retail market model will be introduced later. b. $x$ is {\bf binary}. In this circumstance, the bilinear term $ u^T A x =\sum_{ij} A_{ij} u_i x_j$ can be linearized by replacing $u_i x_j$ with a new continuous variable $v_{ij}$, together with auxiliary linear inequalities enforcing $v_{ij}=u_i x_j$. In this way, the last inequality translates into \begin{equation*} \begin{gathered} u^T b - \sum\nolimits_{ij} A_{ij} v_{ij} = f^T y \\ u^l_i x_j \le v_{ij} \le 0,~ u^l_i (1-x_j) \le u_i-v_{ij} \le 0,~ \forall i,j \end{gathered} \end{equation*} where $u^l_i$ is a proper lower bound that does not cut off the original optimal solution. As we can see, a bilevel linear program with binary upper-level variables is not necessarily harder than an all-continuous instance. This formulation is very useful for modeling interdiction problems, in which $x$ mimics an attack strategy. c. $x$ can be {\bf discretized}. Even if $x$ is continuous, we can approximate it via the binary expansion \begin{equation*} x_i = x^l_i + {\rm \Delta}_i \sum_{k=0}^{K} 2^k z_{ik},~ z_{ik} \in \{0,1\} \end{equation*} where $x^l_i$ ($x^m_i$) is the lower (upper) bound of $x_i$, and ${\rm \Delta}_i = (x^m_i-x^l_i)/2^{K+1}$ is the step size. With this transformation, the bilinear term $u^T A x$ becomes \begin{equation*} \sum_{ij} A_{ij} u_i x^l_i + \sum_{ij} A_{ij} u_i {\rm \Delta}_j \sum_{k=0}^{K} 2^k z_{jk} \end{equation*} The first term is linear, and $u_i z_{jk}$ in the second term can be linearized in a similar way; however, this entails introducing continuous variables indexed by $i$, $j$, and $k$. A lower-complexity linearization method is suggested in \cite{App-04-BLLP-MILP-Sim}. It reorders the summations in the second term as \begin{equation*} \sum_j \sum_{k=0}^{K} {\rm \Delta}_j 2^k z_{jk} \sum_i A_{ij} u_i \end{equation*} which can be linearized by defining an auxiliary continuous variable $v_{jk} = z_{jk} \sum_i A_{ij} u_i$ and stipulating \begin{equation*} - M z_{jk} \le v_{jk} \le M z_{jk},~ - M (1-z_{jk}) \le \sum_i A_{ij} u_i - v_{jk} \le M (1-z_{jk}) \end{equation*} where $M$ is a sufficiently large constant.
The core idea behind this trick is to treat $u^T A$ as a single vector with the same dimension as $x$: in the bilinear form $x^T v = \sum_i x_i v_i$ the summation runs over one index, whereas in $x^T Q v = \sum_{ij} Q_{ij} x_i v_j$ it runs over two. This observation suggests conforming the vector dimensions when deploying such linearizations. \vspace{12pt} {\noindent \bf 2. Linear max-min problem} A linear max-min problem is a special case of the linear bilevel program and can be written as \begin{equation} \label{eq:App-04-Linear-Max-Min-1} \begin{aligned} \max_x ~~ & c^T x + d^T y(x) \\ \mbox{s.t.} ~~& x \in X \\ & \begin{aligned} y(x) \in \arg \min_y ~~ & c^T x + d^T y \\ \mbox{s.t.} ~~ & y \in Y,~ By \le b -Ax \end{aligned} \end{aligned} \end{equation} In problem (\ref{eq:App-04-Linear-Max-Min-1}), the follower pursues an objective completely opposite to the leader's. This kind of problem frequently arises in robust optimization and has been discussed in Appendix \ref{App-C-Sect02-03} from the computational perspective; here we revisit it from a game-theoretic point of view. Problem (\ref{eq:App-04-Linear-Max-Min-1}) can be expressed as the two-person zero-sum game \begin{equation} \label{eq:App-04-Linear-Max-Min-2} \max_{x \in X} \min_{y \in Y} \left\{ c^T x + d^T y ~\middle|~ Ax + By \le b \right\} \\ \end{equation} However, the coupled constraints make it different from a saddle-point problem in the sense of a Nash game or a matrix game; indeed, it is a Stackelberg game. Let us investigate the interchangeability of the max and min operators (the decision sequence). We have already shown in Appendix \ref{App-D-Sect01-04} that swapping the order of the max and min operators in a two-person zero-sum matrix game does not influence the equilibrium. However, this is not the case for (\ref{eq:App-04-Linear-Max-Min-2}) \cite{App-04-Linear-max-min}, because \begin{equation} \begin{aligned} & \max_{x \in X} \min_{y \in Y} \{ c^T x + d^T y ~|~ Ax + By \le b \} \\ =& \max_{x \in X} \left\{ c^T x + \min_{y \in Y} \{d^T y ~|~ By \le b - Ax\} \right\} \\ \ge & \max_{x \in X} \left\{ c^T x + \min_{y \in Y}~ d^T y \right\} \\ =& \max_{x \in X} ~ c^T x + \min_{y \in Y} ~ d^T y \\ =& \min_{y \in Y} \left\{ d^T y + \max_{x \in X} ~ c^T x \right\} \\ \ge & \min_{y \in Y} \left\{ d^T y + \max_{x \in X} \{ c^T x ~|~ Ax \le b - By \} \right\} \\ =& \min_{y \in Y} \max_{x \in X} \{ c^T x + d^T y ~|~ Ax + By \le b \} \end{aligned} \notag \end{equation} In fact, strict inequality usually holds in the third and sixth lines. This result implies that, owing to the presence of strategy coupling, the leader occupies a superior position, in contrast with a Nash game, in which the players hold symmetric positions. To solve the linear max-min problem (\ref{eq:App-04-Linear-Max-Min-2}), the aforementioned MILP transformation for general linear bilevel programs certainly offers a possible means for this task. Nevertheless, the special structure of (\ref{eq:App-04-Linear-Max-Min-2}) allows several alternatives that are more dedicated and effective. To this end, we transform it into an equivalent optimization problem using LP duality theory.
For ease of notation, we merge the polytope $Y$ into the coupled constraint; the dual of the lower-level LP in (\ref{eq:App-04-Linear-Max-Min-1}) (the inner LP in (\ref{eq:App-04-Linear-Max-Min-2})) then reads \begin{equation} \max_u ~ \{ u^T (b - Ax) ~|~ u \in U \} \notag \end{equation} where $U = \{u ~|~ B^T u = d,~ u \le 0\}$ is the feasible region of the dual variable $u$. As strong duality always holds for LPs, we have $d^T y = u^T (b-Ax)$. Substituting this into (\ref{eq:App-04-Linear-Max-Min-1}), we obtain \begin{equation} \label{eq:App-04-Linear-Max-Min-Bilinear} \begin{aligned} \max ~~ & c^T x + u^T b - u^T A x \\ \mbox{s.t.} ~~ & x \in X,~ u \in U \end{aligned} \end{equation} Problem (\ref{eq:App-04-Linear-Max-Min-Bilinear}) is a bilinear program due to the product term $u^T A x$ of the variables $u$ and $x$. Several methods for solving such a problem locally or globally have been set forth in Appendix \ref{App-C-Sect02-03}, as a fundamental methodology of robust optimization. Although the follower's variable $y$ does not appear in (\ref{eq:App-04-Linear-Max-Min-Bilinear}), it can easily be recovered from the lower level of (\ref{eq:App-04-Linear-Max-Min-1}) once the leader's strategy $x$ is obtained. \vspace{12pt} {\noindent \bf 3. A retail market problem} In a retail market, a retailer releases the prices of some goods; according to the retail prices, the customer decides on the optimal purchasing strategy, subject to the demand for each good as well as production constraints; finally, the retailer produces, or trades with a higher-level market, to manage its inventory, and delivers the goods to the customer. This retail market can be modeled by a bilevel program. In the upper level, \begin{subequations} \label{eq:App-04-RMarket-Retailer} \begin{align} \max_{x,z} ~~ & x^T D_C y(x) - p^T D_M z \label{eq:App-04-RMarket-UP-1}\\ \mbox{s.t.}~~ & A x \le a \label{eq:App-04-RMarket-UP-2} \\ & B_1 y(x) + B_2 z \le b \label{eq:App-04-RMarket-UP-3} \end{align} \end{subequations} (\ref{eq:App-04-RMarket-UP-1})-(\ref{eq:App-04-RMarket-UP-3}) form the retailer's problem, where the vector $x$ denotes the prices of goods released by the retailer; the vector $y(x)$ stands for the amounts of goods purchased by the customer, determined from an optimal production planning problem; $p$ is the production cost or the price in the higher-level market; and $z$ represents the production/purchase strategy of the retailer. The remaining matrices and vectors are constant coefficients. The first term in objective function (\ref{eq:App-04-RMarket-UP-1}) is the revenue paid by the customer, and the second term is the cost the retailer incurs for producing or purchasing the goods; the objective function is therefore the retailer's total profit, to be maximized. Because there is no competition and the retailer has full market power, to avoid unfair retail prices we assume that both sides have reached certain agreements on the pricing policy, modeled by constraint (\ref{eq:App-04-RMarket-UP-2}); it includes simple lower and upper bounds as well as other bilateral contract terms, such as a restriction on the average price over a certain period or the price correlation among multiple goods. The inventory dynamics and other technical constraints are captured by constraint (\ref{eq:App-04-RMarket-UP-3}). Given the retail prices, the customer solves the optimal production planning problem in the lower level \begin{equation} \label{eq:App-04-RMarket-Customer} \begin{aligned} \min_y ~~ & x^T D_C y \\ \mbox{s.t.}~~ & F y \ge f \end{aligned} \end{equation} and determines the optimal purchasing strategy.
The objective function in (\ref{eq:App-04-RMarket-Customer}) is the total cost of the customer, where the price vector $x$ enters as a constant coefficient; the constraints capture the demands and all other technical requirements of the production process. Bilevel program (\ref{eq:App-04-RMarket-Retailer})-(\ref{eq:App-04-RMarket-Customer}) is not linear, although (\ref{eq:App-04-RMarket-Customer}) is indeed an LP, because of the bilinear term $x^T D_C y$ in (\ref{eq:App-04-RMarket-UP-1}), where both $x$ and $y$ are variables (the retailer controls $y$ indirectly through the prices). The KKT condition of LP (\ref{eq:App-04-RMarket-Customer}) reads \begin{equation} \begin{gathered} D^T_C x = F^T u \\ 0 \le u \bot Fy - f \ge 0 \end{gathered} \notag \end{equation} where $u$ is the dual variable. The complementarity constraints can be linearized via binary variables, as clarified in Appendix \ref{App-B-Sect03-05}. Furthermore, strong duality gives \begin{equation} x^T D_C y = f^T u \notag \end{equation} whose right-hand side is linear in $u$. Therefore, problem (\ref{eq:App-04-RMarket-Retailer})-(\ref{eq:App-04-RMarket-Customer}) and the following MILP \begin{equation} \label{eq:App-04-RMarket-MILP} \begin{aligned} \max_{x,y,u,v,z} ~~ & f^T u - p^T D_M z \\ \mbox{s.t.} ~~ & A x \le a,~ B_1 y + B_2 z \le b \\ & v \in \mathbb B^{N_f},~ D^T_C x = F^T u \\ & 0 \le u \le M(1-v) \\ & 0 \le Fy - f \le Mv \end{aligned} \end{equation} have the same optimal solution in the primal variables, where $N_f$ is the dimension of $f$. We can learn from this example that when the problem exhibits a certain structure, the non-convexity can be eliminated without introducing additional dimensions of complexity. In problem (\ref{eq:App-04-RMarket-Retailer}), the price is a primal variable quoted by a decision maker and equals the knock-down price; this scheme is called pay-as-bid. Next, we give an example of a marginal-pricing market, in which the price is determined by the dual variables of a market clearing problem. \vspace{12pt} {\noindent \bf 4. A wholesale market problem} In a wholesale market, a provider bids its offering prices to a market organizer. The organizer collects information on the available resources and the provider's bids, and then clears the market by scheduling production in the most economical way. The provider is paid at the marginal cost. This problem can be modeled by the bilevel program \begin{equation} \label{eq:App-04-Pool-Market-Provider} \max_{\beta} ~ \lambda(\beta)^T p(\beta) - f(p(\beta)) \end{equation} where $\beta$ is the offering price vector of the provider, $p(\beta)$ is the quantity of goods ordered by the market organizer, $f(p) = \sum_i f_i(p_i)$ with $f_i(p_i)$ a univariate convex function representing the production cost, and $\lambda(\beta)$ collects the marginal prices of the goods.
Both $p(\beta)$ and $\lambda(\beta)$ depend on the value of $\beta$ and are determined from the market clearing problem in the lower level \begin{subequations} \label{eq:App-04-Pool-Market-MC} \begin{align} \min_{p,u} ~~ & \beta^T p + c^T u \label{eq:App-04-Pool-Market-MC-Obj} \\ \mbox{s.t.}~~ & p_n \le p \le p_m: \eta_n, \eta_m \label{eq:App-04-Pool-Market-Cons-1}\\ & p + F u = d: \lambda \label{eq:App-04-Pool-Market-Cons-2} \\ & A u \le a: \xi \label{eq:App-04-Pool-Market-Cons-3} \end{align} \end{subequations} where $u$ includes all other variables, such as the amounts of goods collected from other providers or produced locally, the system operating variables, and so on; $c$ is the corresponding cost coefficient, including the prices of goods offered by other providers and the production cost incurred if the organizer produces the goods itself. Objective function (\ref{eq:App-04-Pool-Market-MC-Obj}) represents the total cost in the market, to be minimized. Constraint (\ref{eq:App-04-Pool-Market-Cons-1}) defines the offering limits of the upper-level provider; constraint (\ref{eq:App-04-Pool-Market-Cons-2}) is the system-wide production-demand balancing condition for each good, whose dual variable $\lambda$ at the optimal solution gives the marginal cost of each good; (\ref{eq:App-04-Pool-Market-Cons-3}) imposes the constraints that the system operation must obey, such as network flow limits and inventory dynamics. In the provider's problem (\ref{eq:App-04-Pool-Market-Provider}), the offering price $\beta$ is not restricted by finite upper bounds or pricing policies (although such policies can certainly be modeled), because the competition appears in the lower level: if $\beta$ is not reasonable, the market organizer will resort to other providers or count on its own production capability. Compared with the situation in a retail market, problem (\ref{eq:App-04-Pool-Market-Provider})-(\ref{eq:App-04-Pool-Market-MC}) is even more complicated: the dual variable $\lambda$ appears in the objective function of the provider, and the term $\lambda^T p$ is non-convex. In the following, we reveal that it can be expressed exactly as a linear function of the primal and dual variables via (somewhat tricky) algebraic transformations.
KKT conditions of the market clearing LP (\ref{eq:App-04-Pool-Market-MC}) are summarized as follows \begin{subequations} \label{eq:App-04-Pool-Market-MC-KKT} \begin{gather} \beta = \lambda + \eta_n + \eta_m \label{eq:App-04-Pool-Market-MC-KKT-1} \\ \eta_n^T (p-p_n) = 0 \label{eq:App-04-Pool-Market-MC-KKT-2} \\ \eta_m^T (p_m-p) = 0 \label{eq:App-04-Pool-Market-MC-KKT-3} \\ c= A^T \xi + F^T \lambda \label{eq:App-04-Pool-Market-MC-KKT-4} \\ \xi^T (Au - a) = 0 \label{eq:App-04-Pool-Market-MC-KKT-5} \\ \eta_n \ge 0,~ \eta_m \le 0,~ \xi \le 0 \label{eq:App-04-Pool-Market-MC-KKT-6} \\ (\ref{eq:App-04-Pool-Market-Cons-1})-(\ref{eq:App-04-Pool-Market-Cons-3}) \label{eq:App-04-Pool-Market-MC-KKT-7} \end{gather} \end{subequations} According to (\ref{eq:App-04-Pool-Market-MC-KKT-1}), \begin{subequations} \begin{equation} \label{eq:App-04-Pool-Market-MILP-1} \beta^T p = \lambda^T p + \eta^T_n p + \eta^T_m p \end{equation} From (\ref{eq:App-04-Pool-Market-MC-KKT-2}) and (\ref{eq:App-04-Pool-Market-MC-KKT-3}) we have \begin{equation} \label{eq:App-04-Pool-Market-MILP-2} \eta_n^T p = \eta^T_n p_n,~ \eta_m^T p = \eta^T_m p_m \end{equation} Substituting (\ref{eq:App-04-Pool-Market-MILP-2}) in (\ref{eq:App-04-Pool-Market-MILP-1}) yields \begin{equation} \label{eq:App-04-Pool-Market-MILP-3} \lambda^T p = \beta^T p - \eta^T_n p_n - \eta^T_m p_m \end{equation} Furthermore, strong duality of LP implies the following equality \begin{equation} \beta^T p + c^T u = \eta^T_n p_n + \eta^T_m p_m + d^T \lambda + a^T \xi \notag \end{equation} or \begin{equation} \label{eq:App-04-Pool-Market-MILP-4} \beta^T p - \eta^T_n p_n - \eta^T_m p_m = d^T \lambda + a^T \xi - c^T u \end{equation} Substituting (\ref{eq:App-04-Pool-Market-MILP-4}) in (\ref{eq:App-04-Pool-Market-MILP-3}) results in \begin{equation} \label{eq:App-04-Pool-Market-MILP-5} \lambda^T p = d^T \lambda + a^T \xi - c^T u \end{equation} The right-hand side is a linear expression for $\lambda^T p$ in the primal variable $u$ and the dual variables $\lambda$ and $\xi$. \end{subequations} Combining the KKT condition (\ref{eq:App-04-Pool-Market-MC-KKT}) and (\ref{eq:App-04-Pool-Market-MILP-5}) gives an MPCC which is equivalent to the bilevel wholesale market problem (\ref{eq:App-04-Pool-Market-Provider})-(\ref{eq:App-04-Pool-Market-MC}) \begin{equation} \label{eq:App-04-Pool-Market-MPCC} \begin{aligned} \max ~~ & d^T \lambda + a^T \xi - c^T u - f(p) \\ \mbox{s.t.} ~~ & (\ref{eq:App-04-Pool-Market-MC-KKT-1})-(\ref{eq:App-04-Pool-Market-MC-KKT-7}) \end{aligned} \end{equation} Because complementarity conditions (\ref{eq:App-04-Pool-Market-MC-KKT-2}), (\ref{eq:App-04-Pool-Market-MC-KKT-3}), (\ref{eq:App-04-Pool-Market-MC-KKT-5}) can be linearized, and the convex function $f(p)$ can be approximated by PWL functions, MPCC (\ref{eq:App-04-Pool-Market-MPCC}) can be recast as an MILP. \subsection{Bilevel Mixed-integer Program} Although LP can tackle many economic problems and market activities in real life, there are indeed even more decision-making problems which are beyond the reach of LP, for example, power market clearing considering unit commitment \cite{App-04-BiMIP-TEP}. The KKT optimality condition and strong duality from LP theory do not apply to discrete optimization problems due to their intrinsic non-convexity. Furthermore, there is no computationally viable approach to express the optimality condition of a general discrete program in closed form, making bilevel mixed-integer programs much more challenging to solve than bilevel linear programs.
Some traditional algorithms either rely on enumerative branch-and-bound strategies based on a weak relaxation or depend on complicated operations that are problem-specific. To our knowledge, the reformulation and decomposition algorithm proposed in \cite{App-04-BiMIP-Zeng} is the first approach that can solve general bilevel mixed-integer programs in a systematic way, and will be introduced in this section. The bilevel mixed-integer program has the following form \begin{equation} \label{eq:App-04-BiMIP-Comp-1} \begin{aligned} \min ~~ & f^T x + g^T y + h^T z \\ \mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\ & (y,z) \in \arg \max~~ w^T y + v^T z \\ & \qquad \qquad \quad \mbox{s.t.}~~ P y + Nz \le r -K x \\ & \qquad \qquad \qquad ~~~ y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d} \end{aligned} \end{equation} where $x$ is the upper-level decision variable and appears in constraints of the lower-level problem; $y$ and $z$ represent the lower-level continuous decision variable and discrete decision variable, respectively. We do not distinguish upper-level continuous variables and discrete variables because they have little impact on the exposition of the algorithm, unlike the ones appearing in the lower level. If the lower-level problem has multiple optimal solutions, the follower chooses the one in favor of the leader. In the current form, the upper-level constraints are independent of lower-level variables. Nevertheless, coupling constraints in the upper level can be easily incorporated \cite{App-04-BiMIP-Yue}. In this section, we assume that the relatively complete recourse property in \cite{App-04-BiMIP-Zeng} holds, i.e., for any feasible pair $(x, z)$, the feasible set for the lower-level continuous variable $y$ is non-empty. Under this premise, the optimal solution exists. This assumption is mild because we can add slack variables in the lower-level constraints and penalize constraint violation in the lower-level objective function. For instances in which the relatively complete recourse property is missing, please refer to the remedy in \cite{App-04-BiMIP-Yue}. To eliminate the $\in$ qualifier in (\ref{eq:App-04-BiMIP-Comp-1}), we duplicate decision variables and constraints of the lower-level problem and set up an equivalent formulation: \begin{equation} \label{eq:App-04-BiMIP-Comp-2} \begin{aligned} \min ~~ & f^T x + g^T y^0 + h^T z^0 \\ \mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\ & K x + P y^0 + N z^0 \le r \\ & w^T y^0 + v^T z^0 \ge \max~~ w^T y + v^T z \\ & \qquad \qquad \qquad ~~~ \mbox{s.t.}~~ P y + Nz \le r -K x \\ & \qquad \qquad \qquad \qquad ~~ y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d} \end{aligned} \end{equation} In this formulation, the leader controls its original variable $x$ as well as the replicated variables $y^0$ and $z^0$. Conceptually, the leader will use $(y^0, z^0)$ to anticipate the response of the follower and its impact on his objective function. Clearly, if the lower-level problem has a unique optimal solution, it must be equal to $(y^0, z^0)$. It is worth mentioning that although more variables and constraints are incorporated in (\ref{eq:App-04-BiMIP-Comp-2}), this formulation is actually an informative and convenient expression for algorithm development, as the $\ge$ form is more friendly to general-purpose mathematical programming solvers. Up to now, the obstacle of solving (\ref{eq:App-04-BiMIP-Comp-2}) remains: the discrete variable $z$ in the lower level, which prevents the use of the optimality conditions of LP. (For tiny instances, a brute-force enumeration as sketched below provides a useful ground truth against which any reformulation can be tested.)
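The following brute-force sketch is our illustration, not part of \cite{App-04-BiMIP-Zeng}: it enumerates the binary $x$ and $z$, solves the remaining LP in $y$, and applies the optimistic tie-breaking rule. It assumes all leader variables are binary, that the data are NumPy arrays, and it boxes $y$ for safety:

\begin{verbatim}
# Brute-force optimistic solver for tiny instances of (Comp-1).
import itertools
import numpy as np
from scipy.optimize import linprog

def bruteforce_bilevel(f, g, h, A, b, w, v, K, P, N, r):
    best, best_val = None, np.inf
    for xs in itertools.product([0, 1], repeat=len(f)):
        x = np.array(xs, float)
        if np.any(A @ x > b + 1e-9):
            continue                       # leader infeasible
        # follower: max w^T y + v^T z s.t. P y <= r - K x - N z, z binary
        f_best, ties = -np.inf, []
        for zs in itertools.product([0, 1], repeat=len(v)):
            z = np.array(zs, float)
            res = linprog(-w, A_ub=P, b_ub=r - K @ x - N @ z,
                          bounds=[(0, 1e3)] * len(w), method="highs")
            if not res.success:
                continue
            val = -res.fun + v @ z
            if val > f_best + 1e-9:
                f_best, ties = val, [(res.x, z)]
            elif val > f_best - 1e-9:
                ties.append((res.x, z))
        for y, z in ties:                  # optimistic rule among ties
            if f @ x + g @ y + h @ z < best_val:
                best_val, best = f @ x + g @ y + h @ z, (x, y, z)
    return best, best_val
\end{verbatim}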
To overcome this difficulty, we treat $y$ and $z$ separately and restructure the lower-level problem as: \begin{equation} \label{eq:App-04-BiMIP-Obj-L} w^T y^0 + v^T z^0 \ge \max_{z \in Z}~ v^T z + \max_y \{w^T y |Py \le r-Kx-Nz\} \end{equation} where $Z$ represents the set consisting of all possible values of $z$. Despite the large cardinality of $Z$, the second optimization is a pure LP, and can be replaced with its KKT condition, resulting in: \begin{equation} \label{eq:App-04-BiMIP-L-LP-KKT} \begin{aligned} w^T y^0 + v^T z^0 & \ge \max_{z \in Z}~ v^T z + w^T y \\ & \qquad \mbox{s.t.}~ P^T \pi = w \\ & \qquad \quad ~~ 0 \le \pi \bot r-Kx-Nz-Py \ge 0 \end{aligned} \end{equation} The complementarity constraints can be linearized via the method in Sect. \ref{App-B-Sect03-05}. Then, by enumerating $z^j$ over $Z$ with associated variables $(y^j,\pi^j)$, we arrive at an MPCC that is equivalent to problem (\ref{eq:App-04-BiMIP-Comp-2}) \begin{equation} \label{eq:App-04-BiMIP-MPCC-Full} \begin{aligned} \min ~~ & f^T x + g^T y^0 + h^T z^0 \\ \mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\ & K x + P y^0 + N z^0 \le r,~ P^T \pi^j = w,~ \forall j \\ & 0 \le \pi^j \bot r - K x - N z^j - P y^j \ge 0,~ \forall j \\ & w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j \end{aligned} \end{equation} Needless to say, once the complementarity constraints are linearized, (\ref{eq:App-04-BiMIP-MPCC-Full}) is compatible with MILP solvers. Besides the KKT optimality condition, another popular approach entails applying the primal-dual condition to the LP regarding the lower-level continuous variable $y$. Following this line and rewriting the LP in (\ref{eq:App-04-BiMIP-Obj-L}) by strong duality, we obtain \begin{equation} \label{eq:App-04-BiMIP-L-LP-PD} \begin{aligned} w^T y^0 + v^T z^0 & \ge \max_{z \in Z}~ v^T z + \min ~ \pi^T ( r-Kx-Nz) \\ & \qquad \qquad \qquad ~ \mbox{s.t.}~ P^T \pi = w,~ \pi \ge 0 \end{aligned} \end{equation} In (\ref{eq:App-04-BiMIP-L-LP-PD}), if all variables in $x$ are binary, the bilinear terms $\pi^T K x$ and $\pi^T N z$ can be linearized via the method in Sect. \ref{App-B-Sect02-02}. The min operator on the right-hand side can be omitted because the upper-level objective function is to be minimized, giving rise to \begin{equation} \label{eq:App-04-BiMIP-BLIP-Full} \begin{aligned} \min ~~ & f^T x + g^T y^0 + h^T z^0 \\ \mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\ & K x + P y^0 + N z^0 \le r \\ & w^T y^0 + v^T z^0 \ge v^T z^j + ( r-Kx-Nz^j)^T \pi^j,~ \forall j \\ & P^T \pi^j = w,~ \pi^j \ge 0,~ \forall j \end{aligned} \end{equation} Clearly, (\ref{eq:App-04-BiMIP-BLIP-Full}) has fewer constraints compared to (\ref{eq:App-04-BiMIP-MPCC-Full}). Nevertheless, whenever $x$ contains continuous variables, linearizing $\pi^T K x$ would incur more binary variables. One may think that it is hopeless to solve the above enumeration forms (\ref{eq:App-04-BiMIP-MPCC-Full}) and (\ref{eq:App-04-BiMIP-BLIP-Full}) due to the large cardinality of $Z$. In a way similar to the CCG algorithm for solving robust optimization, we can start with a subset of $Z$ and solve a relaxed version of problem (\ref{eq:App-04-BiMIP-MPCC-Full}), until the lower bound and upper bound of the optimal value converge.
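The pivot in both (\ref{eq:App-04-BiMIP-L-LP-KKT}) and (\ref{eq:App-04-BiMIP-L-LP-PD}) is plain LP duality for the inner problem in $y$ once $(x,z)$ is fixed. A quick numerical confirmation with invented data (our sketch; the last rows of $P$ encode $y \ge 0$):

\begin{verbatim}
# Primal max w^T y s.t. P y <= rhs versus its dual
# min pi^T rhs s.t. P^T pi = w, pi >= 0, with rhs = r - K x - N z fixed.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
P = np.vstack([rng.uniform(0.1, 1.0, (3, 2)), -np.eye(2)])
rhs = np.concatenate([rng.uniform(2.0, 4.0, 3), np.zeros(2)])
w = np.array([1.0, 2.0])

primal = linprog(-w, A_ub=P, b_ub=rhs,
                 bounds=[(None, None)] * 2, method="highs")
dual = linprog(rhs, A_eq=P.T, b_eq=w,
               bounds=[(0, None)] * P.shape[0], method="highs")
print(-primal.fun, dual.fun)   # identical by strong duality
\end{verbatim}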
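Algorithm \ref{Ag:App-04-BiMIP-CCG}, stated next, organizes these pieces into a CCG loop. Its control flow can be captured by a short generic driver; the three subproblem solvers are passed in as callables because they require a problem-specific MILP backend, so this is a structural sketch rather than a complete implementation, and the function names are ours:

\begin{verbatim}
# Generic CCG driver; step numbers refer to the algorithm below.
def ccg_bilevel(solve_master, solve_follower, solve_refine, f_x,
                max_iter=50, tol=1e-6):
    # solve_master(cuts)     -> (x, LB)       step 2, problem (CCG-Master)
    # solve_follower(x)      -> theta         step 3, problem (CCG-SP-1)
    # solve_refine(x, theta) -> (Theta, y, z) step 4, problem (CCG-SP-2)
    LB, UB, cuts, best = -float("inf"), float("inf"), [], None
    for _ in range(max_iter):
        x, LB = solve_master(cuts)
        theta = solve_follower(x)
        Theta, y, z = solve_refine(x, theta)
        if f_x(x) + Theta < UB:
            UB, best = f_x(x) + Theta, (x, y, z)
        if UB - LB <= tol:
            return best, LB, UB            # step 5: converged
        cuts.append(z)                     # new scenario z^{l+1}
    return best, LB, UB
\end{verbatim}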
The flowchart is shown in Algorithm \ref{Ag:App-04-BiMIP-CCG} \begin{algorithm}[!t] \normalsize \caption{\bf : CCG algorithm for bilevel MILP} \begin{algorithmic}[1] \STATE Set LB $=-\infty$, UB $= +\infty$, and $l = 0$; \STATE Solve the following master problem \begin{equation} \label{eq:App-04-BiMIP-CCG-Master} \begin{aligned} \min ~~ & f^T x + g^T y^0 + h^T z^0 \\ \mbox{s.t.} ~~ & Ax \le b,~ x \in \mathbb R^{m_c} \times \mathbb B^{m_d}\\ & K x + P y^0 + N z^0 \le r,~ P^T \pi^j = w,~ \forall j \le l \\ & 0 \le \pi^j \bot r - K x - N z^j - P y^j \ge 0,~ \forall j \le l \\ & w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j \le l \end{aligned} \end{equation} The optimal solution is $(x^*, y^{0*}, z^{0*}, y^{1*},\cdots,y^{l*}, \pi^{1*}, \cdots, \pi^{l*})$, and the optimal value is $v^*$. Update the lower bound LB $=v^*$. \STATE Solve the following lower-level MILP with the obtained $x^*$ \begin{equation} \label{eq:App-04-BiMIP-CCG-SP-1} \begin{aligned} \theta (x^*) = \max ~~ & w^T y + v^T z \\ \mbox{s.t.}~~ & P y + Nz \le r - K x^* \\ & y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d} \end{aligned} \end{equation} The optimal value is $\theta (x^*)$. \STATE Solve an additional MILP to refine a solution in favor of the leader \begin{equation} \label{eq:App-04-BiMIP-CCG-SP-2} \begin{aligned} {\rm \Theta} (x^*) = \min ~~ & g^T y + h^T z \\ \mbox{s.t.}~~ & w^T y + v^T z \ge \theta(x^*) \\ & P y + Nz \le r - K x^* \\ & y \in \mathbb R^{n_c},~ z \in \mathbb B^{n_d} \end{aligned} \end{equation} The optimal solution is $(y^*,z^*)$, and the optimal value is ${\rm \Theta}(x^*)$. Update the upper bound UB $= \min \{\mbox{UB}, f^T x^* + {\rm \Theta} (x^*)\}$. \STATE If UB $-$ LB $=0$, terminate and report the optimal solution; otherwise, set $z^{l+1}=z^*$, create new variables $(y^{l+1}, \pi^{l+1})$, and add the following cuts to the master problem \begin{equation*} \begin{gathered} w^T y^0 + v^T z^0 \ge w^T y^{l+1} + v^T z^{l+1} \\ 0 \le \pi^{l+1} \bot r - K x - N z^{l+1} - P y^{l+1} \ge 0,~ P^T \pi^{l+1} = w \end{gathered} \end{equation*} Update $l \leftarrow l+1$, and go to step 2. \end{algorithmic} \label{Ag:App-04-BiMIP-CCG} \end{algorithm} Because $Z$ has finitely many elements, Algorithm \ref{Ag:App-04-BiMIP-CCG} must terminate in a finite number of iterations, which is bounded by the cardinality of $Z$. When it converges, LB equals UB without a positive gap. To see this, suppose that in iteration $l_1$, $(x^*, y^{0*}, z^{0*})$ is obtained in step 2 with LB $<$ UB, and $z^*$ is produced in step 4. Particularly, we assume that $z^*$ was previously derived in some iteration $l_0 < l_1$. Then, in step 5, new variables and cuts associated with $z^* = z^{l_1+1}$ will be generated and augmented with the master problem. As those variables and constraints already exist after iteration $l_0$, the augmentation is essentially redundant, and the optimal value of the master problem in iteration $l_1+1$ remains the same as that in iteration $l_1$, and so does LB.
Consequently, in iteration $l_1+1$ \begin{equation*} \begin{aligned} \mbox{LB} & = f^T x^* + g^T y^{0*} + h^T z^{0*} \\ & = f^T x^* + \min~ g^T y^0 + h^T z^0 \\ & \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^*,~ P^T \pi^j = w,~ \forall j \le l_1+1 \\ & \qquad \qquad \qquad 0 \le \pi^j \bot r - K x^* - N z^j - P y^j \ge 0,~ \forall j \le l_1+1 \\ & \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ \forall j \le l_1+1 \\ & \ge f^T x^* + \min~ g^T y^0 + h^T z^0 \\ & \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^*,~ P^T \pi^j = w,~ j = l_1+1 \\ & \qquad \qquad \qquad 0 \le \pi^j \bot r - K x^* - N z^j - P y^j \ge 0,~ j = l_1+1 \\ & \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge w^T y^j + v^T z^j,~ j = l_1+1\\ & \ge f^T x^* + \min~ g^T y^0 + h^T z^0 \\ & \qquad \qquad ~~ \mbox{s.t.}~ P y^0 + N z^0 \le r - Kx^* \\ & \qquad \qquad \qquad w^T y^0 + v^T z^0 \ge \theta(x^*) \\ & = f^T x^* + {\rm \Theta}(x^*) \end{aligned} \end{equation*} The second $\ge$ follows from the fact that $z^{l_1+1}$ is the optimal solution to problem (\ref{eq:App-04-BiMIP-CCG-SP-1}) and the KKT conditions in the constraints warrant that $v^Tz^{l_1+1} + w^Ty^{l_1+1} = \theta(x^*)$. In the next iteration, the algorithm terminates since LB $\ge$ UB. It should be pointed out that although a large number of variables and constraints are generated in step 5, in practice, Algorithm \ref{Ag:App-04-BiMIP-CCG} often converges to an optimal solution within a small number of iterations that could be drastically smaller than the cardinality of $Z$, because the most critical scenarios in $Z$ can be discovered from problem (\ref{eq:App-04-BiMIP-CCG-SP-1}). It is suggested in \cite{App-04-BiMIP-Zeng} that the master problem could be tightened by introducing variables $(\hat y, \hat \pi)$ representing the primal and dual variables of the lower-level problem corresponding to $(x, z^0)$ and augmenting the following constraints \begin{equation*} \begin{gathered} w^T y^0 + v^T z^0 \ge w^T \hat y + v^T z^0 \\ 0 \le \hat \pi \bot r - K x - N z^0 - P \hat y \ge 0,~ P^T \hat \pi = w \end{gathered} \end{equation*} It is believed that such constraints include some useful information that is parametric not only in $x$ but also in $z^0$, and is not available from any fixed samples $z^1,\cdots,z^l$. It is also pointed out that for instances with pure integer variables in the lower-level problem, this strategy is generally ineffective. \section{Mathematical Programs with Equilibrium Constraints} \label{App-D-Sect04} A mathematical program with equilibrium constraints (MPEC) is an extension of the bilevel program by incorporating multiple followers competing with each other, resulting in a GNEP in the lower level. In this regard, an MPEC is a single-leader-multi-follower Stackelberg game. In a broader sense, an MPEC is an optimization problem with variational inequality constraints. MPECs are difficult to solve because of the complementarity constraints. \subsection{Mathematical Formulation} \label{App-D-Sect04-01} In an MPEC, the leader deploys its action $x$ prior to the followers; then each follower selects its optimal decision $y_j$ taking the decision of the leader $x$ and rivals' strategies $y_{-j}$ as given. The MPEC can be formulated in two levels: \begin{subequations} \label{eq:App-04-MPEC-1} \begin{align} \mbox{Leader:} \qquad & \left\{ \begin{aligned} \min_{x,\bar y, \bar \lambda, \bar \mu} ~~ & F(x,\bar y, \bar \lambda, \bar \mu) \\ \mbox{s.t.} ~~ & G(x,\bar y) \le 0 \\ & (\bar y, \bar \lambda, \bar \mu) \in S(x) \end{aligned} \right.
\label{eq:App-04-MPEC-Leader} \\ \mbox{Followers:} \qquad & \left\{ \begin{aligned} \min_{y_j,\lambda_j, \mu_j} ~~ & f_j (x,y_j,y_{-j}) \\ \mbox{s.t.} ~~ & g_j (x,y_j) \le 0 : \mu_j \\ & h(x,y) \le 0 : \lambda_j \end{aligned} \right\},~ \forall j \label{eq:App-04-MPEC-Followers} \end{align} \end{subequations} In (\ref{eq:App-04-MPEC-Leader}), the leader minimizes its payoff function $F$, which depends on its own choice $x$, the decisions of the followers $y$, and the dual variables $\lambda$ and $\mu$ from the lower level, because these dual variables may represent the prices of goods determined by the lower-level market clearing model. Constraints include inequalities and equalities (as a pair of opposite inequalities), as well as the optimality condition of the lower-level problem. In (\ref{eq:App-04-MPEC-Followers}), $x$ is treated as a parameter, and the competition among followers comes down to a GNEP with shared convex constraints: the payoff function $f_j(x,y_j,y_{-j})$ of follower $j$ is assumed to be convex in $y_j$; inequality $g_j(x,y_j) \le 0$ defines a local constraint of follower $j$ which is convex in $y_j$ and does not involve $y_{-j}$; inequality $h(x,y) \le 0$ is the shared constraint which is convex in $y$. Since each follower's problem is convex, the KKT condition is both necessary and sufficient for optimality. We assume that the set of GNEP solutions $S(x)$ is always non-empty. The lower-level GNEP encompasses several special cases. If the global constraint is absent, it degenerates into an NEP; moreover, if the objective functions of the followers are also decoupled, the lower level reduces to independent convex optimization programs. By replacing the lower-level GNEP with its KKT condition (\ref{eq:App-04-GNEP-KKT}), the MPEC (\ref{eq:App-04-MPEC-1}) becomes an MPCC, which can be solved by some suitable methods explained before. As the lower-level GNEP usually possesses infinitely many equilibria, the outcome found by the MPCC reformulation is the most favourable one from the leader's perspective. We can also require that the Lagrange multipliers for the shared constraints be equal, so as to restrict the GNEP to VEs. If the followers' problems are linear, the primal-dual optimality condition is an alternative choice in addition to the KKT condition, as it often involves fewer constraints. Nevertheless, the strong duality may introduce products involving primal and dual variables, such as those in (\ref{eq:App-04-Linear-Bilevel-NLP}) and (\ref{eq:App-04-Linear-Bilevel-NLP-Pen}), which remain non-convex and require special treatment. \subsection{Linear Complementarity Problem} \label{App-D-Sect04-02} A linear complementarity problem (LCP) requires finding a feasible solution subject to the following constraints \begin{equation} \label{eq:App-04-LCP-1} 0 \le x \bot P x + q \ge 0 \end{equation} where $P$ is a square matrix and $q$ is a vector; their dimensions are compatible with $x$. An LCP is a special case of MPCC without an objective function. This type of problem frequently arises in various disciplines including market equilibrium analysis, computational mechanics, game theory, and mathematical programming. The theory of LCPs is a well-developed field. Detailed discussions can be found in \cite{App-04-LCP-Book}. In general, an LCP is NP-hard, although it is polynomially solvable in some special cases. One situation is when the matrix $P$ is positive semidefinite.
In such a circumstance, problem (\ref{eq:App-04-LCP-1}) can be solved via the following convex quadratic program \begin{equation} \label{eq:App-04-LCP-CQP} \begin{aligned} \min~~ & x^T P x + q^T x \\ \mbox{s.t.}~~ & x \ge 0,~ P x + q \ge 0 \end{aligned} \end{equation} (\ref{eq:App-04-LCP-CQP}) is a CQP which is readily solvable. Its optimal value must be non-negative, because the constraints imply $x^T P x + q^T x = x^T (Px+q) \ge 0$. If the optimal value of (\ref{eq:App-04-LCP-CQP}) is 0, then its optimal solution also solves LCP (\ref{eq:App-04-LCP-1}); otherwise, if the optimal value is strictly positive, LCP (\ref{eq:App-04-LCP-1}) is infeasible. In fact, this conclusion holds no matter whether $P$ is positive semidefinite or not. However, if $P$ is indefinite, identifying the global optimum of the non-convex QP (\ref{eq:App-04-LCP-CQP}) is also NP-hard, and thus does not facilitate solving the LCP. There is a large body of literature discussing algorithms for solving LCPs. One of the most representative ones is Lemke's pivoting method developed in \cite{App-04-LCP-Lemke}, and another emblematic one is the interior-point method proposed in \cite{App-04-LCP-IPO}. One drawback of the former method is its exponentially growing worst-case complexity, which makes it less efficient for large problems. The latter approach runs in polynomial time, but it requires the positive semidefiniteness of $P$, which is a strong assumption and limits its application. In this section, we will not present comprehensive reviews on the algorithms for LCP. We will introduce MILP formulations for problem (\ref{eq:App-04-LCP-1}) devised in \cite{App-04-LCP-MILP-1,App-04-LCP-MILP-2}. They make no reference to any special structure of matrix $P$. More importantly, they offer an option to access the solutions of practical problems in a systematic way. Recalling the MILP formulation techniques presented in Appendix \ref{App-B-Sect03-05}, it is easy to see that problem (\ref{eq:App-04-LCP-1}) can be equivalently expressed as linear constraints with an additional binary variable $z$ as follows \begin{equation} \label{eq:App-04-LCP-MILC} 0 \le x \le Mz,~ 0 \le P x + q \le M(1-z) \end{equation} Integrality of $z$ maintains the element-wise complementarity of $x$ and $Px+q$: at most one of $x_i$ and $(Px+q)_i$ can be strictly positive. Formulation (\ref{eq:App-04-LCP-MILC}) entails a manually specified parameter $M$, which is not readily available. On the one hand, it must be big enough to preserve all extreme points of (\ref{eq:App-04-LCP-1}). On the other hand, it is expected to be as small as possible from a computational perspective; otherwise, the continuous relaxation of (\ref{eq:App-04-LCP-MILC}) would be very loose. In this regard, (\ref{eq:App-04-LCP-MILC}) is rather crude, although it might work well in practice. To circumvent the above difficulty, it is proposed in \cite{App-04-LCP-MILP-1} to solve a bilinear program without a big-M parameter \begin{equation} \label{eq:App-04-LCP-BLP} \begin{aligned} \min_{x,z}~~ & z^T ( P x + q ) + ( {\bf 1} - z )^T x \\ \mbox{s.t.}~~ & x \ge 0,~ P x + q \ge 0,~ z \mbox{ binary} \end{aligned} \end{equation} If (\ref{eq:App-04-LCP-1}) has a solution $x^*$, the optimal value of (\ref{eq:App-04-LCP-BLP}) is 0: for $x^*_i>0$, we have $z^*_i=1$ and $(P x^* + q)_i=0$; for $(P x^* + q)_i>0$, we have $z^*_i=0$ and $x^*_i=0$. The optimal solution is consistent with the feasible solution of (\ref{eq:App-04-LCP-MILC}). The objective can be linearized by introducing auxiliary variables $w_{ij}=z_i x_j$, $\forall i,j$.
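Returning to the positive semidefinite case for a moment, formulation (\ref{eq:App-04-LCP-CQP}) is easy to try out with a general-purpose NLP routine. The following sketch uses invented data; the optimal value near zero certifies an LCP solution:

\begin{verbatim}
# Solve an LCP with PSD P via the convex QP min x^T P x + q^T x
# s.t. x >= 0, P x + q >= 0.
import numpy as np
from scipy.optimize import LinearConstraint, minimize

P = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -3.0])

res = minimize(lambda x: x @ P @ x + q @ x, x0=np.ones(2),
               jac=lambda x: (P + P.T) @ x + q,
               bounds=[(0, None)] * 2,
               constraints=[LinearConstraint(P, -q, np.inf)])
print(res.fun)                  # ~0, so res.x solves the LCP
print(res.x, P @ res.x + q)     # x = (0, 1.5) pairs with (0.5, 0)
\end{verbatim}

Here the optimal value $0$ is attained at $x=(0,1.5)$, which can be verified by hand: $x_1 = 0$ is complementary to $(Px+q)_1 = 0.5$, and $(Px+q)_2 = 0$ is complementary to $x_2 = 1.5$.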
However, applying the standard integer formulation techniques of Appendix \ref{App-B-Sect02-02} to the variables $w_{ij}$ again requires an upper bound on $x_j$, which is just another incarnation of the big-M parameter. A parameter-free MILP formulation is suggested in \cite{App-04-LCP-MILP-1}. To understand the basic idea, recall the fact that $(1-z_i)x_i=0$; if we impose $x_i=w_{ii}=x_iz_i$, $i=1,2,\cdots$ in the constraints, $({\bf 1} - z )^T x$ in the objective can be omitted. Furthermore, multiplying both sides of $\sum_j P_{kj} x_j + q_k \ge 0$, $k=1,2,\cdots$ by $z_i$ gives $\sum_j P_{kj} w_{ij} + q_k z_i\ge 0$, $\forall i,k$. Since $z_i \in \{0,1\}$, $\sum_j P_{kj} x_j + q_k \ge \sum_j P_{kj} w_{ij} + q_k z_i$, $\forall i,k$ and $0 \le w_{ij} \le x_j$, $\forall i,j$ naturally hold. Collecting these valid inequalities, we obtain an MILP \begin{equation} \label{eq:App-04-LCP-MILP-1} \begin{aligned} \min_{x,z,w}~~ & q^T z + \sum_i \sum_j P_{ij} w_{ij} \\ \mbox{s.t.}~~ & \sum_j P_{kj} x_j + q_k \ge \sum_j P_{kj} w_{ij} + q_k z_i \ge 0,~ \forall i,k \\ & 0 \le w_{ij} \le x_j,~ \forall i,j,~ w_{jj} = x_j,~ \forall j, ~ z \mbox{ binary} \end{aligned} \end{equation} Instead of enforcing every $\sum_j P_{kj} w_{ij} + q_k z_i$ to vanish, we relax them to inequalities and minimize their sum. More valid inequalities can be added in (\ref{eq:App-04-LCP-MILP-1}) by exploiting linear cuts of $z$. It is proved in \cite{App-04-LCP-MILP-1} that the relation $w_{ij}=z_i x_j$, $\forall i,j$ is implicitly guaranteed at the optimal solution of (\ref{eq:App-04-LCP-MILP-1}). In view of this, MILP (\ref{eq:App-04-LCP-MILP-1}) is equivalent to LCP (\ref{eq:App-04-LCP-1}) in the following sense: (\ref{eq:App-04-LCP-1}) has a solution if and only if (\ref{eq:App-04-LCP-MILP-1}) has an optimal value equal to zero, and the optimal solution to (\ref{eq:App-04-LCP-MILP-1}) incurring a zero objective value is a solution of LCP (\ref{eq:App-04-LCP-1}). MILP (\ref{eq:App-04-LCP-MILP-1}) is superior to (\ref{eq:App-04-LCP-MILC}) and to the big-M linearization based MILP formulation of MINLP (\ref{eq:App-04-LCP-BLP}) because it is parameter-free and gives a tighter continuous relaxation. Nevertheless, the number of constraints in (\ref{eq:App-04-LCP-MILP-1}) is significantly larger than that in formulation (\ref{eq:App-04-LCP-MILC}). This method has been further analyzed in \cite{App-04-LCP-MILP-2} and extended to binary-constrained mixed LCPs. Another parameter-free MILP formulation is suggested in \cite{App-04-LCP-MILP-3}, which takes the form of \begin{equation} \label{eq:App-04-LCP-MILP-2} \begin{aligned} \max_{\alpha,y,z}~~ & \alpha \\ \mbox{s.t.}~~ & 0 \le (P y)_i + q_i \alpha \le 1-z_i,~\forall i \\ & 0 \le y_i \le z_i,~ z_i \in \{0,1\},~ \forall i \\ & 0 \le \alpha \le 1 \end{aligned} \end{equation} Since $\alpha =0$, $y=0$, $z=0$ is always feasible, MILP (\ref{eq:App-04-LCP-MILP-2}) is feasible and has an optimum no greater than 1. By observing the constraints, we can conclude that if MILP (\ref{eq:App-04-LCP-MILP-2}) has a feasible solution with $\bar \alpha > 0$, then $x=y/\bar \alpha$ solves problem (\ref{eq:App-04-LCP-1}). If the optimal value is $\bar \alpha= 0$, then problem (\ref{eq:App-04-LCP-1}) has no solution; otherwise, suppose $\bar x$ solves (\ref{eq:App-04-LCP-1}), and let $\bar \alpha^{-1} = \max \{\bar x_i, (P\bar x)_i+q_i,i=1,\cdots\}$; then for any $0 <\alpha \le \bar \alpha$, $\bar y = \alpha \bar x$ is feasible in (\ref{eq:App-04-LCP-MILP-2}).
As a result, the optimal value should be no less than $\bar \alpha$, rather than 0. Compared with formulation (\ref{eq:App-04-LCP-MILC}), the big-M parameter is adaptively scaled by optimizing $\alpha$. Because (\ref{eq:App-04-LCP-MILP-2}) works with an intermediate variable $y$, when LCP (\ref{eq:App-04-LCP-1}) must be solved jointly with other conditions on $x$, formulation (\ref{eq:App-04-LCP-MILP-2}) is not advantageous, because the non-convex variable transformation $x=y/\alpha$ must be appended to link both parts. Robust solutions of LCPs with uncertain $P$ and $q$ are discussed in \cite{App-04-Robust-LCP}. It is found that when $P \succeq 0$, robust solutions can be extracted from an SOCP under some mild assumptions on the uncertainty set; otherwise, the more general problem with uncertainty can be reduced to a deterministic non-convex QCQP. This technique is particularly useful in uncertain traffic equilibrium problems and uncertain Nash-Cournot games. Uncertain VI problems and MPCCs can be tackled in a similar vein after some proper transformations. It is shown in \cite{App-04-LCP-BLP-MPEC} that a linear bilevel program or its equivalent MPEC can be globally solved via a sequential LCP method. A hybrid enumerative method is suggested which substantially reduces the effort for searching a solution of the LCP or certifying that the LCP has no solution. When the LCP is easy to solve, this approach is attractive. Several extensions of LCP, including the discretely-constrained mixed LCP, discretely-constrained Nash-Cournot game, discretely-constrained MPEC, and logic constrained equilibrium problem as well as their applications in energy markets and traffic system equilibrium have been investigated in \cite{App-04-DM-LCP-1,App-04-DM-LCP-2,App-04-DM-LCP-3,App-04-DM-LCP-4}. In a word, due to its wide applications, the LCP is still an active research field, and MILP remains an appealing method for solving LCPs arising from practical problems. \subsection{Linear Programs with Complementarity Constraints} A linear program with complementarity constraints (LPCC) entails solving a linear optimization problem with linear complementarity constraints. It is a special case of MPCC if all functions in the problem are linear, and a generalization of LCP by incorporating an objective function to be optimized. An LPCC has the following form \begin{equation} \label{eq:App-04-LPCC} \begin{aligned} \max_{x,y}~~ & c^T x + d^T y \\ \mbox{s.t.}~~ & Ax + By \ge f \\ & 0 \le y \bot q + N x + M y \ge 0 \end{aligned} \end{equation} A standard approach for solving (\ref{eq:App-04-LPCC}) is to linearize the complementarity constraint by introducing a binary vector $z$ and to solve the following MILP \begin{equation} \label{eq:App-04-LPCC-MILP-BigM} \begin{aligned} \max_{x,y,z}~~ & c^T x + d^T y \\ \mbox{s.t.}~~ & Ax + By \ge f \\ & 0 \le q + N x + M y \le M z \\ & 0 \le y \le M(1-z) \\ & z \in \{0,1\}^m \end{aligned} \end{equation} If both $x$ and $y$ are bounded variables, we can readily derive a proper value of $M$ in each inequality; otherwise, finding high quality bounds is nontrivial even if they do exist. The method in \cite{App-04-LPCC-BigM} can be used to determine proper bounds for $M$, if the NLP solver can successfully find local solutions of the bounding problems. Using an arbitrarily large value may solve the problem correctly. Nevertheless, parameter-free methods are still of great theoretical interest; the parameter-free formulation (\ref{eq:App-04-LCP-MILP-2}) is exercised in the sketch below.
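As an illustration, the following PuLP sketch (ours, using the CBC solver shipped with PuLP) applies the parameter-free formulation (\ref{eq:App-04-LCP-MILP-2}) to the same toy LCP used above and recovers $x = y/\bar\alpha$:

\begin{verbatim}
# Parameter-free MILP (LCP-MILP-2) for the toy LCP with
# P = [[2,1],[1,2]], q = [-1,-3]; expected output: x = (0, 1.5).
import pulp

P = [[2.0, 1.0], [1.0, 2.0]]; q = [-1.0, -3.0]; n = 2
prob = pulp.LpProblem("LCP", pulp.LpMaximize)
alpha = pulp.LpVariable("alpha", 0, 1)
y = [pulp.LpVariable(f"y{i}", 0, 1) for i in range(n)]
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n)]
prob += alpha
for i in range(n):
    expr = pulp.lpSum(P[i][j] * y[j] for j in range(n)) + q[i] * alpha
    prob += expr >= 0
    prob += expr <= 1 - z[i]      # z_i = 0 forces (Py)_i + q_i alpha = 0
    prob += y[i] <= z[i]          # z_i = 0 forces y_i = 0
prob.solve(pulp.PULP_CBC_CMD(msg=False))
a_bar = alpha.value()
print([yi.value() / a_bar for yi in y] if a_bar > 1e-9 else "no solution")
\end{verbatim}

For this data the optimum is $\bar\alpha = 2/3$ with $y = (0,1)$, so the printed solution is $x = (0, 1.5)$, in agreement with the QP approach.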
A smart Benders decomposition algorithm is proposed in \cite{App-04-LPCC-Benders} to solve (\ref{eq:App-04-LPCC-MILP-BigM}) without requiring the value of $M$. The completely positive programming method developed in \cite{App-04-LPCC-CPP-Relax} can also be used to solve (\ref{eq:App-04-LPCC-MILP-BigM}). For more theory and algorithms for LPCCs, please see \cite{App-04-LPCC-DCP}, \cite{App-04-LPCC-1}-\cite{App-04-LPCC-7} and references therein. Interesting connections among conic QPCCs, QCQPs, and completely positive programs are revealed in \cite{App-04-LPCC-8}. \section{Equilibrium Programs with Equilibrium Constraints} \label{App-D-Sect05} An equilibrium program with equilibrium constraints (EPEC) is the most general extension of the bilevel program. It incorporates multiple leaders and multiple followers competing with each other in the upper level and the lower level, respectively, resulting in a GNEP at each level. In this regard, an EPEC is a multi-leader-follower Stackelberg game. \subsection{Mathematical model} \label{App-D-Sect05-01} In an EPEC, each leader $i$ deploys an action $x_i$ prior to the followers while taking the movements of other leaders $x_{-i}$ into account and anticipating the best responses $y(x)$ of the followers; then each follower selects its optimal decision $y_j$ by taking the strategies of the leaders $x$ and rivals' actions $y_{-j}$ as given. The EPEC can be formulated in two levels \begin{subequations} \label{eq:App-04-EPEC-1} \begin{align} \mbox{Leaders:} \qquad & \left\{ \begin{aligned} \min_{x_i,\bar y, \bar \lambda, \bar \mu} ~~ & F_i (x_i,x_{-i},\bar y, \bar \lambda, \bar \mu) \\ \mbox{s.t.} ~~ & G_i (x_i) \le 0 \\ & (\bar y, \bar \lambda, \bar \mu) \in S(x_i,x_{-i}) \end{aligned} \right\},~\forall i \label{eq:App-04-EPEC-Leaders} \\ \mbox{Followers:} \qquad & \left\{ \begin{aligned} \min_{y_j} ~~ & f_j (x,y_j,y_{-j}) \\ \mbox{s.t.} ~~ & g_j (x,y_j) \le 0 : \mu_j \\ & h(x,y) \le 0 : \lambda_j \end{aligned} \right\},~ \forall j \label{eq:App-04-EPEC-Followers} \end{align} \end{subequations} In (\ref{eq:App-04-EPEC-Leaders}), each leader minimizes its payoff function $F_i$, which depends on its own choice $x_i$, the decisions of the followers $y$, and the dual variables $\lambda$ and $\mu$, and is parameterized in the competitors' strategies $x_{-i}$. The tuple $(\bar y, \bar \lambda, \bar \mu)$ in the upper level is restricted by the optimality condition of the lower-level problem. Although the inequality constraints of the leaders are decoupled, and we do not explicitly consider global constraints in the upper-level GNEP, the leaders' strategy sets as well as their payoff functions are still correlated through the best reaction map $S(x_i,x_{-i})$, and hence (\ref{eq:App-04-EPEC-Leaders}) itself is a GNEP, which is non-convex. The followers' problem (\ref{eq:App-04-EPEC-Followers}) is a GNEP with shared constraints, which is the same as the situation in an MPEC. The same convexity assumptions are made in (\ref{eq:App-04-EPEC-Followers}). The structure of EPEC (\ref{eq:App-04-EPEC-1}) is depicted in Fig. \ref{fig:App-04-04}. The equilibrium solution of EPEC (\ref{eq:App-04-EPEC-1}) is defined as the GNE among the leaders' MPECs. It is common knowledge that EPECs often have no pure strategy equilibrium due to the intrinsic non-convexity of MPECs.
\begin{figure}[!htp] \centering \includegraphics[scale=0.60]{Fig-App-04-04} \caption{The structure of an EPEC.} \label{fig:App-04-04} \end{figure} \subsection{Methods for Solving an EPEC} \label{App-D-Sect05-02} An EPEC can be viewed as a set of coupled MPEC problems: leader $i$ faces an MPEC composed of problem $i$ in (\ref{eq:App-04-EPEC-Leaders}) together with all followers' problems in (\ref{eq:App-04-EPEC-Followers}), which is parameterized in $x_{-i}$. By replacing the lower-level GNEP with its KKT optimality conditions, it can be imagined that the GNEP among leaders has non-convex constraints which inherit the troublesome properties of complementarity constraints. Thus, solving an EPEC is usually extremely challenging. To our knowledge, systematic algorithms for EPECs were first developed in the dissertations \cite{App-04-EPEC-Algorithm-1,App-04-EPEC-Algorithm-2,App-04-EPEC-Algorithm-3}. The primary application of such an equilibrium model is found in energy market problems; see \cite{App-04-EPEC-Algorithm-4} for an excellent introduction. Unlike the NEP and GNEP discussed in Sect. \ref{App-D-Sect01} and Sect. \ref{App-D-Sect02}, where the strategy sets are convex or jointly convex, the lower-level problems are replaced with KKT optimality conditions and each leader's MPEC is intrinsically non-convex, so provable existence and uniqueness guarantees for the solution to EPECs are non-trivial. There has been sustained effort on the analysis of EPEC solution properties. For example, the existence of a unique equilibrium for certain EPEC instances is discussed in \cite{App-04-EPEC-Solution-1,App-04-EPEC-Solution-2}, and in \cite{App-04-EPEC-Solution-3} for a nodal price based power market model. However, the existence and uniqueness of solutions are only guaranteed under restrictive conditions. Counterexamples have been given in \cite{App-04-EPEC-Solution-4} to demonstrate that there is no general result for the existence of solutions to EPECs due to their non-convexity. The non-uniqueness issue is studied in \cite{App-04-EPEC-Algorithm-2,App-04-EPEC-Solution-5}. It is shown that even in the simplest instances, local uniqueness of the EPEC equilibrium solution may not be guaranteed, and a manifold of equilibria may exist. This can be understood because the EPEC is a generalization of the GNEP, whose solution property is illustrated in Sect. \ref{App-D-Sect02-01}. When the payoff functions possess special structures, say, a potential function exists, the existence of a global equilibrium can be investigated using the theory of potential games \cite{App-04-EPEC-Solution-6,App-04-EPEC-Solution-7,App-04-EPEC-Potential-MPEC}. In summary, the theory of EPEC solutions is much more complicated than that of the single-level NEP and GNEP. This section reviews several representative algorithms which are widely used in the literature. The first two are generic and can be found in \cite{App-04-EPEC-Algorithm-1,App-04-EPEC-Algorithm-2}; the third one is motivated by the convenience brought by the property of potential games, and is reported in \cite{App-04-EPEC-Potential-MPEC}; finally, a pricing game in a competitive market, which appears to be non-convex at first sight, is presented to show the hidden convexity in such a special equilibrium model. \vspace{12pt} {\noindent \bf 1. Best response algorithm} Since the equilibrium of an EPEC is a GNE among the leaders' MPEC problems, the most intuitive strategy for identifying an equilibrium solution is the best response algorithm.
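In code, the Gauss-Seidel variant of the best response algorithm has the generic shape below; Algorithm \ref{Ag:App-04-EPEC-Diagonalization}, presented shortly, states it precisely. The callable \texttt{solve\_mpec} stands for a problem-specific (ideally global) MPEC solver and is our naming, so this is a structural sketch rather than a complete implementation:

\begin{verbatim}
# Gauss-Seidel best-response sweep over the m leaders' MPECs.
import numpy as np

def diagonalization(solve_mpec, x0, m, eps=1e-6, K=100):
    x = [np.asarray(xi, float) for xi in x0]
    for _ in range(K):
        x_old = [xi.copy() for xi in x]
        for i in range(m):
            x[i] = solve_mpec(i, x)    # leader i replies to current rivals
        gap = sum(np.linalg.norm(a - b) for a, b in zip(x, x_old))
        if gap <= eps:
            return x                   # candidate upper-level equilibrium
    raise RuntimeError("no convergence within K iterations")
\end{verbatim}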
In some literature, the best response algorithm is also called the diagonalization method or the sequential MPEC method. This approach can be further categorized into Jacobian type and Gauss-Seidel type methods, according to the information used when players update their strategies. To explain the algorithmic details, denote by MPEC($i$) the problem of leader $i$: the upper level is problem $i$ in (\ref{eq:App-04-EPEC-Leaders}), and the lower level is the GNEP described in (\ref{eq:App-04-EPEC-Followers}) given all leaders' strategies. Let $x^k_i$ be the strategy of leader $i$ in iteration $k$, and $x^k=(x^k_1,\cdots,x^k_m)$ the strategy profile of the leaders. The Gauss-Seidel type algorithm proceeds as follows \cite{App-04-Complement-Book}: \begin{algorithm}[H] \normalsize \caption{\bf : Best-response (Diagonalization) algorithm for EPEC} \begin{algorithmic}[1] \STATE Choose an initial strategy profile $x^0$ for the leaders, set the convergence tolerance $\varepsilon>0$, the allowed number of iterations $K$, and the iteration index $k=0$; \STATE Let $x^{k+1}=x^k$. Loop for players $i=1,\cdots,m$: \begin{enumerate} \item[a.] Solve MPEC($i$) for leader $i$ given $x^{k+1}_{-i}$. \item[b.] Replace $x^{k+1}_i$ with the optimal strategy of leader $i$ just obtained. \end{enumerate} \STATE If $\| x^{k+1} - x^k \|_2 \le \varepsilon$, the upper level converges; solve the lower-level GNEP (\ref{eq:App-04-EPEC-Followers}) with $x^*=x^{k+1}$ using the algorithms elaborated in Sect. \ref{App-D-Sect02-02}, and denote the equilibrium among the followers by $y^*$. Report $(x^*,y^*)$ and terminate. \STATE If $k = K$, report failure of convergence and quit. \STATE Update $k \leftarrow k+1$, and go to step 2. \end{algorithmic} \label{Ag:App-04-EPEC-Diagonalization} \end{algorithm} Without an executable criterion to judge the existence and uniqueness of solutions, the possible outcomes of Algorithm \ref{Ag:App-04-EPEC-Diagonalization} are discussed in three situations. 1. There is no equilibrium. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} does not converge. In such a circumstance, one may turn to seeking a mixed-strategy Nash equilibrium, which always exists. Examples are given in \cite{App-04-Complement-Book}: if there are two leaders, we can list possible strategy combinations and solve the lower-level GNEP among the followers, then compute the respective payoffs of the two leaders, and then build a bimatrix game, whose mixed-strategy Nash equilibrium can be calculated by solving an LCP, as explained in Sect. \ref{App-D-Sect01-04}. 2. There is a unique equilibrium, or there are multiple equilibria. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} may converge or not, and which equilibrium will be found (if it converges) depends on the initial strategy profile offered in step 1. 3. Algorithm \ref{Ag:App-04-EPEC-Diagonalization} may converge to a local equilibrium in the sense of \cite{App-04-EPEC-Solution-3}, if each MPEC is solved by a local NLP method which does not guarantee global optimality. The true equilibrium can be found only if each leader's MPEC can be globally solved. The MILP reformulation (if possible) offers one plausible way for this task. \vspace{12pt} {\noindent \bf 2. KKT system method} To tackle the divergence issue in the best response algorithm, it is proposed to apply the KKT condition to each leader's MPEC and solve the resulting KKT systems simultaneously \cite{App-04-EPEC-Algorithm-3,App-04-EPEC-Algorithm-5}. The solution turns out to be a strong stationary equilibrium point of EPEC (\ref{eq:App-04-EPEC-1}).
There is no convergence issue in this approach, since no iteration is involved. However, special attention should be paid to some potential problems mentioned below. 1. Since the EPEC is essentially a GNEP among the leaders, the concentrated KKT system may have non-isolated solutions. To refine a meaningful outcome, we can manually specify a secondary objective function, which is optimized subject to the KKT system. 2. The embedded (twice) application of the KKT condition to the lower-level problems and upper-level problems inevitably introduces a large number of complementarity and slackness conditions, which greatly complicates solving the concentrated KKT system. In this regard, scalability may be a main bottleneck for this approach. If the lower-level GNEP is linear, it may be better to use the primal-dual optimality condition for the followers first, and then the KKT condition for the leaders. 3. Because each leader's MPEC is non-convex, a stationary point of the KKT condition is not necessarily an optimal solution of the leader; as a result, the solution of the concentrated KKT system may not be an equilibrium of the EPEC. To validate the result, one can conduct the best-response method initiated at the candidate solution with a slight perturbation. \vspace{12pt} {\noindent \bf 3. Potential MPEC method} When the upper-level problems among the leaders admit a potential function satisfying (\ref{eq:App-04-PG-1}), the EPEC can be reformulated as an MPEC, and the relations of their solutions are revealed by comparing the KKT condition of the normalized Nash stationary points of the EPEC and the KKT condition of the associated MPEC \cite{App-04-EPEC-Potential-MPEC}. For example, if the leaders' objectives are given by \begin{equation} F_i(x_i,x_{-i},y) = F^S_i(x_i) + H(x,y) \notag \end{equation} or in other words, the payoff function $F_i(x_i,x_{-i},y)$ can be decomposed as the sum of two parts: the first one $F^S_i(x_i)$ only depends on the local variable $x_i$, and the second one $H(x,y)$ is common to all leaders. In such a circumstance, the potential function can be expressed as \begin{equation} U(x,y) = H(x,y) + \sum_{i=1}^m F^S_i(x_i) \notag \end{equation} Please see Sect. \ref{App-D-Sect01-05} for the condition under which a potential function exists and special instances in which a potential function can be easily found. Suppose that the leaders' local constraints are given by $x_i \in X_i$, which is independent of $x_{-i}$ and $y$, and the best reaction map of the followers with fixed $x$ is given by $(\bar y, \bar \lambda, \bar \mu) \in S(x)$. Clearly, the solution of the MPEC \begin{equation} \begin{aligned} \min_{x,\bar y,\bar \lambda,\bar \mu} ~~ & U(x,\bar y) \\ \mbox{s.t.} ~~ & x_i \in X_i,~ i=1,\cdots,m \\ & (\bar y, \bar \lambda, \bar \mu) \in S(x) \end{aligned} \notag \end{equation} must be an equilibrium solution of the original EPEC. This approach leverages the property of potential games and is superior to the previous two methods (if a potential function exists): the KKT condition is applied only once, to the lower-level problems, and the equilibrium can be retrieved by solving a single MPEC (a quick numerical illustration of the exact-potential property is given below). \vspace{12pt} {\noindent \bf 4. A pricing game in a competitive market} We consider an EPEC taken from the examples in \cite{App-04-EPEC-Hidden-Convexity-1}, which models a strategic pricing game in a competitive market. The hidden convexity in this EPEC is revealed. For ease of exposition, we study the case with two leaders and one follower. The results can be extended to the situation where more than two leaders exist.
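As a brief aside before formulating the pricing game, the exact-potential property underlying the potential MPEC method is easy to confirm numerically for the decomposition $F_i = F^S_i(x_i) + H(x,y)$; the functions below are arbitrary smooth choices made purely for illustration:

\begin{verbatim}
# U = H + sum_i F_i^S is an exact potential: unilateral changes of x_i
# shift F_i and U by the same amount.
import numpy as np

H = lambda x, y: np.sin(x @ x) + x @ y
FS = [lambda t: t ** 2, lambda t: abs(t) ** 3]
U = lambda x, y: H(x, y) + FS[0](x[0]) + FS[1](x[1])

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
for i in range(2):
    xp = x.copy(); xp[i] = rng.normal()     # unilateral deviation of leader i
    dF = FS[i](xp[i]) + H(xp, y) - FS[i](x[i]) - H(x, y)
    dU = U(xp, y) - U(x, y)
    print(np.isclose(dF, dU))               # True, True
\end{verbatim}

Unilateral deviations of either leader change its payoff and the potential by the same amount, which is precisely the defining property exploited above. We now return to the pricing game.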
The pricing game with two leaders can be formulated by the following EPEC \begin{subequations} \label{eq:App-04-Two-Leader-Pricing-Game} \begin{align} \mbox{Leader 1:} \quad & \max_{x_1}~ \left\{ y^T(x_1,x_2) A_1 x_1 ~\middle|~ B_1 x_1 \le b_1 \right\} \label{eq:App-04-TLPG-Leader-1} \\ \mbox{Leader 2:} \quad & \max_{x_2}~ \left\{ y^T(x_1,x_2) A_2 x_2 ~\middle|~ B_2 x_2 \le b_2 \right\} \label{eq:App-04-TLPG-Leader-2} \\ \mbox{Follower:} \quad & \max_y~ \left\{ f(y) - y^T A_1 x_1 - y^T A_2 x_2 ~\middle|~ Cy=d \right\} \label{eq:App-04-TLPG-Follower} \end{align} \end{subequations} In (\ref{eq:App-04-TLPG-Leader-1}) and (\ref{eq:App-04-TLPG-Leader-2}), the two leaders announce their offering prices $x_1$ and $x_2$, respectively, subject to certain pricing policies described in their corresponding constraints. The follower then decides how many goods should be purchased from each leader, according to the optimal solution of problem (\ref{eq:App-04-TLPG-Follower}), where the profit of the follower \begin{equation} f(y)=-\dfrac{1}{2} y^T Q y + c^T y \notag \end{equation} is a strongly concave quadratic function, i.e. $Q \succ 0$, and matrix $C$ has full row rank. Each player in the market wishes to maximize his own profit. The utilities of the leaders are the payments from trading with the follower; the profit of the follower is the revenue minus the purchasing cost. At first sight, EPEC (\ref{eq:App-04-Two-Leader-Pricing-Game}) is non-convex, not only because the leaders' objective functions are bilinear, but also because the best response mapping is generally non-convex. In light of the strong concavity of the objective in (\ref{eq:App-04-TLPG-Follower}), the following KKT condition \begin{equation} \begin{gathered} c-Qy-A_1 x_1-A_2 x_2-C^T \lambda = 0 \\ Cy-d=0 \end{gathered} \notag \end{equation} is necessary and sufficient for a global optimum. Because the constraints in (\ref{eq:App-04-TLPG-Follower}) are all equalities, there are no complementarity and slackness conditions. Solving this set of linear equations, we can obtain the optimal solution $y$ in closed form.
To this end, substituting \begin{equation} y=Q^{-1}(c-A_1 x_1-A_2 x_2-C^T \lambda) \notag \end{equation} into the second equation, we have \begin{equation} \lambda = M \left[N(c-A_1 x_1-A_2 x_2)-d \right] \notag \end{equation} where \begin{equation} M = \left[C Q^{-1} C^T \right]^{-1},~ N = C Q^{-1} \notag \end{equation} Moreover, eliminating $\lambda$ in the expression of $y$ gives the best reaction map \begin{equation} \label{eq:App-04-TLPG-Follower-Best-Reaction} y = r + D_1 x_1 + D_2 x_2 \end{equation} where \begin{equation} \begin{gathered} r = Q^{-1} c + N^T M d - N^T M N c \\ D_1 = N^T M N A_1 -Q^{-1} A_1 \\ D_2 = N^T M N A_2 -Q^{-1} A_2 \end{gathered} \notag \end{equation} Substituting (\ref{eq:App-04-TLPG-Follower-Best-Reaction}) into the objective functions of the leaders, EPEC (\ref{eq:App-04-Two-Leader-Pricing-Game}) reduces to a standard Nash game \begin{equation} \begin{aligned} \mbox{Leader 1:} \quad & \max_{x_1}~ \left\{ \theta_1 (x_1,x_2) ~\middle|~ B_1 x_1 \le b_1 \right\} \\ \mbox{Leader 2:} \quad & \max_{x_2}~ \left\{ \theta_2 (x_1,x_2) ~\middle|~ B_2 x_2 \le b_2 \right\} \\ \end{aligned} \notag \end{equation} where \begin{equation} \begin{gathered} \theta_1(x_1,x_2) = r^T A_1 x_1 + x_1^T D_1^T A_1 x_1 + x_2^T D_2^T A_1 x_1\\ \theta_2(x_1,x_2) = r^T A_2 x_2 + x_2^T D_2^T A_2 x_2 + x_1^T D_1^T A_2 x_2 \end{gathered} \notag \end{equation} The partial Hessian matrix of $\theta_1(x_1,x_2)$ can be calculated as \begin{equation*} \nabla^2_{x_1} \theta_1(x_1,x_2) = 2A^T_1 (N^T M N -Q^{-1}) A_1 \end{equation*} As $Q \succ 0$, its inverse matrix $Q^{-1} \succ 0$; denote by $Q^{-1/2}$ the square root of $Q^{-1}$, and \begin{equation} P_J = I - Q^{-1/2} C^T (CQ^{-1} C^T)^{-1} C Q^{-1/2} \notag \end{equation} It is easy to check that $P_J$ is a projection matrix, which is symmetric and idempotent, i.e., $P_J = P^2_J =P^3_J =\cdots$. Moreover, it can be verified that the Hessian matrix $\nabla^2_{x_1} \theta_1(x_1,x_2)$ can be expressed as \begin{equation} \nabla^2_{x_1} \theta_1(x_1,x_2) = 2A^T_1 (N^T M N -Q^{-1}) A_1 = -2 A^T_1 Q^{-1/2} P_J Q^{-1/2} A_1 \notag \end{equation} For any vector $z$ with a proper dimension, \begin{equation} \begin{aligned} z^T \nabla^2_{x_1} \theta_1(x_1,x_2) z ~ =~ & -2 z^T \left( A^T_1 Q^{-1/2} P_J Q^{-1/2} A_1 \right) z \\ =~ & -2 z^T A^T_1 Q^{-1/2} P^T_J P_J Q^{-1/2} A_1 z \\ =~ & -2 (P_J Q^{-1/2} A_1 z)^T (P_J Q^{-1/2} A_1 z) \le 0 \end{aligned} \notag \end{equation} We can see that $\nabla^2_{x_1} \theta_1(x_1,x_2) \preceq 0$. A similar analysis also applies to $\nabla^2_{x_2} \theta_2(x_1,x_2)$. Therefore, the leaders' problems are actually convex programs, and a pure-strategy Nash equilibrium exists. \section{Conclusions and Further Reading} \label{App-D-Sect06} Equilibrium problems entail solving interactive optimization problems simultaneously, and serve as the foundation for modeling competitive behaviors among strategic decision makers and for analyzing the stable outcome of a game. This chapter provides an overview of two kinds of equilibrium problems that frequently arise in various economic and engineering applications. One-level equilibrium problems, including the NEP and GNEP, are introduced first. The existence of an equilibrium can be ensured under some convexity and monotonicity assumptions. Distributed methods for solving one-level games are presented.
When each player solves a strictly convex optimization problem, distributed algorithms converge with provable guarantees, and thus are preferred, whereas the KKT system renders nonlinear equations and is relatively difficult to solve. To address incomplete information and uncertainty in players' decision making, a robust optimization based game model is proposed in \cite{App-04-Robust-Game-Theory}, which is distribution-free and relaxes Harsanyi's assumptions on Bayesian games. Particularly, the robust Nash equilibrium of a bimatrix game with uncertain payoffs can be characterized via the solution of a second-order cone complementarity problem \cite{App-04-Robust-NE-1}, and more general cases involving $n$ players and continuous payoffs are discussed in \cite{App-04-Robust-NE-2}. Distributional uncertainty is tackled in \cite{App-04-DR-CC-Game}, in which the mixed-strategy Nash equilibrium of a distributionally robust chance-constrained game is studied. A generalized Nash game arises when the strategy sets of the players are coupled. Due to practical interest from a variety of engineering disciplines, solution methods for GNEPs are still an active research area, and the volume of articles has been growing quickly in recent years; see \cite{App-04-GNEP-Algorithm-1,App-04-GNEP-Algorithm-2,App-04-GNEP-Algorithm-3,App-04-GNEP-Algorithm-4,App-04-GNEP-Algorithm-5,App-04-GNEP-Algorithm-6,App-04-GNEP-Algorithm-7,App-04-GNEP-Algorithm-8,App-04-GNEP-Algorithm-9}, to name just a few. GNEPs with uncertainties are studied in \cite{App-04-GNEP-Uncertainty-1,App-04-GNEP-Uncertainty-2}. Bilevel equilibrium problems, including the bilevel program, MPEC, and EPEC, are investigated. These problems are intrinsically hard to solve, due to the non-convexity induced by the best reaction map of the followers, and solution properties have been revealed for specific instances under restrictive assumptions. We recommend \cite{App-04-Complement-Book,App-04-BLP-Book-1,App-04-BLP-Book-2} for theoretical foundations and energy market applications of bilevel equilibrium models, and \cite{App-04-BLP-Review-Pozo} for an up-to-date survey. The theories on bilevel programs and MPEC are relatively mature. Recent research efforts have been spent on new constraint qualifications and optimality conditions, for example, the work in \cite{App-04-MPEC-CQ-1,App-04-MPEC-CQ-2,App-04-MPEC-CQ-3,App-04-MPEC-CQ-4}. The MILP reformulation is preferred in most power system applications, because the capability of MILP solvers keeps improving, and a globally optimal solution can be found. Stochastic MPEC is proposed in \cite{App-04-Stochastic-MPEC-1} to model uncertainty using probability distributions. Algorithms are developed in \cite{App-04-Stochastic-MPEC-2,App-04-Stochastic-MPEC-3,App-04-Stochastic-MPEC-4,App-04-Stochastic-MPEC-5}, and a literature review can be found in \cite{App-04-Stochastic-MPEC-6}. Owing to the inherent hardness, discussions on EPEC models are limited to special cases, such as those with shared P-matrix linear complementarity constraints \cite{App-04-Muti-Leader-Follower-Game-1}, power market models \cite{App-04-Muti-Leader-Follower-Game-1,App-04-Muti-Leader-Follower-Game-2,App-04-Muti-Leader-Follower-Game-3}, those with convex quadratic objectives and linear constraints \cite{App-04-Muti-Leader-Follower-Game-1}, and Markov game models \cite{App-04-EPEC-Markov-Regularization}.
Methods for solving EPECs are based on relaxing or regularizing complementarity constraints \cite{App-04-EPEC-Markov-Regularization,App-04-EPEC-Relaxation}, as well as evolutionary algorithms \cite{App-04-Muti-Leader-Follower-Game-EA}. Robust equilibria of EPECs are discussed in \cite{App-04-Robust-SNE}. An interesting connection between the bilevel program and the GNEP has been revealed in \cite{App-04-BiP-GNEP}, offering a new perspective on these game models. We believe that equilibrium programming models will become an indispensable tool for designing and analyzing interconnected energy systems and related markets, in view of the physical interdependence of heterogenous energy flows and strategic interactions among different network operators. \input{ap04ref} \chapter*{} \begin{center} {\Large \bf Tutorials on Advanced Optimization Methods} \end{center} \ \begin{center} {\bf Wei Wei, Tsinghua University} \end{center} \ This material is the appendix of my book coauthored with Professor Jianhui Wang at Southern Methodist University: {\color{blue} Wei Wei, Jianhui Wang. Modeling and Optimization of Interdependent Energy Infrastructures. Springer Nature Switzerland, 2020.} {\small \url{https://link.springer.com/book/10.100}} This material provides thorough tutorials on some optimization techniques frequently used in various engineering disciplines, including \begin{enumerate} \item[$\clubsuit$] Convex optimization \item[$\clubsuit$] Linearization technique and mixed-integer linear programming \item[$\clubsuit$] Robust optimization \item[$\clubsuit$] Equilibrium/game problems \end{enumerate} It discusses how to reformulate a difficult (non-convex, multi-agent, min-max) problem into a solver-compatible form (semidefinite program, mixed-integer linear program) via convexification, linearization, and decomposition, so that the original problem can be reliably solved by commercial/open-source software. Fundamental algorithms (simplex algorithm, interior-point algorithm) are not the main focus. This material is a good reference for self-learners who have basic knowledge in linear algebra and linear programming. It is one of the main references for an optimization course taught at Tsinghua University. If you need teaching slides, please contact {\color{blue}[email protected]} or find the up-to-date contact information at {\small \url{https://sites.google.com/view/weiweipes/}} \pdfbookmark[2]{Bookmarktitle}{internal_label} \tableofcontents \input{Appd01} \input{Appd02} \input{Appd03} \input{Appd04} \backmatter \printindex \end{document}
\section{Introduction}\label{intro} In the main part of our note we consider splitting-type energies as a particular class of variational integrals \begin{equation}\olabel{intro 1} J[u] := \int_{\Omega} f(\nabla u) \, \D x \end{equation} defined for functions $u$: $\mathbb{R}^n \supset \Omega \to \mathbb{R}$. In the splitting case, the energy density $f$: $\mathbb{R}^n \to \mathbb{R}$ admits an additive decomposition into different parts depending on the various first partial derivatives of the admissible functions $u$, for example, we can consider the case \begin{equation}\olabel{intro 2} f(\nabla u) = \sum_{i=1}^n f_i(\partial_i u) \end{equation} with functions $f_i$: $\mathbb{R} \to \mathbb{R}$, $i=1$, \dots , $n$, of possibly different growth rates.\\ This also leads us to the more general category of variational problems with non-standard growth, where in the simplest case one studies an energy functional $J$ defined in equation \reff{intro 1} with convex density $f$ being bounded from above and from below by different powers of $|\nabla u|$.\\ For an overview of the various aspects of variational problems with non-standard growth, including aspects such as existence and regularity, we refer the interested reader, e.g., to \cite{Gi:1987_1}, \cite{Ma:1989_1}, \cite{Ch:1992_1}, \cite{ELM:1999_1}, \cite{FS:1993_1}, \cite{Bi:1818}, \cite{BF:2007_1}, \cite{BFZ:2007_1} and to the recent paper \cite{BM:2020_1} together with the references quoted therein.\\ Regarding the question of interior regularity of $J$-minimizers in the non-standard setting, the above references provide rather complete answers, while the problem of boundary regularity for the solution of \begin{equation}\olabel{intro 3} J[u] \to \min \, ,\qquad u=u_0 \quad\mbox{on}\; \partial \Omega\, , \end{equation} with sufficiently smooth boundary datum $u_0$ seems to be not so well investigated.\\ We mention the contribution \cite{LS:2014_1}, where the higher integrability of the minimizer $u$ up to the boundary is established for densities $f$ being in some sense close to the splitting-class defined in \reff{intro 2}. A precise formulation of the assumptions concerning the density $f$ is given in inequality (2.1) of the paper \cite{LS:2014_1}, and we would also like to stress that a survey of the known global regularity results is provided in the introduction of this reference.\\ Even more recently the contribution of Koch \cite{Ko:2021_1} addresses the global higher integrability of the gradient of solutions of variational problems with $(p,q)$-growth, which to our knowledge is the first result to improve global integrability of the gradient allowing full anisotropy with different growth rates with respect to different partial derivatives. The boundary data are supposed to belong to some fractional order spaces (see, e.g., \cite{AF:2003_1} or \cite{DD:2012_1}) and roughly speaking are handled like an additional $x$-dependence following ideas as outlined in, e.g., \cite{ELM:2004_1} for the interior situation. Hence, the admissible energy densities correspond to the examples presented in \cite{ELM:2004_1}.
On the other hand the case of vectorial functions $u$: $\mathbb{R}^n \supset \Omega \to \mathbb{R}^m$, $n\geq 2$, $m \geq 1$, as well as the subquadratic situation are included.\\ In our note we first discuss the variational problem \reff{intro 3} on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^2$ with energy functional $J$ defined in equation \reff{intro 1} under the following assumptions on the data: for $i=1$, $2$ let $f_i \in C^2(\mathbb{R})$ satisfy \begin{equation}\olabel{intro 4} a_i (1+t^2)^{\frac{q_i-2}{2}} \leq f_i''(t) \leq A_i (1+t^2)^{\frac{q_i -2}{2}}\, , \qquad t\in \mathbb{R}\, , \end{equation} with constants $a_i$, $A_i >0$ and exponents \begin{equation}\olabel{intro 5} q_i > 2 \, . \end{equation} We define the energy density $f$: $\mathbb{R}^2 \to \mathbb{R}$ according to (recall the ``splitting condition'' \reff{intro 2}) \begin{equation}\olabel{intro 6} f(Z) := f_1(z_1) + f_2(z_2)\, ,\qquad Z = (z_1,z_2) \in \mathbb{R}^2\, . \end{equation} Note that \reff{intro 4} yields the anisotropic ellipticity condition \begin{equation}\olabel{intro 7} c_1 (1+|Z|^2)^{\frac{p-2}{2}}|Y|^2 \leq D^2f(Z)(Y,Y) \leq c_2 (1+|Z|^2)^{\frac{q-2}{2}}|Y|^2 \end{equation} valid for $Z$, $Y \in \mathbb{R}^2$ with positive constants $c_1$, $c_2$ and for the choice of exponents $p=2$ and $q:= \max\{q_1,q_2\}$. In fact, \reff{intro 7} is a direct consequence of \reff{intro 4}, \reff{intro 5} and the formula \begin{equation}\olabel{intro 8} D^2f(Z)(Y,Y) = \sum_{i=1}^2 f_i''(z_i)|y_i|^2\, ,\qquad Z, \, Y \in \mathbb{R}^2 \, . \end{equation} Finally, we fix a function \begin{equation}\olabel{intro 9} u_0\in W^{1,\infty}(\Omega) \end{equation} remarking that actually only $u_0 \in W^{1,t}(\Omega)$ with a sufficiently large exponent $t$ (depending on $q_1$ and $q_2$) is needed in our calculations. For a definition of the Sobolev spaces $W^{k,r}(\Omega)$ and their local variants we refer to \cite{AF:2003_1}.\\ The function $u_0$ acts as a prescribed boundary datum in the minimization problem \begin{equation}\olabel{intro 10} J[u] := \int_{\Omega} f(\nabla u) \, \D x \to \min \qquad\mbox{in}\; u_0+W^{1,1}_0(\Omega)\, . \end{equation} Now we state our result in the splitting case, thereby establishing global higher integrability without relating $q_1$ and $q_2$ and under quite weak assumptions on $u_0$: we do not have to impose additional hypotheses concerning the regularity of the trace of $u_0$ in the sense that ${u_0}_{|\partial \Omega}$ belongs to some fractional Sobolev space. The price we have to pay is that a weight function has to be incorporated. \begin{theorem}\olabel{main} Let the assumptions \reff{intro 4}-\reff{intro 6} and \reff{intro 9} hold. Moreover, let $u \in u_0 +W^{1,1}_0(\Omega)$ denote the unique solution of problem \reff{intro 10}. Then it holds \begin{equation}\olabel{intro 11} |u-u_0|^t \Big(\big|\partial_1 u\big|^{\frac{3}{2}q_1} + \big|\partial_2 u\big|^{\frac{3}{2}q_2}\Big) \in L^1(\Omega) \end{equation} for any choice of the exponent $t$ such that \begin{equation}\olabel{intro 12} t > T(q_1,q_2) \end{equation} with a finite lower bound $T(q_1,q_2) \geq 1$ depending on the growth rates $q_1$, $q_2$. \end{theorem} \begin{remark}\olabel{intro rem 1} An admissible choice for the quantity $T(q_1,q_2)$ from \reff{intro 12} is \begin{equation}\olabel{intro 13} T(q_1,q_2) = 6 \max\Bigg\{1, \frac{1}{2} \frac{{q_{\max}}}{{q_{\min}}}\frac{{q_{\min}} -1}{{q_{\min}} -2}, \frac{{q_{\min}}}{{q_{\min}} -2} - \frac{1}{6}\Bigg\} \frac{{q_{\min}}}{{q_{\min}} -2} \, . 
\end{equation} Here and in the following we let \[ {q_{\min}} := \min\{q_1,q_2\}\, , \qquad {q_{\max}} := \max\{q_1,q_2\}\, . \] In the case $q_i >5$, $i=1$, $2$, we can replace \reff{intro 13} by the expression \begin{equation}\olabel{intro 14} T(q_1,q_2) = 6 {q_{\max}} \max\Bigg\{ \frac{1}{2{q_{\min}} -4},\frac{1}{{q_{\max}}+2}\Bigg\} \end{equation} which means that in the case of higher initial integrability we may decrease the exponent $t$ in \reff{intro 11}, leading to a stronger result. We also note that for the choice $q_i > 5$, $i=1$, $2$, the condition \reff{intro 9} can be replaced by the requirement $u_0 \in W^{1,{q_{\min}}}(\Omega)$ together with $\partial_i u_0 \in L^{q_i}(\Omega)$, $i=1$, $2$. \end{remark} \begin{remark}\olabel{intro rem 2} The existence and the uniqueness of a solution $u$ to problem \reff{intro 10} easily follow from the growth properties of $f$ and its strict convexity combined with our assumption \reff{intro 9} on the boundary datum $u_0$. Obviously the minimizer satisfies $\partial_i u\in L^{q_i}(\Omega)$, $i=1$, $2$, and from Theorem 1.1 ii) in \cite{BFZ:2007_1} we deduce interior H\"older continuity of the first partial derivatives. This in turn implies that $u$ is in the space $W^{2,2}_{\op{loc}}(\Omega)$. \end{remark} \begin{remark}\olabel{intro rem 3} Of course we expect that our theorem can be extended to the case $n>2$, i.e.~to densities \[ f(Z) = \sum_{i=1}^n f_i(z_i)\, \qquad Z \in \mathbb{R}^n \, , \] with functions $f_i$ satisfying \reff{intro 4} and with \reff{intro 5} replaced by $q_i > n$, $i=1$, \dots , $n$. We restrict our considerations in the splitting case to the 2D-situation for two reasons: first, a generalization to the case $n \geq 2$ would be accompanied by a tremendous technical effort without promising a deeper insight. Second, the interior regularity results of \cite{BFZ:2007_1}, which serve as the starting point for our arguments, would have to be generalized to the full splitting case in $n$ dimensions, which goes beyond the aim of this note. \end{remark} \begin{remark}\olabel{intro rem 4} Inequality \reff{intro 11} can be seen as a weighted global higher integrability result for $\nabla u$. In fact, on account of \reff{intro 5}, $u$ is H\"older continuous up to the boundary, which means $|u(x) - u_0(x)| \approx \op{dist}^\alpha (x,\partial \Omega)$ for points $x$ near $\partial \Omega$. Thus, in this vague sense, Theorem \ref{main} yields $|\partial_i u|^{t_i} \op{dist}^\beta (\cdot , \partial \Omega) \in L^1(\Omega)$ for a certain range of exponents $\beta >0$ and $t_i > q_i$.\\ Another interpretation will be given in Corollary \ref{ein cor 1} on the behaviour of a suitable H\"older coefficient. \end{remark} In the second part of our paper we drop the splitting condition \reff{intro 6} and look at densities $f$ with anisotropic ($p$,$q$)-growth in the sense of \reff{intro 7}, and for a moment we still restrict our considerations to the two-dimensional case. Then the arguments of the first part can be carried over provided that $p$ and $q$ are not too far apart. \begin{theorem}\label{theo nosplit} Assume that $f$: $\mathbb{R}^2 \to \mathbb{R}$ is of class $C^2(\mathbb{R}^2)$ satisfying \reff{intro 7} with exponents $2 < p \leq q$ such that \begin{equation}\olabel{nosplit 1} q < \min\Big\{p+\frac{1}{2},2p-2\Big\} \, . \end{equation} Let \reff{intro 9} hold for the boundary datum $u_0$ and let $u \in u_0 + W^{1,1}_0(\Omega)$ denote the unique solution of problem \reff{intro 10}. 
Then we have ($\Gamma := 1 + |\nabla u|^2$) \[ |u-u_0|^{2\kappa} \Gamma^s \in L^1(\Omega) \] for any exponents $\kappa$, $s$ such that \begin{equation}\olabel{nosplit 2} s\in \Big(\frac{q}{2}+\frac{3}{4},p-\frac{1}{4}\Big) \end{equation} and \begin{equation}\olabel{nosplit 3} \kappa > \max\Bigg\{\frac{p}{p-2},\frac{s}{s- \big(\frac{q}{2}+\frac{3}{4}\big)}, \frac{2s(p-1)}{(p-2)^2}\Bigg\} \, . \end{equation} \end{theorem} \begin{remark}\olabel{intro rem 5} Since $f$ is strictly convex and since we have \reff{intro 9}, existence and uniqueness of a solution $u$ to problem \reff{intro 10} are immediate. Note that \reff{nosplit 1} implies (4.2) in Marcellini's paper \cite{Ma:1991_1}, hence $u \in W^{1,q}_{\op{loc}}(\Omega)$ on account of Theorem 4.1 in \cite{Ma:1991_1}. Quoting Corollary 2.2 from this reference we deduce $u \in C^{1,\alpha}(\Omega)$ and $u \in W^{2,2}_{\op{loc}}(\Omega)$. \end{remark} In our final theorem we incorporate two new features: we modify the idea of the proofs of the previous results by starting with a more subtle inequality applied to the terms on the left-hand sides of our foregoing calculations, thereby extending the range of admissible exponents $p$ and $q$. Moreover, we include the general $n$-dimensional case, $n \geq 2$. This can be done without a technical effort comparable to the one which would be needed to generalize the 2D splitting case.\\ Now the precise formulation of our last result is as follows: \begin{theorem}\label{aniso theo} Consider the variational problem \reff{intro 10} on a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$, now just assuming ($p$,$q$)-growth of the $C^2$-density $f$: $\mathbb{R}^n \to \mathbb{R}$ in the sense of \reff{intro 7} without splitting structure. Let the boundary datum $u_0$ satisfy \reff{intro 9}. Suppose that \begin{equation}\olabel{aniso main 1} 2 \leq n < p \leq q \qquad\mbox{and}\qquad q < p + \frac{2 (p-n)}{n}\, . \end{equation} By assumption \reff{aniso main 1} we may choose a real number $\kappa > 1$ sufficiently large such that \begin{equation}\olabel{aniso main 2} q < p + \frac{2(p-n)}{n} - \frac{2p-1}{n \kappa} \, . \end{equation} Moreover, let \begin{equation}\olabel{aniso main 3} \overline{s} < p \frac{\kappa(n+2)-2}{2n\kappa} - \frac{\kappa -1}{2n\kappa} \, . \end{equation} Increasing $\kappa$, if necessary, we suppose in addition that \begin{equation}\olabel{aniso main 4} \overline{s} < (\kappa -1) \frac{p-n}{n} + \frac{p}{2} \end{equation} and that we have the inequalities \reff{complete 7} below. Then it holds \[ \int_{\Omega} \Gamma^{\overline{s}} |u-u_0|^{2(\kappa -1)} \, \D x \leq c \, , \qquad \Gamma := 1+|\nabla u|^2\, , \] for a finite constant $c$ depending on $n$, $p$, $q$, $\kappa$ and $\overline{s}$. \end{theorem} \begin{remark}\label{main rem 1} Passing to the limit $\kappa \to \infty$, \reff{aniso main 3} becomes \[ \overline{s} \leq \frac{p}{2} + \frac{2p-1}{2n}\, . \] \end{remark} \begin{remark}\label{main rem 2} We note that Remark \ref{intro rem 5} extends to the setting studied in Theorem \ref{aniso theo}, since now condition (4.2) from \cite{Ma:1991_1} is a consequence of the more restrictive inequality \reff{aniso main 1}. \end{remark} \vspace*{2ex} Our paper is organized as follows: in Section \ref{proof} we present the proof of Theorem \ref{main} for a given Lipschitz boundary datum $u_0$ (recall \reff{intro 9}) and just supposing \reff{intro 5}. 
In the case $q_1$, $q_2 > 5$ the lower bound $T(q_1,q_2)$ occurring in \reff{intro 12} takes a much simpler form and \reff{intro 9} can be considerably weakened. We sketch some arguments in Section \ref{fuenf}. The two-dimensional non-splitting case, i.e.~the setting described in Theorem \ref{theo nosplit}, is discussed in Section \ref{nosplit proof}, and in Section \ref{aniso} we discuss the higher dimensional situation of Theorem \ref{aniso theo}.\\ During the proofs of our results we make essential use of Caccioppoli-type inequalities. As a main new feature these inequalities involve ``small weights'', for instance weight functions like $(1+|\partial_i u|)^\alpha$, $i=1$, $2$, with a certain range of negative exponents $\alpha$. A rather general analysis of these tools is presented in Section \ref{cacc}.\\ Before we get into all these technical details we finish the introduction by adding some extra comments concerning the interpretation of our results. For simplicity we restrict ourselves to the setting described in Theorem \ref{main} and denote by $u$ the unique solution of problem \reff{intro 10} under the hypotheses of Theorem \ref{main}.\\ We start with the following observation: \begin{proposition}\label{ein prop 1} Fix $x_0 \in \Omega$. Then there exists a neighbourhood $U=U(x_0)\Subset \Omega$ of $x_0$ such that we may assume that for all $x \in U$ \[ |\hat{u}|(x) = |u-u_0| (x) \geq \underline{d} := \frac{1}{2} d(x_0) \, , \qquad \hat{u} := u-u_0\, , \] where \[ d(x) = \op{dist}(x,\partial \Omega) \] denotes the distance function to the boundary $\partial \Omega$. \end{proposition} {\it Proof of Proposition \ref{ein prop 1}.} The distance function $d$ is discussed, e.g., in \cite{GT:1998_1}, Appendix 14.6. In particular, if $\Omega$ is a domain in $\mathbb{R}^n$ with non-empty boundary, then by (14.91) of \cite{GT:1998_1} $d$ is uniformly Lipschitz continuous.\\ Let us suppose w.l.o.g.~that $\hat{u}(x_0) \geq 0$. If $\hat{u}(x_0) < d(x_0)$, then we choose $\tilde{u}_0 := u_0-d$ as admissible Lipschitz boundary datum with $u_0 = \tilde{u}_0$ on $\partial \Omega$. Hence both $u_0$ and $\tilde{u}_0$ produce the same minimizer $u$ of the Dirichlet problem \reff{intro 10} and by definition we have \[ (u-\tilde{u}_0)(x_0) = (u - u_0 + d)(x_0) \geq d(x_0) \, . \] The considerations of our note remain completely unchanged if we replace $u_0$ by $\tilde{u}_0$. Hence we may assume w.l.o.g.~that $\hat{u}(x_0) \geq d(x_0)$. Proposition \ref{ein prop 1} then follows from the continuity of $u$ and $u_0$. \qed\\ Let us fix $x_0\in \Omega$, and define $U$, $\hat{u}$ as in Proposition \ref{ein prop 1}. Moreover, with $t$ as in Theorem \ref{main} we let $\kappa := t/2$ and define $h$: $\Omega \to \mathbb{R}$, $h:= |\hat{u}|^{\frac{4}{3} \frac{\kappa}{{q_{\min}}}+1}$, hence \[ |\nabla h| \leq \Bigg[\frac{4}{3} \frac{\kappa}{{q_{\min}}} +1\Bigg] |\hat{u}|^{\frac{4}{3}\frac{\kappa}{{q_{\min}}}}|\nabla \hat{u}|\, . \] Theorem \ref{main} gives \[ \int_{\Omega} |\nabla h|^{\frac{3}{2}{q_{\min}}} \, \D x \leq c\, ,\qquad\mbox{i.e.}\qquad h\in W^{1,\frac{3}{2}{q_{\min}}}(\Omega) \, . \] Letting $\zeta = \frac{4}{3} \frac{\kappa}{{q_{\min}}}+1$, the imbedding into H\"older spaces yields $|\hat{u}|^{\zeta} \in C^{0,\mu}(\overline{\Omega})$ with $\mu = 1- \frac{4}{3{q_{\min}}}$.\\ Now we consider the Taylor-expansion of the function $g$: $\mathbb{R}^+ \to \mathbb{R}$, $w \mapsto w^{\zeta}$ around a fixed $\tilde{w} >0$: for $w$ sufficiently close to $\tilde{w}$ we have \[ w^{\zeta} = \tilde{w}^\zeta + \zeta \tilde{w}^{\zeta -1} (w-\tilde{w}) + O(|w-\tilde{w}|^2) \, . 
\] Suppose w.l.o.g.~that $\hat{u} > 0$ in $U$. Inserting $\tilde{w} = \hat{u}(x_0)$ and $w = \hat{u}(x)$, $x$ sufficiently close to $x_0$, in the Taylor-expansion we obtain using Proposition \ref{ein prop 1} \begin{eqnarray}\label{ein 1} \frac{|\hat{u}^\zeta(x) - \hat{u}^\zeta(x_0)|}{\zeta \underline{d}^{\zeta -1}}& \geq & \frac{|\hat{u}^\zeta(x) - \hat{u}^\zeta(x_0)|}{\zeta \hat{u}^{\zeta -1}(x_0)}\nonumber\\ & \geq & u(x) - u(x_0) + u_0(x_0) - u_0(x) + O(|\hat{u}(x) - \hat{u}(x_0)|^2)\, . \end{eqnarray} Since $|\hat{u}|^\zeta$ is of class $C^{0,\mu}(\overline{\Omega})$ and since according to \reff{intro 9} $u_0$ is Lipschitz, by \reff{ein 1} we find a constant $c >0$ independent of $x_0$ such that \begin{equation}\label{ein 2} u(x)- u(x_0) \leq c \frac{1}{\zeta \underline{d}^{\zeta -1}} |x-x_0|^\mu + O(|x-x_0|^{2\mu})\, . \end{equation} Interchanging the roles of $x$ and $x_0$ and following the arguments leading to \reff{ein 2}, we find a constant $c >0$ independent of $x_0$ such that for all $x$ sufficiently close to $x_0$ we have \begin{equation} |u(x) - u(x_0)| \leq c \frac{1}{\zeta \underline{d}^{\zeta -1}}|x-x_0|^\mu \, . \end{equation} Summarizing the results we obtain the following corollary to Theorem \ref{main}. \begin{cor}\label{ein cor 1} Under the assumptions of Theorem \ref{main} we let \[ \zeta : = \frac{4}{3} \frac{\kappa}{{q_{\min}}}+1 \, , \qquad \mu := 1- \frac{4}{3{q_{\min}}} \, . \] If $x_0 \in \Omega$ is fixed, then for every sufficiently small neighbourhood $U = U(x_0)\Subset \Omega$ of $x_0$ the H\"older coefficient of $u$ (see \cite{GT:1998_1} for notation) satisfies \[ [u]_{\mu;x_0} := \sup_{U} \frac{|u(x)-u(x_0)|}{|x-x_0|^\mu} \leq c \frac{1}{\zeta d^{\zeta-1}(x_0)} \, , \] where the constant $c$ does not depend on $x_0$. \end{cor} \section{Proof of Theorem \ref{main}}\label{proof} We proceed in several steps assuming from now on the validity of the hypotheses \reff{intro 4}, \reff{intro 5}, \reff{intro 6} and \reff{intro 9}. Let $u$ denote the unique solution of problem \reff{intro 10} and recall the (interior) regularity properties of $u$ stated in \mbox{Remark \ref{intro rem 2}}. We start with the following elementary observation: \begin{proposition}\olabel{proof prop 1} Consider numbers $\kappa$, $s_1$, $s_2 \geq 1$. Then there is a finite constant $c$ such that for $i=1$, $2$ and any $\eta \in C^1_0(\Omega)$ \begin{eqnarray}\olabel{proof 1} \int_{\Omega} \Gamma_i^{s_i} |\eta|^{2\kappa} \, \D x &\leq & c \Bigg\{ \Bigg[\int_{\Omega} \big| \nabla \Gamma_i^{\frac{s_i}{2}}\big|\, |\eta|^\kappa \, \D x\Bigg]^2 + \Bigg[\int_{\Omega} \Gamma_i^{\frac{s_i}{2}} |\nabla \eta| |\eta|^{\kappa-1} \, \D x\Bigg]^2\Bigg\}\nonumber\\ &=:& c \big\{ (I)_i +(II)_i\big\} \, ,\qquad \Gamma_i := 1+|\partial_i u|^2\, . \end{eqnarray} \end{proposition} \emph{Proof of Proposition \ref{proof prop 1}.} Let $v:= \Gamma_i^{s_i/2} |\eta|^{\kappa}$ and observe that due to $u \in W^{2,2}_{\op{loc}} \cap W^{1,\infty}_{\op{loc}}(\Omega)$ the function $v$ is in the space $W^{1,1}_{0}(\Omega)$, hence the Sobolev-Poincar\'{e} inequality implies \[ \int_{\Omega} v^2 \, \D x \leq c \Bigg[ \int_{\Omega} |\nabla v|\, \D x\Bigg]^2\, , \] and inequality \reff{proof 1} directly follows by observing that $\big|\nabla |\eta|\big| \leq |\nabla \eta|$. 
\qed\\ Next we replace $\eta$ by a sequence $\eta_m$, $m\in \mathbb{N}$, being defined through \begin{equation}\olabel{proof 2} \eta_m := \varphi_m (u-u_0) \end{equation} with $\varphi_m \in C^1_0(\Omega)$, $0 \leq \varphi_m \leq 1$, \begin{eqnarray}\olabel{proof 3} \varphi_m = 1 \; \mbox{on}\; \Big\{x\in \Omega: \, \op{dist}(x,\partial \Omega) \geq \frac{1}{m}\Big\}\, ,&& \varphi_m = 0\; \mbox{on}\; \Big\{x\in \Omega: \, \op{dist}(x,\partial \Omega) \leq \frac{1}{2m}\Big\}\, ,\nonumber\\ |\nabla \varphi_m| \leq c\,m \, ,&& c=c(\partial \Omega)\, . \end{eqnarray} Note that due to \reff{intro 5} and \reff{intro 9} inequality \reff{proof 1} extends to $\eta_m$; moreover, according to \cite{GT:1998_1}, Theorem 7.17, we have \begin{equation}\olabel{proof 4} |u(x) - u_0(x)| \leq c \rho^{1- \frac{2}{{q_{\min}}}}\, , \qquad {q_{\min}} := \min\{q_1,q_2\}\, , \end{equation} for points $x\in \Omega$ at distance $\leq \rho$ to $\partial \Omega$, provided $\rho$ is sufficiently small.\\ The quantity $(II)_i$ defined in \reff{proof 1} with respect to the choice $\eta = \eta_m$ behaves as follows: \begin{proposition}\olabel{proof prop 2} Let $\delta := {q_{\min}}-2$ and consider numbers $s_i$, $\kappa \geq 1$, $i=1$, $2$, such that \begin{equation}\olabel{proof 5} s_i > \frac{\delta}{2}\, ,\qquad \kappa \geq \hat{\kappa} := \frac{{q_{\min}} -1}{{q_{\min}}-2}\, . \end{equation} Then it holds for $(II)_i = (II_m)_i$, $i=1$, $2$, defined in \reff{proof 1} with respect to the function $\eta_m$ from \reff{proof 2} \begin{equation}\olabel{proof 6} (II)_i \leq c \int_{\Omega} \Gamma_i^{s_i - \frac{\delta}{2}} |\eta_m|^{2 \kappa - 2 \hat{\kappa}} \, \D x =: c (III)_i \, , \qquad i=1 ,\, 2\, , \end{equation} $c$ denoting a finite constant uniform in $m$. \end{proposition} \emph{Proof of Proposition \ref{proof prop 2}.} By H\"older's inequality we have \begin{eqnarray*} (II)_i &=& \Bigg[ \int_{\Omega} \Gamma_i^{\frac{s_i}{2} - \frac{\delta}{4}} |\eta_m|^{\kappa -{\hat{\kappa}}} |\eta_m |^{{\hat{\kappa}}-1} |\nabla \eta_m| \Gamma_i^{\frac{\delta}{4}}\, \D x\Bigg]^2\\ &\leq & \Bigg[ \int_{\Omega} \Gamma_i^{s_i - \frac{\delta}{2}} |\eta_m|^{2\kappa - 2 {\hat{\kappa}}}\, \D x\Bigg] \Bigg[ \int_{\Omega} \Gamma_i^{\frac{\delta}{2}} |\eta_m|^{2{\hat{\kappa}}-2} |\nabla \eta_m|^2 \, \D x \Bigg] \end{eqnarray*} and for the second integral on the r.h.s.~we observe (recall \reff{proof 2} and \reff{proof 3}, \reff{proof 4}) \begin{eqnarray}\olabel{proof 6a} |\eta_m|^{2{\hat{\kappa}}-2} |\nabla \eta_m|^2 & \leq & c \Big[ |\nabla (u-u_0)|^2 + |\nabla \varphi_m |^2 |u-u_0|^{2{\hat{\kappa}}} \Big]\nonumber\\ &\leq & c \Big[ |\nabla u|^2 + |\nabla u_0|^2 + m^2 \Big(\frac{1}{m}\Big)^{2{\hat{\kappa}} (1-2/{q_{\min}})}\Big] \, . \end{eqnarray} Since $\Omega$ is Lipschitz we observe that $|\op{spt} \nabla \varphi_m| \leq c/m$ and obtain \begin{equation}\olabel{proof 6b} \int_{\op{spt} \nabla \varphi_m} \Gamma_i^{\frac{\delta}{2}} m^{2-2{\hat{\kappa}} (1-\frac{2}{{q_{\min}}})} \, \D x \leq \int_{\Omega} \Gamma_i^{\frac{{q_{\min}}}{2}} \, \D x + m^{[2-2{\hat{\kappa}} (1-\frac{2}{{q_{\min}}})]\frac{{q_{\min}}}{2}} m^{-1} \, . \end{equation} The choice \reff{proof 5} of ${\hat{\kappa}}$ shows that the right-hand side of \reff{proof 6b} is bounded. 
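In fact, for ${\hat{\kappa}} = \frac{{q_{\min}}-1}{{q_{\min}}-2}$ we note the elementary computation \[ 2-2{\hat{\kappa}} \Big(1-\frac{2}{{q_{\min}}}\Big) = 2 - 2\, \frac{{q_{\min}}-1}{{q_{\min}}} = \frac{2}{{q_{\min}}}\, , \] hence the power of $m$ in the last term of \reff{proof 6b} equals $\frac{2}{{q_{\min}}} \cdot \frac{{q_{\min}}}{2} - 1 = 0$, i.e.~this term stays bounded as $m \to \infty$. 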
Letting $\Gamma := 1 + |\nabla u|^2$ we arrive at \begin{eqnarray*} (II)_i&\leq & c\, (III)_i \Bigg[\int_{\Omega} \Big[ \Gamma^{\frac{\delta}{2}}\Gamma + \Gamma^{\frac{\delta}{2}} |\nabla u_0|^2\Big]\, \D x +1 \Bigg]\\ & \leq & c \, (III)_i \Bigg[ \int_{\Omega} \Gamma^{\frac{{q_{\min}}}{2}}\, \D x + \int_{\Omega} |\nabla u_0|^{{q_{\min}}}\, \D x +1 \Bigg] \, , \end{eqnarray*} where in the final step we have used the definition of $\delta$. Now \reff{proof 6} clearly is a consequence of \reff{intro 9} (in fact, $u_0 \in W^{1,{q_{\min}}}(\Omega)$ would be sufficient). \qed \\ In the following we will discuss the quantity $(III)_i$ defined in \reff{proof 6} under the assumptions of Proposition \ref{proof prop 2}.\\ We have by Young's inequality for any $\varepsilon > 0$ \begin{eqnarray*} (III)_i &=& \int_{\Omega} \Gamma_i^{s_i \frac{s_i - \delta/2}{s_i}} |\eta_m|^{2 \kappa - 2 \hat{\kappa}}\, \D x = \int_{\Omega} \Big[ \Gamma_i^{s_i} |\eta_m|^{2\kappa}\Big]^{\frac{s_i - \delta/2}{s_i}} |\eta_m|^{2\kappa - 2\hat{\kappa} - 2 \kappa \frac{s_i - \delta/2}{s_i}}\, \D x\\ &\leq & \varepsilon \int_{\Omega} \Gamma_i^{s_i} |\eta_m|^{2\kappa} \, \D x + c(\varepsilon) \int_{\Omega} |\eta_m|^\vartheta \, \D x \end{eqnarray*} with exponent \[ \vartheta := \Big[\frac{s_i}{s_i - \delta/2}\Big]^* \Big(2\kappa - 2 \hat{\kappa} - 2 \kappa \frac{s_i - \delta/2}{s_i}\Big) \, , \] $[\dots ]^*$ denoting the exponent conjugate to $\frac{s_i}{s_i - \delta/2}$. Note that $\vartheta \geq 0$ provided we additionally assume that $\kappa \geq 1$ satisfies the inequality \begin{equation}\olabel{proof 7} \kappa \geq \frac{2s_i}{\delta} \hat{\kappa} = 2 s_i \frac{{q_{\min}} -1}{({q_{\min}}-2)^2}\, . \end{equation} Inserting the above estimate for $(III)_i$ into \reff{proof 6}, we find \[ (II)_i \leq \varepsilon \int_{\Omega} \Gamma_i^{s_i} |\eta_m|^{2\kappa} \, \D x + c(\varepsilon) \, , \] and if we choose $\varepsilon$ sufficiently small, we see that \reff{proof 1} yields the following result: \begin{proposition}\olabel{proof prop 3} Let $s_i$, $\kappa \geq 1$ satisfy \reff{proof 5} and \reff{proof 7}. Then there exists a constant $c$ independent of $m$ such that ($i=1$, $2$) \begin{equation}\olabel{proof 8} \int_{\Omega} \Gamma_i^{s_i} |\eta_m|^{2\kappa} \, \D x \leq c \Big[ (I)_i+1\Big] \, ,\qquad (I)_i := \Bigg[\int_{\Omega} \big| \nabla \Gamma_i^{\frac{s_i}{2}}\big| \, |\eta_m|^\kappa \, \D x\Bigg]^2 \, . \end{equation} \end{proposition} In order to prove Theorem \ref{main} it remains to discuss the quantity $(I) = (I)_i$ for $i=1$, $2$, i.e.~now second derivatives of $u$ have to be handled in an appropriate way. It holds \begin{eqnarray*} (I)_1& \leq & c \Bigg[ \int_{\Omega} \Gamma_1^{\frac{s_1}{2}-1} |\nabla \Gamma_1| \, |\eta_m|^{\kappa} \, \D x\Bigg]^2 \nonumber\\ &\leq & c \Bigg[ \int_{\Omega} \Gamma_1^{\frac{s_1-1}{2}} |\partial_1 \partial_1 u|\, |\eta_m|^{\kappa} \, \D x \Bigg]^2 + c \Bigg[\int_{\Omega} \Gamma_1^{\frac{s_1 -1}{2}} |\partial_1 \partial_2 u|\, |\eta_m|^{\kappa}\, \D x\Bigg]^2 =: T_1 + \tilde{T}_1\, , \end{eqnarray*} and in the same manner \begin{eqnarray*} (I)_2 &\leq & c \Bigg[ \int_{\Omega} \Gamma_2^{\frac{s_2-1}{2}} |\partial_2 \partial_2 u|\, |\eta_m|^{\kappa} \, \D x \Bigg]^2 + c \Bigg[\int_{\Omega} \Gamma_2^{\frac{s_2 -1}{2}} |\partial_1 \partial_2 u|\, |\eta_m|^{\kappa}\, \D x\Bigg]^2 =: T_2 + \tilde{T}_2\, . 
\end{eqnarray*} We have \begin{eqnarray*} T_i &=& c \Bigg[ \int_{\Omega} \Gamma_i^{\frac{q_i-2}{4}} |\partial_i \partial_i u|\, |\eta_m|^{\kappa} \Gamma_i^{\frac{\alpha_i}{2}} \Gamma_i^{\frac{s_i -1}{2}- \frac{q_i-2}{4} - \frac{\alpha_i}{2}} \, \D x \Bigg]^2\\ & \leq & c \Bigg[ \int_{\Omega} \Gamma_i^{\frac{q_i-2}{2}}|\partial_i \partial_i u|^2 |\eta_m|^{2\kappa} \Gamma_i^{\alpha_i}\, \D x\Bigg] \Bigg[ \int_{\Omega} \Gamma_i^{s_i -1 - \frac{q_i -2}{2} - \alpha_i} \, \D x \Bigg]\, , \end{eqnarray*} where the last estimate follows from H\"older's inequality, and $\alpha_i$, $i=1$, $2$, denote real numbers such that for the moment \begin{equation}\olabel{proof 9} s_i \leq q_i + \alpha_i \, , \qquad i=1, \, 2\, . \end{equation} Note that the condition \reff{proof 9} guarantees the validity of \begin{equation}\olabel{proof 10} \int_{\Omega} \Gamma_i^{s_i - 1 - \frac{q_i -2}{2} - \alpha_i} \, \D x \leq c \, , \qquad i=1,\, 2\, , \end{equation} for a finite constant $c$. Recalling \reff{intro 4} we see that \reff{proof 10} yields the bound \begin{equation}\olabel{proof 11} T_i \leq c \int_{\Omega} f_i''(\partial_i u) |\partial_i \partial_i u|^2 |\eta_m|^{2\kappa} \Gamma_i^{\alpha_i} \, \D x \end{equation} again for $i=1$, $2$.\\ Let us look at the quantities $\tilde{T}_i$: we have by H\"older's inequality \begin{eqnarray}\olabel{proof 12} \tilde{T}_1 &=& c \Bigg[ \int_{\Omega} \Gamma_1^{\frac{q_1-2}{4}} |\partial_1 \partial_2 u| |\eta_m|^{\kappa} \Gamma_1^{\frac{s_1-1}{2}} \Gamma_2^{\frac{\alpha_2}{2}} \Gamma_1^{-\frac{q_1-2}{4}} \Gamma_2^{- \frac{\alpha_2}{2}}\, \D x\Bigg]^2\nonumber\\ & \leq & c \Bigg[\int_{\Omega} \Gamma_1^{\frac{q_1-2}{2}} |\partial_1 \partial_2 u|^2 |\eta_m|^{2\kappa} \Gamma_2^{\alpha_2} \, \D x \Bigg] \Bigg[ \int_{\Omega} \Gamma_1^{s_1 -1 - \frac{q_1-2}{2}} \Gamma_2^{-\alpha_2}\, \D x\Bigg]\nonumber\\ &\leq & c \Bigg[ \int_{\Omega} f_1''(\partial_1 u) |\partial_1 \partial_2 u|^2 |\eta_m|^{2\kappa} \Gamma_2^{\alpha_2}\, \D x \Bigg] \Bigg[\int_{\Omega} \Gamma_1^{s_1 -1 - \frac{q_1 -2}{2}} \Gamma_2^{-\alpha_2} \, \D x\Bigg]\nonumber\\ &=:& c S_1' \cdot S_1'' \, . \end{eqnarray} In order to benefit from the inequality \reff{proof 12}, we replace \reff{proof 9} by the stronger bound \begin{equation}\olabel{proof 13} s_i \leq \frac{3}{4} q_i \, ,\qquad i=1, \, 2\, , \end{equation} together with the requirement \begin{equation}\olabel{proof 14} \alpha_i \in (-1/2,0)\, ,\qquad i=1, \, 2\, . \end{equation} Here we note that \reff{proof 13} together with \reff{proof 14} yields \reff{proof 9}. We then obtain on account of Young's inequality \[ S_1'' \leq c \Bigg[ \int_{\Omega} \Gamma_1^{\frac{q_1}{2}} \, \D x + \int_{\Omega} \Gamma_2^\beta \, \D x \Bigg] \] with exponent \[ \beta := - \alpha_2 \Big[\frac{q_1/2}{s_1 - q_1/2}\Big]^* = - \alpha_2 \frac{q_1/2}{q_1-s_1} \leq - 2 \alpha_2 \leq \frac{q_2}{2} \, , \] where for the first of the last two inequalities we used \reff{proof 13}, \reff{proof 14}, and for the second one \reff{proof 14}, \reff{intro 5}. Thus \reff{proof 12} reduces to \begin{equation}\olabel{proof 15} \tilde{T}_1 \leq c \int_{\Omega} f_1''(\partial_1 u) |\partial_1 \partial_2 u|^2 |\eta_m|^{2\kappa} \Gamma_2^{\alpha_2}\, \D x \, . \end{equation} In the same spirit it follows \begin{equation}\olabel{proof 16} \tilde{T}_2 \leq c \int_{\Omega} f_2''(\partial_2 u) |\partial_1 \partial_2 u|^2 |\eta_m|^{2\kappa} \Gamma_1^{\alpha_1}\, \D x \, . 
\end{equation} Let us return to \reff{proof 8}: we have by \reff{proof 11}, \reff{proof 15}, \reff{proof 16} and by \reff{intro 8} \begin{eqnarray*} \lefteqn{\int_{\Omega} \Big[\Gamma_1^{s_1} + \Gamma_2^{s_2}\Big] |\eta_m|^{2\kappa} \, \D x \leq c \big[1+T_1+\tilde{T}_2+\tilde{T}_1+T_2\big]}\\ &=& c \Bigg[ 1+ \int_{\Omega} \Big(f_1''(\partial_1 u)|\partial_1\partial_1 u|^2 + f_2''(\partial_2 u) |\partial_1\partial_2 u|^2\Big) |\eta_m|^{2\kappa} \Gamma_1^{\alpha_1} \, \D x\\ &&+ \int_{\Omega} \Big(f_1''(\partial_1 u)|\partial_1\partial_2 u|^2 + f_2''(\partial_2 u) |\partial_2\partial_2 u|^2\Big) |\eta_m|^{2\kappa} \Gamma_2^{\alpha_2} \, \D x \Bigg]\\ &=& c \Bigg[ 1 + \int_{\Omega} D^2f(\nabla u) \big(\partial_1 \nabla u,\partial_1\nabla u\big) |\eta_m|^{2\kappa} \Gamma_1^{\alpha_1}\, \D x\\ && + \int_{\Omega} D^2f(\nabla u) \big(\partial_2\nabla u,\partial_2 \nabla u\big)|\eta_m|^{2\kappa} \Gamma_2^{\alpha_2}\, \D x \Bigg] \, . \end{eqnarray*} The remaining integrals are handled with the help of Proposition \ref{prop cacc 1} (compare also inequality (4.6) from \cite{BF:2020_3}) replacing $l$ by $\kappa$ and $\eta$ by $|\eta_m|$, respectively. We note that the proof of Proposition \ref{prop cacc 1} obviously remains valid with these replacements. We emphasize that \reff{proof 14} is an essential assumption to apply Proposition \ref{prop cacc 1}. We get \begin{eqnarray}\olabel{proof 17} \int_{\Omega} \Big[\Gamma_1^{s_1} + \Gamma_2^{s_2}\Big] |\eta_m|^{2\kappa} \, \D x & \leq & c \Bigg[ 1 + \int_{\Omega} D^2f(\nabla u) \big(\nabla |\eta_m|,\nabla |\eta_m|\big) \Gamma_1^{1+\alpha_1}|\eta_m|^{2\kappa -2}\, \D x\nonumber\\ && + \int_{\Omega} D^2f(\nabla u) \big(\nabla |\eta_m|,\nabla |\eta_m|\big) \Gamma_2^{1+\alpha_2} |\eta_m|^{2\kappa -2}\, \D x\Bigg]\, . \end{eqnarray} Finally we let $s_i = \frac{3}{4}q_i$, $i=1$, $2$, which is the optimal choice with respect to \reff{proof 13}. We note that with this choice \reff{proof 7} follows from \begin{equation}\olabel{proof 7a} \kappa \geq \frac{3}{2} \frac{{q_{\max}}({q_{\min}}-1)}{({q_{\min}} -2)^2}\, , \end{equation} and \reff{proof 7a} is valid if $\kappa = t/2$ with $t$ chosen according to assumption \reff{intro 12} from Theorem \ref{main} and $T(q_1,q_2)$ defined in \reff{intro 13}.\\ From \reff{intro 4} and \reff{intro 8} we deduce \begin{eqnarray}\olabel{proof 18} \mbox{r.h.s.~of \reff{proof 17}}& \leq & c \Bigg[ 1+ \sum_{i=1}^2 \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \eta_m|^2 |\eta_m|^{2\kappa-2} \, \D x \nonumber\\ && + \int_{\Omega} \Gamma_2^{\frac{q_2 -2}{2}} |\partial_2 \eta_m|^2 |\eta_m|^{2\kappa -2} \Gamma_1^{1+\alpha_1}\, \D x\nonumber\\ && + \int_{\Omega} \Gamma_1^{\frac{q_1-2}{2}} |\partial_1 \eta_m|^2 |\eta_m|^{2\kappa -2} \Gamma_2^{1+\alpha_2} \, \D x \Bigg]\, . \end{eqnarray} We recall the definition \reff{proof 2} of $\eta_m$ and the gradient bound for $\varphi_m$ stated in \reff{proof 3}. This yields for the integrand of the first integral on the right-hand side of \reff{proof 18}: \begin{eqnarray*} \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \eta_m|^2 |\eta_m|^{2 \kappa -2} &\leq & c \Bigg[ \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i (u-u_0)|^2 |\eta_m|^{2\kappa -2} +\Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \varphi_m|^2 |u-u_0|^{2\kappa}\Bigg]\, . 
\end{eqnarray*} We quote inequality \reff{proof 4} and recall that $\nabla \varphi_m$ has support in the set $\{x\in \Omega:\, \op{dist}(x,\partial\Omega)\leq 1/m\}$ satisfying $|\op{spt}\nabla \varphi_m|\leq c/m$, hence \begin{eqnarray*} \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \varphi_m|^2 |u-u_0|^{2\kappa} \, \D x &\leq & \int_{\op{spt}\nabla \varphi_m} \Gamma_i^{\frac{q_i}{2}+\alpha_i}m^{2-2\kappa \frac{{q_{\min}}-2}{{q_{\min}}}} \, \D x\\ &\leq & c \int_{\Omega} \Gamma_i^{\frac{q_i}{2}} \, \D x + c \int_{\op{spt} \nabla \varphi_m} m^{[ 2-2\kappa \frac{{q_{\min}}-2}{{q_{\min}}}] \gamma^*_i} \, \D x \, , \end{eqnarray*} where (recall \reff{proof 14}) \[ \gamma_i = \frac{q_i}{q_i+2\alpha_i}\, ,\qquad \gamma_i^* = \Bigg[\frac{q_i}{q_i+2\alpha_i}\Bigg]^* = - \frac{q_i}{2\alpha_i}\, . \] Thus the second integral on the right-hand side is bounded if, with $\alpha_i$ sufficiently close to $-1/2$, we have \begin{equation}\olabel{proof 18a} \kappa > \frac{2{q_{\max}}-1}{2 {q_{\max}}} \frac{{q_{\min}}}{{q_{\min}}-2} \, . \end{equation} Assuming \reff{proof 18a} we arrive at \begin{equation}\olabel{proof 19} \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \eta_m|^2 |\eta_m|^{2\kappa -2} \, \D x \leq c \Bigg[1 + \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i}|\partial_i u - \partial_i u_0|^2 |\eta_m|^{2\kappa -2}\, \D x\Bigg] \end{equation} for any $\alpha_i$ sufficiently close to $-1/2$. Assuming this we next let \[ \beta_i = \frac{3q_i}{2q_i+4(1+\alpha_i)} \qquad\mbox{with conjugate exponent}\qquad \beta_i^* = \frac{3q_i}{q_i - 4 (1+\alpha_i)} \] and apply Young's inequality in an obvious way to get \[ \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i+1} |\eta_m|^{2\kappa -2}\, \D x \leq \varepsilon \int_{\Omega} \Gamma_i^{s_i}|\eta_m|^{2\kappa}\, \D x + c(\varepsilon) \int_{\Omega} |\eta_m|^{2\kappa - 2 \beta_i^*}\, \D x \, , \] where the first term can be absorbed in the left-hand side of \reff{proof 17}, while the second one is bounded under the assumption \begin{equation} \olabel{proof 19a} \kappa > \frac{3{q_{\min}}}{{q_{\min}} -2}\, . \end{equation} Here we used the fact that for $q > 2$ the function $q/(q-2)$ is a decreasing function. Altogether we have shown that for exponents $\alpha_i$ close to $-1/2$ the first integral on the right-hand side of \reff{proof 18} splits into two parts, where the first one can be absorbed in the left-hand side of \reff{proof 17} and the second one stays bounded. During our calculations we evidently used \reff{intro 9}; however, \reff{intro 9} can be replaced by weaker integrability assumptions concerning $\partial_i u_0$. We leave the details to the reader. \\ Let us finally consider the ``mixed terms'' on the right-hand side of \reff{proof 18}. We first observe the inequality \begin{eqnarray}\olabel{proof 20} \lefteqn{\int_{\Omega} \Gamma_2^{\frac{q_2-2}{2}} |\partial_2 \eta_m|^2 |\eta_m|^{2 \kappa -2}\Gamma_1^{1+\alpha_1}\, \D x}\nonumber\\ &\leq & c \int_{\Omega} \Gamma_2^{\frac{q_2-2}{2}} (\partial_2(u-u_0))^2 |\eta_m|^{2\kappa-2} \Gamma_1^{1+\alpha_1}\, \D x\nonumber\\ &&+\int_{\Omega} \Gamma_2^{\frac{q_2-2}{2}} |\partial_2 \varphi_m|^2 |u-u_0|^2 |\eta_m|^{2\kappa-2} \Gamma_1^{1+\alpha_1}\, \D x\, . \end{eqnarray} Considering the limit case $\alpha_1 =-1/2$, the first integral on the right-hand side of \reff{proof 20} basically is of the form \[ \int_{\Omega} \Gamma_2^{\frac{q_2}{2}} \Gamma_1^{\frac{1}{2}} |\eta_m|^{2\kappa -2} \, \D x \] and this integral directly results from an application of Caccioppoli's inequality. 
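To make the mechanism transparent in this model case, let us note that an application of Young's inequality with the exponents $\frac{3}{2}$ and $3$ gives, for any $\varepsilon > 0$, \[ \int_{\Omega} \Gamma_2^{\frac{q_2}{2}} \Gamma_1^{\frac{1}{2}} |\eta_m|^{2\kappa -2} \, \D x \leq \varepsilon \int_{\Omega} \Gamma_2^{\frac{3}{4}q_2} |\eta_m|^{2\kappa}\, \D x + c(\varepsilon) \int_{\Omega} \Gamma_1^{\frac{3}{2}} |\eta_m|^{2\kappa -6}\, \D x\, , \] where the first integral on the right-hand side can be absorbed, while the second one is finite as soon as $q_1 \geq 3$ and $\kappa \geq 3$ -- anticipating condition \reff{proof 21} below. 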
If we want to show an integrability result for $\Gamma_2^{t_2}$ with some power $t_2 > q_2/2$, then the idea is to apply Young's inequality choosing \[ \beta = \frac{2t_2}{q_2}\qquad\mbox{with conjugate exponent}\qquad \beta^*= \frac{2t_2}{2t_2-q_2} \] leading to the quantities $\Gamma_2^{t_2}$ and $\Gamma_1^{\frac{t_2}{2t_2 - q_2}}$. If $t_1 =t_1(q_1) > q_1/2$ denotes the desired integrability exponent for $\Gamma_1$, then this requires the bound \[ \frac{t_2}{2t_2-q_2} \leq t_1(q_1) \qquad\mbox{for all}\; q_1 > 2 \] and of course we need the same condition changing the roles of $t_1$ and $t_2$. With the symmetric Ansatz $t_i = \theta q_i$ for some $\theta > 1/2$ the above condition turns into $\frac{\theta}{2\theta -1} \leq \theta q_1$ for all $q_1 > 2$, i.e.~$\frac{1}{2\theta -1} \leq 2$, and we are immediately led to the minimal choice $\theta = \frac{3}{4}$, i.e.~$t_i = \frac{3}{4} q_i$, $i=1$, $2$, which again motivates our choice of $s_i$.\\ More precisely: discussing the first integral on the right-hand side of \reff{proof 20} we choose $\beta_1 = 3/2$ with conjugate exponent $\beta_1^* =3$ and obtain \begin{eqnarray*} \int_{\Omega} \Gamma_2^{\frac{q_2}{2}} |\eta_m|^{2\kappa -2} \Gamma_1^{1+\alpha_1} \, \D x & = & \int_{\Omega} \Gamma_2^{\frac{q_2}{2}} \Gamma_1^{1+\alpha_1} |\eta_m|^{\frac{2\kappa}{\beta_1}} |\eta_m|^{\frac{2\kappa}{\beta_1^*} -2} \, \D x \\ & \leq &\varepsilon \int_{\Omega} \Gamma_2^{s_2} |\eta_m|^{2\kappa} \, \D x + c(\varepsilon) \int_{\Omega} \Gamma_1^{3 (1+\alpha_1)} |\eta_m|^{2\kappa -6}\, \D x \, . \end{eqnarray*} Here the first integral is absorbed in the left-hand side of \reff{proof 17} and since $\alpha_1$ is chosen sufficiently close to $-1/2$, the second integral is bounded provided that we suppose in addition \begin{equation}\olabel{proof 21} q_1 > 3 \quad\mbox{and}\quad \kappa \geq 3\, . \end{equation} If $q_1 < 3$, using \[ \beta_2 =\frac{q_1}{4 (1+\alpha_1)} \, , \qquad \beta_2^* = \frac{q_1}{q_1-4(1+\alpha_1)}\, , \] we are led to ($\tilde{\varepsilon} \ll \varepsilon$) \[ c(\varepsilon) \int_{\Omega} \Gamma_1^{3(1+\alpha_1)} |\eta_m|^{2\kappa -6} \, \D x \leq \tilde{\varepsilon}\int_{\Omega} \Gamma_1^{s_1} |\eta_m|^{2\kappa} \, \D x + c(\tilde{\varepsilon}) \int_{\Omega} |\eta_m|^{2\kappa - 6 \beta_2^*}\, \D x\, , \] where the first integral is absorbed in the left-hand side of \reff{proof 17}, and if $\alpha_1$ is sufficiently close to $-1/2$ we now suppose \begin{equation}\olabel{proof 22} \kappa > \frac{3q_1}{q_1-2} \, . \end{equation} Note that the condition \reff{proof 22} is a consequence of \reff{proof 19a}.\\ It remains to discuss the second integral on the right-hand side of \reff{proof 20}. Using Young's inequality with \[ \beta_3 = \frac{3}{2} \frac{q_2}{q_2 - 2}\, , \qquad \beta_3^* = \frac{3q_2}{q_2+4}\, , \] we obtain \begin{eqnarray} \lefteqn{\int_{\Omega} \Gamma_2^{\frac{q_2-2}{2}} |\partial_2 \varphi_m|^2 |u-u_0|^2 |\eta_m|^{2\kappa-2} \Gamma_1^{1+\alpha_1}\, \D x}\nonumber\\ &\leq & \varepsilon \int_{\Omega} \Gamma_2^{s_2} |\eta_m|^{2\kappa} \, \D x\nonumber\\ &&+ c(\varepsilon) \int_{\Omega} \Gamma_1^{(1+\alpha_1) \frac{3q_2}{q_2+4}} | \partial_2 \varphi_m|^{2\beta_3^*} |u-u_0|^{2 \beta_3^*}|\eta_m|^{2\kappa-2\beta_3^*}\, \D x\, , \end{eqnarray} where again the first integral is absorbed in the left-hand side of \reff{proof 17}. Considering the second integral we choose \[ \beta_4 = \frac{q_1 (q_2+4)}{4 q_2 (1+ \alpha_1)} \qquad\mbox{with conjugate exponent}\qquad \beta_4^* = \frac{q_1(q_2+4)}{q_2[q_1-4(1+\alpha_1)]+4q_1} \, . 
\] This gives ($\tilde{\varepsilon} \ll \varepsilon$) \begin{eqnarray}\olabel{proof 23} \lefteqn{c(\varepsilon) \int_{\Omega} \Gamma_1^{(1+\alpha_1) \frac{3q_2}{q_2+4}} | \partial_2 \varphi_m|^{2\beta_3^*} |u-u_0|^{2 \beta_3^*}|\eta_m|^{2\kappa-2\beta_3^*}\, \D x}\nonumber\\ &\leq & \tilde{\varepsilon} \int_{\Omega} \Gamma_1^{s_1} |\eta_m|^{2\kappa}\, \D x \nonumber\\ &&+ c(\tilde{\varepsilon}) \int_{\op{spt}\nabla \varphi_m} |\partial_2 \varphi_m|^{2 \beta_3^*\beta_4^*} |u-u_0|^{2\beta_3^* \beta_4^*} |\eta_m|^{2\kappa-2\beta_3^*\beta_4^*}\, \D x\, . \end{eqnarray} As usual the first integral on the right-hand side of \reff{proof 23} is absorbed in the left-hand side of \reff{proof 17} and we calculate \begin{equation}\olabel{proof 24} \beta_3^* \beta_4^* = \frac{3 q_1 q_2}{q_2[q_1 - 4(1+\alpha_1)]+4q_1}\, . \end{equation} We obtain recalling \reff{proof 3} and \reff{proof 4} \begin{eqnarray} \lefteqn{ \int_{\op{spt}\nabla \varphi_m} |\partial_2 \varphi_m|^{2 \beta_3^*\beta_4^*} |u-u_0|^{2\beta_3^* \beta_4^*} |\eta_m|^{2\kappa-2\beta_3^*\beta_4^*}\, \D x}\nonumber\\ & \leq & \int_{\op{spt}\nabla \varphi_m} |\partial_2 \varphi_m|^{2 \beta_3^*\beta_4^*} |u-u_0|^{2\beta_3^* \beta_4^*} |u-u_0|^{2\kappa-2\beta_3^*\beta_4^*}\, \D x\nonumber\\ & \leq & c m^{-1} m^{\frac{6 q_1 q_2}{q_2[q_1 - 4(1+\alpha_1)]+4q_1}} m^{- \frac{{q_{\min}}-2}{{q_{\min}}} 2 \kappa} \, . \end{eqnarray} For $\alpha_1$ sufficiently close to $-1/2$ this leads to the requirement \begin{equation}\olabel{proof 25} \kappa > \frac{{q_{\min}}}{{q_{\min}}-2}\, \frac{1}{2} \Bigg[\frac{6q_1q_2}{q_2(q_1-2)+4q_1} - 1 \Bigg] \, . \end{equation} With $q_1>2$ fixed and for $q_2 > 2$ we consider the function $g(q_2) = \frac{6q_1q_2}{q_2(q_1-2) + 4q_1}$. We have \begin{eqnarray*} g'(q_2) &=& \frac{6 q_1}{q_2(q_1-2) + 4q_1} - \frac{6q_1q_2}{(q_2(q_1-2)+4q_1)^2} (q_1-2)\\ &=& \frac{24 q_1^2}{(q_2(q_1-2)+4q_1)^2} > 0 \, , \end{eqnarray*} hence $g$ is an increasing function and \[ g(q_2) \leq \lim_{t \to \infty} \frac{6 q_1 t}{t(q_1-2)+4q_1} = \frac{6q_1}{q_1-2} \, . \] Thus, \begin{equation}\olabel{proof 26} \kappa \geq \frac{{q_{\min}}}{{q_{\min}}-2}\, \frac{1}{2} \Bigg[6 \frac{{q_{\min}}}{{q_{\min}}-2} - 1\Bigg] \end{equation} implies the validity of \reff{proof 25}.\\ Summarizing the conditions imposed on $\kappa$ during our calculations, i.e.~recalling the bounds \reff{proof 5}, \reff{proof 7a}, \reff{proof 18a}, \reff{proof 19a}, \reff{proof 26}, we are led to the lower bound \begin{equation}\olabel{proof 27} \kappa > 3 \max\Bigg\{1, \frac{1}{2} \frac{{q_{\max}}}{{q_{\min}}}\frac{{q_{\min}} -1}{{q_{\min}} -2}, \frac{{q_{\min}}}{{q_{\min}} -2} - \frac{1}{6}\Bigg\} \frac{{q_{\min}}}{{q_{\min}} -2} \end{equation} for the exponent $\kappa$. Assuming the validity of \reff{proof 27} and returning to \reff{proof 17} we now have shown that for $\alpha_i$ sufficiently close to $-1/2$ the right-hand side of \reff{proof 17} can be split into terms which either can be absorbed in the left-hand side of \reff{proof 17} or stay uniformly bounded, hence \begin{equation}\olabel{proof 28} \int_{\Omega} \Big[\Gamma_1^{\frac{3}{4}q_1} + \Gamma_2^{\frac{3}{4}q_2}\Big] |\eta_m|^{2\kappa}\, \D x \leq c \end{equation} for a finite constant $c$ independent of $m$. Passing to the limit $m \to \infty$ in \reff{proof 28} our claim \reff{intro 11} follows. 
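More precisely, observing that $\varphi_m \to 1$ pointwise in $\Omega$ as $m \to \infty$, Fatou's lemma applied to \reff{proof 28} yields \[ \int_{\Omega} \Big[\Gamma_1^{\frac{3}{4}q_1} + \Gamma_2^{\frac{3}{4}q_2}\Big] |u-u_0|^{2\kappa}\, \D x \leq c\, , \] and \reff{intro 11} with $t = 2\kappa$ is a consequence of the elementary inequalities $|\partial_i u|^{\frac{3}{2}q_i} \leq \Gamma_i^{\frac{3}{4}q_i}$, $i=1$, $2$. 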
Obviously, with $t = 2\kappa$, \reff{proof 27} is exactly assumption \reff{intro 12} with $T(q_1,q_2)$ defined in \reff{intro 13}, which completes the proof of Theorem \ref{main} for arbitrary exponents $q_1$, $q_2 >2$. \qed \begin{remark}\label{proof rem 1} Let us add some comments on the behaviour of $\kappa$, which means that we look at the lower bound for the exponent $\kappa$ given by the right-hand side of inequality \reff{proof 27}. \begin{enumerate} \item Since $u-u_0 \in W^{1,{q_{\min}}}(\Omega)$ and thereby $u-u_0 \in C^{0,\nu}(\overline{\Omega})$, $\nu := 1- 2/{q_{\min}}$, the H\"older exponent enters \reff{proof 27}; this also corresponds to the natural effect that $\kappa \to \infty$ as ${q_{\min}} \to 2$. \item The ratio ${q_{\max}}/{q_{\min}}$ determines the growth of $\kappa$. \item In the limit ${q_{\max}} = {q_{\min}} \to \infty$ condition \reff{proof 27} reduces to $\kappa > 3$. \end{enumerate} \end{remark} \section{Comments on the case ${q_{\min}} > 5$}\label{fuenf} We now choose the sequence $\eta_m \in C^1_0(\Omega)$ according to \begin{eqnarray}\olabel{fuenf 1} \partial_i \eta_m \to \partial_i (u-u_0) \, ,&&\mbox{in}\; L^{q_i}(\Omega) \, ,\quad i= 1 ,\, 2 \, , \nonumber\\ \eta_m \to u-u_0 &&\mbox{uniformly as}\; m\to \infty \, . \end{eqnarray} For some elementary properties of anisotropic Sobolev spaces including an appropriate version of this density result we refer, e.g., to \cite{Ra:1979_1}, \cite{Ra:1981_1}. We emphasize that during the following calculations condition \reff{intro 9} can be replaced by the weaker requirement $\partial_i u_0 \in L^{q_i}(\Omega)$, $i=1$, $2$. Proposition \ref{proof prop 1} obviously holds for $\eta_m$, and with $\delta$ as in Proposition \ref{proof prop 2} we now obtain \reff{proof 6} with the choice ${\hat{\kappa}} =1$, observing that, going through the proof of \reff{proof 6}, the quantity $(II)_i$ can be handled as follows: first we note \[ (II)_i \leq \Bigg[\int_{\Omega} \Gamma_i^{s_i - \frac{\delta}{2}} |\eta_m|^{2\kappa -2}\, \D x \Bigg]\cdot \Bigg[ \int_{\Omega} |\nabla \eta_m|^2 \Gamma_i^{\frac{\delta}{2}}\, \D x\Bigg]\, , \] and then we use \[ \int_{\Omega} |\nabla \eta_m|^2 \Gamma_i^{\frac{\delta}{2}} \, \D x \leq c \Bigg[1+\int_{\Omega} |\nabla u|^{q_{\min}} \, \D x\Bigg] \, , \] which is a consequence of \reff{fuenf 1}.\\ We continue with the discussion of $(III)_i$ for the choice ${\hat{\kappa}} =1$, and observe that \reff{proof 7} has to be replaced by \begin{equation}\olabel{fuenf 2} \kappa \geq \frac{2}{\delta} s_i\, . \end{equation} Exactly as in Section \ref{proof} we obtain the inequalities \reff{proof 17}, \reff{proof 18}, now being valid for any exponents $s_i$, $\alpha_i$, $\kappa$ such that ($i=1$, $2$) \begin{equation}\olabel{fuenf 3} \alpha_i \in (-1/2,0)\, ,\qquad s_i \leq \frac{3}{4} q_i\, ,\qquad \kappa \geq \frac{2}{\delta} s_i \, . \end{equation} Still following the lines of Section \ref{proof} we let $s_i = \frac{3}{4} q_i$, $i=1$, $2$, and replace \reff{fuenf 2} by \begin{equation}\olabel{fuenf 4} \kappa \geq \frac{3}{2 \delta} {q_{\max}}\, . \end{equation} We let \[ \rho_i := 2 \kappa -2 -2\kappa \Big[\frac{q_i}{2}+\alpha_i\Big] \frac{1}{s_i} \] and consider the terms on the right-hand side of \reff{proof 18}. 
Young's inequality yields ($0 < \varepsilon < 1$) \begin{eqnarray}\olabel{fuenf 5} \lefteqn{\sum_{i=1}^2 \int_{\Omega} \Gamma_i^{\frac{q_i}{2}+\alpha_i} |\partial_i \eta_m|^2 |\eta_m|^{2\kappa-2} \, \D x}\nonumber\\ &= & \sum_{i=1}^2 \int_{\Omega}\Big[\Gamma_i^{s_i} |\eta_m|^{2\kappa}\Big]^{\frac{1}{s_i} \big[\frac{q_i}{2}+\alpha_i\big]} |\partial_i \eta_m|^2 |\eta_m|^{\rho_i}\, \D x\nonumber \\ &\leq & \sum_{i=1}^2 \Bigg[\varepsilon \int_{\Omega} \Gamma_i^{s_i} |\eta_m|^{2 \kappa}\, \D x + c(\varepsilon) \int_{\Omega} |\eta_m|^{\rho_i \gamma_i^*} |\partial_i \eta_m|^{2 \gamma_i^*} \, \D x\Bigg] \end{eqnarray} with \[ \gamma_i := \frac{s_i}{\frac{q_i}{2}+\alpha_i} = \frac{3 q_i}{2 q_i + 4 \alpha_i}\, ,\qquad \gamma_i^* =\frac{3q_i}{q_i-4\alpha_i}\, . \] While the $\varepsilon$-part of \reff{fuenf 5} can be absorbed, the remaining integrals are bounded if for $i=1$, $2$ \[ \rho_i \geq 0 \qquad\mbox{and}\qquad 2 \gamma_i^* \leq q_i \, . \] Noting that the function $t \mapsto t/(t+2)$, $t \geq 0$, is increasing and by choosing $\alpha_i$, $i=1$, $2$, sufficiently close to $-1/2$, we see that these requirements are consequences of the strict inequalities \begin{equation}\olabel{fuenf 6} {q_{\min}} > 4\, ,\qquad \kappa > \frac{3 {q_{\max}}}{{q_{\max}} +2} \, . \end{equation} We thus have assuming \reff{fuenf 4} and \reff{fuenf 6} \begin{eqnarray}\olabel{fuenf 7} \int_{\Omega} \Big[\Gamma_1^{s_1} + \Gamma_2^{s_2}\Big] |\eta_m|^{2\kappa}\, \D x &\leq & c \Bigg[ 1+ \int_{\Omega} \Gamma_2^{\frac{q_2 -2}{2}} |\partial_2 \eta_m|^2 |\eta_m|^{2\kappa -2} \Gamma_1^{1+\alpha_1}\, \D x\nonumber\\ &&+ \int_{\Omega} \Gamma_1^{\frac{q_1-2}{2}} |\partial_1 \eta_m|^2 |\eta_m|^{2\kappa -2} \Gamma_2^{1+\alpha_2} \, \D x \Bigg]\, . \end{eqnarray} Let us have a closer look at the first integral on the right-hand side of \reff{fuenf 7}. Young's inequality gives \begin{eqnarray}\olabel{fuenf 8} \lefteqn{\int_{\Omega} \Gamma_2^{\frac{q_2 -2}{2}} |\partial_2 \eta_m|^2 |\eta_m|^{2\kappa -2} \Gamma_1^{1+\alpha_1}\, \D x}\nonumber\\ &\leq & \int_{\Omega} |\partial_2 \eta_m|^{q_2}\, \D x + \int_{\Omega} \Gamma_2^{\frac{q_2}{2}} |\eta_m|^{(2\kappa -2) \frac{q_2}{q_2 -2}} \Gamma_1^{(1+\alpha_1)\frac{q_2}{q_2 -2}}\, \D x\nonumber\\ &\leq & c\Bigg[1+ \int_{\Omega} \Big[ \Gamma_2^{s_2}|\eta_m|^{2\kappa}\Big]^{\frac{2}{3}} |\eta_m|^{(2\kappa-2) \frac{q_2}{q_2-2} - \frac{4}{3}\kappa} \Gamma_1^{(1+\alpha_1) \frac{q_2}{q_2 -2}} \, \D x \Bigg]\nonumber\\ &\leq & c + \varepsilon \int_{\Omega} \Gamma_2^{s_2} |\eta_m|^{2\kappa} \, \D x \nonumber\\ &&+ c(\varepsilon) \int_{\Omega} |\eta_m|^{3 \big[(2\kappa -2) \frac{q_2}{q_2 -2} - \frac{4}{3} \kappa\big]} \Gamma_1^{3(1+\alpha_1)\frac{q_2}{q_2 -2}} \, \D x \, . \end{eqnarray} Here the first integral is absorbed and the second is bounded if we have (again $\alpha_1$ being sufficiently close to $-1/2$) \[ (2\kappa-2) \frac{q_2}{q_2-2} - \frac{4}{3}\kappa \geq 0 \qquad\mbox{and}\qquad \frac{3}{2}\frac{q_2}{q_2-2} < \frac{q_1}{2}\, . \] The first condition follows from the requirement \reff{fuenf 6}, the second one holds if we assume in addition that ${q_{\min}} > 5$.\\ In the same way the last term on the right-hand side of \reff{fuenf 7} is handled, and by combining \reff{fuenf 4}, \reff{fuenf 6} we have shown Theorem \ref{main} together with the formula \reff{intro 14}. \qed\\ \section{Proof of Theorem \ref{theo nosplit}}\label{nosplit proof} We proceed along the lines of Section \ref{proof} assuming that all the hypotheses of Theorem \ref{theo nosplit} are satisfied. 
In place of \reff{proof 1} we have ($\Gamma:= 1+|\nabla u|^2$) \begin{eqnarray}\olabel{pr 1} \int_{\Omega} \Gamma^s |\eta|^{2\kappa} \, \D x &\leq & c \Bigg\{\Bigg[\int_{\Omega} |\nabla \Gamma^{\frac{s}{2}}| \, |\eta|^\kappa \, \D x\Bigg]^2 +\Bigg[ \int_{\Omega} \Gamma^{\frac{s}{2}} |\nabla \eta|\,|\eta|^{\kappa-1} \, \D x\Bigg]^2\Bigg\}\nonumber\\ & =: & c\{(I)+(II)\}\, , \end{eqnarray} and \reff{pr 1} holds for any $s$, $\kappa \geq 1$ and $\eta \in C^1_0(\Omega)$. We define the sequence $\eta_m$ as in \reff{proof 2} observing that \reff{proof 4} holds with ${q_{\min}}$ replaced by $p$. Letting $\delta := p-2$ and defining $\hat{\kappa} := \frac{p-1}{p-2}$, we obtain for \begin{equation}\olabel{pr 2} s > \frac{\delta}{2}\, , \quad \kappa \geq \hat{\kappa} \end{equation} applying obvious modifications in the proof of \reff{proof 6} \begin{equation}\olabel{pr 3} (II) \leq c \int_{\Omega} \Gamma^{s-\frac{\delta}{2}}|\eta_m|^{2\kappa - 2 \hat{\kappa}}\, \D x =: c (III) \, . \end{equation} Here $(II)$ just denotes the term occurring on the right-hand side of \reff{pr 1} with $\eta$ being replaced by $\eta_m$. The quantity $(III)$ is discussed analogously to the term $(III)_i$: in accordance with the calculations presented after the proof of Proposition \ref{proof prop 2} we get for any $\varepsilon >0$ \begin{eqnarray}\olabel{pr 4} (III)&\leq & \varepsilon \int_{\Omega} \Gamma^s |\eta_m|^{2\kappa}\, \D x +c(\varepsilon) \int_{\Omega} |\eta_m|^\vartheta \, \D x\, ,\nonumber\\ \vartheta &:=& \Big[\frac{s}{s-\frac{\delta}{2}}\Big]^* \Big(2\kappa - 2 \hat{\kappa} - 2 \kappa \frac{s-\frac{\delta}{2}}{s}\Big)\, , \end{eqnarray} with exponent $\vartheta \geq 0$, if we assume that \begin{equation}\olabel{pr 5} \kappa \geq 2 \frac{s}{\delta}\hat{\kappa} \end{equation} is satisfied. Choosing $\varepsilon$ sufficiently small, inserting \reff{pr 4} in \reff{pr 3} and returning to \reff{pr 1} it is shown (compare \reff{proof 8}) \begin{equation}\olabel{pr 6} \int_{\Omega} \Gamma^s |\eta_m|^{2\kappa}\, \D x \leq c\big\{(I)+1\big\}\, ,\quad (I) := \Bigg[\int_{\Omega} |\nabla \Gamma^{\frac{s}{2}}| \,|\eta_m|^\kappa\, \D x\Bigg]^2\, . \end{equation} Adjusting the calculations presented after inequality \reff{proof 9} to the situation at hand we consider numbers $s$, $\alpha$ satisfying \begin{equation}\olabel{pr 7} s \leq p + \alpha \end{equation} and find recalling \reff{intro 7} and using \reff{pr 7} \begin{eqnarray*} (I) &\leq & c \Bigg[\int_{\Omega} \Gamma^{\frac{s-1}{2}}|\nabla^2 u| |\eta_m|^\kappa \, \D x\Bigg]^2\\ &\leq & c \Bigg[ \int_{\Omega} \Gamma^{\frac{p-2}{2}} |\nabla^2 u|^2 |\eta_m|^{2\kappa} \Gamma^\alpha \, \D x\Bigg] \cdot \Bigg[\int_{\Omega} \Gamma^{s-1-\frac{p-2}{2}-\alpha}\, \D x\Bigg]\\ &\leq & c \int_{\Omega} D^2f(\nabla u) \big(\partial_i \nabla u,\partial_i \nabla u\big) |\eta_m|^{2\kappa} \Gamma^\alpha \, \D x\, , \end{eqnarray*} where from now on the sum is taken with respect to the index $i$. Let us assume in addition to \reff{pr 7} that we have \begin{equation}\olabel{pr 8} \alpha > - \frac{1}{4} \end{equation} as lower bound for the parameter $\alpha$. Quoting \reff{cacc 2} from Proposition \ref{prop cacc 2} (with $\eta$ being replaced by $|\eta_m|^\kappa$) and returning to \reff{pr 6} we find: \begin{equation}\olabel{pr 9} \int_{\Omega} \Gamma^s |\eta_m|^{2\kappa} \, \D x \leq c \Bigg[ 1 + \int_{\Omega} D^2f(\nabla u) \big(\nabla |\eta_m|,\nabla |\eta_m|\big) \Gamma^{1+\alpha} |\eta_m|^{2\kappa -2}\, \D x \Bigg]\, . 
\end{equation} On the right-hand side of \reff{pr 9} we use the second inequality from the ellipticity condition \reff{intro 7} to obtain (in analogy to the inequalities \reff{proof 17}, \reff{proof 18}) \begin{equation}\olabel{pr 10} \int_{\Omega} \Gamma^s |\eta_m|^{2\kappa} \, \D x \leq c \Bigg[ 1 + \int_{\Omega} \Gamma^{\frac{q}{2}+\alpha}|\nabla \eta_m|^2 |\eta_m|^{2\kappa -2} \, \D x \Bigg]\, . \end{equation} The right-hand side of \reff{pr 10} is discussed following the arguments presented after \reff{proof 18}: we first have (recall \reff{proof 4}) \begin{eqnarray*} \int_{\Omega} \Gamma^{\frac{q}{2}+\alpha} |\nabla \varphi_m|^2 |u-u_0|^{2\kappa}\, \D x &\leq & \int_{\op{spt}\nabla \varphi_m} \Gamma^{\frac{q}{2}+\alpha} m^{2-2\kappa \frac{p-2}{p}} \, \D x\\ &\leq & c \int_{\Omega} \Gamma^{\frac{p}{2}}\, \D x + \int_{\op{spt}\nabla \varphi_m} m^{\gamma^*[2-2\kappa\frac{p-2}{p}]}\, \D x\, , \end{eqnarray*} where we have defined \[ \gamma:= \frac{p/2}{\frac{q}{2} +\alpha}\, , \qquad \gamma^* := \frac{\gamma}{\gamma-1}\, . \] Obviously this requires the bound $\frac{p}{2} \geq \frac{q}{2}+\alpha$; the condition \begin{equation}\olabel{pr 11} q < p + \frac{1}{2} \end{equation} in combination with \reff{pr 8} yields the maximal range of anisotropy, since then we can choose $\alpha$ sufficiently close to $-1/4$ to guarantee $p > q+2\alpha$. Moreover, we assume the validity of $-1 + \gamma^*[2-2\kappa (p-2)/p]\leq 0$. This is true again for $\alpha$ chosen sufficiently close to $-1/4$, provided that \begin{equation}\olabel{pr 12} \kappa > \frac{p+q-\frac{1}{2}}{2(p-2)}\, . \end{equation} Recalling \reff{pr 11} we see that \reff{pr 12} is a consequence of the stronger bound \begin{equation}\olabel{pr 13} \kappa > \frac{p}{p-2} \, . \end{equation} In accordance with \reff{proof 19} we therefore arrive at \begin{equation}\olabel{pr 14} \mbox{r.h.s.~of \reff{pr 10}} \leq c \Bigg[1+\int_{\Omega} \Gamma^{\frac{q}{2}+\alpha} |\nabla u - \nabla u_0|^2 |\eta_m|^{2\kappa -2}\, \D x \Bigg]\, . \end{equation} Neglecting the contribution resulting from $\nabla u_0$ and under the additional hypothesis \begin{equation}\olabel{pr 15} s > \frac{q}{2} + \frac{3}{4}\, , \end{equation} which guarantees the validity of $s > \frac{q}{2}+1+\alpha$ for $\alpha$ sufficiently close to $-1/4$, we have \begin{eqnarray}\olabel{pr 16} \int_{\Omega} \Gamma^{\frac{q}{2}+1+\alpha} |\eta_m|^{2\kappa -2}\, \D x &\leq & \varepsilon \int_{\Omega} \Gamma^s |\eta_m|^{2\kappa} \, \D x + c(\varepsilon) \int_{\Omega} |\eta_m|^{2\kappa - 2 \beta^*}\, \D x \, ,\\ \beta^* := \frac{\beta}{\beta-1}\, ,&& \beta := \frac{s}{1+\frac{q}{2}+\alpha}\, . \end{eqnarray} Let us add a comment concerning the conditions imposed on $s$: as remarked after \reff{pr 11} and during the subsequent calculations, the parameter $\alpha$ has to be adjusted in an appropriate way and might become very close to the critical value $-1/4$. For this reason inequality \reff{pr 7} is replaced by the stronger one (recall \reff{nosplit 2}) \begin{equation}\olabel{pr 17} s < p - \frac{1}{4}\, . 
\end{equation} In order to find numbers $s$ satisfying \reff{pr 15} and \reff{pr 17} we need the bound $q < 2p-2$, which for $p \in (2, 5/2)$ is more restrictive than inequality \reff{pr 11}.\\ Finally we note that the exponent $2\kappa - 2 \beta^*$ occurring in the second integral on the right-hand side of \reff{pr 16} is non-negative provided $\kappa \geq s/(s-[1+\frac{q}{2}+\alpha])$, and the latter inequality holds for $\alpha$ sufficiently close to $-1/4$, if (compare \reff{nosplit 3}) \begin{equation}\olabel{pr 18} \kappa > \frac{s}{s- \big[\frac{q}{2}+\frac{3}{4}\big]}\, . \end{equation} Assuming \reff{pr 18} our claim follows by inserting \reff{pr 16} into \reff{pr 14} and choosing $\varepsilon$ sufficiently small. \qed \section{Proof of Theorem \ref{aniso theo}}\label{aniso} Let all the assumptions of Theorem \ref{aniso theo} hold. We define the number \[ \gamma := \frac{2n}{n+2}\, , \quad 1\leq \gamma < 2\, , \qquad \mbox{with Sobolev conjugate}\qquad \frac{n\gamma}{n-\gamma} = 2 \, , \] and note that \[ \frac{n- \gamma}{n} = \frac{n}{n+2} = \frac{\gamma}{2}\, , \quad \frac{\gamma}{2-\gamma} = \frac{n}{2} \, . \] We also remark that for any fixed $\alpha$ and for any $\kappa >1$ the inequality \begin{equation}\olabel{pre 3} \frac{p}{2} < p \frac{\kappa(n+2)-2}{2n\kappa} +\alpha \frac{\kappa -1}{\kappa} \end{equation} is equivalent to the requirement $\alpha > - p/n$. \begin{lemma}\label{aniso lem 1} Fix $\alpha > - 1/(2n)$, $\kappa > 1$ with \reff{pre 3} and choose a real number $\overline{s}_\alpha$ satisfying \begin{equation}\olabel{aniso 1} \frac{p}{2} < \overline{s}_\alpha < p \frac{\kappa(n+2)-2}{2n\kappa} + \alpha \frac{\kappa -1}{\kappa}\, . \end{equation} Then we have for any $\varepsilon >0$ and for $\eta \in C^{1}_0(\Omega)$ \begin{equation}\olabel{aniso 2} \int_{\Omega} \Gamma^{\overline{s}_\alpha} |\eta|^{2(\kappa -1)} \, \D x \leq \varepsilon \int_{\Omega} \Gamma^{(\frac{q}{2}+\alpha)} |\nabla \eta|^{2} |\eta|^{2(\kappa -1)}\, \D x +\varepsilon \int_{\Omega} \Gamma^{-\frac{p}{n} + s} |\nabla \eta|^{2} |\eta|^{2(\kappa -1)}\, \D x + c \end{equation} for a constant independent of $\eta$, where the exponent $s$ is defined in \reff{aniso 3} below. \end{lemma} \emph{Proof of Lemma \ref{aniso lem 1}.} We first observe that for \begin{equation}\olabel{aniso 3} s = \overline{s}_\alpha \frac{\kappa}{\kappa-1} - \frac{p}{2(\kappa -1)} \end{equation} we have after an application of Young's inequality \begin{eqnarray}\olabel{aniso 4} \int_{\Omega} \Gamma^{\overline{s}_\alpha} |\eta|^{2(\kappa-1)} \, \D x&=& \int_{\Omega} \Gamma^{\overline{s}_\alpha -\frac{p}{2\kappa}} |\eta|^{2(\kappa -1)} \Gamma^{\frac{p}{2\kappa}}\, \D x \leq \tilde{\varepsilon} \int_{\Omega} \Gamma^s |\eta|^{2 \kappa}\, \D x + c(\tilde{\varepsilon}) \int_{\Omega} \Gamma^{\frac{p}{2}}\, \D x \end{eqnarray} for any $\tilde{\varepsilon} > 0$. We then estimate using Sobolev's inequality \begin{eqnarray}\olabel{aniso 5} \int_{\Omega} \Gamma^s |\eta|^{2\kappa} \, \D x &=& \int_{\Omega} \big[\Gamma^{\frac{s}{2}} |\eta|^{\kappa}\big]^2\, \D x \leq c \Bigg[\int_{\Omega} \big| \nabla \big[ \Gamma^{\frac{s}{2}} |\eta|^\kappa \big]\big|^\gamma \, \D x\Bigg]^{\frac{2}{\gamma}}\nonumber\\ &\leq & c \Bigg[ \int_{\Omega} \big|\nabla \Gamma^{\frac{s}{2}}\big|^\gamma |\eta|^{\kappa \gamma}\, \D x\Bigg]^{\frac{2}{\gamma}} + c(\kappa) \Bigg[\int_{\Omega} \Gamma^{\frac{s\gamma}{2}} |\eta|^{(\kappa-1)\gamma} |\nabla \eta|^{\gamma} \, \D x\Bigg]^{\frac{2}{\gamma}}\nonumber\\ &=& c T_1^{\frac{2}{\gamma}} + c T_2^{\frac{2}{\gamma}}\, . \end{eqnarray} Let us first consider $T_1$. 
H\"older's inequality gives \begin{eqnarray}\olabel{aniso 6} T_1^\frac{2}{\gamma} &\leq & c \Bigg[ \int_{\Omega} |\nabla^2 u|^\gamma \Gamma^{\gamma \frac{s-1}{2}} |\eta|^{\kappa \gamma} \, \D x\Bigg]^{\frac{2}{\gamma}}\nonumber\\ &=& c \Bigg[ \int_{\Omega} |\nabla^2 u|^\gamma \Gamma^{\gamma \frac{p-2}{4}} \Gamma^{\gamma\frac{\alpha}{2}} \Gamma^{\gamma \frac{2-p}{4}} \Gamma^{-\gamma\frac{\alpha}{2}}\Gamma^{\gamma \frac{s-1}{2}} |\eta|^{\kappa \gamma} \, \D x\Bigg]^{\frac{2}{\gamma}}\nonumber\\ &\leq & c \Bigg[ \int_{\Omega} |\nabla^2 u|^2 \Gamma^{\frac{p-2}{2}}\Gamma^{\alpha}|\eta|^{2\kappa}\, \D x\Bigg] \cdot \Bigg[ \int_{\Omega} \Gamma^{\frac{\gamma}{2-\gamma}( \frac{2-p}{2}-\alpha+s-1)}\, \D x\Bigg]^{\frac{2-\gamma}{\gamma}} \nonumber\\ &=& c T_{1,1} \cdot T_{1,2}^{\frac{2-\gamma}{\gamma}}\, . \end{eqnarray} For $T_{1,2}$ we observe, recalling \reff{aniso 3}, \[ \frac{n}{2} \Big[ \frac{2-p}{2} -\alpha + s -1\Big] < \frac{p}{2}\Leftrightarrow \overline{s}_\alpha < p \frac{\kappa(n+2)-2}{2n \kappa} +\alpha\frac{\kappa -1}{\kappa}\, , \] which is true by the choice \reff{aniso 1} of $\overline{s}_\alpha$, hence $T_{1,2}$ is uniformly bounded.\\ We handle $T_{1,1}$ with the help of Proposition \ref{prop cacc 2} (replacing $\eta$ by $|\eta|^\kappa$): \begin{eqnarray}\olabel{aniso 7} T_{1,1} & \leq & c \int_{\Omega} D^2f(\nabla u) \big(\nabla \partial_\gamma u,\nabla \partial_\gamma u\big) \Gamma^{\alpha} |\eta|^{2\kappa} \, \D x\nonumber\\ & \leq & c \int_{\Omega} D^2f(\nabla u)\big(\nabla |\eta|^\kappa , \nabla |\eta|^\kappa\big)|\nabla u|^2 \Gamma^{\alpha}\, \D x\nonumber\\ &\leq & c \int_{\Omega} \Gamma^{\frac{q-2}{2}} |\eta|^{2\kappa-2}|\nabla \eta|^2 \Gamma^{1+\alpha} \, \D x = c \int_{\Omega} \Gamma^{\frac{q}{2} + \alpha} |\nabla \eta|^2 |\eta|^{2\kappa-2}\, \D x \, . \end{eqnarray} From \reff{aniso 5} - \reff{aniso 7} we conclude \begin{equation}\olabel{aniso 8} \int_{\Omega} \Gamma^s |\eta|^{2\kappa} \, \D x \leq c \int_{\Omega} \Gamma^{\frac{q}{2}+\alpha} |\nabla \eta|^2 |\eta|^{2(\kappa-1)}\, \D x + c T_2^{\frac{2}{\gamma}} \end{equation} with constants $c$ independent of $\eta$, and it remains to discuss $T_2$ in \reff{aniso 8}: we have for $\hat{\mu} =2/\gamma$, $\mu = 2/(2-\gamma)$, \begin{eqnarray}\olabel{aniso 9} T_2^{\frac{2}{\gamma}} & = & \Bigg[\int_{\Omega} \Gamma^{\frac{s\gamma}{2}}|\nabla \eta|^\gamma |\eta|^{(\kappa-1)\gamma}\, \D x\Bigg]^{\frac{2}{\gamma}} = \Bigg[\int_{\Omega} \Gamma^{\frac{p}{2\mu}} \Gamma^{-\frac{p}{2\mu}} \Gamma^{\frac{s\gamma}{2}}|\nabla \eta|^\gamma |\eta|^{(\kappa-1)\gamma}\, \D x\Bigg]^{\frac{2}{\gamma}}\nonumber\\ &\leq & c \Bigg[\int_{\Omega} \Gamma^{\frac{p}{2}}\, \D x\Bigg]^{\frac{2}{n}} \cdot \int_{\Omega} \Gamma^{-\frac{p}{n}+s}|\nabla \eta|^{2}|\eta|^{2(\kappa-1)}\, \D x \, . \end{eqnarray} With \reff{aniso 4}, \reff{aniso 8} and \reff{aniso 9} the proof of Lemma \ref{aniso lem 1} is completed by choosing $\tilde{\varepsilon}\ll \varepsilon$. \qed\\ Now we come to the proof of the theorem: we choose $\eta_m = (u - u_0)\varphi_m$ with $\varphi_m$ defined after \reff{proof 2}.
Then Lemma \ref{aniso lem 1} yields \begin{eqnarray}\olabel{complete 1} \int_{\Omega} \Gamma^{\overline{s}_\alpha} |\eta_m|^{2(\kappa -1)} \, \D x &\leq & \varepsilon \int_{\Omega} \Gamma^{(\frac{q}{2}+\alpha)} \big[|\nabla u|+|\nabla u_0|\big]^{2} |\eta_m|^{2(\kappa -1)}\, \D x\nonumber\\ &&+\varepsilon \int_{\Omega} \Gamma^{-\frac{p}{n} + s} \big[|\nabla u| + |\nabla u_0|\big]^{2} |\eta_m|^{2(\kappa -1)}\, \D x\nonumber\\ & & + \varepsilon \int_{\Omega} \Gamma^{(\frac{q}{2}+\alpha)} |\nabla \varphi_m|^2 |u-u_0|^{2}|\eta_m|^{2(\kappa-1)}\, \D x\nonumber\\ &&+\varepsilon \int_{\Omega} \Gamma^{-\frac{p}{n} + s} |\nabla \varphi_m|^2 |u-u_0|^2 |\eta_m|^{2(\kappa -1)}\, \D x + c \, . \end{eqnarray} Here the first two integrals on the right-hand side can be absorbed into the left-hand side provided that we have (recall that $u_0$ is Lipschitz) \begin{equation}\olabel{complete 2} \max\Big\{\frac{q}{2}+\alpha, -\frac{p}{n} + s \Big\}< \overline{s}_\alpha -1 \, . \end{equation} Also on account of \reff{complete 2} we can handle the remaining integrals on the right-hand side of \reff{complete 1} with the help of Young's inequality. We obtain \begin{eqnarray}\olabel{complete 3} \lefteqn{ \int_{\Omega} \Gamma^{(\frac{q}{2}+\alpha)} |\nabla \varphi_m|^2 |u-u_0|^{2}|\eta_m|^{2(\kappa-1)}\, \D x}\nonumber\\ &\leq & \int_{\Omega} \Gamma^{\overline{s}_\alpha} |\eta_m|^{2(\kappa -1)} \, \D x + c \int_{\op{spt} \nabla \varphi_m} |\nabla \varphi_m|^{2\beta_1^*} |u-u_0|^{2\beta_1^*} |\eta_m|^{2(\kappa-1)} \, \D x \end{eqnarray} as well as \begin{eqnarray}\olabel{complete 4} \lefteqn{ \int_{\Omega} \Gamma^{(-\frac{p}{n}+s)} |\nabla \varphi_m|^2 |u-u_0|^{2}|\eta_m|^{2(\kappa-1)}\, \D x}\nonumber\\ &\leq & \int_{\Omega} \Gamma^{\overline{s}_\alpha} |\eta_m|^{2(\kappa -1)} \, \D x + c \int_{\op{spt} \nabla \varphi_m} |\nabla \varphi_m|^{2\beta_2^*} |u-u_0|^{2\beta_2^*} |\eta_m|^{2(\kappa-1)} \, \D x \, . \end{eqnarray} In \reff{complete 3} we choose $\beta_1$ (for $\alpha$ sufficiently close to $-1/(2n)$) such that \[ 1 < \beta_1 < \overline{s}_\alpha \Big[\frac{q}{2}-\frac{1}{2n}\Big]^{-1} = \Bigg[p \frac{\kappa(n+2)-2}{\kappa} - \frac{\kappa -1}{\kappa}\Bigg] \frac{1}{qn -1} \] with conjugate exponent (recall $q-p < 2p/n$ on account of \reff{aniso main 1}) \begin{equation}\olabel{complete 5} \beta_1^* > \frac{\kappa\big[p(n+2)-1\big]-2p+1}{\kappa\big[2p - (q-p)n\big]-2p+1}\, . \end{equation} In \reff{complete 4} we define $\beta_2$ according to \[ 1 < \beta_2 := \overline{s}_\alpha \Big[-\frac{p}{n}+s\Big]^{-1} = \Bigg[p \frac{\kappa(n+2)-2}{2\kappa} - \frac{\kappa -1}{2\kappa}\Bigg] \frac{1}{sn -p} \] with conjugate exponent \begin{equation}\olabel{complete 6} \beta_2^* > \frac{\kappa\big[pn +4p -1- 2sn\big]-2p+1}{\kappa\big[2p - (q-p)n\big]-2p+1} \, . \end{equation} Returning to \reff{complete 3} and \reff{complete 4}, respectively, the first integral again is absorbed into the left-hand side of \reff{complete 1}. The remaining integrals stay bounded if we have for $i=1$, $2$ \[ m^{-1} m^{\beta_i^*\big[2 -2(1-n/p)\big]-2(\kappa -1)(1-n/p)} \leq c\, . \] Thus we require the condition \begin{equation}\olabel{complete 7} \beta_i^* < (\kappa -1)\frac{p-n}{n} + \frac{p}{2n} \, , \qquad i=1,\, 2\, , \end{equation} which on account of $p>n$ is satisfied for $\kappa$ sufficiently large.\\ It remains to arrange \reff{complete 2} together with the inequality on the right-hand side of \reff{aniso 1}.
Here we first observe \begin{eqnarray}\olabel{complete 8} \frac{q}{2} + \alpha < p \frac{\kappa (n+2)-2}{2n\kappa} + \alpha \frac{\kappa -1}{\kappa} -1 &\Leftrightarrow & q < p \frac{\kappa (n+2)-2}{n \kappa} - 2 - 2 \frac{\alpha}{\kappa} \nonumber \\ &\Leftrightarrow & q-p < \frac{2}{n} (p-n) - \frac{2p+2n\alpha}{n\kappa}\, . \end{eqnarray} For $\alpha$ sufficiently close to $-1/(2n)$, \reff{complete 8} is a consequence of \reff{aniso main 2}. We finally have to discuss \begin{eqnarray}\olabel{complete 9} -\frac{p}{n} + s < \overline{s}_\alpha -1 &\Leftrightarrow & -\frac{p}{n} + \overline{s}_\alpha \frac{\kappa}{\kappa -1} - \frac{p}{2(\kappa -1)} < \overline{s}_\alpha - 1\nonumber\\ &\Leftrightarrow & \overline{s}_\alpha < (\kappa -1) \frac{p-n}{n} + \frac{p}{2}\, . \end{eqnarray} Assumption \reff{aniso main 4} implies \reff{complete 9}. Hence, \reff{complete 1}, \reff{complete 3} and \reff{complete 4} prove Theorem \ref{aniso theo} after passing to the limit $m\to \infty$. \qed \\ \section{Appendix. Caccioppoli-type inequalities}\label{cacc} We prove two Caccioppoli-type inequalities with small weights (i.e.~involving powers of $\Gamma = 1+ |\nabla u|^2$ or of $\Gamma_i =1+|\partial_i u|^2$, $i=1$, \dots , $n$, with a certain range of negative exponents), where the first one is the appropriate version in the splitting context.\\ There is no need to restrict the following considerations to the case $n=2$. Thus, throughout this appendix, we suppose that $\Omega \subset \mathbb{R}^n$ is a bounded Lipschitz domain and that $f$: $\mathbb{R}^n \to \mathbb{R}$ is of class $C^2$ satisfying $D^2f(Z)(Y,Y) > 0$ for all $Z$, $Y \in \mathbb{R}^n$. \\ Moreover we suppose that $u \in W^{2,2}_{\op{loc}}(\Omega)\cap C^{1}(\Omega)$ solves the differentiated Euler equation \begin{equation}\olabel{cacc 1} 0 = \int_{\Omega} D^2 f(\nabla u) \big(\nabla \partial_i u, \nabla \psi\big) \, \D x \qquad\mbox{for all}\; \psi \in C^\infty_0(\Omega) \end{equation} and for any $1 \leq i \leq n$ fixed. \newcommand{\gam}[2]{\Gamma_{#1}^{#2}} \begin{proposition}\label{prop cacc 1} Fix $l\in \mathbb{N}$ and suppose that $\eta \in C^\infty_0(\Omega)$, $0 \leq \eta \leq 1$. Then the inequality \begin{eqnarray}\olabel{cacc 2} \lefteqn{\int_{\Omega} D^2 f(\nabla u)\big(\nabla \partial_i u, \nabla \partial_i u\big) \eta^{2l} \gam{i}{\alpha} \, \D x}\nonumber \\ && \leq c \int_{\Omega} D^2f(\nabla u) (\nabla \eta,\nabla \eta)\eta^{2l-2} \gam{i}{\alpha +1} \, \D x\, , \quad \Gamma_i := 1+|\partial_i u|^2\, , \end{eqnarray} holds for any $\alpha > - 1/2$ and for any fixed $1\leq i \leq n$. \end{proposition} \emph{Proof of Proposition \ref{prop cacc 1}.} Suppose that $-1/2 < \alpha$ and fix $1 \leq i \leq n$ (no summation with respect to $i$). Using approximation arguments we may insert \[ \psi := \eta^{2l} \partial_i u \gam{i}{\alpha} \] in the equation \reff{cacc 1} with the result \begin{eqnarray}\olabel{cacc 3} \int_{\Omega} D^2f(\nabla u) \big(\nabla \partial_i u , \nabla \partial_i u\big) \eta^{2l} \gam{i}{\alpha}\, \D x &=& - \int_{\Omega} D^2f(\nabla u)\big(\nabla \partial_i u, \nabla \gam{i}{\alpha}\big) \partial_i u \eta^{2l}\, \D x\nonumber\\ && - \int_{\Omega} D^2f(\nabla u)\big(\nabla \partial_i u, \nabla (\eta^{2l})\big) \partial_i u \gam{i}{\alpha} \, \D x\nonumber\\ & =: & S_1+S_2 \, . \end{eqnarray} In \reff{cacc 3} we have \[ S_1 = - 2 \alpha \int_{\Omega} D^2f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) |\partial_i u|^2\gam{i}{\alpha-1}\eta^{2l}\, \D x \] which gives $S_1 \leq 0$ if $\alpha \geq 0$.
In this case we will just neglect $S_1$ in the following. In the case $-1/2 < \alpha < 0$ we estimate \begin{eqnarray*} |S_1| &=& 2 |\alpha| \int_{\Omega} D^2f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) |\partial_i u|^2\gam{i}{\alpha-1}\eta^{2l}\, \D x\\ &\leq & 2 |\alpha| \int_{\Omega} D^2f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big)\gam{i}{\alpha}\eta^{2l}\, \D x \, . \end{eqnarray*} Since we have $2 |\alpha| < 1$ we may absorb $|S_1|$ into the left-hand side of \reff{cacc 3}, hence \[ \int_{\Omega} D^2f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) \eta^{2l}\gam{i}{\alpha}\, \D x \leq c |S_2| \, . \] For $\varepsilon > 0$ sufficiently small we apply the Cauchy-Schwarz inequality to discuss $S_2$: \begin{eqnarray*} \lefteqn{\int_{\Omega} D^2f(\nabla u) (\nabla \partial_i u, \nabla \eta)\eta^{2l-1} \gam{i}{\alpha} \partial_iu \, \D x}\\ &\leq& \varepsilon \int_{\Omega} D^2f(\nabla u) (\nabla \partial_i u ,\nabla \partial_i u) \eta^{2l} \gam{i}{\alpha}\, \D x\\ &&+ c(\varepsilon) \int_{\Omega} D^2f(\nabla u) (\nabla \eta,\nabla \eta) \eta^{2l-2}\gam{i}{\alpha} |\partial_i u|^2 \, \D x\, . \end{eqnarray*} After absorbing the first term into the left-hand side of \reff{cacc 3} we have established our claim \reff{cacc 2}. \qed\\ Instead of the quantities $\Gamma_i$, our second inequality (compare \cite{BF:2021_3} for the discussion in two dimensions) involves the full derivative, i.e.~we incorporate certain negative powers of $\Gamma = 1+|\nabla u|^2$. As a consequence we do not obtain the range $-1/2 < \alpha$ and have to replace this condition by the requirement $-1/(2n) < \alpha$. \begin{proposition}\label{prop cacc 2} Suppose that $\eta\in C^{\infty}_{0}\big(\Omega\big)$ and fix some real number $\alpha$ such that $- 1/(2n) < \alpha$. Then we have (summation with respect to $i =1$, \dots , $n$) \begin{eqnarray}\olabel{cacc 4} \lefteqn{\big[1 + 2 \alpha n\big] \int_{\Omega} D^2 f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) \Gamma^{\alpha}\eta^2 \, \D x}\nonumber\\ &\leq & c \Bigg[ \int_{\op{spt}\nabla \eta} D^2 f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) \Gamma^{\alpha}\eta^2 \, \D x\Bigg]^{\frac{1}{2}} \nonumber\\ && \qquad \cdot \Bigg[ \int_{\op{spt}\nabla \eta} D^2f(\nabla u)\big(\nabla \eta, \nabla \eta\big) \big|\nabla u\big|^2 \Gamma^{\alpha} \, \D x\Bigg]^{\frac{1}{2}}\, , \end{eqnarray} where the constant $c$ does not depend on $\eta$. In particular, it holds that \begin{equation}\olabel{cacc 5} \int_{\Omega} D^2 f (\nabla u) \big(\nabla \partial_i u, \nabla \partial_i u\big) \Gamma^{\alpha} \eta^2\, \D x \leq c \int_{\op{spt}\nabla \eta} D^2f(\nabla u)\big(\nabla \eta, \nabla \eta\big) \big|\nabla u \big|^2\Gamma^{\alpha} \, \D x \, . \end{equation} \end{proposition} \emph{Proof.} For $i =1$, \dots , $n$ and any $\eta$ as above we deduce from \reff{cacc 1} \begin{eqnarray}\olabel{cacc 7} \lefteqn{\int_{\Omega} D^2f(\nabla u) \big(\nabla \partial_i u ,\nabla \partial_i u\big) \Gamma^{\alpha} \eta^2 \, \D x}\nonumber\\ &=& - \int_{\Omega} D^2f(\nabla u)\big(\nabla \partial_i u, \partial_i u \nabla \Gamma^{\alpha}\big) \eta^2 \, \D x\nonumber\\ && - 2 \int_{\Omega} D^2 f(\nabla u) \big(\nabla \partial_i u , \nabla \eta\big) \partial_i u \Gamma^{\alpha}\eta \, \D x =: I +II\, , \end{eqnarray} where at this stage no summation with respect to the index $i$ is performed. Following \cite{BF:2021_3} we denote the bilinear form $D^2f(\cdot,\cdot)$ by $\langle \cdot,\cdot\rangle$.
The first observation is the inequality \begin{eqnarray}\olabel{cacc 8} \sum_{i =1}^n \Big\langle \partial_i \nabla u, \partial_i \nabla u\Big\rangle \Gamma^{\alpha} &\geq & \sum_{i =1}^n \Big\langle \partial_i \nabla u, \partial_i \nabla u\Big\rangle \sum_{j=1}^n (\partial_j u)^2 \Gamma^{\alpha -1}\nonumber \\ &\geq & \sum_{i =1}^n \Big\langle \partial_i \nabla u, \partial_i \nabla u\Big\rangle (\partial_i u)^2 \Gamma^{\alpha -1}\nonumber \\ & = & \sum_{i =1}^n \Big\langle \partial_{i} u \partial_i \nabla u , \partial_i u \partial_i \nabla u \Big\rangle \Gamma^{\alpha -1} \, . \end{eqnarray} For handling the integrand of the term $I$ from \reff{cacc 7} we use the identity \begin{eqnarray}\label{cacc 9} - \sum_{i=1}^n\Big\langle \partial_i \nabla u , \partial_i u \nabla \Gamma^{\alpha}\Big\rangle &=& - \alpha \sum_{i=1}^n \Big\langle \partial_i u \partial_i \nabla u , \nabla \sum_{j=1}^n (\partial_j u )^2\Big\rangle \Gamma^{\alpha -1}\nonumber\\ &=& - 2\alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \sum_{j =1}^n \partial_j u \partial_j \nabla u\Big\rangle \Gamma^{\alpha -1}\nonumber\\ &=& -2 \alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \Gamma^{\alpha -1}\nonumber\\ && - 2 \alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \sum_{j\not=i}\partial_j u \partial_j \nabla u\Big\rangle \Gamma^{\alpha -1} \, . \end{eqnarray} The last term on the right-hand side of \reff{cacc 9} is estimated as follows \begin{eqnarray*} \lefteqn{\sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \sum_{j\not=i}\partial_j u \partial_j \nabla u\Big\rangle}\\ &\leq & \sum_{i=1}^n \Bigg[\sum_{j\not= i} \frac{1}{2} \Big[ \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle +\Big\langle \partial_j u \partial_j \nabla u, \partial_j u \partial_j \nabla u\Big\rangle\Big]\Bigg]\\ &=& \frac{1}{2} \sum_{i =1}^n \Bigg[ (n-1) \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle + \sum_{j\not=i} \Big\langle \partial_j u \partial_j \nabla u, \partial_j u \partial_j \nabla u\Big\rangle\Bigg]\\ &=& \frac{1}{2} \sum_{i =1}^n \Bigg[ (n-2) \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle + \sum_{j=1}^n \Big\langle \partial_j u \partial_j \nabla u, \partial_j u \partial_j \nabla u\Big\rangle\Bigg]\\ &=& \frac{1}{2} \Bigg[ (n-2) \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle + n \sum_{j=1}^n \Big\langle \partial_j u \partial_j \nabla u, \partial_j u \partial_j \nabla u\Big\rangle\Bigg]\\ &=& (n-1) \sum_{i=1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \, .
\end{eqnarray*} This, together with \reff{cacc 9}, gives (recall $-2\alpha > 0$) \begin{eqnarray}\label{cacc 10} - \sum_{i=1}^n\Big\langle \partial_i \nabla u , \partial_i u \nabla \Gamma^{\alpha}\Big\rangle &=& -2 \alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \Gamma^{\alpha -1}\nonumber\\ && - 2 \alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \sum_{j\not=i}\partial_j u \partial_j \nabla u\Big\rangle \Gamma^{\alpha -1} \nonumber\\ &\leq & -2 \alpha \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \Gamma^{\alpha -1}\nonumber\\ &&- 2 \alpha (n-1) \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \Gamma^{\alpha -1} \nonumber\\ &=&- 2 \alpha n \sum_{i =1}^n \Big\langle \partial_i u \partial_i \nabla u, \partial_i u \partial_i \nabla u\Big\rangle \Gamma^{\alpha -1} \, . \end{eqnarray} Combining \reff{cacc 10} and \reff{cacc 8} we get \begin{eqnarray}\label{cacc 11} \lefteqn{-\sum_{i =1}^n \int_{\Omega} D^2f(\nabla u)\big( \partial_i \nabla u , \partial_i u \nabla \Gamma^{\alpha}\big)\eta^2 \, \D x} \nonumber\\ &&\leq -2 \alpha n \sum_{i =1}^n \int_{\Omega} D^2f(\nabla u) \big(\partial_i \nabla u, \partial_i \nabla u\big)\Gamma^{\alpha} \eta^2 \, \D x\, . \end{eqnarray} Returning to \reff{cacc 7} and using \reff{cacc 11} we get (from now on summation with respect to $i$) \begin{eqnarray}\label{cacc 12} \lefteqn{\big[1+ 2 \alpha n \big] \int_{\Omega} D^2 f(\nabla u) \big(\nabla \partial_i u , \nabla \partial_i u\big) \Gamma^{\alpha} \eta^2 \, \D x}\nonumber\\ && \leq -2 \int_{\op{spt}\nabla \eta} D^2 f(\nabla u) \big(\eta \nabla \partial_i u, \partial_i u \nabla \eta\big) \Gamma^{\alpha} \, \D x \, . \end{eqnarray} Applying the Cauchy-Schwarz inequality to the right-hand side of \reff{cacc 12} yields \reff{cacc 4}, and \reff{cacc 5} follows by absorbing the first factor of the resulting product into the left-hand side. This finishes the proof. \qed
\section{Introduction} State-of-the-art neural network models are known to suffer from adversarial examples. With small perturbations, adversarial examples can completely change the predictions of neural networks \citep{DBLP:journals/corr/SzegedyZSBEGF13}. Common defense methods for adversarial attacks include adversarial training \citep{madry2018towards}, certified robustness \citep{wong2018provable}, etc. Recently, test-time defenses against adversarial examples, which optimize at test time, have drawn increasing attention \citep{croce2022evaluating}. In this work, we propose a novel theoretical framework for test-time defenses under the assumption that high-dimensional images lie on low-dimensional manifolds (the manifold hypothesis). Compared with low-dimensional data, high-dimensional data are more vulnerable to adversarial examples \citep{goodfellow2014explaining}. Therefore, we transform adversarial robustness from a high-dimensional problem to a low-dimensional problem. We propose a novel test-time defense approach for non-adversarial-trained models via manifold learning and the Bayesian framework. With our method, non-adversarial-trained models can achieve performance on par with that of adversarial-trained models. Even if attackers are aware of the existence of test-time defenses, our proposed approach can still provide robustness against attacks. The main contributions of our work are: \begin{itemize} \item A novel framework for adversarial robustness with the manifold hypothesis. \item A novel and effective test-time defense approach which combines ideas from manifold learning and the Bayesian framework. \end{itemize} In the following sections, we start with related work and background. After that, we describe the framework of adversarial robustness and validate the framework with variational inference. \section{Related Work} There have been extensive efforts in defending against adversarial attacks on neural networks. In this section, we describe works related to our approach. We start with adversarial training, followed by adversarial purification, and then the use of variational inference to defend against adversarial attacks. \subsection{Adversarial Training} Adversarial training is considered to be one of the most effective attack defense methods. This method introduces adversarial examples into the training set during training \citep{goodfellow2014explaining,madry2018towards}. It has been shown that adversarial training can degrade the classification performance of models on standard data \citep{tsipras2018robustness}. To reduce the degradation in standard classification accuracy, TRADES \citep{zhang2019theoretically} has been proposed, which balances the trade-off between standard and robust accuracy. Recent efforts also study the impact of different hyperparameters \citep{pang2020bag} as well as data augmentation \citep{rebuffi2021fixing} to reduce the effect of robust overfitting, where robust test accuracy decreases during training. Besides standard adversarial training, there are also works that study the impact of adversarial training on manifolds \citep{stutz2019disentangling,lin2020dual,zhou2020,patel2020}. \subsection{Adversarial Purification} As an alternative to adversarial training, adversarial purification aims to shift adversarial examples back to clean representations at test time. There have been efforts that study adversarial purification methods using GAN-based models \citep{samangouei2018iclr} and energy-based models \citep{grathwohl2020,yoon2021,hill2021}.
PixelDefend \citep{song2018pixeldefend} discovers that adversarial examples lie in low probability regions. It uses a PixelCNN model to restore adversarial examples back to high probability regions. \cite{shi2020online} and \cite{mao2021adversarial} discover that adversarial attacks increase the loss of self-supervised learning objectives, and they define reverse vectors using self-supervised objectives to restore adversarial examples. In this work, we propose a novel test-time defense objective combining manifold learning and variational inference. \subsection{Variational Inference based Attack Defense} Variational AutoEncoders (VAEs) \citep{DBLP:journals/corr/KingmaW13} approximate the true posterior with an approximate posterior for probability density estimation on latent spaces. It has been shown that VAEs are vulnerable to adversarial attacks \citep{kos2018adversarial}. Many efforts have attempted to address this problem by, for instance, purifying the adversarial examples (PuVAE) with class manifold projection \citep{hwang2019,lin2022}, rejecting adversarial examples with Gaussian mixture components \citep{ghosh2019}, disentangling latent representations of hierarchical VAEs \citep{willetts2021}, adversarial training of VAEs (defense-VAE, defense-VGAE) \citep{li2019,zhang2020}, etc. The proposed method in this work can also be considered a test-time defense for VAE models. \section{Background} In this section, we provide background on adversarial examples, attack defenses, and variational inference. We use bold letters to represent vectors. If a function output is a vector, we also use a bold letter for that function. \subsection{Adversarial Examples and Defense} Generation of adversarial examples is an optimization problem. We define the parameters of a neural network as $\bm{\theta}$, the input data as $\mathbf{x}$ and its corresponding one-hot encoded label as $\mathbf{y}$, and the classification loss of a neural network as $\mathcal{L}(\bm{\theta},\mathbf{x},\mathbf{y})$. The objective of adversarial attacks is to maximize the classification loss such that a classifier will make incorrect predictions. We define $\|\cdot\|_p$ as the $L_p$ norm. Common $p$ values include 0, 2 and $\infty$. The norm of an adversarial perturbation $\bm{\delta}_{\rm adv}$ needs to be bounded by a small value $\delta_{t}$, otherwise the adversarial perturbation may change the semantic interpretation of an image. Once an adversarial perturbation is obtained, the adversarial example can be crafted as $\mathbf{x}_{\rm adv} = \mathbf{x} + \bm{\delta}_{\rm adv}$ where \begin{equation} \label{eq:atk_obj} \bm{\delta}_{\rm adv} = \argmax_{\|\bm{\delta}\|_{p} \leq \delta_{t}} \mathcal{L}(\bm{\theta},\mathbf{x}+\bm{\delta},\mathbf{y}). \end{equation} The Fast Gradient Sign Method (FGSM) \citep{goodfellow2014explaining} and the Projected Gradient Descent (PGD) \citep{madry2018towards} are two representative $L_\infty$ gradient-based adversarial attacks. We briefly describe both methods in the following sections. \subsection{Fast Gradient Sign Method (FGSM) Attack} The FGSM attack (equation \ref{eq:FGSM}) only needs to calculate the gradient of the loss with respect to the input vector once to craft the adversarial example. Let $\operatorname{sgn}$ be the element-wise sign function, where $\operatorname{sgn}(x)=\frac{x}{|x|}$ if $x$ is nonzero and zero otherwise. We define $\alpha$ as the step size, which is often set to the given perturbation budget $\delta_{t}$.
The adversarial example generated from the FGSM attack can be written as \begin{equation} \label{eq:FGSM} \mathbf{x}_{\rm adv} = \mathbf{x} + \alpha \operatorname{sgn}\left(\nabla_{\mathbf{x}} \mathcal{L}(\bm{\theta},\mathbf{x},\mathbf{y})\right). \end{equation} The FGSM attack is easy to generate but is less effective since it only calculates the gradient once based on the current model parameters $\bm{\theta}$. Iterative attacks such as the PGD attack are more effective but take a longer time compared with the FGSM attack. \subsection{Projected Gradient Descent (PGD) Attack} Compared with the FGSM attack, the PGD attack iteratively calculates the gradient and uses the gradient to craft the adversarial example. We define $\alpha$ as the learning rate and $\text{Proj}_{\mathbf{x}+\hood}$ as a projection operator which projects a point back into the feasible region if it lies outside of it. A commonly used projection operator is the clipping function, which ensures that $\|\mathbf{x}_{\rm adv} - \mathbf{x}\|_{\infty} = \|\bm{\delta}_{\rm adv}\|_{\infty} \leq \delta_{t}$, where $\mathbf{x}$ is the original data and $\delta_{t}$ is the budget for the adversarial perturbation, which should be small. The iterative update process of an adversarial example from $\mathbf{x}^t$ to $\mathbf{x}^{t+1}$ can be described as \begin{equation} \label{eq:PGD} \mathbf{x}^{t+1} = \text{Proj}_{\mathbf{x}+\hood} \left( \mathbf{x}^t + \alpha\operatorname{sgn}(\nabla_{\mathbf{x}} \mathcal{L}(\bm{\theta},\mathbf{x}^t,\mathbf{y}))\right). \end{equation} \subsection{Adversarial Training} An effective way to defend against adversarial attacks is to perform adversarial training \citep{goodfellow2014explaining,madry2018towards}. The idea is to use a min-max formulation to construct strong models. The inner maximization is commonly implemented as data augmentation with adversarial examples generated on the fly. The formulation of adversarial training proposed by \cite{madry2018towards} can be described as \begin{equation} \label{eq:adv_training} \min_{\bm{\theta}} \mathbb{E}_{(\mathbf{x},\mathbf{y})\sim\mathcal{D}}\left[\max_{\bm{\delta}\in \hood} \mathcal{L}(\bm{\theta},\mathbf{x}+\bm{\delta},\mathbf{y})\right]. \end{equation} \subsection{Adversarial Purification and Defense-Aware Attacks} An alternative approach to defend against adversarial attacks is through adversarial purification \citep{song2018pixeldefend}. Given an adversarial example $\mathbf{x}_{\rm adv}$, the objective of adversarial purification is to shift adversarial examples back to clean data representations. In this section, we focus on describing the test-time optimization of input vectors. We define a purification objective $\mathcal{F}$, which could be the log-likelihood $\log p_{\bm \theta}(\mathbf{x})$, a (negative) reconstruction loss, or another function. The goal of purification is to find a purification vector $\bm{\epsilon}_{\rm purify}$ at test time such that the purified example $\mathbf{x}_{\rm purify} = \mathbf{x}_{\rm adv} + \bm{\epsilon}_{\rm purify}$ is close to the original clean data $\mathbf{x}$. The $L_p$ norm (common $p$ values are $0$, $2$ and $\infty$) of the purification vector needs to be no greater than a threshold value $\epsilon_{t}$ so that the purification process will not change the semantic interpretation of an image. An objective of purification can be described as \begin{equation} \label{eq:purify_background} \bm{\epsilon}_{\rm purify} = \argmax_{\|\bm{\epsilon}\|_{p} \leq \epsilon_{t}} \mathcal{F}(\mathbf{x}_{\rm adv} + \bm{\epsilon}).
\end{equation} If attackers are aware of the existence of the purification process, it is possible to take advantage of this knowledge during the attack generation process. A straightforward formulation for attackers is to perform a multi-objective optimization where a trade-off term $\lambda_a$ is introduced to balance the classification loss with the purification objective. The adversarial perturbation of the multi-objective attack can be described as \begin{equation} \label{eq:multi_atk_obj} \bm{\delta}_{\rm adv} = \argmax_{\|\bm{\delta}\|_{p} \leq \delta_{t}} \mathcal{L}(\bm{\theta},\mathbf{x}+\bm{\delta},\mathbf{y}) + \lambda_a \mathcal{F}(\mathbf{x} + \bm{\delta}). \end{equation} Many authors have discovered that adversarial purification is robust to multi-objective attacks \citep{shi2020online,mao2021adversarial} but not to the Backward Pass Differentiable Approximation (BPDA) attack \citep{athalye2018obfuscated,croce2022evaluating}. We briefly describe the BPDA attack here. We define an input vector as $\hat{\mathbf{x}}$. Consider the purification process as a function $\mathbf{x}_{\rm purify} = \mathbf{g}(\hat{\mathbf{x}})$ and a classifier as a function $\mathbf{h}$. A prediction can be described as $\mathbf{y} = \mathbf{h}(\mathbf{g}(\hat{\mathbf{x}}))$. To craft an adversarial example, attackers need the gradient $\nabla_\mathbf{x} \mathbf{h}(\mathbf{g}(\mathbf{x}))|_{\mathbf{x}=\hat{\mathbf{x}}}$; however, it is often difficult to calculate the gradient of the output with respect to the input for a purification process. The BPDA attack uses the straight-through estimator $\nabla_\mathbf{x} \mathbf{g}(\mathbf{x}) \approx \nabla_\mathbf{x} \mathbf{x} = 1$ to approximate the gradient. The gradient can then be approximated as $\nabla_\mathbf{x} \mathbf{h}(\mathbf{g}(\mathbf{x}))|_{\mathbf{x}=\hat{\mathbf{x}}} \approx \nabla_\mathbf{x} \mathbf{h}(\mathbf{x})|_{\mathbf{x}=\mathbf{g}(\hat{\mathbf{x}})}$. Many adversarial purification methods have been shown to be vulnerable to the BPDA attack. In this work, we will show that our proposed test-time defense approach provides reasonable performance even under the BPDA attack. \subsection{Variational Inference} In this section, we provide an overview of variational inference and the evidence lower bound (ELBO), which is related to our test-time defense. Consider an input vector $\mathbf{x}$ and its corresponding latent vector $\mathbf{z}$. We often want to obtain the posterior $p_{\bm{\theta}}(\mathbf{z}|\mathbf{x}) = p_{\bm{\theta}}(\mathbf{x}|\mathbf{z})p_{\bm{\theta}}(\mathbf{z}) / p_{\bm{\theta}}(\mathbf{x})$, where $\bm{\theta}$ parametrizes the distribution; however, calculation of $p_{\bm{\theta}}(\mathbf{x})$ is intractable. Therefore, an approximation of $p_{\bm{\theta}}(\mathbf{z}|\mathbf{x})$ is often obtained. Variational inference uses different parameters $\bm{\phi}$ to approximate the posterior $p_{\bm{\theta}}(\mathbf{z}|\mathbf{x})$ as $q_{\bm{\phi}}(\mathbf{z}|\mathbf{x})$. The parameters $\bm{\phi}$ are estimated by maximizing a lower bound of the log-likelihood $\log p_{\bm{\theta}}(\mathbf{x})$, which can be described as \begin{align} &\log p_{\bm{\theta}}(\mathbf{x}) \geq \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}{\left[\log p_{\bm{\theta}}(\mathbf{x} | \mathbf{z})\right]} - D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} ) \| p(\mathbf{z})] = \text{ELBO}.
\label{eq:elbo} \end{align} \section{Defending Adversarial Examples with the Manifold Hypothesis} In this section, we propose a novel framework of adversarial robustness and develop a test-time defense method to validate our framework. Our framework is based on the manifold hypothesis, which states that much real-world high-dimensional data $\mathbf{x} \in \mathbb{R}^n$ lies on a manifold $\mathcal{M}$ of lower dimension. We can model such data using a low-dimensional vector $\mathbf{z} \in \mathbb{R}^m$ which represents the coordinates on the low-dimensional manifold $\mathcal{M} \subset \mathbb{R}^n$. We lay out our theoretical formulation of a test-time defense framework under this assumption. \subsection{Theoretical Framework for Test-Time Defense with Manifold Learning} We define a ground truth generator (which is robust even for adversarial examples) as $\mathbf{G}: \mathbb{R}^n \to \{0,1\}^{c}$, where $c$ is the number of classes. The ground truth generator takes an image $\mathbf{x}$ and outputs a ground truth vector $\mathbf{y}_{\rm gt} = [y_1, y_2 ..., y_c]^\intercal = \mathbf{G}(\mathbf{x})$, where $y_i = 1$ if the image belongs to class $i$ and zero otherwise. If an input $\mathbf{x}$ does not have a clear semantic interpretation (e.g. at the boundary), the ground truth generator will output a vector of zeros. Considering two images $\mathbf{a}, \mathbf{b} \in \mathbb{R}^n$ with clear semantic interpretations (the sum of the elements of each label vector is 1), a sufficient but not necessary condition for $\mathbf{G}(\mathbf{a})=\mathbf{G}(\mathbf{b})$ is that $\|\mathbf{a} - \mathbf{b}\|_p \leq \tau$, where $\tau$ is a threshold value which ensures that semantic interpretations do not change between the two images. For example, given an image $\mathbf{a} \in [0,1]^n$ from SVHN, an image $\mathbf{b}$ belongs to the same class as $\mathbf{a}$ if $\|\mathbf{a} - \mathbf{b}\|_{\infty} \leq 8/255$. In other words, adversarial perturbations with a norm no greater than $8/255$ will not change the semantic interpretations of images, and $\tau$ equals $8/255$ in this example. Values of $\tau$ vary based on the type of norm. We define an encoder as $\mathbf{f}:\mathbb{R}^n \to \mathbb{R}^m$ and a decoder as $\mathbf{f}^{\dagger}:\mathbb{R}^m \to \mathbb{R}^n$. An encoding process is defined as $\mathbf{z} = \mathbf{f}(\mathbf{x})$ and a decoding process is defined as $\hat{\mathbf{x}} = \mathbf{f}^{\dagger}(\mathbf{z})$. An encoding-decoding process is defined as the composition $\mathbf{f}^{\dagger} \circ \mathbf{f}$. We assume that the encoding-decoding process performs well on the space of natural images $\mathcal{X}_{\rm nat}$ and that all natural images have clear semantic interpretations. In other words, for all $\mathbf{x}$ in $\mathcal{X}_{\rm nat}$, we have $\|\mathbf{x} - (\mathbf{f}^{\dagger}\circ \mathbf{f})(\mathbf{x})\|_p \leq \tau$ and $\mathbf{G}(\mathbf{x}) = (\mathbf{G} \circ \mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x})$. The encoding process projects all points (including off-manifold adversarial examples) to the (learned) manifold $\mathcal{M} \subset \mathbb{R}^n$, while all decoding outputs lie on that manifold. In other words, for all $\mathbf{x} \in \mathbb{R}^n$ with $\mathbf{x} \notin \mathcal{M}$, we have $(\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}) \in \mathcal{M}$.
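To make the role of the threshold $\tau$ concrete, the on-manifold criterion $\|\mathbf{x} - (\mathbf{f}^{\dagger}\circ \mathbf{f})(\mathbf{x})\|_p \leq \tau$ can be checked numerically for a trained encoder-decoder pair. The following is a minimal sketch rather than our exact implementation; the modules \texttt{f} and \texttt{f\_dagger} and the value of \texttt{tau} are placeholders:
\begin{verbatim}
import torch

def on_manifold(x, f, f_dagger, tau, p=float("inf")):
    # Check ||x - (f_dagger o f)(x)||_p <= tau for each sample in the batch.
    # x: (B, C, H, W) images in [0, 1]; f / f_dagger: hypothetical modules.
    with torch.no_grad():
        recon = f_dagger(f(x))                   # (f_dagger o f)(x)
        diff = (x - recon).flatten(start_dim=1)  # per-sample residual
        dist = diff.norm(p=p, dim=1)             # L_p reconstruction distance
    return dist <= tau                           # True: treated as on-manifold
\end{verbatim}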
We define an ideal classifier (without generalization error) on the manifold $\mathcal{M}$ as $\mathbf{h}_{\rm idl}: \mathbb{R}^m \to \{0,1\}^{c}$, which outputs a class prediction $\mathbf{y}_{\rm pred} = \mathbf{h}_{\rm idl}(\mathbf{z})$ given a latent vector $\mathbf{z} \in \mathbb{R}^m$. \begin{definition} (Ideal classifier on the manifold $\mathcal{M}$). An ideal classifier $\mathbf{h}_{\rm idl}$ on the manifold $\mathcal{M}$ satisfies the following condition: for all $\mathbf{z} \in \mathbb{R}^m$, $\mathbf{h}_{\rm idl}(\mathbf{z}) = (\mathbf{G}\circ \mathbf{f}^{\dagger})(\mathbf{z})$. \end{definition} A classifier on the manifold $\mathcal{M}$ is an ideal classifier if its predictions are perfectly correlated with the semantic interpretations of the decoding outputs. If the decoding outputs do not have semantic interpretations, the ideal classifier outputs an all-zero vector. In reality, correlation between a classifier on the manifold and a decoder on the manifold is easier to achieve than adversarial robustness for an encoder in the image space. Many studies have shown that high-dimensional inputs are more vulnerable to adversarial attacks; therefore, if the dimension of the manifold is lower, models applied on the manifold should be more robust. We define an adversarial example crafted based on a natural image $\mathbf{x}$ as $\mathbf{x}_{\rm adv} = \mathbf{x} + \bm{\delta}_{\rm adv}$, and we have $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv}) \neq \mathbf{G}(\mathbf{x}_{\rm adv})$. In other words, the adversarial example causes failure of the ideal classifier on the manifold. \begin{lemma} If an adversarial example $\mathbf{x}_{\rm adv}$ causes $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv}) \neq \mathbf{G}(\mathbf{x}_{\rm adv})$, then we have \\ $\|\mathbf{x}_{\rm adv} - (\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv})\|_p > \tau$. \end{lemma} \begin{proof} We argue by contradiction. Assume that $\mathbf{x}_{\rm adv}$ is an adversarial example which causes $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv}) \neq \mathbf{G}(\mathbf{x}_{\rm adv})$ and that $\|\mathbf{x}_{\rm adv} - (\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv})\|_p \leq \tau$. Then $\mathbf{G}(\mathbf{x}_{\rm adv}) \neq (\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv}) = (\mathbf{G} \circ \mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv})$. The ground truth generator outputs different labels for $\mathbf{x}_{\rm adv}$ and $(\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv})$, which implies $\|\mathbf{x}_{\rm adv} - (\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv})\|_p > \tau$ and contradicts the assumption. \end{proof} \begin{theorem} If a vector $\bm{\epsilon}$ ensures that $\|(\mathbf{x}_{\rm adv} + \bm{\epsilon}) - (\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})\|_p \leq \tau$ and $\mathbf{G}(\mathbf{x}_{\rm adv} + \bm{\epsilon}) = \mathbf{G}(\mathbf{x})$, where $\mathbf{x}$ is a natural image, then $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon}) = \mathbf{G}(\mathbf{x})$. \end{theorem} \begin{proof} $\|(\mathbf{x}_{\rm adv} + \bm{\epsilon}) - (\mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})\|_p \leq \tau$ is a sufficient condition for $\mathbf{G}(\mathbf{x}_{\rm adv} + \bm{\epsilon}) = (\mathbf{G} \circ \mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})$.
Based on the definition of an ideal classifier on the manifold, $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})=(\mathbf{G} \circ \mathbf{f}^{\dagger} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})$. Then we have $(\mathbf{h}_{\rm idl} \circ \mathbf{f})(\mathbf{x}_{\rm adv} + \bm{\epsilon})=\mathbf{G}(\mathbf{x}_{\rm adv} + \bm{\epsilon})=\mathbf{G}(\mathbf{x})$. \end{proof} This theorem implies that if we can find vectors (satisfying the above conditions) and add them to adversarial examples, predictions from the ideal classifier on the manifold will be the same as the ground truth labels. This proves that our formulation is sufficient to defend against adversarial examples. We summarize the key assumptions of our formulation: \begin{itemize} \item The encoding-decoding process performs well on natural images. \item Adding a vector to an adversarial example does not change the semantic interpretation. \item A classifier $\mathbf{h}_{\rm idl}$ on the manifold that is perfectly correlated with the decoding process. \end{itemize} The first assumption is realistic, and standard autoencoders should satisfy it. For the second assumption, if the $L_p$ norm of the vector is small, the semantic interpretation should be the same. However, in reality, it is difficult to obtain an ideal classifier on the manifold even though the dimension of the manifold is much lower. Thus, we need to relax the last assumption. The Bayesian method provides a mathematically grounded framework to quantify the reliability of a prediction; therefore, we assume that predictions made from regions with high posterior density $p(\mathbf{z}|\mathbf{x})$ are close to predictions from an ideal classifier on the manifold. The exact posterior density $p(\mathbf{z}|\mathbf{x})$ is difficult to calculate, so we use variational inference to approximate it as $q(\mathbf{z} | \mathbf{x})$. \subsection{Jointly Training the ELBO with a Classification Task} In this section, we validate our developed framework with variational inference. We assume that the latent vector $\mathbf{z}$ contains information associated with the label of the input image $\mathbf{x}$. We define a one-hot encoded image label as $\mathbf{y} = [y_1, y_2 ..., y_c]^\intercal$, where $c$ is the number of classes and $y_i = 1$ if the image label is $i$ and zero otherwise. We define a classification function parametrized by $\bm{\psi}$ as $\mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z}) = [h(y_1|\mathbf{z}), h(y_2|\mathbf{z}) ..., h(y_c|\mathbf{z})]^\intercal$. The cross-entropy (classification) loss is defined as $-\mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}[\mathbf{y}^\intercal \log \mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z})]$, which we assume to be smaller than a threshold $T$. The objective of the optimization can then be represented as \begin{align} &\max_{\bm{\theta} , \bm{\phi}} \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}{\left[\log p_{\bm{\theta}}(\mathbf{x} | \mathbf{z})\right]} - D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} ) \| p(\mathbf{z})] \\ &\text{ s.t. } -\mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}[\mathbf{y}^\intercal \log \mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z})] \leq T.
\label{eq:joint_elbo_cls} \end{align} We use a Lagrange multiplier with KKT conditions to optimize this objective as \begin{align} &\max_{\bm{\theta} ,\bm{\phi}, \bm{\psi}} \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}{\left[\log p_{\bm{\theta}}(\mathbf{x} | \mathbf{z})\right]} - D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} ) \| p(\mathbf{z})] + \lambda \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}[\mathbf{y}^\intercal \log \mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z})], \label{eq:joint_elbo_cls_kkt} \end{align} where $\lambda$ is a trade-off term that balances the ELBO with the classification loss. We follow \cite{DBLP:journals/corr/KingmaW13} and define both the prior $p(\mathbf{z})$ and the posterior $q(\mathbf{z}|\mathbf{x})$ using normal distributions with diagonal covariance. Given an input vector $\mathbf{x}$, an inference model parametrized by $\bm{\phi}$ is used to model the posterior distribution $q(\mathbf{z}|\mathbf{x})$. The model outputs the mean vector $\bm{\mu}_{\bm{\phi}}(\mathbf{x}) = [\mu_1(\mathbf{x}), \mu_2(\mathbf{x})...,\mu_J(\mathbf{x})]^\intercal$ and the diagonal covariance $\bm{\sigma}_{\bm{\phi}}^2(\mathbf{x}) = [\sigma_1^2(\mathbf{x}), \sigma_2^2(\mathbf{x})...,\sigma_J^2(\mathbf{x})]^\intercal$. Given the two Gaussian distributions $q_{\bm{\phi}}(\mathbf{z}|\mathbf{x}) = \mathcal{N}\left(\bm{\mu}_{\bm{\phi}}(\mathbf{x}), \text{diag}(\bm{\sigma}^2_{\bm{\phi}}(\mathbf{x}))\right)$ and $p(\mathbf{z})=\mathcal{N}(0,I)$, the KL divergence can be calculated in closed form as $D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} ) \| p(\mathbf{z})] = -\frac{1}{2} \sum_{j=1}^J \left(1 + \log ((\sigma_j(\mathbf{x}))^2) - (\mu_j(\mathbf{x}))^2 - (\sigma_j(\mathbf{x}))^2 \right)$. We define $p_{\bm{\theta}}(\mathbf{x} | \mathbf{z}) = \frac{1}{\beta}\exp(-\frac{\|\mathbf{x} - f_{\bm{\theta}}^{\dagger}(\mathbf{z})\|^2_2}{\gamma})$, where $\gamma$ controls the variance, $\beta$ is a normalization factor and $f_{\bm{\theta}}^{\dagger}$ is a decoder function, parametrized by $\bm{\theta}$, which maps data from the latent space to the image space. $\mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}{\left[\log p_{\bm{\theta}}(\mathbf{x} | \mathbf{z})\right]}=-\frac{1}{\gamma}\mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}\left[\|\mathbf{x} - f_{\bm{\theta}}^{\dagger}(\mathbf{z})\|^2_2\right] - \log \beta $ is the negative weighted reconstruction error. An example network architecture described by equation \ref{eq:joint_elbo_cls_kkt} can be found in figure \ref{fig:jae_example} (a). The architecture is similar to the architecture proposed in \cite{bhushan2020variational}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{./fig/joint_arch.PNG} \end{center} \caption{(a) Jointly training the ELBO with the classification task. (b) Data reconstructions and predictions on clean and adversarial inputs. When inputs are clean, predictions and reconstructions are both normal. When inputs are attacked, both predictions and reconstructions become abnormal. In this situation, the classifier has a strong correlation with the decoder since predictions and reconstructions have similar semantic interpretations.} \label{fig:jae_example} \end{figure} \subsection{Adversarial Attack and Test-Time Defense with Variational Inference} At inference time, attackers may craft adversarial perturbations against the classification head to cause prediction failures.
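Before formalizing the attack, we give for concreteness a minimal sketch of the joint training objective in equation \ref{eq:joint_elbo_cls_kkt} that the attacker targets. This is an illustration under simplifying assumptions rather than our exact implementation; the modules \texttt{encoder}, \texttt{decoder} and \texttt{classifier} are placeholders, and labels are taken as integer class indices rather than one-hot vectors:
\begin{verbatim}
import torch
import torch.nn.functional as F

def joint_loss(x, y, encoder, decoder, classifier, lam=8.0, gamma=1.0):
    # Negative of the joint objective (up to constants); minimizing this
    # maximizes ELBO + lam * expected log-likelihood of the labels.
    mu, logvar = encoder(x)                                  # q(z|x) parameters
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparametrization
    recon = decoder(z)
    # -E[log p(x|z)] up to constants: weighted squared reconstruction error
    rec = (x - recon).flatten(start_dim=1).pow(2).sum(dim=1).mean() / gamma
    # closed-form KL between N(mu, diag(sigma^2)) and N(0, I)
    kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    ce = F.cross_entropy(classifier(z), y)                   # classification head
    return rec + kld + lam * ce
\end{verbatim}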
An attack objective could maximize the cross entropy loss as \begin{equation} \label{eq:atk_jointly_obj} \bm{\delta}_{\rm adv} = \argmax_{\|\bm{\delta}\|_{p} \leq \delta_{t}} -\mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} + \bm{\delta})}[\mathbf{y}^\intercal \log \mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z})]. \end{equation} We observe that the adversarial attack on the classification head $\mathbf{h}_{\bm{\psi}}(\mathbf{y}|\mathbf{z})$ can change the reconstructions of the decoder as well as increase the negative ELBO, as shown in figures \ref{fig:jae_example} (b) and \ref{fig:rev_curve}. These phenomena indicate that the classification head and the decoder are strongly correlated. Therefore, based on our theoretical framework, to defend against adversarial attacks we can obtain purification vectors which project points to regions with higher posterior density $q_{\bm{\phi}}(\mathbf{z} | \mathbf{x})$ and lower reconstruction loss. The ELBO consists of the reconstruction loss as well as the KL divergence, and it can estimate the posterior $q_{\bm{\phi}}(\mathbf{z} | \mathbf{x})$. Therefore, we propose a test-time defense method which optimizes the ELBO during test time in order to purify attacks, as shown by the green curve in figure \ref{fig:rev_curve}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.7]{./fig/rev_curve.PNG} \end{center} \caption{The negative ELBO of MNIST and Fashion-MNIST on clean, adversarial and purified data. The ELBO consists of the reconstruction loss as well as the KL divergence. Adversarial attacks make the ELBO lower, while our test-time defense reverses the shift.} \label{fig:rev_curve} \end{figure} We define a clean image as $\mathbf{x}$, an adversarial perturbation as $\bm{\delta}_{\rm adv}$ and an adversarial example as $\mathbf{x}_{\rm adv} = \mathbf{x} + \bm{\delta}_{\rm adv}$; our objective is to find a purification vector $\bm{\epsilon}_{\rm purify}$ which reverses the attack as $\mathbf{x}_{\rm purify} = \mathbf{x}_{\rm adv} + \bm{\epsilon}_{\rm purify}$. Since we do not know the attack perturbation, we need to estimate the best purification vector with a norm smaller than $\epsilon_{t}$ in order to avoid changing the semantic interpretation of the image. In this work, we use the ELBO to determine the vector as \begin{equation} \label{eq:purify_jointly_obj} \bm{\epsilon}_{\rm purify} = \argmax_{\|\bm{\epsilon}\|_{\infty} \leq \epsilon_{t}} \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x}_{\rm adv} + \bm{\epsilon})}{\left[\log p_{\bm{\theta}}(\mathbf{x}_{\rm adv} + \bm{\epsilon}| \mathbf{z})\right]} - D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x}_{\rm adv} + \bm{\epsilon} ) \| p(\mathbf{z})]. \end{equation} The process of generating a purification vector is similar to the process of generating an adversarial perturbation; the only difference is the objective. We define $\alpha$ as the learning rate and $\text{Proj}_{\hood}$ as the projection operator which projects a point back into the feasible region if it lies outside of it. We use a clipping function as the projection operator, which ensures that $\|\mathbf{x}_{\rm purify} - \mathbf{x}\|_{\infty} = \|\bm{\epsilon}_{\rm purify}\|_{\infty} \leq \epsilon_t$, where $\mathbf{x}$ is the input (adversarial) image and $\epsilon_t$ is the purification budget, which should be small since otherwise the purification may change the semantic interpretation of the image.
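The projected gradient ascent just described (formalized as equation \ref{eq:iterative_purify} below) can be sketched as follows. This is a minimal illustration; \texttt{elbo} is assumed to be a differentiable function estimating the ELBO for a batch of inputs, and the clamp of the image into $[0,1]$ is an additional, natural projection:
\begin{verbatim}
import torch

def purify(x_adv, elbo, eps_t=8/255, alpha=1/255, steps=32):
    # Test-time defense: sgn-gradient ascent on the ELBO, L_inf projection.
    eps = torch.empty_like(x_adv).uniform_(-eps_t, eps_t)  # random init
    for _ in range(steps):
        eps.requires_grad_(True)
        score = elbo(x_adv + eps).sum()              # maximize the ELBO
        grad = torch.autograd.grad(score, eps)[0]
        with torch.no_grad():
            eps = eps + alpha * grad.sign()          # ascent step
            eps = eps.clamp(-eps_t, eps_t)           # Proj: L_inf budget
            eps = (x_adv + eps).clamp(0, 1) - x_adv  # keep valid pixel range
    return (x_adv + eps).detach()
\end{verbatim}
In our experiments the purification additionally uses random initialization and several random restarts, as described in the implementation section below.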
We define $\mathcal{F}(\mathbf{x}; \bm{\theta}, \bm{\phi}) = \mathbb{E}_{\mathbf{z} \sim q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} )}{\left[\log p_{\bm{\theta}}(\mathbf{x} | \mathbf{z})\right]} - D_{\rm KL}[q_{\bm{\phi}}(\mathbf{z} | \mathbf{x} ) \| p(\mathbf{z})]$. An iterative optimization process given an adversarial example $\mathbf{x}_{\rm adv}$ is defined as follows: \begin{equation} \label{eq:iterative_purify} \bm{\epsilon}^{t+1} = \text{Proj}_{\hood} \left( \bm{\epsilon}^t + \alpha\operatorname{sgn}(\nabla_{\mathbf{x}} \mathcal{F}(\mathbf{x}_{\rm adv} + \bm{\epsilon}^t; \bm{\theta}, \bm{\phi}))\right). \end{equation} The purified example can then be obtained as $\mathbf{x}_{\rm purify} = \mathbf{x}_{\rm adv} + \bm{\epsilon}_{\rm purify}$. This approach can be described as a test-time optimization of the ELBO which projects adversarial examples back to regions with high posterior $q(\mathbf{z}|\mathbf{x})$ (where the classifier on the manifold is reliable) and small reconstruction loss (which defends against adversarial attacks). We train a model with a latent dimension of 2 on the Fashion-MNIST dataset and show examples of clean, attacked and purified trajectories in figure \ref{fig:rev_traj}. We observe that adversarial attacks are likely to push latent vectors to abnormal regions, which causes abnormal reconstructions. Through test-time optimization of the ELBO, latent vectors can be brought back to their original regions. Instead of pursuing adversarial robustness directly in the input image space, we transform the objective into achieving a strong correlation between the classification head and the decoder, which is easier since the dimension of the latent space is much lower. In the next section, we show that even if attackers are aware of the test-time defense, our approach can still achieve reasonable adversarial robustness. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.39]{./fig/reverse_trajectory.PNG} \end{center} \caption{Trajectories of clean (green) - attacked (red) - purified (blue) samples in a 2D latent space. The red box on the left corresponds to the failure case (where the purification process fails) shown on the right.} \label{fig:rev_traj} \end{figure} \section{Implementation and Experiments} Implementation details and experimental results are described in this section. We use MNIST \citep{lecun2010mnist}, Fashion-MNIST \citep{xiao2017/online}, SVHN \citep{netzer2011reading} and CIFAR10 \citep{Krizhevsky09learningmultiple} to evaluate our proposed test-time defense method. \subsection{Model Architecture and Hyperparameters} We add a linear classification head on top of the latent vector $\mathbf{z}$ and use the cross-entropy loss to optimize the classification head. The weight of the classification loss is set to 8 ($\lambda=8$ in equation \ref{eq:joint_elbo_cls_kkt}) and we define $\log p(\mathbf{x}|\mathbf{z})$ as the reconstruction loss ($\gamma=1$). In terms of model architecture, we use several residual blocks \citep{he2016deep} to construct the encoder (4 residual blocks) and the decoder (6 residual blocks). The filter size is set to $3 \times 3$, and we use max pooling to reduce dimensions and upsampling to increase dimensions. For MNIST and Fashion-MNIST, we set the number of filters in each residual block to 64. For SVHN, we set the number of filters in each residual block to 128. For CIFAR10, we set the number of filters in each residual block to 256. These numbers were selected to obtain reasonable performance on clean data.
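A schematic of this kind of residual encoder with Gaussian posterior heads is given below. The exact layer layout, pooling positions and latent dimension are illustrative assumptions, not our precise architecture:
\begin{verbatim}
import torch.nn as nn

class ResBlock(nn.Module):
    # 3x3 convolutional residual block, as used in encoder and decoder.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

class Encoder(nn.Module):
    # Residual encoder producing the mean and log-variance of q(z|x).
    def __init__(self, in_ch=3, ch=128, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1),
            ResBlock(ch), nn.MaxPool2d(2),   # max pooling reduces dimensions
            ResBlock(ch), nn.MaxPool2d(2),
            ResBlock(ch), ResBlock(ch),      # 4 residual blocks in total
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(ch, latent_dim)       # mean head
        self.logvar = nn.Linear(ch, latent_dim)   # diagonal covariance head

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)
\end{verbatim}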
For MNIST and Fashion-MNIST, we train the model for 256 epochs using the Adam optimizer \citep{kingma2015adam} with a learning rate of $10^{-4}$. For SVHN, we train the model for 1024 epochs using the Adam optimizer starting with a learning rate of $10^{-4}$ and divide it by 10 at the 512th epoch. For CIFAR10, we train the model for 2048 epochs using the Adam optimizer starting with a learning rate of $10^{-4}$ and divide it by 10 at the 1024th epoch. Batch sizes are set to 256 for all experiments. \subsection{Adversarial Attacks and Purifications} We evaluate our proposed approach on standard adversarial attacks as well as defense-aware adversarial attacks. For standard adversarial attacks, attackers only attack the classification head of the model as shown in equation \ref{eq:atk_jointly_obj}. In this work, we focus on $L_\infty$ and $L_2$ attacks. We use Foolbox \citep{rauber2017foolbox} to craft the $L_\infty$-FGSM attack as well as the $L_\infty$-PGD attack. In terms of $L_\infty$-AutoAttack (AA-$L_\infty$) and $L_2$-AutoAttack (AA-$L_2$), we use the implementation in \cite{croce2020reliable}. For MNIST and Fashion-MNIST, we set $L_\infty=50/255$ and $L_2=765/255$. For SVHN and CIFAR10, we set $L_\infty=8/255$ and $L_2=127.5/255$. We run the $L_\infty$-PGD attack with 200 iterations (step size $2/255$) for MNIST and Fashion-MNIST. We run the $L_\infty$-PGD attack with 100 iterations (step size $2/255$) for SVHN and CIFAR10. Defense-aware attacks consider both the classification and the purification objectives, which makes them more computationally expensive; therefore, we apply such attacks with different hyperparameters. We do observe that these attacks converge with smaller iteration numbers. We use the $L_\infty$-PGD attack implemented in Foolbox and the $L_\infty$-APGD-CE attack implemented in Torchattacks \citep{kim2020torchattacks} to craft the BPDA attack. For MNIST and Fashion-MNIST, we set $L_\infty=50/255$ with 100 iterations. For SVHN and CIFAR10, we set $L_\infty=8/255$ with 50 iterations. In terms of test-time defense, the purification budget is set to $L_\infty=50/255$ with 96 iterations for MNIST and Fashion-MNIST. The purification budget is set to $L_\infty=8/255$ with 32 iterations for SVHN and CIFAR10. We use random initialization for the purification vector $\bm{\epsilon}_{\rm purify}$ and use 16 random restarts to select the best score. Step sizes alternate between $1/255$ and $2/255$ for different runs. While these hyperparameters are chosen to overestimate the attack hyperparameters, we observe that our proposed approach works for other hyperparameter settings as well. \subsection{Experiments and Results} In this section, we describe the experimental results. To validate our theoretical framework, we also perform experiments on standard autoencoders (trained with reconstruction loss), which are similar to VAE models. At test time, we minimize the reconstruction loss of the standard autoencoders to defend against attacks. In the following sections, we use VAE-Classifier and Standard-AE-Classifier to distinguish the two architectures. Classifiers of Standard-AE-Classifier models may not have strong correlations with their decoders since there is no incentive for such correlation during training. \subsubsection{Standard Adversarial Attacks and Test-Time Defense} We first evaluate the performance of standard adversarial attacks, where attackers only attack classification heads.
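For reference, this standard attack (PGD against the classification head, cf.~equation \ref{eq:atk_jointly_obj}) can be sketched as follows. This is a minimal illustration rather than the Foolbox implementation we actually use; the \texttt{encoder} and \texttt{classifier} modules are placeholders, and the latent vector is taken at the posterior mean for simplicity:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pgd_on_classifier(x, y, encoder, classifier,
                      delta_t=8/255, alpha=2/255, steps=100):
    # PGD on the classification head only; y holds integer class indices.
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        mu, _ = encoder(x + delta)                # mean of q(z|x+delta)
        loss = F.cross_entropy(classifier(mu), y) # maximize classification loss
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = (delta + alpha * grad.sign()).clamp(-delta_t, delta_t)
            delta = (x + delta).clamp(0, 1) - x   # keep valid pixel range
    return (x + delta).detach()
\end{verbatim}
The FGSM attack corresponds to a single step of this loop with the step size set to the full budget $\delta_t$.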
We observe that, for MNIST, Fashion-MNIST and SVHN, adversarial attacks on the classification heads of VAE-Classifier models also create abnormal reconstruction images, while Standard-AE-Classifier models do not show this behavior, which agrees with our hypothesis. Figure \ref{fig:VAE_atk} shows various sample predictions and reconstructions on clean, attacked and purified data using VAE-Classifier models. Figure \ref{fig:AE_atk} shows various sample predictions and reconstructions on clean, attacked and purified data using Standard-AE-Classifier models. For VAE-Classifier models, abnormal reconstructions during attacks are correlated with abnormal predictions from the classification heads. In other words, if the prediction of an adversarial example is 2, its reconstruction may also look like a 2. We also show that with our test-time defense, both reconstructions and predictions can be restored, which further validates our framework of defending against adversarial attacks. Our results on CIFAR10 are slightly different compared with our results on MNIST, Fashion-MNIST and SVHN. According to figure \ref{fig:cifar_AE_VAE}, we observe that the correlation between the classification head and the decoder is not strong for the VAE-Classifier model trained on CIFAR10; in the meantime, reconstructions of the VAE-Classifier model are blurry. Changes of semantic interpretation in the reconstructions during attacks happen less frequently for CIFAR10, but we still observe some samples and show them in figure \ref{fig:cifar_AE_VAE}. Although the VAE-Classifier model does not satisfy the assumptions of our framework (high correlation between the decoder and the classification head as well as low reconstruction error), it still provides reasonable adversarial robustness. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{./fig/VAE_attack.PNG} \end{center} \caption{Performance of VAE-Classifier models on some clean samples (green box), adversarial samples (red box) and purified samples (blue box) from MNIST, Fashion-MNIST and SVHN. We observe that predictions and reconstructions are correlated for VAE-Classifier models. Therefore, test-time defenses are effective in defending against adversarial attacks.} \label{fig:VAE_atk} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{./fig/AE_attack.PNG} \end{center} \caption{Performance of Standard-AE-Classifier models on some clean samples (green box), adversarial samples (red box) and purified samples (blue box) from MNIST, Fashion-MNIST and SVHN. We do not observe strong correlations between predictions and reconstructions for Standard-AE-Classifier models. Standard-AE-Classifier models have better reconstruction quality, but test-time defenses do not work well on them.} \label{fig:AE_atk} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{./fig/cifar_AE_VAE.PNG} \end{center} \caption{Performance of models on some clean samples (green box), adversarial samples (red box) and purified samples (blue box) from CIFAR10. Left is the VAE-Classifier model, right is the Standard-AE-Classifier model. Compared with models trained on other datasets, the VAE-Classifier model trained on CIFAR10 does not satisfy the assumptions of our framework, but the test-time defense can still achieve reasonable performance.} \label{fig:cifar_AE_VAE} \end{figure} We show empirical distributions of the ELBO on clean data, adversarial data (PGD) as well as purified data from MNIST and Fashion-MNIST in figure \ref{fig:rev_curve}.
We show empirical distributions of the ELBO on clean data, adversarial data (PGD), and purified data from SVHN and CIFAR10 in figure \ref{fig:elbo_rec_kld}. Compared with MNIST and Fashion-MNIST, SVHN and CIFAR10 have more diverse backgrounds; thus, the shifts in the ELBO are less obvious. However, we can still observe that adversarial attacks lower the ELBO of SVHN and CIFAR10 samples, and that our test-time defense reverses those shifts to defend against the attacks. We record the accuracy of Standard-AE-Classifier models on clean data and under attacks, together with their accuracy under the test-time defense, in Table \ref{tbl:rev_performacne_standard}. We record the corresponding results for VAE-Classifier models in Table \ref{tbl:rev_performacne}. We observe that VAE-Classifier models already show some robustness against adversarial attacks without adversarial training. With the test-time defense, their performance becomes on par with various adversarially trained models. We list some works that also perform test-time optimization in the image space for comparison. PixelDefend \citep{song2018pixeldefend} achieves a 46\% purified accuracy (PGD) with a non-adversarially-trained ResNet for CIFAR10. SOAP \citep{shi2020online} achieves a 53.58\% purified accuracy (PGD20) with a non-adversarially-trained Wide-ResNet-28 for CIFAR10. \cite{mao2021adversarial} reports (in the appendix) a 34.4\% purified accuracy with a non-adversarially-trained PreRes-18 model for CIFAR10 and a 65.50\% purified accuracy (AutoAttack) on an adversarially trained SVHN model. In terms of adversarial training, \cite{rebuffi2021fixing} reports a 61.09\% robust accuracy with an adversarially trained SVHN model and a 66.56\% robust accuracy with an adversarially trained CIFAR10 model. Compared with these results from the literature, our proposed test-time defense on VAE-Classifier models achieves competitive performance without the need for adversarial training. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.39]{./fig/ELBO_RECKLD.PNG} \end{center} \caption{The negative ELBO of SVHN and CIFAR10 on clean, adversarial and purified data. Compared with MNIST and Fashion-MNIST, the shifts are less obvious but still observable.} \label{fig:elbo_rec_kld} \end{figure} \begin{table}[h!] \centering \tabcolsep=3pt \caption{Performance of Standard-AE-Classifier models on clean data, adversarial data, and purified data. } \vspace{-8pt} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccccc|ccccc} \hline & \multicolumn{5}{c|}{Standard-AE-Classifier} & \multicolumn{5}{c}{Standard-AE-Classifier + Test-Time Defense} \\ Dataset & Clean & FGSM & PGD & AA-$L_\infty$ & AA-$L_2$ & Clean & FGSM & PGD & AA-$L_\infty$ & AA-$L_2$ \\ \hline \hline MNIST & 99.26 & 44.43 & 0.0 & 0.0 & 0.0 & 99.21 & 89.60 & 90.8 & 91.21 & 20.8 \\ \hline Fashion-MNIST & 92.24 & 10.05 & 0.0 & 0.0 & 0.0 & 88.24 & 28.72 & 17.83 & 14.57 & 10.91 \\ \hline SVHN & 94.00 & 8.88 & 0.0 & 0.0 & 1.67 & 93.03 & 40.10 & 40.83 & 44.82 & 59.93 \\ \hline CIFAR10 & 90.96 & 6.42 & 0.0 & 0.0 & 0.98 & 87.80 & 19.36 & 11.65 & 13.74 & 44.57 \\ \hline \end{tabular} } \label{tbl:rev_performacne_standard} \vspace{16pt} \centering \caption{Performance of VAE-Classifier models on clean data, adversarial data, and purified data.
} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccccc|ccccc} \hline & \multicolumn{5}{c|}{VAE-Classifier} & \multicolumn{5}{c}{VAE-Classifier + Test-Time Defense} \\ Dataset & Clean & FGSM & PGD & AA-$L_\infty$ & AA-$L_2$ & Clean & FGSM & PGD & AA-$L_\infty$ & AA-$L_2$ \\ \hline \hline MNIST & 99.33 & 49.20 & 2.21 & 0.0 & 0.0 & 99.17 & 93.31 & \textbf{86.33} & \textbf{85.20} & 81.36 \\ \hline Fashion-MNIST & 92.33 & 36.96 & 0.0 & 0.0 & 0.0 & 84.79 & 71.09 & \textbf{73.44} & \textbf{71.48} & 73.64 \\ \hline SVHN & 95.27 & 70.24 & 16.01 & 0.33 & 6.61 & 86.29 & 75.40 & \textbf{72.72} & \textbf{73.47} & 76.21 \\ \hline CIFAR10 & 91.82 & 54.55 & 17.82 & 0.05 & 2.36 & 77.97 & 59.51 & \textbf{57.21} & \textbf{58.78} & 63.38 \\ \hline \end{tabular} } \begin{tablenotes} \item\textbf{Table Notes: } FGSM represents the Fast Gradient Sign Method. PGD represents Projected Gradient Descent. AA represents AutoAttack. We use a ResNet-like model in our experiments; thus, we show results from the literature that also use ResNets and the PGD attack, as well as perform test-time optimization in the image space. PixelDefend \citep{song2018pixeldefend} achieves a 46\% purified accuracy (PGD) with a non-adversarially-trained ResNet for CIFAR10. SOAP \citep{shi2020online} achieves a 53.58\% purified accuracy (PGD20) with a non-adversarially-trained Wide-ResNet-28 for CIFAR10. \cite{mao2021adversarial} reports (in the appendix) a 34.4\% purified accuracy with a non-adversarially-trained PreRes-18 model for CIFAR10 and a 65.50\% purified accuracy (AutoAttack) on an adversarially trained SVHN model. \cite{rebuffi2021fixing} reports a 61.09\% robust accuracy with an adversarially trained SVHN model and a 66.56\% robust accuracy with an adversarially trained CIFAR10 model. We highlight the test-time defense numbers for PGD and AutoAttack in the table for easier comparison. \end{tablenotes} \label{tbl:rev_performacne} \vspace{-6pt} \end{table} \subsubsection{Defense-Aware Attacks and Test-Time Defense} Standard adversarial examples only attack classification heads; however, if attackers are aware of a defense mechanism, it is common for them to take the defense mechanism into account during attack generation. The optimal defense-aware attack against our proposed method is a second-order optimization problem, but this approach needs the Hessian matrix in order to craft the attack, which is computationally expensive. We therefore consider a multi-objective attack as well as the BPDA attack \citep{athalye2018obfuscated} as defense-aware attacks in this work. We start with the multi-objective attack; a sketch of its attack step is given below for reference. We use Fashion-MNIST to perform the experiment and plot the accuracy versus the trade-off term $\lambda_a$ (equation \ref{eq:multi_atk_obj}) in figure \ref{fig:DA_attack}. We observe that the multi-objective optimization decreases the performance of the attacks and has no observable impact on our test-time defense. Therefore, our proposed approach is robust against the multi-objective attack. \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{./fig/defense_aware.PNG} \end{center} \caption{Multi-objective attack and purified accuracy. As the trade-off term increases, attack performance decreases, while the impact on our test-time defense remains unobservable.} \label{fig:DA_attack} \end{figure} Another representative defense-aware attack is the BPDA attack. We evaluate Standard-AE-Classifier models and VAE-Classifier models on the BPDA attack and show the results in table \ref{tbl:bpda_comp}.
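The multi-objective attack step can be sketched as follows. This is a sketch under one plausible reading of equation \ref{eq:multi_atk_obj} (the attacker ascends the classification loss while being rewarded for keeping the ELBO high, so that the purification has less to undo); \texttt{model.classify} and \texttt{elbo} are assumed interfaces rather than our exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def multi_objective_step(model, x, x_adv, y, lam_a, eps=8/255, step=2/255):
    # One PGD-style step of the multi-objective, defense-aware attack.
    x_adv = x_adv.clone().requires_grad_(True)
    logits = model.classify(x_adv)       # classification head (assumed API)
    loss = F.cross_entropy(logits, y) + lam_a * elbo(model, x_adv)
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        x_adv = x_adv + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
\end{verbatim}
As $\lambda_a$ grows, the classification term receives relatively less of the perturbation budget, which is consistent with the drop in attack strength observed in figure \ref{fig:DA_attack}.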
VAE-Classifier models show reasonable performance across multiple datasets even under the BPDA attack, while Standard-AE-Classifier models only show robustness on the MNIST dataset. PixelDefend \citep{song2018pixeldefend} obtains a 9\% accuracy for CIFAR10 under the BPDA attack \citep{athalye2018obfuscated}. SOAP \citep{shi2020online} obtains a 3.7\% accuracy for CIFAR10 under the BPDA attack \citep{croce2022evaluating}. \begin{table}[h!] \centering \tabcolsep=3pt \caption{Performance of models under the BPDA attack, and the purified BPDA accuracy.} \vspace{-8pt} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{l|cc|cc|cc|cc} \hline & \multicolumn{4}{c|}{Standard-AE-Classifier} & \multicolumn{4}{c}{VAE-Classifier} \\ Dataset & PGD & APGD & TD-PGD & TD-APGD & PGD & APGD & TD-PGD & TD-APGD \\ \hline \hline MNIST & 9.72 & 59.55 & 83.12 & 77.41 & 42.25 & 66.53 & 86.35 & \textbf{83.08} \\ \hline Fashion-MNIST & 0.11 & 5.22 & 4.02 & 3.65 & 10.00 & 35.55 & \textbf{60.24} & 64.67 \\ \hline SVHN & 0.21 & 7.62 & 8.98 & 7.65 & 32.36 & 45.52 & \textbf{64.70} & 68.22 \\ \hline CIFAR10 & 0.03 & 4.66 & 0.70 & 1.10 & 24.84 & 33.90 & \textbf{47.43} & 48.65 \\ \hline \end{tabular} } \begin{tablenotes} \item\textbf{Table Notes: } TD represents test-time defense. All attacks are $L_{\infty}$ based. PixelDefend \citep{song2018pixeldefend} obtains a 9\% accuracy for CIFAR10 under the BPDA attack \citep{athalye2018obfuscated}. SOAP \citep{shi2020online} obtains a 3.7\% accuracy for CIFAR10 under the BPDA attack \citep{croce2022evaluating}. The numbers highlighted in the table are the minimum accuracies of our proposed test-time defense under the BPDA attack. \end{tablenotes} \label{tbl:bpda_comp} \vspace{-6pt} \end{table} \section{Insights and Discussion} In this section, we discuss various aspects of this work. The focus of this work is adversarial robustness without adversarial training, but we also perform experiments with adversarially trained models. We use the PreActResNet-18 from \cite{gowal2021improving} to perform experiments on the CIFAR10 dataset. We freeze the parameters of the PreActResNet-18 and use it as an encoder. We train a decoder with the architecture described in the experiment section. We use the default hyperparameters of AutoAttack. The robust accuracy under AutoAttack is 58.63\%, and the accuracy after the test-time defense is 63.73\%. Thus, our framework can be applied to adversarially trained models as well. We also perform experiments using only the reconstruction loss as the defense objective for the VAE-Classifier model; we observe that the accuracy on CIFAR10 after the test-time defense drops to 51.63\% for the $L_\infty$-bounded AutoAttack. This phenomenon indicates that merely projecting a point onto the manifold is not sufficient; the corresponding regions of the manifold also need to have high density. Our test-time defense method can be viewed as optimizing a lower bound of $\log p(\mathbf{x})$ at inference time. However, if $\log p(\mathbf{x})$ is pushed too high, this could also lead to issues, since generative models are known to assign high $\log p(\mathbf{x})$ to some out-of-distribution data \citep{nalisnick2018deep}, and these regions with high density estimates should not be trusted. This could be the reason for the performance drop on clean data. \section{Conclusion} In this work, we develop a novel framework for adversarial robustness. We propose a test-time defense method based on our framework and variational inference, which combines manifold learning with the Bayesian framework.
We evaluate the proposed test-time defense method against several attacks and show adversarial robustness for non-adversarially-trained models. Even if attackers are aware of our test-time defense method, it can still provide reasonable adversarial robustness. Our method can also be applied to defend against adversarial attacks related to VAEs, or combined with adversarially trained models to further improve robustness. \subsubsection*{Acknowledgments} This research is supported by the DARPA Geometries of Learning (GoL) program under agreement No. HR00112290075. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. government.
\section{Introduction} The possibility to load ultracold fermionic atoms in optical lattices opens new perspectives in the investigation of lattice models of strongly correlated systems\cite{wu_2003,gorshkov2010}. When the optical lattice is deep enough, and when the fermionic atoms have an internal degree of freedom that can take $N$ values (coming for instance from the nuclear spin in alkaline-earth atoms), the appropriate model takes the form of a generalized Hubbard model \begin{equation*} \hat{H}_{\text{Hub}} = -t \dsum{\langle i,j\rangle}\sum_{\alpha=1}^N (\ad{i\alpha}\a{j\alpha}+\mathrm{H.c.}) +\frac{U}{2}\sum_i \left(\sum_\alpha \hat n_{i\alpha}\right)^2 \label{eq:SUN-Hubbard} \end{equation*} where $\ad{i\alpha}$ and $\a{i\alpha}$ are fermionic creation and annihilation operators, $\hat n_{i\alpha}=\ad{i\alpha}\a{i\alpha}$, $t$ is the hopping integral between pairs of nearest neighbors $\langle i,j\rangle$, and $U$ is the on-site repulsion. When the average number of atoms per site $m$ is an integer, and for large enough $U/t$, the system is expected to be in a Mott insulating phase \cite{assaraf1999}. Fluctuations induced by the hopping term in the manifold of states with $m$ fermions per site start at second order in $t/U$, and the processes that appear at this order consist in exchanging particles between pairs of neighboring sites, leading to the effective Hamiltonian \begin{equation} \hat{H}_{\text {eff}}= \frac{2 t^2}{U}\hat{H} \label{eq:SUN-effective} \end{equation} with \begin{equation} \hat{H} = \dsum{\langle i,j\rangle}\sum_{\alpha,\beta=1}^N\ad{i\alpha}\a{i\beta}\ad{j\beta}\a{j\alpha} \label{eq:SUN-heisenberg} \end{equation} In the case of electrons with spin $\uparrow$ or $\downarrow$, this Hamiltonian has \SU{2} symmetry, and it is equivalent to the Heisenberg model with coupling constant $4t^2/U$ thanks to the identity \begin{equation*} \sum_{\alpha,\beta=\uparrow,\downarrow}\ad{i\alpha}\a{i\beta}\ad{j\beta}\a{j\alpha}=2 \vec{S}_i\cdot\vec{S}_j+\frac{1}{2}\hat n_i \hat n_j \end{equation*} and to the fact that $\hat n_i \hat n_j$ is a constant in the manifold of states with one particle per site. More generally, when the number of degrees of freedom is equal to $N>2$, Mott phases can be realized for all integer values of $m<N$. The effective model now has \SU{N} symmetry. This can be made explicit by introducing the generators \begin{equation*} \hat{S}_{\alpha\beta} = \ad{\alpha}\a{\beta} - \dfrac{m}{N}\delta_{\alpha\beta} \end{equation*} which satisfy the \SU{N} algebra \begin{equation*} \ec{\hat{S}_{\alpha\beta},\hat{S}_{\mu\nu}} = \delta_{\mu\beta}\hat{S}_{\alpha\nu}-\delta_{\alpha\nu}\hat{S}_{\mu\beta} \end{equation*} thanks to the identity \begin{equation*} \sum_{\alpha,\beta=1}^N\hat{S}^{i}_{\alpha\beta}\hat{S}^{j}_{\beta\alpha} = \sum_{\alpha,\beta=1}^N\ad{i\alpha}\a{i\beta}\ad{j\beta}\a{j\alpha} - \dfrac{m^{2}}{N} \end{equation*} In the \SU{N} language, working with $m$ fermions per site corresponds to working with the totally antisymmetric irreducible representation (irrep) that can be represented by a Young tableau with $m$ boxes in one column. For any allowed $m$, there is a conjugate equivalent representation: a system with $m=k$ particles per site is equivalent to a system with $m=N-k$ particles per site. The model \eref{eq:SUN-heisenberg} captures the low-energy physics of multi-color ultracold atoms in optical lattices, systems for which remarkable experimental progress has recently been achieved.
For instance, the \SU{N} symmetry has been observed in an ultracold quantum gas of fermionic \ce{^{173}Yb}~\cite{Scazza2014} or \ce{^{87}Sr}~\cite{Zhang2014}. Another example~\cite{Pagano2014} is the realization of one-dimensional quantum wires of repulsive fermions with a tunable number of components. The \SU{N} Heisenberg model with the fundamental representation at each site ($m=1$), which corresponds to the Mott phase with one particle per site, has been investigated in considerable detail over the years. In one dimension, there is a Bethe ansatz solution for all values of $N$~\cite{Sutherland1975}, and Quantum Monte Carlo simulations free of the minus-sign problem have given access to the temperature dependence of correlation functions~\cite{Frischmuth1999, Messio2012}. In two dimensions, a number of simple lattices have been investigated for a few values of $N$ with a combination of semiclassical, variational and numerical approaches, leading to a number of interesting predictions at zero temperature~\cite{li1998, vandenbossche2000, vandenbossche2001, mambrini2003, arovas2008, wang_z2_2009, toth2010, wu2011, szirmai2011, corbozSU42011, szirmaiSU62011, corbozsimplex2012, corbozPRX2012, bauer2012, corboz-2013,wu_2014}. In comparison, the case of higher antisymmetric irreps ($m>1$) has been little investigated. In 2D, there is a mean-field prediction that chiral phases might appear for large $m$ provided $N/m$ is large enough~\cite{hermele2009, hermele_topological_2011}, and some cases of self-conjugate irreps, such as the 6-dimensional irrep of \SU{4}, have been investigated with Quantum Monte Carlo simulations~\cite{assaad2005, cai2013, lang2013, zhou2014} and variational Monte Carlo\cite{paramekanti_2007}. In 1D, apart from a few specific cases~\cite{andrei1984, johannesson1986, paramekanti_2007,fuhringer2008, rachel2009, nonne2011, nonne2013, morimoto2014, quella2012}, including more general irreps than simply the totally antisymmetric ones, the most general results were obtained by Affleck quite some time ago~\cite{Affleck1986a, Affleck1988}. Applying non-abelian bosonization to the weak-coupling limit of the \SU{N} Hubbard model, he identified two types of operators that could open a gap: Umklapp terms if $N>2$ and $N/m=2$, and higher-order operators with scaling dimension $\chi=N(m-1)m^{-2}$ allowed by the $\mathbb{Z}_{N/m}$ symmetry if $N/m$ is an integer strictly larger than 2. This allowed him to make predictions in four cases: i) $N/m$ is not an integer: the system should be gapless because there is no relevant operator that could open a gap; ii) $N>2$ and $N/m=2$: the system should be gapped because Umklapp terms are always relevant; iii) $N/m$ is an integer strictly larger than 2 and $\chi<2$: the system should be gapped because there is a relevant operator allowed by symmetry. This case is only realized for \SU{6} with $m=2$; iv) $N/m$ is an integer strictly larger than 2 and $\chi>2$: the system should be gapless because there is no relevant operator allowed by symmetry. The only case where the renormalization group argument based on the scaling dimension of the operator does not lead to any prediction is the marginal case $\chi=2$, which is realized for two pairs of parameters: (\SU{8} $m=2$) and (\SU{9} $m=3$). These predictions are summarized in \tref{rlt-N-vs-m-gap}; they are simple enough to tabulate directly, as in the short sketch below.
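The following script simply re-encodes cases i)--iv) above, with $m=1$ treated separately as the gapless Bethe-ansatz case (Python is used purely for illustration):
\begin{verbatim}
def affleck_prediction(N, m):
    # Re-encodes Affleck's cases i)-iv); m = 1 is the gapless case.
    if m == 1:
        return "gapless (fundamental representation)"
    if N % m != 0:                                      # case i)
        return "gapless: N/m is not an integer"
    if N // m == 2:                                     # case ii)
        return "gapped: relevant Umklapp terms"
    chi = N * (m - 1) / m**2    # scaling dimension of the Z_{N/m} operator
    if chi < 2:                                         # case iii)
        return "gapped: relevant operator, chi = %.2f" % chi
    if chi > 2:                                         # case iv)
        return "gapless: irrelevant operator, chi = %.2f" % chi
    return "marginal: chi = 2, no prediction"

for N, m in [(4, 2), (6, 2), (6, 3), (8, 2), (9, 3), (10, 2)]:
    print("SU(%d), m = %d: %s" % (N, m, affleck_prediction(N, m)))
\end{verbatim}
Running it reproduces the color coding of \tref{rlt-N-vs-m-gap}: (\SU{4}, $m=2$) and (\SU{6}, $m=3$) are gapped by Umklapp terms, (\SU{6}, $m=2$) is gapped by the relevant operator, (\SU{8}, $m=2$) and (\SU{9}, $m=3$) are marginal, and (\SU{10}, $m=2$) is gapless.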
Finally, in all gapless cases, the system is expected to be in the \SU[k=1]{N} Wess-Zumino-Witten universality class~\cite{Knizhnik1984, Affleck1986a}, with algebraic correlations that decay at long distance with a critical exponent $\eta=2-2/N$. \begin{table}[h] \centering \setlength{\tabcolsep}{-0.1pt} \begin{tabular}{||l|p{4mm}p{4mm}p{4mm}p{4mm}p{4mm}p{4mm}p{4mm}p{4mm}||c|l} \cline{1-10} \hspace{4mm}$N=$\hspace{1mm} & \hspace{1mm}3 & \hspace{1mm}4 & \hspace{1mm}5 & \hspace{1mm}6 & \hspace{1mm}7 & \hspace{1mm}8 & \hspace{1mm}9 & 10&\cellcolor{Gray}$N/m\notin\mathbb{N}$&\multirow{3}{*}{$\left.\phantom{\rule{0.1cm}{0.65cm}}\right\}$ Gapless}\\ \cline{1-1} \hspace{1mm}$m=1$&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue}&\cellcolor{CadetBlue} $m=1$\\ \hspace{1mm}$m=2$&\cellcolor{lightgray}&\cellcolor{LimeGreen}&\cellcolor{Gray}&\cellcolor{yellow}&\cellcolor{Gray}&\cellcolor{orange}&\cellcolor{Gray}&\cellcolor{CarnationPink}&\cellcolor{CarnationPink} $\chi>2$\\ \hspace{1mm}$m=3$&&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{LimeGreen}&\cellcolor{Gray}&\cellcolor{Gray}&\cellcolor{orange}&\cellcolor{Gray}& \cellcolor{orange}$\chi=2$& \hspace{10mm}?\\ \hspace{1mm}$m=4$&&&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{LimeGreen}&\cellcolor{Gray}&\cellcolor{Gray}&\cellcolor{yellow}$\chi<2$&\multirow{2}{*}{$\left.\phantom{\rule{0.1cm}{0.5cm}}\right\}$ Gapped}\\ \hspace{1mm}$m=5$&&&&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{lightgray}&\cellcolor{LimeGreen}&\cellcolor{LimeGreen} $N/m=2\;$\\ \cline{1-10} \end{tabular} \caption{ Summary of the predictions of Refs.~\onlinecite{Affleck1986a,Affleck1988} for a representative range of \SU{N} with $m$ particles per site. Note that models with $m=k$ and $m=N-k$ are equivalent up to a constant. Therefore the light gray shaded region can be deduced from the other cases and does not need to be studied. } \label{rlt-N-vs-m-gap} \setlength{\tabcolsep}{6pt} \end{table} To make progress on the general problem of the \SU{N} Heisenberg model with higher antisymmetric irreps, it would be very useful to have flexible yet reliable numerical methods that would allow one to test these predictions in a systematic way. In particular, the methods should not be limited to 1D, or to cases such as self-conjugate irreps, for which there is a minus-sign-free Quantum Monte Carlo algorithm. In this paper, we have decided to test the potential of Gutzwiller projected wave functions by a systematic investigation of the 1D case discussed by Affleck using variational Monte Carlo (VMC). There are two main reasons for this choice. First of all, these wave functions have been shown to be remarkably accurate in the case of the \SU{4} Heisenberg chain with the fundamental representation by Wang and Vishwanath~\cite{wang_z2_2009}, who have in particular shown that they lead to the exact critical exponent in that case. Besides, this approach can be easily generalized to higher dimensions, as already shown for the fundamental representation in a number of cases~\cite{lajko_tetramerization_2013,corboz-2013}. Moreover, it has been used by Paramekanti and Marston~\cite{paramekanti_2007} for the self-conjugate representation in one and two dimensions.
In parallel, exact diagonalizations based on the extension of a recent formulation by two of the present authors~\cite{nataf2014} will be used whenever possible to benchmark the VMC approach on small clusters and, in some cases, to actually confirm the physics on the basis of a finite-size analysis. As we shall see, the combination of these approaches leads to results that agree with Affleck's predictions whenever available, and to the identification of the symmetry-breaking pattern in the gapped phases. In addition, it predicts that the marginal cases are gapless with algebraic correlations. The paper is organized as follows. The next section describes the methods, with emphasis on the variational wave functions that will be used throughout. The third section is devoted to a comparison of the results obtained using the simplest wave functions (with no symmetry breaking) with those of the Bethe ansatz solution for the $m=1$ case, with the conclusion that the agreement is truly remarkable. The fourth section deals with the cases where Umklapp processes are present ($N>2$, $N/m=2$), while the fifth one deals with the case where there is no Umklapp process but a relevant operator (\SU{6} $m=2$). The marginal cases are dealt with in the sixth section, and the case where $N/m$ is an integer with neither relevant nor Umklapp operators in the seventh section. Finally, the critical exponents are computed and compared to theoretical values for all gapless systems in the eighth section. \section{The methods} \subsection{Gutzwiller projected wave functions} The variational wave functions investigated in the present paper are obtained from fermionic wave functions that have on average $m$ particles per site by applying a Gutzwiller projector $\hat{P}_{G}^{m}$ that removes all states with a number of particles per site different from $m$: \begin{equation} \hat{P}_{G}^{m}=\dprodd{i}{1}{n} \prod_{p\neq m} \frac{\hat n_i-p}{m-p} \end{equation} where $n$ is the number of sites, and where the product over $p$ runs over all values from $0$ to $N$ except $p=m$. In the present paper, we concentrate on simple fermionic wave functions that, before projection, correspond to the ground state of trial Hamiltonians that contain only hopping terms. For \SU{2}, the inclusion of pairing terms has been shown to lead to significant improvements\cite{yunoki}, but the generalization to \SU{N} is not obvious because one cannot make an \SU{N} singlet with two sites as soon as $N>2$. In addition, in the case of the fundamental representation, where Bethe ansatz results are available for comparison, these simple wave functions turn out to lead to extremely precise results as soon as $N>2$. In practice, the construction of a Gutzwiller projected wave function starts with the creation of a trial Hamiltonian $\hat{T}$ that acts on $n$ sites and is written with fermionic operators $\a{i\alpha}$ and $\ad{i\alpha}$. When different colors are involved in $\hat{T}$, and as long as there is no term mixing different colors, the Hamiltonian can be rewritten as a direct sum: $ \hat{T} = \bigoplus_{\alpha=1}^{N}\hat{T}_{\alpha}$. Then, for each color, there is one corresponding unitary matrix $U^{\alpha}$ that diagonalizes $\hat{T}_{\alpha}$.
So the new fermionic operators are given by: \begin{align*} \c{i\alpha} &= \dsumd{j}{1}{n}U_{ij}^{\alpha\dagger}\a{j\alpha} & \cd{i\alpha} &= \dsumd{j}{1}{n}U_{ji}^{\alpha}\ad{j\alpha}, \end{align*} and the trial Hamiltonian can be written in a diagonal basis: \begin{equation*} \hat{T} = \dbigoplusd{\alpha}{1}{N}\dsumd{i}{1}{n}\omega_{i\alpha}\cd{i\alpha}\c{i\alpha} \end{equation*} with $\omega_{i\alpha}<\omega_{i+1\alpha}$. In the Mott insulating phase, the system possesses $nm/N$ particles of each color and exactly $m$ particles per site. By filling the system with the $nm/N$ lowest energy states of each color, the resulting fermionic wave function contains $nm$ particles: \begin{equation} \ket{\Psi} = \dbigotimesd{\alpha}{1}{N}\dprodd{i}{1}{nm/N}\cd{i\alpha}\ket{0} = \dbigotimesd{\alpha}{1}{N}\dprodd{i}{1}{mn/N}\dsumd{j}{1}{n}U_{ji}^{\alpha}\ad{j\alpha}\ket{0} \end{equation} in terms of which the variational wave function is given by \begin{equation} \ket{\Psi_G}=\hat{P}_{G}^{m}\ket{\Psi}. \end{equation} Since the Heisenberg model exchanges particles on neighboring sites, the simplest trial Hamiltonian that allows the hopping of particles and its corresponding Gutzwiller projected wave function are: \begin{equation*} \hat{T}_{\alpha}^{\mathrm{Fermi}} = \dsumd{i}{1}{n}\ep{\ad{i\alpha}\a{i+1\alpha}+\mathrm{H.c.}} \quad\rightarrow\quad \ket{\Psi_{G}^{\mathrm{Fermi}}}. \end{equation*} In cases where a relevant or Umklapp operator is present, the ground state is expected to be a singlet separated from the first excited state by a gap, and to undergo a symmetry breaking that leads to a unit cell that can accommodate a singlet. In practice, this means unit cells with $d=N/m$ sites. To test for possible instabilities, we have thus used wave functions that are ground states of Hamiltonians that create $d$-merization: \begin{equation*} \hat{T}_{\alpha}^{t_{i}} = \dsumd{i}{1}{n}\ep{t_{i}\ad{i\alpha}\a{i+1\alpha}+\mathrm{H.c.}} \quad\rightarrow\quad \gwf{d}{\delta}. \end{equation*} Assuming that the mirror symmetry is preserved, the wave functions $\gwf{d}{\delta}$ for dimerization ($d=2$) and trimerization ($d=3$) have only one allowed free parameter $\delta$, and the hopping amplitudes in a unit cell are given by: \begin{equation*} \begin{cases} t_{i} = 1-\delta &\text{if } i = d\\ t_{i} = 1 &\text{otherwise.} \end{cases} \end{equation*} To test for a possible tetramerization for \SU{8} $m=2$, since the unit cell contains four sites, one additional free parameter is allowed (still assuming that the mirror symmetry is preserved in the ground state). Therefore, we have used the wavefunction $\gwf{4}{\delta_{1},\delta_{2}}$ with hopping amplitudes defined by: \begin{equation*} \begin{cases} t_{i} = 1-\delta_{1} &\text{if } i = 2\\ t_{i} = 1-\delta_{2} &\text{if } i = 4\\ t_{i} = 1 &\text{otherwise.} \end{cases} \end{equation*} This method is always well defined for periodic boundary conditions when $N/m$ is even. But when $N/m$ is odd, the ground state is degenerate for periodic boundary conditions if the translation symmetry is not explicitly broken, and one has to use anti-periodic boundary conditions for $\ket{\Psi_{G}^{\mathrm{Fermi}}}$, $\gwf{d}{0}(d=2,3)$ and $\gwf{4}{0,0}$. The hope is that, if $\hat{T}$ is wisely chosen, then $\ket{\Psi_{G}}$ correctly captures the physics of the ground state, i.e. with a good variational wave function, $E_{G}\equiv\bk{\Psi_{G}}{\hat{H}}{\Psi_{G}}\approx E_{0}$, the exact ground state energy (assuming $\ket{\Psi_{G}}$ is normalized).
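To make the construction concrete, the following is a minimal sketch (in Python, for a periodic chain with real hopping amplitudes) of how the amplitude of a configuration is evaluated from per-color Slater determinants. Open-shell degeneracies and the anti-periodic boundary conditions required when $N/m$ is odd are deliberately glossed over.
\begin{verbatim}
import numpy as np

def gutzwiller_amplitude(t, sites_per_color, n):
    """Amplitude <config|Psi> of a configuration with exactly m particles
    per site; the Gutzwiller projection is enforced by only ever sampling
    such configurations in the Monte Carlo walk.

    t: length-n array of bond amplitudes t_i on a periodic chain
    sites_per_color: list of N index arrays, the sites occupied by each color
    """
    # single-particle trial Hamiltonian (identical for every color here)
    T = np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n
        T[i, j] = T[j, i] = t[i]
    _, U = np.linalg.eigh(T)            # columns sorted by energy
    n_occ = len(sites_per_color[0])     # = n m / N particles per color
    amp = 1.0
    for occ in sites_per_color:
        # Slater determinant of the n_occ lowest orbitals on the occupied sites
        amp *= np.linalg.det(U[np.ix_(occ, np.arange(n_occ))])
    return amp
\end{verbatim}
In a Monte Carlo calculation, only ratios of such amplitudes between configurations differing by an exchange of two particles are needed, and these can be updated efficiently with standard determinant update formulas.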
To check the pertinence of this statement, we have compared the energies and nearest-neighbor correlations with those computed with ED on small systems with open boundary conditions. In \tref{rlt-ed-vmc-energies}, one can see, for several systems, the comparison between ED and VMC results for the ground state energy. The nearest-neighbor correlations will be compared in the next sections. Considering the excellent agreement between the two methods for the cluster sizes available to ED, there are good reasons to hope that these Gutzwiller projected wave functions can quantitatively describe the properties of the ground state. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $N$ & $m$ & $n$ & ED & VMC & error [\%] \\\hline 4 &2&16&-1.6971&-1.6916&-0.33\\ 4 &2&18&-1.6925&-1.6866&-0.35\\ 6 &2&15&-2.7351&-2.7287&-0.23\\ 6 &3&12&-4.0295&-4.0261&-0.08\\ 6 &3&14&-4.0162&-4.0123&-0.10\\ 8 &2&12&-3.1609&-3.1587&-0.07\\ 8 &2&16&-3.1857&-3.1828&-0.09\\ 9 &3&9 &-6.0960&-6.0810&-0.25\\ 9 &3&12&-6.1162&-6.0980&-0.30\\ 10&2&15&-3.3992&-3.3919&-0.21\\ \hline \end{tabular} \caption{ Comparison between the ED and VMC energies per site. The uncertainties on the VMC data are smaller than $10^{-4}$. The relative error is always smaller than $0.35\%$. } \label{rlt-ed-vmc-energies} \end{table} \subsection{Exact diagonalizations} On a given cluster, the total Hilbert space grows very fast with $N$, and the standard approach that only takes advantage of the conservation of the color number is limited to very small clusters for large $N$. Quite recently, two of the present authors have developed a simple method to work directly in a given irrep for the \SU{N} Heisenberg model with the fundamental representation at each site~\cite{nataf2014}, allowing one to reach cluster sizes typical of \SU{2} for any $N$. This method can be extended to the case of more complicated irreps at each site, in particular totally antisymmetric irreps, and the exact diagonalization results reported in this manuscript have been obtained along these lines. \subsection{Correlation function and structure factor}\label{rls-cor-sf} To characterize the ground state, it will prove useful to study the diagonal correlation defined by: \begin{equation}\label{rle-lrc} C\ep{r} = \dsum{\alpha}\mean{\hat{S}_{\alpha\alpha}^{0}\hat{S}_{\alpha\alpha}^{r}} = \dsum{\alpha}\mean{\ad{0\alpha}\a{0\alpha}\ad{r\alpha}\a{r\alpha}} - \dfrac{m^{2}}{N}. \end{equation} The structure factor is then given by the Fourier transform of this function: \begin{equation}\label{rle-sf} \tilde{C}\ep{k}= \dfrac{1}{2\pi}\dfrac{N}{m\ep{N-m}} \dsum{r}C\ep{r}\exp{ikr} \end{equation} where the prefactor has been chosen such that: \begin{equation*} \dsum{k}\tilde{C}\ep{k} = \dfrac{n}{2\pi}. \end{equation*} \section{\SU{N} with $m=1$}\label{rls-m1} In this section, we extend the \SU{4} results of Wang and Vishwanath~\cite{wang_z2_2009} to arbitrary $N$ for $m=1$ (fundamental representation), and we perform a systematic comparison with Bethe ansatz and Quantum Monte Carlo (QMC) results. Since these systems are known to be gapless, $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ is the only relevant wave function to study. Let us start with the ground state energy.
Using Bethe ansatz, Sutherland~\cite{Sutherland1975} derived an exact formula for the ground state energy per site $e_0(N)$ of the Hamiltonian \eref{eq:SUN-heisenberg} that can be written as a series in powers of $1/N$: \begin{equation} e_0(N)=-1+2 \dsumd{k}{2}{\infty} \frac{(-1)^k \zeta(k)}{N^k} \end{equation} where $\zeta(k)=\sum_{n=1}^\infty (1/n^k)$ is Riemann's zeta function. $e_0(N)$ is depicted in \gref{rlg-sun-m1-vmc-ba} as a continuous line. The dashed lines are approximations obtained by truncating the exact solution at order $N^{-k},\,k\geq 2$. For comparison, the variational energies obtained in the thermodynamic limit after extrapolation from finite-size systems are shown as dots in \gref{rlg-sun-m1-vmc-ba}. The agreement with the exact solution is excellent for all values of $N$, and it improves when $N$ increases (see table \tref{rlt-m1}). Quite remarkably, the variational estimate is better than the $N^{-4}$ estimate even for \SU{3}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{m1-ba.pdf} \end{center} \vspace{-0.6cm} \caption{ Variational energy per site of \SU{N} chains with the fundamental irrep at each site (dots) compared to Bethe ansatz exact results (solid line) and polynomial approximations in $1/N$ (dashed lines). } \label{rlg-sun-m1-vmc-ba} \end{figure} \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|} \hline $N$& BA & VMC& error [\%]\\ \hline 3& -0.7032& -0.7007& -0.36\\ 4& -0.8251& -0.8234& -0.21\\ 5& -0.8847& -0.8833& -0.16\\ 6& -0.9183& -0.9173& -0.11\\ 7& -0.9391& -0.9383& -0.09\\ 8& -0.9528& -0.9522& -0.06\\ 9& -0.9624& -0.9620& -0.05\\ \hline \end{tabular} \caption{ Comparison of the variational energies for $m=1$ systems obtained for infinite chains with exact Bethe ansatz results. The uncertainties on the VMC data are smaller than $10^{-4}$. } \label{rlt-m1} \end{table} We now turn to the diagonal correlation and its associated structure factor, defined by \eref{rle-lrc} and \eref{rle-sf}. At very low temperature, QMC has been used by Frischmuth \textit{et~al.}~\cite{Frischmuth1999} for \SU{4} and by Messio and Mila~\cite{Messio2012} for various values of $N$ to compute this structure factor. The QMC data of Messio and Mila and the results obtained with VMC for $n=60$ sites are shown in \gref{rlg-qmc-vmc-lrc}. Qualitatively, the agreement is perfect: VMC reproduces the singularities typical of algebraically decaying long-range correlations. But even quantitatively the agreement is truly remarkable, and, as for the ground state energy, it improves when $N$ increases. Clearly, Gutzwiller projected wave functions capture the physics of the $m=1$ case very well. \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{qmc-vmc-lrc.pdf} \end{center} \vspace{-0.6cm} \caption{ Comparison of the structure factors calculated with VMC (empty squares) and QMC (filled circles) for various \SU{N} systems. In the VMC calculations, anti-periodic boundary conditions have been used for \SU{3} and \SU{5}, and periodic ones for \SU{4}. } \label{rlg-qmc-vmc-lrc} \end{figure} \section{\SU{N} with $m=N/2$}\label{rls-Nm2} For these systems, there is a self-conjugate antisymmetric representation of \SU{N} at each site. The ground states of such systems, referred to as extended valence bond solids~\cite{Affleck1991}, are predicted to break the translational symmetry, to be two-fold degenerate and to exhibit dimerization, since only two sites are needed to create a singlet, and the spectrum is expected to be gapped. \begin{figure}[h!] 
\includegraphics[scale=0.7]{Nom2-gap-delta-nnc.pdf} \vspace{-0.2cm} \caption{ ED and VMC results for various \SU{N} models with $m=N/2$. Upper left panel: size dependence of the energy gap for \SU{4} and \SU{6}. Upper right panel: optimal variational parameter $\delta$ for periodic boundary conditions for \SU{4}, \SU{6}, \SU{8}, and \SU{10}. Lower panels: energy per bond for \SU{4} (left) and \SU{6} (right) calculated with ED (circles) and VMC (squares) for open boundary conditions. Note that the optimal variational parameters $\delta_{\mathrm{opt}}$ are different in the upper right panel and in the lower panels because they correspond to different boundary conditions (periodic and open). } \label{rlg-Nom2-gap-delta-nnc} \end{figure} We have investigated two representative cases, (\SU{4} $m=2$) and (\SU{6} $m=3$), with ED up to 18 and 14 sites respectively, and the cases $N=4$ to $10$ with VMC. The main results are summarized in \gref{rlg-Nom2-gap-delta-nnc}. Let us start by discussing the ED results. Clusters with open boundary conditions have been used because they are technically simpler to handle with the method of Ref.~\onlinecite{nataf2014}, and because, in the case of spontaneous dimerization, they give direct access to one of the broken-symmetry ground states if the number of sites is even. The gap as a function of the inverse size is plotted in the upper left panel of \gref{rlg-Nom2-gap-delta-nnc} for \SU{4} and \SU{6}. In both cases, the results scale very smoothly, and a linear fit is consistent with a finite and large value of the gap in the thermodynamic limit. In the lower panels of \gref{rlg-Nom2-gap-delta-nnc}, the bond energy is plotted as a function of the bond position for the largest available clusters (18 sites for \SU{4}, 14 sites for \SU{6}) with solid symbols. A very strong alternation between a strongly negative value and an almost vanishing (slightly positive) value, with very little dependence on the bond position, clearly demonstrates that the systems are indeed spontaneously dimerized. Let us now turn to the VMC results. Since the relevant instability is a spontaneous dimerization, it is expected that the dimerized $\gwf{2}{\delta}$ wave function allows one to reach lower energy than the $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ one. This is indeed true for all cases we have investigated (up to $N=10$ and to $n\gtrsim 100$), and the optimal value of the dimerization parameter $\delta_{\mathrm{opt}}>0$ is nearly size independent and increases with $N$ (see upper right panel of \gref{rlg-Nom2-gap-delta-nnc}), in qualitative agreement with the gap increase between \SU{4} and \SU{6} observed in ED. To further benchmark the Gutzwiller projected wave functions for these cases, we have calculated the bond energy using the optimal value of $\delta$ (open symbols in the lower panel of \gref{rlg-Nom2-gap-delta-nnc}) for the same clusters as those used for ED with open boundary conditions. The results are in very good quantitative agreement. With the large sizes accessible with VMC, it is also interesting to calculate the diagonal structure factor defined in \eref{rle-sf}. All the structure factors peak at $k=\pi$, but, unlike in the case of the fundamental representation, there is no singularity but a smooth maximum (see \gref{rlg-Nom2-sf}).
This shows that the antiferromagnetic correlations revealed by the peak at $k=\pi$ are only short-ranged, and that the correlations decay exponentially at long distance, in agreement with the presence of a gap and with the spontaneous dimerization. To summarize, ED and VMC results clearly support Affleck's predictions that the $N/m=2$ systems are gapped, and point to a very strong spontaneous dimerization, in agreement with previous results by Paramekanti and Marston~\cite{paramekanti_2007}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{Nom2-sf.pdf} \end{center} \vspace{-0.6cm} \caption{ Structure factor of various \SU{N} models with $m=N/2$ calculated with VMC with the optimal variational parameter $\delta_{\mathrm{opt}}$. } \label{rlg-Nom2-sf} \vspace{-0.5cm} \end{figure} \section{\SU{6} with $m=2$} This case is a priori more challenging to study because the relevant operator is only generated in the renormalization group flow at higher order than the one-loop approximation. Therefore, the gap can be expected to be significantly smaller than in the previous case. This trend is definitely confirmed by ED performed on clusters with up to 15 sites: the gap decreases quite steeply with the system size (see upper left panel of \gref{rlg-N6m2-gap-delta-nnc}). It scales smoothly, however, and a linear extrapolation points to a gap of order $\Delta E\simeq 0.2$, much smaller than in the \SU{6} case with $m=3$ ($\Delta E\simeq 4$), but finite. On the largest available cluster, the bond energy has a significant dependence on the bond position, with an alternation of two very negative bonds and a less negative one. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{N6m2-gap-delta-nnc.pdf} \end{center} \vspace{-0.5cm} \caption{ ED and VMC results for the \SU{6} model with $m=2$. Upper left panel: size dependence of the energy gap. Upper right panel: optimal variational parameter $\delta$ for periodic boundary conditions. Lower left panel: energy per bond calculated with ED (circles) and VMC (squares) on 15 sites with open boundary conditions. Note that the optimal variational parameter $\delta=0$ in that case. Lower right panel: energy per bond calculated with VMC with periodic boundary conditions. } \label{rlg-N6m2-gap-delta-nnc} \end{figure} These trends are confirmed and amplified by VMC. Indeed, the trimerized wave function $\gwf{3}{\delta}$ leads to a better energy for all sizes, and the optimal value scales very smoothly to a small but finite value $\delta_{\mathrm{opt}}\approx 0.03$. This value is about an order of magnitude smaller than in the \SU{6} case with $m=3$, but the fact that it does not change with the size beyond 60 sites is a very strong indication that the system trimerizes (by contrast to the marginal case shown in \gref{rlg-E-delta}). The trimerization is confirmed by the lower plots. For $n=15$, the VMC results are again in nearly perfect agreement with ED, and for $n=60$, the bond energy shows a very clear trimerization. \begin{figure}[bh!] \begin{center} \includegraphics[scale=0.7]{N6m2-sf-lrc.pdf} \end{center} \vspace{-0.6cm} \caption{ Upper left panel: Structure factor of the \SU{6} model with $m=2$ calculated with VMC using a trimerized wave function with the optimal variational parameter. Upper right panel: zoom on the region near $k=2\pi/3$. It clearly shows that the structure factor is smooth. Lower panel: real-space diagonal correlations for 60 sites.
} \label{rlg-N6m2-sf-lrc} \end{figure} Testing the nature of the long-range correlations is of course more challenging than in the previous case, since a small gap implies a long correlation length. And indeed, on small to intermediate sizes, the structure factor has a sharp peak at $k=2\pi/3$ very similar to the \SU{3}, $m=1$ case. However, going to very large system sizes (up to $n=450$ sites), it is clear that the concavity changes sign upon approaching $k=2\pi/3$ (see upper right panel of \gref{rlg-N6m2-sf-lrc}), consistent with a smooth peak, hence with exponentially decaying correlation functions (see also lower panel of \gref{rlg-N6m2-sf-lrc}). In that case, in view of the small magnitude of the gap, hence of the very large value of the correlation length, it would be difficult to conclude that the system is definitely trimerized on the basis of ED only. In that respect, the VMC results are very useful. On small clusters, the Gutzwiller projected wave function with trimerization is nearly exact, and VMC simulations on very large systems strongly support the presence of a trimerization and of exponentially decaying correlations\footnote{We have been informed by S. Capponi that these conclusions agree with unpublished DMRG results (S. Capponi, private communication)}. \section{Marginal cases: \SU{8} with $m=2$ and \SU{9} with $m=3$} These two systems are the only ones which possess operators with scaling dimension $\chi=2$. They are therefore the only cases where it is impossible to predict whether the system is algebraic or gapped on the basis of Affleck's analysis. As far as the numerics are concerned, these cases can again be expected to require large system sizes to conclude. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{chieq2-gap-delta-nnc.pdf} \end{center} \vspace{-0.5cm} \caption{ ED and VMC results for the marginal cases \SU{8} with $m=2$ and \SU{9} with $m=3$. Upper left panel: size dependence of the energy gap for both cases. Upper right panel: optimal variational parameter $\delta$ for the \SU{9} case with periodic boundary conditions. The results for \SU{8} are not shown because they identically vanish for periodic boundary conditions. Lower left panel: energy per bond for \SU{8} calculated with ED (circles) and VMC (squares) for open boundary conditions. Note that the optimal variational parameters $\delta_{\mathrm{opt}}$ are different from zero with open boundary conditions. Lower right panel: energy per bond for \SU{9} calculated with ED (circles) and VMC (squares) for open boundary conditions. } \label{rlg-chieq2-gap-delta-nnc} \end{figure} The ED results are quite similar to the previous case. The scaling of the gap is less conclusive because the last three points form a curve that is still concave and not linear like in the previous case (see the upper left panel of \gref{rlg-chieq2-gap-delta-nnc}). So one can only conclude that if there is a gap, it is very small, especially for \SU{8} with $m=2$. The bond energies form a pattern which is consistent with a weak tetramerization for \SU{8} with $m=2$, and with a significant trimerization, comparable to the \SU{6}, $m=2$ case, for \SU{9} $m=3$. The VMC method turns out to give a rather different picture, however. For \SU{8} with $m=2$, two variational wave functions ($\ket{\Psi_{G}^{\mathrm{Fermi}}},\gwf{4}{\delta_{1}, \delta_{2}}$) can be tested.
Interestingly, for $n=16$ with open boundary conditions, $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ fails to reproduce the bond-energy pattern observed with ED but $\gwf{4}{0.054,-0.036}$ is successful (see lower left panel of \gref{rlg-chieq2-gap-delta-nnc}). This pattern, which could be interpreted as a weak tetramerization, is in fact probably just a consequence of the four-fold periodicity of algebraic correlations in the presence of open boundary conditions. Indeed, it turns out that, for any system size with periodic boundary conditions, the minimization of the energy using $\gwf{4}{\delta_{1},\delta_{2}}$ never led to a solution with $\left\vert\delta_{1,2}\right\vert>0.002$. Therefore $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ is believed to be the best variational wavefunction. The conclusion is that there is no tetramerization, and that the correlations must be algebraic. This is also supported by the structure factor, which seems to have a singularity at $k=\pi/2$ (see \gref{rlg-chieq2-sf-lrc}). \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{chieq2-sf-lrc.pdf} \end{center} \vspace{-0.5cm} \caption{ Upper panels: Structure factor of the \SU{8} model with $m=2$ (left) and of the \SU{9} model with $m=3$ (right) calculated with VMC with periodic boundary conditions. Lower panels: real-space correlations. The four plots represent results obtained with $\ket{\Psi_{G}^{\mathrm{Fermi}}}$. } \label{rlg-chieq2-sf-lrc} \end{figure} Let us now turn to \SU{9} with $m=3$. This system could in principle be trimerized, and therefore $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ and $\gwf{3}{\delta}$ have been compared. For small clusters, there is a large optimal value of $\delta$, actually much larger than for \SU{6} with $m=2$, and the bond energies are typical of a strongly trimerized system, in agreement with ED. However, $\delta_{\mathrm{opt}}$ decreases very fast with $n$ until it vanishes for $n\gtrsim 100$ whereas, for \SU{6} with $m=2$, $\delta_{\mathrm{opt}}$ levels off at a finite value beyond $n=60$ (see \gref{rlg-E-delta}). We interpret this behavior as indicating the presence of a cross-over: on small length scales, the system is effectively trimerized, but this is only a short-range effect, and the system is in fact gapless with algebraic correlations at long length scales. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{E-delta.pdf} \end{center} \vspace{-0.5cm} \caption{ Energy per site as a function of the variational parameter $\delta$ for \SU{6} with $m=2$ (left) and \SU{9} with $m=3$ (right). } \label{rlg-E-delta} \end{figure} One can again calculate the structure factor using the best variational wave function (in both cases $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ for large enough systems) to check if a singularity exists. The results displayed in the upper plots of \gref{rlg-chieq2-sf-lrc} clearly show a singularity at $k=\pi/2$ for \SU{8} and at $k=2\pi/3$ for \SU{9}. These singularities indicate an algebraic decay of the long-range correlations. The lower panels show that, even if these systems are gapless, there is a maximum of the correlations every $N/m$ sites. \section{Example with irrelevant operator: \SU{10} with $m=2$} For completeness, we have also looked at a case where there is an irrelevant operator of scaling dimension larger than 2, namely \SU{10} with $m=2$.
As expected, the best variational wave function is $\ket{\Psi_{G}^{\mathrm{Fermi}}}$ for all sizes, and the structure factor exhibits a singularity at $k=2\pi/5$, consistent with a gapless spectrum and algebraic correlations. \section{Critical exponents}\label{rls-exponents} Motivated by the remarkably accurate results obtained in previous works for the case of the fundamental representation\cite{paramekanti_2007,wang_z2_2009}, we have tried to use the VMC results to determine the critical exponent that controls the decay of the correlation function at long distance, \eref{rle-lrc}. For the particular case of gapless systems, conformal field theory predicts an algebraic decay of the correlation function according to: \begin{equation*} C\ep{r} = \dfrac{c_{0}}{r^{2}}+\dfrac{c_{k}\cos{2\pi r m/N}}{r^{\eta}} \end{equation*} where $\eta=2-2/N$ is the critical exponent. For systems with periodic boundary conditions, one can define two distances between two points, which naturally leads to the following fitting function~\cite{Frischmuth1999}: \begin{equation*} c_{0}(r^{-\nu}+\ep{n-r}^{-\nu})+c_{k}\cos{2\pi r m/N}(r^{-\eta}+\ep{n-r}^{-\eta}) \end{equation*} with four free parameters: $c_0$, $\nu$ and $c_k$, $\eta$, the amplitudes and critical exponents of the components at $k=0$ and $k=2\pi m/N$ respectively. There is a large degree of freedom in the choice of the fitting range. One could in principle select any arbitrary range of sites $\ec{x_{i},x_{f}}$, $0\leq x_{i}<x_{f}\leq n-1$. The problem is that each range will give different critical exponents. In order to obtain meaningful results, the following method has been chosen. Using the periodicity of the systems, only the ranges with $x_{i}=a$ and $x_{f}=n-a-1$, $1\leq a\leq n/2$, have been considered. For each value of $a$, the coefficient of determination of the fit has been computed, and if its value is higher than $0.999$, then the range $\ec{a,n-a-1}$ is selected to perform the extrapolation of the critical exponents. If the value is too low, the fit is considered to be bad and the range with $a\leftarrow a+1$ is tested. If no good range can be found with this criterion, the condition on the coefficient of determination is relaxed to be higher than $0.995$, and the first fit with a residual sum of squares divided by $n$ smaller than $10^{-7}$ is selected. \begin{figure}[h] \begin{center} \includegraphics[scale=0.7]{critical-exp-gapless.pdf} \end{center} \vspace{-0.5cm} \caption{ Critical exponents $\eta$ of the gapless systems as a function of the system size. The squares, circles and triangles correspond respectively to $m=1,2$, and $3$ particles per site. All values given here have been calculated with $\ket{\Psi_{G}^{\mathrm{Fermi}}}$. } \label{rlg-critical-exponents} \end{figure} The critical exponents $\eta$ obtained in this way are shown in \gref{rlg-critical-exponents}. The theoretical values of the critical exponents $\eta = 2-2/N$ are shown as straight lines. In all cases, the extracted exponents agree quite well with the theoretical predictions when $n$ is large enough. In particular, for a given $N$, the exponent $\eta$ does not depend on $m$, as predicted by non-abelian bosonization. The critical exponent $\nu$ has also been extracted but, as already observed~\cite{Frischmuth1999}, a precise estimate is difficult to obtain. Nevertheless, for $N=3,4$, $\nu\in\ec{1.8,2.25}$ and for $N\geq 5$, $\nu\in\ec{1.95,2.05}$ for the largest systems.
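For reference, the range-selection fit just described can be sketched in a few lines (a minimal Python sketch using \texttt{scipy}; the relaxed $0.995$/residual-sum-of-squares fallback is omitted, and $C$ is assumed to be the measured correlation of \eref{rle-lrc} indexed by $r$):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_exponents(C, n, m, N, r2_min=0.999):
    """Fit C(r) on nested ranges [a, n-a-1], keeping the first fit whose
    coefficient of determination exceeds r2_min."""
    def model(r, c0, nu, ck, eta):
        return (c0 * (r**-nu + (n - r)**-nu)
                + ck * np.cos(2*np.pi*r*m/N) * (r**-eta + (n - r)**-eta))
    for a in range(1, n // 2):
        r = np.arange(a, n - a, dtype=float)   # fitting range [a, n-a-1]
        try:
            popt, _ = curve_fit(model, r, C[a:n - a],
                                p0=[0.1, 2.0, 0.1, 2 - 2 / N])
        except RuntimeError:                   # fit failed to converge
            continue
        resid = C[a:n - a] - model(r, *popt)
        ss_tot = np.sum((C[a:n - a] - C[a:n - a].mean())**2)
        if 1 - np.sum(resid**2) / ss_tot > r2_min:   # R^2 criterion
            return popt                        # (c0, nu, ck, eta)
    return None
\end{verbatim}
Seeding the fit with the theoretical values ($\nu=2$, $\eta=2-2/N$) makes the convergence of \texttt{curve\_fit} much more reliable on the smaller ranges.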
\section{Conclusions} Using variational Monte Carlo based on Gutzwiller projected wave functions, we have explored the properties of \SU{N} Heisenberg chains with various totally antisymmetric irreps at each site. In the case of the fundamental representation, which is completely understood thanks to Bethe ansatz and to QMC simulations, these wave functions are remarkably accurate both regarding the energy and the long-range correlations. In the case of higher antisymmetric irreps, where field theory arguments are in most cases able to predict whether the system should be gapless or gapped, allowing for a symmetry-breaking term in the tight-binding Hamiltonian used to define the unprojected wave function leads to results in perfect agreement with these predictions, and, in the gapped cases, the ground state is found to be spontaneously dimerized or trimerized. Finally, in the two cases where the operator that could open a gap is marginal, \SU{8} with $m=2$ and \SU{9} with $m=3$, this variational approach predicts that there is no spontaneous symmetry breaking, and that correlations decay algebraically. These results suggest that the operators are marginally irrelevant in both cases. It would be interesting to test these predictions either analytically, by pushing the renormalization group calculations to higher order, or numerically, with alternative approaches such as DMRG or QMC. In any case, these results show that Gutzwiller projected fermionic wave functions do a remarkably good job at capturing quantum fluctuations in one-dimensional \SU{N} Heisenberg models with totally antisymmetric irreps. Considering the encouraging results obtained in 2D for the \SU{N} Heisenberg model with the fundamental irrep at each site on several lattices, one can legitimately hope that these wave functions will also be accurate for the \SU{N} Heisenberg model with totally antisymmetric irreps at each site in 2D. Work is in progress along these lines. We acknowledge useful discussions with S. Capponi, M. Lajko, P. Lecheminant, L. Messio, and K. Penc. This work has been supported by the Swiss National Science Foundation.
\section{Introduction} In the representation theory of various algebras (including, but not limited to, commutative associative algebras, associative algebras, Lie algebras, etc.), one of the main tools is the cohomological method. The powerful method of homological algebra often provides a unified treatment of many results in representation theory, giving not only solutions to open problems, but also conceptual understanding of the results. Vertex operator algebras (VOAs hereafter) arose naturally in both mathematics and physics (see \cite{BPZ}, \cite{B}, and \cite{FLM}) and are analogous to both Lie algebras and commutative associative algebras. In \cite{H-coh}, Yi-Zhi Huang introduced a cohomology theory for grading-restricted vertex algebras and their modules that is analogous to the Harrison cohomology for commutative associative algebras. Huang's construction uses linear maps from the tensor powers of the vertex algebra to suitable spaces of ``rational functions valued in the algebraic completion of a $V$-module'' satisfying some natural conditions, including a technical convergence condition. Geometrically, this cohomology theory is consistent with the geometric and operadic formulation of VOAs. Algebraically, the first and second cohomologies are given by (grading-preserving) derivations and first-order deformations, similarly to those for commutative associative algebras. The vanishing theorem for an associative algebra states that the algebra is semisimple if and only if the first Hochschild cohomologies with coefficients in all bimodules vanish, or equivalently, if and only if every derivation from the algebra to any bimodule is inner. Naturally, we expect similar results for VOAs. In \cite{Q-Coh}, the author generalized Huang's work on a Hochschild-like cohomology for a grading-restricted vertex algebra and its modules in \cite{H-coh}, and introduced a cohomology theory for meromorphic open-string vertex algebras (a non-commutative generalization of VOAs) and their bimodules, analogous to the Hochschild cohomology for (not-necessarily-commutative) associative algebras and their bimodules. The early version of \cite{HQ-Coh-reduct} proved a vertex algebraic version of the ``if'' part of the vanishing theorem: if $V$ is a meromorphic open-string vertex algebra such that the first cohomology $H^1(V, W)= 0$ for every $V$-bimodule $W$, then every left $V$-module satisfying a technical but natural convergence condition is completely reducible. For cohomological methods, the ``only if'' part of the vanishing theorem is much more useful in calculations. A common practice is to start from a long exact sequence, then use the vanishing theorem to fill in zeros in various spots, and conclude various isomorphisms. While $H^1(V, W)=0$ holds trivially for every irreducible $V$-module $W$ with non-integral lowest weight, we found that if the lowest weight is an integer, $H^1(V, W)$ is generally not zero. Indeed, for a VOA $V$ and a $V$-module $W$, the zero-mode $v\mapsto w_0 v = \text{Res}_x e^{xL(-1)}Y_{W}(v, -x)w$ of any weight-1 element $w$ defines a grading-preserving derivation $V\to W$ (provided that the map is not identically zero), called a zero-mode derivation. We denote the space of such derivations by $Z^1(V, W)$ (which is zero if $W$ has no elements of integral weight).
Conceptually, we believe that such derivations exist because the cohomologies in \cite{H-coh} and in \cite{Q-Coh} are analogous to Harrison and Hochschild for (commutative) associative algebras that ``missed'' the analogous Lie algebra structure on a VOA. The final version of \cite{HQ-Coh-reduct} proved the same conclusion under the revised condition that $H^1(V, W) = Z^1(V, W)$ for every $V$-bimodule $W$. A natural question then arises along the way: does $H^1(V, W) = Z^1(V, W)$ hold for every $V$-bimodule? This is conjectured to hold if every weak $V$-module is a direct sum of its irreducible submodules. Since the associated Zhu's algebra is semisimple in this case, it seems promising to study the relation of $H^1(V, W)$ to the first cohomology of Zhu's algebras. We attempted and disappointedly found that though Zhu's algebra can simplify the computation and provide some interesting results regarding the first cohomology, it is usually insufficient to fully determine $H^1(V, W)$, partly because Zhu's algebra forgets some crucial structures. For example, Zhu's algebra associated with the lattice VOA is one-dimensional when the lattice is even, positive definite, and unimodular. In this case, Zhu's algebra tells nothing about $H^1(V, W)$. Nevertheless, we still managed to prove $H^1(V, W)=Z^1(V, W)$ for some of the most important examples of VOAs $V$ and $V$-modules $W$. More precisely, we consider the cases where $V$ takes: \begin{enumerate} \item the simple affine VOA $V=L_{\hat{\mathfrak g}}(l, 0)$ associated to a simple Lie algebra ${\mathfrak g}$ with positive integral level $l$; \item the Virasoro VOA with central charge $$c = 1 - \frac{(p-q)^2}{6pq} \text{ for some }p, q\in {\mathbb Z}_+, gcd(p, q)=1.$$ \item the lattice VOA $V_{L_0}$ associated to a positive definite even lattice $L_0$. \end{enumerate} We show that $H^1(V, W)=Z^1(V, W)$ for every $V$-module with ${\mathbb N}$-grading. Here the grading is not necessarily given by the $L(0)$-eigenvalues and can be arbitrary. For Virasoro VOA which has negative energy representations, we also show $H^1(V, W)=Z^1(V, W)$ for $L(0)$-graded $W$ with the lowest weight greater or equal to $-3$. We emphasize that all the $V$-modules $W$ considered in this paper are regarded as $V$-bimodules via the right action $w\otimes v \mapsto Y_{WV}^W(w, x)v = e^{xL(-1)}Y_W(v, -x)w$. We further emphasize that the conclusions in this paper are by-no-means sufficient to conclude the conjecture made in \cite{HQ-Coh-reduct}. The conjecture requires $H^1(V, W) = Z^1(V, W)$ for every $V$-bimodule $W$, the right-action on which is not necessarily related to the left-action in such a way. To fully prove the conjecture requires a classification of $V$-bimodules. We currently don't have any such results. The paper is organized as follows. Section 2 reviews the basic definitions and discusses some general facts regarding derivations and zero-mode derivations, including the construction of a natural map from $H^1(V, W)$ to the first cohomology of the (higher level) Zhu's algebra associated to $V$. Section 3, 4 and 5 study $H^1(V, W)$ for the three types of VOAs $V$ and their irreducible modules $W$. We first show $H^1(V, W)=Z^1(V, W)$ for $L(0)$-graded $W$ with nonnegative weights, then for $W$ with the canonical ${\mathbb N}$-grading with zero lowest weight, finally for $W$ with arbitrary ${\mathbb N}$-grading. 
In addition, Section 3 also discusses the image of the map constructed in Section 2 (Proposition \ref{Affine-Zhu}); Section 4 gives a counter-example to $H^1(V,W) = Z^1(V, W)$ when $V$ is the Virasoro VOA not corresponding to minimal models (Proposition \ref{L(-1)w-der}); Section 4 also discusses the case when $L(0)$-weight of $W$ is negative (Theorem \ref{Vir-neg-energy-thm}). Section 6 provides a summary of the results and gives some remarks regarding future work. \noindent\textbf{Acknowledgements.} The author would like to thank Yi-Zhi Huang for his long-term constant support, especially for the valuable suggestions and comments that greatly improved this paper. The author would also like to thank the Pacific Institute of Mathematical Science for offering the opportunity of teaching a network-wide graduate course on vertex algebras and Ethan Armitage, Ben Garbuz, Zach Goldthorpe, Nicholas Lai, Mihai Marian for their participation. Many ideas for this paper pop up during lecturing and discussion. \section{Basic definitions and some general facts} In this paper, we shall assume that the reader is familiar with the basic notions and results in the theory of VOAs. In particular, we assume that the reader is familiar with weak modules, ${\mathbb N}$-gradable weak modules, generalized modules, and related results. Our terminology and conventions follow those in \cite{FLM}, \cite{FHL}, and \cite{LL}. In this section, $(V, Y, \mathbf{1}, \omega)$ is VOA; $(W, Y_W)$ is a generalized $V$-module, i.e., $W = \coprod_{m\in {\mathbb C}} W_{[m]}$ is the direct sum of generalized eigenspaces $$W_{[m]} = \{w\in W: (L(0)- m)^n w = 0 \text{ for some }n\in {\mathbb Z}_+\}$$ of the operator $L(0)$. In case $W$ is irreducible, $W = \coprod_{m\in \gamma + {\mathbb N}} W_{[m]} = \coprod_{n=0}^\infty W_{[\gamma+n]}$ for some $\gamma\in {\mathbb C}$. In other words, $W$ is ${\mathbb N}$-gradable by the operator $L(0) - \gamma$. This is one of the many cases when the grading of $W$ is better shifted by a constant away from the $L(0)$-grading. In this paper, we will use the notation $\d_W$ for the grading operator of $W$. Certainly it is most natural to choose $\d_W = L(0)_s$, where $L(0)_s$ is the semisimple part of the operator $L(0)$. But we would like to allow that $\d_W$ on each irreducible component of $W$ to take $L(0)_s + \alpha$ for some $\alpha\in {\mathbb C}$. If $W$ is irreducible and has lowest weight $\gamma$, the ${\mathbb N}$-grading given by $\d_W = L(0)_s - \gamma$ is called the canonical ${\mathbb N}$-grading. We should emphasize that in the technical arguments, by saying a module $W$ is isomorphic to $V$, we still mean an isomorphism preserving the $L(0)$-grading, forgetting the diversified choices of $\d_W$. This abuse of terminology appears only in the technical argument as a convenient term and does not appear in the statement of main theorems. We hope the reader won't feel confused. \subsection{First cohomology of VOAs} The cohomology theory of grading-restricted vertex algebras was introduced in \cite{H-coh} using certain maps from tensor products of $V$ to the space of $\overline{W}$-valued rational functions. The maps should satisfy $L(0)$- and $L(-1)$-conjugation properties and a natural but technical composable condition. It is pointed out in \cite{H-1st-sec-coh} that the first cohomology $H^1(V, W)$ are isomorphic to the space of derivations that preserves the grading. 
Since this paper mainly concerns the first cohomology, we will use this result without going into the technical composable condition here. \begin{defn} A linear map $F: V \to W$ is a derivation, if \begin{enumerate} \item $ F\circ L(0) = \d_W \circ F$. \item For every $u, v\in V$, $$F(Y(u, x)v) = Y_W(u, x) F(v) + Y_{WV}^W(F(u), x)v. $$ \end{enumerate} Here $Y_{WV}^W(w, x)v = e^{xL(-1)}Y_W(v, -x)w$. \end{defn} \begin{nota} $Y_{WV}^W$ is indeed an intertwining operator of type $\binom{W}{WV}$ satisfying the corresponding Jacobi identity. We use the notation $w_n$ for the coefficient of $x^{-n-1}$ in $Y_{WV}^W(w,x)$, as we normally do with intertwining operators. More precisely, $w_n: V \to W$ is a linear map sending $v\in V$ to $\text{Res}_x x^n Y_{WV}^W(w, x)v$. For convenience, we also write $$Y_{WV}^W(w, x) = \sum_{n\in {\mathbb Z}} w_n x^{-n-1} \in \textrm{Hom}(V, W)[[x, x^{-1}]]. $$ By a straightforward computation, we see that $$w_i v = (-1)^{i+1} v_i w + (-1)^{i+2} L(-1)v_{i+1}w + \cdots + \frac{(-1)^{i+1+j}}{j!}L(-1)^j v_{i+j}w + \cdots, $$ for $w\in W, v\in V, n\in {\mathbb Z}$. This formula will be frequently used in computing $w_i v$ without explicit reference. \end{nota} \begin{rema} It is shown in \cite{H-1st-sec-coh} that the first cohomology $H^1(V, W)$ and $\widehat{H}^1(V, W)$ constructed in \cite{H-coh} are both isomorphic to the space of grading-preserving derivations from $V$ to $W$. Note that \cite{H-coh} constructed the cohomology theory for grading-restricted vertex algebras and modules, where the conformal element is not required to exist. In particular, the $L(0)$-operator in \cite{H-coh} is not required to be a component of the vertex operator associated with the conformal element $\omega$. It can still be defined even without the existence of conformal element. The notation $L(0)$ is replaced by $\d_W$ in the later papers \cite{Q-Coh} and \cite{HQ-Coh-reduct}. \end{rema} \begin{rema} We should emphasize that $H^1(V, W)$ depends on the choice of the grading $\d_W$. Different choices of grading give different cohomologies. For example, if $W$ is an irreducible $V$-module whose lowest $L(0)$-weight is not an integer, it is clear that $H^1(V, W) = 0$ with the grading operator $L(0)$, while $H^1(V, W)$ is usually not zero with the canonical ${\mathbb N}$-grading (as we will see). \end{rema} \begin{rema} It also follows trivially from the definition that $$H^1(V, W_1\oplus W_2) = H^1(V, W_1)\oplus H^1(V, W_2). $$ Thus if every $V$-module is completely reducible, it suffices to study $H^1(V, W)$ for irreducible $W$. \end{rema} \begin{rema} It follows from the definition of $F$ that $F\circ L(-1) = L(-1) \circ F$. The following argument is given by Huang: By setting $u=v=\mathbf{1}$ one sees immediately that $F(\mathbf{1})=0$. Thus $$F(Y(v, x)\mathbf{1})=Y_{WV}^W(F(v), x)\mathbf{1}$$ Taking the derivative on both sides and use $L(-1)$-derivative property for $Y_{WV}^W$. $$\frac d {dx} F(Y(v, x)\mathbf{1})=\frac d {dx} Y_{WV}^W(F(v), x)\mathbf{1}=Y_{WV}^W(L(-1)F(v), x)\mathbf{1}.$$ On the other hand, it follows from the $L(-1)$-derivative property of $Y$ that $$\frac{d}{dx} F(Y(v, x)\mathbf{1})=F(Y(L(-1)v, x)\mathbf{1})=Y_{WV}^W(F(L(-1)v), x)\mathbf{1}.$$ Thus $Y_{WV}^W(f(L(-1)v), x)\mathbf{1}=Y_{WV}^W(L(-1)f(v), x)\mathbf{1}.$. Taking the limit $x\to 0$, we obtain $f(L(-1)v)=L(-1)f(v)$. \end{rema} \begin{defn} A derivation is called zero-mode derivation if there exists some homogeneous $w\in W$ of weight 1, such that $$F(v) = w_0 v$$ for every $v\in V$. 
Here $w_0$ is the coefficient of $x^{-1}$ in the series $Y_{WV}^W(w, x)v$. The space of zero-mode derivations is denoted by $Z^1(V, W)$. \end{defn} \begin{rema} It is proved in \cite{HQ-Coh-reduct} that if $V$ is a (meromorphic open-string) vertex algebra such that for every $V$-bimodule $(W, Y_W^L, Y_W^R)$, $\widehat{H}^1(V, W)=Z^1(V, W)$, then any left $V$-module satisfying a natural composability condition is a direct sum of irreducible submodules. It is also conjectured in \cite{HQ-Coh-reduct} that if the every weak $V$-module is a direct sum of irreducible submodules, then for every $V$-bimodule $W$, $\widehat{H}^1(V, W) = Z^1(V, W)$. Note that in this conjecture, the right $V$-module structure $Y_W^R$ is not necessarily given by $Y_{WV}^W$. \end{rema} \subsection{Some general observations} \begin{prop} The dimension of the space of zero-mode derivations satisfies $$\dim Z^1(V, W) \leq \dim W_{[1]}/ L(-1)W_{[0]}.$$ Equality holds when $W$ is irreducible, not isomorphic to $V$ and $\d_W = L(0)_s + \alpha$ with $\alpha\neq 0$. \end{prop} \begin{proof} The first conclusion follows from the $L(-1)$-derivative property of $Y_{WV}^W$: if $w=L(-1)u$ for some $u\in W_{[0]}$, then $$w_0 v = \text{Res}_{x} Y_{WV}^W (L(-1)w, x)v = \text{Res}_x \frac d {dx} Y_{WV}^W(w, x)v$$ which is zero since the coefficient of $x^{-1}$ vanishes in any derivative of any formal series. To show the equality holds under the additional assumption, we start from the assumption $w_0v = 0$ for every $v\in V$. Then in particular, $$0 = w_0\omega = -\omega_0 w + L(-1)\omega_1 w + L(-1)^2 u = L(-1)(\alpha w + L(-1)u)$$ for some $u\in W_{[0]}$. Since $W$ is not isomorphic to $V$, $\ker L(-1) = 0$ (see \cite{Li-vacuum-like}). Since $\alpha \neq 0$, we have $w=-L(-1)u/\alpha \in L(-1)W_{[0]}$. \end{proof} \begin{rema} See Remark \ref{strictly-less-1} for a counter-example to the equality when $W$ is isomorphic to $V$. See Remark \ref{strictly-less-2} for a counter-example to the equality when $\alpha=0$. \end{rema} \begin{prop}\label{Der-gen} Let $V$ be a VOA generated by a subset $S$. Let $W$ be a $V$-module and $F: V\to W$ be a derivation. If $F$ sends every element in $S$ to zero, then $F$ is a zero map. \end{prop} \begin{proof} This follows directy from an induction, using the facts that $V$ is spanned by $$a^{(1)}_{n_1}\cdots a^{(m)}_{n_m} \mathbf{1}, a^{(1)}, ..., a^{(m)}\in S, n_1, ..., n_m\in {\mathbb Z}, $$ and \begin{align*} F(a^{(i)}_{n_i} v) &= \text{Res}_x x^{n_i} F(Y_V(a^{(i)}, x)v)\\ &= \text{Res}_x x^{n_i} \left(Y_W(a^{(i)}, x)F(v) + Y_{WV}^W(F(a^{(i)}), x)v \right)\\ &= a^{(i)}_{n_i}F(v) + F(a^{(i)})_{n_i} v, \end{align*} for any $a^{(i)}\in S$, $v\in V$. \end{proof} \begin{rema}\label{Rmk-2-12} The proposition shows that there exists \textit{at most} one derivation sending the generators of $V$ to some designated elements in $W$. Thus to classify the derivations, it suffices to focus on the images on the generators. Whether or not these images are compatible is an interesting question. In Proposition \ref{L(-1)w-der} we will see a case study of the compatibility problem. \end{rema} \subsection{Zhu's algebra and its bimodules} Zhu defined an associative algebra $A(V)$ associated with a VOA $V$ in \cite{Zhu-Modular}, called the Zhu's algebra. Dong, Li, and Mason gave a generalization $A_N(V)$ in \cite{DLM-higher-Zhu}, called the level-$N$ Zhu's algebra, or higher level Zhu's algebra. We recall the definition in \cite{DLM-higher-Zhu} here. 
\begin{defn} For any fixed natural number $N\in {\mathbb N}$, $A_N(V) = V/O_N(V)$ where $O_N(V)$ is spanned by elements of the form $$\text{Res}_x x^{-2N-2-n}Y((1+x)^{L(0)+N}u, x)v$$ with $u, v\in V, n\in {\mathbb N}$, and elements $$L(0) v + L(-1)v $$ with $v\in V$. For $u, v\in A_N(V)$, a binary operation $u*v$ is defined by $$u *_N v = \sum_{m=0}^N (-1)^m\binom{m+N}N\text{Res}_x x^{-N-m-1}Y((1+x)^{L(0)+N}u, x)v.$$ When $N=0$, $A_0(V)$ coincides with $A(V)$ defined in \cite{Zhu-Modular} \end{defn} \begin{thm}[\cite{Zhu-Modular}, \cite{DLM-higher-Zhu}] $(A_N(V), *_N)$ forms an associative algebra. \end{thm} Given a $V$-module $W$, Frenkel and Zhu defined a $A(V)$-bimodule $A(W)$ in \cite{Frenkel-Zhu}. Huang and Yang generalized the construction and defined an $A_N(V)$-bimodule $A_N(W)$ in \cite{HY-lio-aa}. We recall the definition in \cite{HY-lio-aa} here. \begin{defn} Let $A_N(W) = W/O_N(W)$ where $O_N(W)$ is spanned by elements of the form $$\text{Res}_x x^{-2N-2} Y_W((1+x)^{L(0)+N}v, x)w$$ where $v\in V, w\in W$, and elements of the form $$\d_W w+L(-1)w$$ where $w\in W$. For $v\in V, w\in W$, define \begin{align*} v *_N w &= \sum_{m=0}^N \binom{m+N}N \text{Res}_x x^{-1-m-N} Y_W((1+x)^{L(0)+N}v, x)w\\ w *_N v &= \sum_{m=0}^N \binom{m+N}N \text{Res}_x x^{-1-m-N} Y_{WV}^W((1+x)^{\d_W+N}w, x)v \end{align*} Here $Y_{WV}^W$ is the intertwining operator defined as follows \begin{align*} & Y_{WV}^W: W \otimes V\to W[[x,x^{-1}]]\\ & Y_{WV}^W(w,x)v = e^{xL(-1)}Y_W(v,-x)w = \sum_{n\in {\mathbb Z}} w_n v x^{-n-1}. \end{align*} \end{defn} \begin{thm}[\cite{HY-lio-aa}] $(A_N(W), *)$ forms an $A_N(V)$-bimodule. \end{thm} \begin{rema} \begin{enumerate} \item It should be emphasized that in case $N=0$, $A_0(V) = A(V)$ but $A_0(W)$ is \textit{different} to the bimodule $A(W)$ constructed in \cite{Frenkel-Zhu}, where $(\d_W+L(-1))w$ is not included in the relation of $O(W)$. In order to emphasize the difference, we will stick to the notation $A_0(V)$ and $A_0(W)$ in this paper, in spite that $A_0(V) = A(V)$. \item In the correction note \cite{HY-lio-aa-correct} to \cite{HY-lio-aa}, Huang and Yang constructed a bimodule $A_N(W; \alpha)$ for fixed $\alpha \in {\mathbb C}$ that includes the relation $L(0)_s + \alpha + L(-1)$ in the relation (the current version gives a construction in a more general context). The definition of $A_N(W)$ in this paper coincides with the $A_N(W; \alpha)$ in \cite{HY-lio-aa-correct} if $\d_W = L(0)_s + \alpha$. In case $A(V)$ is semisimple and $W$ is a simple module, $A(W) = \coprod_{\alpha\in {\mathbb C}} A_0(W; \alpha)$. In other words, $A_0(W)$ defined in this paper is simply one particular summand determined by the grading operator $\d_W$. When $\d_W = L(0)$, $A_0(W) = A_0(W; 0)$. 
\end{enumerate} \end{rema} The following results from \cite{HY-lio-aa} will be useful in our argument: \begin{lemma}\label{HY-Lemma} \begin{enumerate} \item For every $v\in V$ $v*_N O_N(W)\subseteq O_N(W), O_N(w)*_N v \subseteq O_N(W)$, \item \label{HY-Lemma-1}(Lemma 4.4, \cite{HY-lio-aa}, Part 2) For every $v\in V, w\in W$ and integers $p\geq q \geq 0$, $$\text{Res}_x x^{-2N-2-p}Y_W((1+x)^{L(0)+N+q}u, x)w\in O_N(W), $$ $$\text{Res}_x x^{-2N-2-p}Y_{WV}^W((1+x)^{\d_W+N+q}u, x)w\in O_N(W)$$ In particular, for every $n\in {\mathbb N}$, $$\text{Res}_x x^{-2N-2-n}Y_{WV}^W((1+x)^{\d_W+N}w, x)v\in O_N(W).$$ \item (Lemma 4.4, \cite{HY-lio-aa}, Part 3) \label{HY-Lemma-2} For every $v\in V, w\in W$, $$v*_Nw - w*_N v = -\text{Res}_x Y_{WV}^W((1+x)^{\d_W-1}w, x)v$$ $$v*_Nw - w*_N v = -\text{Res}_x Y_W((1+x)^{L(0)-1}v, x)w$$ in $A_N(W)$. \end{enumerate} \end{lemma} \subsection{First cohomology of an associative algebra} The cohomology theory of associative algebras is given by Hochschild in \cite{Hochschild}. \begin{defn} Let $(A, *)$ be an associative algebra, $B$ be an $A$-bimodule. A linear map $f:A\to B$ is a derivation, if for any $u, v\in A$, $$f(u*v) = u* f(v) + f(u)*v.$$ We say $f$ is an inner derivation, if there exists some $w\in B$ such that $$f(v) = v*w - w*v. $$ The first cohomology $H^1(A, B)$ is defined as the quotient of the space of derivations modulo the subspace of inner derivations. \end{defn} \begin{thm} $A$ is semisimple if and only if $H^1(A, B)=0$ for every $A$-bimodule $B$. In other words, every derivation is inner. \end{thm} \begin{rema} We should note that the proof of the only-if-part in \cite{Hochschild} requires $A$ to be finite-dimensional but does not require $B$ to be finite-dimensional. \end{rema} \begin{rema} We did not mention inner derivations for vertex algebras, because in our current context, all such inner derivations are automatically zero. In general, if $V$ as a meromorphic open-string vertex algebra and $W$ is a bimodule, then an inner dervation $F$ is given by the formula $$F(v) = Y_W^L(v, x)w - e^{xL(-1)}Y_W^R(w, -x)v.$$ But in our current situation, $Y_W^R$ is given by $Y_{WV}^W$. Thus $F(v) = 0$. \end{rema} \subsection{The map $H^1(V, W) \to H^1(A_N(V), A_N(W))$} \begin{prop} Let $F: V\to W$ be a derivation from the VOA $V$ to the $V$-module $W$. For any $N\in {\mathbb N}$, we define $f: A_N(V) \to A_N(W)$ by $$f(v) = F(v) + O_N(W). $$ Then $f$ is a derivation from the associative algebra $A_N(V)$ to the $A_N(V)$-bimodule $A_N(W)$. The map $F\mapsto f$ gives a linear map from $H^1(V, W)\to H^1(A_N(V), A_N(W))$. \end{prop} \begin{proof} The composition $v\mapsto F(v) \mapsto F(v)+O_N(W)$ gives a well-defined map from $V$ to $A_N(W)$. We show that $F$ maps $O_N(V)$ to $O_N(W)$. Then the composition factorizes through $A_N(V)= V/O_N(V)$, inducing $f: A_N(V) \to A_N(W)$. By definition of $F$, for every $v\in V$ $$F(L(-1)v + L(0)v) = L(-1) F(v) + \d_W F(v)$$ So $F$ maps $L(-1)v + L(0)v$ to $O_N(W)$. 
Also by definition of $F$, for every $u, v\in V$, we have \begin{equation}\label{def-der} F(Y(u, x)v)=Y_W(u, x)f(v) + Y_{WV}^W(f(u), x)v \end{equation}Let $u$ be homogeneous and $n\in {\mathbb N}$, we take $\text{Res}_x x^{-2N-2-n}(1+x)^{\text{wt}(u)+N}$ on both sides of (\ref{def-der}), we have \begin{align*} & F\left(\text{Res}_x x^{-2N-2-n}Y((1+x)^{\text{wt}(u)+N}u, x)v\right)\\ & \quad = \text{Res}_x x^{-2N-2-n}Y_W((1+x)^{\text{wt}(u)+N}u, x)F(v) + \text{Res}_x x^{-2}Y_{WV}^W((1+x)^{\text{wt}(u)+N}F(u), x)v \end{align*} Note that since $\d_W F(u) = F(L(0) u) = \text{wt}(u) F(u)$, we have $(1+x)^{\text{wt}(u)}=(1+x)^{\d_W}F(u)$. Thus what we have indeed is \begin{align*} & F\left(\text{Res}_x x^{-2N-2-n}Y((1+x)^{L(0)+N}u, x)v\right)\\ & \quad = \text{Res}_x x^{-2N-2-n}Y_W((1+x)^{L(0)+N}u, x)F(v) + \text{Res}_x x^{-2N-2-n}Y_{WV}^W((1+x)^{\d_W+N}F(u), x)v \end{align*} This equality obviously extends to nonhomogeneous $u\in V$. By definition of $O_N(W)$, the first term on the right-hand-side falls in $O_N(W)$. By Lemma 4.4 Part (2) in \cite{HY-lio-aa} (recorded as Lemma \ref{HY-Lemma} (\ref{HY-Lemma-1})) here), the second term on the right-hand-side is also in $O_N(W)$. Thus $F$ maps every element of the spanning set in $O_N(V)$ into $O_N(W)$. Now we show that $$f(u*_Nv) = u*_N f(v)+f(u)*_N v.$$ We simply apply the operator $\sum_{m=0}^N (-1)^m \binom{m+N} N \text{Res}_x x^{-N-m-1} (1+x)^{\text{wt}(u)+N}$ to both sides of (\ref{def-der}). Using the fact that $F((1+x)^{L(0)} u) = (1+x)^{\d_W} F(u) = (1+x)^{\text{wt}(u)} F(u)$, we have \begin{align*} & \sum_{m=0}^N (-1)^m \binom{m+N}N \text{Res}_x x^{-N-m-1} F\left( Y_V((1+x)^{L(0)+N}u, x)v)\right) \\ = & \sum_{m=0}^N (-1)^m \binom{m+N}N \text{Res}_x x^{-N-m-1} \left( Y_W((1+x)^{L(0)+N}u, x)F(v) + Y_{WV}^W ((1+x)^{\d_W+N} F(u), x)v\right) \end{align*} Thus $$F(u*_Nv) = u*_N F(v) + F(u)*_N v.$$ \end{proof} \begin{cor}\label{Corollary-1-22} Let $F:V\to W$ be a derivation that sits in the kernel of the map $H^1(V, W)\to H^1(A_N(V), A_N(W))$. Then there exists some $w\in W$ such that $$F(v) = -\text{Res}_x Y_W((1+x)^{L(0)-1}v, x) w + O(W). $$ \end{cor} \begin{proof} Since $v\mapsto F(v)+O(W)$ is inner, we know that $$F(v) = v*_N w - w*_N v + O(W) = \text{Res}_x -Y_W((1+x)^{L(0)-1}v, x)w + O(W)$$ where the last equality follows from (\ref{HY-Lemma-2}) in Lemma \ref{HY-Lemma}. \end{proof} \begin{rema}\label{Rmk-1-23} In case $A_N(V)$ is semisimple, $H^1(A_N(V), A_N(W)) = 0$. We once hoped to use it to classify $H^1(V, W)$. But it turns out that this result helps only when $W=V$ and when $A(V)$ is not too trivial, as will be addressed in later sections. \end{rema} \begin{rema} Possibly, the result might be helpful when Zhu's algebra is not semisimple. One natural problem is to determine whether or not the map $H^1(V, W)\to H^1(A_N(V), A_N(W))$ is surjective, which is interesting but unknown in general for now. \end{rema} \section{First cohomologies of affine VOAs}\label{Section-2} In this section, we will use the Whitehead lemma for simple Lie algebras to study the first cohomologies of the affine VOA associated with a simple Lie algebra ${\mathfrak g}$. \subsection{Derivations of Lie algebras and Whitehead lemma} Let $\mathfrak{g}$ be a Lie algebra and $M$ be a $\mathfrak{g}$-module. A linear map $f: {\mathfrak g} \to M$ is a derivation if $$f([x,y])=x f(y) - y f(x). 
$$ $f$ is called an inner derivation, if there exists $m\in M$ such that $$f(x) = x m$$ \begin{lemma}[Whitehead lemma] If $\mathfrak{g}$ is simple, then for every ${\mathfrak g}$-module $M$, every derivation $f: {\mathfrak g} \to M$ is inner (see \cite{Sam}). \end{lemma} \subsection{Affine VOA associated to simple Lie algebras} The affine VOA associated to a simple Lie algebra was first constructed in \cite{Frenkel-Zhu}. Here we give a brief review following \cite{LL}. Let ${\mathfrak g}$ be a finite-dimensional simple Lie algebra. Let $\Phi = \Phi_+ \cup \Phi_-$ be the root system, $\Phi_+$ (resp. $\Phi_-$) be the set of positive roots (resp. negative) roots. Let $\mathfrak h$ be a Cartan subalgebra. Denote the triangular decomposition of ${\mathfrak g}$ by $${\mathfrak g} = {\mathfrak n}_+ \oplus {\mathfrak h} \oplus {\mathfrak n}_-. $$ where ${\mathfrak n}_\pm = \coprod_{\alpha\in \Phi_\pm}{\mathfrak g}_{\alpha}$. For every $\alpha\in \Phi_+$, we denote by $h_\alpha$ the unique vector such that $[{\mathfrak g}_\alpha, {\mathfrak g}_{-\alpha}] = {\mathbb C} h_\alpha$ and $\alpha(h_\alpha)=2$. For $\lambda\in \mathfrak{h}^*$, we consider the Verma module, i.e., the induced ${\mathfrak g}$-module from the one-dimensional $({\mathfrak n}_+\oplus {\mathfrak h})$-module where ${\mathfrak n}_+$ acts trivially and $h\in {\mathfrak h}$ acts by the scalar $\lambda(h)$. Recall that $\lambda$ is dominant integral, if $$\lambda(h_\alpha) = \frac{2\langle\lambda,\alpha\rangle}{\langle\alpha,\alpha\rangle}\in {\mathbb N}$$ for every $\alpha\in \Phi_+$, here $\langle\cdot, \cdot\rangle$ is the normalized Killing form on ${\mathfrak h}^*$ such that $\langle\alpha, \alpha\rangle = 2$ for every long root $\alpha$. If $\lambda$ is dominant integral, the Verma module has a unique irreducible quotient. We denote the irreducible quotient by $M_\lambda$. Now we consider the affine Lie algebra $$\hat{{\mathfrak g}} = {\mathfrak g} \otimes_{\mathbb C} {\mathbb C}[t, t^{-1}] \oplus {\mathbb C} k$$ with the Lie bracket \begin{align*} [a\otimes t^m, b\otimes t^n] &= [a,b]\otimes t^{m+n} + m\langle a,b\rangle\delta_{m+n,0}k, \\ [k, g\otimes t^m] &= 0, \end{align*} where $a,b\in {\mathfrak g}, m,n\in {\mathbb Z}$, $\langle\cdot, \cdot\rangle$ is the normalized Killing form on $\mathfrak{g}$, $k$ is the central element. For $l\in {\mathbb C}$, let $V_{\hat{\mathfrak g}}(l, \lambda)$ be the induced $\hat{g}$-module from the $({\mathfrak g}\otimes t{\mathbb C}[t] \oplus {\mathfrak g} \oplus {\mathbb C} k)$-module $M_\lambda$, where ${\mathfrak g}\otimes t{\mathbb C}[t]$ acts trivially, $k$ acts by the scalar $l$. It follows from Section 6.2 and Section 6.6 in \cite{LL} and \cite{DLM-Regular} that \begin{enumerate} \item $V_{\hat{{\mathfrak g}}}(l, 0)$ is a VOA that has a simple quotient $L_{\hat{\mathfrak g}}(l, 0)$. \item $V_{\hat{{\mathfrak g}}}(l, \lambda)$ has a unique quotient $L_{\hat{\mathfrak g}}(l, \lambda)$ that is an irreducible $V_{\hat{\mathfrak g}}(l, 0)$-module. \item If the level $l$ is a positive integer, then every irreducible $L_{\hat{\mathfrak g}}(l, 0)$-modules is isomorphic to $L_{\hat{\mathfrak g}}(l, \lambda)$ for some dominant integral $\lambda\in {\mathfrak h}^*$ satisfying $\lambda( h_{\theta})\leq l$, where $\theta$ is the highest root of ${\mathfrak g}$. \item Every weak $L_{\hat{\mathfrak g}}(l, 0)$-module is a direct sum of irreducible $L_{\hat{\mathfrak g}}(l,0)$-modules. 
\end{enumerate} \subsection{Image in $H^1(A_0(V), A_0(W))$} Let $l\in {\mathbb C}$ and $V=L_{\hat{\mathfrak g}}(l, 0)$. Let $W$ be any generalized $V$-module. The grading operaor $\d_W$ of $W$ does not necessarily coincides with the semisimple part of $L(0)$. Let $A_0(V)$ be the (level-0) Zhu's algebra associated to $V$ and $A_0(W)$ be the $A_0(V)$-bimodule associated to $W$. We emphasize that $A_0(W)$ we used here is subject to the additional relation $(\d_W + L(-1))w \in O_0(W)$ compared to $A(W)$ in \cite{Frenkel-Zhu}. \begin{prop}\label{Affine-Zhu} Let $F: V \to W$ be any derivation. Then there exists an element $w_{(1)}\in W_{[1]}$, such that $$F(v) \equiv (w_{(1)})_0 v \mod O_0(W).$$ \end{prop} \begin{proof} We first recall some basic facts. \begin{enumerate} \item $V_{(1)} \simeq {\mathfrak g}$. The map $a(-1)\mathbf{1}\otimes b(-1)\mathbf{1} \mapsto a(0)b(-1)\mathbf{1}$ coincides with the Lie algebra structure on $V_{(1)}$. \item $W_{[1]}$ is a ${\mathfrak g}$-module with the following action. $$a\in {\mathfrak g}, a\cdot w = a(0)w$$ \item $W_{[1]}\cap O_0(W)$ is a ${\mathfrak g}$-submodule in $W_{[1]}$. This follows from the fact that $w\in O_0(W)\Rightarrow \text{Res}_x Y_W((1+x)^{L(0)-1}a(-1)\mathbf{1}, x)w = a(0)w \in O_0(W)$ \item The image $W_{[1]} + O_0(W)$ of $W_{[1]}$ in $A_0(W)$ is then a ${\mathfrak g}$-module. This essentially follows from the isomorphism $W_{[1]} + O_0(W)/ O_0(W)\simeq W_{[1]} / W_{[1]}\cap O_0(W)$. \end{enumerate} For any fixed $a\in {\mathfrak g}$, $F(a(-1)\mathbf{1})\in W_{[1]}$. We write $\theta(a) \in W_{[1]}+O_0(W)$ as the image of $F(a(-1)\mathbf{1})$ in $A_0(W)$. Then the map $a \mapsto \theta(a)$ gives a map from ${\mathfrak g}$ to the ${\mathfrak g}$-module $W_{[1]}+O_0(W)$. From the fact that $F$ is a derivation, for any $a,b\in {\mathfrak g}$, $$F(a(0)b(-1)\mathbf{1}) = a(0) F(b(-1)\mathbf{1}) + F(a(-1))_0 b(-1)\mathbf{1}.$$ Note that $a(0)b(-1)\mathbf{1} = [a,b](-1)\mathbf{1}$, So the image of the left-hand-side in $W_{[1]}+O_0(W)$ is precisely $\theta([a,b])$. The image of first term on the right-hand-side is $a(0)\theta(b)$. For the second term on the right-hand-side, we compute as follows \begin{align*} F(a(-1)\mathbf{1})_0 b(-1)\mathbf{1} &= \text{Res}_x Y_{WV}^W(\theta(a), x)b(-1)\mathbf{1} \\ &= \text{Res}_x e^{xL(-1)} Y_W(b(-1)\mathbf{1},-x)\theta(a) \\ &\equiv \text{Res}_x (1+x)^{-\d_W}Y_W(b(-1)\mathbf{1},-x)\theta(a) \mod O_0(W) \\ &= \text{Res}_x Y_W\left((1+x)^{-L(0)}b(-1)\mathbf{1},-\frac{x}{1+x} \right) (1+x)^{-\d_W}\theta(a)\\ &= \text{Res}_y Y_W\left((1+y)^{L(0)}b(-1)\mathbf{1},y \right) (1+y)^{\d_W}\theta(a) \cdot (-(1+y)^{-2})\\ &= -\text{Res}_y Y_W(b(-1)\mathbf{1}, y) \theta(a)\\ &= -b(0)\theta(a), \end{align*} where line 3 follows from the formula proved in \cite{HY-lio-aa} $$e^{xL(-1)}(1+x)^{\d_W} = (1+x)^{\d_W + L(-1)} \equiv 1 \mod O_0(W);$$ line 5 follows from the change of variable formula $$\text{Res}_x f(x) = \text{Res}_y f(g(y))g'(y),$$ with $g(y)=-y/(1+y)$ (see \cite{Zhu-Modular}). Thus we have shown that $$\theta([a,b]) \equiv a(0)\theta(b) - b(0)\theta(a) \mod O_0(W). $$ This is to say that the map $\theta: {\mathfrak g} \to W_{[1]}+O_0(W)$ is a Lie algebra derivation. From Whitehead Lemma, there exists an element $w\in W_{[1]}+O_0(W)$ such that $$\theta(a) = a\cdot w = a(0)w $$ This is to say that $$F(a(-1)\mathbf{1}) = a(0)w + O_0(W)$$ The conclusion then follows with the choice $w_{(1)} = -w$ and the fact that $a(0)w \equiv -w_0 a \mod O_0(W)$. 
\end{proof} \begin{rema} The conclusion also holds for $V = V_{\hat{\mathfrak g}}(l,0)$. Note also that $W$ is not necessarily graded by $L(0)$. \end{rema} \begin{rema} The result only states that $H^1(V, W)$ has zero-image in $H^1(A_0(V), A_0(W))$. It is certainly insufficient to conclude that $H^1(A_0(V), A_0(W)) = 0$. \end{rema} \subsection{The module $L_{\hat{\mathfrak g}}(l, \lambda)$ $(l\in {\mathbb Z}_+)$ and $L(0)$-grading} Let $l\in {\mathbb Z}_+$ and $\lambda\in {\mathfrak h}^*$ be dominant integral, satisfying $\lambda(h_\theta) \leq l$ for the highest root $\theta$. Let $V = L_{\hat{\mathfrak g}}(l, 0)$ and $W = L_{\hat{\mathfrak g}}(l, \lambda)$. We will show that $H^1(V, W) = Z^1(V, W)$. \begin{lemma}\label{lowest-weight} \begin{enumerate} \item The lowest $L(0)$-weight of $L_{\hat{\mathfrak g}}(l, \lambda)$ is nonnegative. The lowest $L(0)$-weight is zero only when $\lambda = 0$. \item If $W$ is not isomorphic to $V$ and $W_{[1]} \neq 0$, then $W_{[1]}$ is the lowest weight subspace. \end{enumerate} \end{lemma} \begin{proof} Let $\{u^{(i)}, i=1, ..., d\}$ be an orthonormal basis with respect to the normalized Killing form. Then $$\sum_{i=1}^d u^{(i)}\otimes u^{(i)}$$ is the Casimir element in the universal enveloping algebra $U({\mathfrak g})$. It is well-known that the Casimir element acts on $M_\lambda$ by the scalar $$\langle \lambda, \lambda\rangle + 2\langle\lambda, \rho\rangle$$ where $\rho$ is half of the sum of the positive roots. Since $\lambda$ is dominant integral, both $\langle\lambda, \lambda\rangle$ and $\langle\lambda, \rho\rangle$ are nonnegative. The Casimir element acts by zero only when $\lambda = 0$. Let $h$ be the dual Coxeter number of ${\mathfrak g}$. From the Sugawara construction, $L(0)$ acts on the lowest weight subspace by $$\frac 1 {2(l+h)} \sum_{i=1}^d u^{(i)}(0) u^{(i)}(0) = \frac 1 {2(l+h)} \left(\langle \lambda, \lambda\rangle + 2\langle\lambda, \rho\rangle\right),$$ a scalar that is nonnegative. The scalar is zero only when $\lambda = 0$. This concludes (1). For (2), we start by noticing that if $W_{[1]}\neq 0$, then $W$ has to be ${\mathbb Z}$-graded. So from (1), the lowest $L(0)$-weight is nonnegative. If the lowest $L(0)$-weight is zero, then $\lambda = 0$ and $W = L_{\hat{\mathfrak g}}(l, 0) = V$, contradicting the assumption that $W$ is not isomorphic to $V$. Thus the lowest $L(0)$-weight has to be at least 1. The conclusion follows from the assumption that $W_{[1]}\neq 0$. \end{proof} \begin{thm}\label{affine} Let $l\in {\mathbb Z}_+$, $V = L_{\hat{\mathfrak g}}(l, 0)$ and $W=L_{\hat{\mathfrak g}}(l, \lambda)$, where $\lambda$ is an integral weight satisfying $\langle \lambda, h_\theta\rangle \leq l$. We take $\d_W=L(0)$. Then for every derivation $F: V\to W$, $F(v) = w_{(1)}v$ for some $w_{(1)}\in W_{[1]}$. \end{thm} \begin{proof} Let $h$ be the lowest $L(0)$-weight. We know from Lemma \ref{lowest-weight} that $h\geq 0$. Thus it suffices to study the following cases. \begin{enumerate} \item If $h\notin {\mathbb Z}$, then $F=0$. We simply take $w_{(1)}=0$. \item If $h \geq 2$, we know that $F(a(-1)\mathbf{1}) = 0$ for every $a\in {\mathfrak g}$, which implies that $F=0$. \item If $h=1$, then we have $$F((a(-1)\mathbf{1})_0 b(-1)\mathbf{1} = -b(0)F(a(-1)\mathbf{1}).$$ So $$F(a(0)b(-1)\mathbf{1}) = F([a,b](-1)) = a(0)F(b(-1)\mathbf{1}) - b(0) F(a(-1)\mathbf{1}) $$ This means that the map $$a \mapsto F(a(-1)\mathbf{1})$$ is a Lie algebra derivation from ${\mathfrak g}$ to $W_{[1]}$. 
By Whitehead lemma, there exists $w_{(1)}\in W_{[1]}$ such that $$F(a(-1)\mathbf{1}) = -a(0) w_{(1)} = (w_{(1)})_0 a(-1)\mathbf{1}.$$ Thus $$F(v) = (w_{(1)})_0 v$$ is a zero-mode derivation. \item If $h=0$, we know from Lemma \ref{lowest-weight} that $W=V$. We know from the Proposition \ref{Affine-Zhu} that there exists $v_{(1)}\in V_{(1)}$, such that $$F(v) \equiv (w_{(1)})_0 v \mod O_0(V).$$ So the map $g(v) = F(v) - (w_{(1)})_0 v$ is a derivation with image in $O_0(V)$. In particular, for every $a\in {\mathfrak g}$, $g(a(-1)\mathbf{1})$ is a homogeneous element in $O_0(V)$ of weight 1. However, from the fact that $A_0(V) = U({\mathfrak g})/\langle e_\theta^{l+1}\rangle$ (see \cite{Frenkel-Zhu}), it is clear that $O_0(V)$ contains no homogeneous element of weight 1. Thus $g(a(-1)\mathbf{1})= 0$ and $F(a(-1)\mathbf{1}) = (w_{(1)})_0 v$. This implies that $F(v) = (w_{(1)})_0 v$. \end{enumerate} \end{proof} \begin{rema} The $h=0$ case has also been known in \cite{HQ-Coh-reduct}. Zhu's algebra provides an alternative proof. But for $h=1$ case, Zhu's algebra won't help unless we know there exists $V$-modules $W_2$ and $W_3$ of the same lowest weight, such that the fusion rule $N_{WW_2}^{W_3} \neq 0$ (cf. Remark \ref{Rmk-1-23}). This assumption usually doesn't hold. But if it holds, then from the conclusion of the previous proposition, there exists some $w_{(1)}\in W_{[1]}$ such that $F(a(-1)\mathbf{1}) - (w_{(1)})_0 a(-1)\mathbf{1}$ is a homogeneous element of weight $h$ in $O(W)$, which we denote by $\theta$. Assume that $\theta\neq 0$. Then since for any $a\in {\mathfrak g}$, $a(0)\theta\in O(W)$, we know that $W_{[1]} \subseteq O(W)$ and thus $A_0(W) = 0$. But the assumption $N_{WW_2}^{W_3}\neq 0$ means $A_0(W)$ cannot be zero. So we have a contradiction. \end{rema} \subsection{The module $L_{{\mathfrak g}}(l, \lambda)$ $(l\in {\mathbb Z}_+)$ with ${\mathbb N}$-grading} In this subsection we consider $W=L_{\hat{\mathfrak g}}(l, \lambda)$ with ${\mathbb N}$-grading. The main effort focuses on the canonical ${\mathbb N}$-grading given by $\d_W = L(0) - (\langle\lambda, \lambda\rangle + 2 \langle\lambda, \rho\rangle) / 2(l+h)$. Other ${\mathbb N}$-gradings are much easier. One should note that with a grading different from the $L(0)$ one, the choice of derivations is different. Thus, $H^1(V, W)$ and $Z^1(V, W)$ are different from those with $L(0)$-gradings. \begin{thm}\label{affine-canonical-N-grading} Let $V = L_{\hat{{\mathfrak g}}}(l, 0)$ with $l\in {\mathbb Z}_+$. Let $\lambda\in {\mathfrak h}^*$ be dominant integral satifying $\lambda(h_\alpha)\leq l$. Let $W = L_{\hat{\mathfrak g}}(l, \lambda)$ with the canonical ${\mathbb N}$-grading given by the operator $\d_W = L(0)-(\langle\lambda, \lambda\rangle + 2 \langle\lambda, \rho\rangle) / 2(l+h)$. Then $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} If $\lambda = 0$ then $W = V$. The canonical ${\mathbb N}$-grading coincides with the $L(0)$-grading and thus has been taken care of in Theorem \ref{affine}. Thus we focus on the case $\lambda\neq 0$. Let $F: V\to W$ be a derivation. Then for every $a, b\in {\mathfrak g}$, \begin{align*} F([a,b](-1)\mathbf{1}) &=F(a(0) b(-1)\mathbf{1}) = a(0) F(b(-1)\mathbf{1}) + F(a(-1)\mathbf{1})_0 b(-1)\mathbf{1}\\ &= a(0) F(b(-1)\mathbf{1})-b(0)F(a(-1)\mathbf{1}) + L(-1)b(1)F(a(-1)\mathbf{1})\\ &\in a(0) F(b(-1)\mathbf{1})-b(0)F(a(-1)\mathbf{1}) + L(-1)W_{[0]}. \end{align*} This motivates us to consider the space $W_{[1]} / L(-1)W_{[0]}$. 
View $W_{[0]}$ and $W_{[1]}$ as ${\mathfrak g}$-modules, where the actions are given by $g(0), g\in {\mathfrak g}$. We first show that $L(-1): W_{[0]}\to W_{[1]}$ is an injective homomorphism of ${\mathfrak g}$-modules. Indeed, it follows from $L(-1)$-derivative property that $(L(-1)v)_0 = 0$ for every $v\in V$. It follows from $L(-1)$-commutator property that \begin{align*} L(-1)g(0) &= g(0)L(-1) + [L(-1), g(0)] \\ &= g(0)L(-1) + (L(-1)g(-1)\mathbf{1})_0 =g(0)L(-1) \end{align*} Thus $L(-1)$ a homomorphism. Since $\lambda\neq 0$, $W$ is not isomorphic to $V$ and thus has no vacuum-like vectors. Thus $\ker L(-1)= 0$ and $L(-1)$ is injective. Thus, $W_{[1]}/L(-1)W_{[0]}$ is a ${\mathfrak g}$-module. Consider the restriction of $F$ on $V_{(1)} \simeq {\mathfrak g}$, whose image is then in $W_{[1]}$. Let $\bar F: {\mathfrak g} \to W_{[1]}/L(-1)W_{[0]}$ be the composition of the restriction and the canonical projection. Then $\bar F$ satisfies $$\bar F([a,b](-1)\mathbf{1}) = a(0) \bar F(b(-1)\mathbf{1}) - b(0) F(a(-1)\mathbf{1}) $$ and thus forms a Lie algebra derivation. Whitehead lemma implies that \begin{align*} \bar F(a(-1)\mathbf{1}) & = F(a(-1)\mathbf{1}) + L(-1)W_{[0]} \\ &= a(0) (w_{[1]}+L(-1)W_{[0]}) + L(-1)W_{[0]} \in W_{[1]}/ L(-1)W_{[0]}. \end{align*} for some $w_{[1]}\in W_{[1]}$. Since $a(0)$ commutes with $L(-1)$, we thus have $$F(a(-1)\mathbf{1}) \in a(0) w_{[1]} + L(-1) W_{[0 ]}.$$ Consider now the map $F_1: V\to W$ defined by $$F_1(v) = F(v) + (w_{[1]})_0 v. $$ Notice that $(w_{[1]})_0 a(-1) = -a(0) w_{[1]} + L(-1)a(1) w_{[1]} \in -a(0)w_{[1]} + L(-1)W_{[0]}$, thus $$F_1(a(-1)\mathbf{1}) = F(a(-1)\mathbf{1}) - a(0) w_{[1]} + L(-1)a(1)w_{[1]}\in L(-1)W_{[0]}. $$ This in particular implies that $F_1(a(-1)\mathbf{1})_0 = 0$ from $L(-1)$-derivative property. Thus for $a,b\in {\mathfrak g}$, we have $$F_1(a(0) b(-1)\mathbf{1}) = a(0) F_1(b(-1)\mathbf{1}),$$ which means that $F_1$ is a ${\mathfrak g}$-module homomorphism from ${\mathfrak g}$ to $L(-1)W_{[0]}$. However, since $L(-1)$ is an injective homomorphism, $L(-1)W_{[0]}$ is isomorphic to $W_{[0]}$ which is $M_\lambda$. From Schur's lemma, if $M_\lambda$ is not the adjoint ${\mathfrak g}$-module, the the map $a(-1)\mathbf{1}\mapsto F_1(a(-1)\mathbf{1}$ is zero. So $F_1$ is a derivation sending every generator of $V$ to zero. Thus $F_1(v) = 0$ and $F(v) + (w_{[1]})_0 v = 0$ for every $v\in $V. Therefore, $F$ is a zero-mode derivation. It remains to consider the case when $M_\lambda\simeq {\mathfrak g}$. In this case, let $\psi: {\mathfrak g} \to M_\lambda$ be a ${\mathfrak g}$-module homomorphism (unique up to a scalar). Then for every $a\in {\mathfrak g}$, we have $$F_1(a(-1)\mathbf{1}) = L(-1)\psi(a).$$ We will proceed to show that $F_1$ is also a zero-mode derivation. For each $\alpha\in \Phi$, let $t_\alpha$ be an element in ${\mathfrak h}$ satisfying $$\alpha(h) = \langle t_\alpha, h\rangle, h\in {\mathfrak h}. 
$$ We also pick $\{e_\alpha: \alpha\in \Phi\}$ to be a Chevalley basis, i.e., each $e_\alpha$ is a root vector in ${\mathfrak g}_\alpha$; \begin{align*} [e_\alpha, e_{-\alpha}] & = \langle e_\alpha, e_{-\alpha}\rangle t_\alpha = \frac{2}{\langle \alpha, \alpha\rangle} t_\alpha; \end{align*} and for $\alpha, \beta\in \Phi$ with $\alpha+\beta\neq 0$, \begin{align*} [e_\alpha, e_\beta] &= c_{\alpha\beta} e_{\alpha+\beta}, \end{align*} where the coefficients $c_{\alpha\beta}$ satisfies $c_{\alpha\beta} = -c_{\beta\alpha} = -c_{-\alpha, -\beta}$, together with the following property: if $\alpha, \beta, \gamma\in \Phi$ satisfies $\alpha+\beta+\gamma=0$, then \begin{align} \frac{c_{\alpha\beta}}{\langle \gamma, \gamma\rangle} = \frac{c_{\beta\gamma}}{\langle \alpha, \alpha\rangle} = \frac{c_{\gamma\alpha}}{\langle \beta, \beta\rangle}\label{cab-formula} \end{align} (See \cite{Hum} and \cite{Sam} for details). Let $\alpha_1, ..., \alpha_r\in \Phi^+$ be the simple positive roots. We denote the elements $t_{\alpha_i}$ simply by $t_i$. Let $t_1^\vee, ..., t_r^\vee \in {\mathfrak h}$ satisfying $\langle t_i, t_j^\vee\rangle = \alpha_i(t_j^\vee)= \delta_{ij}, i, j= 1, ..., r$. Consider now the element $w_{(1)}\in W_{[1]}$ of the following form $$w_{(1)} = \sum_{i=1}^r t_i(-1)\psi(t_i^\vee) + \sum_{\alpha\in \Phi} \frac{\langle \alpha, \alpha\rangle}{2} e_\alpha(-1)\psi(e_{-\alpha}). $$ Informally, $w_{(1)}$ is the Casimir element $$\Omega = \sum_{i=1}^r t_i t_i^\vee + \sum_{\alpha\in \Phi} \frac{\langle \alpha, \alpha\rangle}{2}e_\alpha e_{-\alpha}$$ twisted by $\psi$ and the negative-one-mode. We now show that $w_{(1)}$ spans a trivial ${\mathfrak g}$-submodule of $W_{[1]}$. We first note that for every $h\in {\mathfrak h}$, \begin{align*} h(0)w_{(1)} &= \sum_{i=1}^r [h,t_i](-1)\psi(t_i^\vee)+t_i(-1)\psi([h,t_i^\vee]) \\ & \quad+ \sum_{\alpha\in \Phi}\frac{\langle\alpha,\alpha\rangle}2 \left([h,e_\alpha](-1)\psi(e_{-\alpha})+ e_\alpha(-1) \psi([h,e_{-\alpha}]\right)\\ &= 0+\sum_{\alpha\in \Phi}\frac{\langle\alpha,\alpha\rangle}2 \left(\alpha(h)e_\alpha(-1)\psi(e_{-\alpha})-\alpha(h) e_\alpha(-1) \psi(e_{-\alpha}\right)=0. \end{align*} Thus $w_{(1)}$ is of weight zero. For every $j=1, ..., r$, \begin{align} e_{\alpha_j}(0)w_{(1)} &= \sum_{i=1}^r [e_{\alpha_j},t_i](-1)\psi(t_i^\vee)+t_i(-1)\psi([e_{\alpha_j},t_i^\vee]) \label{Line1}\\ & \quad+ \sum_{\alpha\in \Phi}\frac{\langle\alpha,\alpha\rangle}2 \left([e_{\alpha_j},e_\alpha](-1)\psi(e_{-\alpha})+ e_\alpha(-1) \psi([e_{\alpha_j},e_{-\alpha}]\right)\label{Line2} \end{align} By $[e_{\alpha}, t_\beta] = -\alpha(t_\beta)e_\alpha = -\langle t_\alpha, t_\beta\rangle e_\alpha$, and $h = \sum_{i=1}^r \langle h, t_i\rangle t_i^\vee = \sum_{i=1}^r \langle h, t_i^\vee\rangle t_i$, (\ref{Line1}) can be simplified as $$ -e_{\alpha_j}(-1)\psi(t_j) - t_j(-1) \psi(e_{\alpha_j}), $$ which cancels out with the $\alpha = -\alpha_j$ summand in (\ref{Line2}). For other summands in (\ref{Line2}), we separate $\Phi$ into a disjoint union of $\alpha_j$-strings and show that the summation along each $\alpha_j$-string yields zero. Fix some $\alpha_j$-string consisting of positive roots and let $\beta$ be lowest element, so that the $\alpha_j$-string is of the form $$\beta, \beta+\alpha_j, ..., \beta+q\alpha_j$$ is the $\alpha_j$-string. We start with the $\alpha=\beta$ summand $$\frac{\langle \beta, \beta\rangle}2 c_{\alpha_j\beta}e_{\beta+\alpha_j}(-1)\psi(e_{-\beta})$$ (second half is zero because $[e_{\alpha_j}, e_{-\beta}]=0$). 
It is clear that this cancels out with the second term of the $\alpha=\beta+\alpha_j$ summand: $$\frac{\langle \beta+\alpha_j, \beta+\alpha_j\rangle}2 c_{\alpha_j, \beta+\alpha_j}e_{\beta+2\alpha_j}(-1)\psi(e_{-\alpha}) + \frac{\langle \beta+\alpha_j, \beta+\alpha_j\rangle}2 c_{\alpha_j, -\beta-\alpha_j}e_{\beta+\alpha_j}(-1) \psi(e_{-\beta})$$ knowing that $$ \frac{c_{\alpha_j\beta}}{\langle \alpha_j+\beta, \alpha_j+\beta\rangle} = \frac{c_{-\beta-\alpha_j, \alpha_j}}{\langle \beta, \beta\rangle} = -\frac{c_{\alpha_j, -\beta-\alpha_j}}{\langle \beta, \beta\rangle}$$ from Formula (\ref{cab-formula}) and skew-symmetry of $c_{\alpha\beta}$. Similarly the first term of the summand $\beta+\alpha_j$ cancels out with the second term of the summand $\beta+2\alpha_j$. Repeating the process and notice that the first term of the last summand $\beta+q\alpha_j$ since $\beta+(q+1)\alpha_j\notin\Phi$. Thus we showed that for every positive simple root $\alpha_i$, $e_{\alpha_i}(0)w_{(1)} = 0$. This means that $w_{(1)}$ is also a highest weight vector. Thus it generates a trivial ${\mathfrak g}$-module. Now we can compute the zero-mode derivation defined by $w_{(1)}$: for any $a\in {\mathfrak g}$, \begin{align*} (w_{(1)})_0 a(-1)\mathbf{1} = a(0) w_{(1)} + L(-1)a(1)w_{(1)}. \end{align*} We have seen above that first term is zero. To compute the second term, we first notice that \begin{align*} a(1)w_{(1)} &= \sum_{i=1}^r a(1) t_i(-1)\psi(t_i^\vee) + \sum_{\alpha\in \Phi} \frac{\langle \alpha, \alpha\rangle}{2}a(1)e_\alpha(-1)\psi(e_{-\alpha})\\ &= \sum_{i=1}^r ([a,t_i](0) + l\langle a, t_i\rangle )\psi(t_i^\vee) + \sum_{\alpha\in \Phi} \frac{\langle \alpha, \alpha\rangle}{2}([a, e_\alpha](0)+l\langle a, e_\alpha\rangle)\psi(e_{-\alpha})\\ &= \sum_{i=1}^r \psi([[a,t_i], t_i^\vee]) + \sum_{\alpha\in \Phi} \frac{\langle \alpha, \alpha\rangle}{2} \psi([[a,e_\alpha], e_{-\alpha}]) \\ &\quad + l\psi\left(\sum_{i=1}^r\langle a, t_i\rangle t_i^\vee + \sum_{\alpha\in \Phi} \langle a, e_\alpha\rangle \frac{\langle \alpha, \alpha\rangle} 2 e_{-\alpha}\right)\\ &= (2h+l)\psi(a). \end{align*} Thus, $$(w_{(1)})_0 a(-1)\mathbf{1} = L(-1)a(1)w_{(1)} = (2h+l)\psi(a). $$ If we let $$F_2(v) = F_1(v) - \left(\frac 1 {2h+l}w_{(1)}\right)_0 v, $$ then $F_2(a(-1)\mathbf{1}) = 0$ for every $a\in {\mathfrak g}$. This shows that $F_2(v)=0$ for every $v\in V$. This is to say that for every $v\in V$, $$0 = F(v) + (w_{[1]})_0 v - \left(\frac{1}{2h+l}w_{(1)}\right)_0 v.$$ Therefore, $F$ is a zero-mode derivation. \end{proof} \begin{thm} Let $V = L_{\hat{{\mathfrak g}}}(l, 0)$ with $l\in {\mathbb Z}_+$. Let $W = L_{\hat{\mathfrak g}}(l, \lambda)$ with arbitrary ${\mathbb N}$-grading. Then $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} Let $\Xi\in {\mathbb N}$ be the lowest weight of $L_{\hat{\mathfrak g}}(l, \lambda)$. The $\Xi=0$ case is proved in Theorem \ref{affine-canonical-N-grading}. The remaining cases can be similarly handled as in Part (2) and (3) of Theorem \ref{affine} and shall not be repeated here. \end{proof} \begin{rema}\label{strictly-less-1} When $\lambda=0$ and the lowest weight of $W=L_{\hat{\mathfrak g}}(l, 0)$ is 1, it is clear that $Z^1(V, W) = 0$ (since the lowest weight vector is vacuum-like whose zero-mode is zero). So $\dim Z^1(V, W) < W_{[1]}/L(-1)W_{[0]}$. \end{rema} \section{First cohomologies of Virasoro VOAs}\label{Section-3} In this section, we will study the first cohomology of the Virasoro VOA corresponding to the minimal models. 
\subsection{Virasoro VOA and modules} The Virasoro VOA is first constructed in \cite{Frenkel-Zhu}. Here we give a brief review following \cite{LL}. Let $Vir$ be the Virasoro algebra, i.e., $Vir = \bigoplus_{n\in {\mathbb Z}} {\mathbb C} L_n \oplus {\mathbb C} \textbf{c}$, with $$[c, Vir] = 0; [L_m, L_n] = (m-n)L_{m+n}+ \delta_{m+n, 0} \frac{m^3 - m}{12}\textbf{c}.$$ For $c, h\in {\mathbb C}$, let ${\mathbb C} \mathbf{1}_{c, h}$ be a one-dimensional vector space, on which $L_m$ acts trivially for every $m>0$, $L_0$ acts by the scalar $h$, and $\textbf{c}$ acts by the scalar $c$. Let $M(c, h)$ be the induced $Vir$-module from $(\bigoplus_{n\geq 0} L_n \oplus {\mathbb C} \textbf{c})$-module. It follows from Section 5.5 and 6.1 of \cite{LL} that \begin{enumerate} \item $M(c, 0)$ has a quotient $V_{Vir}(c, 0)$ that forms a VOA, on which $L(-1)\mathbf{1} = 0$. $V_{Vir}(c, 0)$ has unique simple quotient $L(c, 0)$. \item $M(c, h)$ has a quotient $L(c, h)$ that forms an irreducible $V_{Vir}(c, 0)$-module. \end{enumerate} It follows from \cite{Wang} and \cite{DLM-Regular} that \begin{enumerate} \item If \begin{align} c= c_{p,q}= 1 - \frac{6(p-q)^2}{pq}\label{Formula-5} \end{align} for some $p, q\in {\mathbb Z}_+$ mutually prime, then there are only finitely many irreducible $L(c, 0)$-modules. \item Every irreducible $L(c, 0)$-module is isomorphic to $L(c, h)$ where \begin{align} h = h_{m,n} = \frac{(np-mq)^2-(p-q)^2}{4pq} \label{Formula-6} \end{align} for some integers $m,n\in {\mathbb Z}_+$ such that $m<p$ and $n<q$. \item Every weak $L(c, 0)$-module is a direct sum of irreducible $L(c, 0)$-modules. \end{enumerate} \begin{rema} By a positive energy representation, we mean (direct sums of) the modules $L(c, h)$ for $h\geq 0$. By a negative energy representation, we mean (direct sums of) the modules $L(c, h)$ where $h<0$. For many choices of $p, q$. $L(c_{p,q},0)$ only admits positive energy representations. But exceptions do exist. \end{rema} \begin{rema} Let $V = L(c, 0)$ for some $c=c_{p,q}$ as in (\ref{Formula-5}), $W= L(c, h)$ for some $h= h_{m,n}$ as in (\ref{Formula-6}). Since $A_0(V)$ is semisimple, every derivation $F: V\to W$ satisfies $$F(\omega) \equiv (L(0)+L(-1))\omega = \alpha \omega \mod O_0(W). $$ where $\d_W = L(0)+\alpha$. In case $W$ is graded by $L(0)$, we have $$F(\omega)\in O_0(W).$$ However, this fact is not useful when $W \neq V$, simply because $A_0(W)\neq 0$ only in some very limited cases. It can provide some but very minor simplifications in the computations of $H^1(V, V)$, as will be seen in the discussions below. \end{rema} \subsection{Positive energy representations} \begin{thm}\label{Vir-pos-energy} Let $V = L(c, 0)$ for some $c=c_{p,q}$ as in (\ref{Formula-5}), $W= L(c, h)$ for some $h= h_{m,n}$ as in (\ref{Formula-6}). Then $H^1(V, W) = 0$ for $h\geq 0$. \end{thm} \begin{proof} If $h\notin {\mathbb Z}$, then $F(\omega)=0$. We show that $h=1$ is impossible when $c=c_{pq}$. In fact, if $h=1$, then \begin{align*} (np-mq)^2 - (p-q)^2 = 4pq & \Rightarrow (np-mq - p-q)(np-mq+p+q)=0\\ & \Rightarrow (n-1)p = (m+1)q \text{ or } (n+1)p = (m-1)q \end{align*} where $m,n,p,q$ are integers satisfying $p,q>0, (p, q) = 1, 0<m<p, 0<n < q$. Since $(p,q)=1$, it is necessary that $p|m+1$ or $p|m-1$, neither of which is possible. Thus the only remaining cases are $h=2$ and $h=0$. \begin{enumerate} \item If $h=2$, then $F(\omega)$ is of lowest weight 2. 
It follows from $F(L(-1)\omega) = L(-1)F(\omega)$ that \begin{align*} F(\omega)_0 \omega =0 & \Rightarrow -\omega_0 F(\omega) + L(-1)\omega_1 F(\omega)=0 \Rightarrow -L(-1)F(\omega) + 2L(-1)F(\omega) = 0\\ & \Rightarrow L(-1)F(\omega) = 0 \end{align*} Since $W$ is irreducible and not isomorphic to $V$, we know that from \cite{Li-vacuum-like} that $W$ contains no vacuum-like vectors. Therefore $\ker L(-1) = 0$ and $F(\omega) = 0$. \item If $h=0$, then $W = V$. The only nonzero weight 2 element in $V$ is simply scalars of $\omega$. So $F(\omega) = a\omega$ for some $a\in {\mathbb C}$. If $a\neq 0$, then $\omega\in O(V)$. The image of $\omega$ in $A(V)$ would then be zero. This contradicts the result that $A(V) = {\mathbb C}[x]/(G_{p,q}(x))$ in [W] where $x$ is the image of $\omega$ in $A(V)$. Thus $a=0$ and $F(\omega) = 0$. \end{enumerate} \end{proof} \begin{rema} Here is a straightforward way to show the $h=0$ case: $F(\omega) = a\omega$ has to satisfy $$F(\omega_1\omega) = \omega_1F(\omega) + F(\omega)_1\omega \Rightarrow a L(0) \omega = L(0) a \omega + a L(0)\omega \Rightarrow a L(0) \omega = 0 \Rightarrow a = 0$$ \end{rema} \begin{rema}\label{strictly-less-2} If $c\neq c_{pq}$ for any choices of $p,q$, then it is possible that $h=1$. Two interesting phenomena happen in this case. First, the zero-mode derivation given by the lowest weight vector is trivial, as we have $$F(\omega) = w_0 \omega = -\omega_0 w + L(-1)\omega_1 w = -L(-1)w + L(-1)w = 0.$$ So in this case, $\dim Z^1(V, W) = 0 < W_{[1]} / L(-1)W_{[0]}$ (where $W_{[0]}=0$). Secondly, there exists a nontrivial derivation taking image in the (universal) $M(c,h)$, as will be shown in the following proposition. This provides an example where $H^1(V, W)\neq Z^1(V, W)$. \end{rema} \begin{prop}\label{L(-1)w-der} Let $c\in {\mathbb C}$ be a number that is not of the form $c_{pq}$ as in \ref{Formula-5}. Let $V= L(c, 0), W=M(c,1)$ and $w\in W$ be a nonzero lowest weight vector. Then the map $$F(\omega) = L(-1)w$$ extends to a well-defined derivation in $H^1(V, W)$. \end{prop} \begin{proof} Under our assumption on $c$, $L(c,0) = V_{Vir}(c, 0)$ (see \cite{LL} Section 6.1). By PBW theorem, the vectors $$\omega_{-n_1}\cdots \omega_{-n_s}\mathbf{1} = L(-n_1-1)\cdots L(-n_s-1)\mathbf{1}, s\in {\mathbb N}, n_1 \geq \cdots \geq n_s \geq 1$$ for a basis of $L(c,0)$. We extend the map $F$ to $V$ recursively by \begin{align*} F(\mathbf{1}) = 0, F(L(-n)\mathbf{1}) &= \frac 1 {(n-2)!}L(-1)^{n-1}w, n\geq 2\\ F(L(-n)v) &= F(\omega)_{-n+1} v + \omega_{-n+1} F(v), v = L(-n_1)\cdots L(-n_s)\mathbf{1}, n \geq n_1. \end{align*} Then $F$ is grading-preserving with $F(\omega) = F(L(-2)\mathbf{1}) = L(-1)w$. Moreover, $F(Y(\omega, x)\mathbf{1}) = e^{xL(-1)}F(\omega)$. To show that $F$ is a derivation, we first show that $$F(Y(\omega,x)v) = Y_W(\omega,x)F(v) + Y_{WV}^W(F(\omega), x)v$$ when $v=\omega_{-n_1}\cdots \omega_{-n_s}\mathbf{1}$ for some $n_1 \geq \cdots \geq n_s \geq 1$. We apply induction on $s$. For the base case $s=1$, it is clear that \begin{align*} F(\omega_n\omega) &= \omega_n F(\omega) + F(\omega)_n \omega \end{align*} for $n\geq 3$ and $n\leq 0$. 
When $n=2$ or $n=1$, note that $F(\omega_2\omega)=0$, $F(\omega_1\omega)=2F(\omega)$, while \begin{align*} \omega_2 F(\omega) + F(\omega)_2 \omega &= \omega_2F(\omega)-\omega_2F(\omega) = 0\\ \omega_1 F(\omega) + F(\omega)_1 \omega &= \omega_1 F(\omega) + \omega_1 F(\omega) - L(-1) \omega_2 F(\omega) = 2L(0)L(-1)w - L(-1)L(1)L(-1)w\\ &= 2L(-1)w \end{align*} To perform the inductive step, it suffices to show that $$F(\omega_m v) = \omega_m F(v) + F(\omega)_m v$$ for $m\geq 0, v=\omega_{-n_1}\cdots \omega_{-n_s}\mathbf{1}$, $n_1\geq \cdots \geq n_s \geq 1$. Let $v^{(1)} = \omega_{-n_2}\cdots\omega_{-n_s}\mathbf{1}$. Then \begin{align} F(\omega_m v) &= \text{Res}_{x_1} x_1^m\text{Res}_{x_2} x_2^{-n_1} F(Y(\omega, x_1)Y(\omega, x_2)v^{(1)})\cdot \text{Res}_{x_0} x_0^{-1} \delta\left(\frac{x_1-x_2}{x_0}\right) \nonumber\\ &= \text{Res}_{x_1} x_1^m\text{Res}_{x_2} x_2^{-n_1} F(Y(\omega, x_2)Y(\omega, x_1)v^{(1)})\cdot \text{Res}_{x_0} x_0^{-1} \delta\left(\frac{-x_2+x_1}{x_0}\right)\nonumber\\ &\quad + \text{Res}_{x_0} \text{Res}_{x_1} x_1^m\text{Res}_{x_2} x_2^{-n_1} F(Y(Y(\omega, x_0)\omega, x_2)v^{(1)})\cdot x_1^{-1} \delta\left(\frac{x_2+x_0}{x_1}\right)\nonumber\\ &= F(\omega_{-n_1}\omega_m v^{(1)}) + \text{Res}_{x_0}\text{Res}_{x_2} (x_2+x_0)^m x_2^{-1} F(Y(Y(\omega,x_0)\omega,x_2)v^{(1)}). \label{Compatibility-0} \end{align} For the first term in (\ref{Compatibility-0}), since $m>0$, $\omega_m v^{(1)}$ is a linear combination of $\omega_{-p_1}\cdots \omega_{-p_r}\mathbf{1}$ for some $p_1\geq \cdots p_r \geq 1$ with $r\leq s-1$. Thus from the induction hypothesis, the first term \begin{align} & \quad F(\omega_{-n_1}\omega_m v^{(1)}) = F(\omega)_{-n_1} \omega_m v^{(1)} + \omega_{-n_1} F(\omega)_m v^{(1)} + \omega_{-n_1}\omega_m F(v^{(1)}) \nonumber\\ & = \text{Res}_{x_2} x_2^{-n_1}\text{Res}_{x_1}x_1^m \text{Res}_{x_0} x_0^{-1}\delta\left(\frac{-x_2+x_1}{x_0}\right)\nonumber\\ & \quad \cdot \left(Y_{WV}^W(F(\omega), x_2)Y(\omega, x_1)v^{(1)} + Y_W(\omega, x_2) Y_{WV}^W(F(\omega), x_1)v^{(1)} + Y_W(\omega, x_2) Y_W(\omega,x_1)F(v^{(1)})\right)\label{Compatibility-1} \end{align} For the second term in (\ref{Compatibility-0}), we rewrite it as $$\text{Res}_{x_2} \sum_{i=0}^m \binom m i x_2^{-n_1+m-i} F(Y(\omega_i \omega, x_2)v^{(1)})$$ where $\omega_i\omega$ has only three nonzero options: $\omega_0\omega = L(-1)\omega, \omega_1\omega = 2\omega, \omega_3\omega = c \mathbf{1} / 2$. Thus, the second term can be rewritten as \begin{align} \text{Res}_{x_2} \sum_{i=0}^m \binom m i x_2^{-n_1+m-i} \left(Y_W(\omega_i\omega, x_2)F(v^{(1)}) + Y_{WV}^W (F(\omega)_i\omega, x_2) v^{(1)}+Y_{WV}^W (\omega_iF(\omega), x_2) v^{(1)}\right) \label{Compatibility-2} \end{align} using the induction hypothesis. 
Here the process for the $i = 0$ is slightly more complicated, using $L(-1)$-derivative property, integration by parts formula $\text{Res}_x f(x)g'(x) = -\text{Res}_x f'(x)g(x)$, and the property $$F(\omega)_0 \omega = (L(-1)w)_0 \omega = 0$$ Details are shown here: \begin{align*} & \quad \text{Res}_{x_2} x_2^m F(Y(\omega_0\omega, x_2)v^{(1)})= \text{Res}_{x_2} x_2^m F(Y(L(-1)\omega, x_2)v^{(1)})\\ &= \text{Res}_{x_2} x_2^m\frac{\partial}{\partial x_2} F(Y(\omega, x_2)v^{(1)})= -\text{Res}_{x_2} \left(\frac{\partial}{\partial x_2} x_2^m\right) F(Y(\omega, x_2)v^{(1)}) \\ &= -\text{Res}_{x_2} \left(\frac{\partial}{\partial x_2} x_2^m\right) \left(Y_{WV}^W(F(\omega), x_2) v^{(1)} + Y_W(\omega, x_2)F(v^{(1)})\right)\\ &= \text{Res}_{x_2} x_2^m \left(\frac{\partial}{\partial x_2}Y_{WV}^W(F(\omega), x_2) v^{(1)} +\frac{\partial}{\partial x_2} Y_W(\omega, x_2)F(v^{(1)})\right)\\ &= \text{Res}_{x_2} x_2^m \left(Y_{WV}^W(L(-1)F(\omega), x_2) v^{(1)} + 0 + Y_W(L(-1)\omega, x_2)F(v^{(1)})\right)\\ &=\text{Res}_{x_2} x_2^m \left(Y_{WV}^W(\omega_0 F(\omega), x_2) v^{(1)} + Y_{WV}^W(F(\omega)_0\omega, x_2)v^{(1)} + Y_W(\omega_0 \omega, x_2)F(v^{(1)})\right) \end{align*} Continuing from (\ref{Compatibility-2}) and rewrite it back as \begin{align*} & \text{Res}_{x_0}\text{Res}_{x_2}x_2^{-n_1}(x_2+x_0)^m \cdot \text{Res}_{x_1} x_1^{-1}\delta\left(\frac{x_2+x_0}{x_1}\right)\\ & \cdot \left(Y_W(Y(\omega,x_0) x_2)F(v^{(1)}) + Y_{WV}^W (Y_{WV}^W(F(\omega), x_0)\omega, x_2) v^{(1)}+Y_{WV}^W (Y_W(\omega,x_0)F(\omega), x_2) v^{(1)}\right). \end{align*} Combined with the result \ref{Compatibility-1} we computed for the first term and use Jacobi identity, we find that \begin{align*} F(\omega_m v) &= \text{Res}_{x_2}x_2^{-n_1}\text{Res}_{x_1}x_1^m \text{Res}_{x_0} x_0^{-1} \delta \left(\frac{x_1-x_2}{x_0}\right)\\ &\quad \cdot \left(Y_{WV}^W(F(\omega), x_1) Y(\omega, x_2) v^{(1)}+ Y_W(\omega, x_1)Y_{WV}^W (F(\omega),x_2) v^{(1)} + Y_W(\omega, x_1) Y_W(\omega, x_2) F(v^{(1)})\right)\\ &= F(\omega)_m \omega_{-n_1} v^{(1)} + \omega_m F(\omega)_{-n_1} v^{(1)} + \omega_m \omega_{-n_1} F(v^{(1)}) \\ &= F(\omega)_m v + \omega_m F(\omega_{-n_1}v^{(1)}) = F(\omega)_m v + \omega_m F(v). \end{align*} This finishes the proof of $$F(Y(u, x)v) = Y_W(u, x) F(v) + Y_{WV}^W (F(u), x) v$$ in case $u = \omega, v = \omega_{-n_1}\cdots \omega_{-n_s}\mathbf{1}$, $n_1 \geq \cdots \geq n_s \geq 1$. The proof for the case when $u=\omega_{-p_1}\cdots \omega_{-p_r}\mathbf{1}$ is similar: the base case $r=1$ is checked with $L(-1)$-derivative property. Write $u^{(1)} = \omega_{-p_2}\cdots \omega_{-p_r}\mathbf{1}$ so that $u = \omega_{-p_1} u^{(1)}$. We then start from the iterate side of the Jacobi identity, write them as products and use the results on products to conclude the proof. We shall not include the details here. \end{proof} \subsection{Negative energy representations} The case when $h<0$ is much more interesting. We first note the following results: \begin{lemma}\label{Image-L(-1)^4} Let $F: V\to W$ be a derivation. Suppose that $F(\omega)\in \text{Im}L(-1)^4$, then $F(\omega)= 0$. So $F(v) = 0$ for any $v\in V$. \end{lemma} \begin{proof} We show that if $F(\omega)\in \text{Im}L(-1)^p$ for some $p\geq 4$, then $F(\omega)\in \text{Im}L(-1)^{p+1}$. Thus if $F(\omega)\in \text{Im}L(-1)^4$, then $F(\omega) \in \text{Im}L(-1)^p$ for every $p \geq 4$. This implies $F(\omega)=0$. 
It can be shown by a straightforward induction that for every $m\in {\mathbb Z}_+$ and every $p\in {\mathbb Z}_+$, $$L(m)L(-1)^p = \sum_{i=0}^p i!\cdot \binom p i \binom{m+1} i L(-1)^{p-i}L(m-i).$$ Now assume that $F(\omega)\in \text{Im}L(-1)^p$, i.e., $$F(\omega) = L(-1)^{p} w_{(-p+2)}.$$ From the assumption that $F$ is a derivation, $$F(\omega_p \omega) = \omega_p F(\omega) + F(\omega)_p\omega.$$ We compute the first term on the right-hand-side using the formula for $L(m)L(-1)^p$: \begin{align*} L(p-1)L(-1)^p w_{(-p+2)} & = \sum_{i=0}^p i! \binom p i \binom {p} i L(-1)^{p-i}L(p-1-i)w_{(-p+2)}\\ & = p! L(-1)w_{(-p+2)} + p! \cdot p \cdot L(-1)\cdot L(0) w_{(-p+2)} \\ & \qquad + \sum_{i=0}^{p-2} i! \binom p i \binom {p} i L(-1)^{p-i}L(p-1-i)w_{(-p+2)}\\ &= p!(1 - p^2 + 2p) L(-1)w_{(-p+2)} \\ & \qquad + L(-1)^2 \sum_{i=0}^{p-2} i! \binom p i \binom {p-2} i L(-1)^{p-i}L(p-1-i)w_{(-p+2)} \end{align*} From the general fact that $$(L(-1)w)_n = -n w_{n-1},$$ we know that \begin{align*} F(\omega)_p \omega & = (L(-1)^p w_{(-p+2)})_p \omega = (-1)^p\cdot p!\cdot (w_{(-p+2)})_0 \omega \\ &= (-1)^p \cdot p! \left((-1)^{0+1}\omega_0 + (-1)^{1+1} L(-1)\omega_1 + L(-1)^2 \sum_{j=2}^\infty \frac{(-1)^{j+1}}{j!}L(-1)^{j-2}\omega_j\right)w_{(-p+2)}\\ &= (-1)^p \cdot p! \left((-p+1) L(-1) + L(-1)^2 \sum_{j=2}^\infty \frac{(-1)^{j+1}}{j!}L(-1)^{j-2}\omega_j\right)w_{(-p+2)} \end{align*} Thus from $F(\omega_p \omega)=0$ (note that $p\geq 4)$, we have \begin{align*} 0 &= \omega_p F(\omega) + F(\omega)_p \omega \\ &= p!(1 - p^2 + 2p) L(-1)w_{(-p+2)} \\ & \qquad + L(-1)^2 \sum_{i=0}^{p-2} i! \binom p i \binom {p-2} i L(-1)^{p-i}L(p-1-i)w_{(-p+2)}\\ & \qquad + (-1)^p \cdot p!(-p+1) L(-1) w_{(-p+2)} + (-1)^p \cdot p! L(-1)^2 \sum_{j=2}^\infty \frac{(-1)^{j+1}}{j!}L(-1)^{j-2}\omega_jw_{(-p+2)} \end{align*} Since $W$ is irreducible and not isomorphic to $V$, we know from \cite{Li-vacuum-like} that $\ker L(-1)= 0$. Therefore, \begin{align} p!(1-p^2+2p + (-1)^p(1-p))w_{(-p+2)} &= L(-1) \sum_{i=0}^{p-2} i! \binom p i \binom {p-2} i L(-1)^{p-i}L(p-1-i)w_{(-p+2)} \nonumber\\ &\qquad + (-1)^p \cdot p! L(-1)^2 \sum_{j=2}^\infty \frac{(-1)^{j+1}}{j!}L(-1)^{j-2}\omega_jw_{(-p+2)}\label{Formula-2} \end{align} One see that the coefficient is nonzero whenever $p\geq 4$. Thus $w_{(-p+2)}\in \text{Im}L(-1)$, which implies that $F(\omega)\in \text{Im}L(-1)^{p+1}$. \end{proof} \begin{rema} Note that the coefficient of $w_{(-p+2)}$ in (\ref{Formula-2}) is zero for $p=2$ and $p=3$. This is why we need $F(\omega)\in L(-1)^4$. \end{rema} \begin{lemma}\label{L(2m)-action} $F(\omega)\in \text{Im}L(-1)$. Moreover, for any $m\in {\mathbb Z}_+$, \begin{align*} & L(2m)F(\omega)\in \text{Im}L(-1), \\ & L(2m)F(\omega)-\frac 1 2 L(-1)L(2m+1)F(\omega)\in \text{Im}L(-1)^2. \end{align*} \end{lemma} \begin{proof} From $F\circ L(-1) = L(-1)\circ F$, we see that \begin{align*} 0 &= F(\omega)_0 \omega = (-1)^{0+1}\omega_0 F(\omega) + L(-1)\omega_1 F(\omega) + L(-1)^2 \sum_{j=2}^\infty \frac{(-1)^{1+j}}{j!}L(-1)^{j-2} \omega_{j}F(\omega)\\ &= L(-1)F(\omega) + L(-1)^2\sum_{j=2}^\infty \frac{(-1)^{1+j}}{j!} L(-1)^{j-2}L(j-1)F(\omega). \end{align*} Then from $\ker L(-1)=0$, we see that $$ F(\omega) = L(-1)\sum_{j=2}^\infty \frac{(-1)^{1+j}}{j!} L(-1)^{j-2}L(j-1)F(\omega). 
$$ In general, for $m\in {\mathbb Z}_+$ \begin{align*} 0 &= \omega_{2m} F(\omega) + F(\omega)_{2m} \omega \\ &= \omega_{2m} F(\omega) + (-1)^{2m+1} \omega_{2m} F(\omega) + L(-1)\omega_{2m+1} F(\omega) + \sum_{j=2}^\infty \frac{(-1)^{j+1+2m+1}}{j!}L(-1)^j \omega_{2m+j}F(\omega)\\ &= L(-1)L(2m)F(\omega) - \frac 1 {2!} L(-1)^2 L(2m+1)F(\omega) + L(-1)^3\sum_{j=3}^\infty \frac{(-1)^{j+3}}{j!}L(-1)^{j-3} \omega_{2m+j}F(\omega) \end{align*} From $\ker L(-1)=0$, we see that $$L(2m)F(\omega) - \frac 1 {2!} L(-1) L(2m+1)F(\omega) + L(-1)^2\sum_{j=3}^\infty \frac{(-1)^{j+3}}{j!}L(-1)^{j-3} \omega_{2m+j}F(\omega)=0$$ The conclusion then follows. \end{proof} \begin{prop}\label{Vir-neg-energy-prop} Let $V=L(c,0)$ and $W=L(c,h)$ with $c=c_{p,q}$ and $h=h_{m,n}$ as in (\ref{Formula-5}) and (\ref{Formula-6}). Let $F: V \to W$ be a derivation. Then for $h=-1, -2$ and $-3$, \begin{enumerate} \item $F(\omega)\in \text{Im}L(-1)^2$. \item There exists $w_{(1)}\in W_{[1]}$ such that $F(\omega)- (w_{(1)})_0\omega \in \text{Im}L(-1)^4$. \end{enumerate} \end{prop} \begin{proof} The following fact will be convenient in computations. Let $m, n, p \in {\mathbb Z}_+$. If $m\neq kn$ for any $k\in{\mathbb Z}_+$, then \begin{align} L(m)L(-n)^p = \sum_{i=0}^p \binom p i \cdot \prod_{j=1}^i (m+(2-j)n)\cdot L(-n)^{p-i}L(m-in). \end{align} If $m=kn$ for some $k\in {\mathbb Z}_+$, then \begin{align} L(kn)L(-n)^p &= \sum_{i=0}^p \binom p i \cdot \prod_{j=1}^i (k+(2-j)n)\cdot L(-n)^{p-i}L(kn-in) \nonumber \\ & \quad+ \binom p k \frac{(k+1)!}{2}n^{k-1}\frac{n^3-n}{12}c L(-n)^{p-k}. \end{align} These formulas can be easily proved by induction. To give a sketch, we first write $F(\omega)$ as a linear combination of vectors \begin{align} L(-1)^{r_1}L(-2)^{r_2}\cdots L(-n)^{r_n}w \label{Formula-3} \end{align} where $w\in W_{[-h]}$ is the lowest weight vector of $W$, and $r_1 + 2r_2 + \cdots + nr_n = h+2$. Note that $F(\omega)\in \text{Im}L(-1)$ implies that $r_1\geq 1$. Then determine the coefficients using Lemma \ref{L(2m)-action}. We should note here that the vectors listed in (\ref{Formula-3}) are linearly independent. Indeed, since $r_1\geq 1$ and $\ker L(-1)=0$, it suffices to show that $M(c,h)$ contains no singular vectors when the conformal weight is less or equal to 1. The main tool is Kac determinant formula, which tells that the Gram matrix formed by vectors in $M(c,h)$ of conformal weight 1 has its determinant proportional to $$\prod_{\substack{r,s\in {\mathbb Z}_+,\\ 1\leq rs \leq -h+1}} (h-h_{r,s})^{P(-h+1-rs)}$$ where $P(n)$ is the number of partitions of $n$ (see \cite{Kac-determinant}, \cite{Feigin-Fuchs} and \cite{IK}). A singular vector exists only when this determinant vanishes, which happens only when $r$ and $s$ are chosen such that \begin{align} h = h_{r,s}, -h+1-rs\geq 0. \label{Formula-4} \end{align} Recall that our $h$ is chosen to be $h_{m,n}$. It follows from an elementary computation that $h=h_{r,s}$ if and only if $r=m,s=n$ or $r=kp-m,s=kq-n$ for some positive integer $k$. Without loss of generality, we can assume that $2m \leq p$ and $2n \leq q$, so that any $(r,s)$ satisfying $h=h_{r,s}$ would also satisfy $r\geq m, s \geq n$, and thus $rs\geq mn$. We now compute $mn+h-1$. \begin{align*} mn+h-1 = mn + \frac{(np-mq)^2-(p-q)^2}{4pq} - 1 = \frac{(np+mq)^2-(p+q)^2}{4pq} \end{align*} Since $m\geq 1$ and $n\geq 1$, $np+mq \geq p+q > 0$. Thus the numerator is nonnegative, and thus $mn \geq 1-h$. This is to say that any $(r,s)$ satisfying $h=h_{r,s}$ would also satisfy $rs \geq mn >-h+1$. 
So there is no $(r,s)$ satisfying (\ref{Formula-4}). Thus the determinant is nonvanishing. A similar argument also shows that the determinant of the Gram matrix is nonvanishing on any homogeneous subspaces of $M(c,h)$ of conformal weight less or equal to 1. So there does not exists any singular vectors can appear in $M(c,h)$ of conformal weight less or equal to 1. This implies that vectors of the form (\ref{Formula-3}) with $r_1, ..., r_n\geq 0$, $r_1 + 2r_2 + \cdots + nr_n \leq 1$ are all linearly independent. Now we proceed with our computation: \begin{enumerate} \item \textbf{When $\boldsymbol{h=-1}$,} we set $$F(\omega) = a_{12}L(-1)L(-2)w + a_{111} L(-1)^3 w.$$ Then \begin{align*} L(2)F(\omega) &= a_{12}\left(5+ \frac 1 2 c\right)L(-1)w + a_{111}\left(-2\cdot 3\cdot 2\right)L(-1)w,\\ L(3)F(\omega) &= a_{12}\left(-4+\frac 1 2 c\right)w + a_{111}\left(-4\cdot 3\cdot 2\right)w, \end{align*} It turns out that $$L(2)F(\omega)-\frac 1 2 L(-1)L(3)F(\omega)= a_{12}\left(7+\frac 1 4 c \right)L(-1)w.$$ From Lemma \ref{L(2m)-action}, we know $L(2)F(\omega)-\frac 1 2 L(-1)L(3)F(\omega)\in \text{Im}L(-1)^2=\{0\}$. So $a_{12}=0$ if $7+\frac 1 4 c \neq 0$. This is guaranteed by our choice of $c$. In fact, $7+\frac 1 4 c = 0$ implies that $$p = \frac{41\pm \sqrt{1537}}{12} q. $$ Obviously no integers $p,q$ can satisfy this. Thus we have $a_{12}=0$. So $$F(\omega) = a_{111}L(-1)^3 w \in \text{Im}L(-1)^2. $$ \noindent\textbf{When $\boldsymbol{h=-2}$,} we set $$F(\omega) = a_{13}L(-1)L(-3)w + a_{112} L(-1)^2L(-2)w + a_{1111} L(-1)^4w. $$ Then \begin{align*} L(2)F(\omega)&= a_{13}\left( 3\cdot 4 L(-2) + 5 L(-1)^2\right)w + a_{112}\left(\left(4+\frac 1 2 c\right)L(-1)^2\right)w \\ &\quad + a_{1111}\left((-8)\cdot 3 \cdot 2 L(-1)^2\right)w \end{align*} From $L(2)F(\omega)\in \text{Im}L(-1)$, $a_{13}=0$. Thus, $$F(\omega) = a_{112}L(-1)^2L(-2)w + a_{1111}L(-1)^4 w \in \text{Im}L(-1)^2.$$ \noindent\textbf{When $\boldsymbol{h=-3}$,} we set \begin{align*} F(\omega) &= a_{14}L(-1)L(-4)w + a_{122} L(-1)L(-2)^2w + a_{113}L(-1)^2L(-3)w \\ &\quad + a_{1112} L(-1)^3L(-2)w + a_{11111} L(-1)^5 w. \end{align*} Then \begin{align*} L(2)F(\omega) &= a_{14}\left(3\cdot 5 L(-3) + 6L(-1)L(-2)\right)w \\ &\quad + a_{122} \left(-9L(-3)+(2+c)L(-1)L(-2)\right)w \\ &\quad + a_{113}\left(5L(-1)^3+2\cdot 3 \cdot 4 L(-1)L(-2)\right)w \\ & \quad + a_{1112} \left(\left(15+\frac 1 2 c\right)L(-1)^3 - 3\cdot 2\cdot 2 L(-1)L(-2)\right)w \\ & \quad + a_{11111} \left(-10\cdot 3\cdot 2\cdot 2 L(-1)^3\right) w, \\ L(3)F(\omega) &= a_{14}\left(4\cdot 6 L(-2) + 7L(-1)^2\right)w \\ &\quad + a_{122} \left(5\cdot 3L(-1)^2+4(-16+c)L(-2)\right)w\\ &\quad + a_{113}\left((22+2c)L(-1)^2\right)w \\ &\quad + a_{1112} \left((-36+6c)L(-1)^2 - 4\cdot 3 \cdot 2 L(-2)\right)w \\ &\quad + a_{11111} \left(-25\cdot 4\cdot 3 \cdot 2 L(-1)^2\right) w. \end{align*} Then from $L(2)F(\omega)\in \text{Im}L(-1)$, we see that $$15 a_{14}-9a_{122} = 0.$$ From $L(2)F(\omega)-\frac 1 2 L(-1)L(3)F(\omega)\in \text{Im}L(-1)^2$, we see that $$-6a_{14}+(34-c)a_{122}=0 $$ (amazingly the variables $a_{113}, a_{1112}$ and $a_{11111}$ all have zero coefficients). The matrix of this system is degenerate only when $c=188/5$. So for our choice of $c$, the matrix is nondegenerate. Therefore, $a_{14} = a_{122} = 0$, and $$F(\omega) = a_{113}L(-1)^2L(-3)w + a_{1112} L(-1)^3L(-2)w + a_{11111} L(-1)^5 w \in \text{Im}L(-1)^2. $$ \item We note first such $w_{(1)}$ has to be outside of Im$L(-1)$, as the zero-mode of elements in Im$L(-1)$ is identically zero. 
\noindent\textbf{When $\boldsymbol{h=-1}$,} the only choice of elements in $W_{[1]}$ is (up to a scalar) $L(-2)w$. It can be computed that $$(L(-2)w)_0 \omega = \left(-\frac {13} 6 + \frac 1 {12} c\right)L(-1)^3 w. $$ The coefficient of $L(-1)^3 w$ is zero only when $c = 26$ that does not meet our choice. So $F(\omega) = a_{111} \left(-\frac {13} 6 + \frac 1 {12} c\right)^{-1} (L(-2)w)_0\omega. $ \noindent\textbf{When $\boldsymbol{h=-2}$,} the only choice of elements in $W_{[1]}$ is (up to a scalar) $L(-3)w$. It can be computed that $$(L(-3)w)_0 \omega = -2\left(L(-1)^2L(-2) + \frac 1 {12} L(-1)^4 \left(-8+\frac 1 2 c\right)\right)w.$$ Looking back at $F( \omega)$, from Lemma \ref{L(2m)-action} we know that $L(4)F(\omega) \in \text{Im} L(-1) = \{0\}$. It can be computed that $$L(4)F(\omega) = a_{112}\left(-8+\frac 1 2 c\right) + a_{1111}(-12)=0 \Rightarrow a_{1111} = \frac 1 {12} \left(-8+\frac 1 2 c\right)a_{112}.$$ So that $$F(\omega) = a_{112} \left(L(-1)^2 L(-2) + \frac 1 {12} \left(-8+\frac 1 2 c\right)L(-1)^4\right)w = -\frac 1 2 a_{112} (L(-3)w)_0 w. $$ \noindent\textbf{When $\boldsymbol{h=-3}$,} the only choice of elements in $W_{[1]}$ is some linear combination of $L(-4)w$ and $L(-2)^2w$. It can be computed that \begin{align*} (L(-4)w)_0\omega &= -\frac 5 2 L(-1)^2 L(-3)w + L(-1)^3 L(-2)w \\ &\quad + \frac 1 {120}(-59+5c)L(-1)^5 w\\ (L(-2)^2w)_0\omega &= \frac 3 2 L(-1)^2 L(-3)w + \frac 1 6 (-50+c) L(-1)^3L(-2)w \\ &\quad + \frac 1 {40} (-49+4c)L(-1)^5w. \end{align*} Recall that in (1) we have already obtained that $$F(\omega) = a_{113}L(-1)^2L(-3)w + a_{1112}L(-1)^3L(-2)w + a_{11111}L(-1)^5 w. $$ Let $x_1, x_2$ be numbers such that \begin{align*} a_{113} &= -\frac 5 2 x_1 + \frac 3 2 x_2\\ a_{1112} &= x_1 + \frac 1 6 (-50+c) x_2. \end{align*} The linear system is degeneate when $$-\frac 5 2 \cdot 1 6 (-50+c) - \frac 3 2 = 0\Rightarrow c > 50$$ Thus our choice of $c$ makes a nondegenerate system. So such $x_1, x_2$ exists uniquely for any fixed $a_{113}$ and $a_{1112}$. Then we know that $$F(\omega) + (-x_1 L(-4)w - x_2 L(-2)^2 w)_0 \omega \in \text{Im}L(-1)^4. $$ \end{enumerate} \end{proof} Consequently, we conclude the following theorem. \begin{thm}\label{Vir-neg-energy-thm} For $V=L(c,0)$, $c=c_{p,q}$ and $W=L(c,h)$ with $h = h_{m,n}\in {\mathbb Z}, h \geq -3$, $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} Let $F: V\to W$ be any derivation. From Proposition \ref{Vir-neg-energy-prop}, we know that for some $w_{(1)}\in W_{[1]}$, $F(\omega)-(w_{(1)})_0\omega\in \text{Im}L(-1)^4$. Consider the map $G: V\to W$ defined by \begin{align*} G(v) = F(v) - (w_{(1)})_0 v. \end{align*} Since both $v\mapsto F(v)$ and $v\mapsto (w_{(1)})_0 v$ is a derivation, we know that $G$ is also a derivation, satisfying $G(\omega)\in \text{Im}L(-1)^4$. It then follows from Lemma \ref{Image-L(-1)^4} that $G = 0$. Thus $F(v)= (w_{(1)})_0 v$ \end{proof} \subsection{The module $L(c,h)$ with ${\mathbb N}$-grading} In this subsection we consider $W=L(c,h)$ with ${\mathbb N}$-grading. One should note that with this grading, the choice of derivations is different. Thus, $H^1(V, W)$ and $Z^1(V, W)$ are different from those with $L(0)$-gradings. \begin{thm}\label{Vir-canonical-N-grading} Let $V = L(c, 0)$ with $c=c_{p,q}$ as in (\ref{Formula-5}). Let $W = L(c, h)$ with $h=h_{m,n}$ as in (\ref{Formula-6}) with the canonical ${\mathbb N}$-grading given by the operator $\d_W = L(0)-h$. Then $H^1(V, W) = 0$. \end{thm} \begin{proof} It suffices to assume that $h\neq 0$. Let $w$ be the lowest weight vector of $W$. 
Assume that $L(-2)w$ and $L(-1)^2 w$ are linearly independent. Set $$F(\omega)=aL(-2)w + bL(-1)^2w. $$ Then \begin{align*} L(1)F(\omega) &= (3a+(4h+2)b)L(-1)w\\ L(2)F(\omega) &= \left(\left(4h+\frac 1 2 c\right)a + 6hb\right)w. \end{align*} From $$F(L(0)\omega) = L(0)F(\omega) + F(\omega)_1 \omega = 2L(0)F(\omega) - L(-1)L(1)F(\omega) + \frac 1 2 L(-1)^2L(2)F(\omega), $$ we have \begin{align*} & 2F(\omega) = 2(h+2)F(\omega)-L(-1)L(1)F(\omega)+\frac 1 2 L(-1)^2L(2)F(\omega)\\ \Rightarrow & 0 = 2(h+1)a L(-2)w + \left(2(h+1)b - 3a - (4h+2)b + \frac 1 2 \left(4h+\frac 1 2c\right)a + 3hb\right)L(-1)^2w \end{align*} If $h\neq -1$, then $a=0$. What remains simplifies to $hb=0$. Since $h\neq 0$, $b=0$. On the other hand, if $h=-1$. In this case, the relation simplifies to $$b=\left(-5+\frac 1 4 c\right)a.$$ On the other hand, from $$F(L(1)\omega) = L(1)\omega + F(\omega)_2\omega = L(-1) L(2)F(\omega)=0, $$ and the fact that $\ker L(-1) = 0$, we see that $$\left(-4+\frac 1 2 c\right)a - 6b = 0$$ Together with the previous relation on $a,b$, we conclude that $(26-c)a = 0$. Since $c\neq 26$, we conclude that $a=0$ and thus $b=0$. It remains to study the case when $L(-2)w$ and $L(-1)^2w$ are linearly independent. Since the coefficient of $L(-1)^2w$ in the relation is nonzero (see \cite{Ash}), it suffices to consider the case when $$F(\omega) = aL(-2)w.$$ We similarly argue that $a=0$ if $h\neq -1$, and in case $h=-1$, we have $(-4+c/2)a=0$. Since $c\neq 8$, we still get $a=0$. \end{proof} \begin{thm} Let $V = L(c, 0)$ with $c=c_{p,q}$ as in (\ref{Formula-5}). Let $W = L(c, h)$ with $h=h_{m,n}$ as in (\ref{Formula-6}) with arbitrary ${\mathbb N}$-grading. Then $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} Let $\Xi$ be the lowest weight of $W$. The $\Xi=0$ case is proved in Theorem \ref{Vir-canonical-N-grading}. If $\Xi = 1$, then $F(\omega)=L(-1)w$ for some $w\in W_{[1]}$, which is zero automatically if $W$ is isomorphic to $V$. In case $W$ is not isomorphic to $V$, we note that $$w_0 \omega = -\omega_0 w + L(-1)\omega_1 w = (h-1)L(-1)w.$$ Since $h\neq 1$, we see that $F$ coincides with the zero-mode derivation defined by $w/(h-1)$. Thus $F$ is a zero-mode derivation. If $\Xi = 2$, then $F(\omega) = w$ for some $w\in W_{[2]}$. But then, $$2w = F(\omega_1\omega) = \omega_1 F(\omega) + F(\omega)_1 \omega = 2 \omega_1 F(\omega)= 2h w.$$ Since $h\neq 1$, we see that $w=0$. Thus $F = 0$. \end{proof} \section{First cohomologies of lattice VOAs}\label{Section-4} In this section, we will study the first cohomologies of the lattice VOA associated with an positive definite even lattice. \subsection{Lattice VOA and modules} The lattice VOA was first constructed in \cite{FLM}. The modules were classified in \cite{Dong-lattice}. We briefly review the construction of lattice VOA and modules following \cite{LL}. Let $L_0$ be an even lattice of rank $r$, i.e., $L_0$ is a free abelian group of rank $r$ equipped with a positive definite symmetric $\mathbb{Z}$-bilinear form satisfying $$\langle \alpha, \alpha\rangle \in 2{\mathbb Z}, \alpha\in L_0.$$ Let ${\mathfrak h} = L_0\otimes_{\mathbb Z} {\mathbb C}$ and $L$ be the dual lattice of $L_0$, i.e., $$L = \{\alpha\in {\mathfrak h}: \langle \alpha, \beta\rangle \in {\mathbb Z}, \forall \beta\in L_0\}. 
$$ Let $\hat{L}$ be a central extension of $L$ by a finite cyclic group $\langle \kappa \rangle$ of order $s$, i.e., $\hat{L}$ can be fitted into the following short exact sequence of groups: $$1 \to \langle \kappa \rangle \to \hat{L} \xrightarrow{-} L \to 1, $$ where the map $\hat{L}\to L$ is denoted by $a\mapsto \bar{a}$. Let $e: L \to \hat{L}, \alpha\mapsto e_\alpha$ be a section of the short exact sequence i.e., $$ e_0 = 1; \overline{e_\alpha} = \alpha, \alpha\in L.$$ Let $\epsilon_0: L \times L \to {\mathbb Z}/s{\mathbb Z}$ be the corresponding 2-cocycle associated to the section $e$, i.e., for $\alpha, \beta\in L$, $$e_\alpha e_\beta = \kappa^{\epsilon_0(\alpha,\beta)}e_{\alpha+\beta}. $$ Fix some primitive $s$-th root of unity $\omega_s\in {\mathbb C}$, let ${\mathbb C}_{\omega_s}$ be the one-dimensional vector space on which $\kappa$ acts by the scalar $\omega_s$. Let $${\mathbb C}\{L\} = {\mathbb C}[\hat{L}]\otimes_{{\mathbb C}[\kappa]}{\mathbb C}_{\omega_s}$$ be the induced $\hat{L}$-module. Set $\iota: \hat{L}\to {\mathbb C}\{L\}$ by $\iota(a) = a\otimes 1$. It is known that $\iota$ is an injection, and $\iota(\kappa b) = \omega_s \iota(b)$ is the only linear relations among $\iota(b), b\in \hat{L}$. Thus ${\mathbb C}[L]$ and ${\mathbb C}\{L\}$ are linearly isomorphic and can be (linearly) identified via $\alpha\mapsto \iota(e_\alpha)$. Define $\epsilon: L\times L \to {\mathbb C}^\times$ by $$\epsilon(\alpha, \beta) = \omega_s^{\epsilon_0(\alpha, \beta)}e_{\alpha+\beta}, \alpha, \beta\in L.$$ Then the action of $\hat{L}$ on ${\mathbb C}\{L\}$ can be described by $$e_\alpha\cdot \iota(e_\beta) = \epsilon(\alpha, \beta)\iota(e_{\alpha+\beta}), \kappa \iota(e_\beta) = \omega_s \iota(e_\beta), \alpha, \beta\in L $$ Let $M(1) = S(\hat{{\mathfrak h}}_-)$ be the Heisenberg VOA associated to ${\mathfrak h}$ with level 1. In more detail, we view ${\mathfrak h}$ as the abelian Lie algebra and consider its affinization $$\hat{h} = {\mathfrak h}\otimes {\mathbb C}[t, t^{-1}] \oplus {\mathbb C} k$$ Let ${\mathbb C} \mathbf{1}_{\mathfrak h}$ be the one-dimensional vector space on which $h(n)$ acts trivially for $n\geq 0$, and $k$ acts by the scalar 1. $M(1)$ is nothing but the induced module $$M(1) = U(\hat{{\mathfrak h}})\otimes_{U({\mathfrak h}\otimes {\mathbb C}[t] \oplus {\mathbb C} k)} {\mathbb C} \mathbf{1}_{\mathfrak h} $$ together with an appropriately defined vertex operator. To construct the lattice VOA, we consider the vector space $$V_L = M(1) \otimes {\mathbb C}\{L\}$$ and the vaccum element $$\mathbf{1} = \mathbf{1}_{\mathfrak h} \otimes 1,$$ together with the following actions \begin{align*} a\in \hat{L}:& v\otimes \iota(e_\alpha) \mapsto v\otimes \iota(a\cdot e_\alpha),\\ k \in \hat{{\mathfrak h}}: & v\otimes \iota(e_\alpha)\mapsto v\otimes \iota(e_\alpha),\\ h\otimes t^0 \in \hat{{\mathfrak h}}: & v\otimes \iota(e_\alpha)\mapsto \langle h, \alpha\rangle v\otimes \iota(e_\alpha), \\ h\otimes t^n \in \hat{{\mathfrak h}}: & v\otimes \iota(e_\alpha)\mapsto h(n)v \otimes \iota(e_\alpha), (n\neq 0), \end{align*} where $h\in {\mathfrak h}, v\in M(1). $ So $$V_L = span\{a^{(1)}(-n_1)\cdots a^{(m)}(-n_m) \iota(e_\alpha): a^{(1)}, ..., a^{(m)}\in {\mathfrak h}, n_1, ..., n_m \in {\mathbb Z}_+ \}$$ For $h\in {\mathfrak h}$ and a formal variable $x$, we also define the action of $$x^h: v\otimes \iota(e_\alpha)\mapsto x^{\langle h, \alpha\rangle} v\otimes \iota(e_\alpha). 
$$ Using these actions, we define the following fundamental vertex operator associated with $\alpha\in L$ ($\subset {\mathfrak h}$): $$Y(\iota(e_\alpha), x) = \exp\left(-\sum_{n<0}\frac{\alpha(n)}{n} x^{-n}\right)\exp\left(-\sum_{n>0}\frac{\alpha(n)}{n}x^{-n}\right)e_\alpha x^{\alpha}.$$ The vertex operator for general elements is given by \begin{align*} & Y(a^{(1)}(-n_1)\cdots a^{(m)}(-n_m) \iota(e_\alpha), x)\\ & \quad = \mbox{\scriptsize ${\circ\atop\circ}$} \frac 1{(n_1-1)!} \frac{d^{n_1-1}}{dx^{n_1-1}}a^{(1)}(x) \cdots \frac 1{(n_m-1)!} \frac{d^{n_m-1}}{dx^{n_m-1}}a^{(m)}(x) Y(\iota(e_\alpha), x)\mbox{\scriptsize ${\circ\atop\circ}$} \end{align*} For any subset $E\subset L$, let $$\hat{E} = \{a\in \hat{L}: \bar a \in E\}, {\mathbb C}\{E\} = span\{\iota(a): \bar a \in E\}$$ and $$V_{E} = M(1) \otimes {\mathbb C}\{E\}\subset V_L.$$ Particularly interesting cases includes $E=L_0$ and $E=\gamma + L_0$ for some $\gamma\in L$, giving correspondingly $V_{L_0}$ and $V_{\gamma+L_0}$. It follows from \cite{LL} Section 6.4, 6.5, and \cite{DLM-Regular} (see also \cite{Dong-lattice}) that \begin{enumerate} \item $V_{L_0}$ forms a VOA generated by $$a(-1)\mathbf{1}, a\in {\mathfrak h}; \iota(e_\alpha), \alpha\in L_0. $$ For $a^{(1)}, ... a^{(r)}\in {\mathfrak h}, \alpha\in L$, $$\text{wt}\left(a^{(1)}(-n_1)\cdots a^{(m)}(-n_m) \iota(e_\alpha)\right) = n_1 + \cdots + n_m + \frac{\langle \alpha, \alpha\rangle}{2}.$$ \item Up to isomorphisms of VOAs, $V_{L_0}$ is independent of the choice of $s\in {\mathbb Z}_+$, of the choice of $\omega_s$, and of the central extension of $L$. We can pick $s=2$, $\omega_s = -1$ and the central extension with a section satisfying $$\epsilon(\alpha, \beta) / \epsilon (\beta, \alpha) = (-1)^{\langle \alpha, \beta\rangle}. $$ \item $V_L$ forms a $V_{L_0}$-module. Any irreducible $V_{L_0}$-module is isomorphic to $V_{\gamma+L_0}\subset V_L$ for some $\gamma\in L$. Let $\gamma_1, ..., \gamma_s\in L$ be a choice of representatives of $L/L_0$. Then the modules $V_{\gamma_1+L_0}, ..., V_{\gamma_s+L_0}$ form the set of equivalent classes of irreducible $V_{L_0}$-modules. \item Every weak $V_{L_0}$-module is a direct sum of irreducible $V_{L_0}$-modules. \end{enumerate} \subsection{The algebra $V_{L_0}$} \begin{lemma} Let $\alpha_1, ..., \alpha_r \in L_0$ be a ${\mathbb Z}$-basis for $L_0$. Then $V_{L_0}$ is generated by $\alpha_i(-1)\mathbf{1}$ and $\iota(e_{\alpha_i})$, $i=1, ..., r$. \end{lemma} \begin{proof} Note that for any $\alpha, \beta\in L_0$, \begin{align*} Y(\iota(e_\alpha), x) \iota(e_\beta) &= \exp\left(-\sum_{n<0}\frac{\alpha(n)}{n}x^{-n}\right)e_\alpha x^{\alpha} \cdot \iota(e_\beta)\\ &= \epsilon(\alpha,\beta)x^{\langle \alpha, \beta\rangle}\exp\left(-\sum_{n<0}\frac{\alpha(n)}{n}x^{-n}\right)\iota(e_{\alpha+\beta}) \end{align*} The coefficient of the lowest power of $x$ is precisely $\epsilon(\alpha, \beta)\iota(e_{\alpha+\beta})$, where $\epsilon(\alpha, \beta) \neq 0$. Thus $\iota(e_{\alpha+\beta})$ is generated by $\iota(e_\alpha)$ and $\iota(e_\beta)$. Therefore, $\iota(e_{\alpha_1}), ..., \iota(e_{\alpha_r})$, together with $h(-1)\mathbf{1}, h\in {\mathfrak h}$, generates $V_{L_0}$. The conclusion follows by noticing that $h(-1)\mathbf{1}$ is a linear combinations of $\alpha_1(-1)\mathbf{1}, ..., \alpha_r(-1)\mathbf{1}$. \end{proof} \begin{thm}\label{lattice-L(0)-thm} Let $V_{L_0}$ be the lattice VOA associated with a positive definite even lattice. Let $F: V_{L_0}\to V_{L_0}$ be a derivation. Then $F$ is a zero-mode derivation. 
Consequently, $H^1(V_{L_0}, V_{L_0}) = Z^1(V_{L_0}, V_{L_0})$. \end{thm} \begin{proof} Fix any $h\in {\mathfrak h}$. Since $F$ commutes with $L(0)$, $F(h(-1)\mathbf{1})$ is of conformal weight 1. Thus we write $$F(h(-1)\mathbf{1}) = \sum_{i=1}^r x_i(h) \alpha_i(-1)\mathbf{1} + \sum_{\alpha\in L_0, \langle \alpha,\alpha\rangle = 2} y_\alpha(h) \iota(e_\alpha). $$ where $x_i, y_\alpha\in {\mathfrak h}^*$ for each $i=1, ..., r$ and $\alpha\in L_0, \langle \alpha,\alpha\rangle = 2$. Since $F$ is a derivation, for any $h, h_1\in {\mathfrak h}$, we have $$F(h_1(0)h(-1)\mathbf{1} ) = h_1(0)F(h(-1)\mathbf{1}) + F(h_1(-1)\mathbf{1})_0 h(-1)\mathbf{1} $$ This is to say $$0 = \sum_{\alpha\in L_0, \langle \alpha,\alpha\rangle = 2} \left(y_\alpha(h) \langle h_1, \alpha\rangle - y_\alpha(h_1) \langle h, \alpha\rangle \right) \iota(e_\alpha)$$ Thus for each index $\alpha$, $$y_\alpha(h)\langle h_1, \alpha\rangle = y_\alpha (h_1)\langle h, \alpha\rangle. $$ Since there are only finitely many $\alpha$ in the sum, there exists $h_1\in {\mathfrak h}$ such that $\langle h_1, \alpha\rangle \neq 0$ for every $\alpha\in L_0, \langle \alpha, \alpha\rangle = 2$. Let $$t_\alpha = \frac{y_\alpha(h_1)}{\langle h_1, \alpha\rangle}. $$ Then for every $h\in h$ and every $\alpha\in L_0, \langle \alpha, \alpha \rangle = 2$, $$y_\alpha(h) = \langle h, \alpha\rangle t_\alpha. $$ Let $F_1: V\to V$ be the linear map defined by $$F_1(v) = F(v) + \sum_{\alpha\in L_0, \langle \alpha,\alpha\rangle = 2} t_\alpha \iota(e_\alpha)_0 v. $$ $F_1$ is also a derivation since for each $\alpha\in L_0, \langle \alpha, \alpha\rangle = 2$, the map $v\mapsto \iota(e_\alpha)_0 v$ is the zero-mode derivation associated to the weight-1 element $\iota(e_\alpha)$. Then \begin{align*} F_1(h(-1)\mathbf{1}) &= F(h(-1)\mathbf{1}) + \sum_{\alpha\in L_0, \langle \alpha,\alpha\rangle = 2} t_\alpha \iota(e_\alpha)_0 h(-1)\mathbf{1} \\ &= \sum_{i=1}^r x_i(h)\alpha_i(-1)\mathbf{1} + \sum_{\alpha\in L_0, \langle\alpha,\alpha\rangle = 2}\left( y_\alpha(h) - t_\alpha \langle h, \alpha\rangle \right)\iota(e_\alpha)\\ &= \sum_{i=1}^r x_i(h)\alpha_i(-1)\mathbf{1}. \end{align*} For convenience, we set $\tilde{h}=\sum_{i=1}^r x_i(h)\alpha_i \in {\mathfrak h}$. So that $F_1(h(-1)\mathbf{1}) = \tilde{h}(-1). $ Since $F_1$ commutes with $L(0)$, for every $i = 1, ..., r$, $F_1(\iota(e_{\alpha_i}))$ is of weight $\langle \alpha_i, \alpha_i\rangle/2$. Thus \begin{align*} & F_1(\iota(e_{\alpha_i})) \\ & = \sum_{\substack{\alpha\in L_0, \\ \langle \alpha,\alpha\rangle = \langle \alpha_i, \alpha_i\rangle}} z_\alpha \iota(e_\alpha) \\ & + \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \beta, \beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}}& & u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_\beta). \end{align*} for some $z_\alpha\in {\mathbb C}$ and $u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta}\in {\mathbb C}$. 
Since $F_1$ is a derivation, for any $h\in {\mathfrak h}$, we have $$F(h(0) \iota(e_{\alpha_i})) = h(0) F(\iota(e_{\alpha_i})) + F(h(-1)\mathbf{1})_0 \iota(e_{\alpha_i})$$ This is to say that \begin{align*} & \sum_{\substack{\alpha\in L_0, \\ \langle \alpha,\alpha\rangle = \langle \alpha_i, \alpha_i\rangle}} \langle h, \alpha_i\rangle \cdot z_\alpha \iota(e_\alpha) \\ & + \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \beta, \beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}}& & \langle h, \alpha_i\rangle \cdot u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_\beta)\\ = & \sum_{\substack{\alpha\in L_0, \\ \langle \alpha,\alpha\rangle = \langle \alpha_i, \alpha_i\rangle}}\langle h, \alpha\rangle\cdot z_\alpha \iota(e_\alpha) \\ & + \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \beta, \beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}}& & \langle h, \beta \rangle\cdot u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_\beta) \\ & + \langle \tilde{h}, \alpha_i\rangle \iota(e_{\alpha_i}) \end{align*} Comparing the coefficient of $\iota(e_{\alpha_i})$ on both sides, we see that $\langle \tilde{h}, \alpha_i\rangle = 0$ for every $h\in {\mathfrak h}$ and every $i = 1,..., r$. This implies that $\tilde{h}=0$ for every $h\in {\mathfrak h}$. Thus $F_1(h(-1)\mathbf{1}) = 0$. And only the first four lines remain. Comparing the coefficients of other terms, we see that \begin{enumerate} \item For every $\alpha\neq \alpha_i, \langle \alpha, \alpha\rangle = \langle \alpha_i, \alpha_i \rangle, $ $$\langle h, \alpha_i\rangle z_\alpha = \langle h, \alpha\rangle z_\alpha$$ holds for every $h\in {\mathfrak h}$. This is only possible when $z_\alpha = 0. $ \item For every $\beta\in L_0, \langle \beta, \beta\rangle < \langle \alpha, \alpha\rangle$ and every possible choice of indices $k_1^{(1)}, ..., k_{s_1}^{(1)},$ $ ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}$, $$\langle h, \alpha_i\rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} = \langle h, \beta\rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta}$$ holds for any $h\in {\mathfrak h}$. This is possible only when $u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} = 0$. \end{enumerate} Thus we managed to show that $$F_1(h(-1)\mathbf{1}) = 0, F_1(\iota(e_{\alpha_i}))=z_{\alpha_i}\iota(e_{\alpha_i}), $$ for $h\in {\mathfrak h}, i = 1, ..., r.$ Let $F_2: V \to V$ be the linear map defined by $$F_2(v) = F_1(v) - \sum_{i=1}^r z_{\alpha_i}(\alpha_i^\vee(-1)\mathbf{1})_0 v$$ $F_2$ is a derivation since for every $i = 1, ..., r$, the map $v\mapsto (\alpha_i(-1)^\vee)_0 v$ is the zero-mode derivation associated to the weight-1 element $\alpha_i^\vee(-1)\mathbf{1}$. 
Then $$F_2(h(-1)\mathbf{1}) = 0, F_2(\iota(e_{\alpha_i})) = 0.$$ So $F_2$ is a derivation sending every generator of $V_{L_0}$ to zero. Thus $F_2 = 0$. In other words, $$0 = F_1(v) - \sum_{i=1}^r z_{\alpha_i}(\alpha_i^\vee(-1)\mathbf{1})_0 v = F(v) + \sum_{\alpha\in L_0, \langle \alpha,\alpha\rangle = 2} x_\alpha\iota(e_\alpha)_0 v - \sum_{i=1}^r z_{\alpha_i}(\alpha_i^\vee(-1)\mathbf{1})_0 v.$$ So $F$ is a zero-mode derivation. \end{proof} \begin{rema} Zhu's algebra $A_0(V_L)$ associated with an even lattice is studied in \cite{DLM-Zhu-lattice}. If the dual lattice $L$ contains a ${\mathbb Z}$-basis consisting of points outside of $L_0$, one can show that $O(V_L)$ contains no homogeneous elements of conformal weight 1. In this case, using the conclusion of Proposition of Corollary \ref{Corollary-1-22}, we can show that $w$ must be a linear combination of weight-1 elements. The computation is much less technical than here. However, if $L_0$ is a unimodular lattice (e.g. $E_8$ or Leech lattice), from the representation theory of $A(V_L)$ discussed in \cite{DLM-Zhu-lattice}, one sees that $A_0(V_L)={\mathbb C}\mathbf{1}$ with $O_0(V_L)$ containing all elements except the vacuum $\mathbf{1}$. The structure of $A_0(V_L)$ is too trivial to give any useful information (cf. Remark \ref{Rmk-1-23}). \end{rema} \subsection{The module $V_{\gamma+L_0}$ with $L(0)$-grading} \begin{thm} Fix an arbitrary $\gamma\in L, \gamma\notin L_0$. Then derivation $F: V_{L_0}\to V_{\gamma+L_0}$ is a zero-mode derivation. Consequently, $H^1(V_{L_0}, V_{\gamma+L_0}) = Z^1(V_{L_0}, V_{\gamma+L_0})$. \end{thm} \begin{proof} The computation is very similar. Instead of giving out every detail, we will only give a sketch. \begin{enumerate} \item For any $h\in {\mathfrak h}$, let $$F(h(-1)\mathbf{1}) = \sum_{\alpha\in L_0, \langle\gamma+\alpha, \gamma+\alpha\rangle = 2} x_\alpha(h) \iota(e_{\gamma+\alpha}). $$ for some $x_\alpha\in {\mathfrak h}^*$. From $F(h_1(0)h(-1)\mathbf{1})=h_1(0)F(h(-1)\mathbf{1}) + F(h_1(-1)\mathbf{1})_0 h(-1)$, we similarly see that for every $\alpha\in L$ with $\langle\gamma+\alpha, \gamma+\alpha\rangle = 2$, $$x_\alpha(h)\langle h_1, \alpha+\gamma\rangle = x_\alpha(h_1)\langle h, \gamma+\alpha\rangle. $$ Picking $t_\alpha=x_{\alpha}(h_1)/\langle h_1, \gamma+\alpha\rangle$ for some $h_1\in {\mathfrak h}$ that makes the denominator nonzero for every $\alpha$ involved, and let $$F_1(v) = F(v) - \sum_{\alpha\in L_0, \langle \gamma+\alpha, \gamma+\alpha\rangle = 2} t_\alpha \iota(e_{\gamma+\alpha})_0 v.$$ Then $F_1$ is also a derivation, and $F_1(h(-1)\mathbf{1}) = 0$ for any $h\in {\mathfrak h}$. \item For any $i = 1, ..., r$, let \begin{align*} F_1(\iota(e_{\alpha_i})) &= \sum_{\beta\in L_0, \langle \gamma+\beta, \gamma+\beta\rangle = \langle \alpha_i, \alpha_i\rangle} y_\beta \iota(e_{\gamma+\beta}) \\ & \quad + \sum_{\substack{\beta\in L_0, h_1, ..., h_n\in {\mathfrak h}, m_1, ..., m_n\in {\mathbb Z}_+,\\ m_1+\cdots + m_n + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}} h_1(-m_1)\cdots h_n(-m_n)\iota(e_{\gamma+\beta}). 
\end{align*} From $F(h(0)\iota(e_{\alpha_i})) = h(0)F(\iota(e_{\alpha_i})$ (note that $F(h(-1)\mathbf{1})= 0$) for any $h\in {\mathfrak h}$, we see that \begin{align*} &\langle h, \alpha_i\rangle \sum_{\beta\in L_0, \langle \gamma+\beta, \gamma+\beta\rangle = \langle \alpha_i, \alpha_i\rangle} y_\beta \iota(e_{\gamma+\beta}) \\ & + \langle h, \alpha_i\rangle \sum_{\substack{\beta\in L_0, h_1, ..., h_n\in {\mathfrak h}, m_1, ..., m_n\in {\mathbb Z}_+,\\ m_1+\cdots + m_n + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}} h_1(-m_1)\cdots h_n(-m_n)\iota(e_{\gamma+\beta})\\ =& \sum_{\beta\in L_0, \langle \gamma+\beta, \gamma+\beta\rangle = \langle \alpha_i, \alpha_i\rangle} y_\beta \langle h, \gamma+\beta\rangle \iota(e_{\gamma+\beta}) \\ & + \sum_{\substack{\beta\in L_0, h_1, ..., h_n\in {\mathfrak h}, m_1, ..., m_n\in {\mathbb Z}_+,\\ m_1+\cdots + m_n + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle = \frac 1 2 \langle \alpha_i, \alpha_i\rangle}}\langle h, \gamma+\beta\rangle h_1(-m_1)\cdots h_n(-m_n)\iota(e_{\gamma+\beta}). \end{align*} \end{enumerate} Note that if $\langle h, \alpha_i\rangle = \langle h, \gamma+\beta\rangle$ for every $h\in h$, then $\alpha_i = \gamma+\beta$, which is impossible since $\gamma\notin L_0$ while $\alpha_i, \beta\in L_0$. Thus to make the equality, it is necessary that $y_\beta=0$ and all the $h_1(-m_1)\cdots h_n(-m_n)\iota(e_{\gamma+\beta}) = 0$. So $F_1(\iota(e_{\alpha_i}))=0$ for every $i = 1, ..., r$. Thus $$F(v) = \sum_{\alpha\in L_0, \langle \gamma+\alpha, \gamma+\alpha\rangle = 2} t_\alpha \iota(e_{\gamma+\alpha})_0 v$$ is a zero-mode derivation. \end{proof} \begin{rema} For the lattice VOA case, the bimodules for Zhu's algebra does not help at all (cf. Remark \ref{Rmk-1-23}). Indeed, it follows from the $$V_{\mu+L_0} \boxtimes V_{\nu+L_0} = V_{\mu+\nu+L_0}$$ among modules associated to $\mu, \nu\in L$ that $A_0(V_{\mu+L_0}) = 0$ for any $\mu\notin L_0$. \end{rema} \subsection{The module $V_{\gamma+L_0}$ with ${\mathbb N}$-grading} In this subsection we consider $W=V_{\gamma+\alpha}$ with shifted grading $\d_W = L(0) - \langle \gamma, \gamma\rangle / 2$. In this case, $H^1(V, W)$ consists of derivations satisfying $$F(L(0)v) = \left(L(0) - \frac{\langle \gamma, \gamma\rangle }{2} \right)F(v). $$ Similarly, with this grading, the choice of derivations is different. Thus, $H^1(V, W)$ and $Z^1(V, W)$ are different from those with $L(0)$-gradings. \begin{thm}\label{lattice-canonical-N-grading} Let $V=V_{L_0}$ with $L_0$ a positive definite even lattice. Let $W = V_{\gamma+L_0}$ for some $\gamma\notin L_0$ with the grading operator $\d_W = L(0) - \langle \gamma, \gamma\rangle / 2$. Then $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} It is clear that $\iota(e_\gamma)$ is a lowest weight vector of $W$ that generates $V_{\gamma+L_0}$. For $h\in {\mathfrak h}$, let \begin{align} F(h(-1)\mathbf{1}) = \sum_{\substack{\alpha\in L_0,\\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \gamma, \gamma\rangle + 2}} x_\alpha(h) \iota(e_{\gamma+\alpha}) + \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \sum_{i=1}^r y_{i, \beta}(h)\alpha_i(-1)\iota(e_{\gamma+\beta}), \label{lattice-N-grading-1} \end{align} where $x_\alpha, y_{i,\beta}\in {\mathfrak h}^*$ for each choice of $\alpha, i, \beta$. 
From $$F(h_1(0)h(-1)\mathbf{1}) = h_1(0)F(h(-1)\mathbf{1}) + F(h_1(-1)\mathbf{1})_0h(-1)\mathbf{1}, $$ we see that \begin{align*} 0 &= \sum_{\substack{\alpha\in L_0,\\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \gamma, \gamma\rangle + 2}} \langle h_1, \gamma+\alpha\rangle x_\alpha(h) \iota(e_{\gamma+\alpha}) + \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \sum_{i=1}^r \langle h_1, \gamma+\beta\rangle y_{i, \beta}(h)\alpha_i(-1)\iota(e_{\gamma+\beta})\\ & \quad - \sum_{\substack{\alpha\in L_0,\\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \gamma, \gamma\rangle + 2}} \langle h, \gamma+\alpha\rangle x_\alpha(h_1) \iota(e_{\gamma+\alpha}) - \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \sum_{i=1}^r \langle h, \gamma+\beta\rangle y_{i, \beta}(h_1)\alpha_i(-1)\iota(e_{\gamma+\beta})\\ & \quad + \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \sum_{i=1}^r \langle h, \alpha_i\rangle y_{i, \beta}(h_1)(\gamma+\beta)(-1)\iota(e_{\gamma+\beta}) \end{align*} Comparing the coefficients of $\iota(e_{\gamma+\alpha})$ for each $\alpha\in L_0$ with $\langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2$, we see that $$\langle h_1, \gamma+\alpha\rangle x_\alpha(h) = \langle h, \gamma+\alpha\rangle x_\alpha(h_1). $$ We similarly set $t_\alpha = x_\alpha(h)/\langle h, \gamma+\alpha\rangle$. To compare the coefficients for $\alpha_i(-1)\iota(e_{\gamma+\beta})$ for $i=1, ..., r, \beta\in L_0, \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle$, we first rewrite $$\gamma+\beta= \sum_{j=1}^r \langle \gamma+\beta, \alpha_i^\vee\rangle \alpha_i, $$ where $\alpha_1^\vee, ..., \alpha_r^\vee\in {\mathfrak h}$ satisfy $\langle \alpha_i^\vee, \alpha_j\rangle = \delta_{ij}, i, j = 1,..., r$. Then rewrite the fifth term as \begin{align*} & \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \sum_{i=1}^r \langle h, \alpha_i\rangle y_{i, \beta}(h_1)\sum_{j=1}^r \langle \gamma+\beta, \alpha_j^\vee \rangle \alpha_j(-1)\iota(e_{\gamma+\beta})\\ & = \sum_{\substack{\beta\in L_0, \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}}\sum_{i=1}^r \left(\sum_{j=1}^r \langle h, \alpha_j\rangle y_{j, \beta}(h_1) \langle \gamma+\beta, \alpha_i^\vee \rangle\right) \alpha_i(-1)\iota(e_{\gamma+\beta}) \end{align*} to read the coefficient for $\alpha_i(-1)\iota(e_{\gamma+\beta})$, which yields $$\langle h_1, \gamma+\beta\rangle y_{i,\beta}(h) = \langle h, \gamma+\beta\rangle y_{i,\beta}(h_1) - \sum_{j=1}^r \langle h, \alpha_j\rangle \langle \gamma+\beta, \alpha_i^\vee\rangle y_{j,\beta}(h_1)$$ Pick $h_1\in {\mathfrak h}$ such that $\langle h_1, \gamma+\beta\rangle \neq 0$ for every index $\beta$ involved in ths sum. Let $u_{j, \beta} = y_{j, \beta}(h_1)/\langle h_1, \gamma+\beta\rangle$ for each $j=1, ..., r, \beta\in L_0, \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle$. Then we have $$y_{i, \beta}(h) = \langle h, \gamma+\beta\rangle u_{i, \beta} - \sum_{j=1}^r \langle h, \alpha_j\rangle \langle \gamma+\beta,\alpha_i^\vee\rangle u_{j, \beta}. 
$$ With the choice of $t_\alpha$ and $u_{j, \beta}$, we now consider $$F_1(v) = F(v) + \sum_{\substack{\alpha\in L_0 \\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2}} t_\alpha \iota(e_{\gamma+\alpha})_0 v + \sum_{\substack{\beta\in L_0 \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}}\sum_{i=1}^r u_{i, \beta}\alpha_i(-1)\iota(e_{\gamma+\beta})_0 v. $$ Since we modified $F$ with zero-mode derivations, it is clear that $F_1$ is also a derivation. The following computation shows that $F_1(h(-1)\mathbf{1}) = 0$ for every $h\in {\mathfrak h}$. Indeed, \begin{align*} &\quad F_1(h(-1)\mathbf{1})\\ &= F(h(-1)\mathbf{1}) - \sum_{\substack{\alpha\in L_0 \\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2}} t_\alpha \langle h, \gamma+\alpha\rangle \iota(e_{\gamma+\alpha}) \\ & \quad - \sum_{\substack{\beta\in L_0 \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}}\sum_{i=1}^r u_{i, \beta}\left(\langle h, \gamma+\beta\rangle \alpha_i(-1)\iota(e_{\gamma+\beta})- \sum_{j=1}^r \langle h, \alpha_i\rangle \langle \gamma+\beta, \alpha_j^\vee\rangle \alpha_j(-1)\iota(e_{\gamma+\beta})\right)\\ &= F(h(-1)\mathbf{1}) - \sum_{\substack{\alpha\in L_0 \\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2}} x_\alpha(h) \iota(e_{\gamma+\alpha}) \\ & \quad - \sum_{\substack{\beta\in L_0 \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}} \left(\sum_{i=1}^r u_{i, \beta}\langle h, \gamma+\beta\rangle \alpha_i(-1)\iota(e_{\gamma+\beta})- \sum_{i=1}^r\sum_{j=1}^r \langle h, \alpha_j\rangle \langle \gamma+\beta, \alpha_i^\vee\rangle \alpha_i(-1)\iota(e_{\gamma+\beta}) \right)\\ &= F(h(-1)\mathbf{1}) - \sum_{\substack{\alpha\in L_0 \\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2}} x_\alpha(h) \iota(e_{\gamma+\alpha}) - \sum_{\substack{\beta\in L_0 \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}}\sum_{i=1}^r y_{i, \beta}(h) \alpha_i\iota(e_{\gamma+\beta}) \\ &=0 \end{align*} Let \begin{align*} & F_1(\iota(e_{\alpha_i})) \\ & = \sum_{\substack{\alpha\in L_0, \\ \langle \gamma+\alpha,\gamma+\alpha\rangle \\ = \langle \alpha_i, \alpha_i\rangle+\langle \gamma, \gamma\rangle }} z_\alpha \iota(e_{\gamma+\alpha}) \\ & + \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle \\ = \frac 1 2 \langle \alpha_i, \alpha_i\rangle + \frac 1 2 \langle \gamma, \gamma\rangle }}& & u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_{\gamma+\beta}). 
\end{align*} Since $F_1(h(-1)\mathbf{1}) = 0$, it is clear that $$F(h(0) \iota(e_{\alpha_i})) = h(0)F(\iota(e_{\alpha_i})), $$ i.e., \begin{align*} & \quad \sum_{\substack{\alpha\in L_0, \\ \langle \gamma+\alpha,\gamma+\alpha\rangle \\ = \langle \alpha_i, \alpha_i\rangle+\langle \gamma, \gamma\rangle }} \langle h, \alpha_i\rangle z_\alpha \iota(e_{\gamma+\alpha}) \\ & + \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle \\ = \frac 1 2 \langle \alpha_i, \alpha_i\rangle + \frac 1 2 \langle \gamma, \gamma\rangle }}& & \langle h, \alpha_i\rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_{\gamma+\beta}). \\ & = \sum_{\substack{\alpha\in L_0, \\ \langle \gamma+\alpha,\gamma+\alpha\rangle \\ = \langle \alpha_i, \alpha_i\rangle+\langle \gamma, \gamma\rangle }} \langle h, \gamma+\alpha\rangle z_\alpha \iota(e_{\gamma+\alpha}) \\ & \sum_{\substack{\beta\in L_0, s_1, ..., s_r \in {\mathbb N},\\ k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}\in {\mathbb N}\\ \sum_{p=1}^r \sum_{j=1}^{s_i} j\cdot k_j^{(p)} + \frac 1 2 \langle \gamma+\beta, \gamma+\beta\rangle \\ = \frac 1 2 \langle \alpha_i, \alpha_i\rangle + \frac 1 2 \langle \gamma, \gamma\rangle }}& & \langle h, \gamma+\beta\rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} \\ & &\cdot & \alpha_1(-1)^{k_1^{(1)}}\cdots \alpha_1(-s_1)^{k_{s_1}^{(1)}}\cdots \alpha_r(-1)^{k_1^{(r)}}\cdots \alpha_r(-s_r)^{k_{s_r}^{(r)}}\iota(e_{\gamma+\beta}). \end{align*} Thus for every choice of indices $\alpha, \beta$ involved in the sum, \begin{align} \langle h, \alpha_i \rangle z_\alpha& = \langle h, \gamma+\alpha\rangle z_\alpha, \label{lattice-N-grading-2}\\ \langle h, \alpha_i \rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta}&= \langle h, \gamma+\beta\rangle u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta}. \label{lattice-N-grading-3} \end{align} holds for arbitrary $h\in {\mathfrak h}$. Since $\gamma\notin L_0$, this is possible only when every $z_\alpha = 0$ and every $u_{k_1^{(1)}, ..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}, \beta} = 0$. Thus $$F_1(\iota(e_{\alpha_i})) = 0, i = 1, ..., r.$$ That is to say, $F_1(v) = 0$. Thus $$F(v) = - \sum_{\substack{\alpha\in L_0 \\ \langle \gamma+\alpha, \gamma+\alpha\rangle = \langle \alpha, \alpha\rangle + 2}} t_\alpha \iota(e_{\gamma+\alpha})_0 v - \sum_{\substack{\beta\in L_0 \\ \langle \gamma+\beta, \gamma+\beta\rangle = \langle \gamma, \gamma\rangle}}\sum_{i=1}^r u_{i, \beta}\alpha_i(-1)\iota(e_{\gamma+\beta})_0 v$$ is a zero-mode derivation. \end{proof} \begin{thm} Let $V=V_{L_0}$ with $L_0$ a positive definite even lattice. Let $W = V_{\gamma+L_0}$ for some $\gamma\notin L_0$ with arbitrary ${\mathbb N}$-grading. Then $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{proof} Let $\Xi$ be the lowest weight of $W$. The case $\Xi=0$ is solved in Theorem \ref{lattice-canonical-N-grading}. For $\Xi\geq 1$, the process is a trivial modification from that in Theorem \ref{lattice-canonical-N-grading}. We shall not repeat the details but just show the modifications. 
\begin{enumerate} \item When $\Xi \geq 2$, $F(h(-1)\mathbf{1})$ is automatically zero. We take $F_1 = F$. When $\alpha=1$, the expression of $F(h(-1)\mathbf{1})$ in (\ref{lattice-N-grading-1}) involves only the first sum. Using the same arguments in Theorem \ref{lattice-L(0)-thm}, we find an element $w_{(1)}\in W_{(1)}$, so that the derivation $F_1(v) = F(v) - (w_{(1)})_0 v$ satisfies $F_1(h(-1)\mathbf{1})= 0, h\in {\mathfrak h}$. \item For every basis element $\alpha_i$ of the lattice $L_0$, the condition $F_1(h(0)\iota(e_{\alpha_i}) = h(0) \iota(e_{\alpha_i})$ implies (\ref{lattice-N-grading-2}) and (\ref{lattice-N-grading-3}) for any $h\in {\mathfrak h}$. If $\gamma\notin L_0$, it follows similar that $F_1(\iota(e_{\alpha_i})) = 0$. If $\gamma\in L_0$, in which case we should take $\gamma = 0$, then (\ref{lattice-N-grading-2}) and (\ref{lattice-N-grading-3}) implies that $\alpha = \alpha_i$ and $\beta= \alpha_i$. But from the grading-preserving condition of $F$, we should have $\langle \alpha, \alpha\rangle + 2\Xi = \langle \alpha_i, \alpha_i\rangle$ and $\langle \beta, \beta\rangle + \sum_{p=1}^r \sum_{j=1}^{s_i}j\cdot k_j^{(p)} + \Xi = \langle \alpha_i, \alpha_i\rangle$, neither of which is possible. So both $z_\alpha$ in (\ref{lattice-N-grading-2}) and $u_{k_1^{(1)},..., k_{s_1}^{(1)}, ..., k_1^{(r)}, ..., k_{s_r}^{(r)}}$ are zero. The same conclusion $F_1(\iota(e_{\alpha_i}))= 0$ holds. \end{enumerate} \end{proof} \section{Summary and Remarks} In Sections \ref{Section-2}, \ref{Section-3} and \ref{Section-4}, we computed the first cohomology $H^1(V, W)$ for the three classes of VOAs $V$ and for any irreducible $V$-module $W$ with arbitrary ${\mathbb N}$-grading. Since every ${\mathbb N}$-graded $V$-module is a direct sum of irreducible submodules, we conclude the following theorem: \begin{thm} Let $V$ be a VOA and $W$ be a $V$-module, then $H^1(V, W) = Z^1(V, W)$ if \begin{enumerate} \item $V = L_{\hat{\mathfrak g}}(l, 0)$ where ${\mathfrak g}$ is a simple Lie algebra, $l\in {\mathbb Z}_+$, $W$ is any ${\mathbb N}$-graded $V$-module; \item $V = L(c, 0)$ where $c = c_{pq}$ as in (\ref{Formula-5}), $W$ is any ${\mathbb N}$-graded $V$-module; \item $V=V_{L_0}$ where $L_0$ is a positive definite even lattice, $W$ is any ${\mathbb N}$-graded $V$-module. \end{enumerate} \end{thm} In case $V$ is the Virasoro VOA $L(c,0)$ with $c\neq c_{pq}$, $c_{pq}$ as in (\ref{Formula-5}), Proposition \ref{L(-1)w-der} shows that there exists an ${\mathbb N}$-graded module $W$ such that $H^1(V, W)\neq Z^1(V, W)$. In case $V$ is the Virasoro VOA $L(c,0)$ with $c=c_{pq}$ as in (\ref{Formula-5}) and $V$ admits negative energy representations, we also proved that $H^1(V, W) = Z^1(V, W)$ for every irreducible $L(0)$-graded $V$-module whose lowest weight is greater or equalt to $-3$. Since every $L(0)$-graded $V$-module is a direct sum of irreducibles, we also managed to prove that \begin{thm} Let $V = L(c, 0)$ be the Virasoro VOA where $c = c_{pq}$ as in (\ref{Formula-5}). Then for every $L(0)$-graded $V$-module $W$ whose lowest weight is greater or equal to $-3$, $H^1(V, W) = Z^1(V, W)$. \end{thm} \begin{rema} In \cite{HQ-Coh-reduct}, we conjectured that $H^1(V, W) = Z^1(V, W)$ for any $V$-bimodule $W$. What has been considered in this paper is certainly insufficient for this conjecture. Yet the proof of \cite{HQ-Coh-reduct} requires $H^1(V, W) = Z^1(V, W)$ only for any $V$-bimodule whose left and right module structure share the same $\d_W$ and $L(-1)$ operators. 
Such $V$-bimodules might be very restrictive, as the most obvious example $W_1 \otimes W_2$ (usual tensor product of two $V$-modules) does not satisfy this condition. Also for some examples of $V$ (affine $sl(2)$, minimal model Virasoro), an irreducible $V$-module $W$ admits a unique $V$-bimodule structure where $Y_W^R = Y_{WV}^W$. This motivates the classification problem of such $V$-bimodules, which shall be attempted in the future. \end{rema} \begin{rema} The arguments so far did not involve any discussions of $C_2$-cofiniteness, which is key to the rigidity of the module categories. The relationship between the cofiniteness conditions and the cohomology theory should be carefully investigated in future work. \end{rema}
1,108,101,563,610
arxiv
\section{Introduction} \label{Sec-intro} The galaxies of the Local Group serve as our astrophysical laboratories for studying the effects that metallicity and other environmental factors have on star formation and massive star evolution. The advent of 4-m class telescopes and single-object spectrographs in the 1970s heralded in an era of such studies in the Magellanic Clouds, where even the modest change in metallicity (a factor of 4 from the SMC to Milky Way) have resulted in some revealing differences in the characteristics of the massive star populations, such as the distribution of spectral subtypes of Wolf-Rayet stars (WRs) and red supergiants (RSGs), as well as a factor of 100 difference in the relative numbers of WRs and RSGs. These differences are believed to be due to the effect that metallicity has on the mass-loss rates of massive stars, and the subsequent large effect on stellar evolution. (For a recent review, see Massey 2003.) The observed ratios can be used to test and ``fine-tune" stellar evolution theory (see Meynet \& Maeder 2005), and so it is important for such measurements to be extended to as low and high metallicities as possible. With the introduction of multi-object spectrographs on larger aperture telescopes (GMOS on Gemini, DEIMOS on Keck), it is now possible to extend such studies to the more distant members of the Local Group, where the galaxies forming stars span a range of 20 in metallicity (WLM to M31, see Table 1 of Massey 2003). Such spectroscopic studies require the knowledge of an appropriate sample of stars to observe. We became aware of the need for a comprehensive survey of the resolved stellar content of nearby galaxies in support of our own research; we realized, however, that an additional strength of this study would be the uses that other researchers could make of such data. We took advantage of the NOAO ``Survey program" to use the Mosaic CCD cameras on the KPNO and CTIO 4-m telescopes to image the Local Group galaxies currently actively forming stars. Our Local Group Galaxies Survey (LGGS) project imaged the nearby spirals M31 in ten fields, and M33 in three fields, as well as the dwarf irregulars NGC 6822, WLM, IC 10, the Phoenix Dwarf, Pegasus, Sextans A, and Sextans B, each in a single field. (The need to complete M31 and M33 precluded the inclusion of IC 1613, which is located at a similar right ascension, but we hope someday to correct the omission.) The survey includes {\it UBVRI} data, as well as images through narrow-band (50-80\AA) filters centered on H$\alpha$, [OIII] $\lambda 5007$, and [S II] $\lambda 6713, 6731$. Our goal was to obtain uniform large-area coverage of the star-forming regions in these galaxies, with broad-band photometry good to 1-2\% for massive stars ($\ge 20M_\odot$). The data would be taken under good, but not always excellent, seeing conditions ($<1.0-1.2$ arcsec). These data could be supplemented by WFPC2/ACS images with {\it HST} or by AO for higher resolution studies of small regions, but our survey would provide uniform coverage of the entire galaxies. The broad-band photometry would be used to characterize the stellar population of massive stars in these galaxies. By itself, it would separate red supergiants (RSGs) from foreground dwarfs, and allow us to identify OB stars for follow-up studies. At intermediate colors, it would at least identify the sample of stars that must be examined spectroscopically to identify F-G supergiants. 
The narrow-band data would be used to distinguish H$\alpha$ emission-line stars from planetary nebulae and supernovae remnants. The observing began in August 2000, and ran through September 2002, with a total of 16 nights of usable data obtained on the CTIO and KPNO 4-m telescopes. Most of this time was spent on M31 and M33, as these spirals occupied the largest amount of area on the sky (see Figs.~\ref{fig:M31} and \ref{fig:M33}). The complete set of images have been available since 2003 via the NOAO Science Archive\footnote{http://archive.noao.edu/nsa/} and Lowell web sites\footnote{http://www.lowell.edu/users/massey/lgsurvey}. Here we present our {\it UBVRI} photometry of stars in M31 and M33. Our survey covers 2.2 square degrees in M31, and 0.8 square degrees in M33. Subsequent papers will describe the results of our emission-line filters, and our broad-band photometry of the dwarfs. Our survey has already been used as part of two PhD projects (Williams 2003, Bonanos et al.\ 2006) as well as other studies (Di Stefano et al.\ 2004, Massey 2006, Humphreys et al.\ 2006). In \S~\ref{Sec-data} we describe our data, and go into some detail into the reduction philosophy and technique, since the same methods have been applied to the complete data set. In \S~\ref{Sec-results} we present the catalogs and compare our photometry to that of others. Although spectroscopic follow-up is crucial for addressing many of our science drivers (the dependence of the IMF and upper-mass cutoffs on metallicity, accurate physical H-R diagrams for comparison with stellar evolutionary models), the photometric data alone can be used to good advantage, and we illustrate this in \S~\ref{Sec-analysis} where we present color-magnitude diagrams (\S~\ref{Sec-CMD}) and illustrate the power of the photometry in identifying blue and red members (\S~\ref{Sec-BR}). We test the findings using a preliminary spectroscopic reconnaissance (\S~\ref{Sec-spectra}). We summarize our results, and describe our plans for future work in \S~\ref{Sec-future}. \section{Observations and Data Reductions} \label{Sec-data} In Table~\ref{tab:journal} we list the field centers and observation dates for all of our M31 and M33 Mosaic frames. The data were taken with the Mosaic CCD camera at the prime focus of the 4-m Mayall telescope. The camera consists of an array of 8 thinned 2048x4096 SITe CCDs with 15$\mu$m pixels. The scale is 0.261" pixel$^{-1}$ at the center, and decreases to 0.245" pixel$^{-1}$ at the corners due to pincushion distortion from the optical corrector (Jacoby et al.\ 1998). To maintain good image quality, an ADC is used during broad-band exposures. A single exposure subtends an area of the sky roughly 35' by 35'; however, there are gaps between the chips (3 gaps of 12" each in the NS direction; 1 gap of 8" EW), and so the usual observing procedure is to obtain a set of 5 dithered exposures with the telescope offset slightly (25-50") between each exposure. The area covered by a dither sequence is about 36' by 36'. The basic reductions were performed with the ``mscred" package in IRAF\footnote{IRAF is distributed by NOAO, which is operated by AURA, Inc., under cooperative agreement with the NSF.}. The reductions are somewhat more complicated than that of a normal (single) CCD; complete details can be found at the LGGS web site. 
Complications include the fact that there is appreciable cross-talk between pairs of chips that share the same electronics, causing an electronic ``reflection'' of a small fraction ($\leq$0.3\%) of the signal of one chip to be added to that of the other. This was most easily seen in the reflections of heavily saturated stars, but if left uncorrected it would have affected all of the data on half of the chips. In addition, the corrector introduced a significant ($\leq$10\%) optical reflection ``ghost pupil'' of the sky affecting the central portions of the field. Finally, the change of scale resulted in the need to rectify the images using stars with good astrometric positions within the field. For each run we began by carefully determining the cross-talk terms, using the nominal values and revising these until we obtained good subtraction of saturated stars, as judged by eye. Next, we constructed a bad-pixel mask by dividing a long dome-flat exposure by a short one. This mask would be used to flag non-linear pixels, which would then not be used in the photometry. For each night we obtained bias frames, dome flats, and sky flats. Given the read-out time (130s), it was not practical to obtain twilight flats in each filter each night, but a good set was obtained on each run. Each set of biases and flats was combined after cross-talk correction and over-scan removal. A combination of dome flats and sky flats was needed to construct an image of the ``ghost pupil'' for each filter; this ghost was subtracted from the sky flats. After these preliminaries were done, we proceeded as follows: (1) Cross-talk was removed using our revised coefficients. (2) The overscan was removed line-by-line for each chip, and each chip was ``trimmed'' to remove the overscan region. (3) A revised bad-pixel mask was constructed by combining the run-specific image with automatically determined saturated values and any bleed trails. (4) The two-dimensional bias structure was removed by subtracting the average bias frame for the run. (5) The data were flat-fielded using the (cleaned) average sky flats. (6) The filter-specific ghost image was fit to each image, and interactively examined to determine the optimal scaling factor. This correction was most significant for the {\it U} and {\it I} exposures. (7) An astrometric solution for each frame was performed using stars from the USNO-B1.0 catalog (Monet et al.\ 2003). The higher-order astrometric distortion terms were left at their default values, but individual scales were determined for each axis, as well as rotation. This solution was then used to resample the data (using a time-consuming but robust sinc interpolation algorithm), de-projecting the image to a constant scale (0.27" pixel$^{-1}$) with conventional astronomical orientation (N up, E left) with a single tangent point for each galaxy. This allows for simple registration of adjacent fields, if desired. For many users of the Mosaic camera, the ultimate goal of the basic reduction process is the construction of a single ``stacked'' image from the processed individual exposures; this image is cosmetically clean, and can then be used for subsequent analysis. We realized at the beginning of the project, however, that this would not be adequate if we were to achieve our goal of 1-2\% photometry through the broad-band filters. A simple division of a {\it B} and {\it V} dome flat suggests that there are highly significant differences in the color-terms between the various chips.
The stacked image would contain star images that had been combined from as many as 4 different chips as a result of dithering. Therefore, we made the decision to treat each CCD separately in the photometric analysis. This did require 40x more work (8 CCDs $\times$ 5 ditherings) in general, but our sense was that in the end we would have significantly better photometry. The stacked images do suffice for the analysis of our narrow-band (H$\alpha$, [OIII], [SII]) data, which we will discuss in future papers. This decision freed us from the wasteful task of observing broad-band standard stars with the 4-m. Since the read-out time of the array is 130s, observing a single standard star offset to each of 8 chips through 5 filters would require nearly 1.5 hours simply in read-out time. One could not hope to observe sufficient standards during a night for a precise photometric solution. In addition, the use of external calibration allowed us to make use of nights on the 4-m which were mostly clear, but not completely photometric. For the photometric calibration, we used the Hall 1.1-m telescope on Lowell Observatory's dark sky site on Anderson Mesa. Data were obtained on 26 nights from 2000 November through 2003 February. The detector was a $2048\times2048$ SITe CCD with 24$\mu$m pixels. The chip was binned $2\times2$, with an image scale of 1.14" pixel$^{-1}$, and a field of view of 19.4'$\times$19.4'. The seeing was typically 2-3". For each M31 and M33 field, we obtained two exposures in each filter, with the telescope offset by 500" north and 500" south of the Mosaic field centers (Table~\ref{tab:journal}). This assured us that there would be overlap between the photometric frames and the area included on each of the Mosaic CCD frames. Exposure times were 1200 s in {\it U}, 120 s in each of {\it B, V,} and {\it R}, and 300 s in {\it I}, chosen so there would be good overlap between stars with adequate counts on the calibrating frames and the brightest unsaturated stars on the Mosaic frames. The allocation of observing time on the small telescope was sufficiently generous that we could use only the best photometric nights. Typically each calibration field was bracketed by observations of a dozen or so of the best-calibrated (i.e., observed multiple times with errors less than 0.01~mag) Landolt (1992) standards. This allowed us to determine extinction terms accurately; after most of the standard data were reduced, we fixed the values for the color-terms, and found optimal zero-points and extinction values for each night. The average residuals for the standard solutions were 1-2\% for all filters. As usual, we found that the {\it U} solutions required two different slopes: one for $U-B>0$, and one for $U-B<0$. (See Massey 2002 for discussions of difficulties with calibrating $U$-band photometry via CCDs and the standard UG2/UG1 + CuSO$_4$ {\it U}-like filters.) For the photometry, we developed scripts\footnote{Our full set of software, including IRAF scripts and FORTRAN code, can be downloaded from http://www.lowell.edu/users/massey/lgsurvey/splog2.html. This software is offered ``as is'', with no implied warranty.} that separated each of the Mosaic dithered exposures into the 8 individual chips, characterized the exposure (median sky value, full-width-half-maximum [FWHM]), and updated the headers (read-noise, saturation value, gain). Our scripts relied upon the basic IRAF/DAOPHOT routines (Stetson et al.\ 1990), but performed the tasks automatically in order to deal with the huge data volumes.
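As an aside, the external calibration described above amounts to solving, chip by chip and night by night, transformation equations of the standard form
\begin{displaymath}
v_{\rm inst}=V+z_{V}+k_{V}X+c_{V}(B-V),
\end{displaymath}
where $v_{\rm inst}$ is the instrumental magnitude, $z_{V}$ the zero-point, $k_{V}$ the extinction coefficient, $X$ the airmass, and $c_{V}$ the color term, with analogous equations for the other filters. We give this only as a schematic reminder of the method: the color terms actually adopted are those of Table~\ref{tab:colorterms}, and the zero-points and extinction values were fit night by night as described above.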
Stars 4$\sigma$ above the local background were found with the appropriate FWHM and image shapes, and aperture photometry was performed with a small (3.5 pixel radius) aperture. This was done independently for each filter. Suitable PSF stars were automatically identified, and simultaneous PSF-fitting was performed over the frame using the ``allstar'' algorithm. Additional stars were added based upon residuals from subtracting the fitted PSFs from the original frame, and the simultaneous PSF-fitting was repeated. Similar routines were run on the Lowell 1.1-m data, and isolated stars were matched between the data sets to determine photometric zero-points and color terms. When all of the data were reduced, we then examined the results and fixed the color terms to the values given in Table~\ref{tab:colorterms}. We then reran our calibration program to determine the best zero-points\footnote{Note that since the color-terms are not the same for each chip, the photometric zero-points will not be the same for each chip either, even though the chips were exposed as a single image. The reason is that the flat-field lamps do not have zero color. The error introduced by ignoring this effect would be about 1-2\%. Our calibration procedure explicitly found individual zero-points for each chip on each image once the color terms were determined.}. An examination of the variations of the color terms between chips reveals that our concerns were well-founded. Had we treated the chips as identical, we would have introduced a difference of 0.06~mag in $U$ for a lightly-reddened O star ($U-B\sim -1$) dithered between chips 4 and 5. Similarly, a red supergiant ($B-V\sim 2$) would have derived $B$ values that differed by 0.10 mag, depending upon whether the star was observed on chip 3 or chip 5. (The derived $B-V$ colors would have been less affected; i.e., a difference of about 0.05~mag.) For projects requiring 5-10\% photometry, or narrow-band filters (where color-terms are negligible), the use of stacked images would suffice, but to be able to achieve 1-2\% broad-band photometry (and not be limited by calibration issues) requires some extra work. We averaged the calibrated photometry for each field, and then compared the differences in adjacent fields, restricting the sample to only well-exposed stars (statistical uncertainty $<$1\%). The results are shown in Table~\ref{tab:difs}. Often the median differences were only several millimag. Note that this is a critical test of our photometric accuracy, since each field was calibrated independently using external data. We were pleased to find that we were generally able to reduce any calibration issues to $<1$\%, even at $U$. Before releasing our final catalogs, we took one additional step, that of removing false detections along diffraction spikes, a problem that has plagued other surveys as well (see, for example, Magnier et al.\ 1992). Stars brighter than (roughly) 13th~mag had noticeable diffraction spikes, oriented at 45$^\circ$ to the cardinal directions. For the brightest foreground stars (7th mag), these diffraction spikes extended $\sim 200$ pixels from the star. We found that around each bright star there were a handful of false detections in our preliminary catalog. For each source near the coordinates of a bright star, we computed the ellipticity and position angle of the object using the stacked $V$ image for convenience. If the ellipticity was high, and the position angle was aligned towards the bright star, we eliminated the source from the catalog.
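A minimal sketch of this diffraction-spike filter follows (in Python; the function, attribute names, and threshold values are illustrative stand-ins, not our production code, which is available from the software link given above):
\begin{verbatim}
import numpy as np

def is_spike_artifact(src, bright, max_sep=200.0,
                      min_ell=0.6, pa_tol=10.0):
    # src and bright carry x, y pixel coordinates; src also carries
    # an ellipticity (ell) and a position angle (pa, in degrees)
    # measured on the stacked V image.  Thresholds are illustrative.
    dx, dy = src.x - bright.x, src.y - bright.y
    if np.hypot(dx, dy) > max_sep:   # too far from the bright star
        return False
    if src.ell < min_ell:            # not elongated enough
        return False
    # position angle of the vector joining the source to the star,
    # reduced to the axial range [0, 180)
    pa_to_star = np.degrees(np.arctan2(dy, dx)) % 180.0
    dpa = abs(src.pa % 180.0 - pa_to_star)
    # an artifact is elongated *along* the direction to the star
    return min(dpa, 180.0 - dpa) < pa_tol
\end{verbatim}
Sources flagged in this way were removed from the preliminary catalogs.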
Checking the results visually, we seem to have eliminated nearly all of such bogus detections, with little cost in terms of real objects. This affected only 0.1\% of the sources in the two catalogs, but removed an annoyance.

\section{Results} \label{Sec-results}

\subsection{The Catalogs} \label{Sec-cats}

The final catalog consists of the averaged photometry for each star; of course, many of these stars were observed multiple times (i.e., on five ditherings and possibly on as many as three overlapping fields). For a star to make it into the catalog, it had to be detected in the $B$, $V$, and $R$ filters; thus there are stars without $U-B$ or $R-I$ measurements, and we denote these null measurements with a magnitude of ``99.99''. The complete M31 and M33 catalogs are available in machine-readable format via the on-line edition; in the printed versions of Tables~\ref{tab:M31} and \ref{tab:M33} we present the first ten entries of each. The M31 catalog contains a total of 371,781 stars, and the M33 catalog contains a total of 146,622 stars. The stars have been assigned designations based on their celestial (J2000) positions; i.e., LGGS J004341.84+411112.0 refers to the star with coordinates $\alpha_{\rm 2000}=00^{\rm h}43^{\rm m}41.\!^{\rm s}84$, $\delta_{\rm 2000}=+41^\circ11' 12.\!"0$, following IAU conventions. (This particular star is a very close analog of P~Cygni, and is discussed both in Massey 2006 and below in \S~\ref{Sec-spectra}.) How deep, and complete, does our survey go? In terms of the photometric precision, we show the {\it median} errors as a function of magnitude for the combined M31 and M33 catalogs in Table~\ref{tab:errors}. We see that the errors are $<$10\% through about 23rd magnitude.\footnote{A few of the very brightest stars have slightly larger errors than some fainter stars. This is due to the fact that the formal errors contain not only the photon and read-noise, but are also scaled by the reduced chi-squared value of fitting the PSF to a particular star. Since in general the PSF will be based upon the average of stars slightly fainter than the brightest star on a frame, the errors of the brighter stars may be higher than expected just based upon Poisson statistics.} Our stated goal was to achieve 1-2\% photometry for massive ($\geq 20 M_\odot$) stars. Did we achieve this? Let us briefly consider the evolution of a $20 M_\odot$ star; for details see Massey (1998a). On the zero-age main sequence the star would be identified as an O9.5~V star and have $M_V=-3.5$. The intrinsic colors of such a star will be $(U-B)_0\sim -1.1$, $(B-V)_0\sim -0.3$, $(V-R)_0 \sim -0.1$, and $(R-I)_0\sim-0.2$ (Bessell et al.\ 1998). The observed colors for such a star depend upon the reddening; let us assume that the star has an $E(B-V)=0.25$, typical of several OB associations in M31 (Massey et al.\ 1986). In that case, we expect that such a star will have $U-B=-0.9$, $B-V=0.0$, $V-R=0.0$, and $R-I=0.0$. Thus, at a distance modulus of 24.4 (M31, van den Bergh 2000), the star would have $U=20.0$, $B=V=R=I=20.9$\footnote{M33 is another tenth of a magnitude further away according to van den Bergh (2000), but the typical reddening of an OB star is less (Massey et al.\ 1995).}. The error in $R$ will be slightly larger than our goal (0.027 vs. 0.020 mag), but in all the other bands we will have achieved our goal; for early-type stars it will be $UBV$ that is particularly useful as a temperature discriminant (Massey 1998a).
Some 8 million years later, near the end of its hydrogen-burning phase, the star would be a B1~I, with $M_V=-5.3$ and nearly identical intrinsic colors, placing it easily within our criteria. Finally, as a He-burning object the star would be spectroscopically identified as an RSG. As an M0~I, the star would have $M_V=-6.8$, with intrinsic colors of $(U-B)_0=2.5$, $(B-V)_0=1.8$, $(V-R)_0=0.9$, and $(R-I)_0=0.8$, or $U-B=2.7$, $B-V=2.0$, $V-R=1.0$, and $R-I=1.0$. So, roughly $U=22.3$, $B=19.9$, $V=17.9$, $R=16.9$, and $I=15.9$. The error at $U$ ($\sigma \sim 0.04$) is a little larger than our goal, but the others all give better than 1\% statistics. We will see in \S~\ref{Sec-BR} the usefulness of good colors at this magnitude. However, a more critical test concerns how well we did in crowded regions. Obviously there are stars in M31 and M33 that cannot be resolved from the Earth---this is true, after all, even for massive stars at 2~kpc distances. But, we were of course curious how well we did in general. In Fig.~\ref{fig:OB48} we compare our $V$ stacked LGGS image of OB48, an association rich in massive stars (Massey et al.\ 1986), with an ACS image shown to the same scale and orientation. We have indicated the stars in our M31 catalog. We see that there are a few cases where stars were multiple at {\it HST} resolution but detected as single objects in our survey. But, generally our ground-based imaging did very well. We call particular attention to the star at left of center in the upper pair. That star is OB48-444, one of the earliest known O stars in M31 (Massey et al.\ 1995), an O8~I star. Of course, it is possible to have unresolved multiple systems even at {\it HST} resolution, but as Kudritzki (1998) has emphasized, such multiplicity usually reveals itself by spectral peculiarities.

\subsection{Comparison with Others}

Photometry of galaxy-wide samples of the resolved stellar content of M31 and M33 has mainly been carried out photographically; e.g., Berkhuijsen et al.\ (1988) for M31, and Humphreys \& Sandage (1980) and Ivanov et al.\ (1993) for M33. Only the Magnier et al.\ (1992) survey of M31 has used CCDs in such a global study. Others have imaged small areas of these galaxies with CCDs from the ground (for example, Massey et al.\ 1986, 1995; Hodge \& Lee 1988; Hodge et al.\ 1988; Wilson et al.\ 1990), or even smaller regions using {\it HST} (Hunter et al.\ 1996a, 1996b; Magnier et al.\ 1997). The CCD survey of Magnier et al.\ (1992) broke new ground by providing {\it BVRI} photometry of 361,281 stars in a 1 square degree area of M31\footnote{The number of stars in the complete Magnier et al.\ (1992) catalog is comparable to the number in ours, despite the fact that our survey goes considerably deeper, as we counted as valid only stars that were matched in $B$, $V$, {\it and} $R$ in order to eliminate spurious detections. Stars in Magnier et al.\ (1992) were usually detected only in a single filter. The number of stars that were detected by Magnier et al.\ (1992) in $B$, $V$, and $R$ is 19,966, according to their Table 2.}. Indeed, this survey provided much of the inspiration for the present study. We compare the properties of the two surveys in Table~\ref{tab:Magnier}. Given our larger aperture telescope, and the improvement in the size of CCD cameras in the past decade, it is not surprising that our survey goes about 2~mag deeper and covers about twice the area. We compare our photometry to that of Magnier et al.\ (1992) in Fig.~\ref{fig:magdif}.
Since the image quality is so different (our worst seeing was their best), we restricted the comparison to stars in our catalog that have no comparably bright companions ($V_{\rm star} - V_{\rm comp} < 1$) within 10". We have also restricted the comparison to the stars with the best photometry (errors less than 0.05~mag in each filter), although nearly identical values are obtained if we loosen or tighten this restriction. We find median differences (in the sense of Magnier et al.\ 1992 minus LGGS) of $-0.120$ in $B$ (5,191 stars), $-0.025$ in $V$ (7,214 stars), +0.019 in $R$ (4,129 stars) and +0.077 in $I$ (5,387 stars). The differences at $V$ and $R$ are small and as expected; the modest offsets in $B$ and $I$ appear to be real. As shown in the bottom two panels of Fig.~\ref{fig:magdif}, the differences at $B$ and $I$ appear to be color-related: for the bluest stars, our results and those of Magnier et al.\ (1992) are in good accord, while for the reddest stars the differences are significantly larger. Stars with $B-V<0$ show a median difference of $-0.025$~mag, while stars with $B-V>1.5$ show a median difference of $-0.238$~mag. Similarly, there is a strong correlation of the $I$ differences with color, with the bluest stars ($R-I<0.3$) showing good agreement (median difference $-0.024$), while the reddest stars ($R-I>1.2$) show a large difference (+0.131). We are of course biased towards believing these color problems are inherent to Magnier et al.\ (1992) and not our own data, but only independent observations can settle the matter. We do note that we were careful to include a full range of colors of standards in obtaining our secondary calibration, while Magnier et al.\ (1992) relied upon published M31 photometry for their calibration. At least one of these sources, Massey et al.\ (1986), was well calibrated for only the bluest stars, and that may help explain some of these differences. We also were curious to compare our results to the M31 photometry catalog of Berkhuijsen et al.\ (1988), based upon reductions of photographic plates. Massey (2006) noted that there appeared to be a significant magnitude-dependent difference between the {\it V} magnitudes of Magnier et al.\ (1992) and those of Berkhuijsen et al.\ (1988), at least in a small region around the star AF And. The problem is complicated by the fact that the coordinates in Berkhuijsen et al.\ (1988) are also known to contain large systematic errors, as noted by Magnier et al.\ (1992). In comparing their coordinates to ours, we find that we need to correct the Berkhuijsen et al.\ (1988) coordinates by $-0.1^s$ and $+2.5"$ to bring the averages into accord with ours, and that in addition there were problems at the $\pm$5" level. The median differences in the photometry show reasonable agreement: $-0.093$ in $U$, $-0.046$ in $B$, and $-0.040$ in $V$. However, as we can see in Fig.~\ref{fig:berk}, there is a very strong effect with magnitude at $V$; the faintest stars in Berkhuijsen et al.\ (1988) are shown in red. Such stars show systematic differences of up to several magnitudes! We were concerned that this sort of effect might be due to incorrect matching of stars, given the problems in the Berkhuijsen et al.\ (1988) coordinates, and so we also made the comparison to just those stars that had both $V$ {\it and} $B$. This is shown in the bottom-right panel of Fig.~\ref{fig:berk}. We see the same effect, although of course with fewer data.
Since the $B$ data do not show this problem, we conclude it is not an issue with matching. The problem we find here with the Berkhuijsen et al.\ (1988) photometry is consistent with Massey (2006), who found a $V=17.5$ (LGGS) star listed as 18.1 in Berkhuijsen et al.\ (1988), although the $B$ values agreed well. Not all of the Berkhuijsen et al.\ (1988) data are affected---there are plenty of fainter stars that do agree with our data---but stars listed as 18th~mag or fainter in Berkhuijsen et al.\ (1988) should be viewed with suspicion. The sort of effect visible in Fig.~\ref{fig:berk} is a classic symptom of problems with sky subtraction, and we were able to confirm that faint stars near the nucleus (where the background is high) show the largest problem. The only global set of photometry with coordinates that has been published for M33 is that of Ivanov et al.\ (1993), who present a catalog of blue and red stars, based upon photographic plates. We find similar problems with those data. We needed to correct the Ivanov et al.\ (1993) coordinates by $+0.18^s$ and $-1.1$"; the match against isolated bright stars in our catalog is usually better than 2.5" after this correction. For the ``blue supergiants'' in their catalog, we find median differences of $+0.22$ mag in $U$, $+0.04$ mag in $B$, and $-0.08$ in $V$, all based upon 558 stars. As we see in Fig.~\ref{fig:ivandif}, the difference in $U$ is primarily a simple offset, while the differences in $V$ are dominated by the faint stars, which show a turndown at the faint end ($V>19$). This is where we expect errors due to sky determination to be most severe. The red stars show a larger effect, with a median difference of $-0.13$ mag in $B$ and $-0.38$ mag in $V$. (There are no $U$ values for the red supergiants in Ivanov et al.\ 1993.) Of course these differences with the photographic studies are not surprising: in their day they represented the best that could be done, and they provided useful photometry and color information for many years. The advances brought about by improved instrumentation and reduction software result in improved photometry; we hope our study holds up as well over the next two decades.

\subsection{Spectroscopically Confirmed Members}

We considered providing cross-identifications between our stars and those of others, particularly Magnier et al.\ (1992), who did, after all, provide good coordinates. While this would be meaningful in the less crowded regions, in the OB associations the exact match depends upon whether a given object is identified as one or more stars. This is a particular issue given the large difference in seeing between our survey and that of Magnier et al.\ (1992). Cross-reference to Berkhuijsen et al.\ (1988) is difficult due to the large systematic position errors in that catalog, and probably not useful, given the photometric problems discussed above. Instead, we decided it would be useful to restrict the cross-identifications to stars spectroscopically confirmed as members of these galaxies. We present these in Tables~\ref{tab:M31mem} and \ref{tab:M33mem}, and include the spectral types and cross-IDs in Tables~\ref{tab:M31} and \ref{tab:M33} as well. We began with the spectral types given in Massey et al.\ (1995, 1996), which include some earlier work (Humphreys et al.\ 1990), and updated these with more recently acquired data by ourselves and others.
Older work, based primarily on photographic spectra, was added to these (for instance, Humphreys 1979, 1980); since these stars lack published coordinates, we did the identifications visually, although we restricted this to alleged members, and ignored the wealth of foreground objects such studies tended to confirm. We also included ``classical'' Luminous Blue Variables (LBVs) from Parker (1997). To this we added recently proposed LBV candidates from King et al.\ (1998), Massey et al.\ (1996), and Massey (2006). We include the identifications of Wolf-Rayet stars, drawn from Massey \& Johnson (1998). Finally, we include spectroscopically confirmed red supergiants (RSGs), beginning with Massey (1998b), and extending back through Humphreys et al.\ (1988). For the latter, the membership of some stars is questionable. For instance, R79 of Humphreys et al.\ (1988) in M31 is listed as a ``probable supergiant'', based upon the strength of the Ca~II triplet. This star is also known as OB48-416 (Massey et al.\ 1986); its colors are not those of an RSG, which we confirm with our new photometry, but are more like those of an F-G star, and it is likely that the star is a foreground dwarf. Two additional RSG candidates, R138a and R140, fall outside the field covered by our survey. The identification of another RSG candidate, III-R23, was too ambiguous for a positive match. In general, we concluded that {\it both} radial velocities {\it and} Ca~II triplet strengths were needed to consider a star an RSG; otherwise, it is listed as an ``RSG candidate''. The exceptions were stars with demonstrated variability (Var.~66 from van den Bergh et al.\ 1975, and Vars.\ 4 and 32 from Hubble 1929). We also indicate in Tables~\ref{tab:M31mem} and \ref{tab:M33mem} whether the object was multiple on our frames. A star is flagged as ``M'' if it has a companion with $V_{\rm companion}<V_{\rm star}+2.5$ within 1". Multiplicity at this resolution of course calls into question the exact identification: which of two stars separated by a fraction of an arcsecond dominated an optical ground-based spectrum? In some cases the identifications were uncertain because of poor coordinates or finding charts, and we indicate those as well. In many cases the coordinates are now considerably improved (as for the Moffat \& Shara 1983 Wolf-Rayet stars in M31), or, in some cases (such as the spectroscopy of supergiants in Field IV of Baade \& Swope 1963 by Humphreys 1979, or the spectral types of stars in M33 from Humphreys 1980), are presented for the first time. In a few instances we went back to our own original finding charts to ascertain whether we had the correct identifications (e.g., M33WR112, M33WR113, M33WR116, and M33WR117), which had previously only been identified from the poorly reproduced versions of Massey et al.\ (1987a). The work also showed that two of the RSGs found by Massey (1998b) in M33 had previous spectroscopy by Humphreys (1980). Indeed, it was frustration over such identifications that provided some of the impetus for the present work. Finally, we also include in Table~\ref{tab:M31mem} the newly confirmed M31 members based upon the spectroscopy described in \S~\ref{Sec-spectra} and listed separately in Table~\ref{tab:M31memnew}.

\section{Analysis} \label{Sec-analysis}

\subsection{Color-Magnitude Diagrams} \label{Sec-CMD}

The most fundamental tool at the astronomer's disposal for understanding the stellar content of a region is the color-magnitude diagram (CMD).
In Figs.~\ref{fig:M31CMD} and \ref{fig:M33CMD} we show the CMDs for M31 and M33, respectively. We label the regions where we expect to find the blue and red supergiants, as well as the large central region where we expect foreground dwarfs and giants to dominate. The latter is based upon a consideration of the Bahcall-Soneira model (Bahcall \& Soneira 1980) updated by Gary Da Costa and Heather Morrison; see Massey (2002). We also show the confirmed members from Tables~\ref{tab:M31mem} and \ref{tab:M33mem} with the symbols indicated. There are of course fewer foreground dwarfs and giants visible against the face of M33, simply because of the differences in the areas surveyed. Several things are apparent from these diagrams. First, there is clearly a much more extensive red supergiant population in M33 than in M31. This effect was first described by van den Bergh (1973), who noted that the brightest RSGs in low-metallicity galaxies were brighter relative to the brightest blue supergiants than in higher metallicity galaxies. We understand this today as being primarily due to the effects of mass-loss on the evolution of massive stars; in high metallicity regions a $30 M_\odot$ star will spend little or none of its He-burning life as an RSG, but rather will spend it as a Wolf-Rayet star, while in lower metallicity systems the time spent as an RSG is much longer. (See discussion in Massey 2002, 2003.) Second, we see that for M31 we expect few if any of the stars identified as RSGs by Humphreys (1980), which we have labeled as red supergiant {\it candidates}, to be bona fide RSGs. Instead, they are likely to be foreground objects. Two of the ``confirmed'' RSGs also fall in a peculiar part of the CMD. The brightest of these, J004101.4+410434.6 (OB69-46), has a radial velocity and Ca~II triplet line strength consistent with membership, but the $B-V$ color is now 0.4~mag bluer than the photometry given by Massey (1998b). Possibly the identification of this star has been confused. Four of the RSGs in M33 seem to have a similar problem. Third, and perhaps most striking, is that so few of the stars in M31 and M33 have been observed spectroscopically. The characterization of the stellar populations of these galaxies has just begun.

\subsection{Identifying the Bluest and Reddest Members from Photometry} \label{Sec-BR}

One of the complications with identifying the hottest massive stars is the issue of reddening. In the case of M31 and M33 the foreground reddening is small and likely uniform ($E(B-V)=0.06$ and 0.07, respectively, according to van den Bergh 2000); instead, member stars will be reddened by internal absorption within the disk of these galaxies. This adds to the confusion when trying to separate bona fide blue supergiants from foreground dwarfs, a problem apparent in Figs.~\ref{fig:M31CMD} and \ref{fig:M33CMD}, where we find some blue supergiants and LBVs intermixed with the foreground stars. (Since this class contains some F supergiants, we do expect some overlap.) The reddening-free Johnson {\it Q} index\footnote{$Q=(U-B)-0.72(B-V)$, where 0.72 is the canonical Galactic value of $E(U-B)/E(B-V)$.} provides a useful discriminant of intrinsic color, at least for stars with $Q<-0.6$ (earlier than a B2~V or a B8~I; see Table 3 of Massey 1998c). For instance, consider a star with $B-V=0.5$ and $V=18$, a region of the CMD that is heavily dominated by foreground dwarfs (Figs.~\ref{fig:M31CMD} and \ref{fig:M33CMD}).
If $Q=-1.0$ then we can be assured that the star is a reddened early O-type star, and a member of M31. If instead its $Q$ value is $-0.4$, it could be either an unreddened late F foreground dwarf, or a slightly reddened early A-type supergiant member---without spectroscopy there is no way to tell. In Fig.~\ref{fig:QA} we show a $V$ vs $Q$ CMD for each galaxy. For M31 there is now a cleaner separation between members and non-members, as shown by comparing Fig.~\ref{fig:QA} with Fig.~\ref{fig:M31CMD}. The results for M33 (Fig.~\ref{fig:QA} vs.\ \ref{fig:M33CMD}), which has less internal reddening (see Massey et al.\ 1995), are more ambiguous: the LBVs are now more obvious, but there is still a significant scattering of blue supergiants into redder regions of the diagram. The contrast with M31 is likely due to the fact that many more stars have spectroscopy in M33, and that some of these stars are quite crowded. However, $Q$ does not prove very useful for distinguishing among the early-type stars. In Fig.~\ref{fig:QB} we show an expanded region of the plots, where we have color-coded (just) the O-F supergiants by spectral type. In general, the stars of spectral types B5 and later can be distinguished from the O's, but in neither galaxy is the separation clean. The issue is complicated by crowding (which can affect the photometry and the derived spectral types) and by the fact that even $Q$ gives only marginally useful separation (see Massey 1998a). The figures illustrate the fact that while the photometry is good at identifying massive stars (as shown by Figs.~\ref{fig:M31CMD} and \ref{fig:M33CMD}), quantitative work, such as deriving the IMF, requires follow-up spectroscopy. Although distinguishing reddened OB stars from foreground dwarfs is only a minor problem, it is virtually impossible to identify RSGs on the basis of a single color. Massey (1998b) found, however, that the two sequences are straightforward (in principle) to separate on the basis of a $B-V$ vs $V-R$ two-color diagram. At a given $V-R$ color, low-surface gravity stars (supergiants) will have a larger $B-V$ value than will stars with high surface gravities (foreground dwarfs), due to the effects of line blanketing by weak metal lines in the $B$ bandpass. This method should be relatively immune both to reddening and to metallicity; see Fig.~1 of Massey (1998b). We show such two-color diagrams constructed from our catalogs in Fig.~\ref{fig:rsgs}. First, we see that there is a very clean separation in the colors, with two easily recognized sequences. Most of the confirmed RSGs indeed fall where we expect in this diagram. A few do not. It would be worth re-examining the membership of the outliers. All of the spectroscopically confirmed non-members lie where we expect.

\subsection{Illustrations from a Spectroscopic Reconnaissance} \label{Sec-spectra}

Characterizing the stellar populations of these two spiral galaxies to the extent that we can make useful comparisons with the Magellanic Clouds or the Milky Way will require a significant amount of new spectroscopy; with 8-m class telescopes it is now possible to obtain sufficiently high S/N spectra of O and B stars that detailed modeling of the physical properties can be carried out. Indeed, such work has already been applied to a few of the brightest B supergiants in these galaxies (see, for example, Trundle et al.\ 2002).
However, it is clear from an inspection of Figs.~\ref{fig:M31CMD} and \ref{fig:M33CMD} that few of the brightest members have been observed spectroscopically, even to the extent of obtaining crude spectral types and establishing membership or non-membership. The brightest of these can be usefully surveyed on even 4-m class telescopes, as shown by the previous work of Massey et al.\ (1995) and Humphreys et al.\ (1990). On 29 September 2005 we obtained ``classification''-quality spectra of the brightest M31 stars using the 3.5-m WIYN telescope and Hydra fiber positioner. The night was photometric, with good seeing ($\leq$1"). The spectra covered 3970-5030\AA\ in second order, and were obtained with a 790 line mm$^{-1}$ grating (KPC-18C) and a BG-39 blocking filter, yielding a resolution of 1.5\AA. The blue fiber bundle ($\sim 100$ fibers of 3.1" diameter) was deployed around the $1^\circ$ field of view on targets chosen on the basis of being blue (reddening-free Johnson $Q<-0.6$) and bright ($V<18$). Two fields were observed: a northern one centered at $\alpha_{\rm 2000}=00^{\rm h} 44^{\rm m} 20.\!^s6, \delta_{\rm 2000}=+41^\circ37' 00"$ and a southern one centered at $\alpha_{\rm 2000}=00^{\rm h} 39^{\rm m} 45.\!^s7, \delta_{\rm 2000}=+40^\circ33' 00"$. The exposure times were 3 hours on each field, in six 30 min exposures. Halfway through each sequence the fiber positions were tweaked to take into account changes in the airmass and hence differential refraction. The S/N depends upon the star, but typically had a value of 50 per 1.5\AA\ resolution element. We classified the stars following Walborn \& Fitzpatrick (1990). We give these classifications in Table~\ref{tab:M31memnew}. As expected, the vast majority were B supergiants. The O stars, although more luminous, are fainter in $V$ because of their very high effective temperatures and hence significant bolometric corrections. Nevertheless, we do find one Of supergiant, as evidenced by the presence of the characteristic ``f'' emission signature of N~III $\lambda\lambda 4634,42$ and He~II $\lambda 4686$. (The He~II emission has an equivalent width of $-5$\AA, well below the $-10$\AA\ cut-off usually assigned for Wolf-Rayet stars; see, for example, Massey et al.\ 1987b.) The spectrum lacks the S/N needed for an exact classification, but given the weakness of He~I, we conclude the star is of spectral type O3-5~If; this makes it {\it the earliest-type O star known in M31}. We show the spectrum in Fig.~\ref{fig:M31Ostar}. We also give some representative examples of B supergiants in Fig.~\ref{fig:M31Bs}. Only two of the stars we observed turned out to be foreground dwarfs; these are also denoted in Tables~\ref{tab:M31} and \ref{tab:M33}. The most interesting discovery is that of two stars with strong P Cygni profiles. Based upon the spectroscopic similarity to P Cygni itself, shown in Fig.~\ref{fig:PCyg}, we consider these two stars LBV candidates. One of these, J004341.84+411112.0, is the closest known analog to P Cygni, and is discussed in more detail in Massey (2006). Photometrically, it has been relatively constant ($<0.2$~mag) in the optical over the past 40 years, but with small variations (0.05~mag) seen during a single year. Much the same can be said of the photometric history of P Cygni itself, which shows only small variability over the same sort of time-scale; see Israelian \& de Groot (1999). An {\it HST} image provides circumstantial evidence of a circumstellar nebula, bolstering the case (Massey 2006).
The other new LBV candidate, J004051.59+403303.0, is discussed here for the first time. The lines are considerably weaker than in P Cygni; the normalized spectrum has been enhanced by a factor of 4 in Fig.~\ref{fig:PCyg} to make the lines visible at the scaling needed for the other two. Our photometry indicates $V=16.99$, $B-V=0.22$, and $U-B=-0.76$ (all with errors of 0.003~mag) in 2000. Magnier et al.\ (1992) observed the star in 1990, and found $V=17.33$, $B-V=+0.09$, with only slightly larger errors. Thus, this star seems to be a little more variable than the first. The only ``proof'' that a star is an LBV is for it to undergo a dramatic 1-2~mag ``outburst'' or show evidence of such a past event in the form of a circumstellar nebula (see Bohannan 1997); in the meantime we must be content to note the spectroscopic similarity to one of the archetypes of LBVs.

\section{Summary and Future Work} \label{Sec-future}

Our {\it UBVRI} survey of M31 and M33 produced catalogs containing 371,781 and 146,622 stars, respectively. We achieved our goal of 1-2\% photometry for the most massive ($>20M_\odot$) stars, with the external photometric calibration providing excellent agreement between adjacent fields. Although the image quality of our data is only modest (0.8-1.4", median 1.0") by modern standards, our survey covered large areas (2.2 and 0.8 square degrees), including all of the regions currently known to be actively forming massive stars. Comparison of our data with an ACS image of OB 48, a crowded M31 OB association rich in massive stars, suggests that our catalogs did a respectable job of resolving stars, with relatively few blends detected as single objects. Our color-magnitude diagrams demonstrate the rich stellar content of these systems. Although foreground dwarfs and giants will dominate at intermediate colors, most of the stars at either extreme in color will be blue and red supergiants. We demonstrate this by providing cross-references to stars whose spectroscopy has confirmed their membership in these systems. New spectroscopy is presented for bright stars in M31, confirming 34 additional members. Among these stars are two newly found LBV candidates, many B-A supergiants, and an O star that is the earliest type known in that galaxy. Much future work is needed to take full advantage of these beautiful data. Only a tiny fraction of these stars have been observed spectroscopically. Follow-up spectroscopic surveys on larger telescopes will allow us to determine the initial mass functions for numerous regions of star formation, and will help establish whether and how the IMF varies with metallicity and other conditions. High S/N spectra can be used to model a range of spectral types, helping to establish how metallicity affects fundamental stellar properties such as effective temperature. In addition, our team will continue to analyze our existing data on other Local Group galaxies currently actively forming stars, and compare those CMDs to those presented here. The premise of the NOAO Survey Program was that data such as those presented here should be useful to others for their own research. Towards that end we have made our full catalogs and images available; in addition, we have carefully documented our reduction techniques, and made our software available as well.

\acknowledgments Our interest in characterizing the bright resolved stellar populations of Local Group galaxies has been whetted by the seminal work of Sidney van den Bergh, Allan Sandage, and Roberta Humphreys, to whom we are grateful for correspondence and conversations over the years.
The basic IRAF procedures for Mosaic data were written by Frank Valdes, while Lindsey Davis provided the work that led to the determination of the higher-order astrometric solutions. The IRAF reduction process was also improved thanks to thoughtful input by Buell Jannuzi. Without their efforts the task of reducing Mosaic data would have been prohibitively difficult. In addition, Taft Armandroff provided much scientific guidance in the implementation of the instrument on the 4-m. N. King and A. Saha contributed ideas to the original proposal; in addition, King helped obtain two nights of observations. We are grateful to Deidre Hunter for a critical reading of a draft of this paper, as well as to the referee for constructive suggestions.
\section{Introduction} \label{sec1}

Understanding the properties of very high density ($\sim 10^{15}$ g~cm$^{-3}$, i.e., beyond nuclear density) cold matter at the cores of neutron stars is a fundamental problem of physics. Constraining the proposed theoretical equation of state (EOS) models of such matter with terrestrial experiments is difficult, because no experiments at such extreme densities and low temperatures seem possible. The only way to address this problem is to measure the mass, the radius, and the spin period of the same neutron star, since for a given EOS model and a known stellar spin period there exists a unique mass vs. radius relation for neutron stars. Any periodic variation in the observed lightcurve will provide us with the stellar spin period, if we can show that this periodic variation is due to stellar spin. But mass measurements usually require fortuitous observations of binaries, and radius estimates have historically been plagued with systematic uncertainties (see van Kerkwijk 2004 for a recent summary of methods). Moreover, none of these methods can measure all three parameters of the same neutron star that are needed to constrain EOS models effectively. We explore instead constraints based on the study of type I X-ray bursts from an accreting neutron star in a low-mass X-ray binary (LMXB) system. These bursts are convincingly explained as thermonuclear flashes on the stellar surface (Strohmayer \& Bildsten 2003, and references therein), and hence can give us information about the stellar parameters. In addition, the comparatively low magnetic field of a neutron star in an LMXB does not complicate the stellar emission of photons much (and hence keeps the modeling simple), which may not be the case for isolated neutron stars or neutron stars in other systems. The millisecond period brightness oscillations during type I X-ray bursts provide us with the stellar spin period, as this phenomenon is caused by the combination of stellar spin and an asymmetric brightness pattern on the stellar surface (Chakrabarty et al. 2003; Strohmayer et al. 2003). During these X-ray bursts, atomic spectral lines may be observed from the stellar surface (as might be the case for the LMXB EXO~0748$-$676; see Cottam, Paerels \& M\'endez 2002). When properly identified, these lines provide the surface gravitational redshift value, and hence the stellar radius-to-mass ratio. The remaining stellar parameter can be obtained by detailed modeling of the structures of the burst oscillation lightcurves, and of the broadened and skewed (due to rapid stellar spin) surface atomic spectral lines. Here we calculate such theoretical models, and fit the burst oscillation models to the data of the LMXB XTE J1814$-$338 to constrain some stellar parameters.

\section{Model Computation} \label{sec2}

For the computation of burst oscillation lightcurves, we assume that the X-ray emitting region is a uniform circular hot spot on the stellar surface. In contrast, for our calculations of surface atomic spectral lines, we assume that the X-ray emitting portion is a belt that is symmetric around the stellar spin axis. This is because, for a typical spin frequency $> 10$~Hz, any hot spot on the stellar surface will be effectively smeared into an axisymmetric belt during a typical integration time for spectral calculations.
In both calculations, we consider the following physical effects (the first four of these were considered by \"Ozel \& Psaltis 2003): (1) Doppler shift due to stellar rotation, (2) special relativistic beaming, (3) gravitational redshift, (4) light-bending, and (5) frame-dragging. To include the effects of light-bending, we trace back the paths of the photons (numerically, in the Kerr spacetime) from the observer to the source using the method described in Bhattacharyya, Bhattacharya \& Thampan (2001). For a given EOS model, we have the following source parameters: two stellar structure parameters (radius-to-mass ratio and spin frequency), one binary parameter (the observer's inclination angle), two emission region parameters (polar angle position and angular width of the belt or the hot spot), and a parameter $n$ describing the emitted specific intensity distribution (in the corotating frame) of the form $I(\alpha)\propto \cos^n(\alpha)$, where $\alpha$ is the emission angle measured from the surface normal. Other stellar structure parameters (mass and angular momentum) are found by computing the structure of the spinning neutron star, using the formalism given by Cook, Shapiro \& Teukolsky (1994; see also Bhattacharyya et al. 2000, Bhattacharyya, Misra \& Thampan 2001).

\section{Results and Discussions} \label{sec3}

We have two main results: (1) If the stellar radius-to-mass ratio is inferred from the line centroid (which is the geometric mean of the low-energy and high-energy edges of the line profile), the corresponding error is in general less than 5\% (Bhattacharyya, Miller \& Lamb 2004), even when the observed surface line is broad and asymmetric. This is the accuracy needed for strong constraints on neutron star EOS models (Lattimer \& Prakash 2001). Other methods to infer this stellar parameter using surface lines (for example, using the peak energy of a line; see \"Ozel \& Psaltis 2003) give much larger errors. (2) The 90\% confidence lower limit of the dimensionless stellar radius-to-mass ratio ($Rc^{2}/GM$) of the LMXB XTE J1814$-$338 is 4.2 (see Bhattacharyya et al. 2005). These results show that both surface spectral lines and burst oscillations can provide important information about the stellar parameters. These two phenomena were observed from the neutron star in the LMXB EXO~0748$-$676, and gave the values of the stellar spin period (Villarreal \& Strohmayer 2004) and radius-to-mass ratio (Cottam, Paerels \& M\'endez 2002). However, to measure the remaining stellar parameter (needed to constrain EOS models), we need larger instruments, partly due to the low duty cycle of bursts. But this at least indicates that these two surface spectral and timing features can originate from the same neutron star, and may help us constrain neutron star EOS models effectively, when observed with the large area detectors of future generation X-ray missions, such as Constellation-X and XEUS.
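As a simple illustration of how a surface line constrains this ratio: if rotational corrections are ignored, the redshift obeys the Schwarzschild relation
\begin{displaymath}
1+z=\frac{E_{\rm emitted}}{E_{\rm observed}}=\left(1-\frac{2GM}{Rc^{2}}\right)^{-1/2},
\end{displaymath}
so a measured line centroid directly yields $Rc^{2}/GM$; for example, our lower limit $Rc^{2}/GM\ge 4.2$ corresponds to $1+z\le(1-2/4.2)^{-1/2}\approx 1.38$ in this non-rotating approximation. (The actual calculations, as described in \S~\ref{sec2}, are performed in the Kerr spacetime.)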
\section{Introduction}

As we all know, we can construct Cayley graphs and Cayley digraphs from a group, and these graphs are vertex-transitive. In this article, we construct digraphs from a finite cyclic group by using its endomorphisms. In general, these digraphs are not vertex-transitive, but they have many good properties which may make them into beautiful graphs and may merit further research. Let $H$ be a finite cyclic group with $n$ elements, $n>1$; we treat it as a multiplicative group, and denote its identity element by $1$ without confusion. As is well known, $H$ has $n$ endomorphisms; every endomorphism can be written uniquely in the form $f:H \to H$, $x\mapsto x^k$, with $k\in \mathbb{Z}$, $1\le k\le n$, and $f$ is an isomorphism if and only if $n$ and $k$ are coprime. We can consider the digraph that has the elements of $H$ as vertices and a directed edge from $a$ to $b$ if and only if $f(a)=b$. Since cyclic groups of the same order are isomorphic, this digraph depends only on $n$ and $k$, so we denote it by $G(n,k)$. For example, see Figure 1 in section 4. \cite{B} studied the digraph arising from any endomorphism of $\mathbb{Z}/n\mathbb{Z}$, in particular the number of cycles. \cite{BHM}, \cite{R} and \cite{VS} studied the digraph arising from the endomorphism $f(x)=x^2$ of $(\mathbb{Z}/p\mathbb{Z})^{*}$, where $p$ is a prime. In particular, the cycle and tree structures have been classified. \cite{LMR} generalized the results of \cite{BHM} to the digraph arising from any endomorphism of $(\mathbb{Z}/p\mathbb{Z})^{*}$. \cite{SH} studied some elementary properties of the digraph arising from any endomorphism of $(\mathbb{F}_{q})^{*}$, where $\mathbb{F}_{q}$ is a finite field with $q$ elements. In sections 2 and 3, we generalize the results of \cite{LMR} to $G(n,k)$, and we consider many other properties of $G(n,k)$. In sections 4 and 5, we consider its adjacency matrix and automorphism group, respectively; furthermore, we determine its characteristic polynomial and minimal polynomial. In particular, the results here may have applications to monomial dynamical systems over finite fields; see \cite{SH}.

\section{Basic Properties of $G(n,k)$}

Given two integers $l$ and $m$, we denote their greatest common divisor and least common multiple by $(l,m)$ and $[l,m]$ respectively. First we factor $n$ as $tw$, where $t$ is the largest factor of $n$ relatively prime to $k$. So $(k,t)=1$; moreover, every prime divisor of $w$ divides $k$, whence $(w,t)=1$. For any $a\in H$, let ${\rm ord}(a)$ denote its order. Before proceeding further, we need the following lemma. \begin{lem}\label{solution} Let $a\in H$. The equation $x^{k}=a$ has a solution if and only if $a^{\frac{n}{d}}=1$, where $d=(n,k)$. Moreover, if the equation has a solution, it has exactly $d$ solutions. \end{lem} \begin{proof} Apply the same argument as in Proposition 7.1.2 of \cite{IR}. \end{proof} The following lemma is easy to prove but fundamental to the understanding of the structure of $G(n,k)$. We omit its proof and refer the reader to \cite{LMR}. \begin{lem}\label{basic} We have the following elementary properties of $G(n,k)$. {\rm(1)} The outdegree of any vertex in $G(n,k)$ is $1$. {\rm(2)} The indegree of any vertex in $G(n,k)$ is $0$ or $(n,k)$. Moreover, the indegree of $a\in H$ is $(n,k)$ if and only if $a^{\frac{n}{(n,k)}}=1$. {\rm(3)} $G(n,k)$ has $n$ vertices and $n$ directed edges. {\rm(4)} Given $a,b\in H$, there exists a directed path from $a$ to $b$ if and only if there exists a positive integer $m$ such that $a^{k^{m}}=b$.
{\rm(5)} Given any element in $G(n,k)$, repeated iteration of $f$ will eventually lead to a cycle. {\rm(6)} Every component of $G(n,k)$ contains exactly one cycle. {\rm(7)} The set of non-cycle vertices forms a forest. \end{lem} \begin{prop}\label{indegree} The number of vertices with indegree $0$ is $\frac{d-1}{d}n$, where $d=(n,k)$. \end{prop} \begin{proof} By Lemma \ref{solution}, a vertex $a$ has non-zero indegree if and only if $a^{\frac{n}{d}}=1$. Hence, the vertices with non-zero indegree form the subset $H_{d}=\{x\in H|x^{\frac{n}{d}}=1\}$. It is well known that $H_{d}$ is a cyclic subgroup of $H$ with $\frac{n}{d}$ elements. So we get the desired result. \end{proof} Next, we study the cycle structure of $G(n,k)$. \begin{prop}\label{vertex} The vertex $a$ is a cycle vertex if and only if ${\rm ord}(a)|t$. \end{prop} \begin{proof} Suppose $a$ is a cycle vertex. Then there exists a positive integer $m$ such that $a^{k^{m}}=a$. So ${\rm ord}(a)|(k^{m}-1)$, which implies $({\rm ord}(a),k)=1$. So $({\rm ord}(a),w)=1$. Note that ${\rm ord}(a)|n$; then ${\rm ord}(a)|t$. Conversely, suppose ${\rm ord}(a)|t$. Then $({\rm ord}(a),k)=1$. So there exists a positive integer $m$ such that ${\rm ord}(a)|(k^{m}-1)$, which implies $a^{k^{m}}=a$. So $a$ is a cycle vertex. \end{proof} \begin{cor} There are exactly $t$ cycle vertices in $G(n,k)$. \end{cor} \begin{proof} From Proposition \ref{vertex}, the total number of cycle vertices is $\sum\limits_{d|t}\varphi(d)=t$, where $\varphi$ is Euler's $\varphi$-function and $\varphi(d)$ is the number of elements of order $d$. \end{proof} \begin{prop}\label{same order} Vertices in the same cycle have the same order. \end{prop} \begin{proof} Assume $a$ and $b$ are in the same cycle. Then there exists an $m$ such that $a^{k^{m}}=b$, which implies $b^{{\rm ord}(a)}=1$. So ${\rm ord}(b)|{\rm ord}(a)$. Similarly, we have ${\rm ord}(a)|{\rm ord}(b)$. So ${\rm ord}(a)={\rm ord}(b)$. \end{proof} By Proposition \ref{same order}, the notion of the order of a cycle is well-defined. Let $\ell(d)$ denote the length of a cycle with order $d$, where $d|t$. If two integers $l$ and $m$ are coprime, let ${\rm ord}_{l}m$ denote the multiplicative order of $m$ modulo $l$. \begin{prop}\label{cycle} Let $d$ and $r$ be orders of cycles. Then: $(1)$ $\ell(d)={\rm ord}_{d}k$. $(2)$ The longest cycle length in $G(n,k)$ is $\ell(t)={\rm ord}_{t}k$. $(3)$ There are $\varphi(d)/\ell(d)$ cycles of order $d$. $(4)$ The total number of cycles in $G(n,k)$ is $\sum\limits_{d|t}\frac{\varphi(d)}{\ell(d)}$. $(5)$ $\ell([d,r])=[\ell(d),\ell(r)]$. \end{prop} \begin{proof} (1) Let $a$ be a vertex in a cycle of order $d$. It is clear that $\ell(d)$ is the smallest positive integer such that $a^{k^{\ell(d)}}=a$, that is, the smallest positive integer such that $d|(k^{\ell(d)}-1)$. So $\ell(d)={\rm ord}_{d}k$. (2) By (1) and Proposition \ref{vertex}. (3) Notice that the number of elements of order $d$ is $\varphi(d)$. (4) By (3) and Proposition \ref{vertex}. (5) Since $d|[d,r]$, $\ell(d)|\ell([d,r])$. Similarly, we have $\ell(r)|\ell([d,r])$. So $[\ell(d),\ell(r)]|\ell([d,r])$. In addition, since $d|(k^{\ell(d)}-1)$, $d|(k^{[\ell(d),\ell(r)]}-1)$. Similarly, $r|(k^{[\ell(d),\ell(r)]}-1)$. So $[d,r]|(k^{[\ell(d),\ell(r)]}-1)$. Hence, $\ell([d,r])|[\ell(d),\ell(r)]$. So we have $\ell([d,r])=[\ell(d),\ell(r)]$. \end{proof} But $\ell((d,r))=(\ell(d),\ell(r))$ is not always true. For example, let $k=2$, $d=11$ and $r=15$; we have $(11,15)=1$ and $\ell(1)=1$, but $(\ell(11),\ell(15))=(10,4)=2$.
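These cycle-structure results are easy to check by direct computation. The following sketch (in Python; purely illustrative, and not relied upon anywhere in our arguments) builds $G(n,k)$ as the functional graph of $x\mapsto kx \pmod{n}$ on $\mathbb{Z}/n\mathbb{Z}$---isomorphic to our multiplicative setting---and tabulates its cycle lengths:
\begin{verbatim}
from math import gcd

def cycle_lengths(n, k):
    # cycle lengths of the functional graph x -> k*x (mod n),
    # i.e. of G(n, k) with the cyclic group written additively
    succ = [(k * x) % n for x in range(n)]
    seen, lengths = [False] * n, {}
    for v in range(n):
        if seen[v]:
            continue
        path, pos = [], {}
        while not seen[v] and v not in pos:
            pos[v] = len(path)
            path.append(v)
            v = succ[v]
        if v in pos:  # the walk closed onto a new cycle
            c = len(path) - pos[v]
            lengths[c] = lengths.get(c, 0) + 1
        for u in path:
            seen[u] = True
    return lengths  # {cycle length: number of cycles}

def ord_mod(k, d):
    # multiplicative order of k modulo d; requires gcd(k, d) = 1
    if d == 1:
        return 1
    assert gcd(k, d) == 1
    m, x = 1, k % d
    while x != 1:
        x, m = (x * k) % d, m + 1
    return m
\end{verbatim}
For instance, {\tt cycle\_lengths(10, 2)} returns {\tt \{1: 1, 4: 1\}}: one loop (at the identity) and one cycle of length $\ell(5)={\rm ord}_{5}2=4$, in agreement with Proposition \ref{cycle}; and {\tt ord\_mod(2, 11)} $=10$, {\tt ord\_mod(2, 15)} $=4$ reproduce the counterexample above.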
\begin{rem} {\rm Let $\mu$ be the M\"{o}bius function. As in Proposition 2.5 of \cite{SH}, the number of cycles of length $r$ is $\frac{1}{r}\sum\limits_{d|r}\mu(d)(k^{r/d}-1,n)$.} \end{rem} \begin{cor} If a component has a generator of $H$, then its unique cycle has the longest length $\ell(t)$. \end{cor} \begin{proof} If a component contains a generator of $H$, then the order of its unique cycle is $t$, so its length is $\ell(t)$ by Proposition \ref{cycle} (1). \end{proof} \begin{prop} Every generator of $H$ has indegree $0$ if and only if $(n,k)\ne 1$. \end{prop} \begin{proof} Suppose $(n,k)\ne 1$. For any generator $b$ of $H$, if the indegree of $b$ is not $0$, then there exists a vertex $a$ such that $a^{k}=b$. Since $n={\rm ord}(b)=\frac{{\rm ord}(a)}{({\rm ord}(a),k)}$ and ${\rm ord}(a)|n$, we get ${\rm ord}(a)=n$ and $(n,k)=1$. This leads to a contradiction. Conversely, if every generator of $H$ has indegree $0$, then the generators are not cycle vertices. By Proposition \ref{vertex}, $t\ne n$. So $(n,k)\ne 1$. \end{proof} Hence, if $(n,k)\ne 1$, since $H$ has $\varphi(n)$ generators, by Proposition \ref{indegree} we have $\varphi(n)\le \frac{d-1}{d}n$, where $d=(n,k)$. Next we consider to which kinds of graphs $G(n,k)$ belongs. \begin{prop}\label{regular} The following statements are equivalent. $(1)$ $G(n,k)$ is regular of degree $1$. $(2)$ Every component of $G(n,k)$ is a cycle. $(3)$ $f$ is an automorphism. \end{prop} \begin{proof} Note that $f$ is an automorphism if and only if $(n,k)=1$; then apply Lemma \ref{basic} (2) and (7). \end{proof} \begin{prop}\label{connected} $G(n,k)$ is connected if and only if there exists a positive integer $m$ such that $n|k^{m}$. \end{prop} \begin{proof} Suppose $n|k^{m}$. Then for any $a\in H$, $a^{k^{m}}=1$. So $G(n,k)$ is connected. Conversely, suppose $G(n,k)$ is connected. By Lemma \ref{basic} (6), there is only one cycle, and since $1$ is a fixed point of $f$ this cycle is $\{1\}$. By Lemma \ref{basic} (5), for any $a\in H$ there is a positive integer $m$ such that $a^{k^{m}}=1$. If $a$ is a generator of $H$, then $n|k^{m}$. \end{proof} Notice that there exists a positive integer $m$ such that $n|k^{m}$ if and only if $t=1$. Hence, $G(n,k)$ is connected if and only if $G(n,k)$ has only one cycle vertex, namely the identity element. \begin{prop} The following statements are equivalent. $(1)$ $G(n,k)$ is arc-transitive. $(2)$ $G(n,k)$ is vertex-transitive. $(3)$ $f$ is the identity. \end{prop} \begin{proof} Note that there exist loops in $G(n,k)$. So $G(n,k)$ is arc-transitive if and only if there are no edges other than loops, that is, for any $a\in G(n,k)$, $f(a)=a$, that is, for any $a\in G(n,k)$, $a^{k-1}=1$, that is, $n|(k-1)$. Applying the same argument, we see that $G(n,k)$ is vertex-transitive if and only if $n|(k-1)$. Since $1\le k\le n$, we get the desired result. \end{proof} Since the number of distinct endomorphisms of $H$ is $n$, we obtain $n$ distinct digraphs in this manner. An interesting problem is whether there exist isomorphic digraphs among them. In \cite{LMR}, the authors gave an example: $G(10,2)\cong G(10,8)$. \begin{prop} If $n$ is a prime, then for any $1<k_{1}<k_{2}<n$, $G(n,k_{1})\cong G(n,k_{2})$ if and only if ${\rm ord}_{n}k_{1}={\rm ord}_{n}k_{2}$. \end{prop} \begin{proof} Since $(n,k_{1})=1$, by Proposition \ref{regular}, each component of $G(n,k_{1})$ is a cycle. By Proposition \ref{cycle} (1), there are only two kinds of cycles in $G(n,k_{1})$: one with length $1$, the other with length ${\rm ord}_{n}k_{1}$. Since $(n,k_{1}-1)=1$, there is only one cycle with length $1$.
By Proposition \ref{cycle} (3), there are $\frac{n-1}{{\rm ord}_{n}k_{1}}$ cycles with length ${\rm ord}_{n}k_{1}$. Similar results hold for $G(n,k_{2})$, and the desired result follows. \end{proof} \section{Properties of Trees} Here we introduce some notation for the trees originating from cycle vertices. For $m\ge 1$, we say a non-cycle vertex $a$ has height $m$ with respect to a cycle vertex $c$ if $m$ is the smallest positive integer such that $a^{k^{m}}=c$. For $m\ge 1$, let $T_{c}^{m}$ denote the set of non-cycle vertices with height $m$ with respect to the cycle vertex $c$. Similarly, $T^{m}$ denotes the set of all vertices with height $m$. For convenience, we put $T_{c}^{0}=\{c\}$ and say that $c$ has height $0$; $T^{0}$ denotes the set of all cycle vertices. Let $F_{c}$ be the induced subgraph of $G(n,k)$ with vertices $\bigcup_{m\ge1}T_{c}^{m}$. In fact, $F_{c}$ is a forest if it is not empty. Take the induced subgraph of $G(n,k)$ with vertices $\bigcup_{m\ge0}T_{c}^{m}$ and delete the loop if it exists; the result is a tree, which we denote by $T_{c}$. Every vertex lies in one of the trees defined above. In what follows, unless stated otherwise, the word tree refers to a tree of this kind. We will show that for any cycle vertex $c$, $T_{c}\cong T_{1}$. \begin{lem}\label{product} The product of a non-cycle vertex and a cycle vertex is a non-cycle vertex. \end{lem} \begin{proof} Notice that by Proposition \ref{vertex}, the cycle vertices of $G(n,k)$ form a subgroup. \end{proof} \begin{lem}\label{product1} If $a\in T_{1}^{h}, h\ge 1$ and $c$ is a cycle vertex, then $ac\in T_{c^{k^{h}}}^{h}$. \end{lem} \begin{proof} By Lemma \ref{product}, $ac\notin T^{0}$. Furthermore, $(ac)^{k^{h}}=c^{k^{h}}$ is a cycle vertex but $(ac)^{k^{h-1}}$ is a non-cycle vertex because $a^{k^{h-1}}\notin T^{0}$, which implies $ac\in T_{c^{k^{h}}}^{h}$. \end{proof} \begin{thm}\label{iso1} Let $c$ be a cycle vertex, then $F_{c}\cong F_{1}$. \end{thm} \begin{proof} First we show that there exists a one-to-one correspondence between the vertices of $T_{1}^{h}$ and $T_{c}^{h}$ for all heights $h\ge 1$, and hence between $F_{1}$ and $F_{c}$. Let $h$ be fixed and let $c_{h}$ denote the unique cycle vertex such that $c_{h}^{k^{h}}=c$. From Lemma \ref{product1}, define $g_{h}:T_{1}^{h}\to T_{c}^{h}$ by $g_{h}(a)=ac_{h}$. For any $b\in T_{c}^{h}$, $(b\cdot c_{h}^{-1})^{k^{h}}=b^{k^{h}}c^{-1}=1$ and $(b\cdot c_{h}^{-1})^{k^{h-1}}\notin T^{0}$ because $b^{k^{h-1}}\notin T^{0}$. It follows that $b\cdot c_{h}^{-1}\in T_{1}^{h}$. Then $g_{h}(b\cdot c_{h}^{-1})=b$. So $g_{h}$ is surjective. It is obvious that $g_{h}$ is injective. So $g_{h}$ is bijective. Combining these $g_{h}$, we get a bijective map $g$ from $F_{1}$ to $F_{c}$. It remains to show that $g$ is indeed an isomorphism. Any directed edge of $F_{1}$ goes from some $a\in T_{1}^{h}$ to $a^{k}\in T_{1}^{h-1}$ for some $h$. We only need to show that there exists a directed edge from $g(a)$ to $g(a^{k})$ in $F_{c}$, that is $(g(a))^{k}=g(a^{k})$, that is $(g_{h}(a))^{k}=g_{h-1}(a^{k})$. Now $c_{h}^{k^{h}}=c$ implies $(c_{h}^{k})^{k^{h-1}}=c$, so by the uniqueness of $c_{h-1}$, we have $c_{h}^{k}=c_{h-1}$. So $(g_{h}(a))^{k}=(ac_{h})^{k}=a^{k}c_{h-1}=g_{h-1}(a^{k})$. \end{proof} \begin{cor}\label{iso} Let $c$ be a cycle vertex, then $T_{c}\cong T_{1}$. \end{cor} \begin{proof} Apply Theorem \ref{iso1} and the relation between $T_{c}$ and $F_{c}$.
\end{proof} Hence, every tree has the same height, which we denote by $h_{0}$, and different trees have the same number of vertices at each height. \begin{cor} For any two components $G_{1}$ and $G_{2}$ of $G(n,k)$, $G_{1}\cong G_{2}$ if and only if the unique cycles in them have the same length. \end{cor} The map $g_{h}$ in Theorem \ref{iso1} has another property, given in the following proposition. \begin{prop} If $a\in T_{1}^{h}$ and $b\in T_{c}^{h}$ with $c_{h}$ the cycle vertex such that $b=ac_{h}$, then ${\rm ord}(b)={\rm ord}(a)\cdot {\rm ord}(c)$. \end{prop} \begin{proof} Since $a^{k^{h}}=1$, ${\rm ord}(a)|k^{h}$. By Proposition \ref{same order}, ${\rm ord}(c_{h})={\rm ord}(c)|t$. So $({\rm ord}(a), {\rm ord}(c_{h}))=1$. It follows that ${\rm ord}(b)={\rm ord}(a)\cdot {\rm ord}(c)$. \end{proof} Next, we study the tree structures using heights. For any $a\in H$, denote its order ${\rm ord}(a)$ by $n_{a}$, and factor $n_{a}$ as $t_{a}w_{a}$, where $t_{a}$ is the largest factor of $n_{a}$ relatively prime to $k$. So $n_{a}|n$, $t_{a}|t$ and $w_{a}|w$. Similarly, we denote the height of $a$ by $h_{a}$. The next proposition shows that $h_{a}$ only depends on $w_{a}$. \begin{prop} \label{height1} For any $a\in H$, $h_{a}$ is the minimal $h$ such that $w_{a}|k^{h}$. In particular, $h_{0}$ is the minimal $h$ such that $w|k^{h}$. \end{prop} \begin{proof} If $n_{a}\mid t$, then $a$ is a cycle vertex, so $h_{a}=0$. Note that $w_{a}=1$, so the conclusion is correct in this case. Suppose $n_{a}\nmid t$. Since $h_{a}$ is the minimal $h$ such that $a^{k^{h}}$ is a cycle vertex, that is, the minimal $h$ such that ${\rm ord}(a^{k^{h}})=\frac{n_{a}}{(n_{a},k^{h})}\mid t$, it follows that $h_{a}$ is the minimal $h$ such that $(n_{a},k^{h})=w_{a}$, that is, the minimal $h$ such that $w_{a}\mid k^{h}$. \end{proof} \begin{cor}\label{height2} For any two vertices $a$ and $b$, if $w_{a}=w_{b}$, then they have the same height. In particular, vertices with the same order are at the same height. \end{cor} However, two vertices $a$ and $b$ at the same height may have $w_{a}\ne w_{b}$. For example, in Figure 3 in Section 5, let $a=9$ and $b=40$; then $a$ and $b$ have the same height, but $w_{a}=4$ and $w_{b}=2$. \begin{cor}\label{largest} For any vertex $a$, if $w_{a}=w$, then $a$ is at the largest height. In particular, the generators of $H$ must be at the largest height. \end{cor} \begin{proof} For any vertex $a$, $w_{a}\mid w$; then applying Proposition \ref{height1}, we get the desired result. \end{proof} \begin{cor} \label{prime} If $k$ is a prime, then for any two vertices $a$ and $b$, they have the same height if and only if $w_{a}=w_{b}$. \end{cor} \begin{proof} Since $w_{a}=k^{h_{a}}$ and $w_{b}=k^{h_{b}}$ in this case. \end{proof} Concerning the heights of the vertices, we have the following proposition and corollary. \begin{prop}\label{height} Let $a\in T_{c}$, ${\rm ord}(c)=d|t$ and $h\ge 0$. Then ${\rm ord}(a)|k^{h}d$ if and only if $a\in T_{c}^{m}$ for some $m\le h$. \end{prop} \begin{proof} Suppose ${\rm ord}(a)|k^{h}d$. Then $(a^{k^{h}})^{d}=1$, which implies ${\rm ord}(a^{k^{h}})|d$. So $a^{k^{h}}$ is a cycle vertex. Hence, there exists $m\le h$ such that $a\in T_{c}^{m}$. Conversely, suppose there exists $m\le h$ such that $a\in T_{c}^{m}$. Then $a^{k^{m}}=c$. So $(a^{k^{m}})^{d}=c^{d}=1$, which implies ${\rm ord}(a)|k^{m}d$. Hence, ${\rm ord}(a)|k^{h}d$. \end{proof} \begin{cor}\label{level} Let $a\in T_{c}$, ${\rm ord}(c)=d|t$ and $m\ge 1$.
Then $a\in T_{c}^{m}$ if and only if ${\rm ord}(a)|k^{m}d$ and ${\rm ord}(a)\nmid k^{m-1}d$. \end{cor} For any $d\ge 1$, let $H_{d}$ be the subgroup of $H$ defined by $H_{d}=\{x\in H|x^{d}=1\}$. It is well-known that $H_{d}$ is cyclic with order $(n,d)$. By Proposition \ref{vertex}, all cycle vertices of $G(n,k)$ form the subgroup $H_{t}$. By Lemma \ref{solution}, all vertices with non-zero indegree form the subgroup $H_{\frac{n}{(k,n)}}$. \begin{cor}\label{subgroup} For any $d|t$ and $h\ge 0$, $\bigcup\limits_{\substack{0\le m \le h\\c\in T^{0},\, {\rm ord}(c)|d}}T_{c}^{m}$ is exactly the subgroup $H_{k^{h}d}$. \end{cor} \begin{proof} By Proposition \ref{height} and the first part of its proof, this union consists of all $a\in H$ with ${\rm ord}(a)|k^{h}d$. So it is exactly the subgroup $H_{k^{h}d}$. \end{proof} \begin{cor} For any $l\ge 1$ and $h\ge 0$, $\bigcup\limits_{\substack{0\le m \le h\\c\in T^{0},\, \ell(c)|l}}T_{c}^{m}$ is exactly the subgroup $H_{k^{h}\cdot(t,k^{l}-1)}$. \end{cor} \begin{proof} Since $c$ is a cycle vertex, ${\rm ord}(c)|t$. Then $\ell(c)|l$ if and only if $c^{k^{l}}=c$, that is ${\rm ord}(c)|(k^{l}-1)$, that is ${\rm ord}(c)|(t,k^{l}-1)$; then apply Corollary \ref{subgroup}. \end{proof} For any set $X$, denote the number of its elements by $|X|$. \begin{prop}\label{number} For any cycle vertex $c$, we have: $(1)$ $|T_{c}|=w$. $(2)$ For $m\ge 1$, $|T^{m}|=(n,k^{m}t)-(n,k^{m-1}t)$ and $|T_{c}^{m}|=(w,k^{m})-(w,k^{m-1})$. $(3)$ If $h_{0}\ge 2$, for $1\le m\le h_{0}-1$, the number of vertices in $T_{c}^{m}$ with indegree 0 is $|T_{c}^{m}|-\frac{|T_{c}^{m+1}|}{(k,n)}$. \end{prop} \begin{proof} (1) Note that there are $t$ cycle vertices and $n=wt$; by Corollary \ref{iso}, we have $|T_{c}|=w$. (2) In Corollary \ref{subgroup}, fix $d=t$ and put $h=m$ and $h=m-1$ respectively; we have $T^{m}=\bigcup\limits_{\rm{ord}(c)|t}T_{c}^{m}=H_{k^{m}t}\setminus H_{k^{m-1}t}$. So $|T^{m}|=(n,k^{m}t)-(n,k^{m-1}t)$. Since $|T_{c}^{m}|=\frac{1}{t}|T^{m}|$, we get the other formula. (3) By Lemma \ref{solution}, the number of vertices in $T_{c}^{m}$ with non-zero indegree is $\frac{|T_{c}^{m+1}|}{(k,n)}$. \end{proof} Hence, if the unique cycle in a component of $G(n,k)$ has length $r$, then this component has $rw$ vertices. \begin{cor} $|T^{h_{0}}|\ge \frac{n}{2}$. \end{cor} \begin{proof} Recall that $h_{0}$ is the height of the trees. If $h_{0}=0$, then all vertices are in cycles, so $|T^{h_{0}}|=n \ge \frac{n}{2}$. If $h_{0}\ge 1$, then from Proposition \ref{number} (2), we have $|T^{h_{0}}|=n-(n,k^{h_{0}-1}t)=n-t(w,k^{h_{0}-1})$. Since $n\nmid k^{h_{0}-1}t$, $w\nmid k^{h_{0}-1}$, which implies $(w,k^{h_{0}-1})\le \frac{w}{2}$. Hence, we have $|T^{h_{0}}|\ge \frac{n}{2}$. \end{proof} In fact, the lower bound in the above corollary is sharp. For example, let $k=6$ and $n=2^{m}$, where $m\ge 3$; then $t=1$ and $h_{0}=m$, so $|T^{h_{0}}|=n-(n,k^{h_{0}-1}t)=\frac{n}{2}$. \begin{prop} If $n\ge 5$ and $n$ is even, then the length of the longest cycle in $G(n,k)$ is less than or equal to $\frac{n-2}{2}$. \end{prop} \begin{proof} If $(n,k)\ne 1$, then $h_{0}\ge 1$. By the above corollary, the number of non-cycle vertices is at least $\frac{n}{2}$, which implies that the number of cycle vertices is at most $\frac{n}{2}$. Since the identity element of $H$ is in a loop, the length of the longest cycle in $G(n,k)$ is less than or equal to $\frac{n}{2}-1=\frac{n-2}{2}$. If $(n,k)=1$, then all vertices are in cycles and $t=n$.
Notice that the length of the longest cycle is $\ell(t)=\ell(n)={\rm ord}_{n}k$. We factor $n$ as $2^{r}s$, where $r\ge 1$ and $(2,s)=1$. If $s\ne 1$, then $\ell(n)\le \varphi(n)=2^{r-1}\varphi(s)< 2^{r-1}s = \frac{n}{2}$. Since $\ell(n)$ is an integer, $\ell(n)\le \frac{n}{2}-1=\frac{n-2}{2}$. If $s=1$, that is $n=2^{r}$, then since $n\ge 5$ we have $r>2$, which implies that $(\mathbb{Z}/n\mathbb{Z})^{*}$ has no primitive roots, so $\ell(n)< \varphi(n)=2^{r-1}= \frac{n}{2}$, and hence $\ell(n)\le \frac{n-2}{2}$. \end{proof} Hence, the number of vertices in the largest component is less than or equal to $\frac{n-2}{2}w$. In fact, the upper bound in the above proposition is sharp. For example, let $k=2$ and $n=2p$, where $p$ is an odd prime and $2$ is a primitive root of $(\mathbb{Z}/p\mathbb{Z})^{*}$; then $t=p$ and $\ell(t)={\rm ord}_{p}2=p-1=\frac{n-2}{2}$. \section{The adjacency matrix of $G(n,k)$} For any two vertices $u$ and $v$ of $G(n,k)$, if $u^{k}=v$, we call $u$ a child of $v$. If the vertex-set of $G(n,k)$ is $\{v_{1},v_{2},\cdots,v_{n}\}$, then the adjacency matrix of $G(n,k)$ is an $n\times n$ $(0,1)$-matrix with the $(i,j)$-entry equal to the number of directed edges from $v_{i}$ to $v_{j}$; we denote it by $A(n,k)$. We label the vertices of $G(n,k)$ as follows. First, we label the vertices component by component, so we can get a block diagonal matrix. Second, for each component, we label its vertices height by height according to the child relations. For example (see Fig. 2 in \cite{VS}), let $H=(\mathbb{Z}/29\mathbb{Z})^{*}$ and $k=2$; then there are three components, see Figure 1. \begin{figure}[h] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,8) \multiput(0.65,0)(1.8,0){3}{\circle{0.7}} \put(0.55,-0.15){$7$} \put(2.25,-0.15){$20$} \put(4.05,-0.15){$23$} \multiput(1,0)(1.8,0){2}{\vector(2,0){1.1}} \put(2.45,-0.35){\oval(3.6,0.8)[b]} \put(0.65,-0.45){\vector(0,1){0.1}} \multiput(0.65,1.3)(1.8,0){3}{\circle{0.7}} \multiput(0.65,0.95)(1.8,0){3}{\vector(0,-1){0.6}} \put(0.55,1.2){$6$} \put(2.25,1.2){$22$} \put(4.15,1.2){$9$} \multiput(0.2,2.6)(0.9,0){6}{\circle{0.7}} \multiput(0.2,2.25)(1.8,0){3}{\vector(2,-3){0.4}} \multiput(1.1,2.25)(1.8,0){3}{\vector(-2,-3){0.4}} \put(0.1,2.5){$8$} \put(0.92,2.5){$21$} \put(1.82,2.5){$14$} \put(2.72,2.5){$15$} \put(3.68,2.5){$3$} \put(4.52,2.5){$26$} \multiput(6.85,0)(1.8,0){3}{\circle{0.7}} \put(6.65,-0.15){$16$} \put(8.45,-0.15){$24$} \put(10.25,-0.15){$25$} \multiput(7.2,0)(1.8,0){2}{\vector(2,0){1.1}} \put(8.65,-0.35){\oval(3.6,0.8)[b]} \put(6.86,-0.45){\vector(0,1){0.1}} \multiput(6.85,1.3)(1.8,0){3}{\circle{0.7}} \multiput(6.85,0.95)(1.8,0){3}{\vector(0,-1){0.6}} \put(6.75,1.2){$4$} \put(8.45,1.2){$13$} \put(10.35,1.2){$5$} \multiput(6.4,2.6)(0.9,0){6}{\circle{0.7}} \multiput(6.4,2.25)(1.8,0){3}{\vector(2,-3){0.4}} \multiput(7.3,2.25)(1.8,0){3}{\vector(-2,-3){0.4}} \put(6.3,2.5){$2$} \put(7.12,2.5){$27$} \put(7.97,2.5){$10$} \put(8.87,2.5){$19$} \put(9.83,2.5){$11$} \put(10.67,2.5){$18$} \put(5.55,4){\circle{0.7}} \put(5.45,3.85){$1$} \put(5.55,3.75){\oval(0.5,0.8)[b]} \put(5.3,3.65){\vector(0,1){0.1}} \put(5.55,5.3){\circle{0.7}} \put(5.55,4.95){\vector(0,-1){0.6}} \put(5.35,5.2){$28$} \put(5.1,6.25){\vector(2,-3){0.4}} \put(6.0,6.25){\vector(-2,-3){0.4}} \multiput(5.1,6.6)(0.9,0){2}{\circle{0.7}} \put(4.9,6.5){$12$} \put(5.8,6.5){$17$} \end{picture} \end{center} \qquad \caption{The digraph $G(28,2)$}\label{h12} \end{figure} We label $G(28,2)$ by $v_{1}=1, v_{2}=28,v_{3}=12,v_{4}=17,
v_{5}=7,v_{6}=20,v_{7}=23,v_{8}=6,v_{9}=22,v_{10}=9,v_{11}=8, v_{12}=21,v_{13}=14,v_{14}=15,v_{15}=3,v_{16}=26, v_{17}=16,v_{18}=24,v_{19}=25,v_{20}=4,v_{21}=13,v_{22}=5, v_{23}=2,v_{24}=27,v_{25}=10,v_{26}=19,v_{27}=11$ and $v_{28}=18$. Then the digraph $G(28,2)$ with this labelling is given in Figure 2. \begin{figure}[h] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,8) \multiput(0.65,0)(1.8,0){3}{\circle{0.7}} \put(0.5,-0.1){$v_{5}$} \put(2.25,-0.1){$v_{6}$} \put(4.05,-0.1){$v_{7}$} \multiput(1,0)(1.8,0){2}{\vector(2,0){1.1}} \put(2.45,-0.35){\oval(3.6,0.8)[b]} \put(0.65,-0.45){\vector(0,1){0.1}} \multiput(0.65,1.3)(1.8,0){3}{\circle{0.7}} \multiput(0.65,0.95)(1.8,0){3}{\vector(0,-1){0.6}} \put(0.5,1.2){$v_{8}$} \put(2.25,1.2){$v_{9}$} \put(4.05,1.2){$v_{10}$} \multiput(0.2,2.6)(0.9,0){6}{\circle{0.7}} \multiput(0.2,2.25)(1.8,0){3}{\vector(2,-3){0.4}} \multiput(1.1,2.25)(1.8,0){3}{\vector(-2,-3){0.4}} \put(0.01,2.5){$v_{11}$} \put(0.87,2.5){$v_{12}$} \put(1.77,2.5){$v_{13}$} \put(2.67,2.5){$v_{14}$} \put(3.6,2.5){$v_{15}$} \put(4.47,2.5){$v_{16}$} \multiput(6.85,0)(1.8,0){3}{\circle{0.7}} \put(6.65,-0.1){$v_{17}$} \put(8.45,-0.1){$v_{18}$} \put(10.25,-0.1){$v_{19}$} \multiput(7.2,0)(1.8,0){2}{\vector(2,0){1.1}} \put(8.65,-0.35){\oval(3.6,0.8)[b]} \put(6.86,-0.45){\vector(0,1){0.1}} \multiput(6.85,1.3)(1.8,0){3}{\circle{0.7}} \multiput(6.85,0.95)(1.8,0){3}{\vector(0,-1){0.6}} \put(6.65,1.2){$v_{20}$} \put(8.39,1.2){$v_{21}$} \put(10.25,1.2){$v_{22}$} \multiput(6.4,2.6)(0.9,0){6}{\circle{0.7}} \multiput(6.4,2.25)(1.8,0){3}{\vector(2,-3){0.4}} \multiput(7.3,2.25)(1.8,0){3}{\vector(-2,-3){0.4}} \put(6.2,2.5){$v_{23}$} \put(7.07,2.5){$v_{24}$} \put(7.97,2.5){$v_{25}$} \put(8.87,2.5){$v_{26}$} \put(9.83,2.5){$v_{27}$} \put(10.67,2.5){$v_{28}$} \put(5.55,4){\circle{0.7}} \put(5.4,3.9){$v_{1}$} \put(5.55,3.75){\oval(0.5,0.8)[b]} \put(5.3,3.65){\vector(0,1){0.1}} \put(5.55,5.3){\circle{0.7}} \put(5.55,4.95){\vector(0,-1){0.6}} \put(5.35,5.2){$v_{2}$} \put(5.1,6.25){\vector(2,-3){0.4}} \put(6.0,6.25){\vector(-2,-3){0.4}} \multiput(5.1,6.6)(0.9,0){2}{\circle{0.7}} \put(4.9,6.5){$v_{3}$} \put(5.85,6.5){$v_{4}$} \end{picture} \end{center} \qquad \caption{The digraph $G(28,2)$ with the relabeled vertices}\label{h13} \end{figure} If we partition $A(28,2)$ according to the components, then we can get a block diagonal matrix whose main diagonal blocks are square matrices. The main diagonal blocks are given as follows.
\begin{equation} \begin{matrix} B_{1}=\left( \begin{smallmatrix} &v_{1}&v_{2}&v_{3}&v_{4}\\ v_{1}&1 &0 &0 &0 \\ v_{2}&1 &0 &0 &0 \\ v_{3}&0 &1 &0 &0 \\ v_{4}&0 &1 &0 &0 \end{smallmatrix}\right),& B_{2}=\left( \begin{smallmatrix} &v_{5}&v_{6}&v_{7}&v_{8}&v_{9}&v_{10}&v_{11}&v_{12}&v_{13}&v_{14}&v_{15}&v_{16}\\ v_{5}&0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{6}&0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{7}&1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{8}&1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{9}&0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{10}&0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{11}&0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{12}&0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{13}&0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \\ v_{14}&0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \\ v_{15}&0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 \\ v_{16}&0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 \\ \end{smallmatrix}\right), \end{matrix} \notag \end{equation} \begin{equation} B_{3}=\left( \begin{smallmatrix} &v_{17}&v_{18}&v_{19}&v_{20}&v_{21}&v_{22}&v_{23}&v_{24}&v_{25}&v_{26}&v_{27}&v_{28}\\ v_{17} &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{18} &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{19} &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{20} &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{21} &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{22} &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{23} &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{24} &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \\ v_{25} &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \\ v_{26} &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 \\ v_{27} &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 \\ v_{28} &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 \\ \end{smallmatrix}\right). \notag \end{equation} Since $B_{2}$ and $B_{3}$ correspond to isomorphic components, $B_{2}=B_{3}$. If we partition each main diagonal block according to the heights, then we can get a block lower triangular matrix whose main diagonal blocks are square matrices; these main diagonal blocks are all equal to $0$ except the $(1,1)$-block. After partitioning, $B_{1},B_{2}$ and $B_{3}$ have the following form. \begin{equation} B_{i}=\begin{pmatrix} B_{i0} &0 &0 \\ B_{i1} &0 &0 \\ 0 &B_{i2} &0 \end{pmatrix},i=1,2,3. \notag \end{equation} We denote the characteristic polynomial and minimal polynomial of a matrix $A$ by $f_{A}(\lambda)$ and $m_{A}(\lambda)$ respectively. Notice that the characteristic polynomial and minimal polynomial of an $r\times r$ matrix of the following form \begin{equation}\label{matrix1} \begin{pmatrix} 0 &1 & \\ &\ddots &\ddots \\ & &\ddots&1\\ 1 & &&0 \end{pmatrix} \end{equation} are both $\lambda^{r}-1$. Hence, $f_{B_{1}}(\lambda)=\lambda^{3}(\lambda-1)$ and $f_{B_{2}}(\lambda)=f_{B_{3}}(\lambda)=\lambda^{9}(\lambda^{3}-1)$. \begin{lem}\label{minimal} If a partitioned matrix $D$ has the following form \begin{equation} D= \begin{pmatrix} D_{0} & & \\ D_{1} &0 & \\ &\ddots &\ddots&\\ & &D_{m}&0 \end{pmatrix}, \notag \end{equation} where the main diagonal blocks are all square matrices, $D_{0}$ is an $r\times r$ matrix of the form $(\ref{matrix1})$, and each $D_{i}$ $(1\le i\le m)$ is non-negative with positive $(1,1)$-entry, then $m_{D}(\lambda)=\lambda^{m}(\lambda^{r}-1)$. \end{lem} \begin{proof} It is obvious that $\lambda^{r}-1|m_{D}(\lambda)$ and that the first block row of the partitioned matrix $D^{r}-I$ is zero, where $I$ is the identity matrix.
Since \begin{equation} D^{m}= \begin{pmatrix} D_{0}^{m}&0&\cdots&0 \\ D_{1}D_{0}^{m-1} & 0&\cdots&0 \\ D_{2}D_{1}D_{0}^{m-2} &0&\cdots&0 \\ \vdots& \vdots&&\vdots\\ D_{m}D_{m-1}\times\cdots \times D_{1}&0&\cdots&0 \end{pmatrix}, \notag \end{equation} $D^{m}(D^{r}-I)=0$. Since \begin{equation} D^{m-1}= \begin{pmatrix} D_{0}^{m-1}&0 &\cdots & 0\\ D_{1}D_{0}^{m-2} & 0&\cdots& 0\\ \vdots& \vdots&&\vdots\\ D_{m-2}D_{m-3}\times \cdots\times D_{0} &0 &\cdots&0\\ D_{m-1}D_{m-2}\times\cdots \times D_{1}&0&\cdots&0\\ 0&D_{m}D_{m-1}\times\cdots \times D_{2}&\cdots&0\\ \end{pmatrix}, \notag \end{equation} the $(m+1,1)$-block of the partitioned matrix $D^{m-1}(D^{r}-I)$ is $D_{m}D_{m-1}\times\cdots \times D_{2}\times D_{1}D_{0}^{r-1}$. Since $D_{0}^{r-1}$ is invertible and non-negative, there exists a positive entry in the first row of $D_{0}^{r-1}$. Notice that $D_{m}D_{m-1}\times\cdots \times D_{2}D_{1}$ is non-negative and its $(1,1)$-entry is positive. Hence, $D_{m}D_{m-1}\times\cdots \times D_{2}\times D_{1}D_{0}^{r-1}\ne 0$. So $D^{m-1}(D^{r}-I)\ne 0$. Hence, we have $m_{D}(\lambda)=\lambda^{m}(\lambda^{r}-1)$. \end{proof} So by Lemma \ref{minimal}, $m_{B_{1}}(\lambda)=\lambda^{2}(\lambda-1)$ and $m_{B_{2}}(\lambda)=m_{B_{3}}(\lambda)=\lambda^{2}(\lambda^{3}-1)$. Recall that $h_{0}$ is the height of the trees. Let $C$ be a component of $G(n,k)$ whose unique cycle has length $r$; then $C$ has $rw$ vertices, and the characteristic polynomial and minimal polynomial of $C$ are $\lambda^{rw-r}(\lambda^{r}-1)$ and $\lambda^{h_{0}}(\lambda^{r}-1)$ respectively. Suppose the components of $G(n,k)$ consist of $m_{1}$ copies of $G_{1}$, $m_{2}$ copies of $G_{2}$, $\cdots$, $m_{s}$ copies of $G_{s}$, where $G_{1}, G_{2},\cdots, G_{s}$ are pairwise non-isomorphic and the unique cycle in each $G_{i}$ $(1\le i\le s)$ has length $r_{i}$. Then we get the following theorem. \begin{thm} $(1)$ The characteristic polynomial of $G(n,k)$ is $\prod\limits_{i=1}^{s}\big[\lambda^{r_{i}w-r_{i}}(\lambda^{r_{i}}-1)\big]^{m_{i}}$. $(2)$ The minimal polynomial of $G(n,k)$ is $\lambda^{h_{0}}(\lambda^{\ell(t)}-1)$. \end{thm} \begin{proof} The result in $(1)$ is obvious. By Proposition \ref{vertex} and Proposition \ref{cycle} $(1)$, the length of each cycle divides $\ell(t)$, which yields the result in $(2)$. \end{proof} By the discussions in Sections 2 and 3, if we specify the values of $n$ and $k$, we can explicitly calculate the data $s,t,w,h_{0}, \ell(t), m_{i}$ and $r_{i}$ $(1\le i\le s)$. Since we have determined the characteristic polynomial of $G(n,k)$, it is easy to get the eigenvalues and spectrum of $G(n,k)$. \section{The Automorphism Group of $G(n,k)$} For any graph $G$, we denote its automorphism group by Aut$(G)$. For simplicity, we denote the automorphism group of $G(n,k)$ by Aut$(n,k)$. Notice that Aut$(G)$ is a permutation group on $\{1,2,\cdots,|G|\}$. Let $S_{m}$ be the symmetric group on $\{1,2,\cdots,m\}$. Let $P_{1}$ and $P_{2}$ be two permutation groups on $\{1,2,\cdots, m\}$ and $\{1,2,\cdots, r\}$ respectively. Recall that the wreath product $P_{1}\wr P_{2}$ is generated by the direct product of $r$ copies of $P_{1}$, together with the elements of $P_{2}$ acting on these $r$ copies of $P_{1}$. Using the notation of the previous section, we get the following theorem. \begin{thm} {\rm Aut}$(n,k)\cong ({\rm Aut}(G_{1})\wr S_{m_{1}})\times ({\rm Aut}(G_{2})\wr S_{m_{2}})\times \cdots \times ({\rm Aut}(G_{s})\wr S_{m_{s}})$. \end{thm} \begin{proof} See Theorem 1.1 in \cite{C}.
\end{proof} For each component $G_{i}$ $(1\le i \le s)$, its unique cycle has length $r_{i}$; by Corollary \ref{iso}, we have the following proposition. \begin{prop} For each $1\le i \le s$, ${\rm Aut}(G_{i})\cong {\rm Aut}(T_{1})\wr <\sigma_{i}>$, where $\sigma_{i}$ is an $r_{i}$-cycle, \begin{equation} \sigma_{i}= \begin{pmatrix} 1&2&3&\cdots &r_{i}\\ 2&3&4&\cdots &1 \end{pmatrix}. \notag \end{equation} \end{prop} \begin{proof} Notice that the automorphism group of the cycle in $G_{i}$ is exactly the permutation group generated by $\sigma_{i}$. \end{proof} Hence, we only need to determine ${\rm Aut}(T_{1})$. If $(n,k)=1$, by Proposition \ref{regular}, we have ${\rm Aut}(T_{1})=\{1\}$. Then we get the following proposition. \begin{prop}\label{auto1} If $(n,k)=1$, then {\rm Aut}$(n,k)=(<\sigma_{1}>\wr S_{m_{1}})\times (<\sigma_{2}>\wr S_{m_{2}})\times \cdots \times (<\sigma_{s}>\wr S_{m_{s}})$. \end{prop} \begin{prop} If $k=1$, then {\rm Aut}$(n,k)=S_{n}$. \end{prop} \begin{prop} If $k=n$, then {\rm Aut}$(n,k)=S_{n-1}$. \end{prop} However, in general it is difficult to determine ${\rm Aut}(T_{1})$, since the vertices at the same height may have different numbers of children. For example, let $H=(\mathbb{Z}/41\mathbb{Z})^{*}$ and $k=4$; then $T_{1}$ is given as follows. \begin{figure}[h] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,3) \put(4.85,0){\circle{0.7}} \put(4.75,-0.1){$1$} \multiput(2.85,1.3)(2,0){3}{\circle{0.7}} \put(2.75,1.2){$9$} \put(4.65,1.2){$32$} \put(6.65,1.2){$40$} \put(2.85, 0.95){\vector(2,-1){1.66}} \put(4.85, 0.95){\vector(0,-1){0.6}} \put(6.85, 0.95){\vector(-2,-1){1.66}} \multiput(5.5,2.6)(0.9,0){4}{\circle{0.7}} \put(5.4,2.45){$3$} \put(6.2,2.45){$14$} \put(7.1,2.45){$27$} \put(8.0,2.45){$38$} \put(5.5, 2.25){\vector(3,-2){1.075}} \put(8.2, 2.25){\vector(-3,-2){1.075}} \put(6.4, 2.25){\vector(1,-2){0.31}} \put(7.3, 2.25){\vector(-1,-2){0.31}} \end{picture} \end{center} \caption{The tree $T_{1}$ for $n=40$ and $k=4$}\label{h14} \end{figure} Recall that if $h_{0}$ is the height of $T_{1}$, then the vertices of $T_{1}$ form the subgroup $H_{k^{h_{0}}}$, that is $H_{w}$. So ${\rm Aut}(H_{w})\subseteq {\rm Aut}(T_{1})$. In what follows we want to determine ${\rm Aut}(T_{1})$ when $k$ is a prime. For any two vertices $a$ and $b$, if there is a $g\in {\rm Aut}(n,k)$ such that $g(a)=b$, we say $a$ is isomorphic to $b$, denoted by $a\cong b$. This is an equivalence relation on $G(n,k)$. We will show that if ${\rm ord}(a)={\rm ord}(b)$, then $a\cong b$. Suppose that $M$ is a cyclic group with $m$ elements. Given three positive integers $r$, $r_{1}$ and $q$ such that $r|r_{1}|m$ and $r_{1}=rq$. For any $b\in M$ with ${\rm ord}(b)=r$, put $M_{b}=\{a\in M|a^{q}=b,{\rm ord}(a)=r_{1}\}$. Then we have the following lemma. \begin{lem}\label{solution2} For any $b\in M$ with ${\rm ord}(b)=r$, $|M_{b}|=\frac{\varphi(r_{1})}{\varphi(r)}$. \end{lem} \begin{proof} Fix a generator $\zeta$ of $M$ such that $\zeta^{\frac{m}{r}}=b$. It is easy to see that $\zeta^{\frac{m}{r_{1}}}\in M_{b}$. So $M_{b}$ is not empty. Every element with order $r_{1}$ has the unique form $\zeta^{\frac{ml_{1}}{r_{1}}}, 1\le l_{1}\le r_{1}, (l_{1},r_{1})=1$. Then we have \begin{equation} \zeta^{(\frac{ml_{1}}{r_{1}})q}=b \Leftrightarrow m|(\frac{ml_{1}}{r}-\frac{m}{r}) \Leftrightarrow r|l_{1}-1. \notag \end{equation} So $|M_{b}|=|\big\{l_{1}|1\le l_{1}\le r_{1}, (l_{1},r_{1})=1,r|l_{1}-1\big\}|$, which implies that $|M_{b}|$ only depends on $r$ and $r_{1}$ and is independent of the specified value of $b$.
Hence, given another $b'\in M$ with ${\rm ord}(b')=r$, we have $|M_{b'}|=|M_{b}|$. Since there are $\varphi(r_{1})$ elements with order $r_{1}$ and $\varphi(r)$ elements with order $r$, $|M_{b}|=\frac{\varphi(r_{1})}{\varphi(r)}$. \end{proof} \begin{cor}\label{solution3} Let $a,b\in M$ with ${\rm ord}(a)={\rm ord}(b)$ and let $q\ge 1$. Then for each positive integer $r$ such that ${\rm ord}(a)|r$, the sets $M_{1}=\{x|x^{q}=a\}$ and $M_{2}=\{x|x^{q}=b\}$ have the same number of elements with order $r$. \end{cor} \begin{proof} By Lemma \ref{solution}, $M_{1}$ and $M_{2}$ have the same number of elements. Then we can get the desired result by applying Lemma \ref{solution2}. \end{proof} \begin{thm}\label{iso2} For any $a,b\in G(n,k)$, if ${\rm ord}(a)={\rm ord}(b)$, then $a\cong b$. \end{thm} \begin{proof} By Corollary \ref{height2}, $a$ and $b$ are at the same height. Since ${\rm ord}(a^{h})={\rm ord}(b^{h})$ for any positive integer $h$, the cycles which they lead to have the same order. The desired result then follows from Corollary \ref{solution3}. \end{proof} From now on we assume that $k$ is a prime. For any $a\in T_{1}$, there exists an $h$ such that $a^{k^{h}}=1$, which implies that ${\rm ord}(a)|k^{h}$. So $w_{a}={\rm ord}(a)$. By Corollary \ref{prime}, we get the following proposition. \begin{prop}\label{T1} If $k$ is a prime, then for any $a,b\in T_{1}$, $a$ and $b$ are at the same height if and only if ${\rm ord}(a)={\rm ord}(b)$. \end{prop} Hence, all the vertices of $T_{1}$ with indegree $0$ are at the largest height $h_{0}$. Since $k$ is a prime, $(n,k)=1$ or $k$. We have discussed Aut$(n,k)$ in the case $(n,k)=1$, see Proposition \ref{auto1}. In what follows we suppose that $(n,k)=k$. Then the largest height $h_{0}\ge 1$. For $1\le h\le h_{0}$, let $T_{1h}$ be the tree originating from a vertex with height $h$ in $T_{1}$. In particular, the vertex set of $T_{1h_{0}}$ contains only one point. Proposition \ref{T1} and Theorem \ref{iso2} tell us that $T_{1h}$ is well-defined. Then we get the following proposition. \begin{prop} If $k$ is a prime and $(n,k)=k$, then we have ${\rm Aut}(T_{1})\cong {\rm Aut}(T_{11})\wr S_{k-1}$; for any $1\le h< h_{0}$, ${\rm Aut}(T_{1h})\cong{\rm Aut}(T_{1,h+1})\wr S_{k}$; and ${\rm Aut}(T_{1h_{0}})=\{1\}$. \end{prop} \section{Further Problems} We mention three further problems which may be worth studying. First, it may be interesting to consider other graph-theoretic problems for $G(n,k)$, such as the matching problem and the coloring problem. Second, it may be interesting to study the asymptotic mean numbers of cycle vertices and cycles. \cite{CS}, \cite{SH} and \cite{VS} will be helpful. Third, what will happen if $H$ is not cyclic? \cite{BHM}, \cite{SK1}, \cite{SK2}, \cite{SK3} and \cite{W} will be helpful. \section{Acknowledgment} We would like to thank Dr. Jingfen Lan for her valuable suggestions. We also thank the referee for the careful review and the valuable comments.
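As an illustration of the quantities studied above, the following minimal Python sketch (a verification aid only; the function names are ours) models the cyclic group $H$ additively as $\mathbb{Z}/n\mathbb{Z}$, so that the map $f(a)=a^{k}$ becomes $a\mapsto ka \bmod n$, counts the cycles of $G(n,k)$ by brute force, and compares the counts with the M\"{o}bius-sum formula from the remark above.

\begin{verbatim}
# A minimal sketch: cycle structure of G(n, k), with H written additively.
from math import gcd

def mobius(m):
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0            # m is not squarefree
            result = -result
        p += 1
    return -result if m > 1 else result

def cycle_counts(n, k):
    succ = [(k * a) % n for a in range(n)]   # the map a -> a^k
    counts, seen = {}, set()
    for a in range(n):
        x = a
        for _ in range(n):          # walk far enough to land on a cycle
            x = succ[x]
        if x in seen:
            continue
        seen.add(x)
        length, y = 1, succ[x]      # traverse the cycle through x once
        while y != x:
            seen.add(y)
            y = succ[y]
            length += 1
        counts[length] = counts.get(length, 0) + 1
    return counts

n, k = 28, 2                        # H = (Z/29Z)^*, as in Figure 1
for r, c in sorted(cycle_counts(n, k).items()):
    formula = sum(mobius(d) * gcd(k ** (r // d) - 1, n)
                  for d in range(1, r + 1) if r % d == 0) // r
    print(r, c, formula)            # brute force agrees with the formula
\end{verbatim}

For $n=28$ and $k=2$ the sketch reports one cycle of length $1$ and two cycles of length $3$, in agreement with Figure 1.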
\section{Introduction} Type Ia supernovae (SNe Ia) are one of the most luminous phenomena in the Universe. They have been used as the standard candle for measuring cosmological distances. By employing the correlation between the maximum luminosity and the light curve width of SNe Ia (e.g., Phillips 1993), it has been found that the expansion of the Universe is accelerating (e.g., Riess et al. 1998; Perlmutter et al. 1999). SNe Ia may also have a great influence on the chemical evolution of their host galaxies due to the production of iron-peak elements during SN Ia explosions (e.g., Greggio \& Renzini 1983; Matteucci \& Greggio 1986; Li et al. 2018). In addition, cosmic rays may be accelerated by SN remnants (e.g., Fang \& Zhang 2012; Yang et al. 2015; Zhou \& Vink 2018). However, the progenitor models for SNe Ia are still under discussion, which may influence the accuracy of measuring cosmological distances (e.g., Podsiadlowski et al. 2008; Howell 2011; Liu et al. 2012; Wang \& Han 2012; Wang et al. 2013; Maoz et al. 2014; Wang 2018). It has been suggested that SNe Ia are the thermonuclear explosions of carbon-oxygen white dwarfs (CO WDs) in close binaries (e.g., Hoyle \& Fowler 1960). There are two kinds of competing progenitor models of SNe Ia discussed frequently, i.e., the single-degenerate (SD) model and the double-degenerate (DD) model. In the SD model, a WD accretes material from a non-degenerate companion and explodes as an SN Ia when its mass approaches the Chandrasekhar mass limit (e.g., Whelan \& Iben 1973; Nomoto 1984). In the SD model, the companion could be a main sequence (MS) star, a red giant branch (RGB) star, or a helium (He) star (e.g., Li \& van den Heuvel 1997; Langer et al. 2000; Han \& Podsiadlowski 2004, 2006; Wang et al. 2009a; Ablimit et al. 2014; Wu et al. 2016; Liu et al. 2017a). In the DD model, SNe Ia arise from the merging of double CO WDs that have a total mass larger than the Chandrasekhar mass limit (e.g., Webbink 1984; Iben \& Tutukov 1984), though some studies argued that double WD mergers with sub-Chandrasekhar mass may also produce SNe Ia (e.g., Ji et al. 2013; Liu et al. 2017b). Compared with the SD model, the SN Ia rate predicted by the DD model is high enough to match the observational results (e.g., Yungelson et al. 1994; Han 1998; Nelemans et al. 2001; Ruiter et al. 2009; Liu et al. 2018). The delay time of an SN Ia is defined as the time interval from the formation of the primordial binary to the moment when the SN Ia explodes. The delay time distributions (DTDs) predicted by the DD model roughly follow a single power law, which is similar to that derived from observations (e.g., Maoz et al. 2011; Ruiter et al. 2009; Mennekens et al. 2010; Yungelson \& Kuranov 2017; Liu et al. 2018). However, some studies show that the merger of double CO WDs may produce accretion-induced collapse supernovae and eventually form a neutron star (Nomoto \& Iben 1985; Saio \& Nomoto 1985; Timmes et al. 1994). It has been proposed that an instantaneous explosion could be triggered while the merging process of double CO WDs is still ongoing, leading to the formation of an SN Ia (see Pakmor et al. 2010, 2011, 2012). This is a subclass of the DD model named the violent merger scenario. Pakmor et al. (2010) found that the violent mergers of two $0.9\, M_{\odot}$ CO WDs may produce 1991bg-like events. Pakmor et al.
(2011) suggested that the critical minimum mass ratio of double CO WDs for producing SNe Ia is $0.8$ on the basis of the violent merger scenario. R\"opke et al. (2012) found that the violent merger scenario could also explain the observational properties of SN 2011fe. Sato et al. (2016) simulated a large sample of double CO WDs, and found that the critical minimum mass of each WD for producing SNe Ia is $0.8\, M_{\odot}$ based on the violent merger scenario. Liu et al. (2016) systematically investigated the violent merger scenario by considering the WD$+$He subgiant channel for the formation of double massive WDs, and found that the WD$+$He subgiant channel may contribute to about 10\% of all SNe Ia in the Galaxy based on the violent merger scenario. Henize 2-428, a bipolar planetary nebula (PN G049.4+02.4), is located $\sim$$1.4\pm0.4\,\rm kpc$ from the solar system (see Santander-Garc\'{i}a et al. 2015). By assuming that the double He II 541.2 nm line profile is produced by the absorption of the two binary components, Santander-Garc\'{i}a et al. (2015) analysed the light curves of Henize 2-428, and found that its nucleus consists of double nearly-equal-mass CO WDs. The total mass of this system is $\sim$$1.76 \, M_{\odot}$ and the orbital period is $\sim$$4.2\,\rm hours$. According to the violent merger scenario, the double degenerate nucleus of Henize 2-428 is a strong progenitor candidate of SNe Ia. However, the formation path to the nucleus of Henize 2-428 is still unknown. In this work, we aim to investigate the evolutionary history of the bipolar planetary nebula Henize 2-428, and provide the rates and DTDs of SNe Ia from the violent merger scenario. In Sect. 2, we introduce our numerical methods. We give the results and discussion in Sect. 3. Finally, we provide a summary in Sect. 4. \section{Numerical Methods} \subsection{Violent Merger Criteria} The merging of double WDs could trigger prompt detonation and produce SNe Ia under certain conditions. In this work, we assumed that the criteria for violent WD mergers are as follows: (1) The mass ratio of double CO WDs ($q=M_{\rm WD2}/M_{\rm WD1}$) should be larger than 0.8, where $M_{\rm WD1}$ is the mass of the more massive WD and $M_{\rm WD2}$ that of the less massive one (see Pakmor et al. 2011; Liu et al. 2016). (2) The critical minimum mass of each WD is assumed to be $0.8\, M_{\odot}$ (Sato et al. 2016). (3) The delay times of SNe Ia should be less than the Hubble time, i.e., $t=t_{\rm evol}+t_{\rm GW} \leq t_{\rm Hubble}$, where $t_{\rm evol}$ is the evolutionary timescale from primordial binaries to the formation of double CO WDs, and $t_{\rm GW}$ is the timescale during which double WDs are brought together by gravitational wave radiation, written as: \begin{equation} {t_{\rm GW}} = 8 \times {10^7} \times \frac{{{{({M_{\rm WD1}} + {M_{\rm WD2}})}^{1/3}}}}{{{M_{\rm WD1}}{M_{\rm WD2}}}}{P^{8/3}}, \end{equation} in which $t_{\rm GW}$ is in units of years, $P$ is the orbital period of double WDs in hours, and $M_{\rm WD1}$ and $M_{\rm WD2}$ are in units of $M_{\odot}$. By adopting these criteria, we obtained a large number of double CO WD systems that may merge violently and then explode as SNe Ia. Subsequently, we take the double WD system whose parameters are closest to the current parameters of Henize 2-428 to approximately reconstruct the evolutionary history of Henize 2-428 and speculate on its fate. \subsection{BPS approaches} By employing the rapid binary evolutionary code (Hurley et al.
2000, 2002), we performed a series of Monte Carlo BPS simulations evolving primordial binaries to the merging of double CO WDs. In each simulation, $2\times10^{\rm 7}$ primordial binaries are calculated. The following initial parameters and basic assumptions are adopted in our Monte Carlo BPS computations: (1) The initial metallicity in our simulations is set to be 0.02. (2) We assumed that all stars are in binaries with circular orbits. (3) The initial mass function from Miller \& Scalo (1979) is adopted for the primordial primaries. (4) The initial mass ratios ($q^{\rm '}=M_{\rm 2}/M_{\rm 1}$) are assumed to distribute uniformly (e.g., Mazeh et al. 1992; Goldberg \& Mazeh 1994), i.e., $n(q^{\rm '})=1$, in which $0\leq q^{\rm '} \leq1$. (5) The distribution of the initial separation $a$ is assumed to be constant in $\log a$ for wide binaries and to fall smoothly for close binaries (e.g., Han et al. 1995). (6) The star formation rate is assumed to be constant ($5\, M_{\odot}\rm yr^{\rm -1}$) to approximate the Galaxy over the past $15\,\rm Gyr$ (see Yungelson \& Livio 1998; Willems \& Kolb 2004; Han \& Podsiadlowski 2004), or modeled as a delta function (a single star burst of $10^{\rm 10}\, M_{\odot}$ in stars) to roughly describe elliptical galaxies. \subsection{Common-Envelope Computation} The common-envelope (CE) evolution plays a critical role in the formation of double WDs. However, the prescription for calculating CE ejection is still under debate (e.g. Ivanova et al. 2013). In this work, we adopt the standard energy prescription to simulate the CE ejection process (see Webbink 1984), written as: \begin{equation} \alpha_{\rm CE}(\frac{GM^{\rm f}_{\rm don}M^{\rm f}_{\rm acc}}{2{a_{\rm f}}}-\frac{GM^{\rm i}_{\rm don}M^{\rm i}_{\rm acc}}{2{a_{\rm i}}})= \frac{GM^{\rm i}_{\rm don}M_{\rm env}}{\lambda R_{\rm don}}, \end{equation} in which $G$, $M_{\rm don}$, $M_{\rm acc}$, $a$, $M_{\rm env}$ and $R_{\rm don}$ are the gravitational constant, the donor mass, the accretor mass, the orbital separation, the mass of the donor's envelope and the donor radius, respectively. The superscripts i and f stand for these values before and after the CE ejection. From this prescription, we can see that there are two variable parameters, i.e. the CE ejection efficiency ($\alpha_{\rm CE}$) and a stellar structure parameter ($\lambda$). These two parameters may change during the evolutionary process (e.g., Ablimit et al. 2016). It has been suggested that the value of $\alpha_{\rm CE}$ may vary with WD mass, secondary mass, mass ratio, or orbital period (e.g., de Marco et al. 2011; Davis et al. 2012). Meanwhile, the values of $\lambda$ could be investigated by considering gravitational energy only, adding internal energy, or adding the entropy of the envelope (e.g., Davis et al. 2010; Xu \& Li 2010). However, the values of $\alpha_{\rm CE}$ and $\lambda$ are still highly uncertain. In the present work, similar to our previous studies (e.g., Wang et al. 2009b), we simply combined these two parameters into a single free one (i.e. $\alpha_{\rm CE}\lambda$) based on Eq.\,(2), and assumed $\alpha_{\rm CE}\lambda=1$, 2 and 3 to check its effect on the final results. \section{Results and Discussions} \begin{figure} \begin{center} \includegraphics[width=9cm,angle=0]{ms0227fig1.eps} \caption{The distribution of violent WD mergers that can produce SNe Ia in the orbital period$-$secondary mass ($\log P_{\rm orb}-M_{\rm WD2}$) plane.
Red triangles, green crosses and blue dots represent the simulated results with $\alpha_{\rm CE} \lambda=1$, 2 and 3, respectively. The filled circle with error bars represents the position of the central double degenerate core of Henize 2-428 (Santander-Garc\'{i}a et al. 2015).} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=12.5cm,angle=0]{ms0227fig2.eps} \caption{The evolutionary history and future of the planetary nebula Henize 2-428.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=9cm,angle=0]{ms0227fig3.eps} \caption{Evolution of SN Ia rates in the Galaxy based on the violent merger scenario. Here, we adopt a constant star formation rate of $5\,M_{\odot} \rm yr^{-1}$. The blue dotted, red dashed and black solid curves correspond to the cases with $\alpha_{\rm CE} \lambda=1$, 2 and 3, respectively.} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=9cm,angle=0]{ms0227fig4.eps} \caption{Similar to Fig.\,3, but for the delay time distributions of SNe Ia. Here, we adopt a star burst of $10^{10}\, M_{\odot}$ in stars. The open circles are from Totani et al. (2008), the filled triangles and squares are taken from Maoz et al. (2010, 2012) and the open square is from Graur \& Maoz (2013).} \end{center} \end{figure*} Fig.\,1 presents the distribution of double CO WDs that can produce SNe Ia via the violent merger scenario in the $\log P_{\rm orb}-M_{\rm WD2}$ plane. We find that as the value of $\alpha_{\rm CE}\lambda$ increases, the distribution of orbital periods becomes wider and more double WDs are produced. The reason is that a larger value of $\alpha_{\rm CE}\lambda$ means that less orbital energy is required to unbind the CE, so that CE ejection occurs more easily. The nucleus of Henize 2-428 consists of two nearly-equal-mass $0.88\pm0.13\, M_{\odot}$ CO WDs, and their orbital period is $\sim$$4.2\,\rm hours$ (Santander-Garc\'{i}a et al. 2015). In this figure, we also show the position of the central double degenerate nucleus of Henize 2-428. Note that for the case of $\alpha_{\rm CE}\lambda=3$, the parameters of several formed double WDs fall within the observational error range of Henize 2-428. We adopted the double WD system that is closest to the current parameters of Henize 2-428 to approximate its evolutionary path (see Fig.\,2). Fig.\,2 shows the evolutionary path of Henize 2-428. The primordial binary consists of a $5.4\, M_{\odot}$ primary and a $2.7\, M_{\odot}$ secondary, with an initial orbital period of $\sim$$15.9\,\rm days$. The primordial primary evolved to a subgiant after about $85.47\,\rm Myr$. The radius of the primary reaches $22.39\,\rm R_{\odot}$ and it fills its Roche-lobe at $t=85.67\,\rm Myr$ (Stage 2), resulting in a stable mass-transfer process. After about $0.5\,\rm Myr$, the H-rich shell of the primordial primary is exhausted and the mass transfer stops. In this case, the primordial primary becomes a $0.97 \, M_{\odot}$ He MS star, and the primordial secondary becomes a $7.18\, M_{\odot}$ MS star (Stage 3). The orbital period at this stage is $\sim$$162\,\rm days$. At $t=111.82\,\rm Myr$, the primordial primary becomes a He subgiant with a radius of $54.95\,\rm R_{\odot}$ and fills its Roche-lobe again, leading to another stable mass-transfer process (Stage 4).
When the He-rich shell of the primordial primary is exhausted, the binary evolves into a $0.88\, M_{\odot}$ CO WD and a $7.22\, M_{\odot}$ MS star with an orbital period of $175\,\rm days$ (Stage 5). Subsequently, the primordial secondary continues to evolve, and fills its Roche-lobe when it becomes a subgiant star with a radius of $102.8\,\rm R_{\odot}$ at $t=129.06\,\rm Myr$ (Stage 6). At this stage, the mass transfer is dynamically unstable, leading to the formation of the first CE (Stage 7). After the CE ejection, the orbital period shrinks to $0.722\,\rm days$, and the primordial secondary becomes a $1.43 \, M_{\odot}$ He star (Stage 8). The He star continues to evolve, and fills its Roche-lobe again after it evolves to the He subgiant stage at about $t=140\,\rm Myr$ (Stage 9). At this stage, a CE is formed due to the dynamically unstable mass transfer (Stage 10). After the CE ejection, the binary becomes a double WD system with nearly equal masses, in which $M_{\,\rm WD1}=0.88 \, M_{\odot}$ and $M_{\,\rm WD2}=0.78 \, M_{\odot}$. During this process, the orbital period shrinks to $0.716\,\rm days$ (Stage 11). The formed double WD system fits well with the observed parameters of Henize 2-428, i.e., the evolutionary history of Henize 2-428 is reproduced. Previous works on the shapes of nebulae have revealed that bipolar nebulae originate from the CE ejection process (e.g., Han et al. 1995). According to our calculations, we found that the bipolar planetary nebula Henize 2-428 may evolve from the binary in the CE phase with two CO cores. Afterwards, driven by gravitational wave radiation, the formed double WDs will merge $838\,\rm Myr$ later, resulting in the production of an SN Ia via the violent merger scenario at about $t=977\,\rm Myr$ (Stage 12). Fig.\,3 shows the evolution of SN Ia rates in the Galaxy based on the violent merger scenario. In this figure, we adopt a constant star formation rate of $5 \, M_{\odot}\rm yr^{-1}$. From this figure, we can see that the Galactic rates of SNe Ia range from $0.4\times 10^{-4} \,\rm yr^{\rm -1}$ to $2.9\times 10^{-4} \,\rm yr^{\rm -1}$. In the observations, the Galactic SN Ia rate is about 3$-$$4\times 10^{-3} \,\rm yr^{-1}$; that is, the violent merger scenario may contribute to about 1$-$10\% of all SNe Ia in the Galaxy. Note that the rate increases with the value of $\alpha_{\rm CE} \lambda$. That is because, for a larger value of $\alpha_{\rm CE}\lambda$, more double WD systems are produced (see Fig.\,1). Note that Ablimit et al. (2016) also found that the Galactic SN Ia rate is in the range of $8.2\times10^{\rm -5}$$-$$1.7\times10^{\rm -4}\,\rm yr^{\rm -1}$ based on the violent merger scenario, which is generally similar to the results of the present work. Fig.\,4 displays the DTDs of SNe Ia predicted by the violent merger scenario. Here, we adopt a star burst of $10^{10}\, M_{\odot}$ in stars. The delay times of SNe Ia from the violent merger scenario are in the range of $\sim$$90\,\rm Myr$ to the Hubble timescale, contributing to SNe Ia with young, intermediate and old ages. For the cases of $\alpha_{\rm CE}\lambda=1$ and 2, the cut-off at the large end of $\log(t)$ is real on the basis of our calculations. For the case of $\alpha_{\rm CE}\lambda=3$, the cut-off at the large end is artificial because the time has already reached the Hubble time. Note that the likelihood for the formation of double WDs with unit mass ratio is still under debate. García-Berro et al.
(2015) argued that it is difficult to produce double WDs with unit mass ratio, and that this kind of double WDs is rare. The present work provides a possible path for the formation of double WDs with unit mass ratio, and speculates that the number of such double WDs may not be negligible, which is consistent with the results of Santander-Garc\'ia et al. (2015) and Ablimit et al. (2016). For the CE ejection parameters, previous studies on the DD model of SNe Ia usually assumed values of $\alpha_{\rm CE} \lambda$ ranging from about 0.5 to 2.0 (e.g., Yungelson \& Kuranov 2017; Liu et al. 2018). However, a larger CE ejection parameter is also widely used. Nelemans et al. (2000) studied the formation of double He WDs and found that the CE parameter $\alpha_{\rm CE}\lambda$ could be in the range of 1 to 3. Some observations of post-CE binaries show that the values of $\alpha_{\rm CE} \lambda$ may vary from 0.01 to 5 (e.g., Zorotovic et al. 2010). In this work, we found that in order to reproduce the current stage of the planetary nebula Henize 2-428, a large CE ejection parameter of $\alpha_{\rm CE} \lambda=3$ is needed. \section{Summary} In the present work, we reproduced the evolutionary history and predicted the future of the planetary nebula Henize 2-428. We found that the planetary nebula may originate from a primordial binary that has a $\sim$$5.4\, M_{\odot}$ primary and a $\sim$$2.7\, M_{\odot}$ secondary with an initial orbital period of $\sim$$15.9\,\rm days$. After the birth of the double CO WDs, they would merge and produce an SN Ia through the violent merger scenario after $\sim$$840\,\rm Myr$. In order to form Henize 2-428, a large CE parameter ($\alpha_{\rm CE}\lambda=3$) is needed. According to our calculations, we also found that the Galactic rate of SNe Ia is in the range 0.4$-$$2.9\times 10^{-4} \,\rm yr^{-1}$ and the delay times range from $\sim$$90\,\rm Myr$ to the Hubble timescale. For a better understanding of the violent merger scenario of SNe Ia, more numerical simulations and more observed double WD candidates are required. \begin{acknowledgements} We acknowledge useful comments and suggestions from the anonymous referee. We also acknowledge useful comments and suggestions from Zhanwen Han. We would like to thank Linying Mi for technical support. We appreciate the Chinese Astronomical Data Center (CAsDC) and the Chinese Virtual Observatory (China-VO) for offering the computing platform. This study is supported by the National Natural Science Foundation of China (Nos 11873085, 11673059 and 11521303), the Chinese Academy of Sciences (Nos QYZDB-SSW-SYS001 and KJZD-EW-M06-01), and the Yunnan Province (Nos 2017HC018 and 2018FB005). \end{acknowledgements}
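To make the two prescriptions of Section 2 concrete, the following minimal Python sketch (ours; the prefactor of Eq.\,(1) is used exactly as printed, and other works adopt slightly different conventions, so the absolute timescale should be taken as indicative) evaluates the gravitational wave merger timescale of Eq.\,(1) for the observed nucleus of Henize 2-428, and solves the energy prescription of Eq.\,(2), with $\alpha_{\rm CE}$ and $\lambda$ combined into a single parameter, for the post-CE orbital separation.

\begin{verbatim}
# A minimal sketch of Eqs. (1) and (2); masses in Msun, P in hours.

def t_gw_yr(m_wd1, m_wd2, p_hours):
    # Gravitational wave merger timescale of Eq. (1), in years.
    return 8.0e7 * (m_wd1 + m_wd2) ** (1.0 / 3.0) / (m_wd1 * m_wd2) \
        * p_hours ** (8.0 / 3.0)

def post_ce_separation(m_don_i, m_don_f, m_acc, r_don, a_i, alpha_ce_lambda):
    # Final separation a_f from Eq. (2); the gravitational constant
    # cancels, and r_don and a_i must be given in the same unit.
    m_env = m_don_i - m_don_f                 # ejected envelope mass
    rhs = m_don_i * m_acc / (2.0 * a_i) \
        + m_don_i * m_env / (alpha_ce_lambda * r_don)
    return m_don_f * m_acc / (2.0 * rhs)

# Observed nucleus of Henize 2-428: two 0.88 Msun WDs with P = 4.2 h.
print("t_GW = %.2e yr" % t_gw_yr(0.88, 0.88, 4.2))
\end{verbatim}

The resulting timescale is shorter than the Hubble time, so the observed system satisfies criterion (3) of Section 2.1.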
\section{Introduction}\label{sec:introduction} In this paper, we obtain Chebyshev (uniform) approximation optimality conditions for multivariate functions. The theory of Chebyshev approximation for univariate functions (in particular, polynomial and piecewise polynomial approximation) was developed in the late nineteenth century (Chebyshev~\cite{chebyshev}) and in the twentieth century (\cite{nurnberger, rice67, Schumaker68} and many others). Most authors were working on polynomial and polynomial spline approximations, due to their simplicity and flexibility; however, other types of functions (for example, trigonometric functions) have also been used. Most univariate approximation optimality conditions are based on the notion of alternating sequence: maximal deviation points with alternating deviation signs. There have been several attempts to extend this theory to the case of multivariate functions~\cite{rice63}. In that paper the author underlines the fact that the main difficulty is to extend the notion of alternating sequence to the case of more than one variable, since $\mathds{R}^d$, unlike $\mathds{R}$, is not totally ordered and therefore the extension of the notion of alternating sequence is not trivial. There have also been several studies in a slightly different direction. Several researchers were working in the area of multivariate interpolation~\cite{Nurn_Davydov98multivar_interpolation, Nurnberger_multivariate_inter}, where triangulation based approaches were used to extend the notion of polynomial splines to the case of multivariate functions. These papers are also dedicated to the extension of the notion of polynomial splines to the case of multivariate approximation, since it is not clear how to extend the notion of knots (points of switching from one polynomial to another) to $\mathds{R}^d$. The objective functions appearing in the corresponding optimisation problems are convex and nonsmooth (minimisation of the maximal absolute deviation). Therefore, it is natural to use nonsmooth optimisation techniques to tackle this problem. Our approach is based on the notion of subdifferentials of convex functions~\cite{Rockafellar70}. Subdifferentials can be considered as a generalisation of the notion of gradients for convex nondifferentiable functions. In particular, our objective function is the supremum of affine functions and therefore we use~\cite[Theorem 2.4.18]{Zalinescu2002}. The necessary and sufficient optimality conditions first appeared in \cite{matrix}, a short discussion paper submitted to the MATRIX program volume ``Approximation and Optimisation\rq{}\rq{}. In the current paper we elaborate on these results. In particular, we develop a fast point reduction based algorithm for necessary optimality condition verification. Apart from being computationally efficient, this algorithm clearly connects the notion of alternating sequence in the univariate case with its multivariate analogue. We also propose a procedure for necessary and sufficient optimality conditions verification that is based on a generalisation of the notion of alternating sequence to the case of multivariate polynomials. The paper is organised as follows. In Section~\ref{sec:optimality conditions} we present the most relevant results from the theory of convex and nonsmooth analysis, investigate the extremum properties of the objective function appearing in Chebyshev approximation problems from the points of view of convexity and nonsmooth analysis and develop the necessary and sufficient optimality conditions.
Then in Section~\ref{seq:relation_with_existing_multivariate} we demonstrate the relation with other optimality results for the multivariate case, obtained by J. Rice~\cite{rice63}. In Section~\ref{sec:counterexample} we use the optimality conditions obtained in Section~\ref{sec:optimality conditions} to derive a generalisation of the notion of alternating sequence in multivariate settings. This generalisation offers a full (necessary and sufficient) characterisation of best approximation by multivariate polynomials. In Section~\ref{sec:algorithm} we develop a fast algorithm for necessary optimality condition verification and demonstrate its similarity with univariate point reduction. Finally, in Section~\ref{sec:conclusions} we draw our conclusions and underline further research directions. \section{Optimality conditions}\label{sec:optimality conditions} \subsection{Convexity of the objective}\label{ssec:convexObjective} We start with the analysis of the objective function. A continuous function $f$ is to be approximated on a compact set $Q\subset \mathds{R}^d$ by a function \begin{equation}\label{eq:model_function} L(\A,\x)=a_0+\sum_{i=1}^{n}a_ig_i(\x), \end{equation} where $g_i(\x)$ are the basis functions and $\A = (a_0,a_1,\dots,a_n)$ is the vector of the corresponding coefficients. In the case of polynomial approximation, basis functions are monomials. Other types of basis functions (for example, trigonometric) are also possible. At a point \(\x\) the deviation between the function \(f\) and the approximation is \begin{equation}\label{eq:deviation} d(\A,\x) = |f(\x) - L(\A,\x)|. \end{equation} The uniform approximation error over the set \(Q\) is \begin{equation} \label{eq:uniformdeviation} \Psi(\A)=\|f(\x)-a_0-\sum_{i=1}^{n}a_ig_i(\x)\|_{\infty}, \end{equation} where the set $Q$ is either a hyperbox, such that $p_i\leq x_i\leq q_i,~i=1,\dots,d$, or a finite set of points. Note that $$\Psi(\A)=\sup_{\x\in Q} \max\{f(\x)-a_0-\sum_{i=1}^{n}a_ig_i(\x),a_0+\sum_{i=1}^{n}a_ig_i(\x)-f(\x)\}$$ and therefore the corresponding optimisation problem is as follows. \begin{equation}\label{eq:obj_fun_con} \mathrm{minimise~}\Psi(\A) \mathrm{~subject~to~} \A\in \mathds{R}^{n+1}. \end{equation} Since the function \(L(\A,\x)\) is linear in \(\A\), the approximation error function \(\Psi(\A)\), as the supremum of affine functions, is convex. Convex analysis tools~\cite{Rockafellar70} can be applied to study this function. Define by \(E^+(\A)\) and \(E^-(\A)\) the points of maximal positive and negative deviation (extreme points): \begin{align*} E^+(\A) &= \Big\{\x\in Q: L(\A,\x) - f(\x) = \max_{\y\in Q} d(\A,\y)\Big\}\\ E^-(\A) &= \Big\{\x\in Q: f(\x) - L(\A,\x) = \max_{\y\in Q} d(\A,\y)\Big\} \end{align*} and the corresponding \(G^+(\A)\) and \(G^-(\A)\) as \begin{align*} G^+(\A) &= \Big\{(1,g_1(\x),\dots,g_n(\x))^T: \x\in E^+(\A)\Big\}\\ G^-(\A) &= \Big\{(1,g_1(\x),\dots,g_n(\x))^T: \x\in E^-(\A)\Big\} \end{align*} Then the subdifferential of the approximation error function \(\Psi(\A)\) at a point \(\A\) can be obtained using the active affine functions in the supremum~\cite[Theorem 2.4.18]{Zalinescu2002} and \cite{ioffeTikhomirov}: \begin{equation} \label{eq:subdifferentialObjective} \partial \Psi(\A) = \mathrm{co}\left\{G^+(\A)\cup\left(-G^-(\A)\right)\right\}. \end{equation} $\A^*$ is a minimum of the convex function~$\Psi(\A)$ if and only if the following condition holds~\cite{Rockafellar70}: \begin{equation} \label{eq:subdifferential_optimal} \mathbf{0}_{n+1}\in \partial \Psi(\A^*) = \mathrm{co}\left\{G^+(\A^*)\cup\left(-G^-(\A^*)\right)\right\}.
\end{equation} This condition is a necessary and sufficient optimality condition for Chebyshev approximation. In the rest of this section we demonstrate how this condition can be interpreted geometrically. \subsection{Optimality conditions: general case}\label{ssec:opt_general} In the case of univariate polynomial approximation, the optimality conditions are based on the notion of alternating sequence. \begin{definition} A sequence of maximal deviation points whose deviation signs are alternating is called an alternating sequence (also called alternance). \end{definition} This problem was studied by Chebyshev~\cite{chebyshev}. \begin{theorem}(Chebyshev) A degree $n$ polynomial approximation is optimal if and only if there exists an alternating sequence of $n+2$ points. \end{theorem} In the case of multivariate approximation the notion of alternating sequence, as a basis for optimality verification, has to be modified. Note that the basis functions $$1,~g_i,~i=1,\dots,n$$ are not restricted to monomials. The following theorem holds \cite{matrix} (we present the proof for completeness). \begin{theorem}\label{thm:main} $\A^*$ is an optimal solution to problem~(\ref{eq:obj_fun_con}) if and only if the convex hulls of the vectors $(g_1(\x),\dots,g_n(\x))^T$, built over the corresponding positive and negative maximal deviation points, intersect; equivalently (since the first coordinate of every vector in $G^+(\A^*)$ and $G^-(\A^*)$ is equal to one), \begin{equation}\label{eq:convex_hulls_intersect} \mathrm{co}\left\{G^+(\A^*)\right\}\cap\mathrm{co}\left\{G^-(\A^*)\right\}\ne\emptyset. \end{equation} \end{theorem} \begin{proof} The vector \(\A^*\) is an optimal solution to the convex problem \eqref{eq:obj_fun_con} if and only if \[ \mathbf{0}_{n+1} \in \partial \Psi(\A^*), \] where $\Psi$ is defined in \eqref{eq:uniformdeviation}. Note that due to Carath\'eodory's theorem, $\mathbf{0}_{n+1}$ can be constructed as a convex combination of a finite number of points (one more than the dimension of the corresponding space). Since the dimension of the corresponding space is $n+1$, it can be done using at most $n+2$ points. Assume that in this collection of $n+2$ points $k$ points ($h_i,~i=1,\dots,k$) are from~$G^+(\A^*)$ and $n+2-k$ points ($h_i,~i=k+1,\dots,n+2$) are from $-G^-(\A^*)$. Note that $0<k<n+2$, since the first coordinate is either~1 or $-1$ and therefore $\mathbf{0}_{n+1}$ can only be formed by using both sets ($G^+(\A^*)$ and $-G^-(\A^*)$). Then $$\mathbf{0}_{n+1}=\sum_{i=1}^{n+2}\alpha_ih_i,\quad 0\leq\alpha_i\leq 1,\quad \sum_{i=1}^{n+2}\alpha_i=1.$$ Let $0<\gamma=\sum_{i=1}^{k}\alpha_i$, then $$\mathbf{0}_{n+1}=\sum_{i=1}^{n+2}\alpha_ih_i=\gamma\sum_{i=1}^{k}\frac{\alpha_i}{\gamma}h_i+(1-\gamma)\sum_{i=k+1}^{n+2}\frac{\alpha_i}{1-\gamma}h_i=\gamma h^+ +(1-\gamma)h^-,$$ where $h^+\in \mathrm{co}\left\{G^+(\A^*)\right\}$ and $h^-\in \mathrm{co}\left\{-G^-(\A^*)\right\}$. Therefore, it is enough to demonstrate that $\mathbf{0}_{n+1}$ is a convex combination of two vectors, one from $\mathrm{co}\left\{G^+(\A^*)\right\}$ and one from $\mathrm{co}\left\{-G^-(\A^*)\right\}$. By the formulation of the subdifferential of \(\Psi\) given by \eqref{eq:subdifferentialObjective}, there exists a nonnegative number \(\gamma \leq 1\) and two vectors \[ g^+ \in \mathrm{co}\left\{ \begin{pmatrix} 1\\ g_1(\x)\\ g_2(\x)\\ \vdots \\ g_n(\x) \end{pmatrix}: \x \in E^+(\A^*)\right\}, \mathrm{~and~} g^- \in \mathrm{co}\left\{ \begin{pmatrix} 1\\ g_1(\x)\\ g_2(\x)\\ \vdots \\ g_n(\x) \end{pmatrix}: \x \in E^-(\A^*)\right\} \] such that \(\mathbf{0} = \gamma g^+ - (1-\gamma) g^-\). Noticing that the first coordinates \(g^+_1 = g^-_1 = 1\), we see that \(\gamma = \frac{1}{2}\). This means that \(g^+ - g^- = 0\), that is, \(g^+ = g^-\).
Equivalent results have been obtained in~\cite{rice63}. Rice's optimality verification is based on the separation of positive and negative maximal deviation points by a polynomial of the same degree as the degree of the approximation ($m$): if there exists no polynomial of degree~$m$ that separates the positive and negative maximal deviation points, but the removal of any maximal deviation point makes the remaining points separable by a polynomial of degree~$m$, then the approximation is optimal (see Section~\ref{seq:relation_with_existing_multivariate}). The conditions of Theorem~\ref{thm:main} are easier to verify, since we only need to check whether two polytopes intersect (a convex quadratic problem). This can be done using, for example, the CGAL software~\cite{cgal}. When $n$ is very large, it may be beneficial to simplify these optimality conditions even further. In the rest of this section we show how Theorem~\ref{thm:main} can be used to formulate necessary and sufficient optimality conditions for the case of multivariate polynomial approximation. We also develop a fast algorithm that verifies optimality conditions for multivariate polynomial approximation (necessity only). \subsection{Optimality conditions for multivariate linear functions} \label{ssec:opt_linear_multi} In the case of linear functions (multivariate case) $n=d$, $g_i(\x)=x_i$, and Theorem~\ref{thm:main} can be formulated as follows. \begin{theorem}\label{thm:main_lin} $\A^*$ is an optimal solution to problem~(\ref{eq:obj_fun_con}) if and only if the convex hull of the maximal deviation points with positive deviation and the convex hull of the maximal deviation points with negative deviation have common points. \end{theorem} Theorem~\ref{thm:main_lin} can be considered as an alternative formulation of the necessary and sufficient optimality conditions that are based on the notion of alternating sequence. Clearly, Theorem~\ref{thm:main_lin} can be used in the univariate case, since the location of the points of an alternating sequence ensures that the corresponding convex hulls, constructed over the maximal deviation points with positive and negative deviations respectively, have common points. Note that in general $d\leq n$.
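As a toy illustration of Theorem~\ref{thm:main_lin} in the univariate case $d=n=1$ (assuming the \texttt{hulls\_intersect} sketch above is in scope):
\begin{verbatim}
import numpy as np
# Alternating signs at 0, 0.5, 1 (E^+ = {0.5}, E^- = {0, 1}):
print(hulls_intersect(np.array([[0.5]]),
                      np.array([[0.0], [1.0]])))    # True  -> optimal
# Non-alternating signs (E^+ = {1}, E^- = {0, 0.5}):
print(hulls_intersect(np.array([[1.0]]),
                      np.array([[0.0], [0.5]])))    # False -> not optimal
\end{verbatim}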
\subsection{Optimality conditions for multivariate polynomial (non-linear) functions} \label{ssec:opt_polynomial_multi} We start by introducing the following definitions and notation. \begin{definition} An exponent vector $$\e=(e_1,\dots,e_d)\in \mathds{N}^d$$ for $\x\in\mathds{R}^d$ defines a {\em monomial} $$\x^{\e}=x_1^{e_1} x_2^{e_2} \dots x_d^{e_d}.$$ \end{definition} \begin{definition} A product $c\x^{\e}$, where $c\ne 0$, is called a term. A multivariate polynomial is a sum of a finite number of terms. \end{definition} \begin{definition} The degree of a monomial $\x^{\e}$ is the sum of the components of $\e$: $$\deg(\x^{\e})=|\e|=\sum_{i=1}^{d}e_i.$$ \end{definition} \begin{definition} The degree of a polynomial is the largest degree of the monomials composing it. \end{definition} Let us consider some essential properties of polynomials and monomials. \begin{enumerate} \item For any exponent $\e=(e_1,\dots, e_d)$ with $|\e|=m$, the monomial $\x^{\tilde{\e}}$ has degree $m+1$, where $\tilde{e}_k=e_{k}+1$ for some $k\in\{1,\dots,d\}$ and $\tilde{e}_i=e_i$ for all $i\neq k$. Any monomial of degree $m+1$ can be obtained in such a way. \item For any exponent $\e=(e_1,\dots, e_d)$ with $|\e|=m$, the monomial $\x^{\tilde{\e}}$ has degree $m-1$, where $\tilde{e}_k=e_{k}-1$ for some $k\in\{1,\dots,d\}$ with $e_k>0$ and $\tilde{e}_i=e_i$ for all $i\neq k$. Any monomial of degree $m-1$ can be obtained in such a way. \end{enumerate} Denote the vector of all monomials of degree at most $m$ by $\M^m(\x)$; that is, the components of $\M^m(\x)$ have the form $\x^{\e}$ where $\e\in \mathds{N}^d$, $|\e|\leq m$. Denote the number of such monomials (the dimension of the vector $\M^m(\x)$) by $n_m$. In general, a polynomial of degree $m$ can be written as \begin{equation}\label{eq:polynomials} P^m(\x)=a_0+\sum_{i=1}^{n_m}a_i\M^m_i(\x), \end{equation} where $a_i$ are the coefficients and the basis functions are the monomials $g_i(\x)=\x^{\e_i}$; for the degree to be exactly $m$, there must exist $\e_k$ such that $|\e_k|=m$ and $a_k\ne 0$. Any polynomial $P^m$ from~(\ref{eq:polynomials}) can be represented as the sum of lower degree polynomials (of degree $m-1$ or less) and a finite number of terms that correspond to the monomials of degree $m$. The following lemma is almost obvious, but we state it since it is used repeatedly. \begin{lemma}\label{lem:monomial} Consider two sets of non-negative coefficients \begin{itemize} \item $\alpha_i\geq 0,~i=1,\dots,n$ such that $\sum_{i=1}^{n}\alpha_i=1$; \item $\beta_i\geq 0,~i=1,\dots,n$ such that $\sum_{i=1}^{n}\beta_i=1$. \end{itemize} If \begin{equation}\label{eq:lem_1} \sum_{i=1}^{n}\alpha_ia_ix_i=\sum_{i=1}^{n}\beta_ib_iy_i \end{equation} and \begin{equation}\label{eq:lem_2} \sum_{i=1}^{n}\alpha_ia_i=\sum_{i=1}^{n}\beta_ib_i, \end{equation} then for any scalar $\delta$ the following equality holds: \begin{equation}\label{eq:lem_3} \sum_{i=1}^{n}\alpha_ia_i(x_i-\delta)=\sum_{i=1}^{n}\beta_ib_i(y_i-\delta). \end{equation} \end{lemma} \begin{proof} \begin{align*} \sum_{i=1}^{n}\alpha_ia_i(x_i-\delta)=&\sum_{i=1}^{n}\alpha_ia_ix_i-\delta\sum_{i=1}^{n}\alpha_ia_i\\ =&\sum_{i=1}^{n}\beta_ib_iy_i-\delta\sum_{i=1}^{n}\beta_ib_i\\ =&\sum_{i=1}^{n}\beta_ib_i(y_i-\delta). \end{align*} \end{proof} In the case of polynomial approximation, Condition~\eqref{eq:convex_hulls_intersect} can be written as \begin{equation} \label{eq:convexHullsPolynomials} \mathrm{co}\left\{\M^m(\x): \x\in E^+\right\} \cap \mathrm{co}\left\{\M^m(\x): \x\in E^-\right\} \neq \emptyset. \end{equation} Note that, due to Lemma~\ref{lem:monomial}, one can assume that all the coordinates $x_i$ appearing in the monomials are non-negative, since $\delta$ can be chosen as $$\min_{\x\in E^+\cup E^-}\ \min_{i=1,\dots,d}x_i,$$ the smallest coordinate over all the maximal deviation points.
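The vector $\M^m(\x)$ appearing in Condition~\eqref{eq:convexHullsPolynomials} is straightforward to materialise; a minimal sketch (the enumeration order of the exponents is an arbitrary choice):
\begin{verbatim}
from itertools import product
import numpy as np

def monomial_vector(x, m):
    """All monomials x^e with 1 <= |e| <= m (the constant is excluded)."""
    d = len(x)
    exps = [e for e in product(range(m + 1), repeat=d) if 1 <= sum(e) <= m]
    return np.array([np.prod([x[i] ** e[i] for i in range(d)]) for e in exps])

print(len(monomial_vector(np.zeros(2), 2)))  # n_m = 5 for d = 2, m = 2
\end{verbatim}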
Then Theorem~\ref{thm:main} can be formulated as follows. \begin{theorem}\label{thm:pol} A polynomial of degree $m$ is a best polynomial approximation if and only if there exist non-negative coefficients $\alpha_\x$, $\x\in E^+(\A)\cup E^-(\A)$ (with at least one positive coefficient in each set), such that $$\sum_{\x\in E^+(\A)}\alpha_{\x}=\sum_{\x\in E^-(\A)}\alpha_{\x}=1$$ and for any monomial~$\x^{\e}$ of degree at most $m$ the following equality holds: \begin{equation}\label{eq:monom} \sum_{\x\in E^+(\A)}\alpha_{\x}\x^{\e}=\sum_{\x\in E^-(\A)}\alpha_{\x}\x^{\e}. \end{equation} \end{theorem} Note that any monomial $\x^{\e}$ of degree $m\geq 1$ can be represented as a product of a lower degree monomial and the $i$-th coordinate~$x_i$ of $\x$. Therefore, Theorems~\ref{thm:main}~and~\ref{thm:pol} can also be formulated as follows. \begin{theorem}\label{thm:pol1} A polynomial of degree $m$ is a best polynomial approximation if and only if there exist non-negative coefficients $\alpha_\x$, $\x\in E^+(\A)\cup E^-(\A)$ (with at least one positive coefficient in each set), such that $$\sum_{\x\in E^+(\A)}\alpha_{\x}=\sum_{\x\in E^-(\A)}\alpha_{\x}=1$$ and for any monomial~$\x^{\e}$ of degree at most $m-1$ and every coordinate index $i = 1,\ldots,d$ the following equality holds: \begin{equation}\label{eq:monom1} \sum_{\x\in E^+(\A)}\alpha_{\x}\x^{\e}x_i=\sum_{\x\in E^-(\A)}\alpha_{\x}\x^{\e}x_i. \end{equation} \end{theorem} Theorem~\ref{thm:pol1} provides a characterisation in terms of monomials of degree one less than in Theorem~\ref{thm:pol}. Note that the coefficients $\alpha_{\x}\x^{\e}$ appearing in~\eqref{eq:monom1} may fail to form a convex combination, since some of them may be negative. One can make these coefficients non-negative by applying Lemma~\ref{lem:monomial}. This can be achieved in a number of ways. For example, for a monomial $\x^{\e}$ of degree at most $m-1$, apply Lemma~\ref{lem:monomial} with $\delta$ chosen as \begin{equation}\label{eq:delta_min} \delta=\min_{j=1,\dots, d,~e_j>0}x_j \end{equation} or \begin{equation}\label{eq:delta_max} \delta=-\max_{j=1,\dots, d,~e_j>0}x_j. \end{equation} The necessary and sufficient optimality conditions formulated in Theorems~\ref{thm:main}--\ref{thm:pol1} are not very easy to verify when $n$ is very large. In particular, $n_m$ grows very fast as the polynomial degree increases, especially in the case of multivariate polynomials. In Section~\ref{sec:algorithm} we develop a necessary optimality condition that is more practical.
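Combining the sketches above, the condition of Theorem~\ref{thm:pol} reduces to a single polytope intersection test in the monomial coordinates. A hypothetical helper (assuming \texttt{monomial\_vector} and \texttt{hulls\_intersect} from the earlier sketches are in scope, and $m\geq 1$) might look as follows:
\begin{verbatim}
import numpy as np

def is_optimal(E_plus, E_minus, m):
    """Condition (eq:convexHullsPolynomials) via one LP; assumes m >= 1."""
    P = np.array([monomial_vector(x, m) for x in E_plus])
    M = np.array([monomial_vector(x, m) for x in E_minus])
    return hulls_intersect(P, M)

# Degree-1 approximation of x^2 on [0,1], extreme points 0, 0.5, 1:
print(is_optimal(np.array([[0.5]]), np.array([[0.0], [1.0]]), 1))  # True
\end{verbatim}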
\section{Relation with existing multivariate results}\label{seq:relation_with_existing_multivariate} In~\cite{rice63} Rice gives necessary and sufficient optimality conditions for multivariate approximation. These results are obtained for a very general class of functions, not necessarily polynomials. These conditions are fundamentally important; however, they are not very easy to verify (even in the case of polynomials), and their relation with the notion of alternating sequence is not very clear. Before formulating Rice's optimality conditions, we need to introduce the following notation and definitions (\cite{rice63}). Recall that the set of extremal points (maximal deviation points) $E$ is divided into two parts as follows: $$E^+=\{\x|\x\in E, f(\x)-L(\A^*,\x)\geq 0\},$$ $$E^-=\{\x|\x\in E, f(\x)-L(\A^*,\x)\leq 0\},$$ where $\A^*$ is a vector of the parameters and $L(\A^*,\x)$ is the corresponding approximation, defined as in~(\ref{eq:model_function}). The elements of $E^+$ and $E^-$ are called positive and negative extremal points. \begin{definition} The point sets $E^+$ and $E^-$ are said to be isolable if there is an $\A$ such that $$L(\A,\x)>0~\mathrm{for~}\x\in E^+,\quad L(\A,\x)<0~\mathrm{for~}\x\in E^-.$$ \end{definition} \begin{definition} $\Gamma(\A)$ is called an isolating curve if $$\Gamma(\A)=\{\x| L(\A,\x)=0\}.$$ \end{definition} Therefore, the sets $E^+$ and $E^-$ are isolable if they lie on opposite sides of an isolating curve $\Gamma(\A)$. \begin{definition} A subset of extremal points is called a critical point set if its positive and negative parts $E^+$ and $E^-$ are not isolable, but if any point is deleted then $E^+$ and $E^-$ are isolable. \end{definition} Rice formulated his necessary and sufficient optimality conditions as follows. \begin{theorem}(Rice~\cite{rice63}) $L(\A^*,\x)$ is a best approximation to $f(\x)$ if and only if the set of extremal points of $L(\A^*,\x)-f(\x)$ contains a critical point set. \end{theorem} Note that $L(\A,\x)$ is linear with respect to $\A$ (due to~(\ref{eq:model_function})). Then $\Gamma(\A)$ can be interpreted as a hyperplane in the space of the vectors $(1,g_1(\x),\dots,g_n(\x))^T$: if two convex sets (the convex hulls of the positive and negative points in this space) do not intersect, then there is a separating hyperplane such that these two convex sets lie on opposite sides of it. Note that in our necessary and sufficient optimality conditions we only consider finite subsets of $E^+$ and $E^-$; namely, we only consider a set of at most $n+2$ points from the corresponding subdifferential that are used to form zero in their convex hull. Generally, there are several ways to form zero, but if we choose one with the minimal number of maximal deviation points, then, indeed, the removal of any of the extremal points will lead to a situation where zero cannot be formed anymore and the corresponding subsets of positive and negative points are isolable (their convex hulls do not intersect). Therefore, our necessary and sufficient optimality conditions are equivalent to Rice's conditions. The main advantages of our formulation are as follows. First, our condition is much simpler and easier to understand and to connect with the classical theory of univariate Chebyshev approximation. Second, it is much easier to verify our optimality conditions, which is especially important for the construction of a Remez-like algorithm, where necessary and sufficient optimality conditions need to be verified at each iteration.
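Isolability itself is easy to test numerically. Since $L(\A,\x)$ is homogeneous in $\A$, strict separation of the finite sets $E^+$ and $E^-$ is equivalent to separation with margin one, which is a linear feasibility problem. A minimal sketch (illustrative; the margin normalisation is one of several possible choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def isolable(G_plus, G_minus):
    """Rows are (1, g_1(x), ..., g_n(x)) over E^+ and E^- respectively."""
    A_ub = np.vstack([-G_plus, G_minus])         # -L(A,x) <= -1 on E^+
    b_ub = -np.ones(len(G_plus) + len(G_minus))  #  L(A,x) <= -1 on E^-
    res = linprog(np.zeros(G_plus.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * G_plus.shape[1])
    return res.status == 0

# E^+ = {0.5}, E^- = {0, 1} with the affine basis (1, x): not isolable,
# consistent with optimality of the best linear approximation of x^2.
print(isolable(np.array([[1.0, 0.5]]),
               np.array([[1.0, 0.0], [1.0, 1.0]])))  # False
\end{verbatim}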
\section{Alternating sequence generalisation}\label{sec:counterexample} In this section we propose a generalisation of the notion of alternating sequence. That is, we introduce a necessary and sufficient condition for best approximation that can be verified in the domain of the function to approximate (of dimension $d\ll n_m$), based on the geometrical position of the points of extreme deviation. Consider a hyperplane $\HH(\mathbf{u},a) = \{\x\in \mathds{R}^d : \langle {\bf u},\x\rangle - a=0\}$, separating the two half spaces $\HH^+(\mathbf{u},a)= \{\x\in \mathds{R}^d : \langle {\bf u},\x\rangle - a> 0\}$ and $\HH^-(\mathbf{u},a) = \{\x\in \mathds{R}^d : \langle {\bf u},\x\rangle - a< 0\}$. We define the following: \begin{enumerate} \item $E^+(\mathbf{u},a) = (E^+(\A)\cap \HH^+(\mathbf{u},a))\cup(E^-(\A)\cap \HH^-(\mathbf{u},a))$: extreme deviation points whose deviation sign coincides with the sign of the half-space these points belong to; \item $E^-(\mathbf{u},a) = (E^-(\A)\cap \HH^+(\mathbf{u},a))\cup(E^+(\A)\cap \HH^-(\mathbf{u},a))$: extreme deviation points whose deviation sign is opposite to the sign of the half-space these points belong to; \item $E^+_0(\mathbf{u},a) = E^+(\A)\cap \HH(\mathbf{u},a)$: extreme deviation points with positive deviation sign that belong to $\HH(\mathbf{u},a)$; \item $E^-_0(\mathbf{u},a) = E^-(\A)\cap \HH(\mathbf{u},a)$: extreme deviation points with negative deviation sign that belong to $\HH(\mathbf{u},a)$. \end{enumerate} Then we have the following result. \begin{theorem} \label{thm:hyperplaneSeparation} Condition~\eqref{eq:convexHullsPolynomials} is true if and only if for any \((\mathbf{u},a)\in \mathds{R}^d\times \mathds{R}\) at least one of the following conditions holds: \begin{enumerate} \item polynomial degree reduction: \[\mathrm{co}\left\{\M^{m-1}(\x): \x\in E^+(\mathbf{u},a)\right\} \cap \mathrm{co}\left\{\M^{m-1}(\x): \x\in E^-(\mathbf{u},a)\right\} \neq \emptyset;\] \item point elimination: \[\mathrm{co}\left\{\M^{m}(\x): \x\in E^+_0(\mathbf{u},a)\right\} \cap \mathrm{co}\left\{\M^{m}(\x): \x\in E^-_0(\mathbf{u},a)\right\} \neq \emptyset.\] \end{enumerate} \end{theorem} \begin{proof} The first condition of this theorem allows one to reduce the degree of the polynomials and therefore the dimension of the space in which Condition~(\ref{eq:convexHullsPolynomials}) has to be verified. The second condition reduces the number of extreme points that form the corresponding polytopes. It is clear that the system \[ \sum_{\x\in E^+} \alpha_\x \M^{m}(\x) = \sum_{\x\in E^-} \alpha_\x \M^{m}(\x) \] is equivalent to \begin{align}\refstepcounter{equation} \sum_{\x\in E^+} \alpha_\x \M^{m-1}(\x) &= \sum_{\x\in E^-} \alpha_\x \M^{m-1}(\x), \tag{\theequation(0)}\label{eq:n-10}\\ \sum_{\x\in E^+} \alpha_\x x_i \M^{m-1}(\x) &= \sum_{\x\in E^-} \alpha_\x x_i \M^{m-1}(\x), & i=1,\ldots,d. \tag{\theequation(i)}\label{eq:n-1i} \end{align} If some of the $\alpha_\x$ are zero, then the corresponding extreme points can be removed from further consideration and the corresponding polytopes still intersect. Consider any vector \({\bf u}\in \mathds{R}^d\) and scalar \(a\in \mathds{R}\). Form the following combination of the equations above: \[\sum_{i=1}^d u_i\times \eqref{eq:n-1i} - a\times \eqref{eq:n-10},\] which gives \[ \sum_{\x\in E^+} \alpha_\x (\langle {\bf u},\x\rangle -a )\M^{m-1}(\x) = \sum_{\x\in E^-} \alpha_\x (\langle {\bf u},\x\rangle -a )\M^{m-1}(\x). \] Define \begin{align*} A^+ &= \sum_{\x\in E^+(\mathbf{u},a)}\alpha_\x|\langle {\bf u},\x\rangle -a |,\\ A^- &= \sum_{\x\in E^-(\mathbf{u},a)}\alpha_\x|\langle {\bf u},\x\rangle -a |. \end{align*} The constants $A^+$ and $A^-$ may only be zero if all the corresponding extreme points with non-zero convex coefficients belong to~$\HH(\mathbf{u},a)$ (condition~2 of this theorem). Consider the situation where this is not the case and therefore $A^+>0$ and $A^->0$. Define \begin{align*} \hat{\alpha}_\x &= \frac{\alpha_\x|\langle {\bf u},\x\rangle -a | }{A^+} & \text{if } \x\in E^+(\mathbf{u},a),\\ \hat{\alpha}_\x &= \frac{\alpha_\x|\langle {\bf u},\x\rangle -a | }{A^-} & \text{if } \x\in E^-(\mathbf{u},a). \end{align*} Then, regrouping the terms of the combined equation according to the sign of $\langle {\bf u},\x\rangle -a$, we have \begin{equation} \sum_{\x\in E^+(\mathbf{u},a)} \hat{\alpha}_\x \M^{m-1}(\x) =\sum_{\x\in E^-(\mathbf{u},a)} \hat{\alpha}_\x \M^{m-1}(\x). \label{eq:onelessdegree} \end{equation} \end{proof}
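The degree-reduction step of the proof translates directly into a computational test for a fixed pair $(\mathbf{u},a)$. In the sketch below (assuming \texttt{is\_optimal} from the previous section is in scope; points lying exactly on the hyperplane, which would require condition~2 of the theorem, are ignored for simplicity):
\begin{verbatim}
import numpy as np

def reduced_condition(E_plus, E_minus, u, a, m, tol=1e-12):
    """Degree-reduction test of Theorem thm:hyperplaneSeparation for one (u,a)."""
    s_p = E_plus @ u - a                 # signed distances to the hyperplane
    s_m = E_minus @ u - a
    Ep = np.vstack([E_plus[s_p > tol], E_minus[s_m < -tol]])   # E^+(u,a)
    Em = np.vstack([E_minus[s_m > tol], E_plus[s_p < -tol]])   # E^-(u,a)
    if len(Ep) == 0 or len(Em) == 0:
        return False
    return is_optimal(Ep, Em, m - 1)     # the same test, one degree lower
\end{verbatim}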
Note the following. \begin{enumerate} \item Formula~\eqref{eq:onelessdegree} is similar to Formula~\eqref{eq:monom} with degree \(m-1\). Thus, the above result means that if one chooses an arbitrary hyperplane and inverts the deviation signs on one of its sides, the formula holds for degree \(m-1\). Since there is only a finite number of ways to split the extreme points into two sets, this condition can be verified in a finite number of steps. \item The second condition of Theorem~\ref{thm:hyperplaneSeparation} also means that the dimension of the domain $d$ is reduced to $d-1$. \end{enumerate} Define by \(\mathbf{Comb}_d(E)\) the set of all possible subsets of \(E\) of cardinality~$d$. Any $d$ points can be placed on a hyperplane. If this hyperplane is not unique (that is, the points are not in general position), then at least one more point can be added to the system and all of these points can still be placed on one hyperplane. Assume now that $k$ points ($k>d$) define a unique hyperplane; then there exists a set of exactly $d$ points that defines the same hyperplane. The next result shows that it is not necessary to consider all possible hyperplanes: it is sufficient to consider those that contain at least \(d\) extreme points. \begin{theorem} \label{thm:remove_points} Condition~\eqref{eq:convexHullsPolynomials} is true if and only if for any \[C\in \mathbf{Comb}_d(E^+\cup E^-)\] that forms an affinely independent system, the hyperplane \(\HH(\mathbf{u},a)\) containing \(C\) satisfies \[\mathrm{co}\left\{\M^{m-1}(\x): \x\in E^+(\mathbf{u},a)\right\} \cap \mathrm{co}\left\{\M^{m-1}(\x): \x\in E^-(\mathbf{u},a)\right\} \neq \emptyset.\] \end{theorem} \begin{proof} The necessity is a direct corollary of Theorem~\ref{thm:hyperplaneSeparation}. Assume now that the above condition holds and consider an arbitrary pair \((\mathbf{u},a)\in \mathds{R}^d\times \mathds{R}\); without loss of generality, \(\|\mathbf{u}\|=1\). Assume that the hyperplane \(\HH(\mathbf{u},a)\) contains extreme points whose affine span is of dimension $k<d-1$. Then one can rotate this hyperplane around this affine subspace until it reaches more extreme points. This rotation does not affect the membership in $E^+(\mathbf{u},a)$ or $E^-(\mathbf{u},a)$ of the points that do not belong to the new (rotated) hyperplane, while the points that belong to it are removed from further consideration. Continue this process until the dimension of the affine span is $d-1$. Then, if the final sets $\bar{E}^+(\mathbf{u},a)$ and $\bar{E}^-(\mathbf{u},a)$ (obtained after several iterations of rotation) satisfy the intersection condition, so do the original sets ${E}^+(\mathbf{u},a)$ and ${E}^-(\mathbf{u},a)$. \end{proof} This implies the following corollaries: \begin{enumerate} \item Since any \(k\leq d\) points define (not necessarily uniquely) a hyperplane, by choosing the pair \(({\bf u},a)\) that defines this hyperplane, the result for degree \(m-1\) applies, after setting the signs accordingly. \item Theorem~\ref{thm:remove_points} can be verified by checking at most $$\binom{|E|}{d}=\frac{|E|!}{d!(|E|-d)!}$$ hyperplanes, where $|E|$ is the number of maximal deviation points. In particular, if a set of $d$ points does not define a unique hyperplane, then this set can be excluded. \item If one chooses these \(k\) points as the vertices defining a \((k-1)\)-face of the polytope \(P=\mathrm{co}\{\x\in E\}\), then all remaining points lie on the same side of the hyperplane.
Therefore, by removing any \(k\)-face (and facets in particular) of the polytope \(P\), the result holds for degree \(m-1\). \item\label{item:removefacets} By iteratively removing any \(m-1\) facets as described above, the remaining polytopes \(P^+\) and \(P^-\) must intersect (that is, the result holds for degree 1). \item\label{item:removepoints} Similarly, it is possible to remove any \((m-1)d\) points and, updating the signs accordingly, the remaining polytopes \(P^+\) and \(P^-\) must intersect. \end{enumerate} \begin{remark} It is easy to verify that Condition~\ref{item:removefacets} is exactly equivalent to the alternating sequence criterion in the univariate case. Indeed, after removing \(m-1\) points (which are also facets in the univariate case), there need to remain at least 3 alternating points to ensure intersection of the remaining polytopes. This means that there must be at least \(m+2=m-1+3\) points, which can be shown to alternate by removing the appropriate \(m-1\) points. Similarly, Condition~\ref{item:removefacets} is trivial for degree \(m=1\) polynomial approximation, since then all these conditions are equivalent to the intersection of the convex hulls of \(E^+\) and \(E^-\) being nonempty. \end{remark} Theorem~\ref{thm:remove_points} provides a generalisation of the notion of alternating sequence to the multivariate setting. Indeed, a set of points alternates $n$ times when all subsets obtained after removing $d$ (affinely independent) points alternate $n-1$ times. In the univariate case this means that a set of points alternates $n$ times when any subset obtained after removing 1 point alternates $n-1$ times (with the signs updated accordingly). \section{A fast algorithm for necessity verification for multivariate polynomial approximation}\label{sec:algorithm} Assume that our polynomial approximation is optimal and consider any monomial \(\x^{\e}\) of degree $m-1$. All the monomials of degree $m$ can be obtained by multiplying one of the monomials of degree $m-1$ by one of the coordinates of $\x=(x_1,\dots,x_d)^T.$ Therefore, there exist positive coefficients (we remove zero coefficients for simplicity) $\alpha_{\x},~\x\in E^+(\A)\cup E^-(\A)$, such that \begin{equation} \sum_{\x\in E^+(\A)}\alpha_{\x}\x^{\e}(1,\x)^T=\sum_{\x\in E^-(\A)}\alpha_{\x}\x^{\e}(1,\x)^T. \end{equation} Assume that there exists $j$ such that $e_j>0.$ Consider $$\delta^j_{min}=\min\{x_j: \x \in E^+(\A)\cup E^-(\A)\}$$ and $$\delta^j_{max}=\max\{x_j: \x \in E^+(\A)\cup E^-(\A)\},$$ where $x_j$ is the $j$-th coordinate of the point $\x$. Apply Lemma~\ref{lem:monomial} with $\delta=\delta^j_{min}$ or $\delta=\delta^j_{max}$ and remove all the maximal deviation points with the minimal (respectively, maximal) value of the $j$-th coordinate (there may be more than one such point), since after the shift the corresponding factor $x_j-\delta$ is zero. At the end of this process, either the convex hulls of the remaining maximal deviation points intersect, or all the maximal deviation points have been removed. The following algorithm can be used to verify this necessary optimality condition. \begin{center} Algorithm \end{center} \begin{enumerate} \item[Step 1:] {\bf Separate positive and negative maximal deviation points.} Identify the sets $E^+(\A)$ and $E^-(\A)$ that correspond to positive and negative deviation.
\item[Step 2:] {\bf Identify the minimal (maximal) coordinate value for each dimension.} For each dimension $j=1,\dots,d$ identify $$\delta^j_{min}=\min\{x_j: \x \in E^+(\A)\cup E^-(\A)\} \text{ or } \delta^j_{max}=\max\{x_j: \x \in E^+(\A)\cup E^-(\A)\}.$$ \item[Step 3:] {\bf Coordinate transformation.} Apply the coordinate transformation $$\tilde{x}_j=x_j-\delta^j_{min}$$ (or $\tilde{x}_j=\delta^j_{max}-x_j$), so that the corresponding coordinates of the maximal deviation points become non-negative. \item[Step 4:] {\bf Point reduction.} Remove the maximal deviation points whose updated coordinates have a zero at the corresponding coordinate and assign $m_{new}=m-1$. If $m_{new}>1$ and the remaining sets of maximal deviation points (positive and negative) are non-empty, GO TO Step 1 for the optimality verification of the corresponding lower degree polynomial approximation (with $m=m_{new}$). Otherwise GO TO the final step of the algorithm. \item[Step 5:] {\bf Optimality verification.} If the remaining maximal deviation sets are non-empty and the conditions of Theorem~\ref{thm:main_lin} are not satisfied, then the original polynomial is not optimal. \end{enumerate} There are two main advantages of this procedure. \begin{enumerate} \item It demonstrates how the concept of alternating sequence can be generalised to the case of multivariate functions. \item It is based on verifying whether two convex sets intersect, and since $d\leq n$ this verification is much easier after applying the algorithm. \end{enumerate} Note that Theorem~\ref{thm:main} can also be applied to verify optimality (a necessary and sufficient optimality condition). In this case one needs to check whether two convex sets intersect in $\mathds{R}^{n}.$ The above algorithm requires checking whether two convex sets intersect in $\mathds{R}^{d}$ (a considerably lower dimension); however, it only verifies necessity. In the case of univariate approximation, this algorithm verifies both necessity and sufficiency if applied for both $$\delta^j_{min}=\min\{x_j: \x \in E^+(\A)\cup E^-(\A)\} \text{ and } \delta^j_{max}=\max\{x_j: \x \in E^+(\A)\cup E^-(\A)\}.$$ Indeed, assume that the polynomial degree is $m$. By removing one of the alternating sequence points (smallest or largest) one obtains a shorter alternating sequence. If this sequence verifies the necessary and sufficient optimality conditions for a polynomial of degree $m-1$, then the obtained polynomial approximation is optimal (for degree~$m$).
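A minimal sketch of this algorithm (illustrative: it uses $\delta^j_{min}$ only and a single fixed coordinate $j$ per pass, and it assumes \texttt{hulls\_intersect} from Section~\ref{sec:optimality conditions} is in scope; it returns \texttt{False} only when non-optimality has been established):
\begin{verbatim}
import numpy as np

def necessary_condition(E_plus, E_minus, m, j=0, tol=1e-12):
    """Steps 1-5 of the algorithm; False means the polynomial is not optimal."""
    Ep = np.asarray(E_plus, dtype=float)
    Em = np.asarray(E_minus, dtype=float)
    while m > 1 and len(Ep) > 0 and len(Em) > 0:
        delta = min(Ep[:, j].min(), Em[:, j].min())  # Step 2 (delta_min)
        Ep[:, j] -= delta                            # Step 3: shift to >= 0
        Em[:, j] -= delta
        Ep = Ep[Ep[:, j] > tol]                      # Step 4: drop zero points
        Em = Em[Em[:, j] > tol]
        m -= 1                                       # degree goes down by one
    if len(Ep) == 0 or len(Em) == 0:
        return True                                  # nothing left to contradict
    return hulls_intersect(Ep, Em)                   # Step 5: Theorem thm:main_lin
\end{verbatim}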
\section{Acknowledgements} This paper was inspired by the discussions during a recent MATRIX program ``Approximation and Optimisation\rq{}\rq{} that took place in July 2016, Creswick, Australia. We are thankful to the MATRIX organisers, support team and participants for a terrific research atmosphere and productive discussions. \section{Conclusions and further research directions}\label{sec:conclusions} In this paper we obtained necessary and sufficient optimality conditions for best polynomial Chebyshev approximation (a characterisation theorem). The main obstacle was to extend the notion of alternating sequence to the case of multivariate polynomials. This has been done using nonsmooth calculus. We also proposed an algorithm for optimality verification (necessity only). In the future we are planning to proceed in the following directions. \begin{enumerate} \item Find a necessary and sufficient optimality condition that is easy to verify in practice. \item Investigate the geometry of optimal solutions, in particular, when the optimal solution is unique. \item Develop an approximation algorithm to construct best multivariate approximations (similar to the famous Remez algorithm~\cite{remez57} and the de la Vall\'{e}e-Poussin procedure~\cite{valleepoussin:1911}, developed for univariate polynomials and extended to polynomial splines~\cite{nurnberger,sukhorukovaalgorithmfixed}). \end{enumerate} \bibliographystyle{amsplain}